diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index 1280e6a6e5..d7e49de5cb 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,26 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.16.5-nightly.2 + - 3.16.5-nightly.1 + - 3.16.4 + - 3.16.4-nightly.3 + - 3.16.4-nightly.2 + - 3.16.4-nightly.1 + - 3.16.3 + - 3.16.3-nightly.5 + - 3.16.3-nightly.4 + - 3.16.3-nightly.3 + - 3.16.3-nightly.2 + - 3.16.3-nightly.1 + - 3.16.2 + - 3.16.2-nightly.2 + - 3.16.2-nightly.1 + - 3.16.1 + - 3.16.0 + - 3.16.0-nightly.2 + - 3.16.0-nightly.1 + - 3.15.12 - 3.15.12-nightly.4 - 3.15.12-nightly.3 - 3.15.12-nightly.2 @@ -115,26 +135,6 @@ body: - 3.14.8-nightly.4 - 3.14.8-nightly.3 - 3.14.8-nightly.2 - - 3.14.8-nightly.1 - - 3.14.7 - - 3.14.7-nightly.8 - - 3.14.7-nightly.7 - - 3.14.7-nightly.6 - - 3.14.7-nightly.5 - - 3.14.7-nightly.4 - - 3.14.7-nightly.3 - - 3.14.7-nightly.2 - - 3.14.7-nightly.1 - - 3.14.6 - - 3.14.6-nightly.3 - - 3.14.6-nightly.2 - - 3.14.6-nightly.1 - - 3.14.5 - - 3.14.5-nightly.3 - - 3.14.5-nightly.2 - - 3.14.5-nightly.1 - - 3.14.4 - - 3.14.4-nightly.4 validations: required: true - type: dropdown diff --git a/.github/workflows/miletone_release_trigger.yml b/.github/workflows/miletone_release_trigger.yml index 4a031be7f9..d755f7eb9f 100644 --- a/.github/workflows/miletone_release_trigger.yml +++ b/.github/workflows/miletone_release_trigger.yml @@ -5,12 +5,6 @@ on: inputs: milestone: required: true - release-type: - type: choice - description: What release should be created - options: - - release - - pre-release milestone: types: closed diff --git a/.gitignore b/.gitignore index 50f52f65a3..622d55fb88 100644 --- a/.gitignore +++ b/.gitignore @@ -37,6 +37,7 @@ Temporary Items ########### /build /dist/ +/server_addon/packages/* /vendor/bin/* /vendor/python/* diff --git a/CHANGELOG.md b/CHANGELOG.md index 095e0d96e4..f1948b1a3f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,2929 @@ # Changelog +## [3.16.4](https://github.com/ynput/OpenPype/tree/3.16.4) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.3...3.16.4) + +### **🆕 New features** + + +
+Feature: Download last published workfile specify version #4998

Setting the `workfile_version` key in a hook's `self.launch_context.data` allows you to specify the workfile version you want the sync service to download if none is matched locally. This is helpful if the last version hasn't been correctly published/synchronized and you want to recover a previous one. The version can be set in two ways:
- OP's absolute version, matching the `version` index in DB.
- Relative version in reverse order from the last one: `-2`, `-3`...

___
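A minimal sketch of such a hook, assuming the `PreLaunchHook` base class from `openpype.lib.applications`; the hook name is hypothetical:

```python
from openpype.lib.applications import PreLaunchHook


class PinWorkfileVersion(PreLaunchHook):
    """Hypothetical prelaunch hook pinning the workfile version."""

    def execute(self):
        # -2 requests the version before the last one (relative notation);
        # a positive number matches the `version` index in the DB.
        self.launch_context.data["workfile_version"] = -2
```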
+ +### **🚀 Enhancements** + + +
+Maya: allow not creation of group for Import loaders #5427

This PR enhances a previous one. All Reference loaders can now skip wrapping imported products in an explicit group, and `Import` loaders have the same options. The control is separate in Settings, e.g. Reference might wrap loaded items in a group while `Import` might not.

___
+ + +
+3dsMax: Settings for Ayon #5388 + +Max Addon Setting for Ayon + + +___ + +
+ + +
+General: Navigation to Folder from Launcher #5404 + +Adds an action in launcher to open the directory of the asset. + + +___ + +
+ + +
+Chore: Default variant in create plugin #5429

The `default_variant` attribute on create plugins always returns a string, and if the default variant is not filled, other ways to get one are implemented.

___
+ + +
+Publisher: Thumbnail widget enhancements #5439

The thumbnail widget in Publisher has 3 new options to choose from: Paste (from clipboard), Take screenshot and Browse. The Clear button and new options are not visible by default; the user must expand the options button to show them.

___
+ + +
+AYON: Update ayon api to '0.3.5' #5460 + +Updated ayon-python-api to 0.3.5. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+AYON: Apply unknown ayon settings first #5435 + +Settings of custom addons are available in converted settings. + + +___ + +
+ + +
+Maya: Fix wrong subset name of render family in deadline #5442

The new publisher creates different subset names than before, which resulted in duplication of the `render` string in the final subset name of the `render` family published on Deadline. This PR solves that. It also fixes issues with legacy instances from the old publisher, matching the subset name as it was before. This solves the same issue in the Max implementation.

___
+ + +
+Maya: Fix setting of version to workfile instance #5452

If multiple renderlayer instances are published, the previous logic could unpredictably rewrite the instance family to 'workfile' if `Sync render version with workfile` was on.

___
+ + +
+Maya: Context plugin shouldn't be tied to family #5464

The `Maya Current File` collector was tied to `workfile` unnecessarily. It should run even if a `workfile` instance is not being published.

___
+ + +
+Unreal: Fix loading hero version for static and skeletal meshes #5393

Fixed a problem with loading hero versions for static and skeletal meshes.

___
+ + +
+TVPaint: Fix 'repeat' behavior #5412 + +Calculation of frames for repeat behavior is working correctly. + + +___ + +
+ + +
+AYON: Thumbnails cache and api prep #5437

Moved the thumbnails cache from the ayon python api to OpenPype and prepared the AYON thumbnail resolver for new api functions. The current implementation should work with both old and new ayon-python-api.

___
+ + +
+Nuke: Name of the Read Node should be updated correctly when switching versions or assets. #5444

Fixes the Read node's name not being updated correctly when setting a version or switching assets.

___
+ + +
+Farm publishing: asymmetric handles fixed #5446

Handles are now set correctly on a farm-published product version if asymmetric handles were set on shot attributes.

___
+ + +
+Scene Inventory: Provider icons fix #5450 + +Fix how provider icons are accessed in scene inventory. + + +___ + +
+ + +
+Fix typo on Deadline OP plugin name #5453 + +Surprised that no one has hit this bug yet... but it seems like there was a typo on the name of the OP Deadline plugin when submitting jobs to it. + + +___ + +
+ + +
+AYON: Fix version attributes update #5472 + +Fixed updates of attribs in AYON mode. + + +___ + +
+ +### **Merged pull requests** + + +
+Added missing defaults for import_loader #5447 + + +___ + +
+ + +
+Bug: Local settings don't open on 3.14.7 #5220 + +### Before posting a new ticket, have you looked through the documentation to find an answer? + +Yes I have + +### Have you looked through the existing tickets to find any related issues ? + +Not yet + +### Author of the bug + +@FadyFS + +### Version + +3.15.11-nightly.3 + +### What platform you are running OpenPype on? + +Linux / Centos + +### Current Behavior: + +the previous behavior (bug) : +![image](https://github.com/quadproduction/OpenPype/assets/135602303/09bff9d5-3f8b-4339-a1e5-30c04ade828c) + + +### Expected Behavior: + +![image](https://github.com/quadproduction/OpenPype/assets/135602303/c505a103-7965-4796-bcdf-73bcc48a469b) + + +### What type of bug is it ? + +Happened only once in a particular configuration + +### Which project / workfile / asset / ... + +open settings with 3.14.7 + +### Steps To Reproduce: + +1. Run openpype on the 3.15.11-nightly.3 version +2. Open settings in 3.14.7 version + +### Relevant log output: + +_No response_ + +### Additional context: + +_No response_ + +___ + +
+ + +
+Tests: Add automated targets for tests #5443

Without it, plugins with 'automated' targets won't be triggered (e.g. `CloseAE`).

___
+ + + + +## [3.16.3](https://github.com/ynput/OpenPype/tree/3.16.3) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.2...3.16.3) + +### **🆕 New features** + + +
+AYON: 3rd party addon usage #5300

Prepare OpenPype code to be able to use the `ayon-third-party` addon, which supplies ffmpeg and OpenImageIO executables. Because both can define custom arguments (more than one), new functions were needed: `get_ffmpeg_tool_args` and `get_oiio_tool_args`. They work like the previous ones but return a list of strings instead of a single string. All places using the previous functions `get_ffmpeg_tool_path` and `get_oiio_tool_path` now use the new ones. They should be backwards compatible, even with the addon, if it returns a single argument.

___
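A small usage sketch, assuming the new functions are exposed from `openpype.lib` like their predecessors and accept the tool name the same way:

```python
import subprocess

from openpype.lib import get_ffmpeg_tool_args

# Returns a list of arguments, e.g. ["/path/to/ffmpeg"] from a plain
# install, or several launcher arguments when supplied by the addon.
ffmpeg_args = get_ffmpeg_tool_args("ffmpeg")
subprocess.run(ffmpeg_args + ["-i", "input.mov", "output.mp4"], check=True)
```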
+ + +
+AYON: Addon settings in OpenPype #5347 + +Moved settings addons to OpenPype server addon. Modified create package to create zip files for server for each settings addon and for openpype addon. + + +___ + +
+ + +
+AYON: Add folder to template data #5417 + +Added `folder` to template data, so `{folder[name]}` can be used in templates. + + +___ + +
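For illustration, a sketch of what the new key enables; the template and data values here are hypothetical:

```python
# "folder" is now part of the template data alongside the existing keys.
template = "{root[work]}/{project[name]}/{folder[name]}/{task[name]}"
data = {
    "root": {"work": "/proj"},
    "project": {"name": "demo"},
    "folder": {"name": "sh010"},
    "task": {"name": "comp"},
}
print(template.format(**data))  # /proj/demo/sh010/comp
```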
+ + +
+Option to start versioning from 0 #5262

This PR adds a settings option to start all versioning from 0. This PR replaces #4455.

___
+ + +
+Ayon: deadline implementation #5321

Quick implementation of Deadline in AYON. A new AYON plugin was added for the Deadline repository.

___
+ + +
+AYON: Remove AYON launch logic from OpenPype #5348 + +Removed AYON launch logic from OpenPype. The logic is outdated at this moment and is replaced by `ayon-launcher`. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Bug: Error on multiple instance rig with maya #5310

I changed the `endswith` method to `startswith` because the sets are automatically named out_SET, out_SET1, out_SET2 ...

___
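A self-contained illustration of why `endswith` misses the auto-renamed sets:

```python
sets = ["out_SET", "out_SET1", "out_SET2"]

# endswith only matches the first set; the renamed duplicates are missed.
print([s for s in sets if s.endswith("out_SET")])    # ['out_SET']
print([s for s in sets if s.startswith("out_SET")])  # all three
```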
+ + +
+Applications: Use prelaunch hooks to extract environments #5387

Environment variable preparation is based on prelaunch hooks. This should allow passing OCIO environment variables to farm jobs.

___
+ + +
+Applications: Launch hooks cleanup #5395 + +Use `set` instead of `list` for filtering attributes in launch hooks. Celaction hooks dir does not contain `__init__.py`. Celaction prelaunch hook is reusing `CELACTION_ROOT_DIR`. Launch hooks are using full import from `openpype.lib.applications`. + + +___ + +
+ + +
+Applications: Environment variables order #5245

Changed the order in which environment variables are set. Context environment variables are set first, then project environment overrides. Also, asset and task environment variables are optional.

___
+ + +
+Autosave preferences can be read after Nuke opens the script #5295

The script needs to be open in Nuke to correctly load the autosave preferences. This PR reads the Nuke script in context and offers overwriting the current script with the autosaved one if an autosave exists.

___
+ + +
+Resolve: Update with compatible resolve version and latest docs #5317 + +Missing information about compatible Resolve version and latest docs from https://github.com/ynput/OpenPype/tree/develop/openpype/hosts/resolve + + +___ + +
+ + +
+Chore: Remove deprecated functions #5323 + +Removed functions/classes that are deprecated and marked to be removed. + + +___ + +
+ + +
+Nuke Render and Prerender nodes Process Order - OP-3555 #5332

This PR exposes control over the processing order of instances by sorting the created instances. The sorting happens on the `render_order` knob and subset name. If the `render_order` knob is found on the instance, we sort by that first before sorting by subset name. Instances with `render_order` are processed before nodes without it. This could be extended in the future by querying other knobs, but I don't know of a use case for this. Hardcoded the creator `order` attribute of the `prerender` class to be before `render`. This could be exposed to the user/studio, but I don't know of a use case for this either.

___
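A sketch of such a sort key over plain dicts; the data layout is hypothetical, in Nuke the value would come from the `render_order` knob:

```python
instances = [
    {"subset": "renderB"},
    {"subset": "renderA", "render_order": 2},
    {"subset": "renderC", "render_order": 1},
]

def sort_key(inst):
    order = inst.get("render_order")
    # Instances with render_order sort first (False < True), then by the
    # order value, then by subset name.
    return (order is None, order if order is not None else 0, inst["subset"])

print([i["subset"] for i in sorted(instances, key=sort_key)])
# ['renderC', 'renderA', 'renderB']
```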
+ + +
+Unreal: Python Environment Improvements #5344 + +Automatically set `UE_PYTHONPATH` as `PYTHONPATH` when launching Unreal. + + +___ + +
+ + +
+Unreal: Custom location for Unreal Ayon Plugin #5346 + +Added a new environment variable `AYON_BUILT_UNREAL_PLUGIN` to set an already existing and built Ayon Plugin for Unreal. + + +___ + +
+ + +
+Unreal: Better handling of Exceptions in UE Worker threads #5349

Implemented a new `UEWorker` base class to handle exceptions during the execution of UE workers.

___
+ + +
+Houdini: Add farm toggle on creation menu #5350

Deadline farm publishing and rendering for Houdini was made possible by PR #4825. Farm publishing is enabled by default on some ROP nodes, which may surprise new users (like me). I think adding a toggle (on by default) in the creation UI is better, so that users are aware there's a farm option for the publish instance. ROPs modified:
- [x] Mantra ROP
- [x] Karma ROP
- [x] Arnold ROP
- [x] Redshift ROP
- [x] Vray ROP

___
+ + +
+Ftrack: Sync to avalon settings #5353 + +Added roles settings for sync to avalon action. + + +___ + +
+ + +
+Chore: Schemas inside OpenPype #5354 + +Moved/copied schemas from repository root inside openpype/pipeline. + + +___ + +
+ + +
+AYON: Addons creation enhancements #5356 + +Enhanced AYON addons creation. Fix issue with `Pattern` typehint. Zip filenames contain version. OpenPype package is skipping modules that are already separated in AYON. Updated settings of addons. + + +___ + +
+ + +
+AYON: Update staging icons #5372 + +Updated staging icons for staging mode. + + +___ + +
+ + +
+Enhancement: Houdini Update pointcache labels #5373

To me it's logical to find pointcache types listed one after another, but they were named differently. So, I made this PR to update their labels.

___
+ + +
+nuke: split write node product instance features #5389 + +Improving Write node product instances by allowing precise activation of specific features. + + +___ + +
+ + +
+Max: Use the empty modifiers in container to store AYON Parameter #5396

Instead of adding the AYON/OP parameter along with other attributes inside the container, empty modifiers are created to store AYON/OP custom attributes.

___
+ + +
+AfterEffects: Removed unused imports #5397 + +Removed unused import from extract local render plugin file. + + +___ + +
+ + +
+Nuke: adding BBox knob type to settings #5405

Nuke knob types in settings have a new `Box` type for reposition nodes like Crop or Reformat.

___
+ + +
+SyncServer: Existence of module is optional #5413

The SyncServer module is optional and not required. Added the `sync_server` module back to ignored modules when the openpype addon is created for AYON. The `syncserver` command is marked as deprecated and redirected to the sync server cli.

___
+ + +
+Webpublisher: Self contain test publish logic #5414 + +Moved test logic of publishing to webpublisher. Simplified `remote_publish` to remove webpublisher specific logic. + + +___ + +
+ + +
+Webpublisher: Cleanup targets #5418 + +Removed `remote` target from webpublisher and replaced it with 2 targets `webpublisher` and `automated`. + + +___ + +
+ + +
+nuke: update server addon settings with box #5419

Updating Nuke AYON server settings with the `Box` option in knob types.

___
+ +### **🐛 Bug fixes** + + +
+Maya: fix validate frame range on review attached to other instances #5296 + +Fixes situation where frame range validator can't be turned off on models if they are attached to reviewable camera in Maya. + + +___ + +
+ + +
+Maya: Apply project settings to creators #5303 + +Project settings were not applied to the creators. + + +___ + +
+ + +
+Maya: Validate Model Content #5336

`assemblies` in `cmds.ls` does not seem to work:
```python
from maya import cmds

content_instance = [
    '|group2|pSphere1_GEO',
    '|group2|pSphere1_GEO|pSphere1_GEOShape',
    '|group1|pSphere1_GEO',
    '|group1|pSphere1_GEO|pSphere1_GEOShape',
]
# Expected the top-level parents (|group1, |group2), but this does not
# return them for the listed child nodes.
assemblies = cmds.ls(content_instance, assemblies=True, long=True)
print(assemblies)
```

Fixing with string splitting instead.

___
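A sketch of the string-splitting alternative, deriving the top-level node from each full DAG path:

```python
content_instance = [
    '|group2|pSphere1_GEO',
    '|group2|pSphere1_GEO|pSphere1_GEOShape',
    '|group1|pSphere1_GEO',
]
# A full path "|group2|pSphere1_GEO" splits into ['', 'group2', ...],
# so index 1 is the top-level (assembly) node name.
assemblies = {"|" + path.split("|")[1] for path in content_instance}
print(sorted(assemblies))  # ['|group1', '|group2']
```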
+ + +
+Bugfix: Maya update defaults variable #5368

Something was forgotten while moving from `LegacyCreator` to `NewCreator`: `LegacyCreator` used `defaults` to list suggested subset names, which was changed to `default_variants` in the `NewCreator`, so setting `defaults` to any value has no effect! This update affects:
- [x] Model
- [x] Set Dress

___
+ + +
+Chore: Python 2 support fix #5375

Fix Python 2 support by adding `click` to the Python 2 dependencies and removing an f-string from Maya.

___
+ + +
+Maya: do not create top level group on reference #5402

This PR allows not wrapping loaded referenced assets in a top level group, either explicitly by the artist or by configuration in Settings. Artists can control group creation in the ReferenceLoader options. No group creation by default can be set by emptying `Group Name` in `project_settings/maya/load/reference_loader`.

___
+ + +
+Settings: Houdini & Maya create plugin settings #5436

Fixes related to Maya and Houdini settings. Renamed `defaults` to `default_variants` in plugin settings to match the attribute name on create plugins in both OpenPype and AYON settings. Fixed Houdini AYON settings which were missing settings for default variants, and fixed Maya AYON settings where the default factory had a wrong assignment.

___
+ + +
+Maya: Hide CreateAnimation #5297

When converting the `animation` family or loading a `rig` family, we need to include the `animation` creator but hide it in the creator context.

___
+ + +
+Nuke Anamorphic slate - Read pixel aspect from input #5304

When the asset pixel aspect differs from the rendered pixel aspect, the Nuke slate pixel aspect is no longer taken from the asset, but is read via ffprobe.

___
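For reference, a sketch of reading the pixel aspect with ffprobe; the input filename is hypothetical:

```python
import json
import subprocess

out = subprocess.check_output([
    "ffprobe", "-v", "error", "-select_streams", "v:0",
    "-show_entries", "stream=sample_aspect_ratio",
    "-of", "json", "render.1001.exr",
])
# e.g. "2:1" for anamorphic footage
print(json.loads(out)["streams"][0]["sample_aspect_ratio"])
```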
+ + +
+Nuke - Allow ExtractReviewDataMov with no timecode knob #5305

ExtractReviewDataMov allows specifying the file type. Trying to write an extension other than mov failed in generate_mov, which assumed the mov64_write_timecode knob exists.

___
+ + +
+Nuke: removing settings schema with defaults for OpenPype #5306 + +continuation of https://github.com/ynput/OpenPype/pull/5275 + + +___ + +
+ + +
+Bugfix: Dependency without 'inputLinks' not downloaded #5337 + +Remove condition that avoids downloading dependency without `inputLinks`. + + +___ + +
+ + +
+Bugfix: Houdini Creator use selection even if it was toggled off #5359

When creating many product types (families) one after another without refreshing the creator window manually, toggling `Use selection` once made all later product types use the selection even if it was toggled off.

Before (it keeps using the selection even when toggled off, unless you refresh the window manually): https://github.com/ynput/OpenPype/assets/20871534/8b890122-5b53-4c6b-897d-6a2f3aa3388a

After (it works as expected): https://github.com/ynput/OpenPype/assets/20871534/6b1db990-de1b-428e-8828-04ab59a44e28

___
+ + +
+Houdini: Correct camera selection for karma renderer when using selected node #5360

When a user creates the Karma ROP with a camera selected via Use selection, it gives the error message "no render camera found in selection". This PR fixes creating the Karma ROP with a selected camera node in Houdini.

___
+ + +
+AYON: Environment variables and functions #5361 + +Prepare code for ayon-launcher compatibility. Fix ayon launcher subprocess calls, added more checks for `AYON_SERVER_ENABLED`, use ayon launcher suitable environment variables in AYON mode and changed outputs of some functions. Replaced usages of `OPENPYPE_REPOS_ROOT` environment variable with `PACKAGE_DIR` variable -> correct paths are used. + + +___ + +
+ + +
+Nuke: farm rendering of prerender ignore roots in nuke #5366

The `prerender` family was using the wrong subset, the same one as `render`, but it should be different.

___
+ + +
+Bugfix: Houdini update defaults variable #5367

Something was forgotten while moving from `LegacyCreator` to `NewCreator`: `LegacyCreator` used `defaults` to list suggested subset names, which was changed to `default_variants` in the `NewCreator`, so setting `defaults` to any value has no effect! This update affects:
- [x] Arnold ASS
- [x] Arnold ROP
- [x] Karma ROP
- [x] Mantra ROP
- [x] Redshift ROP
- [x] VRay ROP

___
+ + +
+Publisher: Fix create/publish animation #5369 + +Use geometry movement instead of changing min/max width. + + +___ + +
+ + +
+Unreal: Move unreal splash screen to unreal #5370 + +Moved splash screen code to unreal integration and removed import from Igniter. + + +___ + +
+ + +
+Nuke: returned not cleaning of renders folder on the farm #5374

A previous PR enabled explicit cleanup of the `renders` folder after farm publishing. This does not match customers' workflows: customers want access to files in the `renders` folder and potentially to redo some frames of long frame sequences. This PR extends the logic to mark rendered files for deletion only if the instance doesn't have `stagingDir_persistent`. For backwards compatibility, all Nuke instances have `stagingDir_persistent` set to True, i.e. the `renders` folder won't be cleaned after farm publish.

___
+ + +
+Nuke: loading sequences is working #5376

Loading image sequences was broken after the latest release, version 3.16. It is now functioning as expected.

___
+ + +
+AYON: Fix settings conversion for ayon addons #5377

AYON addon settings are available in system settings, and the same values are not available in the `"modules"` subkey.

___
+ + +
+Nuke: OCIO env var workflow #5379 + +The OCIO environment variable needs to be consistently handled across all platforms. Nuke resolves the custom OCIO config path differently depending on the platform, so we included the ocio config path in the workfile with a partial replacement using an environment variable. Additionally, for Windows sessions, we replaced backward slashes with a TCL expression. + + +___ + +
+ + +
+Unreal: Fix Unreal build script #5381 + +Define 'AYON_UNREAL_ROOT' environment variable in unreal addon. + + +___ + +
+ + +
+3dsMax: Use relative path to MAX_HOST_DIR #5382

Use `MAX_HOST_DIR` to calculate the startup script path instead of a path relative to the `OPENPYPE_ROOT` environment variable.

___
+ + +
+Bugfix: Houdini abc validator error message #5386

When the ABC path validator fails, it prints node objects, not node paths or names. This bug happened because the `get_invalid` method was updated to return nodes instead of node paths.

___
+ + +
+Nuke: node name influence product (subset) name #5392 + +Nuke now allows users to duplicate publishing instances, making the workflow easier. By duplicating a node and changing its name, users can set the product (subset) name in the publishing context.Users now have the ability to change the variant name in Publisher, which will automatically rename the associated instance node. + + +___ + +
+ + +
+Houdini: delete redundant bgeo sop validator #5394

I found out that the `Validate BGEO SOP Path` validator is redundant; it catches two cases that are already covered by "Validate Output Node", which works with `bgeo` as well as `abc` because `"pointcache"` is listed in its families.

___
+ + +
+Nuke: workfile is not reopening after change of context #5399

Nuke no longer reopens the latest workfile when the context is changed to a different task using the Workfile tool. The issue also affected the Script Clean (from the Nuke File menu) and Close features, but they have now been fixed.

___
+ + +
+Bugfix: houdini hard coded project settings #5400

I made this PR to solve the issue with hard-coded settings in Houdini.

___
+ + +
+AYON: 3dsMax settings #5401 + +Keep `adsk_3dsmax` group in applications settings. + + +___ + +
+ + +
+Bugfix: update defaults to default_variants in maya and houdini OP DCC settings #5407

Updating these settings was missed when moving Maya and Houdini to the new creator.

___
+ + +
+Applications: Attributes creation #5408 + +Applications addon does not cause infinite server restart loop. + + +___ + +
+ + +
+Max: fix the bug of handling Object deletion in OP Parameter #5410

If an object is added to the OP parameter and the user deletes it from the scene afterwards, the container with OP attributes errors out. This PR resolves that bug. It also fixes the attribute not being added to the OP parameter correctly when the user enables "use selections" to link the object to the OP parameter.

___
+ + +
+Colorspace: including environments from launcher process #5411

Fixed a bug where the OCIO config template was not properly formatting environment variables from System Settings `general/environment`.

___
+ + +
+Nuke: workfile template fixes #5428

A bunch of small bugs needed to be fixed.

___
+ + +
+Houdini, Max: Fix missed function interface change #5430 + +This PR https://github.com/ynput/OpenPype/pull/5321/files from @kalisp missed updating the `add_render_job_env_var` in Houdini and Max as they are passing an extra arg: +``` +TypeError: add_render_job_env_var() takes 1 positional argument but 2 were given +``` + + +___ + +
+ + +
+Scene Inventory: Fix issue with 'sync_server' #5431

Fix access to the `sync_server` attribute in scene inventory.

___
+ + +
+Unpack project: Fix import issue #5433 + +Added `load_json_file`, `replace_project_documents` and `store_project_documents` to mongo init. + + +___ + +
+ + +
+Chore: Versions post fixes #5441

Fixed issues caused by my mistake. Filled the right version value into anatomy data.

___
+ +### **📃 Testing** + + +
+Tests: Copy file_handler as it will be removed by purging ayon code #5357 + +Ayon code will get purged in the future from this repo/addon, therefore all `ayon_common` will be gone. `file_handler` gets internalized to tests as it is not used anywhere else. + + +___ + +
+ + + + +## [3.16.2](https://github.com/ynput/OpenPype/tree/3.16.2) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.1...3.16.2) + +### **🆕 New features** + + +
+Fusion - Set selected tool to active #5327

When you run the action to select a node, this PR makes the node-flow show the selected node and you'll see the node's controls in the inspector.

___
+ +### **🚀 Enhancements** + + +
+Maya: All base create plugins #5326 + +Prepared base classes for each creator type in Maya. Extended `MayaCreatorBase` to have default implementations of common logic with instances which is used in each type of plugin. + + +___ + +
+ + +
+Windows: Support long paths on zip updates. #5265

Support long paths for version extraction on Windows. The use case is having long paths in, for example, an addon. You can install to the C drive, but because the zip files are extracted in the local user's folder, additional sub-directories are added to the paths, which quickly become too long for Windows to handle the zip updates.

___
+ + +
+Blender: Added setting to set resolution and start/end frames at startup #5338

This PR adds `set_resolution_startup` and `set_frames_startup` settings. They automatically set the resolution and the start/end frames and FPS respectively in Blender when opening a file or creating a new one.

___
+ + +
+Blender: Support for ExtractBurnin #5339 + +This PR adds support for ExtractBurnin for Blender, when publishing a Review. + + +___ + +
+ + +
+Blender: Extract Camera as Alembic #5343 + +Added support to extract Alembic Cameras in Blender. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Validate Instance In Context #5335

Added the missing new publisher error so the repair action shows up.

___
+ + +
+Settings: Fix default settings #5311

Fixed default settings for shotgrid. Renamed `FarmRootEnumEntity` to `DynamicEnumEntity` and removed a doubled ABC metaclass definition (all settings entities have an abstract metaclass).

___
+ + +
+Deadline: missing context argument #5312 + +Updated function arguments + + +___ + +
+ + +
+Qt UI: Multiselection combobox PySide6 compatibility #5314 + +- The check states are replaced with the values for PySide6 +- `QtCore.Qt.ItemIsUserTristate` is used instead of `QtCore.Qt.ItemIsTristate` to avoid crashes on PySide6 + + +___ + +
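A small sketch of the compatible flag via `qtpy` (which the codebase uses to abstract the Qt bindings):

```python
from qtpy import QtCore

# Qt.ItemIsTristate was deprecated in Qt 5.6 and is gone in Qt 6;
# ItemIsUserTristate exists in both, so it works with PySide2 and PySide6.
flags = QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsUserTristate
```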
+ + +
+Docker: handle openssl 1.1.1 for centos 7 docker build #5319

The move to Python 3.9 added the need for OpenSSL 1.1.x, which is not available by default on the CentOS 7 image. This fixes it.

___
+ + +
+houdini: fix typo in redshift proxy #5320

I believe there's a typo (an extra backtick) in the filename in `create_redshift_proxy.py`, and I made this PR to suggest a fix.

___
+ + +
+Houdini: fix wrong creator identifier in pointCache workflow #5324

Fixing a bug in publishing alembics, where an invalid creator identifier caused missing family association.

___
+ + +
+Fix colorspace compatibility check #5334

For some reason a user may have `PyOpenColorIO` installed on their machine (in my case it came with RenderMan). It can trick the compatibility check, as `import PyOpenColorIO` won't raise an error, yet it may be an old version (as in my case). Before the fix, the compatibility check passed and the wrapper was used directly. After the fix, the wrapper is used via subprocess instead.

___
+ +### **Merged pull requests** + + +
+Remove forgotten dev logging #5315 + + +___ + +
+ + + + +## [3.16.1](https://github.com/ynput/OpenPype/tree/3.16.1) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.0...3.16.1) + +### **🆕 New features** + + +
+Royal Render: Maya and Nuke support #5191

Basic working implementation of Royal Render support in Maya. It expects the new publisher to be implemented in Maya.

___
+ + +
+Blender: Blend File Family #4321 + +Implementation of the Blend File family analogue to the Maya Scene one. + + +___ + +
+ + +
+Houdini: simple bgeo publishing #4588

Support for simple publishing of bgeo files.

This adds basic support for bgeo publishing in Houdini. It allows publishing bgeo in all supported formats (selectable in the creator options). If the selected node has an `output` node on SOP level, it will automatically be used as the path in the file node.

___
+ +### **🚀 Enhancements** + + +
+General: delivery action add renamed frame number in Loader #5024 + +Frame Offset options for delivery in Openpype loader + + +___ + +
+ + +
+Enhancement/houdini add path action for abc validator #5237

Adds a default path attribute action. It's a helper action more than a repair action, used to add a default single value.

___
+ + +
+Nuke: auto apply all settings after template build #5277

Adds an automatic run of Apply All Settings after the template builder finishes its process. This applies the frame range, image size and colorspace found in the context of the task's shot.

___
+ + +
+Harmony: Removed loader settings for Harmony #5289

It shouldn't be configurable; it is internal logic. Adding an additional extension wouldn't magically make it work.

___
+ +### **🐛 Bug fixes** + + +
+AYON: Make appdirs case sensitive #5298 + +Appdirs for AYON are case sensitive for linux and mac so we needed to change them to match ayon launcher. Changed 'ayon' to 'AYON' and 'ynput' to 'Ynput'. + + +___ + +
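The difference, illustrated with the `appdirs` package:

```python
import appdirs

# App dirs are case sensitive on Linux and macOS, so "ayon"/"ynput"
# and "AYON"/"Ynput" resolve to different folders.
print(appdirs.user_data_dir("AYON", "Ynput"))
# Linux:   ~/.local/share/AYON
# macOS:   ~/Library/Application Support/AYON
# Windows: C:\Users\<user>\AppData\Local\Ynput\AYON
```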
+ + +
+Traypublisher: Fix plugin order #5299 + +Frame range collector for traypublisher was moved to traypublisher plugins and changed order to make sure `assetEntity` is filled in `instance.data`. + + +___ + +
+ + +
+Deadline: removing OPENPYPE_VERSION from some host submitters #5302 + +Removing deprecated method of adding OPENPYPE_VERSION to job environment. It was leftover and other hosts have already been cleared. + + +___ + +
+ + +
+AYON: Fix args for workfile conversion util #5308

The workfile update conversion util function now has the right expected arguments.

___
+ +### **🔀 Refactored code** + + +
+Maya: Refactor imports to `lib.get_reference_node` since the other function… #5258 + +Refactor imports to `lib.get_reference_node` since the other function is deprecated. + + +___ + +
+ + + + +## [3.16.0](https://github.com/ynput/OpenPype/tree/3.16.0) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/...3.16.0) + +### **🆕 New features** + + +
+General: Reduce usage of legacy io #4723

Replace usages of `legacy_io` with getter methods or reuse already available information. Create plugins using CreateContext use the context from the CreateContext object. Loaders use getter functions from context tools. Publish plugins use information from instance.data or context.data. In some cases pieces of code were refactored a little, e.g. the fps getter in Maya.

___
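A sketch of the kind of substitution involved, assuming the getter lives in `openpype.pipeline.context_tools` (as referenced elsewhere in this changelog):

```python
from openpype.pipeline.context_tools import get_current_project_name

# Instead of reading legacy_io.Session["AVALON_PROJECT"]:
project_name = get_current_project_name()
```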
+ + +
+Documentation: API docs reborn - yet again #4419

## Feature

Add a functional base for API documentation using Sphinx and AutoAPI.

After unsuccessful #2512, #834 and #210 this is yet another try, but this time without the ambition to solve the whole issue: this makes the Sphinx script work and nothing else. Any changes and improvements in API docs should be made in subsequent PRs.

## How to use it

You can run:

```sh
cd .\docs
make.bat html
```

or

```sh
cd ./docs
make html
```

This will go over our code and generate **.rst** files in `/docs/source/autoapi` and from those it will generate the full html documentation in `/docs/build/html`.

During the build you'll see tons of red errors that point to our issues:

1) **Wrong imports**
   Invalid imports are usually wrong relative imports (too deep) or circular imports.

2) **Invalid doc-strings**
   Doc-strings need to follow some syntax to be processed into documentation - this can be checked by running
   `pydocstyle` that is already included with OpenPype
3) **Invalid markdown/rst files**
   md/rst files can be included inside rst files using the `.. include::` directive. But they have to be properly formatted.


## Editing rst templates

Everything starts with `/docs/source/index.rst` - this file should be properly edited. Right now it just includes `readme.rst` that in turn includes and parses the main `README.md`. This is the entry point to the API documentation. All templates generated by AutoAPI are in `/docs/source/autoapi`. They should eventually be committed to the repository and edited too.

## Steps for enhancing API documentation

1) Run `/docs/make.bat html`
2) Read the red errors/warnings - fix them in the code
3) Run `/docs/make.bat html` again until there are no red lines
4) Edit rst files and add some meaningful content there

> **Note**
> This can (should) be merged as is without doc-string fixes in the code or changes in templates. All additional improvements on API documentation should be made in new PRs.

> **Warning**
> You need to add new dependencies to use it. Run `create_venv`.

Connected to #2490
___
+ + +
+Global: custom location for OP local versions #4673

This provides a configurable location to unzip OpenPype version zips. By default it was hardcoded to the artist's app data folder, which might be problematic/slow with roaming profiles. The location must be accessible with write permissions by the user running OP Tray (so `Program Files` might be problematic).

___
+ + +
+AYON: Update settings conversion #4837 + +Updated conversion script of AYON settings to v3 settings. PR is related to changes in addons repository https://github.com/ynput/ayon-addons/pull/6 . Changed how the conversion happens -> conversion output does not start with openpype defaults but as empty dictionary. + + +___ + +
+ + +
+AYON: Implement integrate links publish plugin #4842 + +Implemented entity links get/create functions. Added new integrator which replaces v3 integrator for links. + + +___ + +
+ + +
+General: Version attributes integration #4991

Implemented a unified integrate plugin to update version attributes after all integrations for AYON. The goal is to be able to update attribute values on a version in a unified way once all addon integrators are done, so e.g. ftrack can add the ftrack id to the matching version on the AYON server etc. The values can be stored under the `"versionAttributes"` key.

___
+ + +
+AYON: Staging versions can be used #4992 + +Added ability to use staging versions in AYON mode. + + +___ + +
+ + +
+AYON: Preparation for products #5038 + +Prepare ayon settings conversion script for `product` settings conversion. + + +___ + +
+ + +
+Loader: Hide inactive versions in UI #5101 + +Added support for `active` argument to hide versions with active set to False in Loader UI when in AYON mode. + + +___ + +
+ + +
+General: CLI addon command #5109 + +Added `addon` alias for `module` in OpenPype cli commands. + + +___ + +
+ + +
+AYON: OpenPype as server addon #5199

The OpenPype repository can be converted to an AYON addon for distribution. The addon has defined dependencies that are required to use it and are not in the base ayon-launcher (desktop application).

___
+ + +
+General: Runtime dependencies #5206 + +Defined runtime dependencies in pyproject toml. Moved python ocio and otio modules there. + + +___ + +
+ + +
+AYON: Bundle distribution #5209

Since AYON server 0.3.0, addon versions are defined by bundles, which affects how addons, dependency packages and installers are handled. The only source of truth about which version of anything should be used is the server bundle.

___
+ + +
+Feature/blender handle q application #5264

This changes the way the QApplication is run for Blender. It calls in the singleton (QApplication) during register, so that other Qt applications and addons are able to run in Blender. In the previous implementation, if a QApplication was already running, all functionality of OpenPype became unavailable.

___
+ +### **🚀 Enhancements** + + +
+General: Connect to AYON server (base) #3924

Initial implementation of being able to use an AYON server with the current OpenPype client. Added the ability to connect to an AYON server and use base queries.

AYON mode has its own executable (and start script). To start in AYON mode just replace `start.py` with `ayon_start.py` (added a tray start script to tools). Added the constant `AYON_SERVER_ENABLED` to `openpype/__init__.py` to know if AYON mode is enabled. In that case Mongo is not used at all and any attempts will cause crashes. I had to modify the `~/openpype/client` content to be able to do this switch. The Mongo implementation was moved to a `mongo` subfolder, using "star imports" in files from where the current imports are used. The logic of tools and queries was not changed at all. Since the functions are based on mongo queries they don't use the full potential of the AYON server's abilities. ATM the implementation has a login UI, distribution of files from the server and replacement of mongo queries. For queries the `ayon_api` module is used, which is in live development so versions may change from day to day.

___
+ + +
+Enhancement kitsu note with exceptions #4537 + +Adding a setting to choose some exceptions to IntegrateKitsuNote task status changes. + + +___ + +
+ + +
+General: Environment variable for default OCIO configs #4670

Define an environment variable which leads to the root of the builtin OCIO configs, to be able to change the root without changing settings. The path in settings used `"{OPENPYPE_ROOT}/vendor/bin/ocioconfig/OpenColorIOConfig"`, which disallowed moving the root somewhere else. That will be needed in AYON, where configs won't be part of the desktop application but downloaded from the server.

___
+ + +
+AYON: Editorial hierarchy creation #4699

Implemented an extract hierarchy to AYON plugin which creates entities in AYON using the ayon api.

___
+ + +
+AYON: Vendorize ayon api #4753

Vendorize the ayon api into the openpype vendor directory. The reason is that `ayon-python-api` is in live development and will fix/add features often in the next few weeks/months, and because updating a dependency requires a new release -> new build, we want to avoid that as it would affect OpenPype development.

___
+ + +
+General: Update PySide 6 for MacOs #4764

The new version of PySide6 does not have issues with the settings UI. It still breaks UI stylesheets, so it is not changed for other platforms, but it is an enhancement over the previous state.

___
+ + +
+General: Removed unused cli commands #4902 + +Removed `texturecopy` and `launch` cli commands from cli commands. + + +___ + +
+ + +
+AYON: Linux & MacOS launch script #4970 + +Added shell script to launch tray in AYON mode. + + +___ + +
+ + +
+General: Qt scale enhancement #5059

Set the ~~'QT_SCALE_FACTOR_ROUNDING_POLICY'~~ scale factor rounding policy of QApplication to `PassThrough` so the scaling can be a 'float' number and not just an 'int' (150% -> 1.5 scale).

___
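For reference, a minimal sketch of setting the policy via `qtpy`; it must happen before the QApplication is instantiated:

```python
from qtpy import QtCore, QtWidgets

QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy(
    QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough
)
app = QtWidgets.QApplication([])
```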
+ + +
+CI: WPS linting instead of Hound (rebase) 2 #5115 + +Because Hound currently used to lint the code on GH ships with really old flake8 support, it fails miserably on any newer Python syntax. This PR is adding WPS linter to GitHub workflows that should step in. + + +___ + +
+ + +
+Max: OP parameters only displays what is attached to the container #5229

The OP parameter in 3dsMax now only displays what is currently attached to the container when deleting, while you can still see items that are not yet added when adding to the container.

___
+ + +
+Testing: improving logging during testing #5271

Unit testing logging was crashing on more than one nested layer of inherited loggers.

___
+ + +
+Nuke: removing deprecated settings in baking #5275 + +Removing deprecated settings for baking with reformat. This option was only for single reformat node and it had been substituted with multiple reposition nodes. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+AYON: General fixes and updates #4975 + +Few smaller fixes related to AYON connection. Some of fixes were taken from this PR. + + +___ + +
+ + +
+Start script: Change returncode on validate or list versions #4515

Change the exit code from `1` to `0` when versions are printed or when a version is validated.

Return code `1` indicates an error, but no error actually happened.

___
+ + +
+AYON: Change login UI works #4754

Fixed the change-login UI. The UI did show up and a new login was successful, but after restart the previous login was used. This change fixes the issue.

___
+ + +
+AYON: General issues #4763

The vendorized `ayon_api` from the previous PR broke the OpenPype launch, because `ayon_api` was not available. Moved `ayon_api` from the ayon-specific subfolder to the `common` python vendor in OpenPype, and removed the login in the ayon start script (which was invalid anyway). Also fixed compatibility with PySide6 by using `qtpy` instead of `Qt` and changing code which is not PySide6 compatible.

___
+ + +
+AYON: Small fixes #4841

Bugfixes and enhancements related to AYON logic. Define the `BUILTIN_OCIO_ROOT` environment variable so OCIO configs work. Use constants from the ayon api instead of hardcoding them in the codebase. Change the process name from "openpype" to "ayon". Don't execute the login dialog when the application is not yet running; use the `open` method instead. Fixed missing modules settings which were not taken from openpype defaults. Updated ayon api to `0.1.17`.

___
+ + +
+Bugfix - Update gazu to 0.9.3 #4845 + +This updates Gazu to 0.9.3 to make sure Gazu works with Kitsu and Zou 0.16.x+ + + +___ + +
+ + +
+Igniter: fix error reports in silent mode #4909 + +Some errors in silent mode commands in Igniter were suppressed and not visible for example in Deadline log. + + +___ + +
+ + +
+General: Remove ayon api from poetry lock #4964 + +Remove AYON python api from pyproject.toml and poetry.lock again. + + +___ + +
+ + +
+Ftrack: Fix AYON settings conversion #4967 + +Fix conversion of ftrack settings in AYON mode. + + +___ + +
+ + +
+AYON: ISO date format conversion issues #4981

The function `datetime.fromisoformat` was replaced with `arrow.get`.

___
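A short illustration of why; before Python 3.11, `fromisoformat` rejects common server timestamp forms that `arrow` parses fine:

```python
import arrow

# datetime.fromisoformat on Python < 3.11 cannot parse the "Z" suffix:
# datetime.fromisoformat("2023-07-14T12:30:45Z")  -> ValueError
dt = arrow.get("2023-07-14T12:30:45Z").datetime
print(dt.isoformat())  # 2023-07-14T12:30:45+00:00
```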
+ + +
+AYON: Missing files on representations #4989 + +Fix integration of files into representation in server database. + + +___ + +
+ + +
+General: Fix Python 2 vendor for arrow #4993 + +Moved remaining dependencies for arrow from ftrack to python 2 vendor. + + +___ + +
+ + +
+General: Fix new load plugins for next minor relase #5000 + +Fix access to `fname` attribute which is not available on load plugin anymore. + + +___ + +
+ + +
+General: Fix mongo secure connection #5031 + +Fix `ssl` and `tls` keys checks in mongo uri query string. + + +___ + +
+ + +
+AYON: Fix site sync settings #5069 + +Fixed settings for AYON variant of sync server. + + +___ + +
+ + +
+General: Replace deprecated keyword argument in PyMongo #5080 + +Use argument `tlsCAFile` instead of `ssl_ca_certs` to avoid deprecation warnings. + + +___ + +
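The change in one line, sketched with PyMongo (the URI and CA path are placeholders):

```python
from pymongo import MongoClient

# ssl_ca_certs was deprecated in PyMongo 3.9 and removed in 4.0;
# tlsCAFile is the current name of the same option.
client = MongoClient("mongodb://localhost:27017", tlsCAFile="/path/to/ca.pem")
```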
+ + +
+Igniter: QApplication is created #5081 + +Function `_get_qt_app` actually creates new `QApplication` if was not created yet. + + +___ + +
+ + +
+General: Lower unidecode version #5090 + +Use older version of Unidecode module to support Python 2. + + +___ + +
+ + +
+General: Lower cryptography to 39.0.0 #5099 + +Lower cryptography to 39.0.0 to avoid breaking of DCCs like Maya and Nuke. + + +___ + +
+ + +
+AYON: Global environments key fix #5118

It seems that when converting AYON settings to OP settings, the `environments` setting is put under the `environments` key in `general`; however, when populating the environment, the `environment` key gets picked up, which does not contain the environment variables from the `core/environments` setting.

___
+ + +
+Add collector to tray publisher for getting frame range data #5152

Add a collector to tray publisher to get frame range data. Users can choose to enable this collector if they need this in the publisher. Resolves #5136.

___
+ + +
+Unreal: get current project settings not using unreal project name #5170 + +There was a bug where Unreal project name was used to query project settings. But Unreal project name can differ from the "real" one because of naming convention rules set by Unreal. This is fixing it by asking for current project settings. + + +___ + +
+ + +
+Substance Painter: Fix Collect Texture Set Images unable to copy.deepcopy due to QMenu #5238 + +Fix `copy.deepcopy` of `instance.data`. + + +___ + +
+ + +
+Ayon: server returns different key #5251 + +Package returned from server has `filename` instead of `name`. + + +___ + +
+ + +
+Substance Painter: Fix default color management settings #5259

The default color management settings for Substance Painter were invalid: they were set to override the global config by default but specified no valid config paths of their own, and thus errored that the paths were not correct. This sets the defaults correctly to match other hosts. _I quickly checked - this seems to be the only host with the wrong default settings._

___
+ + +
+Nuke: fixing container data if windows path in value #5267

Windows paths in container data are reformatted. Previously it was reported that Nuke was raising a `utf8 0xc0` error if backward slashes were in data values.

___
+ + +
+Houdini: fix typo error in collect arnold rop #5281

Fixing a typo in `collect_arnold_rop.py`. Reference: #5280

___
+ + +
+Slack - enhanced logging and protection against failure #5287 + +Covered issues found in production on customer site. SlackAPI exception doesn't need to have 'error', covered uncaught exception. + + +___ + +
+ + +
+Maya: Removed unnecessary import of pyblish.cli #5292 + +This import resulted in adding additional logging handler which lead to duplication of logs in hosts with plugins containing `is_in_tests` method. Import is unnecessary for testing functionality. + + +___ + +
+ +### **🔀 Refactored code** + + +
+Loader: Remove `context` argument from Loader.__init__() #4602 + +Remove the previously required `context` argument. + + +___ + +
+ + +
+Global: Remove legacy integrator #4786 + +Remove the legacy integrator. + + +___ + +
+ +### **📃 Documentation** + + +
+Next Minor Release #5291 + + +___ + +
+ +### **Merged pull requests** + + +
+Maya: Refactor to new publisher #4388 + +**Refactor Maya to use the new publisher with new creators.** + + +- [x] Legacy instance can be converted in UI using `SubsetConvertorPlugin` +- [x] Fix support for old style "render" and "vrayscene" instance to the new per layer format. +- [x] Context data is stored with scene +- [x] Workfile instance converted to AutoCreator +- [x] Converted Creator classes +- [x] Create animation +- [x] Create ass +- [x] Create assembly +- [x] Create camera +- [x] Create layout +- [x] Create look +- [x] Create mayascene +- [x] Create model +- [x] Create multiverse look +- [x] Create multiverse usd +- [x] Create multiverse usd comp +- [x] Create multiverse usd over +- [x] Create pointcache +- [x] Create proxy abc +- [x] Create redshift proxy +- [x] Create render +- [x] Create rendersetup +- [x] Create review +- [x] Create rig +- [x] Create setdress +- [x] Create unreal skeletalmesh +- [x] Create unreal staticmesh +- [x] Create vrayproxy +- [x] Create vrayscene +- [x] Create xgen +- [x] Create yeti cache +- [x] Create yeti rig +- [ ] Tested new Creator publishes +- [x] Publish animation +- [x] Publish ass +- [x] Publish assembly +- [x] Publish camera +- [x] Publish layout +- [x] Publish look +- [x] Publish mayascene +- [x] Publish model +- [ ] Publish multiverse look +- [ ] Publish multiverse usd +- [ ] Publish multiverse usd comp +- [ ] Publish multiverse usd over +- [x] Publish pointcache +- [x] Publish proxy abc +- [x] Publish redshift proxy +- [x] Publish render +- [x] Publish rendersetup +- [x] Publish review +- [x] Publish rig +- [x] Publish setdress +- [x] Publish unreal skeletalmesh +- [x] Publish unreal staticmesh +- [x] Publish vrayproxy +- [x] Publish vrayscene +- [x] Publish xgen +- [x] Publish yeti cache +- [x] Publish yeti rig +- [x] Publish workfile +- [x] Rig loader correctly generates a new style animation creator instance +- [ ] Validations / Error messages for common validation failures look nice and usable as a report. +- [ ] Make Create Animation hidden to the user (should not create manually?) +- [x] Correctly detect difference between **'creator_attributes'** and **'instance_data'** since both are "flattened" to the top node. + + +___ + +
+ + +
+Start script: Fix possible issues with destination drive path #4478

Drive paths for Windows now fix a possibly missing slash at the end of the destination path.

The Windows `subst` command requires the destination path to end with a slash if it's a drive (it should be `G:\` not `G:`).

___
+ + +
+Global: Move PyOpenColorIO to vendor/python #4946 + +So that DCCs don't conflict with their own. + +See https://github.com/ynput/OpenPype/pull/4267#issuecomment-1537153263 for the issue with Gaffer. + +I'm not sure if this is the correct approach, but I assume PySide/Shiboken is under `vendor/python` for this reason as well... +___ + +
+ + +
+RuntimeError with Click on deadline publish #5065 + +I changed Click to version 8.0 instead of 7.1.2 to solve this error: +``` +2023-05-30 16:16:51: 0: STDOUT: Traceback (most recent call last): +2023-05-30 16:16:51: 0: STDOUT: File "start.py", line 1126, in boot +2023-05-30 16:16:51: 0: STDOUT: File "/prod/softprod/apps/openpype/LINUX/3.15/dependencies/click/core.py", line 829, in __call__ +2023-05-30 16:16:51: 0: STDOUT: return self.main(*args, **kwargs) +2023-05-30 16:16:51: 0: STDOUT: File "/prod/softprod/apps/openpype/LINUX/3.15/dependencies/click/core.py", line 760, in main +2023-05-30 16:16:51: 0: STDOUT: _verify_python3_env() +2023-05-30 16:16:51: 0: STDOUT: File "/prod/softprod/apps/openpype/LINUX/3.15/dependencies/click/_unicodefun.py", line 126, in _verify_python3_env +2023-05-30 16:16:51: 0: STDOUT: raise RuntimeError( +2023-05-30 16:16:51: 0: STDOUT: RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Consult https://click.palletsprojects.com/python3/ for mitigation steps. +``` + + +___ + +
+ + + + +## [3.15.12](https://github.com/ynput/OpenPype/tree/3.15.12) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.11...3.15.12) + +### **🆕 New features** + + +
+Tray Publisher: User can set colorspace per instance explicitly #4901 + +With this feature a user can set/override the colorspace for the representations of an instance explicitly instead of relying on the File Rules from project settings or alike. This way you can ingest any file and explicitly say "this file is colorspace X". + + +___ + +
+ + +
+Review Family in Max #5001

Review feature creating a preview animation in 3dsMax. (The code is still being cleaned up, so there will be some updates until it is ready for review.)

___
+ + +
+AfterEffects: support for workfile template builder #5163

This PR adds templated workfile builder functionality. It allows preparing an AE workfile with placeholders for automatically loading a particular representation of a particular subset of a particular asset from the context where the workfile is opened. Selection from multiple prepared workfiles is provided with the usage of templates; a specific type of task could use a particular workfile template, etc. Artists can then build a workfile from a template when opening a new workfile.

___
+ + +
+CreatePlugin: Get next version helper #5242 + +Implemented helper functions to get next available versions for create instances. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Maya: Improve Templates #4854 + +Use library method for fetching reference node and support parent in hierarchy. + + +___ + +
+ + +
+Bug: Maya - xgen sidecar files arent moved when saving workfile as an new asset workfile changing context - OP-6222 #5215 + +This PR manages the Xgen files when switching context in the Workfiles app. + + +___ + +
+ + +
+node references to check for duplicates in Max #5192

No duplicate node references in Max when users select nodes before publishing.

___
+ + +
+Tweak profiles logging to debug level #5194 + +Tweak profiles logging to debug level since they aren't artist facing logs. + + +___ + +
+ + +
+Enhancement: Reduce more visual clutter for artists in new publisher reports #5208 + +Got this from one of our artists' reports - figured some of these logs were definitely not for the artist, reduced those logs to debug level. + + +___ + +
+ + +
+Cosmetics: Tweak pyblish repair actions (icon, logs, docstring) #5213 + +- Add icon to RepairContextAction +- logs to debug level +- also add attempt repair for RepairAction for consistency +- fix RepairContextAction docstring to mention correct argument name + +#### Additional info + +We should not forget to remove this ["deprecated" actions.py file](https://github.com/ynput/OpenPype/blob/3501d0d23a78fbaef106da2fffe946cb49bef855/openpype/action.py) in 3.16 (next-minor) + +## Testing notes: + +1. Run some fabulous repairs! + +___ + +
+ + +
+Maya: fix save file prompt on launch last workfile with color management enabled + restructure `set_colorspace` #5225 + +- Only set `configFilePath` when OCIO env var is not set since it doesn't do anything if OCIO var is set anyway. +- Set the Maya 2022+ default OCIO path using the resources path instead of "" to avoid Maya Save File on new file after launch +- **Bugfix: This is what fixes the Save prompt on open last workfile feature with Global color management enabled** +- Move all code related to applying the maya settings together after querying the settings +- Swap around the `if use_workfile_settings` since the check was reversed +- Use `get_current_project_name()` instead of environment vars + + +___ + +
+ + +
+Enhancement: More descriptive error messages for Loaders #5227 + +Tweak raised errors and error messages for loader errors. + + +___ + +
+ + +
+Houdini: add select invalid action for ValidateSopOutputNode #5231

This PR adds a `SelectROPAction` action to `houdini\api\action.py`, and it's used in `Validate Output Node`. `SelectROPAction` is used to select the ROPs associated with the errored instances.

___
+ + +
+Remove new lines from the delivery template string #5235

If the delivery template has a newline symbol at the end, say it was copied from a text editor, the delivery process fails with an `OSError` due to an incorrect destination path. To avoid that, `rstrip()` was added to the `delivery_path` processing.

___
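A tiny illustration (the template string is made up):

```python
template = "/proj/delivery/{asset}/{subset}_v{version:0>3}\n"  # pasted with "\n"
delivery_path = template.rstrip()
print(repr(delivery_path))  # '/proj/delivery/{asset}/{subset}_v{version:0>3}'
```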
+ + +
+Houdini: better selection on pointcache creation #5250

Houdini allows an `ObjNode` path as `sop_path` in the ROP, unlike OP/AYON, which require `sop_path` to be set to a SOP node path explicitly. In this code, better selection is used to filter out invalid selections from the OP/AYON point of view.

Valid selections are:
- a `SopNode` that has a parent of type `geo` or `subnet`
- an `ObjNode` of type `geo` that has:
  - a `SopNode` of type `output`
  - a `SopNode` with the render flag on (if there is no `SopNode` of type `output`)

This effectively filters out:
- empty `ObjNode`s
- `ObjNode`s of other types like `cam` and `dopnet`
- `SopNode`s whose parents are of other types like `cam` and `sop solver`

___
+ + +
+Update scene inventory even if any errors occurred during update #5252
+
+When selecting many items in the scene inventory to update versions, the updating stops if one of the items errors out. However, before this PR the scene inventory would also NOT refresh, making you think it did nothing. The logic is also implemented as a method to allow some code deduplication.
+
+
+___
+
+
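+The pattern, in sketch form (`update_container` is the call from `openpype.pipeline.load.utils`; passing the view's refresh callable in is a simplification for illustration):
+
+```python
+from openpype.pipeline.load.utils import update_container
+
+
+def update_items(items, version, refresh):
+    """Update each container; 'refresh' is the view's refresh callable."""
+    error_items = []
+    for item in items:
+        try:
+            update_container(item, version)
+        except Exception:
+            # Collect the failed item and keep going instead of aborting.
+            error_items.append(item)
+    # Refresh runs even after failures so the view reflects what updated.
+    refresh()
+    return error_items
+```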
+ +### **🐛 Bug fixes** + + +
+Maya: Convert frame values to integers #5188 + +Convert frame values to integers. + + +___ + +
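+One place where the cast matters, sketched with Maya's playback range (which is returned as floats):
+
+```python
+from maya import cmds
+
+frame_start = int(cmds.playbackOptions(query=True, minTime=True))
+frame_end = int(cmds.playbackOptions(query=True, maxTime=True))
+```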
+ + +
+Maya: fix the register_event_callback correctly collecting workfile save after #5214
+
+Fixes a bug where `register_event_callback` was not able to collect the "workfile_save_after" action for the lock-file action.
+
+
+___
+
+
+ + +
+Maya: aligning default settings to distributed aces 1.2 config #5233
+
+Maya colorspace settings defaults are set so that they align with our distributed ACES 1.2 config file set in the global colorspace configs.
+
+
+___
+
+
+ + +
+RepairAction and SelectInvalidAction filter instances failed on the exact plugin #5240 + +RepairAction and SelectInvalidAction actually filter to instances that failed on the exact plugin - not on "any failure" + + +___ + +
+ + +
+Maya: Bugfix look update nodes by id with non-unique shape names (query with `fullPath`) #5257
+
+Fixes a bug where updating attributes on nodes with an assigned shader failed if a shape name existed more than once in the scene, due to the `cmds.listRelatives` call not being done with the `fullPath=True` flag. Original error:
+```python
+# Traceback (most recent call last):
+# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 264, in 
+# lambda: self._show_version_dialog(items))
+# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 722, in _show_version_dialog
+# self._update_containers(items, version)
+# File "E:\openpype\OpenPype\openpype\tools\sceneinventory\view.py", line 849, in _update_containers
+# update_container(item, item_version)
+# File "E:\openpype\OpenPype\openpype\pipeline\load\utils.py", line 502, in update_container
+# return loader.update(container, new_representation)
+# File "E:\openpype\OpenPype\openpype\hosts\maya\plugins\load\load_look.py", line 119, in update
+# nodes_by_id[lib.get_id(n)].append(n)
+# File "E:\openpype\OpenPype\openpype\hosts\maya\api\lib.py", line 1420, in get_id
+# sel.add(node)
+```
+
+
+___
+
+
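+The crux of the fix, sketched with an illustrative node name:
+
+```python
+from maya import cmds
+
+# Short names are ambiguous when a shape name occurs more than once in the
+# scene; fullPath=True returns unambiguous full DAG paths instead.
+shapes = cmds.listRelatives("pSphere1", shapes=True, fullPath=True) or []
+```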
+ + +
+Nuke: Create nodes with inpanel=False #5051
+
+This PR is meant to remove the annoyance of the UI changing focus to the properties window, just for the property panel of the newly created node to disappear again. Instead of using `node.hideControlPanel()`, the concealment is implemented during the creation of the node, which does not change the focus of the current window.
+___
+
+
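+A minimal sketch of the approach:
+
+```python
+import nuke
+
+# Creating the node with inpanel=False never opens its properties panel,
+# so the current window keeps focus and no hiding is needed afterwards.
+node = nuke.createNode("Blur", inpanel=False)
+```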
+ + +
+Fix the reset frame range not setting up the right timeline in Max #5187 + +Resolve #5181 + + +___ + +
+ + +
+Resolve: after launch automatization fixes #5193
+
+The workfile is now correctly created and aligned with the actual project. Also, the launching mechanism is fixed so that even if no workfile has been saved yet, the OpenPype menu opens automatically.
+
+
+___
+
+
+ + +
+General: Revert backward incompatible change of path to template to multiplatform #5197
+
+Multi-platform support is still handled by the usage of `work[root]` (or any other root that is accessible across platforms).
+
+
+___
+
+
+ + +
+Nuke: root set format updating in node graph #5198
+
+The Nuke root node needs to be reset on some values so that knobs can be updated in the node graph. This works the same way as when a user changes the frame number, causing expressions to update their values in knobs.
+
+
+___
+
+
+ + +
+Hiero: fixing otio current project and cosmetics #5200
+
+OTIO was not returning the correct current project once an additional Untitled project was open in the project manager stack.
+
+
+___
+
+
+ + +
+Max: Publisher instances don't hold their enabled/disabled states when Publisher is reopened #5202
+
+Resolves #5183, a general MAXScript-to-Python conversion issue (e.g. bool conversion: `true` in MAXScript vs `True` in Python). Also resolves the ValueError raised when you change the subset to publish in the list view menu.
+
+
+___
+
+
+ + +
+Burnins: Filter script is defined only for video streams #5205 + +Burnins are working for inputs with audio. + + +___ + +
+ + +
+Colorspace lib fix compatible python version comparison #5212 + +Fix python version comparison. + + +___ + +
+ + +
+Houdini: Fix `get_color_management_preferences` #5217 + +Fix the issue described here where the logic for retrieving the current OCIO display and view was incorrectly trying to apply a regex to it. + + +___ + +
+ + +
+Houdini: Redshift ROP image format bug #5218
+
+Problem:
+The "RS_outputFileFormat" parm value was missing,
+and there were more "image_format" options than the Redshift ROP supports.
+
+Fix:
+1) Removed unnecessary formats from `image_format_enum`.
+2) Added the selected format value to `RS_outputFileFormat`.
+___
+
+
+ + +
+Colorspace: check PyOpenColorIO rather than python version #5223
+
+Fixes the previously merged PR (https://github.com/ynput/OpenPype/pull/5212) and applies a better way to check compatibility with the PyOpenColorIO Python API.
+
+
+___
+
+
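+A sketch of probing the library itself instead of the interpreter version (illustrative, not the exact check used):
+
+```python
+try:
+    import PyOpenColorIO  # noqa: F401
+    compatible = True
+except ImportError:
+    # No usable OCIO bindings are available for this interpreter.
+    compatible = False
+```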
+ + +
+Validate delivery action representations status #5228 + +- disable delivery button if no representations checked +- fix macos combobox layout +- add error message if no delivery templates found + + +___ + +
+ + +
+Houdini: Add geometry check for pointcache family #5230
+
+When the `sop_path` on the ABC ROP node points to a non-`SopNode`, the validators `validate_abc_primitive_to_detail.py` and `validate_primitive_hierarchy_paths.py` will error and crash when this line is executed: `geo = output_node.geometryAtFrame(frame)`
+
+
+___
+
+
+ + +
+Houdini: Add geometry check for VDB family #5232
+
+When the `sop_path` on the Geometry ROP node points to a non-`SopNode`, the validator `validate_vdb_output_node.py` will error and crash when this line is executed: `sop_node.geometryAtFrame(frame)`
+
+
+___
+
+
+ + +
+Substance Painter: Include the setting only in publish tab #5234
+
+Instead of having two settings, one in each of the create and publish tabs, there is solely one setting in the publish tab for users to set up the parameters. Resolves #5172
+
+
+___
+
+
+ + +
+Maya: Fix collecting arnold prefix when none #5243 + +When no prefix is specified in render settings, the renderlayer collector would error. + + +___ + +
+ + +
+Deadline: OPENPYPE_VERSION should only be added when running from build #5244 + +When running from source the environment variable `OPENPYPE_VERSION` should not be added. This is a bugfix for the feature #4489 + + +___ + +
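+Conceptually the guard looks like this; detecting a build via `sys.frozen` is an assumption for this sketch, not necessarily how OpenPype detects it:
+
+```python
+import os
+import sys
+
+
+def is_running_from_build():
+    # Frozen builds expose sys.frozen; source checkouts do not (assumption).
+    return bool(getattr(sys, "frozen", False))
+
+
+def build_submission_environment(env):
+    # Pin the version only for builds; source runs must not inherit it.
+    if is_running_from_build():
+        env["OPENPYPE_VERSION"] = os.environ.get("OPENPYPE_VERSION", "")
+    return env
+```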
+ + +
+Fix no prompt for "unsaved changes" showing when opening workfile in Houdini #5246 + +Fix no prompt for "unsaved changes" showing when opening workfile in Houdini. + + +___ + +
+ + +
+Fix no prompt for "unsaved changes" showing when opening workfile in Substance Painter #5248 + +Fix no prompt for "unsaved changes" showing when opening workfile in Substance Painter. + + +___ + +
+ + +
+General: add the os library before os.environ.get #5249
+
+Adds the missing `os` import to `creator_plugins.py`, required by the `os.environ.get` call on line 667.
+
+
+___
+
+
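+The fix itself is just the missing import; conceptually:
+
+```python
+import os  # previously missing in creator_plugins.py
+
+# Without the import above, a call like this raises NameError at runtime.
+value = os.environ.get("SOME_ENV_VAR")  # illustrative variable name
+```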
+ + +
+Maya: Fix set_attribute for enum attributes #5261 + +Fix for #5260 + + +___ + +
+ + +
+Unreal: Move Qt imports away from module init #5268
+
+Importing `Window` creates errors in headless mode.
+```
+*** WRN: >>> { ModulesLoader }: [ FAILED to import host folder unreal ]
+=============================
+No Qt bindings could be found
+=============================
+Traceback (most recent call last):
+ File "C:\Users\tokejepsen\OpenPype\.venv\lib\site-packages\qtpy\__init__.py", line 252, in 
+ from PySide6 import __version__ as PYSIDE_VERSION # analysis:ignore
+ModuleNotFoundError: No module named 'PySide6'
+
+During handling of the above exception, another exception occurred:
+
+Traceback (most recent call last):
+ File "C:\Users\tokejepsen\OpenPype\openpype\modules\base.py", line 385, in _load_modules
+ default_module = __import__(
+ File "C:\Users\tokejepsen\OpenPype\openpype\hosts\unreal\__init__.py", line 1, in 
+ from .addon import UnrealAddon
+ File "C:\Users\tokejepsen\OpenPype\openpype\hosts\unreal\addon.py", line 4, in 
+ from openpype.widgets.message_window import Window
+ File "C:\Users\tokejepsen\OpenPype\openpype\widgets\__init__.py", line 1, in 
+ from .password_dialog import PasswordDialog
+ File "C:\Users\tokejepsen\OpenPype\openpype\widgets\password_dialog.py", line 1, in 
+ from qtpy import QtWidgets, QtCore, QtGui
+ File "C:\Users\tokejepsen\OpenPype\.venv\lib\site-packages\qtpy\__init__.py", line 259, in 
+ raise QtBindingsNotFoundError()
+qtpy.QtBindingsNotFoundError: No Qt bindings could be found
+```
+
+
+___
+
+
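+The usual remedy, sketched generically: defer the Qt import into the function that needs it, so importing the module itself stays headless-safe.
+
+```python
+def show_message(title, text):
+    # Import Qt bindings lazily; headless code paths never reach this line.
+    from qtpy import QtWidgets
+
+    # Ensure a QApplication exists before showing any widget.
+    app = QtWidgets.QApplication.instance() or QtWidgets.QApplication([])
+    QtWidgets.QMessageBox.information(None, title, text)
+```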
+ +### **🔀 Refactored code** + + +
+Maya: Minor refactoring and code cleanup #5226
+
+Some small cleanup and refactoring of logic. Removing old comments, unused imports and some minor optimization. Also removed the prints of the loader names of each container in the scene in `fix_incompatible_containers`, plus optimizing by using a `set` and defining it only once. Moved some UI-related code/tweaks to run `on_init` only if not in headless mode. Removed an empty `obj.py` file. Each commit message kind of describes why the change was made.
+
+
+___
+
+
+ +### **Merged pull requests** + + +
+Bug: Template builder fails when loading data without outliner representation #5222
+
+I added assertion handling for the case where the container does not have a representation in the outliner.
+
+
+___
+
+
+ + +
+AfterEffects - add container check validator to AE settings #5203
+
+Adds a check that the scene contains only the latest versions of loaded containers.
+
+
+___
+
+
+ + + + ## [3.15.11](https://github.com/ynput/OpenPype/tree/3.15.11) @@ -1970,7 +4893,7 @@ ___
Maya Load References - Add Display Handle Setting #4904 -When we load a reference in Maya using OpenPype loader, display handle is checked by default and prevent us to select easily the object in the viewport. I understand that some productions like to keep this option, so I propose to add display handle to the reference loader settings. +When we load a reference in Maya using OpenPype loader, display handle is checked by default and prevent us to select easily the object in the viewport. I understand that some productions like to keep this option, so I propose to add display handle to the reference loader settings. ___ @@ -2078,7 +5001,7 @@ ___
Patchelf version locked #4853 -For Centos dockerfile it is necessary to lock the patchelf version to the older, otherwise the build process fails. +For Centos dockerfile it is necessary to lock the patchelf version to the older, otherwise the build process fails. ___ diff --git a/Dockerfile.centos7 b/Dockerfile.centos7 index ce1a624a4f..9217140f20 100644 --- a/Dockerfile.centos7 +++ b/Dockerfile.centos7 @@ -32,12 +32,16 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n wget \ gcc \ zlib-devel \ + pcre-devel \ + perl-core \ bzip2 \ bzip2-devel \ readline-devel \ sqlite sqlite-devel \ openssl-devel \ openssl-libs \ + openssl11-devel \ + openssl11-libs \ tk-devel libffi-devel \ patchelf \ automake \ @@ -71,7 +75,12 @@ RUN echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \ && echo 'eval "$(pyenv init -)"' >> $HOME/.bashrc \ && echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/.bashrc \ && echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc -RUN source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION} +RUN source $HOME/.bashrc \ + && export CPPFLAGS="-I/usr/include/openssl11" \ + && export LDFLAGS="-L/usr/lib64/openssl11 -lssl -lcrypto" \ + && export PATH=/usr/local/openssl/bin:$PATH \ + && export LD_LIBRARY_PATH=/usr/local/openssl/lib:$LD_LIBRARY_PATH \ + && pyenv install ${OPENPYPE_PYTHON_VERSION} COPY . /opt/openpype/ RUN rm -rf /openpype/.poetry || echo "No Poetry installed yet." @@ -93,12 +102,13 @@ RUN source $HOME/.bashrc \ RUN source $HOME/.bashrc \ && ./tools/fetch_thirdparty_libs.sh +RUN echo 'export PYTHONPATH="/opt/openpype/vendor/python:$PYTHONPATH"'>> $HOME/.bashrc RUN source $HOME/.bashrc \ && bash ./tools/build.sh RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.9/lib \ - && cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.9/lib \ - && cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.9/lib \ + && cp /usr/lib64/openssl11/libssl* ./build/exe.linux-x86_64-3.9/lib \ + && cp /usr/lib64/openssl11/libcrypto* ./build/exe.linux-x86_64-3.9/lib \ && cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.9/lib \ && cp /usr/lib64/libxcb* ./build/exe.linux-x86_64-3.9/vendor/python/PySide2/Qt/lib diff --git a/README.md b/README.md index 8757e3db92..6caed8061c 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ [![All Contributors](https://img.shields.io/badge/all_contributors-28-orange.svg?style=flat-square)](#contributors-) OpenPype -==== +======== [![documentation](https://github.com/pypeclub/pype/actions/workflows/documentation.yml/badge.svg)](https://github.com/pypeclub/pype/actions/workflows/documentation.yml) ![GitHub VFX Platform](https://img.shields.io/badge/vfx%20platform-2022-lightgrey?labelColor=303846) @@ -47,7 +47,7 @@ It can be built and ran on all common platforms. We develop and test on the foll For more details on requirements visit [requirements documentation](https://openpype.io/docs/dev_requirements) Building OpenPype -------------- +----------------- To build OpenPype you currently need [Python 3.9](https://www.python.org/downloads/) as we are following [vfx platform](https://vfxplatform.com). Because of some Linux distros comes with newer Python version @@ -67,9 +67,9 @@ git clone --recurse-submodules git@github.com:Pypeclub/OpenPype.git #### To build OpenPype: -1) Run `.\tools\create_env.ps1` to create virtual environment in `.\venv` +1) Run `.\tools\create_env.ps1` to create virtual environment in `.\venv`. 
2) Run `.\tools\fetch_thirdparty_libs.ps1` to download third-party dependencies like ffmpeg and oiio. Those will be included in build. -3) Run `.\tools\build.ps1` to build OpenPype executables in `.\build\` +3) Run `.\tools\build.ps1` to build OpenPype executables in `.\build\`. To create distributable OpenPype versions, run `./tools/create_zip.ps1` - that will create zip file with name `openpype-vx.x.x.zip` parsed from current OpenPype repository and @@ -88,38 +88,38 @@ some OpenPype dependencies like [CMake](https://cmake.org/) and **XCode Command Easy way of installing everything necessary is to use [Homebrew](https://brew.sh): 1) Install **Homebrew**: -```sh -/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" -``` + ```sh + /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" + ``` 2) Install **cmake**: -```sh -brew install cmake -``` + ```sh + brew install cmake + ``` 3) Install [pyenv](https://github.com/pyenv/pyenv): -```sh -brew install pyenv -echo 'eval "$(pyenv init -)"' >> ~/.zshrc -pyenv init -exec "$SHELL" -PATH=$(pyenv root)/shims:$PATH -``` + ```sh + brew install pyenv + echo 'eval "$(pyenv init -)"' >> ~/.zshrc + pyenv init + exec "$SHELL" + PATH=$(pyenv root)/shims:$PATH + ``` -4) Pull in required Python version 3.9.x -```sh -# install Python build dependences -brew install openssl readline sqlite3 xz zlib +4) Pull in required Python version 3.9.x: + ```sh + # install Python build dependences + brew install openssl readline sqlite3 xz zlib -# replace with up-to-date 3.9.x version -pyenv install 3.9.6 -``` + # replace with up-to-date 3.9.x version + pyenv install 3.9.6 + ``` -5) Set local Python version -```sh -# switch to OpenPype source directory -pyenv local 3.9.6 -``` +5) Set local Python version: + ```sh + # switch to OpenPype source directory + pyenv local 3.9.6 + ``` #### To build OpenPype: @@ -241,7 +241,7 @@ pyenv local 3.9.6 Running OpenPype ------------- +---------------- OpenPype can by executed either from live sources (this repository) or from *"frozen code"* - executables that can be build using steps described above. @@ -289,7 +289,7 @@ To run tests, execute `.\tools\run_tests(.ps1|.sh)`. Developer tools -------------- +--------------- In case you wish to add your own tools to `.\tools` folder without git tracking, it is possible by adding it with `dev_*` suffix (example: `dev_clear_pyc(.ps1|.sh)`). diff --git a/common/openpype_common/distribution/README.md b/common/openpype_common/distribution/README.md deleted file mode 100644 index 212eb267b8..0000000000 --- a/common/openpype_common/distribution/README.md +++ /dev/null @@ -1,18 +0,0 @@ -Addon distribution tool ------------------------- - -Code in this folder is backend portion of Addon distribution logic for v4 server. - -Each host, module will be separate Addon in the future. Each v4 server could run different set of Addons. - -Client (running on artist machine) will in the first step ask v4 for list of enabled addons. -(It expects list of json documents matching to `addon_distribution.py:AddonInfo` object.) -Next it will compare presence of enabled addon version in local folder. In the case of missing version of -an addon, client will use information in the addon to download (from http/shared local disk/git) zip file -and unzip it. - -Required part of addon distribution will be sharing of dependencies (python libraries, utilities) which is not part of this folder. 
- -Location of this folder might change in the future as it will be required for a clint to add this folder to sys.path reliably. - -This code needs to be independent on Openpype code as much as possible! \ No newline at end of file diff --git a/common/openpype_common/distribution/addon_distribution.py b/common/openpype_common/distribution/addon_distribution.py deleted file mode 100644 index 5e48639dec..0000000000 --- a/common/openpype_common/distribution/addon_distribution.py +++ /dev/null @@ -1,208 +0,0 @@ -import os -from enum import Enum -from abc import abstractmethod -import attr -import logging -import requests -import platform -import shutil - -from .file_handler import RemoteFileHandler -from .addon_info import AddonInfo - - -class UpdateState(Enum): - EXISTS = "exists" - UPDATED = "updated" - FAILED = "failed" - - -class AddonDownloader: - log = logging.getLogger(__name__) - - def __init__(self): - self._downloaders = {} - - def register_format(self, downloader_type, downloader): - self._downloaders[downloader_type.value] = downloader - - def get_downloader(self, downloader_type): - downloader = self._downloaders.get(downloader_type) - if not downloader: - raise ValueError(f"{downloader_type} not implemented") - return downloader() - - @classmethod - @abstractmethod - def download(cls, source, destination): - """Returns url to downloaded addon zip file. - - Args: - source (dict): {type:"http", "url":"https://} ...} - destination (str): local folder to unzip - Returns: - (str) local path to addon zip file - """ - pass - - @classmethod - def check_hash(cls, addon_path, addon_hash): - """Compares 'hash' of downloaded 'addon_url' file. - - Args: - addon_path (str): local path to addon zip file - addon_hash (str): sha256 hash of zip file - Raises: - ValueError if hashes doesn't match - """ - if not os.path.exists(addon_path): - raise ValueError(f"{addon_path} doesn't exist.") - if not RemoteFileHandler.check_integrity(addon_path, - addon_hash, - hash_type="sha256"): - raise ValueError(f"{addon_path} doesn't match expected hash.") - - @classmethod - def unzip(cls, addon_zip_path, destination): - """Unzips local 'addon_zip_path' to 'destination'. 
- - Args: - addon_zip_path (str): local path to addon zip file - destination (str): local folder to unzip - """ - RemoteFileHandler.unzip(addon_zip_path, destination) - os.remove(addon_zip_path) - - @classmethod - def remove(cls, addon_url): - pass - - -class OSAddonDownloader(AddonDownloader): - - @classmethod - def download(cls, source, destination): - # OS doesnt need to download, unzip directly - addon_url = source["path"].get(platform.system().lower()) - if not os.path.exists(addon_url): - raise ValueError("{} is not accessible".format(addon_url)) - return addon_url - - -class HTTPAddonDownloader(AddonDownloader): - CHUNK_SIZE = 100000 - - @classmethod - def download(cls, source, destination): - source_url = source["url"] - cls.log.debug(f"Downloading {source_url} to {destination}") - file_name = os.path.basename(destination) - _, ext = os.path.splitext(file_name) - if (ext.replace(".", '') not - in set(RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS)): - file_name += ".zip" - RemoteFileHandler.download_url(source_url, - destination, - filename=file_name) - - return os.path.join(destination, file_name) - - -def get_addons_info(server_endpoint): - """Returns list of addon information from Server""" - # TODO temp - # addon_info = AddonInfo( - # **{"name": "openpype_slack", - # "version": "1.0.0", - # "addon_url": "c:/projects/openpype_slack_1.0.0.zip", - # "type": UrlType.FILESYSTEM, - # "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658"}) # noqa - # - # http_addon = AddonInfo( - # **{"name": "openpype_slack", - # "version": "1.0.0", - # "addon_url": "https://drive.google.com/file/d/1TcuV8c2OV8CcbPeWi7lxOdqWsEqQNPYy/view?usp=sharing", # noqa - # "type": UrlType.HTTP, - # "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658"}) # noqa - - response = requests.get(server_endpoint) - if not response.ok: - raise Exception(response.text) - - addons_info = [] - for addon in response.json(): - addons_info.append(AddonInfo(**addon)) - return addons_info - - -def update_addon_state(addon_infos, destination_folder, factory, - log=None): - """Loops through all 'addon_infos', compares local version, unzips. - - Loops through server provided list of dictionaries with information about - available addons. Looks if each addon is already present and deployed. - If isn't, addon zip gets downloaded and unzipped into 'destination_folder'. - Args: - addon_infos (list of AddonInfo) - destination_folder (str): local path - factory (AddonDownloader): factory to get appropriate downloader per - addon type - log (logging.Logger) - Returns: - (dict): {"addon_full_name": UpdateState.value - (eg. 
"exists"|"updated"|"failed") - """ - if not log: - log = logging.getLogger(__name__) - - download_states = {} - for addon in addon_infos: - full_name = "{}_{}".format(addon.name, addon.version) - addon_dest = os.path.join(destination_folder, full_name) - - if os.path.isdir(addon_dest): - log.debug(f"Addon version folder {addon_dest} already exists.") - download_states[full_name] = UpdateState.EXISTS.value - continue - - for source in addon.sources: - download_states[full_name] = UpdateState.FAILED.value - try: - downloader = factory.get_downloader(source.type) - zip_file_path = downloader.download(attr.asdict(source), - addon_dest) - downloader.check_hash(zip_file_path, addon.hash) - downloader.unzip(zip_file_path, addon_dest) - download_states[full_name] = UpdateState.UPDATED.value - break - except Exception: - log.warning(f"Error happened during updating {addon.name}", - exc_info=True) - if os.path.isdir(addon_dest): - log.debug(f"Cleaning {addon_dest}") - shutil.rmtree(addon_dest) - - return download_states - - -def check_addons(server_endpoint, addon_folder, downloaders): - """Main entry point to compare existing addons with those on server. - - Args: - server_endpoint (str): url to v4 server endpoint - addon_folder (str): local dir path for addons - downloaders (AddonDownloader): factory of downloaders - - Raises: - (RuntimeError) if any addon failed update - """ - addons_info = get_addons_info(server_endpoint) - result = update_addon_state(addons_info, - addon_folder, - downloaders) - if UpdateState.FAILED.value in result.values(): - raise RuntimeError(f"Unable to update some addons {result}") - - -def cli(*args): - raise NotImplementedError diff --git a/common/openpype_common/distribution/addon_info.py b/common/openpype_common/distribution/addon_info.py deleted file mode 100644 index 00ece11f3b..0000000000 --- a/common/openpype_common/distribution/addon_info.py +++ /dev/null @@ -1,80 +0,0 @@ -import attr -from enum import Enum - - -class UrlType(Enum): - HTTP = "http" - GIT = "git" - FILESYSTEM = "filesystem" - - -@attr.s -class MultiPlatformPath(object): - windows = attr.ib(default=None) - linux = attr.ib(default=None) - darwin = attr.ib(default=None) - - -@attr.s -class AddonSource(object): - type = attr.ib() - - -@attr.s -class LocalAddonSource(AddonSource): - path = attr.ib(default=attr.Factory(MultiPlatformPath)) - - -@attr.s -class WebAddonSource(AddonSource): - url = attr.ib(default=None) - - -@attr.s -class VersionData(object): - version_data = attr.ib(default=None) - - -@attr.s -class AddonInfo(object): - """Object matching json payload from Server""" - name = attr.ib() - version = attr.ib() - title = attr.ib(default=None) - sources = attr.ib(default=attr.Factory(dict)) - hash = attr.ib(default=None) - description = attr.ib(default=None) - license = attr.ib(default=None) - authors = attr.ib(default=None) - - @classmethod - def from_dict(cls, data): - sources = [] - - production_version = data.get("productionVersion") - if not production_version: - return - - # server payload contains info about all versions - # active addon must have 'productionVersion' and matching version info - version_data = data.get("versions", {})[production_version] - - for source in version_data.get("clientSourceInfo", []): - if source.get("type") == UrlType.FILESYSTEM.value: - source_addon = LocalAddonSource(type=source["type"], - path=source["path"]) - if source.get("type") == UrlType.HTTP.value: - source_addon = WebAddonSource(type=source["type"], - url=source["url"]) - - 
sources.append(source_addon) - - return cls(name=data.get("name"), - version=production_version, - sources=sources, - hash=data.get("hash"), - description=data.get("description"), - title=data.get("title"), - license=data.get("license"), - authors=data.get("authors")) - diff --git a/common/openpype_common/distribution/tests/test_addon_distributtion.py b/common/openpype_common/distribution/tests/test_addon_distributtion.py deleted file mode 100644 index 765ea0596a..0000000000 --- a/common/openpype_common/distribution/tests/test_addon_distributtion.py +++ /dev/null @@ -1,167 +0,0 @@ -import pytest -import attr -import tempfile - -from common.openpype_common.distribution.addon_distribution import ( - AddonDownloader, - OSAddonDownloader, - HTTPAddonDownloader, - AddonInfo, - update_addon_state, - UpdateState -) -from common.openpype_common.distribution.addon_info import UrlType - - -@pytest.fixture -def addon_downloader(): - addon_downloader = AddonDownloader() - addon_downloader.register_format(UrlType.FILESYSTEM, OSAddonDownloader) - addon_downloader.register_format(UrlType.HTTP, HTTPAddonDownloader) - - yield addon_downloader - - -@pytest.fixture -def http_downloader(addon_downloader): - yield addon_downloader.get_downloader(UrlType.HTTP.value) - - -@pytest.fixture -def temp_folder(): - yield tempfile.mkdtemp() - - -@pytest.fixture -def sample_addon_info(): - addon_info = { - "versions": { - "1.0.0": { - "clientPyproject": { - "tool": { - "poetry": { - "dependencies": { - "nxtools": "^1.6", - "orjson": "^3.6.7", - "typer": "^0.4.1", - "email-validator": "^1.1.3", - "python": "^3.10", - "fastapi": "^0.73.0" - } - } - } - }, - "hasSettings": True, - "clientSourceInfo": [ - { - "type": "http", - "url": "https://drive.google.com/file/d/1TcuV8c2OV8CcbPeWi7lxOdqWsEqQNPYy/view?usp=sharing" # noqa - }, - { - "type": "filesystem", - "path": { - "windows": ["P:/sources/some_file.zip", - "W:/sources/some_file.zip"], # noqa - "linux": ["/mnt/srv/sources/some_file.zip"], - "darwin": ["/Volumes/srv/sources/some_file.zip"] - } - } - ], - "frontendScopes": { - "project": { - "sidebar": "hierarchy" - } - } - } - }, - "description": "", - "title": "Slack addon", - "name": "openpype_slack", - "productionVersion": "1.0.0", - "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658" # noqa - } - yield addon_info - - -def test_register(printer): - addon_downloader = AddonDownloader() - - assert len(addon_downloader._downloaders) == 0, "Contains registered" - - addon_downloader.register_format(UrlType.FILESYSTEM, OSAddonDownloader) - assert len(addon_downloader._downloaders) == 1, "Should contain one" - - -def test_get_downloader(printer, addon_downloader): - assert addon_downloader.get_downloader(UrlType.FILESYSTEM.value), "Should find" # noqa - - with pytest.raises(ValueError): - addon_downloader.get_downloader("unknown"), "Shouldn't find" - - -def test_addon_info(printer, sample_addon_info): - """Tests parsing of expected payload from v4 server into AadonInfo.""" - valid_minimum = { - "name": "openpype_slack", - "productionVersion": "1.0.0", - "versions": { - "1.0.0": { - "clientSourceInfo": [ - { - "type": "filesystem", - "path": { - "windows": [ - "P:/sources/some_file.zip", - "W:/sources/some_file.zip"], - "linux": [ - "/mnt/srv/sources/some_file.zip"], - "darwin": [ - "/Volumes/srv/sources/some_file.zip"] # noqa - } - } - ] - } - } - } - - assert AddonInfo.from_dict(valid_minimum), "Missing required fields" - - valid_minimum["versions"].pop("1.0.0") - with pytest.raises(KeyError): - 
assert not AddonInfo.from_dict(valid_minimum), "Must fail without version data" # noqa - - valid_minimum.pop("productionVersion") - assert not AddonInfo.from_dict( - valid_minimum), "none if not productionVersion" # noqa - - addon = AddonInfo.from_dict(sample_addon_info) - assert addon, "Should be created" - assert addon.name == "openpype_slack", "Incorrect name" - assert addon.version == "1.0.0", "Incorrect version" - - with pytest.raises(TypeError): - assert addon["name"], "Dict approach not implemented" - - addon_as_dict = attr.asdict(addon) - assert addon_as_dict["name"], "Dict approach should work" - - -def test_update_addon_state(printer, sample_addon_info, - temp_folder, addon_downloader): - """Tests possible cases of addon update.""" - addon_info = AddonInfo.from_dict(sample_addon_info) - orig_hash = addon_info.hash - - addon_info.hash = "brokenhash" - result = update_addon_state([addon_info], temp_folder, addon_downloader) - assert result["openpype_slack_1.0.0"] == UpdateState.FAILED.value, \ - "Update should failed because of wrong hash" - - addon_info.hash = orig_hash - result = update_addon_state([addon_info], temp_folder, addon_downloader) - assert result["openpype_slack_1.0.0"] == UpdateState.UPDATED.value, \ - "Addon should have been updated" - - result = update_addon_state([addon_info], temp_folder, addon_downloader) - assert result["openpype_slack_1.0.0"] == UpdateState.EXISTS.value, \ - "Addon should already exist" diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 0000000000..102da990aa --- /dev/null +++ b/docs/README.md @@ -0,0 +1,74 @@ +API Documentation +================= + +This documents the way how to build and modify API documentation using Sphinx and AutoAPI. Ground for documentation +should be directly in sources - in docstrings and markdowns. Sphinx and AutoAPI will crawl over them and generate +RST files that are in turn used to generate HTML documentation. For docstrings we prefer "Napoleon" or "Google" style +docstrings, but RST is also acceptable mainly in cases where you need to use Sphinx directives. + +Using only docstrings is not really viable as some documentation should be done on higher level - like overview of +some modules/functionality and so on. This should be done directly in RST files and committed to repository. + +Configuration +------------- +Configuration is done in `/docs/source/conf.py`. The most important settings are: + +- `autodoc_mock_imports`: add modules that can't be actually imported by Sphinx in running environment, like `nuke`, `maya`, etc. +- `autoapi_ignore`: add directories that shouldn't be processed by **AutoAPI**, like vendor dirs, etc. +- `html_theme_options`: you can use these options to influence how the html theme of the generated files will look. +- `myst_gfm_only`: are Myst parser option for Markdown setting what flavour of Markdown should be used. + +How to build it +--------------- + +You can run: + +```sh +cd .\docs +make.bat html +``` + +on linux/macOS: + +```sh +cd ./docs +make html +``` + +This will go over our code and generate **.rst** files in `/docs/source/autoapi` and from those it will generate +full html documentation in `/docs/build/html`. + +During the build you may see tons of red errors that are pointing to our issues: + +1) **Wrong imports** - +Invalid import are usually wrong relative imports (too deep) or circular imports. 
+2) **Invalid docstrings** - +Docstrings to be processed into documentation needs to follow some syntax - this can be checked by running +`pydocstyle` that is already included with OpenPype +3) **Invalid markdown/rst files** - +Markdown/RST files can be included inside RST files using `.. include::` directive. But they have to be properly +formatted. + +Editing RST templates +--------------------- +Everything starts with `/docs/source/index.rst` - this file should be properly edited, Right now it just +includes `readme.rst` that in turn include and parse main `README.md`. This is entrypoint to API documentation. +All templates generated by AutoAPI are in `/docs/source/autoapi`. They should be eventually committed to repository +and edited too. + +Steps for enhancing API documentation +------------------------------------- + +1) Run `/docs/make.bat html` +2) Read the red errors/warnings - fix it in the code +3) Run `/docs/make.bat html` - again until there are no red lines +4) Edit RST files and add some meaningful content there + +Resources +========= + +- [ReStructuredText on Wikipedia](https://en.wikipedia.org/wiki/ReStructuredText) +- [RST Quick Reference](https://docutils.sourceforge.io/docs/user/rst/quickref.html) +- [Sphinx AutoAPI Documentation](https://sphinx-autoapi.readthedocs.io/en/latest/) +- [Example of Google Style Python Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) +- [Sphinx Directives](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html) diff --git a/docs/make.bat b/docs/make.bat index 4d9eb83d9f..1d261df277 100644 --- a/docs/make.bat +++ b/docs/make.bat @@ -5,7 +5,7 @@ pushd %~dp0 REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( - set SPHINXBUILD=sphinx-build + set SPHINXBUILD=..\.poetry\bin\poetry run sphinx-build ) set SOURCEDIR=source set BUILDDIR=build diff --git a/docs/source/_static/AYON_tight_G.svg b/docs/source/_static/AYON_tight_G.svg new file mode 100644 index 0000000000..2c5b73deea --- /dev/null +++ b/docs/source/_static/AYON_tight_G.svg @@ -0,0 +1,38 @@ + + + + + + + + + + diff --git a/common/openpype_common/distribution/__init__.py b/docs/source/_static/README.md similarity index 100% rename from common/openpype_common/distribution/__init__.py rename to docs/source/_static/README.md diff --git a/docs/source/_templates/autoapi/index.rst b/docs/source/_templates/autoapi/index.rst new file mode 100644 index 0000000000..95d0ad8911 --- /dev/null +++ b/docs/source/_templates/autoapi/index.rst @@ -0,0 +1,15 @@ +API Reference +============= + +This page contains auto-generated API reference documentation [#f1]_. + +.. toctree:: + :titlesonly: + + {% for page in pages %} + {% if page.top_level_object and page.display %} + {{ page.include_path }} + {% endif %} + {% endfor %} + +.. [#f1] Created with `sphinx-autoapi `_ diff --git a/docs/source/_templates/autoapi/python/attribute.rst b/docs/source/_templates/autoapi/python/attribute.rst new file mode 100644 index 0000000000..ebaba555ad --- /dev/null +++ b/docs/source/_templates/autoapi/python/attribute.rst @@ -0,0 +1 @@ +{% extends "python/data.rst" %} diff --git a/docs/source/_templates/autoapi/python/class.rst b/docs/source/_templates/autoapi/python/class.rst new file mode 100644 index 0000000000..df5edffb62 --- /dev/null +++ b/docs/source/_templates/autoapi/python/class.rst @@ -0,0 +1,58 @@ +{% if obj.display %} +.. 
py:{{ obj.type }}:: {{ obj.short_name }}{% if obj.args %}({{ obj.args }}){% endif %} +{% for (args, return_annotation) in obj.overloads %} + {{ " " * (obj.type | length) }} {{ obj.short_name }}{% if args %}({{ args }}){% endif %} +{% endfor %} + + + {% if obj.bases %} + {% if "show-inheritance" in autoapi_options %} + Bases: {% for base in obj.bases %}{{ base|link_objs }}{% if not loop.last %}, {% endif %}{% endfor %} + {% endif %} + + + {% if "show-inheritance-diagram" in autoapi_options and obj.bases != ["object"] %} + .. autoapi-inheritance-diagram:: {{ obj.obj["full_name"] }} + :parts: 1 + {% if "private-members" in autoapi_options %} + :private-bases: + {% endif %} + + {% endif %} + {% endif %} + {% if obj.docstring %} + {{ obj.docstring|indent(3) }} + {% endif %} + {% if "inherited-members" in autoapi_options %} + {% set visible_classes = obj.classes|selectattr("display")|list %} + {% else %} + {% set visible_classes = obj.classes|rejectattr("inherited")|selectattr("display")|list %} + {% endif %} + {% for klass in visible_classes %} + {{ klass.render()|indent(3) }} + {% endfor %} + {% if "inherited-members" in autoapi_options %} + {% set visible_properties = obj.properties|selectattr("display")|list %} + {% else %} + {% set visible_properties = obj.properties|rejectattr("inherited")|selectattr("display")|list %} + {% endif %} + {% for property in visible_properties %} + {{ property.render()|indent(3) }} + {% endfor %} + {% if "inherited-members" in autoapi_options %} + {% set visible_attributes = obj.attributes|selectattr("display")|list %} + {% else %} + {% set visible_attributes = obj.attributes|rejectattr("inherited")|selectattr("display")|list %} + {% endif %} + {% for attribute in visible_attributes %} + {{ attribute.render()|indent(3) }} + {% endfor %} + {% if "inherited-members" in autoapi_options %} + {% set visible_methods = obj.methods|selectattr("display")|list %} + {% else %} + {% set visible_methods = obj.methods|rejectattr("inherited")|selectattr("display")|list %} + {% endif %} + {% for method in visible_methods %} + {{ method.render()|indent(3) }} + {% endfor %} +{% endif %} diff --git a/docs/source/_templates/autoapi/python/data.rst b/docs/source/_templates/autoapi/python/data.rst new file mode 100644 index 0000000000..3d12b2d0c7 --- /dev/null +++ b/docs/source/_templates/autoapi/python/data.rst @@ -0,0 +1,37 @@ +{% if obj.display %} +.. py:{{ obj.type }}:: {{ obj.name }} + {%- if obj.annotation is not none %} + + :type: {%- if obj.annotation %} {{ obj.annotation }}{%- endif %} + + {%- endif %} + + {%- if obj.value is not none %} + + :value: {% if obj.value is string and obj.value.splitlines()|count > 1 -%} + Multiline-String + + .. raw:: html + +
Show Value + + .. code-block:: python + + """{{ obj.value|indent(width=8,blank=true) }}""" + + .. raw:: html + +
+ + {%- else -%} + {%- if obj.value is string -%} + {{ "%r" % obj.value|string|truncate(100) }} + {%- else -%} + {{ obj.value|string|truncate(100) }} + {%- endif -%} + {%- endif %} + {%- endif %} + + + {{ obj.docstring|indent(3) }} +{% endif %} diff --git a/docs/source/_templates/autoapi/python/exception.rst b/docs/source/_templates/autoapi/python/exception.rst new file mode 100644 index 0000000000..92f3d38fd5 --- /dev/null +++ b/docs/source/_templates/autoapi/python/exception.rst @@ -0,0 +1 @@ +{% extends "python/class.rst" %} diff --git a/docs/source/_templates/autoapi/python/function.rst b/docs/source/_templates/autoapi/python/function.rst new file mode 100644 index 0000000000..b00d5c2445 --- /dev/null +++ b/docs/source/_templates/autoapi/python/function.rst @@ -0,0 +1,15 @@ +{% if obj.display %} +.. py:function:: {{ obj.short_name }}({{ obj.args }}){% if obj.return_annotation is not none %} -> {{ obj.return_annotation }}{% endif %} + +{% for (args, return_annotation) in obj.overloads %} + {{ obj.short_name }}({{ args }}){% if return_annotation is not none %} -> {{ return_annotation }}{% endif %} + +{% endfor %} + {% for property in obj.properties %} + :{{ property }}: + {% endfor %} + + {% if obj.docstring %} + {{ obj.docstring|indent(3) }} + {% endif %} +{% endif %} diff --git a/docs/source/_templates/autoapi/python/method.rst b/docs/source/_templates/autoapi/python/method.rst new file mode 100644 index 0000000000..723cb7bbe5 --- /dev/null +++ b/docs/source/_templates/autoapi/python/method.rst @@ -0,0 +1,19 @@ +{%- if obj.display %} +.. py:method:: {{ obj.short_name }}({{ obj.args }}){% if obj.return_annotation is not none %} -> {{ obj.return_annotation }}{% endif %} + +{% for (args, return_annotation) in obj.overloads %} + {{ obj.short_name }}({{ args }}){% if return_annotation is not none %} -> {{ return_annotation }}{% endif %} + +{% endfor %} + {% if obj.properties %} + {% for property in obj.properties %} + :{{ property }}: + {% endfor %} + + {% else %} + + {% endif %} + {% if obj.docstring %} + {{ obj.docstring|indent(3) }} + {% endif %} +{% endif %} diff --git a/docs/source/_templates/autoapi/python/module.rst b/docs/source/_templates/autoapi/python/module.rst new file mode 100644 index 0000000000..d2714f6c9d --- /dev/null +++ b/docs/source/_templates/autoapi/python/module.rst @@ -0,0 +1,114 @@ +{% if not obj.display %} +:orphan: + +{% endif %} +:py:mod:`{{ obj.name }}` +=========={{ "=" * obj.name|length }} + +.. py:module:: {{ obj.name }} + +{% if obj.docstring %} +.. autoapi-nested-parse:: + + {{ obj.docstring|indent(3) }} + +{% endif %} + +{% block subpackages %} +{% set visible_subpackages = obj.subpackages|selectattr("display")|list %} +{% if visible_subpackages %} +Subpackages +----------- +.. toctree:: + :titlesonly: + :maxdepth: 3 + +{% for subpackage in visible_subpackages %} + {{ subpackage.short_name }}/index.rst +{% endfor %} + + +{% endif %} +{% endblock %} +{% block submodules %} +{% set visible_submodules = obj.submodules|selectattr("display")|list %} +{% if visible_submodules %} +Submodules +---------- +.. 
toctree:: + :titlesonly: + :maxdepth: 1 + +{% for submodule in visible_submodules %} + {{ submodule.short_name }}/index.rst +{% endfor %} + + +{% endif %} +{% endblock %} +{% block content %} +{% if obj.all is not none %} +{% set visible_children = obj.children|selectattr("short_name", "in", obj.all)|list %} +{% elif obj.type is equalto("package") %} +{% set visible_children = obj.children|selectattr("display")|list %} +{% else %} +{% set visible_children = obj.children|selectattr("display")|rejectattr("imported")|list %} +{% endif %} +{% if visible_children %} +{{ obj.type|title }} Contents +{{ "-" * obj.type|length }}--------- + +{% set visible_classes = visible_children|selectattr("type", "equalto", "class")|list %} +{% set visible_functions = visible_children|selectattr("type", "equalto", "function")|list %} +{% set visible_attributes = visible_children|selectattr("type", "equalto", "data")|list %} +{% if "show-module-summary" in autoapi_options and (visible_classes or visible_functions) %} +{% block classes scoped %} +{% if visible_classes %} +Classes +~~~~~~~ + +.. autoapisummary:: + +{% for klass in visible_classes %} + {{ klass.id }} +{% endfor %} + + +{% endif %} +{% endblock %} + +{% block functions scoped %} +{% if visible_functions %} +Functions +~~~~~~~~~ + +.. autoapisummary:: + +{% for function in visible_functions %} + {{ function.id }} +{% endfor %} + + +{% endif %} +{% endblock %} + +{% block attributes scoped %} +{% if visible_attributes %} +Attributes +~~~~~~~~~~ + +.. autoapisummary:: + +{% for attribute in visible_attributes %} + {{ attribute.id }} +{% endfor %} + + +{% endif %} +{% endblock %} +{% endif %} +{% for obj_item in visible_children %} +{{ obj_item.render()|indent(0) }} +{% endfor %} +{% endif %} +{% endblock %} diff --git a/docs/source/_templates/autoapi/python/package.rst b/docs/source/_templates/autoapi/python/package.rst new file mode 100644 index 0000000000..fb9a64965e --- /dev/null +++ b/docs/source/_templates/autoapi/python/package.rst @@ -0,0 +1 @@ +{% extends "python/module.rst" %} diff --git a/docs/source/_templates/autoapi/python/property.rst b/docs/source/_templates/autoapi/python/property.rst new file mode 100644 index 0000000000..70af24236f --- /dev/null +++ b/docs/source/_templates/autoapi/python/property.rst @@ -0,0 +1,15 @@ +{%- if obj.display %} +.. 
py:property:: {{ obj.short_name }} + {% if obj.annotation %} + :type: {{ obj.annotation }} + {% endif %} + {% if obj.properties %} + {% for property in obj.properties %} + :{{ property }}: + {% endfor %} + {% endif %} + + {% if obj.docstring %} + {{ obj.docstring|indent(3) }} + {% endif %} +{% endif %} diff --git a/docs/source/conf.py b/docs/source/conf.py index 5b34ff8dc0..916a397e8e 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -17,18 +17,29 @@ import os import sys -pype_root = os.path.abspath('../..') -sys.path.insert(0, pype_root) +import revitron_sphinx_theme + +openpype_root = os.path.abspath('../..') +sys.path.insert(0, openpype_root) +# app = QApplication([]) + +""" repos = os.listdir(os.path.abspath("../../repos")) -repos = [os.path.join(pype_root, "repos", repo) for repo in repos] +repos = [os.path.join(openpype_root, "repos", repo) for repo in repos] for repo in repos: sys.path.append(repo) +""" + +todo_include_todos = True +autodoc_mock_imports = ["maya", "pymel", "nuke", "nukestudio", "nukescripts", + "hiero", "bpy", "fusion", "houdini", "hou", "unreal", + "__builtin__", "resolve", "pysync", "DaVinciResolveScript"] # -- Project information ----------------------------------------------------- -project = 'pype' -copyright = '2019, Orbi Tools' -author = 'Orbi Tools' +project = 'OpenPype' +copyright = '2023 Ynput' +author = 'Ynput' # The short X.Y version version = '' @@ -52,11 +63,41 @@ extensions = [ 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.mathjax', - 'sphinx.ext.viewcode', 'sphinx.ext.autosummary', - 'recommonmark' + 'revitron_sphinx_theme', + 'autoapi.extension', + 'myst_parser' ] +############################## +# Autoapi settings +############################## + +autoapi_dirs = ['../../openpype', '../../igniter'] + +# bypass modules with a lot of python2 content for now +autoapi_ignore = [ + "*vendor*", + "*schemas*", + "*startup/*", + "*/website*", + "*openpype/hooks*", + "*openpype/style*", + "openpype/tests*", + # to many levels of relative import: + "*/modules/sync_server/*" +] +autoapi_keep_files = True +autoapi_options = [ + 'members', + 'undoc-members', + 'show-inheritance', + 'show-module-summary' +] +autoapi_add_toctree_entry = True +autoapi_template_dir = '_templates/autoapi' + + # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] @@ -64,7 +105,7 @@ templates_path = ['_templates'] # You can specify multiple suffix as a list of string: # # source_suffix = ['.rst', '.md'] -source_suffix = '.rst' +source_suffix = ['.rst', '.md'] # The master toctree document. master_doc = 'index' @@ -74,12 +115,15 @@ master_doc = 'index' # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. -language = None +language = "English" # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This pattern also affects html_static_path and html_extra_path. -exclude_patterns = [] +exclude_patterns = [ + "openpype.hosts.resolve.*", + "openpype.tools.*" + ] # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'friendly' @@ -97,15 +141,22 @@ autosummary_generate = True # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # -html_theme = 'sphinx_rtd_theme' +html_theme = 'revitron_sphinx_theme' # Theme options are theme-specific and customize the look and feel of a theme # further. 
For a list of options available for each theme, see the # documentation. # html_theme_options = { - 'collapse_navigation': False + 'collapse_navigation': True, + 'sticky_navigation': True, + 'navigation_depth': 4, + 'includehidden': True, + 'titles_only': False, + 'github_url': '', } +html_logo = '_static/AYON_tight_G.svg' + # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, @@ -153,8 +204,8 @@ latex_elements = { # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ - (master_doc, 'pype.tex', 'pype Documentation', - 'OrbiTools', 'manual'), + (master_doc, 'openpype.tex', 'OpenPype Documentation', + 'Ynput', 'manual'), ] @@ -163,7 +214,7 @@ latex_documents = [ # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ - (master_doc, 'pype', 'pype Documentation', + (master_doc, 'openpype', 'OpenPype Documentation', [author], 1) ] @@ -174,8 +225,8 @@ man_pages = [ # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ - (master_doc, 'pype', 'pype Documentation', - author, 'pype', 'One line description of project.', + (master_doc, 'OpenPype', 'OpenPype Documentation', + author, 'OpenPype', 'Pipeline for studios', 'Miscellaneous'), ] @@ -207,7 +258,4 @@ intersphinx_mapping = { 'https://docs.python.org/3/': None } -# -- Options for todo extension ---------------------------------------------- - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = True +myst_gfm_only = True diff --git a/docs/source/igniter.bootstrap_repos.rst b/docs/source/igniter.bootstrap_repos.rst deleted file mode 100644 index 7c6e0a0757..0000000000 --- a/docs/source/igniter.bootstrap_repos.rst +++ /dev/null @@ -1,7 +0,0 @@ -igniter.bootstrap\_repos module -=============================== - -.. automodule:: igniter.bootstrap_repos - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/igniter.install_dialog.rst b/docs/source/igniter.install_dialog.rst deleted file mode 100644 index bf30ec270e..0000000000 --- a/docs/source/igniter.install_dialog.rst +++ /dev/null @@ -1,7 +0,0 @@ -igniter.install\_dialog module -============================== - -.. automodule:: igniter.install_dialog - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/igniter.install_thread.rst b/docs/source/igniter.install_thread.rst deleted file mode 100644 index 6c19516219..0000000000 --- a/docs/source/igniter.install_thread.rst +++ /dev/null @@ -1,7 +0,0 @@ -igniter.install\_thread module -============================== - -.. automodule:: igniter.install_thread - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/igniter.rst b/docs/source/igniter.rst deleted file mode 100644 index b4aebe88b0..0000000000 --- a/docs/source/igniter.rst +++ /dev/null @@ -1,42 +0,0 @@ -igniter package -=============== - -.. automodule:: igniter - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -igniter.bootstrap\_repos module -------------------------------- - -.. automodule:: igniter.bootstrap_repos - :members: - :undoc-members: - :show-inheritance: - -igniter.install\_dialog module ------------------------------- - -.. 
automodule:: igniter.install_dialog - :members: - :undoc-members: - :show-inheritance: - -igniter.install\_thread module ------------------------------- - -.. automodule:: igniter.install_thread - :members: - :undoc-members: - :show-inheritance: - -igniter.tools module --------------------- - -.. automodule:: igniter.tools - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/igniter.tools.rst b/docs/source/igniter.tools.rst deleted file mode 100644 index 4fdbdf9d29..0000000000 --- a/docs/source/igniter.tools.rst +++ /dev/null @@ -1,7 +0,0 @@ -igniter.tools module -==================== - -.. automodule:: igniter.tools - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/index.rst b/docs/source/index.rst index b54d153894..f703468fca 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -1,14 +1,15 @@ -.. pype documentation master file, created by +.. openpype documentation master file, created by sphinx-quickstart on Mon May 13 17:18:23 2019. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. -Welcome to pype's documentation! -================================ +Welcome to OpenPype's API documentation! +======================================== .. toctree:: - readme - modules + + Readme + Indices and tables ================== diff --git a/docs/source/modules.rst b/docs/source/modules.rst deleted file mode 100644 index 1956d9ed04..0000000000 --- a/docs/source/modules.rst +++ /dev/null @@ -1,8 +0,0 @@ -igniter -======= - -.. toctree:: - :maxdepth: 6 - - igniter - pype \ No newline at end of file diff --git a/docs/source/pype.action.rst b/docs/source/pype.action.rst deleted file mode 100644 index 62a32e08b5..0000000000 --- a/docs/source/pype.action.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.action module -================== - -.. automodule:: pype.action - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.api.rst b/docs/source/pype.api.rst deleted file mode 100644 index af3602a895..0000000000 --- a/docs/source/pype.api.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.api module -=============== - -.. automodule:: pype.api - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.cli.rst b/docs/source/pype.cli.rst deleted file mode 100644 index 7e4a336fa9..0000000000 --- a/docs/source/pype.cli.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.cli module -=============== - -.. automodule:: pype.cli - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.aftereffects.rst b/docs/source/pype.hosts.aftereffects.rst deleted file mode 100644 index 3c2b2dda41..0000000000 --- a/docs/source/pype.hosts.aftereffects.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.aftereffects package -=============================== - -.. automodule:: pype.hosts.aftereffects - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.blender.action.rst b/docs/source/pype.hosts.blender.action.rst deleted file mode 100644 index a6444b1efc..0000000000 --- a/docs/source/pype.hosts.blender.action.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.blender.action module -================================ - -.. 
automodule:: pype.hosts.blender.action - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.blender.plugin.rst b/docs/source/pype.hosts.blender.plugin.rst deleted file mode 100644 index cf6a8feec8..0000000000 --- a/docs/source/pype.hosts.blender.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.blender.plugin module -================================ - -.. automodule:: pype.hosts.blender.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.blender.rst b/docs/source/pype.hosts.blender.rst deleted file mode 100644 index 19cb85e5f3..0000000000 --- a/docs/source/pype.hosts.blender.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.hosts.blender package -========================== - -.. automodule:: pype.hosts.blender - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.blender.action module --------------------------------- - -.. automodule:: pype.hosts.blender.action - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.blender.plugin module --------------------------------- - -.. automodule:: pype.hosts.blender.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.celaction.cli.rst b/docs/source/pype.hosts.celaction.cli.rst deleted file mode 100644 index c8843b90bd..0000000000 --- a/docs/source/pype.hosts.celaction.cli.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.celaction.cli module -=============================== - -.. automodule:: pype.hosts.celaction.cli - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.celaction.rst b/docs/source/pype.hosts.celaction.rst deleted file mode 100644 index 1aa236397e..0000000000 --- a/docs/source/pype.hosts.celaction.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.hosts.celaction package -============================ - -.. automodule:: pype.hosts.celaction - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.celaction.cli module -------------------------------- - -.. automodule:: pype.hosts.celaction.cli - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.lib.rst b/docs/source/pype.hosts.fusion.lib.rst deleted file mode 100644 index 32b8f501f5..0000000000 --- a/docs/source/pype.hosts.fusion.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.lib module -============================ - -.. automodule:: pype.hosts.fusion.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.menu.rst b/docs/source/pype.hosts.fusion.menu.rst deleted file mode 100644 index ec5bf76612..0000000000 --- a/docs/source/pype.hosts.fusion.menu.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.menu module -============================= - -.. automodule:: pype.hosts.fusion.menu - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.pipeline.rst b/docs/source/pype.hosts.fusion.pipeline.rst deleted file mode 100644 index ff2a6440a8..0000000000 --- a/docs/source/pype.hosts.fusion.pipeline.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.pipeline module -================================= - -.. 
automodule:: pype.hosts.fusion.pipeline - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.rst b/docs/source/pype.hosts.fusion.rst deleted file mode 100644 index 7c2fee827c..0000000000 --- a/docs/source/pype.hosts.fusion.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.hosts.fusion package -========================= - -.. automodule:: pype.hosts.fusion - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.hosts.fusion.scripts - -Submodules ----------- - -pype.hosts.fusion.lib module ----------------------------- - -.. automodule:: pype.hosts.fusion.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.scripts.duplicate_with_inputs.rst b/docs/source/pype.hosts.fusion.scripts.duplicate_with_inputs.rst deleted file mode 100644 index 2503c20f3b..0000000000 --- a/docs/source/pype.hosts.fusion.scripts.duplicate_with_inputs.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.scripts.duplicate\_with\_inputs module -======================================================== - -.. automodule:: pype.hosts.fusion.scripts.duplicate_with_inputs - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.scripts.fusion_switch_shot.rst b/docs/source/pype.hosts.fusion.scripts.fusion_switch_shot.rst deleted file mode 100644 index 770300116f..0000000000 --- a/docs/source/pype.hosts.fusion.scripts.fusion_switch_shot.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.scripts.fusion\_switch\_shot module -===================================================== - -.. automodule:: pype.hosts.fusion.scripts.fusion_switch_shot - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.scripts.rst b/docs/source/pype.hosts.fusion.scripts.rst deleted file mode 100644 index 5de5f66652..0000000000 --- a/docs/source/pype.hosts.fusion.scripts.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.hosts.fusion.scripts package -================================= - -.. automodule:: pype.hosts.fusion.scripts - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.fusion.scripts.fusion\_switch\_shot module ------------------------------------------------------ - -.. automodule:: pype.hosts.fusion.scripts.fusion_switch_shot - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.fusion.scripts.publish\_filesequence module ------------------------------------------------------- - -.. automodule:: pype.hosts.fusion.scripts.publish_filesequence - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.scripts.set_rendermode.rst b/docs/source/pype.hosts.fusion.scripts.set_rendermode.rst deleted file mode 100644 index 27bff63466..0000000000 --- a/docs/source/pype.hosts.fusion.scripts.set_rendermode.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.scripts.set\_rendermode module -================================================ - -.. automodule:: pype.hosts.fusion.scripts.set_rendermode - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.fusion.utils.rst b/docs/source/pype.hosts.fusion.utils.rst deleted file mode 100644 index b6de3d0510..0000000000 --- a/docs/source/pype.hosts.fusion.utils.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.fusion.utils module -============================== - -.. 
automodule:: pype.hosts.fusion.utils - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.harmony.rst b/docs/source/pype.hosts.harmony.rst deleted file mode 100644 index 60e1fcdce6..0000000000 --- a/docs/source/pype.hosts.harmony.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.harmony package -========================== - -.. automodule:: pype.hosts.harmony - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.events.rst b/docs/source/pype.hosts.hiero.events.rst deleted file mode 100644 index 874abbffba..0000000000 --- a/docs/source/pype.hosts.hiero.events.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.events module -============================== - -.. automodule:: pype.hosts.hiero.events - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.lib.rst b/docs/source/pype.hosts.hiero.lib.rst deleted file mode 100644 index 8c0d33b03b..0000000000 --- a/docs/source/pype.hosts.hiero.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.lib module -=========================== - -.. automodule:: pype.hosts.hiero.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.menu.rst b/docs/source/pype.hosts.hiero.menu.rst deleted file mode 100644 index baa1317e61..0000000000 --- a/docs/source/pype.hosts.hiero.menu.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.menu module -============================ - -.. automodule:: pype.hosts.hiero.menu - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.rst b/docs/source/pype.hosts.hiero.rst deleted file mode 100644 index 9a7891b45e..0000000000 --- a/docs/source/pype.hosts.hiero.rst +++ /dev/null @@ -1,19 +0,0 @@ -pype.hosts.hiero package -======================== - -.. automodule:: pype.hosts.hiero - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -.. toctree:: - :maxdepth: 10 - - pype.hosts.hiero.events - pype.hosts.hiero.lib - pype.hosts.hiero.menu - pype.hosts.hiero.tags - pype.hosts.hiero.workio diff --git a/docs/source/pype.hosts.hiero.tags.rst b/docs/source/pype.hosts.hiero.tags.rst deleted file mode 100644 index 0df33279d5..0000000000 --- a/docs/source/pype.hosts.hiero.tags.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.tags module -============================ - -.. automodule:: pype.hosts.hiero.tags - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.hiero.workio.rst b/docs/source/pype.hosts.hiero.workio.rst deleted file mode 100644 index 11aae43212..0000000000 --- a/docs/source/pype.hosts.hiero.workio.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.hiero.workio module -============================== - -.. automodule:: pype.hosts.hiero.workio - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.houdini.lib.rst b/docs/source/pype.hosts.houdini.lib.rst deleted file mode 100644 index ba6e60d5f3..0000000000 --- a/docs/source/pype.hosts.houdini.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.houdini.lib module -============================= - -.. automodule:: pype.hosts.houdini.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.houdini.rst b/docs/source/pype.hosts.houdini.rst deleted file mode 100644 index 5db18ab3d4..0000000000 --- a/docs/source/pype.hosts.houdini.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.hosts.houdini package -========================== - -.. 
automodule:: pype.hosts.houdini - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.houdini.lib module ------------------------------ - -.. automodule:: pype.hosts.houdini.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.maya.action.rst b/docs/source/pype.hosts.maya.action.rst deleted file mode 100644 index e1ad7e5d43..0000000000 --- a/docs/source/pype.hosts.maya.action.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.maya.action module -============================= - -.. automodule:: pype.hosts.maya.action - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.maya.customize.rst b/docs/source/pype.hosts.maya.customize.rst deleted file mode 100644 index 335e75b0d4..0000000000 --- a/docs/source/pype.hosts.maya.customize.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.maya.customize module -================================ - -.. automodule:: pype.hosts.maya.customize - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.maya.expected_files.rst b/docs/source/pype.hosts.maya.expected_files.rst deleted file mode 100644 index 0ecf22e502..0000000000 --- a/docs/source/pype.hosts.maya.expected_files.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.maya.expected\_files module -====================================== - -.. automodule:: pype.hosts.maya.expected_files - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.maya.lib.rst b/docs/source/pype.hosts.maya.lib.rst deleted file mode 100644 index 7d7dbe4502..0000000000 --- a/docs/source/pype.hosts.maya.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.maya.lib module -========================== - -.. automodule:: pype.hosts.maya.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.maya.menu.rst b/docs/source/pype.hosts.maya.menu.rst deleted file mode 100644 index 614e113769..0000000000 --- a/docs/source/pype.hosts.maya.menu.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.maya.menu module -=========================== - -.. automodule:: pype.hosts.maya.menu - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.maya.plugin.rst b/docs/source/pype.hosts.maya.plugin.rst deleted file mode 100644 index 5796b40c70..0000000000 --- a/docs/source/pype.hosts.maya.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.maya.plugin module -============================= - -.. automodule:: pype.hosts.maya.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.maya.rst b/docs/source/pype.hosts.maya.rst deleted file mode 100644 index 0beab888fc..0000000000 --- a/docs/source/pype.hosts.maya.rst +++ /dev/null @@ -1,58 +0,0 @@ -pype.hosts.maya package -======================= - -.. automodule:: pype.hosts.maya - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.maya.action module ------------------------------ - -.. automodule:: pype.hosts.maya.action - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.maya.customize module --------------------------------- - -.. automodule:: pype.hosts.maya.customize - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.maya.expected\_files module --------------------------------------- - -.. automodule:: pype.hosts.maya.expected_files - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.maya.lib module --------------------------- - -.. 
automodule:: pype.hosts.maya.lib - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.maya.menu module ---------------------------- - -.. automodule:: pype.hosts.maya.menu - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.maya.plugin module ------------------------------ - -.. automodule:: pype.hosts.maya.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.nuke.actions.rst b/docs/source/pype.hosts.nuke.actions.rst deleted file mode 100644 index d5e8849a38..0000000000 --- a/docs/source/pype.hosts.nuke.actions.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.nuke.actions module -============================== - -.. automodule:: pype.hosts.nuke.actions - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.nuke.lib.rst b/docs/source/pype.hosts.nuke.lib.rst deleted file mode 100644 index c177a27f2d..0000000000 --- a/docs/source/pype.hosts.nuke.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.nuke.lib module -========================== - -.. automodule:: pype.hosts.nuke.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.nuke.menu.rst b/docs/source/pype.hosts.nuke.menu.rst deleted file mode 100644 index 190e488b95..0000000000 --- a/docs/source/pype.hosts.nuke.menu.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.nuke.menu module -=========================== - -.. automodule:: pype.hosts.nuke.menu - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.nuke.plugin.rst b/docs/source/pype.hosts.nuke.plugin.rst deleted file mode 100644 index ddd5f1db89..0000000000 --- a/docs/source/pype.hosts.nuke.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.nuke.plugin module -============================= - -.. automodule:: pype.hosts.nuke.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.nuke.presets.rst b/docs/source/pype.hosts.nuke.presets.rst deleted file mode 100644 index a69aa8a367..0000000000 --- a/docs/source/pype.hosts.nuke.presets.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.nuke.presets module -============================== - -.. automodule:: pype.hosts.nuke.presets - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.nuke.rst b/docs/source/pype.hosts.nuke.rst deleted file mode 100644 index 559de65927..0000000000 --- a/docs/source/pype.hosts.nuke.rst +++ /dev/null @@ -1,58 +0,0 @@ -pype.hosts.nuke package -======================= - -.. automodule:: pype.hosts.nuke - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.nuke.actions module ------------------------------- - -.. automodule:: pype.hosts.nuke.actions - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nuke.lib module --------------------------- - -.. automodule:: pype.hosts.nuke.lib - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nuke.menu module ---------------------------- - -.. automodule:: pype.hosts.nuke.menu - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nuke.plugin module ------------------------------ - -.. automodule:: pype.hosts.nuke.plugin - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nuke.presets module ------------------------------- - -.. automodule:: pype.hosts.nuke.presets - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nuke.utils module ----------------------------- - -.. 
automodule:: pype.hosts.nuke.utils - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.nuke.utils.rst b/docs/source/pype.hosts.nuke.utils.rst deleted file mode 100644 index 66974dc707..0000000000 --- a/docs/source/pype.hosts.nuke.utils.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.nuke.utils module -============================ - -.. automodule:: pype.hosts.nuke.utils - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.nukestudio.rst b/docs/source/pype.hosts.nukestudio.rst deleted file mode 100644 index c718d699fa..0000000000 --- a/docs/source/pype.hosts.nukestudio.rst +++ /dev/null @@ -1,50 +0,0 @@ -pype.hosts.nukestudio package -============================= - -.. automodule:: pype.hosts.nukestudio - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.nukestudio.events module ------------------------------------ - -.. automodule:: pype.hosts.nukestudio.events - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nukestudio.lib module --------------------------------- - -.. automodule:: pype.hosts.nukestudio.lib - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nukestudio.menu module ---------------------------------- - -.. automodule:: pype.hosts.nukestudio.menu - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nukestudio.tags module ---------------------------------- - -.. automodule:: pype.hosts.nukestudio.tags - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.nukestudio.workio module ------------------------------------ - -.. automodule:: pype.hosts.nukestudio.workio - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.photoshop.rst b/docs/source/pype.hosts.photoshop.rst deleted file mode 100644 index f77ea79874..0000000000 --- a/docs/source/pype.hosts.photoshop.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.photoshop package -============================ - -.. automodule:: pype.hosts.photoshop - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.premiere.lib.rst b/docs/source/pype.hosts.premiere.lib.rst deleted file mode 100644 index e2c2723841..0000000000 --- a/docs/source/pype.hosts.premiere.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.premiere.lib module -============================== - -.. automodule:: pype.hosts.premiere.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.premiere.rst b/docs/source/pype.hosts.premiere.rst deleted file mode 100644 index 7c38d52c22..0000000000 --- a/docs/source/pype.hosts.premiere.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.hosts.premiere package -=========================== - -.. automodule:: pype.hosts.premiere - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.premiere.lib module ------------------------------- - -.. automodule:: pype.hosts.premiere.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.action.rst b/docs/source/pype.hosts.resolve.action.rst deleted file mode 100644 index 781694781f..0000000000 --- a/docs/source/pype.hosts.resolve.action.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.action module -================================ - -.. 
automodule:: pype.hosts.resolve.action - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.lib.rst b/docs/source/pype.hosts.resolve.lib.rst deleted file mode 100644 index 5860f783cc..0000000000 --- a/docs/source/pype.hosts.resolve.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.lib module -============================= - -.. automodule:: pype.hosts.resolve.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.menu.rst b/docs/source/pype.hosts.resolve.menu.rst deleted file mode 100644 index df87dcde98..0000000000 --- a/docs/source/pype.hosts.resolve.menu.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.menu module -============================== - -.. automodule:: pype.hosts.resolve.menu - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.otio.davinci_export.rst b/docs/source/pype.hosts.resolve.otio.davinci_export.rst deleted file mode 100644 index 498f96a7ed..0000000000 --- a/docs/source/pype.hosts.resolve.otio.davinci_export.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.otio.davinci\_export module -============================================== - -.. automodule:: pype.hosts.resolve.otio.davinci_export - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.otio.davinci_import.rst b/docs/source/pype.hosts.resolve.otio.davinci_import.rst deleted file mode 100644 index 30f43cc9fe..0000000000 --- a/docs/source/pype.hosts.resolve.otio.davinci_import.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.otio.davinci\_import module -============================================== - -.. automodule:: pype.hosts.resolve.otio.davinci_import - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.otio.rst b/docs/source/pype.hosts.resolve.otio.rst deleted file mode 100644 index 523d8937ca..0000000000 --- a/docs/source/pype.hosts.resolve.otio.rst +++ /dev/null @@ -1,17 +0,0 @@ -pype.hosts.resolve.otio package -=============================== - -.. automodule:: pype.hosts.resolve.otio - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -.. toctree:: - :maxdepth: 10 - - pype.hosts.resolve.otio.davinci_export - pype.hosts.resolve.otio.davinci_import - pype.hosts.resolve.otio.utils diff --git a/docs/source/pype.hosts.resolve.otio.utils.rst b/docs/source/pype.hosts.resolve.otio.utils.rst deleted file mode 100644 index 765f492732..0000000000 --- a/docs/source/pype.hosts.resolve.otio.utils.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.otio.utils module -==================================== - -.. automodule:: pype.hosts.resolve.otio.utils - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.pipeline.rst b/docs/source/pype.hosts.resolve.pipeline.rst deleted file mode 100644 index 3efc24137b..0000000000 --- a/docs/source/pype.hosts.resolve.pipeline.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.pipeline module -================================== - -.. automodule:: pype.hosts.resolve.pipeline - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.plugin.rst b/docs/source/pype.hosts.resolve.plugin.rst deleted file mode 100644 index 26f6c56aef..0000000000 --- a/docs/source/pype.hosts.resolve.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.plugin module -================================ - -.. 
automodule:: pype.hosts.resolve.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.preload_console.rst b/docs/source/pype.hosts.resolve.preload_console.rst deleted file mode 100644 index 0d38ae14ea..0000000000 --- a/docs/source/pype.hosts.resolve.preload_console.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.preload\_console module -========================================== - -.. automodule:: pype.hosts.resolve.preload_console - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.rst b/docs/source/pype.hosts.resolve.rst deleted file mode 100644 index 368129e43e..0000000000 --- a/docs/source/pype.hosts.resolve.rst +++ /dev/null @@ -1,74 +0,0 @@ -pype.hosts.resolve package -========================== - -.. automodule:: pype.hosts.resolve - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.resolve.action module --------------------------------- - -.. automodule:: pype.hosts.resolve.action - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.resolve.lib module ------------------------------ - -.. automodule:: pype.hosts.resolve.lib - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.resolve.menu module ------------------------------- - -.. automodule:: pype.hosts.resolve.menu - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.resolve.pipeline module ----------------------------------- - -.. automodule:: pype.hosts.resolve.pipeline - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.resolve.plugin module --------------------------------- - -.. automodule:: pype.hosts.resolve.plugin - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.resolve.preload\_console module ------------------------------------------- - -.. automodule:: pype.hosts.resolve.preload_console - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.resolve.utils module -------------------------------- - -.. automodule:: pype.hosts.resolve.utils - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.resolve.workio module --------------------------------- - -.. automodule:: pype.hosts.resolve.workio - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.todo-rendering.rst b/docs/source/pype.hosts.resolve.todo-rendering.rst deleted file mode 100644 index 8ea80183ce..0000000000 --- a/docs/source/pype.hosts.resolve.todo-rendering.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.todo\-rendering module -========================================= - -.. automodule:: pype.hosts.resolve.todo-rendering - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.utils.rst b/docs/source/pype.hosts.resolve.utils.rst deleted file mode 100644 index e390a5d026..0000000000 --- a/docs/source/pype.hosts.resolve.utils.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.utils module -=============================== - -.. automodule:: pype.hosts.resolve.utils - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.resolve.workio.rst b/docs/source/pype.hosts.resolve.workio.rst deleted file mode 100644 index 5dceb99d64..0000000000 --- a/docs/source/pype.hosts.resolve.workio.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.resolve.workio module -================================ - -.. 
automodule:: pype.hosts.resolve.workio - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.rst b/docs/source/pype.hosts.rst deleted file mode 100644 index e2d9121501..0000000000 --- a/docs/source/pype.hosts.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.hosts package -================== - -.. automodule:: pype.hosts - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.hosts.blender - pype.hosts.celaction - pype.hosts.fusion - pype.hosts.harmony - pype.hosts.houdini - pype.hosts.maya - pype.hosts.nuke - pype.hosts.nukestudio - pype.hosts.photoshop - pype.hosts.premiere - pype.hosts.resolve - pype.hosts.unreal diff --git a/docs/source/pype.hosts.tvpaint.api.rst b/docs/source/pype.hosts.tvpaint.api.rst deleted file mode 100644 index 43273e8ec5..0000000000 --- a/docs/source/pype.hosts.tvpaint.api.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.tvpaint.api package -============================== - -.. automodule:: pype.hosts.tvpaint.api - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.tvpaint.rst b/docs/source/pype.hosts.tvpaint.rst deleted file mode 100644 index 561be3a9dc..0000000000 --- a/docs/source/pype.hosts.tvpaint.rst +++ /dev/null @@ -1,15 +0,0 @@ -pype.hosts.tvpaint package -========================== - -.. automodule:: pype.hosts.tvpaint - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 10 - - pype.hosts.tvpaint.api diff --git a/docs/source/pype.hosts.unreal.lib.rst b/docs/source/pype.hosts.unreal.lib.rst deleted file mode 100644 index b891e71c47..0000000000 --- a/docs/source/pype.hosts.unreal.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.unreal.lib module -============================ - -.. automodule:: pype.hosts.unreal.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.unreal.plugin.rst b/docs/source/pype.hosts.unreal.plugin.rst deleted file mode 100644 index e3ef81c7c7..0000000000 --- a/docs/source/pype.hosts.unreal.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.hosts.unreal.plugin module -=============================== - -.. automodule:: pype.hosts.unreal.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.hosts.unreal.rst b/docs/source/pype.hosts.unreal.rst deleted file mode 100644 index f46140298b..0000000000 --- a/docs/source/pype.hosts.unreal.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.hosts.unreal package -========================= - -.. automodule:: pype.hosts.unreal - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.hosts.unreal.lib module ----------------------------- - -.. automodule:: pype.hosts.unreal.lib - :members: - :undoc-members: - :show-inheritance: - -pype.hosts.unreal.plugin module -------------------------------- - -.. automodule:: pype.hosts.unreal.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.launcher_actions.rst b/docs/source/pype.launcher_actions.rst deleted file mode 100644 index c7525acbd1..0000000000 --- a/docs/source/pype.launcher_actions.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.launcher\_actions module -============================= - -.. 
automodule:: pype.launcher_actions - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.abstract_collect_render.rst b/docs/source/pype.lib.abstract_collect_render.rst deleted file mode 100644 index d6adadc271..0000000000 --- a/docs/source/pype.lib.abstract_collect_render.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.abstract\_collect\_render module -========================================= - -.. automodule:: pype.lib.abstract_collect_render - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.abstract_expected_files.rst b/docs/source/pype.lib.abstract_expected_files.rst deleted file mode 100644 index 904aeb3375..0000000000 --- a/docs/source/pype.lib.abstract_expected_files.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.abstract\_expected\_files module -========================================= - -.. automodule:: pype.lib.abstract_expected_files - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.abstract_metaplugins.rst b/docs/source/pype.lib.abstract_metaplugins.rst deleted file mode 100644 index 9f2751b630..0000000000 --- a/docs/source/pype.lib.abstract_metaplugins.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.abstract\_metaplugins module -===================================== - -.. automodule:: pype.lib.abstract_metaplugins - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.abstract_submit_deadline.rst b/docs/source/pype.lib.abstract_submit_deadline.rst deleted file mode 100644 index a57222add3..0000000000 --- a/docs/source/pype.lib.abstract_submit_deadline.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.abstract\_submit\_deadline module -========================================== - -.. automodule:: pype.lib.abstract_submit_deadline - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.anatomy.rst b/docs/source/pype.lib.anatomy.rst deleted file mode 100644 index 7bddb37c8a..0000000000 --- a/docs/source/pype.lib.anatomy.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.anatomy module -======================= - -.. automodule:: pype.lib.anatomy - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.applications.rst b/docs/source/pype.lib.applications.rst deleted file mode 100644 index 8d1ff9b2c6..0000000000 --- a/docs/source/pype.lib.applications.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.applications module -============================ - -.. automodule:: pype.lib.applications - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.avalon_context.rst b/docs/source/pype.lib.avalon_context.rst deleted file mode 100644 index 067ea3380f..0000000000 --- a/docs/source/pype.lib.avalon_context.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.avalon\_context module -=============================== - -.. automodule:: pype.lib.avalon_context - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.config.rst b/docs/source/pype.lib.config.rst deleted file mode 100644 index ce4c13f4e7..0000000000 --- a/docs/source/pype.lib.config.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.config module -====================== - -.. automodule:: pype.lib.config - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.deprecated.rst b/docs/source/pype.lib.deprecated.rst deleted file mode 100644 index ec5ee58d67..0000000000 --- a/docs/source/pype.lib.deprecated.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.deprecated module -========================== - -.. 
automodule:: pype.lib.deprecated - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.editorial.rst b/docs/source/pype.lib.editorial.rst deleted file mode 100644 index d32e495e51..0000000000 --- a/docs/source/pype.lib.editorial.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.editorial module -========================= - -.. automodule:: pype.lib.editorial - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.env_tools.rst b/docs/source/pype.lib.env_tools.rst deleted file mode 100644 index cb470207c8..0000000000 --- a/docs/source/pype.lib.env_tools.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.env\_tools module -========================== - -.. automodule:: pype.lib.env_tools - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.execute.rst b/docs/source/pype.lib.execute.rst deleted file mode 100644 index 82c4ef0ad8..0000000000 --- a/docs/source/pype.lib.execute.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.execute module -======================= - -.. automodule:: pype.lib.execute - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.ffmpeg_utils.rst b/docs/source/pype.lib.ffmpeg_utils.rst deleted file mode 100644 index 968a3f39c8..0000000000 --- a/docs/source/pype.lib.ffmpeg_utils.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.ffmpeg\_utils module -============================= - -.. automodule:: pype.lib.ffmpeg_utils - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.git_progress.rst b/docs/source/pype.lib.git_progress.rst deleted file mode 100644 index 017cf4c3c7..0000000000 --- a/docs/source/pype.lib.git_progress.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.git\_progress module -============================= - -.. automodule:: pype.lib.git_progress - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.log.rst b/docs/source/pype.lib.log.rst deleted file mode 100644 index 6282178850..0000000000 --- a/docs/source/pype.lib.log.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.log module -=================== - -.. automodule:: pype.lib.log - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.mongo.rst b/docs/source/pype.lib.mongo.rst deleted file mode 100644 index 34fbc6af7f..0000000000 --- a/docs/source/pype.lib.mongo.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.mongo module -===================== - -.. automodule:: pype.lib.mongo - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.path_tools.rst b/docs/source/pype.lib.path_tools.rst deleted file mode 100644 index c19c41eea3..0000000000 --- a/docs/source/pype.lib.path_tools.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.path\_tools module -=========================== - -.. automodule:: pype.lib.path_tools - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.plugin_tools.rst b/docs/source/pype.lib.plugin_tools.rst deleted file mode 100644 index 6eadc5d3be..0000000000 --- a/docs/source/pype.lib.plugin_tools.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.plugin\_tools module -============================= - -.. automodule:: pype.lib.plugin_tools - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.profiling.rst b/docs/source/pype.lib.profiling.rst deleted file mode 100644 index 1fded0c8fd..0000000000 --- a/docs/source/pype.lib.profiling.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.profiling module -========================= - -.. 
automodule:: pype.lib.profiling - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.python_module_tools.rst b/docs/source/pype.lib.python_module_tools.rst deleted file mode 100644 index c916080bce..0000000000 --- a/docs/source/pype.lib.python_module_tools.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.python\_module\_tools module -===================================== - -.. automodule:: pype.lib.python_module_tools - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.rst b/docs/source/pype.lib.rst deleted file mode 100644 index ea880eea3e..0000000000 --- a/docs/source/pype.lib.rst +++ /dev/null @@ -1,90 +0,0 @@ -pype.lib package -================ - -.. automodule:: pype.lib - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.lib.anatomy module ------------------------ - -.. automodule:: pype.lib.anatomy - :members: - :undoc-members: - :show-inheritance: - -pype.lib.config module ----------------------- - -.. automodule:: pype.lib.config - :members: - :undoc-members: - :show-inheritance: - -pype.lib.execute module ------------------------ - -.. automodule:: pype.lib.execute - :members: - :undoc-members: - :show-inheritance: - -pype.lib.git\_progress module ------------------------------ - -.. automodule:: pype.lib.git_progress - :members: - :undoc-members: - :show-inheritance: - -pype.lib.lib module -------------------- - -.. automodule:: pype.lib.lib - :members: - :undoc-members: - :show-inheritance: - -pype.lib.log module -------------------- - -.. automodule:: pype.lib.log - :members: - :undoc-members: - :show-inheritance: - -pype.lib.mongo module ---------------------- - -.. automodule:: pype.lib.mongo - :members: - :undoc-members: - :show-inheritance: - -pype.lib.profiling module -------------------------- - -.. automodule:: pype.lib.profiling - :members: - :undoc-members: - :show-inheritance: - -pype.lib.terminal module ------------------------- - -.. automodule:: pype.lib.terminal - :members: - :undoc-members: - :show-inheritance: - -pype.lib.user\_settings module ------------------------------- - -.. automodule:: pype.lib.user_settings - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.terminal.rst b/docs/source/pype.lib.terminal.rst deleted file mode 100644 index dafe1d8f69..0000000000 --- a/docs/source/pype.lib.terminal.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.terminal module -======================== - -.. automodule:: pype.lib.terminal - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.terminal_splash.rst b/docs/source/pype.lib.terminal_splash.rst deleted file mode 100644 index 06038f0f09..0000000000 --- a/docs/source/pype.lib.terminal_splash.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.terminal\_splash module -================================ - -.. automodule:: pype.lib.terminal_splash - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.lib.user_settings.rst b/docs/source/pype.lib.user_settings.rst deleted file mode 100644 index 7b4e8ced78..0000000000 --- a/docs/source/pype.lib.user_settings.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.lib.user\_settings module -============================== - -.. 
automodule:: pype.lib.user_settings - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.adobe_communicator.adobe_comunicator.rst b/docs/source/pype.modules.adobe_communicator.adobe_comunicator.rst deleted file mode 100644 index aadbaa0dc5..0000000000 --- a/docs/source/pype.modules.adobe_communicator.adobe_comunicator.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.adobe\_communicator.adobe\_comunicator module -========================================================== - -.. automodule:: pype.modules.adobe_communicator.adobe_comunicator - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.adobe_communicator.lib.publish.rst b/docs/source/pype.modules.adobe_communicator.lib.publish.rst deleted file mode 100644 index a16bf1dd0a..0000000000 --- a/docs/source/pype.modules.adobe_communicator.lib.publish.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.adobe\_communicator.lib.publish module -=================================================== - -.. automodule:: pype.modules.adobe_communicator.lib.publish - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.adobe_communicator.lib.rest_api.rst b/docs/source/pype.modules.adobe_communicator.lib.rest_api.rst deleted file mode 100644 index 457bebef99..0000000000 --- a/docs/source/pype.modules.adobe_communicator.lib.rest_api.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.adobe\_communicator.lib.rest\_api module -===================================================== - -.. automodule:: pype.modules.adobe_communicator.lib.rest_api - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.adobe_communicator.lib.rst b/docs/source/pype.modules.adobe_communicator.lib.rst deleted file mode 100644 index cdec4ce80e..0000000000 --- a/docs/source/pype.modules.adobe_communicator.lib.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.modules.adobe\_communicator.lib package -============================================ - -.. automodule:: pype.modules.adobe_communicator.lib - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.modules.adobe\_communicator.lib.publish module ---------------------------------------------------- - -.. automodule:: pype.modules.adobe_communicator.lib.publish - :members: - :undoc-members: - :show-inheritance: - -pype.modules.adobe\_communicator.lib.rest\_api module ------------------------------------------------------ - -.. automodule:: pype.modules.adobe_communicator.lib.rest_api - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.adobe_communicator.rst b/docs/source/pype.modules.adobe_communicator.rst deleted file mode 100644 index f2fa40ced4..0000000000 --- a/docs/source/pype.modules.adobe_communicator.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.modules.adobe\_communicator package -======================================== - -.. automodule:: pype.modules.adobe_communicator - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.modules.adobe_communicator.lib - -Submodules ----------- - -pype.modules.adobe\_communicator.adobe\_comunicator module ----------------------------------------------------------- - -.. 
automodule:: pype.modules.adobe_communicator.adobe_comunicator - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.avalon_apps.avalon_app.rst b/docs/source/pype.modules.avalon_apps.avalon_app.rst deleted file mode 100644 index 43f467e748..0000000000 --- a/docs/source/pype.modules.avalon_apps.avalon_app.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.avalon\_apps.avalon\_app module -============================================ - -.. automodule:: pype.modules.avalon_apps.avalon_app - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.avalon_apps.rest_api.rst b/docs/source/pype.modules.avalon_apps.rest_api.rst deleted file mode 100644 index d89c979311..0000000000 --- a/docs/source/pype.modules.avalon_apps.rest_api.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.avalon\_apps.rest\_api module -========================================== - -.. automodule:: pype.modules.avalon_apps.rest_api - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.avalon_apps.rst b/docs/source/pype.modules.avalon_apps.rst deleted file mode 100644 index 4755eddae6..0000000000 --- a/docs/source/pype.modules.avalon_apps.rst +++ /dev/null @@ -1,26 +0,0 @@ -pype.modules.avalon\_apps package -================================= - -.. automodule:: pype.modules.avalon_apps - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.modules.avalon\_apps.avalon\_app module --------------------------------------------- - -.. automodule:: pype.modules.avalon_apps.avalon_app - :members: - :undoc-members: - :show-inheritance: - -pype.modules.avalon\_apps.rest\_api module ------------------------------------------- - -.. automodule:: pype.modules.avalon_apps.rest_api - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.base.rst b/docs/source/pype.modules.base.rst deleted file mode 100644 index 7cd3cfbd44..0000000000 --- a/docs/source/pype.modules.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.base module -======================== - -.. automodule:: pype.modules.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.clockify.clockify.rst b/docs/source/pype.modules.clockify.clockify.rst deleted file mode 100644 index a3deaab81d..0000000000 --- a/docs/source/pype.modules.clockify.clockify.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.clockify.clockify module -===================================== - -.. automodule:: pype.modules.clockify.clockify - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.clockify.clockify_api.rst b/docs/source/pype.modules.clockify.clockify_api.rst deleted file mode 100644 index 2facc550c5..0000000000 --- a/docs/source/pype.modules.clockify.clockify_api.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.clockify.clockify\_api module -========================================== - -.. automodule:: pype.modules.clockify.clockify_api - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.clockify.clockify_module.rst b/docs/source/pype.modules.clockify.clockify_module.rst deleted file mode 100644 index 85f8e75ad1..0000000000 --- a/docs/source/pype.modules.clockify.clockify_module.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.clockify.clockify\_module module -============================================= - -.. 
automodule:: pype.modules.clockify.clockify_module - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.clockify.constants.rst b/docs/source/pype.modules.clockify.constants.rst deleted file mode 100644 index e30a073bfc..0000000000 --- a/docs/source/pype.modules.clockify.constants.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.clockify.constants module -====================================== - -.. automodule:: pype.modules.clockify.constants - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.clockify.rst b/docs/source/pype.modules.clockify.rst deleted file mode 100644 index 550ba049c2..0000000000 --- a/docs/source/pype.modules.clockify.rst +++ /dev/null @@ -1,42 +0,0 @@ -pype.modules.clockify package -============================= - -.. automodule:: pype.modules.clockify - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.modules.clockify.clockify module -------------------------------------- - -.. automodule:: pype.modules.clockify.clockify - :members: - :undoc-members: - :show-inheritance: - -pype.modules.clockify.clockify\_api module ------------------------------------------- - -.. automodule:: pype.modules.clockify.clockify_api - :members: - :undoc-members: - :show-inheritance: - -pype.modules.clockify.constants module --------------------------------------- - -.. automodule:: pype.modules.clockify.constants - :members: - :undoc-members: - :show-inheritance: - -pype.modules.clockify.widgets module ------------------------------------- - -.. automodule:: pype.modules.clockify.widgets - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.clockify.widgets.rst b/docs/source/pype.modules.clockify.widgets.rst deleted file mode 100644 index e9809fb048..0000000000 --- a/docs/source/pype.modules.clockify.widgets.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.clockify.widgets module -==================================== - -.. automodule:: pype.modules.clockify.widgets - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.deadline.deadline_module.rst b/docs/source/pype.modules.deadline.deadline_module.rst deleted file mode 100644 index 43e7198a8b..0000000000 --- a/docs/source/pype.modules.deadline.deadline_module.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.deadline.deadline\_module module -============================================= - -.. automodule:: pype.modules.deadline.deadline_module - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.deadline.rst b/docs/source/pype.modules.deadline.rst deleted file mode 100644 index 7633b2b950..0000000000 --- a/docs/source/pype.modules.deadline.rst +++ /dev/null @@ -1,15 +0,0 @@ -pype.modules.deadline package -============================= - -.. automodule:: pype.modules.deadline - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -.. toctree:: - :maxdepth: 10 - - pype.modules.deadline.deadline_module diff --git a/docs/source/pype.modules.ftrack.ftrack_module.rst b/docs/source/pype.modules.ftrack.ftrack_module.rst deleted file mode 100644 index 4188ffbed8..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_module.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.ftrack.ftrack\_module module -========================================= - -.. 
automodule:: pype.modules.ftrack.ftrack_module - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.ftrack.ftrack_server.custom_db_connector.rst b/docs/source/pype.modules.ftrack.ftrack_server.custom_db_connector.rst deleted file mode 100644 index b42c3e054d..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_server.custom_db_connector.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.ftrack.ftrack\_server.custom\_db\_connector module -=============================================================== - -.. automodule:: pype.modules.ftrack.ftrack_server.custom_db_connector - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.ftrack.ftrack_server.event_server_cli.rst b/docs/source/pype.modules.ftrack.ftrack_server.event_server_cli.rst deleted file mode 100644 index d6404f965c..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_server.event_server_cli.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.ftrack.ftrack\_server.event\_server\_cli module -============================================================ - -.. automodule:: pype.modules.ftrack.ftrack_server.event_server_cli - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.ftrack.ftrack_server.ftrack_server.rst b/docs/source/pype.modules.ftrack.ftrack_server.ftrack_server.rst deleted file mode 100644 index af2783c263..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_server.ftrack_server.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.ftrack.ftrack\_server.ftrack\_server module -======================================================== - -.. automodule:: pype.modules.ftrack.ftrack_server.ftrack_server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.ftrack.ftrack_server.lib.rst b/docs/source/pype.modules.ftrack.ftrack_server.lib.rst deleted file mode 100644 index 2ac4cef517..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_server.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.ftrack.ftrack\_server.lib module -============================================= - -.. automodule:: pype.modules.ftrack.ftrack_server.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.ftrack.ftrack_server.rst b/docs/source/pype.modules.ftrack.ftrack_server.rst deleted file mode 100644 index 417acc1a45..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_server.rst +++ /dev/null @@ -1,90 +0,0 @@ -pype.modules.ftrack.ftrack\_server package -========================================== - -.. automodule:: pype.modules.ftrack.ftrack_server - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.modules.ftrack.ftrack\_server.custom\_db\_connector module ---------------------------------------------------------------- - -.. automodule:: pype.modules.ftrack.ftrack_server.custom_db_connector - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.event\_server\_cli module ------------------------------------------------------------- - -.. automodule:: pype.modules.ftrack.ftrack_server.event_server_cli - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.ftrack\_server module --------------------------------------------------------- - -.. automodule:: pype.modules.ftrack.ftrack_server.ftrack_server - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.lib module ---------------------------------------------- - -.. 
automodule:: pype.modules.ftrack.ftrack_server.lib - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.socket\_thread module --------------------------------------------------------- - -.. automodule:: pype.modules.ftrack.ftrack_server.socket_thread - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.sub\_event\_processor module ---------------------------------------------------------------- - -.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_processor - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.sub\_event\_status module ------------------------------------------------------------- - -.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_status - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.sub\_event\_storer module ------------------------------------------------------------- - -.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_storer - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.sub\_legacy\_server module -------------------------------------------------------------- - -.. automodule:: pype.modules.ftrack.ftrack_server.sub_legacy_server - :members: - :undoc-members: - :show-inheritance: - -pype.modules.ftrack.ftrack\_server.sub\_user\_server module ------------------------------------------------------------ - -.. automodule:: pype.modules.ftrack.ftrack_server.sub_user_server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.ftrack.ftrack_server.socket_thread.rst b/docs/source/pype.modules.ftrack.ftrack_server.socket_thread.rst deleted file mode 100644 index d8d24a8288..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_server.socket_thread.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.ftrack.ftrack\_server.socket\_thread module -======================================================== - -.. automodule:: pype.modules.ftrack.ftrack_server.socket_thread - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_processor.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_event_processor.rst deleted file mode 100644 index 04f863e347..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_processor.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.ftrack.ftrack\_server.sub\_event\_processor module -=============================================================== - -.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_processor - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_status.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_event_status.rst deleted file mode 100644 index 876b7313cf..0000000000 --- a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_status.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.modules.ftrack.ftrack\_server.sub\_event\_status module -============================================================ - -.. 
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_status
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_storer.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_event_storer.rst
deleted file mode 100644
index 3d2d400d55..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.sub_event_storer.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.sub\_event\_storer module
-============================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_event_storer
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_legacy_server.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_legacy_server.rst
deleted file mode 100644
index d25cdfe8de..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.sub_legacy_server.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.sub\_legacy\_server module
-=============================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_legacy_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.ftrack_server.sub_user_server.rst b/docs/source/pype.modules.ftrack.ftrack_server.sub_user_server.rst
deleted file mode 100644
index c13095d5f1..0000000000
--- a/docs/source/pype.modules.ftrack.ftrack_server.sub_user_server.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.ftrack\_server.sub\_user\_server module
-===========================================================
-
-.. automodule:: pype.modules.ftrack.ftrack_server.sub_user_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.avalon_sync.rst b/docs/source/pype.modules.ftrack.lib.avalon_sync.rst
deleted file mode 100644
index 954ec4d911..0000000000
--- a/docs/source/pype.modules.ftrack.lib.avalon_sync.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.avalon\_sync module
-===========================================
-
-.. automodule:: pype.modules.ftrack.lib.avalon_sync
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.credentials.rst b/docs/source/pype.modules.ftrack.lib.credentials.rst
deleted file mode 100644
index 3965dc406d..0000000000
--- a/docs/source/pype.modules.ftrack.lib.credentials.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.credentials module
-==========================================
-
-.. automodule:: pype.modules.ftrack.lib.credentials
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.ftrack_action_handler.rst b/docs/source/pype.modules.ftrack.lib.ftrack_action_handler.rst
deleted file mode 100644
index cec38f9b8a..0000000000
--- a/docs/source/pype.modules.ftrack.lib.ftrack_action_handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.ftrack\_action\_handler module
-======================================================
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_action_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.ftrack_app_handler.rst b/docs/source/pype.modules.ftrack.lib.ftrack_app_handler.rst
deleted file mode 100644
index 1f7395927d..0000000000
--- a/docs/source/pype.modules.ftrack.lib.ftrack_app_handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.ftrack\_app\_handler module
-===================================================
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_app_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.ftrack_base_handler.rst b/docs/source/pype.modules.ftrack.lib.ftrack_base_handler.rst
deleted file mode 100644
index 94fab7c940..0000000000
--- a/docs/source/pype.modules.ftrack.lib.ftrack_base_handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.ftrack\_base\_handler module
-====================================================
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_base_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.ftrack_event_handler.rst b/docs/source/pype.modules.ftrack.lib.ftrack_event_handler.rst
deleted file mode 100644
index 0b57219b50..0000000000
--- a/docs/source/pype.modules.ftrack.lib.ftrack_event_handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.ftrack\_event\_handler module
-=====================================================
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_event_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.rst b/docs/source/pype.modules.ftrack.lib.rst
deleted file mode 100644
index 32a219ab3a..0000000000
--- a/docs/source/pype.modules.ftrack.lib.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-pype.modules.ftrack.lib package
-===============================
-
-.. automodule:: pype.modules.ftrack.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.ftrack.lib.avalon\_sync module
--------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.avalon_sync
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.credentials module
-------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.credentials
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.ftrack\_action\_handler module
-------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_action_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.ftrack\_app\_handler module
----------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_app_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.ftrack\_base\_handler module
-----------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_base_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.lib.ftrack\_event\_handler module
------------------------------------------------------
-
-.. automodule:: pype.modules.ftrack.lib.ftrack_event_handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.lib.settings.rst b/docs/source/pype.modules.ftrack.lib.settings.rst
deleted file mode 100644
index 255d52178a..0000000000
--- a/docs/source/pype.modules.ftrack.lib.settings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.lib.settings module
-=======================================
-
-.. automodule:: pype.modules.ftrack.lib.settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.rst b/docs/source/pype.modules.ftrack.rst
deleted file mode 100644
index 13a92db808..0000000000
--- a/docs/source/pype.modules.ftrack.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-pype.modules.ftrack package
-===========================
-
-.. automodule:: pype.modules.ftrack
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.modules.ftrack.ftrack_server
-   pype.modules.ftrack.lib
-   pype.modules.ftrack.tray
diff --git a/docs/source/pype.modules.ftrack.tray.ftrack_module.rst b/docs/source/pype.modules.ftrack.tray.ftrack_module.rst
deleted file mode 100644
index c4a370472c..0000000000
--- a/docs/source/pype.modules.ftrack.tray.ftrack_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.tray.ftrack\_module module
-==============================================
-
-.. automodule:: pype.modules.ftrack.tray.ftrack_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.tray.ftrack_tray.rst b/docs/source/pype.modules.ftrack.tray.ftrack_tray.rst
deleted file mode 100644
index 147647e9b4..0000000000
--- a/docs/source/pype.modules.ftrack.tray.ftrack_tray.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.tray.ftrack\_tray module
-============================================
-
-.. automodule:: pype.modules.ftrack.tray.ftrack_tray
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.tray.login_dialog.rst b/docs/source/pype.modules.ftrack.tray.login_dialog.rst
deleted file mode 100644
index dabc2e73a7..0000000000
--- a/docs/source/pype.modules.ftrack.tray.login_dialog.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.tray.login\_dialog module
-=============================================
-
-.. automodule:: pype.modules.ftrack.tray.login_dialog
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.tray.login_tools.rst b/docs/source/pype.modules.ftrack.tray.login_tools.rst
deleted file mode 100644
index 00ec690866..0000000000
--- a/docs/source/pype.modules.ftrack.tray.login_tools.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.ftrack.tray.login\_tools module
-============================================
-
-.. automodule:: pype.modules.ftrack.tray.login_tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.ftrack.tray.rst b/docs/source/pype.modules.ftrack.tray.rst
deleted file mode 100644
index 79772a9c3b..0000000000
--- a/docs/source/pype.modules.ftrack.tray.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-pype.modules.ftrack.tray package
-================================
-
-.. automodule:: pype.modules.ftrack.tray
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.ftrack.tray.ftrack\_module module
-----------------------------------------------
-
-.. automodule:: pype.modules.ftrack.tray.ftrack_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.tray.login\_dialog module
----------------------------------------------
-
-.. automodule:: pype.modules.ftrack.tray.login_dialog
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.ftrack.tray.login\_tools module
---------------------------------------------
-
-.. automodule:: pype.modules.ftrack.tray.login_tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.idle_manager.idle_manager.rst b/docs/source/pype.modules.idle_manager.idle_manager.rst
deleted file mode 100644
index 8e93f97e6b..0000000000
--- a/docs/source/pype.modules.idle_manager.idle_manager.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.idle\_manager.idle\_manager module
-===============================================
-
-.. automodule:: pype.modules.idle_manager.idle_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.idle_manager.rst b/docs/source/pype.modules.idle_manager.rst
deleted file mode 100644
index a3f7922999..0000000000
--- a/docs/source/pype.modules.idle_manager.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-pype.modules.idle\_manager package
-==================================
-
-.. automodule:: pype.modules.idle_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.idle\_manager.idle\_manager module
------------------------------------------------
-
-.. automodule:: pype.modules.idle_manager.idle_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.launcher_action.rst b/docs/source/pype.modules.launcher_action.rst
deleted file mode 100644
index a63408e747..0000000000
--- a/docs/source/pype.modules.launcher_action.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.launcher\_action module
-====================================
-
-.. automodule:: pype.modules.launcher_action
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.log_viewer.log_view_module.rst b/docs/source/pype.modules.log_viewer.log_view_module.rst
deleted file mode 100644
index 8d80170a9c..0000000000
--- a/docs/source/pype.modules.log_viewer.log_view_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.log\_viewer.log\_view\_module module
-=================================================
-
-.. automodule:: pype.modules.log_viewer.log_view_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.log_viewer.rst b/docs/source/pype.modules.log_viewer.rst
deleted file mode 100644
index e275d56086..0000000000
--- a/docs/source/pype.modules.log_viewer.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-pype.modules.log\_viewer package
-================================
-
-.. automodule:: pype.modules.log_viewer
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.modules.log_viewer.tray
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.modules.log_viewer.log_view_module
diff --git a/docs/source/pype.modules.log_viewer.tray.app.rst b/docs/source/pype.modules.log_viewer.tray.app.rst
deleted file mode 100644
index 0948a05594..0000000000
--- a/docs/source/pype.modules.log_viewer.tray.app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.log\_viewer.tray.app module
-========================================
-
-.. automodule:: pype.modules.log_viewer.tray.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.log_viewer.tray.models.rst b/docs/source/pype.modules.log_viewer.tray.models.rst
deleted file mode 100644
index 4da3887600..0000000000
--- a/docs/source/pype.modules.log_viewer.tray.models.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.log\_viewer.tray.models module
-===========================================
-
-.. automodule:: pype.modules.log_viewer.tray.models
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.log_viewer.tray.rst b/docs/source/pype.modules.log_viewer.tray.rst
deleted file mode 100644
index 5f4b92f627..0000000000
--- a/docs/source/pype.modules.log_viewer.tray.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-pype.modules.log\_viewer.tray package
-=====================================
-
-.. automodule:: pype.modules.log_viewer.tray
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.modules.log_viewer.tray.app
-   pype.modules.log_viewer.tray.models
-   pype.modules.log_viewer.tray.widgets
diff --git a/docs/source/pype.modules.log_viewer.tray.widgets.rst b/docs/source/pype.modules.log_viewer.tray.widgets.rst
deleted file mode 100644
index cb57c96559..0000000000
--- a/docs/source/pype.modules.log_viewer.tray.widgets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.log\_viewer.tray.widgets module
-============================================
-
-.. automodule:: pype.modules.log_viewer.tray.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.muster.muster.rst b/docs/source/pype.modules.muster.muster.rst
deleted file mode 100644
index d3ba1e7052..0000000000
--- a/docs/source/pype.modules.muster.muster.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.muster.muster module
-=================================
-
-.. automodule:: pype.modules.muster.muster
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.muster.rst b/docs/source/pype.modules.muster.rst
deleted file mode 100644
index d8d0f762f4..0000000000
--- a/docs/source/pype.modules.muster.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.muster package
-===========================
-
-.. automodule:: pype.modules.muster
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.muster.muster module
----------------------------------
-
-.. automodule:: pype.modules.muster.muster
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.muster.widget\_login module
-----------------------------------------
-
-.. automodule:: pype.modules.muster.widget_login
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.muster.widget_login.rst b/docs/source/pype.modules.muster.widget_login.rst
deleted file mode 100644
index 1c59cec820..0000000000
--- a/docs/source/pype.modules.muster.widget_login.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.muster.widget\_login module
-========================================
-
-.. automodule:: pype.modules.muster.widget_login
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.base_class.rst b/docs/source/pype.modules.rest_api.base_class.rst
deleted file mode 100644
index c2a1030a78..0000000000
--- a/docs/source/pype.modules.rest_api.base_class.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.base\_class module
-=========================================
-
-.. automodule:: pype.modules.rest_api.base_class
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.exceptions.rst b/docs/source/pype.modules.rest_api.lib.exceptions.rst
deleted file mode 100644
index d755420ad0..0000000000
--- a/docs/source/pype.modules.rest_api.lib.exceptions.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.lib.exceptions module
-============================================
-
-.. automodule:: pype.modules.rest_api.lib.exceptions
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.factory.rst b/docs/source/pype.modules.rest_api.lib.factory.rst
deleted file mode 100644
index 2131d1b8da..0000000000
--- a/docs/source/pype.modules.rest_api.lib.factory.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.lib.factory module
-=========================================
-
-.. automodule:: pype.modules.rest_api.lib.factory
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.handler.rst b/docs/source/pype.modules.rest_api.lib.handler.rst
deleted file mode 100644
index 6e340daf9b..0000000000
--- a/docs/source/pype.modules.rest_api.lib.handler.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.lib.handler module
-=========================================
-
-.. automodule:: pype.modules.rest_api.lib.handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.lib.rst b/docs/source/pype.modules.rest_api.lib.lib.rst
deleted file mode 100644
index 19663788e0..0000000000
--- a/docs/source/pype.modules.rest_api.lib.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.lib.lib module
-=====================================
-
-.. automodule:: pype.modules.rest_api.lib.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.lib.rst b/docs/source/pype.modules.rest_api.lib.rst
deleted file mode 100644
index ed8288ee73..0000000000
--- a/docs/source/pype.modules.rest_api.lib.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-pype.modules.rest\_api.lib package
-==================================
-
-.. automodule:: pype.modules.rest_api.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.rest\_api.lib.exceptions module
---------------------------------------------
-
-.. automodule:: pype.modules.rest_api.lib.exceptions
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.rest\_api.lib.factory module
------------------------------------------
-
-.. automodule:: pype.modules.rest_api.lib.factory
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.rest\_api.lib.handler module
------------------------------------------
-
-.. automodule:: pype.modules.rest_api.lib.handler
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.rest\_api.lib.lib module
--------------------------------------
-
-.. automodule:: pype.modules.rest_api.lib.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.rest_api.rst b/docs/source/pype.modules.rest_api.rest_api.rst
deleted file mode 100644
index e3d951ac9f..0000000000
--- a/docs/source/pype.modules.rest_api.rest_api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.rest\_api.rest\_api module
-=======================================
-
-.. automodule:: pype.modules.rest_api.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rest_api.rst b/docs/source/pype.modules.rest_api.rst
deleted file mode 100644
index 09c58c84f8..0000000000
--- a/docs/source/pype.modules.rest_api.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-pype.modules.rest\_api package
-==============================
-
-.. automodule:: pype.modules.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.modules.rest_api.lib
-
-Submodules
-----------
-
-pype.modules.rest\_api.base\_class module
------------------------------------------
-
-.. automodule:: pype.modules.rest_api.base_class
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.rest\_api.rest\_api module
----------------------------------------
-
-.. automodule:: pype.modules.rest_api.rest_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.rst b/docs/source/pype.modules.rst
deleted file mode 100644
index 148c2084b4..0000000000
--- a/docs/source/pype.modules.rst
+++ /dev/null
@@ -1,36 +0,0 @@
-pype.modules package
-====================
-
-.. automodule:: pype.modules
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.modules.adobe_communicator
-   pype.modules.avalon_apps
-   pype.modules.clockify
-   pype.modules.ftrack
-   pype.modules.idle_manager
-   pype.modules.muster
-   pype.modules.rest_api
-   pype.modules.standalonepublish
-   pype.modules.timers_manager
-   pype.modules.user
-   pype.modules.websocket_server
-
-Submodules
-----------
-
-pype.modules.base module
-------------------------
-
-.. automodule:: pype.modules.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.settings_action.rst b/docs/source/pype.modules.settings_action.rst
deleted file mode 100644
index 10f0881ced..0000000000
--- a/docs/source/pype.modules.settings_action.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.settings\_action module
-====================================
-
-.. automodule:: pype.modules.settings_action
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.standalonepublish.rst b/docs/source/pype.modules.standalonepublish.rst
deleted file mode 100644
index 2ed366af5c..0000000000
--- a/docs/source/pype.modules.standalonepublish.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-pype.modules.standalonepublish package
-======================================
-
-.. automodule:: pype.modules.standalonepublish
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.standalonepublish.standalonepublish\_module module
----------------------------------------------------------------
-
-.. automodule:: pype.modules.standalonepublish.standalonepublish_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.standalonepublish.standalonepublish_module.rst b/docs/source/pype.modules.standalonepublish.standalonepublish_module.rst
deleted file mode 100644
index a78826a4b4..0000000000
--- a/docs/source/pype.modules.standalonepublish.standalonepublish_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.standalonepublish.standalonepublish\_module module
-===============================================================
-
-.. automodule:: pype.modules.standalonepublish.standalonepublish_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.standalonepublish_action.rst b/docs/source/pype.modules.standalonepublish_action.rst
deleted file mode 100644
index d51dbcefa0..0000000000
--- a/docs/source/pype.modules.standalonepublish_action.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.standalonepublish\_action module
-=============================================
-
-.. automodule:: pype.modules.standalonepublish_action
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.sync_server.rst b/docs/source/pype.modules.sync_server.rst
deleted file mode 100644
index a26dc7e212..0000000000
--- a/docs/source/pype.modules.sync_server.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-pype.modules.sync\_server package
-=================================
-
-.. automodule:: pype.modules.sync_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.modules.sync_server.sync_server
-   pype.modules.sync_server.utils
diff --git a/docs/source/pype.modules.sync_server.sync_server.rst b/docs/source/pype.modules.sync_server.sync_server.rst
deleted file mode 100644
index 36d6aa68ed..0000000000
--- a/docs/source/pype.modules.sync_server.sync_server.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.sync\_server.sync\_server module
-=============================================
-
-.. automodule:: pype.modules.sync_server.sync_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.sync_server.utils.rst b/docs/source/pype.modules.sync_server.utils.rst
deleted file mode 100644
index 325d5e435d..0000000000
--- a/docs/source/pype.modules.sync_server.utils.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.sync\_server.utils module
-======================================
-
-.. automodule:: pype.modules.sync_server.utils
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.timers_manager.rst b/docs/source/pype.modules.timers_manager.rst
deleted file mode 100644
index 6c971e9dc1..0000000000
--- a/docs/source/pype.modules.timers_manager.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.timers\_manager package
-====================================
-
-.. automodule:: pype.modules.timers_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.timers\_manager.timers\_manager module
----------------------------------------------------
-
-.. automodule:: pype.modules.timers_manager.timers_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.timers\_manager.widget\_user\_idle module
-------------------------------------------------------
-
-.. automodule:: pype.modules.timers_manager.widget_user_idle
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.timers_manager.timers_manager.rst b/docs/source/pype.modules.timers_manager.timers_manager.rst
deleted file mode 100644
index fe18e4d15c..0000000000
--- a/docs/source/pype.modules.timers_manager.timers_manager.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.timers\_manager.timers\_manager module
-===================================================
-
-.. automodule:: pype.modules.timers_manager.timers_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.timers_manager.widget_user_idle.rst b/docs/source/pype.modules.timers_manager.widget_user_idle.rst
deleted file mode 100644
index b072879c7a..0000000000
--- a/docs/source/pype.modules.timers_manager.widget_user_idle.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.timers\_manager.widget\_user\_idle module
-======================================================
-
-.. automodule:: pype.modules.timers_manager.widget_user_idle
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.user.rst b/docs/source/pype.modules.user.rst
deleted file mode 100644
index d181b263e5..0000000000
--- a/docs/source/pype.modules.user.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.user package
-=========================
-
-.. automodule:: pype.modules.user
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.user.user\_module module
--------------------------------------
-
-.. automodule:: pype.modules.user.user_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.user.widget\_user module
--------------------------------------
-
-.. automodule:: pype.modules.user.widget_user
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.user.user_module.rst b/docs/source/pype.modules.user.user_module.rst
deleted file mode 100644
index a8e0cd6bad..0000000000
--- a/docs/source/pype.modules.user.user_module.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.user.user\_module module
-=====================================
-
-.. automodule:: pype.modules.user.user_module
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.user.widget_user.rst b/docs/source/pype.modules.user.widget_user.rst
deleted file mode 100644
index 2979e5ead4..0000000000
--- a/docs/source/pype.modules.user.widget_user.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.user.widget\_user module
-=====================================
-
-.. automodule:: pype.modules.user.widget_user
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.websocket_server.hosts.aftereffects.rst b/docs/source/pype.modules.websocket_server.hosts.aftereffects.rst
deleted file mode 100644
index 9f4720ae14..0000000000
--- a/docs/source/pype.modules.websocket_server.hosts.aftereffects.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.websocket\_server.hosts.aftereffects module
-========================================================
-
-.. automodule:: pype.modules.websocket_server.hosts.aftereffects
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.websocket_server.hosts.external_app_1.rst b/docs/source/pype.modules.websocket_server.hosts.external_app_1.rst
deleted file mode 100644
index 4ac69d9015..0000000000
--- a/docs/source/pype.modules.websocket_server.hosts.external_app_1.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.websocket\_server.hosts.external\_app\_1 module
-============================================================
-
-.. automodule:: pype.modules.websocket_server.hosts.external_app_1
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.websocket_server.hosts.photoshop.rst b/docs/source/pype.modules.websocket_server.hosts.photoshop.rst
deleted file mode 100644
index cbda61275a..0000000000
--- a/docs/source/pype.modules.websocket_server.hosts.photoshop.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.websocket\_server.hosts.photoshop module
-=====================================================
-
-.. automodule:: pype.modules.websocket_server.hosts.photoshop
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.websocket_server.hosts.rst b/docs/source/pype.modules.websocket_server.hosts.rst
deleted file mode 100644
index d5ce7c3f8e..0000000000
--- a/docs/source/pype.modules.websocket_server.hosts.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.websocket\_server.hosts package
-============================================
-
-.. automodule:: pype.modules.websocket_server.hosts
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.modules.websocket\_server.hosts.external\_app\_1 module
-------------------------------------------------------------
-
-.. automodule:: pype.modules.websocket_server.hosts.external_app_1
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules.websocket\_server.hosts.photoshop module
------------------------------------------------------
-
-.. automodule:: pype.modules.websocket_server.hosts.photoshop
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.websocket_server.rst b/docs/source/pype.modules.websocket_server.rst
deleted file mode 100644
index a83d371df1..0000000000
--- a/docs/source/pype.modules.websocket_server.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.modules.websocket\_server package
-======================================
-
-.. automodule:: pype.modules.websocket_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.modules.websocket_server.hosts
-
-Submodules
-----------
-
-pype.modules.websocket\_server.websocket\_server module
--------------------------------------------------------
-
-.. automodule:: pype.modules.websocket_server.websocket_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules.websocket_server.websocket_server.rst b/docs/source/pype.modules.websocket_server.websocket_server.rst
deleted file mode 100644
index 354c9e6cf9..0000000000
--- a/docs/source/pype.modules.websocket_server.websocket_server.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules.websocket\_server.websocket\_server module
-=======================================================
-
-.. automodule:: pype.modules.websocket_server.websocket_server
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.modules_manager.rst b/docs/source/pype.modules_manager.rst
deleted file mode 100644
index a5f2327d65..0000000000
--- a/docs/source/pype.modules_manager.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.modules\_manager module
-============================
-
-.. automodule:: pype.modules_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugin.rst b/docs/source/pype.plugin.rst
deleted file mode 100644
index c20bb77b2b..0000000000
--- a/docs/source/pype.plugin.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugin module
-==================
-
-.. automodule:: pype.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_animation.rst b/docs/source/pype.plugins.maya.publish.collect_animation.rst
deleted file mode 100644
index 497c497057..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_animation.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_animation module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_animation
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_ass.rst b/docs/source/pype.plugins.maya.publish.collect_ass.rst
deleted file mode 100644
index a44e61ce98..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_ass.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_ass module
-=============================================
-
-.. automodule:: pype.plugins.maya.publish.collect_ass
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_assembly.rst b/docs/source/pype.plugins.maya.publish.collect_assembly.rst
deleted file mode 100644
index 5baa91818b..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_assembly.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_assembly module
-==================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_assembly
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_file_dependencies.rst b/docs/source/pype.plugins.maya.publish.collect_file_dependencies.rst
deleted file mode 100644
index efe857140e..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_file_dependencies.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_file\_dependencies module
-============================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_file_dependencies
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_ftrack_family.rst b/docs/source/pype.plugins.maya.publish.collect_ftrack_family.rst
deleted file mode 100644
index 872bbc69a4..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_ftrack_family.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_ftrack\_family module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_ftrack_family
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_history.rst b/docs/source/pype.plugins.maya.publish.collect_history.rst
deleted file mode 100644
index 5a98778c24..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_history.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_history module
-=================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_history
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_instances.rst b/docs/source/pype.plugins.maya.publish.collect_instances.rst
deleted file mode 100644
index 33c8b97597..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_instances.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_instances module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_instances
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_look.rst b/docs/source/pype.plugins.maya.publish.collect_look.rst
deleted file mode 100644
index 234fcf20d1..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_look.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_look module
-==============================================
-
-.. automodule:: pype.plugins.maya.publish.collect_look
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_maya_units.rst b/docs/source/pype.plugins.maya.publish.collect_maya_units.rst
deleted file mode 100644
index 0cb01b0fa7..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_maya_units.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_maya\_units module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_maya_units
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_maya_workspace.rst b/docs/source/pype.plugins.maya.publish.collect_maya_workspace.rst
deleted file mode 100644
index 7447052004..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_maya_workspace.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_maya\_workspace module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_maya_workspace
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_mayaascii.rst b/docs/source/pype.plugins.maya.publish.collect_mayaascii.rst
deleted file mode 100644
index 14fe826229..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_mayaascii.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_mayaascii module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_mayaascii
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_model.rst b/docs/source/pype.plugins.maya.publish.collect_model.rst
deleted file mode 100644
index b30bf3fb22..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_model.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_model module
-===============================================
-
-.. automodule:: pype.plugins.maya.publish.collect_model
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_remove_marked.rst b/docs/source/pype.plugins.maya.publish.collect_remove_marked.rst
deleted file mode 100644
index a0bf9498d7..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_remove_marked.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_remove\_marked module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_remove_marked
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_render.rst b/docs/source/pype.plugins.maya.publish.collect_render.rst
deleted file mode 100644
index 6de8827119..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_render.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_render module
-================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_render
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_render_layer_aovs.rst b/docs/source/pype.plugins.maya.publish.collect_render_layer_aovs.rst
deleted file mode 100644
index ab511fc5dd..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_render_layer_aovs.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_render\_layer\_aovs module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_render_layer_aovs
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_renderable_camera.rst b/docs/source/pype.plugins.maya.publish.collect_renderable_camera.rst
deleted file mode 100644
index c98e8000a1..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_renderable_camera.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_renderable\_camera module
-============================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_renderable_camera
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_review.rst b/docs/source/pype.plugins.maya.publish.collect_review.rst
deleted file mode 100644
index d73127aa85..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_review.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_review module
-================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_review
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_rig.rst b/docs/source/pype.plugins.maya.publish.collect_rig.rst
deleted file mode 100644
index e7c0528482..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_rig.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_rig module
-=============================================
-
-.. automodule:: pype.plugins.maya.publish.collect_rig
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_scene.rst b/docs/source/pype.plugins.maya.publish.collect_scene.rst
deleted file mode 100644
index c5c2fef222..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_scene.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_scene module
-===============================================
-
-.. automodule:: pype.plugins.maya.publish.collect_scene
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_unreal_staticmesh.rst b/docs/source/pype.plugins.maya.publish.collect_unreal_staticmesh.rst
deleted file mode 100644
index 673f0865fd..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_unreal_staticmesh.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_unreal\_staticmesh module
-============================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_unreal_staticmesh
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_workscene_fps.rst b/docs/source/pype.plugins.maya.publish.collect_workscene_fps.rst
deleted file mode 100644
index ed4386a7ba..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_workscene_fps.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_workscene\_fps module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_workscene_fps
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_yeti_cache.rst b/docs/source/pype.plugins.maya.publish.collect_yeti_cache.rst
deleted file mode 100644
index 32ab50baca..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_yeti_cache.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_yeti\_cache module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_yeti_cache
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.collect_yeti_rig.rst b/docs/source/pype.plugins.maya.publish.collect_yeti_rig.rst
deleted file mode 100644
index 8cf968b7c5..0000000000
--- a/docs/source/pype.plugins.maya.publish.collect_yeti_rig.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.collect\_yeti\_rig module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.collect_yeti_rig
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.determine_future_version.rst b/docs/source/pype.plugins.maya.publish.determine_future_version.rst
deleted file mode 100644
index 55c6155680..0000000000
--- a/docs/source/pype.plugins.maya.publish.determine_future_version.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.determine\_future\_version module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.determine_future_version
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_animation.rst b/docs/source/pype.plugins.maya.publish.extract_animation.rst
deleted file mode 100644
index 3649723042..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_animation.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_animation module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_animation
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_ass.rst b/docs/source/pype.plugins.maya.publish.extract_ass.rst
deleted file mode 100644
index be8123e5d7..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_ass.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_ass module
-=============================================
-
-.. automodule:: pype.plugins.maya.publish.extract_ass
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_assembly.rst b/docs/source/pype.plugins.maya.publish.extract_assembly.rst
deleted file mode 100644
index b36e8f6d30..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_assembly.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_assembly module
-==================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_assembly
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_assproxy.rst b/docs/source/pype.plugins.maya.publish.extract_assproxy.rst
deleted file mode 100644
index fc97a2ee46..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_assproxy.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_assproxy module
-==================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_assproxy
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_camera_alembic.rst b/docs/source/pype.plugins.maya.publish.extract_camera_alembic.rst
deleted file mode 100644
index a9df3da011..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_camera_alembic.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_camera\_alembic module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_camera_alembic
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_camera_mayaScene.rst b/docs/source/pype.plugins.maya.publish.extract_camera_mayaScene.rst
deleted file mode 100644
index db1799f52f..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_camera_mayaScene.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_camera\_mayaScene module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_camera_mayaScene
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_fbx.rst b/docs/source/pype.plugins.maya.publish.extract_fbx.rst
deleted file mode 100644
index fffd5a6394..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_fbx.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_fbx module
-=============================================
-
-.. automodule:: pype.plugins.maya.publish.extract_fbx
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_look.rst b/docs/source/pype.plugins.maya.publish.extract_look.rst
deleted file mode 100644
index f2708678ce..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_look.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_look module
-==============================================
-
-.. automodule:: pype.plugins.maya.publish.extract_look
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_maya_scene_raw.rst b/docs/source/pype.plugins.maya.publish.extract_maya_scene_raw.rst
deleted file mode 100644
index 1e080dd0eb..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_maya_scene_raw.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_maya\_scene\_raw module
-==========================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_maya_scene_raw
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_model.rst b/docs/source/pype.plugins.maya.publish.extract_model.rst
deleted file mode 100644
index c78b49c777..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_model.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_model module
-===============================================
-
-.. automodule:: pype.plugins.maya.publish.extract_model
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_playblast.rst b/docs/source/pype.plugins.maya.publish.extract_playblast.rst
deleted file mode 100644
index 1aa284b370..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_playblast.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_playblast module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_playblast
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_pointcache.rst b/docs/source/pype.plugins.maya.publish.extract_pointcache.rst
deleted file mode 100644
index 97ebde4933..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_pointcache.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_pointcache module
-====================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_pointcache
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_rendersetup.rst b/docs/source/pype.plugins.maya.publish.extract_rendersetup.rst
deleted file mode 100644
index 86cb178f42..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_rendersetup.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_rendersetup module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_rendersetup
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_rig.rst b/docs/source/pype.plugins.maya.publish.extract_rig.rst
deleted file mode 100644
index f6419c9473..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_rig.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_rig module
-=============================================
-
-.. automodule:: pype.plugins.maya.publish.extract_rig
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_thumbnail.rst b/docs/source/pype.plugins.maya.publish.extract_thumbnail.rst
deleted file mode 100644
index 2d03e11d55..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_thumbnail.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_thumbnail module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_thumbnail
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_vrayproxy.rst b/docs/source/pype.plugins.maya.publish.extract_vrayproxy.rst
deleted file mode 100644
index 5439ff59ca..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_vrayproxy.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_vrayproxy module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_vrayproxy
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_yeti_cache.rst b/docs/source/pype.plugins.maya.publish.extract_yeti_cache.rst
deleted file mode 100644
index 7ad84dfc70..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_yeti_cache.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_yeti\_cache module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_yeti_cache
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.extract_yeti_rig.rst b/docs/source/pype.plugins.maya.publish.extract_yeti_rig.rst
deleted file mode 100644
index 76d483d91b..0000000000
--- a/docs/source/pype.plugins.maya.publish.extract_yeti_rig.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.extract\_yeti\_rig module
-===================================================
-
-.. automodule:: pype.plugins.maya.publish.extract_yeti_rig
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.increment_current_file_deadline.rst b/docs/source/pype.plugins.maya.publish.increment_current_file_deadline.rst
deleted file mode 100644
index 97126a6c77..0000000000
--- a/docs/source/pype.plugins.maya.publish.increment_current_file_deadline.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.increment\_current\_file\_deadline module
-===================================================================
-
-.. automodule:: pype.plugins.maya.publish.increment_current_file_deadline
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.rst b/docs/source/pype.plugins.maya.publish.rst
deleted file mode 100644
index dba0a9118c..0000000000
--- a/docs/source/pype.plugins.maya.publish.rst
+++ /dev/null
@@ -1,146 +0,0 @@
-pype.plugins.maya.publish package
-=================================
-
-.. automodule:: pype.plugins.maya.publish
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.plugins.maya.publish.collect_animation
-   pype.plugins.maya.publish.collect_ass
-   pype.plugins.maya.publish.collect_assembly
-   pype.plugins.maya.publish.collect_file_dependencies
-   pype.plugins.maya.publish.collect_ftrack_family
-   pype.plugins.maya.publish.collect_history
-   pype.plugins.maya.publish.collect_instances
-   pype.plugins.maya.publish.collect_look
-   pype.plugins.maya.publish.collect_maya_units
-   pype.plugins.maya.publish.collect_maya_workspace
-   pype.plugins.maya.publish.collect_mayaascii
-   pype.plugins.maya.publish.collect_model
-   pype.plugins.maya.publish.collect_remove_marked
-   pype.plugins.maya.publish.collect_render
-   pype.plugins.maya.publish.collect_render_layer_aovs
-   pype.plugins.maya.publish.collect_renderable_camera
-   pype.plugins.maya.publish.collect_review
-   pype.plugins.maya.publish.collect_rig
-   pype.plugins.maya.publish.collect_scene
-   pype.plugins.maya.publish.collect_unreal_staticmesh
-   pype.plugins.maya.publish.collect_workscene_fps
-   pype.plugins.maya.publish.collect_yeti_cache
-   pype.plugins.maya.publish.collect_yeti_rig
-   pype.plugins.maya.publish.determine_future_version
-   pype.plugins.maya.publish.extract_animation
-   pype.plugins.maya.publish.extract_ass
-   pype.plugins.maya.publish.extract_assembly
-   pype.plugins.maya.publish.extract_assproxy
-   pype.plugins.maya.publish.extract_camera_alembic
-   pype.plugins.maya.publish.extract_camera_mayaScene
-   pype.plugins.maya.publish.extract_fbx
-   pype.plugins.maya.publish.extract_look
-   pype.plugins.maya.publish.extract_maya_scene_raw
-   pype.plugins.maya.publish.extract_model
-   pype.plugins.maya.publish.extract_playblast
-   pype.plugins.maya.publish.extract_pointcache
-   pype.plugins.maya.publish.extract_rendersetup
-   pype.plugins.maya.publish.extract_rig
-   pype.plugins.maya.publish.extract_thumbnail
-   pype.plugins.maya.publish.extract_vrayproxy
-   pype.plugins.maya.publish.extract_yeti_cache
-   pype.plugins.maya.publish.extract_yeti_rig
-   pype.plugins.maya.publish.increment_current_file_deadline
-   pype.plugins.maya.publish.save_scene
-   pype.plugins.maya.publish.submit_maya_deadline
-   pype.plugins.maya.publish.submit_maya_muster
-   pype.plugins.maya.publish.validate_animation_content
-   pype.plugins.maya.publish.validate_animation_out_set_related_node_ids
-   pype.plugins.maya.publish.validate_ass_relative_paths
-   pype.plugins.maya.publish.validate_assembly_name
-   pype.plugins.maya.publish.validate_assembly_namespaces
-   pype.plugins.maya.publish.validate_assembly_transforms
-   pype.plugins.maya.publish.validate_attributes
-   pype.plugins.maya.publish.validate_camera_attributes
-   pype.plugins.maya.publish.validate_camera_contents
-   pype.plugins.maya.publish.validate_color_sets
-   pype.plugins.maya.publish.validate_current_renderlayer_renderable
-   pype.plugins.maya.publish.validate_deadline_connection
-   pype.plugins.maya.publish.validate_frame_range
-   pype.plugins.maya.publish.validate_instance_has_members
-   pype.plugins.maya.publish.validate_instance_subset
-   pype.plugins.maya.publish.validate_instancer_content
-   pype.plugins.maya.publish.validate_instancer_frame_ranges
-   pype.plugins.maya.publish.validate_joints_hidden
-   pype.plugins.maya.publish.validate_look_contents
-   pype.plugins.maya.publish.validate_look_default_shaders_connections
-   pype.plugins.maya.publish.validate_look_id_reference_edits
-   pype.plugins.maya.publish.validate_look_members_unique
-   pype.plugins.maya.publish.validate_look_no_default_shaders
-   pype.plugins.maya.publish.validate_look_sets
-   pype.plugins.maya.publish.validate_look_shading_group
-   pype.plugins.maya.publish.validate_look_single_shader
-   pype.plugins.maya.publish.validate_maya_units
-   pype.plugins.maya.publish.validate_mesh_arnold_attributes
-   pype.plugins.maya.publish.validate_mesh_has_uv
-   pype.plugins.maya.publish.validate_mesh_lamina_faces
-   pype.plugins.maya.publish.validate_mesh_no_negative_scale
-   pype.plugins.maya.publish.validate_mesh_non_manifold
-   pype.plugins.maya.publish.validate_mesh_non_zero_edge
-   pype.plugins.maya.publish.validate_mesh_normals_unlocked
-   pype.plugins.maya.publish.validate_mesh_overlapping_uvs
-   pype.plugins.maya.publish.validate_mesh_shader_connections
-   pype.plugins.maya.publish.validate_mesh_single_uv_set
-   pype.plugins.maya.publish.validate_mesh_uv_set_map1
-   pype.plugins.maya.publish.validate_mesh_vertices_have_edges
-   pype.plugins.maya.publish.validate_model_content
-   pype.plugins.maya.publish.validate_model_name
-   pype.plugins.maya.publish.validate_muster_connection
-   pype.plugins.maya.publish.validate_no_animation
-   pype.plugins.maya.publish.validate_no_default_camera
-   pype.plugins.maya.publish.validate_no_namespace
-   pype.plugins.maya.publish.validate_no_null_transforms
-   pype.plugins.maya.publish.validate_no_unknown_nodes
-   pype.plugins.maya.publish.validate_no_vraymesh
-   pype.plugins.maya.publish.validate_node_ids
-   pype.plugins.maya.publish.validate_node_ids_deformed_shapes
-   pype.plugins.maya.publish.validate_node_ids_in_database
-   pype.plugins.maya.publish.validate_node_ids_related
-   pype.plugins.maya.publish.validate_node_ids_unique
-   pype.plugins.maya.publish.validate_node_no_ghosting
-   pype.plugins.maya.publish.validate_render_image_rule
-   pype.plugins.maya.publish.validate_render_no_default_cameras
-   pype.plugins.maya.publish.validate_render_single_camera
-   pype.plugins.maya.publish.validate_renderlayer_aovs
-   pype.plugins.maya.publish.validate_rendersettings
-   pype.plugins.maya.publish.validate_resources
-   pype.plugins.maya.publish.validate_rig_contents
-   pype.plugins.maya.publish.validate_rig_controllers
-   pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes
-   pype.plugins.maya.publish.validate_rig_out_set_node_ids
-   pype.plugins.maya.publish.validate_rig_output_ids
-   pype.plugins.maya.publish.validate_scene_set_workspace
-   pype.plugins.maya.publish.validate_shader_name
-   pype.plugins.maya.publish.validate_shape_default_names
-   pype.plugins.maya.publish.validate_shape_render_stats
-   pype.plugins.maya.publish.validate_single_assembly
-   pype.plugins.maya.publish.validate_skinCluster_deformer_set
-   pype.plugins.maya.publish.validate_step_size
-   pype.plugins.maya.publish.validate_transform_naming_suffix
-   pype.plugins.maya.publish.validate_transform_zero
-   pype.plugins.maya.publish.validate_unicode_strings
-   pype.plugins.maya.publish.validate_unreal_mesh_triangulated
-   pype.plugins.maya.publish.validate_unreal_staticmesh_naming
-   pype.plugins.maya.publish.validate_unreal_up_axis
-   pype.plugins.maya.publish.validate_vray_distributed_rendering
-   pype.plugins.maya.publish.validate_vray_translator_settings
-   pype.plugins.maya.publish.validate_vrayproxy
-   pype.plugins.maya.publish.validate_vrayproxy_members
-   pype.plugins.maya.publish.validate_yeti_renderscript_callbacks
-   pype.plugins.maya.publish.validate_yeti_rig_cache_state
-   pype.plugins.maya.publish.validate_yeti_rig_input_in_instance
-   pype.plugins.maya.publish.validate_yeti_rig_settings
deleted file mode 100644
index 2537bca03d..0000000000
--- a/docs/source/pype.plugins.maya.publish.save_scene.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.save\_scene module
-============================================
-
-.. automodule:: pype.plugins.maya.publish.save_scene
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.submit_maya_deadline.rst b/docs/source/pype.plugins.maya.publish.submit_maya_deadline.rst
deleted file mode 100644
index 0e521cec4e..0000000000
--- a/docs/source/pype.plugins.maya.publish.submit_maya_deadline.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.submit\_maya\_deadline module
-=======================================================
-
-.. automodule:: pype.plugins.maya.publish.submit_maya_deadline
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.submit_maya_muster.rst b/docs/source/pype.plugins.maya.publish.submit_maya_muster.rst
deleted file mode 100644
index 4ae263e157..0000000000
--- a/docs/source/pype.plugins.maya.publish.submit_maya_muster.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.submit\_maya\_muster module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.submit_maya_muster
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_animation_content.rst b/docs/source/pype.plugins.maya.publish.validate_animation_content.rst
deleted file mode 100644
index 65191bb957..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_animation_content.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_animation\_content module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_animation_content
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_animation_out_set_related_node_ids.rst b/docs/source/pype.plugins.maya.publish.validate_animation_out_set_related_node_ids.rst
deleted file mode 100644
index ea289e84ed..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_animation_out_set_related_node_ids.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_animation\_out\_set\_related\_node\_ids module
-==================================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_animation_out_set_related_node_ids
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_ass_relative_paths.rst b/docs/source/pype.plugins.maya.publish.validate_ass_relative_paths.rst
deleted file mode 100644
index f35ef916cc..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_ass_relative_paths.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_ass\_relative\_paths module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_ass_relative_paths
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_assembly_name.rst b/docs/source/pype.plugins.maya.publish.validate_assembly_name.rst
deleted file mode 100644
index c8178226b2..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_assembly_name.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_assembly\_name module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_assembly_name
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_assembly_namespaces.rst b/docs/source/pype.plugins.maya.publish.validate_assembly_namespaces.rst
deleted file mode 100644
index 847b90281e..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_assembly_namespaces.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_assembly\_namespaces module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_assembly_namespaces
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_assembly_transforms.rst b/docs/source/pype.plugins.maya.publish.validate_assembly_transforms.rst
deleted file mode 100644
index b4348a2908..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_assembly_transforms.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_assembly\_transforms module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_assembly_transforms
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_attributes.rst b/docs/source/pype.plugins.maya.publish.validate_attributes.rst
deleted file mode 100644
index 862820a7c0..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_attributes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_attributes module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_attributes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_camera_attributes.rst b/docs/source/pype.plugins.maya.publish.validate_camera_attributes.rst
deleted file mode 100644
index 054198f812..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_camera_attributes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_camera\_attributes module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_camera_attributes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_camera_contents.rst b/docs/source/pype.plugins.maya.publish.validate_camera_contents.rst
deleted file mode 100644
index 9cf6604f7a..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_camera_contents.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_camera\_contents module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_camera_contents
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_color_sets.rst b/docs/source/pype.plugins.maya.publish.validate_color_sets.rst
deleted file mode 100644
index 59bb5607bf..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_color_sets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_color\_sets module
-======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_color_sets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_current_renderlayer_renderable.rst b/docs/source/pype.plugins.maya.publish.validate_current_renderlayer_renderable.rst
deleted file mode 100644
index 31c52477aa..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_current_renderlayer_renderable.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_current\_renderlayer\_renderable module
-===========================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_current_renderlayer_renderable
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_deadline_connection.rst b/docs/source/pype.plugins.maya.publish.validate_deadline_connection.rst
deleted file mode 100644
index 3f8c4b6313..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_deadline_connection.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_deadline\_connection module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_deadline_connection
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_frame_range.rst b/docs/source/pype.plugins.maya.publish.validate_frame_range.rst
deleted file mode 100644
index 0ccc8ed1cd..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_frame_range.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_frame\_range module
-=======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_frame_range
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_instance_has_members.rst b/docs/source/pype.plugins.maya.publish.validate_instance_has_members.rst
deleted file mode 100644
index 862d32f114..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_instance_has_members.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_instance\_has\_members module
-=================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_instance_has_members
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_instance_subset.rst b/docs/source/pype.plugins.maya.publish.validate_instance_subset.rst
deleted file mode 100644
index f71febb73c..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_instance_subset.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_instance\_subset module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_instance_subset
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_instancer_content.rst b/docs/source/pype.plugins.maya.publish.validate_instancer_content.rst
deleted file mode 100644
index 761889dd4d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_instancer_content.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_instancer\_content module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_instancer_content
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_instancer_frame_ranges.rst b/docs/source/pype.plugins.maya.publish.validate_instancer_frame_ranges.rst
deleted file mode 100644
index 85338c3e2d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_instancer_frame_ranges.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_instancer\_frame\_ranges module
-===================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_instancer_frame_ranges
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_joints_hidden.rst b/docs/source/pype.plugins.maya.publish.validate_joints_hidden.rst
deleted file mode 100644
index ede5af0c67..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_joints_hidden.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_joints\_hidden module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_joints_hidden
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_contents.rst b/docs/source/pype.plugins.maya.publish.validate_look_contents.rst
deleted file mode 100644
index 946f924fb3..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_contents.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_contents module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_contents
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_default_shaders_connections.rst b/docs/source/pype.plugins.maya.publish.validate_look_default_shaders_connections.rst
deleted file mode 100644
index e293cfc0f1..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_default_shaders_connections.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_default\_shaders\_connections module
-==============================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_default_shaders_connections
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_id_reference_edits.rst b/docs/source/pype.plugins.maya.publish.validate_look_id_reference_edits.rst
deleted file mode 100644
index 007f4e2d03..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_id_reference_edits.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_id\_reference\_edits module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_id_reference_edits
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_members_unique.rst b/docs/source/pype.plugins.maya.publish.validate_look_members_unique.rst
deleted file mode 100644
index 3378e8a0f6..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_members_unique.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_members\_unique module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_members_unique
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_no_default_shaders.rst b/docs/source/pype.plugins.maya.publish.validate_look_no_default_shaders.rst
deleted file mode 100644
index 662e2c7621..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_no_default_shaders.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_no\_default\_shaders module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_no_default_shaders
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_sets.rst b/docs/source/pype.plugins.maya.publish.validate_look_sets.rst
deleted file mode 100644
index 5427331568..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_sets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_sets module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_sets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_shading_group.rst b/docs/source/pype.plugins.maya.publish.validate_look_shading_group.rst
deleted file mode 100644
index 259f4952b7..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_shading_group.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_shading\_group module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_shading_group
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_look_single_shader.rst b/docs/source/pype.plugins.maya.publish.validate_look_single_shader.rst
deleted file mode 100644
index fa43283416..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_look_single_shader.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_look\_single\_shader module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_look_single_shader
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_maya_units.rst b/docs/source/pype.plugins.maya.publish.validate_maya_units.rst
deleted file mode 100644
index 16af19f6d9..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_maya_units.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_maya\_units module
-======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_maya_units
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_arnold_attributes.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_arnold_attributes.rst
deleted file mode 100644
index ef18ad1457..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_arnold_attributes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_arnold\_attributes module
-===================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_arnold_attributes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_has_uv.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_has_uv.rst
deleted file mode 100644
index c6af7063c3..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_has_uv.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_has\_uv module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_has_uv
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_lamina_faces.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_lamina_faces.rst
deleted file mode 100644
index 006488e77f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_lamina_faces.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_lamina\_faces module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_lamina_faces
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_no_negative_scale.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_no_negative_scale.rst
deleted file mode 100644
index 8720f3d018..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_no_negative_scale.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_no\_negative\_scale module
-====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_no_negative_scale
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_non_manifold.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_non_manifold.rst
deleted file mode 100644
index a69a4c6fc4..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_non_manifold.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_non\_manifold module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_non_manifold
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_non_zero_edge.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_non_zero_edge.rst
deleted file mode 100644
index 89ea60d1bc..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_non_zero_edge.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_non\_zero\_edge module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_non_zero_edge
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_normals_unlocked.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_normals_unlocked.rst
deleted file mode 100644
index 7dfbd0717d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_normals_unlocked.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_normals\_unlocked module
-==================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_normals_unlocked
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_overlapping_uvs.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_overlapping_uvs.rst
deleted file mode 100644
index f5df633124..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_overlapping_uvs.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_overlapping\_uvs module
-=================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_overlapping_uvs
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_shader_connections.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_shader_connections.rst
deleted file mode 100644
index b3cd77ab2a..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_shader_connections.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_shader\_connections module
-====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_shader_connections
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_single_uv_set.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_single_uv_set.rst
deleted file mode 100644
index 29a1217437..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_single_uv_set.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_single\_uv\_set module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_single_uv_set
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_uv_set_map1.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_uv_set_map1.rst
deleted file mode 100644
index 49d1b22497..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_uv_set_map1.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_uv\_set\_map1 module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_uv_set_map1
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_mesh_vertices_have_edges.rst b/docs/source/pype.plugins.maya.publish.validate_mesh_vertices_have_edges.rst
deleted file mode 100644
index 99e3047e3d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_mesh_vertices_have_edges.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_mesh\_vertices\_have\_edges module
-======================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_mesh_vertices_have_edges
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_model_content.rst b/docs/source/pype.plugins.maya.publish.validate_model_content.rst
deleted file mode 100644
index dc0a415718..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_model_content.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_model\_content module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_model_content
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_model_name.rst b/docs/source/pype.plugins.maya.publish.validate_model_name.rst
deleted file mode 100644
index ea78ceea70..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_model_name.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_model\_name module
-======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_model_name
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_muster_connection.rst b/docs/source/pype.plugins.maya.publish.validate_muster_connection.rst
deleted file mode 100644
index 4a4a1e926b..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_muster_connection.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_muster\_connection module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_muster_connection
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_animation.rst b/docs/source/pype.plugins.maya.publish.validate_no_animation.rst
deleted file mode 100644
index b42021369d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_animation.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_animation module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_animation
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_default_camera.rst b/docs/source/pype.plugins.maya.publish.validate_no_default_camera.rst
deleted file mode 100644
index 59544369f6..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_default_camera.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_default\_camera module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_default_camera
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_namespace.rst b/docs/source/pype.plugins.maya.publish.validate_no_namespace.rst
deleted file mode 100644
index bdf4ceb324..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_namespace.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_namespace module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_namespace
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_null_transforms.rst b/docs/source/pype.plugins.maya.publish.validate_no_null_transforms.rst
deleted file mode 100644
index 12beed8c33..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_null_transforms.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_null\_transforms module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_null_transforms
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_unknown_nodes.rst b/docs/source/pype.plugins.maya.publish.validate_no_unknown_nodes.rst
deleted file mode 100644
index 12c977dbb9..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_unknown_nodes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_unknown\_nodes module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_unknown_nodes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_no_vraymesh.rst b/docs/source/pype.plugins.maya.publish.validate_no_vraymesh.rst
deleted file mode 100644
index a1a0b9ee64..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_no_vraymesh.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_no\_vraymesh module
-=======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_no_vraymesh
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids.rst
deleted file mode 100644
index 7b1d79100f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids module
-====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids_deformed_shapes.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids_deformed_shapes.rst
deleted file mode 100644
index 90ef81c5b5..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids_deformed_shapes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids\_deformed\_shapes module
-======================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids_deformed_shapes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids_in_database.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids_in_database.rst
deleted file mode 100644
index 5eb0047d16..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids_in_database.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids\_in\_database module
-==================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids_in_database
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids_related.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids_related.rst
deleted file mode 100644
index 1f030462ae..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids_related.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids\_related module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids_related
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_ids_unique.rst b/docs/source/pype.plugins.maya.publish.validate_node_ids_unique.rst
deleted file mode 100644
index 20ba3a3a6d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_ids_unique.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_ids\_unique module
-============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_ids_unique
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_node_no_ghosting.rst b/docs/source/pype.plugins.maya.publish.validate_node_no_ghosting.rst
deleted file mode 100644
index 8315888630..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_node_no_ghosting.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_node\_no\_ghosting module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_node_no_ghosting
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_render_image_rule.rst b/docs/source/pype.plugins.maya.publish.validate_render_image_rule.rst
deleted file mode 100644
index 88870a9ea8..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_render_image_rule.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_render\_image\_rule module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_render_image_rule
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_render_no_default_cameras.rst b/docs/source/pype.plugins.maya.publish.validate_render_no_default_cameras.rst
deleted file mode 100644
index b464dbeab6..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_render_no_default_cameras.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_render\_no\_default\_cameras module
-=======================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_render_no_default_cameras
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_render_single_camera.rst b/docs/source/pype.plugins.maya.publish.validate_render_single_camera.rst
deleted file mode 100644
index 60a0cbd6fb..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_render_single_camera.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_render\_single\_camera module
-=================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_render_single_camera
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_renderlayer_aovs.rst b/docs/source/pype.plugins.maya.publish.validate_renderlayer_aovs.rst
deleted file mode 100644
index 65d5181065..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_renderlayer_aovs.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_renderlayer\_aovs module
-============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_renderlayer_aovs
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rendersettings.rst b/docs/source/pype.plugins.maya.publish.validate_rendersettings.rst
deleted file mode 100644
index fce7dba5b8..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rendersettings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rendersettings module
-=========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rendersettings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_resources.rst b/docs/source/pype.plugins.maya.publish.validate_resources.rst
deleted file mode 100644
index 0a866acdbb..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_resources.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_resources module
-====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_resources
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_contents.rst b/docs/source/pype.plugins.maya.publish.validate_rig_contents.rst
deleted file mode 100644
index dbd7d84bed..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_contents.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_contents module
-========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_contents
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_controllers.rst b/docs/source/pype.plugins.maya.publish.validate_rig_controllers.rst
deleted file mode 100644
index 3bf075e8ad..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_controllers.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_controllers module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_controllers
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes.rst b/docs/source/pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes.rst
deleted file mode 100644
index 67e9256f3a..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_controllers\_arnold\_attributes module
-===============================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_controllers_arnold_attributes
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_out_set_node_ids.rst b/docs/source/pype.plugins.maya.publish.validate_rig_out_set_node_ids.rst
deleted file mode 100644
index e4f1cfc428..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_out_set_node_ids.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_out\_set\_node\_ids module
-===================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_out_set_node_ids
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_rig_output_ids.rst b/docs/source/pype.plugins.maya.publish.validate_rig_output_ids.rst
deleted file mode 100644
index e1d3b1a659..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_rig_output_ids.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_rig\_output\_ids module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_rig_output_ids
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_scene_set_workspace.rst b/docs/source/pype.plugins.maya.publish.validate_scene_set_workspace.rst
deleted file mode 100644
index daf2f152d9..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_scene_set_workspace.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_scene\_set\_workspace module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_scene_set_workspace
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_shader_name.rst b/docs/source/pype.plugins.maya.publish.validate_shader_name.rst
deleted file mode 100644
index ae5b196a1d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_shader_name.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_shader\_name module
-=======================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_shader_name
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_shape_default_names.rst b/docs/source/pype.plugins.maya.publish.validate_shape_default_names.rst
deleted file mode 100644
index 49effc932d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_shape_default_names.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_shape\_default\_names module
-================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_shape_default_names
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_shape_render_stats.rst b/docs/source/pype.plugins.maya.publish.validate_shape_render_stats.rst
deleted file mode 100644
index 359af50a0f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_shape_render_stats.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_shape\_render\_stats module
-===============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_shape_render_stats
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_single_assembly.rst b/docs/source/pype.plugins.maya.publish.validate_single_assembly.rst
deleted file mode 100644
index 090f57b3ff..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_single_assembly.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_single\_assembly module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_single_assembly
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_skinCluster_deformer_set.rst b/docs/source/pype.plugins.maya.publish.validate_skinCluster_deformer_set.rst
deleted file mode 100644
index 607a610097..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_skinCluster_deformer_set.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_skinCluster\_deformer\_set module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_skinCluster_deformer_set
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_step_size.rst b/docs/source/pype.plugins.maya.publish.validate_step_size.rst
deleted file mode 100644
index bb883ea7b5..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_step_size.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_step\_size module
-=====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_step_size
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_transform_naming_suffix.rst b/docs/source/pype.plugins.maya.publish.validate_transform_naming_suffix.rst
deleted file mode 100644
index 4d7edda78d..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_transform_naming_suffix.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_transform\_naming\_suffix module
-====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_transform_naming_suffix
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_transform_zero.rst b/docs/source/pype.plugins.maya.publish.validate_transform_zero.rst
deleted file mode 100644
index 6d5cacfe00..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_transform_zero.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_transform\_zero module
-==========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_transform_zero
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_unicode_strings.rst b/docs/source/pype.plugins.maya.publish.validate_unicode_strings.rst
deleted file mode 100644
index 9cc17d6810..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_unicode_strings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_unicode\_strings module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_unicode_strings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_unreal_mesh_triangulated.rst b/docs/source/pype.plugins.maya.publish.validate_unreal_mesh_triangulated.rst
deleted file mode 100644
index 4dcb518194..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_unreal_mesh_triangulated.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_unreal\_mesh\_triangulated module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_unreal_mesh_triangulated
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_unreal_staticmesh_naming.rst b/docs/source/pype.plugins.maya.publish.validate_unreal_staticmesh_naming.rst
deleted file mode 100644
index f7225ab395..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_unreal_staticmesh_naming.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_unreal\_staticmesh\_naming module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_unreal_staticmesh_naming
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_unreal_up_axis.rst b/docs/source/pype.plugins.maya.publish.validate_unreal_up_axis.rst
deleted file mode 100644
index ff688c493f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_unreal_up_axis.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_unreal\_up\_axis module
-===========================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_unreal_up_axis
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vray_distributed_rendering.rst b/docs/source/pype.plugins.maya.publish.validate_vray_distributed_rendering.rst
deleted file mode 100644
index f5d05e6d76..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vray_distributed_rendering.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vray\_distributed\_rendering module
-=======================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vray_distributed_rendering
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vray_referenced_aovs.rst b/docs/source/pype.plugins.maya.publish.validate_vray_referenced_aovs.rst
deleted file mode 100644
index 16ad9666aa..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vray_referenced_aovs.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vray\_referenced\_aovs module
-=================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vray_referenced_aovs
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vray_translator_settings.rst b/docs/source/pype.plugins.maya.publish.validate_vray_translator_settings.rst
deleted file mode 100644
index a06a9531dd..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vray_translator_settings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vray\_translator\_settings module
-=====================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vray_translator_settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vrayproxy.rst b/docs/source/pype.plugins.maya.publish.validate_vrayproxy.rst
deleted file mode 100644
index 081f58924a..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vrayproxy.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vrayproxy module
-====================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vrayproxy
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_vrayproxy_members.rst b/docs/source/pype.plugins.maya.publish.validate_vrayproxy_members.rst
deleted file mode 100644
index 7c587f39b0..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_vrayproxy_members.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_vrayproxy\_members module
-=============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_vrayproxy_members
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_yeti_renderscript_callbacks.rst b/docs/source/pype.plugins.maya.publish.validate_yeti_renderscript_callbacks.rst
deleted file mode 100644
index 889d469b2f..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_yeti_renderscript_callbacks.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_yeti\_renderscript\_callbacks module
-========================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_yeti_renderscript_callbacks
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_cache_state.rst b/docs/source/pype.plugins.maya.publish.validate_yeti_rig_cache_state.rst
deleted file mode 100644
index 4138b1e8a4..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_cache_state.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_yeti\_rig\_cache\_state module
-==================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_yeti_rig_cache_state
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_input_in_instance.rst b/docs/source/pype.plugins.maya.publish.validate_yeti_rig_input_in_instance.rst
deleted file mode 100644
index 37b862926c..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_input_in_instance.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_yeti\_rig\_input\_in\_instance module
-=========================================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_yeti_rig_input_in_instance
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_settings.rst b/docs/source/pype.plugins.maya.publish.validate_yeti_rig_settings.rst
deleted file mode 100644
index 9fd54193dc..0000000000
--- a/docs/source/pype.plugins.maya.publish.validate_yeti_rig_settings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.plugins.maya.publish.validate\_yeti\_rig\_settings module
-==============================================================
-
-.. automodule:: pype.plugins.maya.publish.validate_yeti_rig_settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.plugins.maya.rst b/docs/source/pype.plugins.maya.rst
deleted file mode 100644
index 129cf5fce9..0000000000
--- a/docs/source/pype.plugins.maya.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.plugins.maya package
-=========================
-
-.. automodule:: pype.plugins.maya
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.plugins.maya.publish
diff --git a/docs/source/pype.plugins.rst b/docs/source/pype.plugins.rst
deleted file mode 100644
index 8e5e45ba5d..0000000000
--- a/docs/source/pype.plugins.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.plugins package
-====================
-
-.. automodule:: pype.plugins
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.plugins.maya
diff --git a/docs/source/pype.pype_commands.rst b/docs/source/pype.pype_commands.rst
deleted file mode 100644
index b8a416df7b..0000000000
--- a/docs/source/pype.pype_commands.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.pype\_commands module
-==========================
-
-.. automodule:: pype.pype_commands
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.resources.rst b/docs/source/pype.resources.rst
deleted file mode 100644
index 2fb5b92dce..0000000000
--- a/docs/source/pype.resources.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.resources package
-======================
-
-.. automodule:: pype.resources
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.rst b/docs/source/pype.rst
deleted file mode 100644
index 3589d2f3fe..0000000000
--- a/docs/source/pype.rst
+++ /dev/null
@@ -1,99 +0,0 @@
-pype package
-============
-
-.. automodule:: pype
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.hosts
-   pype.lib
-   pype.modules
-   pype.resources
-   pype.scripts
-   pype.settings
-   pype.tests
-   pype.tools
-   pype.vendor
-   pype.widgets
-
-Submodules
-----------
-
-pype.action module
-------------------
-
-.. automodule:: pype.action
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.api module
----------------
-
-.. automodule:: pype.api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.cli module
----------------
-
-.. automodule:: pype.cli
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.launcher\_actions module
------------------------------
-
-.. automodule:: pype.launcher_actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.modules\_manager module
-----------------------------
-
-.. automodule:: pype.modules_manager
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.plugin module
-------------------
-
-.. automodule:: pype.plugin
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.pype\_commands module
---------------------------
-
-.. automodule:: pype.pype_commands
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.setdress\_api module
--------------------------
-
-.. automodule:: pype.setdress_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.version module
--------------------
-
-.. automodule:: pype.version
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.export_maya_ass_job.rst b/docs/source/pype.scripts.export_maya_ass_job.rst
deleted file mode 100644
index c35cc49ddd..0000000000
--- a/docs/source/pype.scripts.export_maya_ass_job.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.export\_maya\_ass\_job module
-==========================================
-
-.. automodule:: pype.scripts.export_maya_ass_job
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.fusion_switch_shot.rst b/docs/source/pype.scripts.fusion_switch_shot.rst
deleted file mode 100644
index 39d3473d16..0000000000
--- a/docs/source/pype.scripts.fusion_switch_shot.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.fusion\_switch\_shot module
-========================================
-
-.. automodule:: pype.scripts.fusion_switch_shot
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.otio_burnin.rst b/docs/source/pype.scripts.otio_burnin.rst
deleted file mode 100644
index e6a93017f5..0000000000
--- a/docs/source/pype.scripts.otio_burnin.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.otio\_burnin module
-================================
-
-.. automodule:: pype.scripts.otio_burnin
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.publish_deadline.rst b/docs/source/pype.scripts.publish_deadline.rst
deleted file mode 100644
index d134e17244..0000000000
--- a/docs/source/pype.scripts.publish_deadline.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.publish\_deadline module
-=====================================
-
-.. automodule:: pype.scripts.publish_deadline
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.publish_filesequence.rst b/docs/source/pype.scripts.publish_filesequence.rst
deleted file mode 100644
index 440d52caad..0000000000
--- a/docs/source/pype.scripts.publish_filesequence.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.publish\_filesequence module
-=========================================
-
-.. automodule:: pype.scripts.publish_filesequence
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.rst b/docs/source/pype.scripts.rst
deleted file mode 100644
index 5985771b97..0000000000
--- a/docs/source/pype.scripts.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-pype.scripts package
-====================
-
-.. automodule:: pype.scripts
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.scripts.slates
-
-Submodules
-----------
-
-pype.scripts.export\_maya\_ass\_job module
-------------------------------------------
-
-.. automodule:: pype.scripts.export_maya_ass_job
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.fusion\_switch\_shot module
-----------------------------------------
-
-.. automodule:: pype.scripts.fusion_switch_shot
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.otio\_burnin module
---------------------------------
-
-.. automodule:: pype.scripts.otio_burnin
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.publish\_deadline module
--------------------------------------
-
-.. automodule:: pype.scripts.publish_deadline
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.publish\_filesequence module
------------------------------------------
-
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.rst b/docs/source/pype.scripts.slates.rst
deleted file mode 100644
index 74b4cb4343..0000000000
--- a/docs/source/pype.scripts.slates.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.scripts.slates package
-===========================
-
-.. automodule:: pype.scripts.slates
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.scripts.slates.slate_base
diff --git a/docs/source/pype.scripts.slates.slate_base.api.rst b/docs/source/pype.scripts.slates.slate_base.api.rst
deleted file mode 100644
index 0016a5c42a..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.api module
-==========================================
-
-.. automodule:: pype.scripts.slates.slate_base.api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.base.rst b/docs/source/pype.scripts.slates.slate_base.base.rst
deleted file mode 100644
index 5e34d654b0..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.base.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.base module
-===========================================
-
-.. automodule:: pype.scripts.slates.slate_base.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.example.rst b/docs/source/pype.scripts.slates.slate_base.example.rst
deleted file mode 100644
index 95ebcc835a..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.example.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.example module
-==============================================
-
-.. automodule:: pype.scripts.slates.slate_base.example
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.font_factory.rst b/docs/source/pype.scripts.slates.slate_base.font_factory.rst
deleted file mode 100644
index c53efef554..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.font_factory.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.font\_factory module
-====================================================
-
-.. automodule:: pype.scripts.slates.slate_base.font_factory
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.items.rst b/docs/source/pype.scripts.slates.slate_base.items.rst
deleted file mode 100644
index 25abb11bb9..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.items.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.items module
-============================================
-
-.. automodule:: pype.scripts.slates.slate_base.items
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.layer.rst b/docs/source/pype.scripts.slates.slate_base.layer.rst
deleted file mode 100644
index 8681e3accf..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.layer.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.layer module
-============================================
-
-.. automodule:: pype.scripts.slates.slate_base.layer
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.lib.rst b/docs/source/pype.scripts.slates.slate_base.lib.rst
deleted file mode 100644
index c4ef2c912e..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.lib module
-==========================================
-
-.. automodule:: pype.scripts.slates.slate_base.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.main_frame.rst b/docs/source/pype.scripts.slates.slate_base.main_frame.rst
deleted file mode 100644
index 5093c28a74..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.main_frame.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.scripts.slates.slate\_base.main\_frame module
-==================================================
-
-.. automodule:: pype.scripts.slates.slate_base.main_frame
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.scripts.slates.slate_base.rst b/docs/source/pype.scripts.slates.slate_base.rst
deleted file mode 100644
index 00726c04bf..0000000000
--- a/docs/source/pype.scripts.slates.slate_base.rst
+++ /dev/null
@@ -1,74 +0,0 @@
-pype.scripts.slates.slate\_base package
-=======================================
-
-.. automodule:: pype.scripts.slates.slate_base
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.scripts.slates.slate\_base.api module
-------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.api
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.base module
--------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.example module
-----------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.example
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.font\_factory module
-----------------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.font_factory
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.items module
---------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.items
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.layer module
---------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.layer
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.lib module
-------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.scripts.slates.slate\_base.main\_frame module
---------------------------------------------------
-
-.. automodule:: pype.scripts.slates.slate_base.main_frame
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.setdress_api.rst b/docs/source/pype.setdress_api.rst
deleted file mode 100644
index 95638ea64d..0000000000
--- a/docs/source/pype.setdress_api.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.setdress\_api module
-=========================
-
-.. automodule:: pype.setdress_api
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.settings.constants.rst b/docs/source/pype.settings.constants.rst
deleted file mode 100644
index ac652089c8..0000000000
--- a/docs/source/pype.settings.constants.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.settings.constants module
-==============================
-
-.. automodule:: pype.settings.constants
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.settings.handlers.rst b/docs/source/pype.settings.handlers.rst
deleted file mode 100644
index 60ea0ae952..0000000000
--- a/docs/source/pype.settings.handlers.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.settings.handlers module
-=============================
-
-.. automodule:: pype.settings.handlers
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.settings.lib.rst b/docs/source/pype.settings.lib.rst
deleted file mode 100644
index d6e3e8bd06..0000000000
--- a/docs/source/pype.settings.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.settings.lib module
-========================
-
-.. automodule:: pype.settings.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.settings.rst b/docs/source/pype.settings.rst
deleted file mode 100644
index 5bf131d555..0000000000
--- a/docs/source/pype.settings.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-pype.settings package
-=====================
-
-.. automodule:: pype.settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.settings.lib module
-------------------------
-
-.. automodule:: pype.settings.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.lib.rst b/docs/source/pype.tests.lib.rst
deleted file mode 100644
index 375ebd0258..0000000000
--- a/docs/source/pype.tests.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.lib module
-=====================
-
-.. automodule:: pype.tests.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.rst b/docs/source/pype.tests.rst
deleted file mode 100644
index 3f34cdcd77..0000000000
--- a/docs/source/pype.tests.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-pype.tests package
-==================
-
-.. automodule:: pype.tests
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.tests.lib module
----------------------
-
-.. automodule:: pype.tests.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tests.test\_avalon\_plugin\_presets module
------------------------------------------------
-
-.. automodule:: pype.tests.test_avalon_plugin_presets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tests.test\_mongo\_performance module
-------------------------------------------
-
-.. automodule:: pype.tests.test_mongo_performance
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tests.test\_pyblish\_filter module
----------------------------------------
-
-.. automodule:: pype.tests.test_pyblish_filter
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.test_avalon_plugin_presets.rst b/docs/source/pype.tests.test_avalon_plugin_presets.rst
deleted file mode 100644
index b4ff802256..0000000000
--- a/docs/source/pype.tests.test_avalon_plugin_presets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.test\_avalon\_plugin\_presets module
-===============================================
-
-.. automodule:: pype.tests.test_avalon_plugin_presets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.test_lib_restructuralization.rst b/docs/source/pype.tests.test_lib_restructuralization.rst
deleted file mode 100644
index 8d426fcb6b..0000000000
--- a/docs/source/pype.tests.test_lib_restructuralization.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.test\_lib\_restructuralization module
-================================================
-
-.. automodule:: pype.tests.test_lib_restructuralization
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.test_mongo_performance.rst b/docs/source/pype.tests.test_mongo_performance.rst
deleted file mode 100644
index 4686247e59..0000000000
--- a/docs/source/pype.tests.test_mongo_performance.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.test\_mongo\_performance module
-==========================================
-
-.. automodule:: pype.tests.test_mongo_performance
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tests.test_pyblish_filter.rst b/docs/source/pype.tests.test_pyblish_filter.rst
deleted file mode 100644
index 196ec02433..0000000000
--- a/docs/source/pype.tests.test_pyblish_filter.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tests.test\_pyblish\_filter module
-=======================================
-
-.. automodule:: pype.tests.test_pyblish_filter
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.assetcreator.app.rst b/docs/source/pype.tools.assetcreator.app.rst
deleted file mode 100644
index b46281b07a..0000000000
--- a/docs/source/pype.tools.assetcreator.app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.assetcreator.app module
-==================================
-
-.. automodule:: pype.tools.assetcreator.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.assetcreator.model.rst b/docs/source/pype.tools.assetcreator.model.rst
deleted file mode 100644
index 752791d07c..0000000000
--- a/docs/source/pype.tools.assetcreator.model.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.assetcreator.model module
-====================================
-
-.. automodule:: pype.tools.assetcreator.model
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.assetcreator.rst b/docs/source/pype.tools.assetcreator.rst
deleted file mode 100644
index b95c3b3c60..0000000000
--- a/docs/source/pype.tools.assetcreator.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-pype.tools.assetcreator package
-===============================
-
-.. automodule:: pype.tools.assetcreator
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.tools.assetcreator.app module
-----------------------------------
-
-.. automodule:: pype.tools.assetcreator.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.assetcreator.model module
-------------------------------------
-
-.. automodule:: pype.tools.assetcreator.model
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.assetcreator.widget module
--------------------------------------
-
-.. automodule:: pype.tools.assetcreator.widget
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.assetcreator.widget.rst b/docs/source/pype.tools.assetcreator.widget.rst
deleted file mode 100644
index 23ed335306..0000000000
--- a/docs/source/pype.tools.assetcreator.widget.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.assetcreator.widget module
-=====================================
-
-.. automodule:: pype.tools.assetcreator.widget
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.actions.rst b/docs/source/pype.tools.launcher.actions.rst
deleted file mode 100644
index e2ec217d4b..0000000000
--- a/docs/source/pype.tools.launcher.actions.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.actions module
-==================================
-
-.. automodule:: pype.tools.launcher.actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.delegates.rst b/docs/source/pype.tools.launcher.delegates.rst
deleted file mode 100644
index e8a7519cd5..0000000000
--- a/docs/source/pype.tools.launcher.delegates.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.delegates module
-====================================
-
-.. automodule:: pype.tools.launcher.delegates
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.flickcharm.rst b/docs/source/pype.tools.launcher.flickcharm.rst
deleted file mode 100644
index 5105d3235e..0000000000
--- a/docs/source/pype.tools.launcher.flickcharm.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.flickcharm module
-=====================================
-
-.. automodule:: pype.tools.launcher.flickcharm
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.lib.rst b/docs/source/pype.tools.launcher.lib.rst
deleted file mode 100644
index 28db8a6540..0000000000
--- a/docs/source/pype.tools.launcher.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.lib module
-==============================
-
-.. automodule:: pype.tools.launcher.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.models.rst b/docs/source/pype.tools.launcher.models.rst
deleted file mode 100644
index 701826284e..0000000000
--- a/docs/source/pype.tools.launcher.models.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.models module
-=================================
-
-.. automodule:: pype.tools.launcher.models
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.rst b/docs/source/pype.tools.launcher.rst
deleted file mode 100644
index c4782bf9bb..0000000000
--- a/docs/source/pype.tools.launcher.rst
+++ /dev/null
@@ -1,66 +0,0 @@
-pype.tools.launcher package
-===========================
-
-.. automodule:: pype.tools.launcher
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.tools.launcher.actions module
-----------------------------------
-
-.. automodule:: pype.tools.launcher.actions
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.delegates module
-------------------------------------
-
-.. automodule:: pype.tools.launcher.delegates
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.flickcharm module
--------------------------------------
-
-.. automodule:: pype.tools.launcher.flickcharm
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.lib module
-------------------------------
-
-.. automodule:: pype.tools.launcher.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.models module
----------------------------------
-
-.. automodule:: pype.tools.launcher.models
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.widgets module
-----------------------------------
-
-.. automodule:: pype.tools.launcher.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.launcher.window module
----------------------------------
-
-.. automodule:: pype.tools.launcher.window
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.widgets.rst b/docs/source/pype.tools.launcher.widgets.rst
deleted file mode 100644
index 400a5b7a2c..0000000000
--- a/docs/source/pype.tools.launcher.widgets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.widgets module
-==================================
-
-.. automodule:: pype.tools.launcher.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.launcher.window.rst b/docs/source/pype.tools.launcher.window.rst
deleted file mode 100644
index ae92207795..0000000000
--- a/docs/source/pype.tools.launcher.window.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.launcher.window module
-=================================
-
-.. automodule:: pype.tools.launcher.window
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.app.rst b/docs/source/pype.tools.pyblish_pype.app.rst
deleted file mode 100644
index a70aada725..0000000000
--- a/docs/source/pype.tools.pyblish_pype.app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.app module
-===================================
-
-.. automodule:: pype.tools.pyblish_pype.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.awesome.rst b/docs/source/pype.tools.pyblish_pype.awesome.rst
deleted file mode 100644
index 50a81ac5e8..0000000000
--- a/docs/source/pype.tools.pyblish_pype.awesome.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.awesome module
-=======================================
-
-.. automodule:: pype.tools.pyblish_pype.awesome
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.compat.rst b/docs/source/pype.tools.pyblish_pype.compat.rst
deleted file mode 100644
index 4beee41e00..0000000000
--- a/docs/source/pype.tools.pyblish_pype.compat.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.compat module
-======================================
-
-.. automodule:: pype.tools.pyblish_pype.compat
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.constants.rst b/docs/source/pype.tools.pyblish_pype.constants.rst
deleted file mode 100644
index bab67a2270..0000000000
--- a/docs/source/pype.tools.pyblish_pype.constants.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.constants module
-=========================================
-
-.. automodule:: pype.tools.pyblish_pype.constants
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.control.rst b/docs/source/pype.tools.pyblish_pype.control.rst
deleted file mode 100644
index c2f8c0031e..0000000000
--- a/docs/source/pype.tools.pyblish_pype.control.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.control module
-=======================================
-
-.. automodule:: pype.tools.pyblish_pype.control
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.delegate.rst b/docs/source/pype.tools.pyblish_pype.delegate.rst
deleted file mode 100644
index 8796c9830f..0000000000
--- a/docs/source/pype.tools.pyblish_pype.delegate.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.delegate module
-========================================
-
-.. automodule:: pype.tools.pyblish_pype.delegate
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.mock.rst b/docs/source/pype.tools.pyblish_pype.mock.rst
deleted file mode 100644
index 8c22e80856..0000000000
--- a/docs/source/pype.tools.pyblish_pype.mock.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.mock module
-====================================
-
-.. automodule:: pype.tools.pyblish_pype.mock
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.model.rst b/docs/source/pype.tools.pyblish_pype.model.rst
deleted file mode 100644
index 983b06cc8a..0000000000
--- a/docs/source/pype.tools.pyblish_pype.model.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.model module
-=====================================
-
-.. automodule:: pype.tools.pyblish_pype.model
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.rst b/docs/source/pype.tools.pyblish_pype.rst
deleted file mode 100644
index 9479b5399f..0000000000
--- a/docs/source/pype.tools.pyblish_pype.rst
+++ /dev/null
@@ -1,130 +0,0 @@
-pype.tools.pyblish\_pype package
-================================
-
-.. automodule:: pype.tools.pyblish_pype
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.tools.pyblish_pype.vendor
-
-Submodules
-----------
-
-pype.tools.pyblish\_pype.app module
------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.awesome module
----------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.awesome
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.compat module
---------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.compat
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.constants module
------------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.constants
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.control module
----------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.control
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.delegate module
-----------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.delegate
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.mock module
-------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.mock
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.model module
--------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.model
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.settings module
-----------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.util module
-------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.util
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.version module
----------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.version
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.view module
-------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.view
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.widgets module
----------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.window module
---------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.window
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.settings.rst b/docs/source/pype.tools.pyblish_pype.settings.rst
deleted file mode 100644
index 2e4e95cca0..0000000000
--- a/docs/source/pype.tools.pyblish_pype.settings.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.settings module
-========================================
-
-.. automodule:: pype.tools.pyblish_pype.settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.util.rst b/docs/source/pype.tools.pyblish_pype.util.rst
deleted file mode 100644
index fa34295f12..0000000000
--- a/docs/source/pype.tools.pyblish_pype.util.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.util module
-====================================
-
-.. automodule:: pype.tools.pyblish_pype.util
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.animation.rst b/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.animation.rst
deleted file mode 100644
index a892128308..0000000000
--- a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.animation.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.vendor.qtawesome.animation module
-==========================================================
-
-.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome.animation
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.iconic_font.rst b/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.iconic_font.rst
deleted file mode 100644
index 4f4337348f..0000000000
--- a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.iconic_font.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.vendor.qtawesome.iconic\_font module
-=============================================================
-
-.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome.iconic_font
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.rst b/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.rst
deleted file mode 100644
index 68b2ec4659..0000000000
--- a/docs/source/pype.tools.pyblish_pype.vendor.qtawesome.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.tools.pyblish\_pype.vendor.qtawesome package
-=================================================
-
-.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.tools.pyblish\_pype.vendor.qtawesome.animation module
-----------------------------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome.animation
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.pyblish\_pype.vendor.qtawesome.iconic\_font module
--------------------------------------------------------------
-
-.. automodule:: pype.tools.pyblish_pype.vendor.qtawesome.iconic_font
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.vendor.rst b/docs/source/pype.tools.pyblish_pype.vendor.rst
deleted file mode 100644
index 69e6096053..0000000000
--- a/docs/source/pype.tools.pyblish_pype.vendor.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.tools.pyblish\_pype.vendor package
-=======================================
-
-.. automodule:: pype.tools.pyblish_pype.vendor
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.tools.pyblish_pype.vendor.qtawesome
diff --git a/docs/source/pype.tools.pyblish_pype.version.rst b/docs/source/pype.tools.pyblish_pype.version.rst
deleted file mode 100644
index a6ddcd5ce8..0000000000
--- a/docs/source/pype.tools.pyblish_pype.version.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.version module
-=======================================
-
-.. automodule:: pype.tools.pyblish_pype.version
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.view.rst b/docs/source/pype.tools.pyblish_pype.view.rst
deleted file mode 100644
index 21d34d9daa..0000000000
--- a/docs/source/pype.tools.pyblish_pype.view.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.view module
-====================================
-
-.. automodule:: pype.tools.pyblish_pype.view
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.widgets.rst b/docs/source/pype.tools.pyblish_pype.widgets.rst
deleted file mode 100644
index 8a0d3c380a..0000000000
--- a/docs/source/pype.tools.pyblish_pype.widgets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.widgets module
-=======================================
-
-.. automodule:: pype.tools.pyblish_pype.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.pyblish_pype.window.rst b/docs/source/pype.tools.pyblish_pype.window.rst
deleted file mode 100644
index 10f7b1a36e..0000000000
--- a/docs/source/pype.tools.pyblish_pype.window.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.pyblish\_pype.window module
-======================================
-
-.. automodule:: pype.tools.pyblish_pype.window
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.rst b/docs/source/pype.tools.rst
deleted file mode 100644
index d82ed3384a..0000000000
--- a/docs/source/pype.tools.rst
+++ /dev/null
@@ -1,19 +0,0 @@
-pype.tools package
-==================
-
-.. automodule:: pype.tools
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.tools.assetcreator
-   pype.tools.launcher
-   pype.tools.pyblish_pype
-   pype.tools.settings
-   pype.tools.standalonepublish
diff --git a/docs/source/pype.tools.settings.rst b/docs/source/pype.tools.settings.rst
deleted file mode 100644
index ef54851ab1..0000000000
--- a/docs/source/pype.tools.settings.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.tools.settings package
-===========================
-
-.. automodule:: pype.tools.settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.tools.settings.settings
diff --git a/docs/source/pype.tools.settings.settings.rst b/docs/source/pype.tools.settings.settings.rst
deleted file mode 100644
index 793914e1a8..0000000000
--- a/docs/source/pype.tools.settings.settings.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-pype.tools.settings.settings package
-====================================
-
-.. automodule:: pype.tools.settings.settings
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.tools.settings.settings.style
-   pype.tools.settings.settings.widgets
diff --git a/docs/source/pype.tools.settings.settings.style.rst b/docs/source/pype.tools.settings.settings.style.rst
deleted file mode 100644
index 228322245c..0000000000
--- a/docs/source/pype.tools.settings.settings.style.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.style package
-==========================================
-
-.. automodule:: pype.tools.settings.settings.style
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.anatomy_types.rst b/docs/source/pype.tools.settings.settings.widgets.anatomy_types.rst
deleted file mode 100644
index ca951c82f0..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.anatomy_types.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.widgets.anatomy\_types module
-==========================================================
-
-.. automodule:: pype.tools.settings.settings.widgets.anatomy_types
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.base.rst b/docs/source/pype.tools.settings.settings.widgets.base.rst
deleted file mode 100644
index 8964d6f628..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.base.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.widgets.base module
-================================================
-
-.. automodule:: pype.tools.settings.settings.widgets.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.item_types.rst b/docs/source/pype.tools.settings.settings.widgets.item_types.rst
deleted file mode 100644
index 5e505538a7..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.item_types.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.widgets.item\_types module
-=======================================================
-
-.. automodule:: pype.tools.settings.settings.widgets.item_types
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.lib.rst b/docs/source/pype.tools.settings.settings.widgets.lib.rst
deleted file mode 100644
index ae100c74b2..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.lib.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.widgets.lib module
-===============================================
-
-.. automodule:: pype.tools.settings.settings.widgets.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.multiselection_combobox.rst b/docs/source/pype.tools.settings.settings.widgets.multiselection_combobox.rst
deleted file mode 100644
index 004f2ae21f..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.multiselection_combobox.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.widgets.multiselection\_combobox module
-====================================================================
-
-.. automodule:: pype.tools.settings.settings.widgets.multiselection_combobox
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.rst b/docs/source/pype.tools.settings.settings.widgets.rst
deleted file mode 100644
index 8734280a08..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.rst
+++ /dev/null
@@ -1,74 +0,0 @@
-pype.tools.settings.settings.widgets package
-============================================
-
-.. automodule:: pype.tools.settings.settings.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.tools.settings.settings.widgets.anatomy\_types module
-----------------------------------------------------------
-
-.. automodule:: pype.tools.settings.settings.widgets.anatomy_types
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.settings.settings.widgets.base module
-------------------------------------------------
-
-.. automodule:: pype.tools.settings.settings.widgets.base
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.settings.settings.widgets.item\_types module
--------------------------------------------------------
-
-.. automodule:: pype.tools.settings.settings.widgets.item_types
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.settings.settings.widgets.lib module
------------------------------------------------
-
-.. automodule:: pype.tools.settings.settings.widgets.lib
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.settings.settings.widgets.multiselection\_combobox module
---------------------------------------------------------------------
-
-.. automodule:: pype.tools.settings.settings.widgets.multiselection_combobox
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.settings.settings.widgets.tests module
--------------------------------------------------
-
-.. automodule:: pype.tools.settings.settings.widgets.tests
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.settings.settings.widgets.widgets module
----------------------------------------------------
-
-.. automodule:: pype.tools.settings.settings.widgets.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.settings.settings.widgets.window module
---------------------------------------------------
-
-.. automodule:: pype.tools.settings.settings.widgets.window
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.tests.rst b/docs/source/pype.tools.settings.settings.widgets.tests.rst
deleted file mode 100644
index fe8d6dabef..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.tests.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.widgets.tests module
-=================================================
-
-.. automodule:: pype.tools.settings.settings.widgets.tests
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.widgets.rst b/docs/source/pype.tools.settings.settings.widgets.widgets.rst
deleted file mode 100644
index 238e584ac3..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.widgets.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.widgets.widgets module
-===================================================
-
-.. automodule:: pype.tools.settings.settings.widgets.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.settings.settings.widgets.window.rst b/docs/source/pype.tools.settings.settings.widgets.window.rst
deleted file mode 100644
index d67678012f..0000000000
--- a/docs/source/pype.tools.settings.settings.widgets.window.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.settings.settings.widgets.window module
-==================================================
-
-.. automodule:: pype.tools.settings.settings.widgets.window
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.app.rst b/docs/source/pype.tools.standalonepublish.app.rst
deleted file mode 100644
index 74776b80fe..0000000000
--- a/docs/source/pype.tools.standalonepublish.app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.app module
-=======================================
-
-.. automodule:: pype.tools.standalonepublish.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.publish.rst b/docs/source/pype.tools.standalonepublish.publish.rst
deleted file mode 100644
index 47ad57e7fb..0000000000
--- a/docs/source/pype.tools.standalonepublish.publish.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.publish module
-===========================================
-
-.. automodule:: pype.tools.standalonepublish.publish
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.rst b/docs/source/pype.tools.standalonepublish.rst
deleted file mode 100644
index 5ca8194b61..0000000000
--- a/docs/source/pype.tools.standalonepublish.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-pype.tools.standalonepublish package
-====================================
-
-.. automodule:: pype.tools.standalonepublish
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.tools.standalonepublish.widgets
-
-Submodules
-----------
-
-pype.tools.standalonepublish.app module
----------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.publish module
--------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.publish
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_asset.rst b/docs/source/pype.tools.standalonepublish.widgets.model_asset.rst
deleted file mode 100644
index 84d0ca2d93..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.model_asset.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.model\_asset module
-========================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_asset
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match.rst b/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match.rst
deleted file mode 100644
index 0c3ae79b99..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.model\_filter\_proxy\_exact\_match module
-==============================================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort.rst b/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort.rst
deleted file mode 100644
index b828b75030..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.model\_filter\_proxy\_recursive\_sort module
-=================================================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_node.rst b/docs/source/pype.tools.standalonepublish.widgets.model_node.rst
deleted file mode 100644
index 4789b14501..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.model_node.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.model\_node module
-=======================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_node
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_tasks_template.rst b/docs/source/pype.tools.standalonepublish.widgets.model_tasks_template.rst
deleted file mode 100644
index dbee838530..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.model_tasks_template.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.model\_tasks\_template module
-==================================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_tasks_template
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_tree.rst b/docs/source/pype.tools.standalonepublish.widgets.model_tree.rst
deleted file mode 100644
index 38eecb095a..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.model_tree.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.model\_tree module
-=======================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_tree
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.model_tree_view_deselectable.rst b/docs/source/pype.tools.standalonepublish.widgets.model_tree_view_deselectable.rst
deleted file mode 100644
index 9afb505113..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.model_tree_view_deselectable.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.model\_tree\_view\_deselectable module
-===========================================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_tree_view_deselectable
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.resources.rst b/docs/source/pype.tools.standalonepublish.widgets.resources.rst
deleted file mode 100644
index a0eddae63e..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.resources.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.resources package
-======================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.resources
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.rst b/docs/source/pype.tools.standalonepublish.widgets.rst
deleted file mode 100644
index 65bbcb62fc..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.rst
+++ /dev/null
@@ -1,146 +0,0 @@
-pype.tools.standalonepublish.widgets package
-============================================
-
-.. automodule:: pype.tools.standalonepublish.widgets
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.tools.standalonepublish.widgets.resources
-
-Submodules
-----------
-
-pype.tools.standalonepublish.widgets.model\_asset module
---------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_asset
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.model\_filter\_proxy\_exact\_match module
-------------------------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_filter_proxy_exact_match
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.model\_filter\_proxy\_recursive\_sort module
----------------------------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_filter_proxy_recursive_sort
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.model\_node module
--------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_node
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.model\_tasks\_template module
-------------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_tasks_template
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.model\_tree module
--------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_tree
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.model\_tree\_view\_deselectable module
----------------------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.model_tree_view_deselectable
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_asset module
----------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_asset
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_component\_item module
--------------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_component_item
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_components module
---------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_components
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_components\_list module
---------------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_components_list
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_drop\_empty module
----------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_drop_empty
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_drop\_frame module
----------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_drop_frame
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_family module
-----------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_family
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_family\_desc module
-----------------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_family_desc
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-pype.tools.standalonepublish.widgets.widget\_shadow module
-----------------------------------------------------------
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_shadow
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_asset.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_asset.rst
deleted file mode 100644
index 51a3763628..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_asset.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_asset module
-=========================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_asset
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_component_item.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_component_item.rst
deleted file mode 100644
index 3495fdf192..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_component_item.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_component\_item module
-===================================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_component_item
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_components.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_components.rst
deleted file mode 100644
index be7c19af9f..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_components.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_components module
-==============================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_components
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_components_list.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_components_list.rst
deleted file mode 100644
index 051efe07fe..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_components_list.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_components\_list module
-====================================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_components_list
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_drop_empty.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_drop_empty.rst
deleted file mode 100644
index b5b0a6acac..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_drop_empty.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_drop\_empty module
-===============================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_drop_empty
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_drop_frame.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_drop_frame.rst
deleted file mode 100644
index 6b3e3690e0..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_drop_frame.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_drop\_frame module
-===============================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_drop_frame
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_family.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_family.rst
deleted file mode 100644
index 24c9d5496f..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_family.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_family module
-==========================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_family
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_family_desc.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_family_desc.rst
deleted file mode 100644
index 5a7f92177f..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_family_desc.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_family\_desc module
-================================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_family_desc
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.standalonepublish.widgets.widget_shadow.rst b/docs/source/pype.tools.standalonepublish.widgets.widget_shadow.rst
deleted file mode 100644
index 19f5c22198..0000000000
--- a/docs/source/pype.tools.standalonepublish.widgets.widget_shadow.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.standalonepublish.widgets.widget\_shadow module
-==========================================================
-
-.. automodule:: pype.tools.standalonepublish.widgets.widget_shadow
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.tray.pype_tray.rst b/docs/source/pype.tools.tray.pype_tray.rst
deleted file mode 100644
index 9fc49c5763..0000000000
--- a/docs/source/pype.tools.tray.pype_tray.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.tray.pype\_tray module
-=================================
-
-.. automodule:: pype.tools.tray.pype_tray
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.tray.rst b/docs/source/pype.tools.tray.rst
deleted file mode 100644
index b28059d170..0000000000
--- a/docs/source/pype.tools.tray.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-pype.tools.tray package
-=======================
-
-.. automodule:: pype.tools.tray
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.tools.tray.pype_tray
diff --git a/docs/source/pype.tools.workfiles.app.rst b/docs/source/pype.tools.workfiles.app.rst
deleted file mode 100644
index a3a46b8a07..0000000000
--- a/docs/source/pype.tools.workfiles.app.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.workfiles.app module
-===============================
-
-.. automodule:: pype.tools.workfiles.app
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.workfiles.model.rst b/docs/source/pype.tools.workfiles.model.rst
deleted file mode 100644
index 44cea32b97..0000000000
--- a/docs/source/pype.tools.workfiles.model.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.workfiles.model module
-=================================
-
-.. automodule:: pype.tools.workfiles.model
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.tools.workfiles.rst b/docs/source/pype.tools.workfiles.rst
deleted file mode 100644
index 147c4cebbe..0000000000
--- a/docs/source/pype.tools.workfiles.rst
+++ /dev/null
@@ -1,17 +0,0 @@
-pype.tools.workfiles package
-============================
-
-.. automodule:: pype.tools.workfiles
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-.. toctree::
-   :maxdepth: 10
-
-   pype.tools.workfiles.app
-   pype.tools.workfiles.model
-   pype.tools.workfiles.view
diff --git a/docs/source/pype.tools.workfiles.view.rst b/docs/source/pype.tools.workfiles.view.rst
deleted file mode 100644
index acd32ed250..0000000000
--- a/docs/source/pype.tools.workfiles.view.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.tools.workfiles.view module
-================================
-
-.. automodule:: pype.tools.workfiles.view
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.vendor.backports.configparser.helpers.rst b/docs/source/pype.vendor.backports.configparser.helpers.rst
deleted file mode 100644
index 8d44d0a8c4..0000000000
--- a/docs/source/pype.vendor.backports.configparser.helpers.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.vendor.backports.configparser.helpers module
-=================================================
-
-.. automodule:: pype.vendor.backports.configparser.helpers
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.vendor.backports.configparser.rst b/docs/source/pype.vendor.backports.configparser.rst
deleted file mode 100644
index 4f778a4a87..0000000000
--- a/docs/source/pype.vendor.backports.configparser.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-pype.vendor.backports.configparser package
-==========================================
-
-.. automodule:: pype.vendor.backports.configparser
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Submodules
-----------
-
-pype.vendor.backports.configparser.helpers module
--------------------------------------------------
-
-.. automodule:: pype.vendor.backports.configparser.helpers
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.vendor.backports.functools_lru_cache.rst b/docs/source/pype.vendor.backports.functools_lru_cache.rst
deleted file mode 100644
index 26f2746cec..0000000000
--- a/docs/source/pype.vendor.backports.functools_lru_cache.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.vendor.backports.functools\_lru\_cache module
-==================================================
-
-.. automodule:: pype.vendor.backports.functools_lru_cache
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.vendor.backports.rst b/docs/source/pype.vendor.backports.rst
deleted file mode 100644
index ff9efc29c5..0000000000
--- a/docs/source/pype.vendor.backports.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-pype.vendor.backports package
-=============================
-
-.. automodule:: pype.vendor.backports
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
-Subpackages
------------
-
-.. toctree::
-   :maxdepth: 6
-
-   pype.vendor.backports.configparser
-
-Submodules
-----------
-
-pype.vendor.backports.functools\_lru\_cache module
---------------------------------------------------
-
-.. automodule:: pype.vendor.backports.functools_lru_cache
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.vendor.builtins.rst b/docs/source/pype.vendor.builtins.rst
deleted file mode 100644
index e21fb768ed..0000000000
--- a/docs/source/pype.vendor.builtins.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.vendor.builtins package
-============================
-
-.. automodule:: pype.vendor.builtins
-   :members:
-   :undoc-members:
-   :show-inheritance:
diff --git a/docs/source/pype.vendor.capture.rst b/docs/source/pype.vendor.capture.rst
deleted file mode 100644
index d42e073fb5..0000000000
--- a/docs/source/pype.vendor.capture.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-pype.vendor.capture module
-==========================
-
-.. automodule:: pype.vendor.capture
automodule:: pype.vendor.capture - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.accordion.rst b/docs/source/pype.vendor.capture_gui.accordion.rst deleted file mode 100644 index cca228f151..0000000000 --- a/docs/source/pype.vendor.capture_gui.accordion.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.accordion module -========================================= - -.. automodule:: pype.vendor.capture_gui.accordion - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.app.rst b/docs/source/pype.vendor.capture_gui.app.rst deleted file mode 100644 index 291296834e..0000000000 --- a/docs/source/pype.vendor.capture_gui.app.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.app module -=================================== - -.. automodule:: pype.vendor.capture_gui.app - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.colorpicker.rst b/docs/source/pype.vendor.capture_gui.colorpicker.rst deleted file mode 100644 index c9e56500f2..0000000000 --- a/docs/source/pype.vendor.capture_gui.colorpicker.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.colorpicker module -=========================================== - -.. automodule:: pype.vendor.capture_gui.colorpicker - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.lib.rst b/docs/source/pype.vendor.capture_gui.lib.rst deleted file mode 100644 index e94a3bd196..0000000000 --- a/docs/source/pype.vendor.capture_gui.lib.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.lib module -=================================== - -.. automodule:: pype.vendor.capture_gui.lib - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.plugin.rst b/docs/source/pype.vendor.capture_gui.plugin.rst deleted file mode 100644 index 2e8f58c873..0000000000 --- a/docs/source/pype.vendor.capture_gui.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.plugin module -====================================== - -.. automodule:: pype.vendor.capture_gui.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.presets.rst b/docs/source/pype.vendor.capture_gui.presets.rst deleted file mode 100644 index c81b4e1c23..0000000000 --- a/docs/source/pype.vendor.capture_gui.presets.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.presets module -======================================= - -.. automodule:: pype.vendor.capture_gui.presets - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.rst b/docs/source/pype.vendor.capture_gui.rst deleted file mode 100644 index f7efce3501..0000000000 --- a/docs/source/pype.vendor.capture_gui.rst +++ /dev/null @@ -1,82 +0,0 @@ -pype.vendor.capture\_gui package -================================ - -.. automodule:: pype.vendor.capture_gui - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.vendor.capture_gui.vendor - -Submodules ----------- - -pype.vendor.capture\_gui.accordion module ------------------------------------------ - -.. automodule:: pype.vendor.capture_gui.accordion - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.app module ------------------------------------ - -.. 
automodule:: pype.vendor.capture_gui.app - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.colorpicker module -------------------------------------------- - -.. automodule:: pype.vendor.capture_gui.colorpicker - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.lib module ------------------------------------ - -.. automodule:: pype.vendor.capture_gui.lib - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.plugin module --------------------------------------- - -.. automodule:: pype.vendor.capture_gui.plugin - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.presets module ---------------------------------------- - -.. automodule:: pype.vendor.capture_gui.presets - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.tokens module --------------------------------------- - -.. automodule:: pype.vendor.capture_gui.tokens - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.capture\_gui.version module ---------------------------------------- - -.. automodule:: pype.vendor.capture_gui.version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.tokens.rst b/docs/source/pype.vendor.capture_gui.tokens.rst deleted file mode 100644 index 9e144a4d37..0000000000 --- a/docs/source/pype.vendor.capture_gui.tokens.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.tokens module -====================================== - -.. automodule:: pype.vendor.capture_gui.tokens - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.vendor.Qt.rst b/docs/source/pype.vendor.capture_gui.vendor.Qt.rst deleted file mode 100644 index 447e6dd812..0000000000 --- a/docs/source/pype.vendor.capture_gui.vendor.Qt.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.vendor.Qt module -========================================= - -.. automodule:: pype.vendor.capture_gui.vendor.Qt - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.vendor.rst b/docs/source/pype.vendor.capture_gui.vendor.rst deleted file mode 100644 index 0befc4bbb7..0000000000 --- a/docs/source/pype.vendor.capture_gui.vendor.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.vendor.capture\_gui.vendor package -======================================= - -.. automodule:: pype.vendor.capture_gui.vendor - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.capture\_gui.vendor.Qt module ------------------------------------------ - -.. automodule:: pype.vendor.capture_gui.vendor.Qt - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.capture_gui.version.rst b/docs/source/pype.vendor.capture_gui.version.rst deleted file mode 100644 index 3f0cfbabfd..0000000000 --- a/docs/source/pype.vendor.capture_gui.version.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.capture\_gui.version module -======================================= - -.. automodule:: pype.vendor.capture_gui.version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.accessor.base.rst b/docs/source/pype.vendor.ftrack_api_old.accessor.base.rst deleted file mode 100644 index 5155df82aa..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.accessor.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.accessor.base module -================================================= - -.. 
automodule:: pype.vendor.ftrack_api_old.accessor.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.accessor.disk.rst b/docs/source/pype.vendor.ftrack_api_old.accessor.disk.rst deleted file mode 100644 index 3040fe18fd..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.accessor.disk.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.accessor.disk module -================================================= - -.. automodule:: pype.vendor.ftrack_api_old.accessor.disk - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.accessor.rst b/docs/source/pype.vendor.ftrack_api_old.accessor.rst deleted file mode 100644 index 1f7b522460..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.accessor.rst +++ /dev/null @@ -1,34 +0,0 @@ -pype.vendor.ftrack\_api\_old.accessor package -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.accessor - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.accessor.base module -------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.accessor.base - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.accessor.disk module -------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.accessor.disk - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.accessor.server module ---------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.accessor.server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.accessor.server.rst b/docs/source/pype.vendor.ftrack_api_old.accessor.server.rst deleted file mode 100644 index db835f99c4..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.accessor.server.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.accessor.server module -=================================================== - -.. automodule:: pype.vendor.ftrack_api_old.accessor.server - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.attribute.rst b/docs/source/pype.vendor.ftrack_api_old.attribute.rst deleted file mode 100644 index 54276ceb2a..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.attribute.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.attribute module -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.attribute - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.cache.rst b/docs/source/pype.vendor.ftrack_api_old.cache.rst deleted file mode 100644 index 396bc5a1cd..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.cache.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.cache module -========================================= - -.. automodule:: pype.vendor.ftrack_api_old.cache - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.collection.rst b/docs/source/pype.vendor.ftrack_api_old.collection.rst deleted file mode 100644 index de911619fc..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.collection.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.collection module -============================================== - -.. 
automodule:: pype.vendor.ftrack_api_old.collection - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.data.rst b/docs/source/pype.vendor.ftrack_api_old.data.rst deleted file mode 100644 index 2f67185cee..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.data.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.data module -======================================== - -.. automodule:: pype.vendor.ftrack_api_old.data - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.asset_version.rst b/docs/source/pype.vendor.ftrack_api_old.entity.asset_version.rst deleted file mode 100644 index 7ad3d87fd9..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.asset_version.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.asset\_version module -========================================================= - -.. automodule:: pype.vendor.ftrack_api_old.entity.asset_version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.base.rst b/docs/source/pype.vendor.ftrack_api_old.entity.base.rst deleted file mode 100644 index b87428f817..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.base module -=============================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.component.rst b/docs/source/pype.vendor.ftrack_api_old.entity.component.rst deleted file mode 100644 index 27901ab786..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.component.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.component module -==================================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.component - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.factory.rst b/docs/source/pype.vendor.ftrack_api_old.entity.factory.rst deleted file mode 100644 index caada5c3c8..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.factory.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.factory module -================================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.factory - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.job.rst b/docs/source/pype.vendor.ftrack_api_old.entity.job.rst deleted file mode 100644 index 6f4ca18323..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.job.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.job module -============================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.job - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.location.rst b/docs/source/pype.vendor.ftrack_api_old.entity.location.rst deleted file mode 100644 index 2f0b380349..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.location.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.location module -=================================================== - -.. 
automodule:: pype.vendor.ftrack_api_old.entity.location - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.note.rst b/docs/source/pype.vendor.ftrack_api_old.entity.note.rst deleted file mode 100644 index c04e959402..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.note.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.note module -=============================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.note - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.project_schema.rst b/docs/source/pype.vendor.ftrack_api_old.entity.project_schema.rst deleted file mode 100644 index 6332a2e523..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.project_schema.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.project\_schema module -========================================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.project_schema - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.rst b/docs/source/pype.vendor.ftrack_api_old.entity.rst deleted file mode 100644 index bb43a7621b..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.rst +++ /dev/null @@ -1,82 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity package -=========================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.entity.asset\_version module ---------------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.asset_version - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.base module ------------------------------------------------ - -.. automodule:: pype.vendor.ftrack_api_old.entity.base - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.component module ----------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.component - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.factory module --------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.factory - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.job module ----------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.job - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.location module ---------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.location - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.note module ------------------------------------------------ - -.. automodule:: pype.vendor.ftrack_api_old.entity.note - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.project\_schema module ----------------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.entity.project_schema - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.entity.user module ------------------------------------------------ - -.. 
automodule:: pype.vendor.ftrack_api_old.entity.user - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.entity.user.rst b/docs/source/pype.vendor.ftrack_api_old.entity.user.rst deleted file mode 100644 index c0fe6574a6..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.entity.user.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.entity.user module -=============================================== - -.. automodule:: pype.vendor.ftrack_api_old.entity.user - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.base.rst b/docs/source/pype.vendor.ftrack_api_old.event.base.rst deleted file mode 100644 index 74b86e3bb2..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.base module -============================================== - -.. automodule:: pype.vendor.ftrack_api_old.event.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.expression.rst b/docs/source/pype.vendor.ftrack_api_old.event.expression.rst deleted file mode 100644 index 860678797b..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.expression.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.expression module -==================================================== - -.. automodule:: pype.vendor.ftrack_api_old.event.expression - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.hub.rst b/docs/source/pype.vendor.ftrack_api_old.event.hub.rst deleted file mode 100644 index d09d52eedf..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.hub.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.hub module -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.event.hub - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.rst b/docs/source/pype.vendor.ftrack_api_old.event.rst deleted file mode 100644 index 2db27bf7f8..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.rst +++ /dev/null @@ -1,50 +0,0 @@ -pype.vendor.ftrack\_api\_old.event package -========================================== - -.. automodule:: pype.vendor.ftrack_api_old.event - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.event.base module ----------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.event.base - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.event.expression module ----------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.event.expression - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.event.hub module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.event.hub - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.event.subscriber module ----------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.event.subscriber - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.event.subscription module ------------------------------------------------------- - -.. 
automodule:: pype.vendor.ftrack_api_old.event.subscription - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.subscriber.rst b/docs/source/pype.vendor.ftrack_api_old.event.subscriber.rst deleted file mode 100644 index a9bd13aabc..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.subscriber.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.subscriber module -==================================================== - -.. automodule:: pype.vendor.ftrack_api_old.event.subscriber - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.event.subscription.rst b/docs/source/pype.vendor.ftrack_api_old.event.subscription.rst deleted file mode 100644 index 423fa9a688..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.event.subscription.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.event.subscription module -====================================================== - -.. automodule:: pype.vendor.ftrack_api_old.event.subscription - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.exception.rst b/docs/source/pype.vendor.ftrack_api_old.exception.rst deleted file mode 100644 index 54dbeeac36..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.exception.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.exception module -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.exception - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.formatter.rst b/docs/source/pype.vendor.ftrack_api_old.formatter.rst deleted file mode 100644 index 75a23eefca..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.formatter.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.formatter module -============================================= - -.. automodule:: pype.vendor.ftrack_api_old.formatter - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.inspection.rst b/docs/source/pype.vendor.ftrack_api_old.inspection.rst deleted file mode 100644 index 2b8849b3d0..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.inspection.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.inspection module -============================================== - -.. automodule:: pype.vendor.ftrack_api_old.inspection - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.logging.rst b/docs/source/pype.vendor.ftrack_api_old.logging.rst deleted file mode 100644 index a10fa10c26..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.logging.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.logging module -=========================================== - -.. automodule:: pype.vendor.ftrack_api_old.logging - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.operation.rst b/docs/source/pype.vendor.ftrack_api_old.operation.rst deleted file mode 100644 index a1d9d606f8..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.operation.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.operation module -============================================= - -.. 
automodule:: pype.vendor.ftrack_api_old.operation - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.plugin.rst b/docs/source/pype.vendor.ftrack_api_old.plugin.rst deleted file mode 100644 index 0f26c705d2..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.plugin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.plugin module -========================================== - -.. automodule:: pype.vendor.ftrack_api_old.plugin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.query.rst b/docs/source/pype.vendor.ftrack_api_old.query.rst deleted file mode 100644 index 5cf5aba0e4..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.query.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.query module -========================================= - -.. automodule:: pype.vendor.ftrack_api_old.query - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.base.rst b/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.base.rst deleted file mode 100644 index dccf51ea71..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.resource\_identifier\_transformer.base module -========================================================================== - -.. automodule:: pype.vendor.ftrack_api_old.resource_identifier_transformer.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.rst b/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.rst deleted file mode 100644 index 342ecd9321..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.resource_identifier_transformer.rst +++ /dev/null @@ -1,18 +0,0 @@ -pype.vendor.ftrack\_api\_old.resource\_identifier\_transformer package -====================================================================== - -.. automodule:: pype.vendor.ftrack_api_old.resource_identifier_transformer - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.resource\_identifier\_transformer.base module --------------------------------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.resource_identifier_transformer.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.rst b/docs/source/pype.vendor.ftrack_api_old.rst deleted file mode 100644 index 51d0a29357..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.rst +++ /dev/null @@ -1,126 +0,0 @@ -pype.vendor.ftrack\_api\_old package -==================================== - -.. automodule:: pype.vendor.ftrack_api_old - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.vendor.ftrack_api_old.accessor - pype.vendor.ftrack_api_old.entity - pype.vendor.ftrack_api_old.event - pype.vendor.ftrack_api_old.resource_identifier_transformer - pype.vendor.ftrack_api_old.structure - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.attribute module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.attribute - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.cache module ------------------------------------------ - -.. 
automodule:: pype.vendor.ftrack_api_old.cache - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.collection module ----------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.collection - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.data module ----------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.data - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.exception module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.exception - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.formatter module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.formatter - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.inspection module ----------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.inspection - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.logging module -------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.logging - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.operation module ---------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.operation - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.plugin module ------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.plugin - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.query module ------------------------------------------ - -.. automodule:: pype.vendor.ftrack_api_old.query - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.session module -------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.session - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.symbol module ------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.symbol - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.session.rst b/docs/source/pype.vendor.ftrack_api_old.session.rst deleted file mode 100644 index beecdeb6af..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.session.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.session module -=========================================== - -.. automodule:: pype.vendor.ftrack_api_old.session - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.base.rst b/docs/source/pype.vendor.ftrack_api_old.structure.base.rst deleted file mode 100644 index 617d8aaed7..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.base.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.base module -================================================== - -.. 
automodule:: pype.vendor.ftrack_api_old.structure.base - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.entity_id.rst b/docs/source/pype.vendor.ftrack_api_old.structure.entity_id.rst deleted file mode 100644 index ab6fd0997a..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.entity_id.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.entity\_id module -======================================================== - -.. automodule:: pype.vendor.ftrack_api_old.structure.entity_id - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.id.rst b/docs/source/pype.vendor.ftrack_api_old.structure.id.rst deleted file mode 100644 index 6b887b7917..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.id.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.id module -================================================ - -.. automodule:: pype.vendor.ftrack_api_old.structure.id - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.origin.rst b/docs/source/pype.vendor.ftrack_api_old.structure.origin.rst deleted file mode 100644 index 8ad5fbdc11..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.origin.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.origin module -==================================================== - -.. automodule:: pype.vendor.ftrack_api_old.structure.origin - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.rst b/docs/source/pype.vendor.ftrack_api_old.structure.rst deleted file mode 100644 index 2402430589..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.rst +++ /dev/null @@ -1,50 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure package -============================================== - -.. automodule:: pype.vendor.ftrack_api_old.structure - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.vendor.ftrack\_api\_old.structure.base module --------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.structure.base - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.structure.entity\_id module --------------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.structure.entity_id - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.structure.id module ------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.structure.id - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.structure.origin module ----------------------------------------------------- - -.. automodule:: pype.vendor.ftrack_api_old.structure.origin - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.ftrack\_api\_old.structure.standard module ------------------------------------------------------- - -.. 
automodule:: pype.vendor.ftrack_api_old.structure.standard - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.structure.standard.rst b/docs/source/pype.vendor.ftrack_api_old.structure.standard.rst deleted file mode 100644 index 800201084f..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.structure.standard.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.structure.standard module -====================================================== - -.. automodule:: pype.vendor.ftrack_api_old.structure.standard - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.ftrack_api_old.symbol.rst b/docs/source/pype.vendor.ftrack_api_old.symbol.rst deleted file mode 100644 index bc358d374a..0000000000 --- a/docs/source/pype.vendor.ftrack_api_old.symbol.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.ftrack\_api\_old.symbol module -========================================== - -.. automodule:: pype.vendor.ftrack_api_old.symbol - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.pysync.rst b/docs/source/pype.vendor.pysync.rst deleted file mode 100644 index fbe5b33fb7..0000000000 --- a/docs/source/pype.vendor.pysync.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.vendor.pysync module -========================= - -.. automodule:: pype.vendor.pysync - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.vendor.rst b/docs/source/pype.vendor.rst deleted file mode 100644 index 23aa17f7ab..0000000000 --- a/docs/source/pype.vendor.rst +++ /dev/null @@ -1,37 +0,0 @@ -pype.vendor package -=================== - -.. automodule:: pype.vendor - :members: - :undoc-members: - :show-inheritance: - -Subpackages ------------ - -.. toctree:: - :maxdepth: 6 - - pype.vendor.backports - pype.vendor.builtins - pype.vendor.capture_gui - pype.vendor.ftrack_api_old - -Submodules ----------- - -pype.vendor.capture module --------------------------- - -.. automodule:: pype.vendor.capture - :members: - :undoc-members: - :show-inheritance: - -pype.vendor.pysync module -------------------------- - -.. automodule:: pype.vendor.pysync - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.version.rst b/docs/source/pype.version.rst deleted file mode 100644 index 7ec69dc423..0000000000 --- a/docs/source/pype.version.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.version module -=================== - -.. automodule:: pype.version - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.widgets.message_window.rst b/docs/source/pype.widgets.message_window.rst deleted file mode 100644 index 60be203837..0000000000 --- a/docs/source/pype.widgets.message_window.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.widgets.message\_window module -=================================== - -.. automodule:: pype.widgets.message_window - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.widgets.popup.rst b/docs/source/pype.widgets.popup.rst deleted file mode 100644 index 7186ff48de..0000000000 --- a/docs/source/pype.widgets.popup.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.widgets.popup module -========================= - -.. 
automodule:: pype.widgets.popup - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.widgets.project_settings.rst b/docs/source/pype.widgets.project_settings.rst deleted file mode 100644 index 9589cf5479..0000000000 --- a/docs/source/pype.widgets.project_settings.rst +++ /dev/null @@ -1,7 +0,0 @@ -pype.widgets.project\_settings module -===================================== - -.. automodule:: pype.widgets.project_settings - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/pype.widgets.rst b/docs/source/pype.widgets.rst deleted file mode 100644 index 1f09318b67..0000000000 --- a/docs/source/pype.widgets.rst +++ /dev/null @@ -1,34 +0,0 @@ -pype.widgets package -==================== - -.. automodule:: pype.widgets - :members: - :undoc-members: - :show-inheritance: - -Submodules ----------- - -pype.widgets.message\_window module ------------------------------------ - -.. automodule:: pype.widgets.message_window - :members: - :undoc-members: - :show-inheritance: - -pype.widgets.popup module -------------------------- - -.. automodule:: pype.widgets.popup - :members: - :undoc-members: - :show-inheritance: - -pype.widgets.project\_settings module -------------------------------------- - -.. automodule:: pype.widgets.project_settings - :members: - :undoc-members: - :show-inheritance: diff --git a/docs/source/readme.rst b/docs/source/readme.rst index 823c0df3c8..138b88bba8 100644 --- a/docs/source/readme.rst +++ b/docs/source/readme.rst @@ -1,2 +1,6 @@ -.. title:: Pype Readme +=============== +OpenPype Readme +=============== + .. include:: ../../README.md + :parser: myst_parser.sphinx_ diff --git a/igniter/__init__.py b/igniter/__init__.py index aa1b1d209e..16ffb940f6 100644 --- a/igniter/__init__.py +++ b/igniter/__init__.py @@ -19,21 +19,37 @@ if "OpenPypeVersion" not in sys.modules: sys.modules["OpenPypeVersion"] = OpenPypeVersion +def _get_qt_app(): + from qtpy import QtWidgets, QtCore + + app = QtWidgets.QApplication.instance() + if app is not None: + return app + + for attr_name in ( + "AA_EnableHighDpiScaling", + "AA_UseHighDpiPixmaps", + ): + attr = getattr(QtCore.Qt, attr_name, None) + if attr is not None: + QtWidgets.QApplication.setAttribute(attr) + + if hasattr(QtWidgets.QApplication, "setHighDpiScaleFactorRoundingPolicy"): + QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy( + QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough + ) + + return QtWidgets.QApplication(sys.argv) + + def open_dialog(): """Show Igniter dialog.""" if os.getenv("OPENPYPE_HEADLESS_MODE"): print("!!! Can't open dialog in headless mode. Exiting.") sys.exit(1) - from qtpy import QtWidgets, QtCore from .install_dialog import InstallDialog - scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None) - if scale_attr is not None: - QtWidgets.QApplication.setAttribute(scale_attr) - - app = QtWidgets.QApplication.instance() - if not app: - app = QtWidgets.QApplication(sys.argv) + app = _get_qt_app() d = InstallDialog() d.open() @@ -47,16 +63,10 @@ def open_update_window(openpype_version): if os.getenv("OPENPYPE_HEADLESS_MODE"): print("!!! Can't open dialog in headless mode. 
Exiting.") sys.exit(1) - from qtpy import QtWidgets, QtCore + from .update_window import UpdateWindow - scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None) - if scale_attr is not None: - QtWidgets.QApplication.setAttribute(scale_attr) - - app = QtWidgets.QApplication.instance() - if not app: - app = QtWidgets.QApplication(sys.argv) + app = _get_qt_app() d = UpdateWindow(version=openpype_version) d.open() @@ -71,16 +81,10 @@ def show_message_dialog(title, message): if os.getenv("OPENPYPE_HEADLESS_MODE"): print("!!! Can't open dialog in headless mode. Exiting.") sys.exit(1) - from qtpy import QtWidgets, QtCore + from .message_dialog import MessageDialog - scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None) - if scale_attr is not None: - QtWidgets.QApplication.setAttribute(scale_attr) - - app = QtWidgets.QApplication.instance() - if not app: - app = QtWidgets.QApplication(sys.argv) + app = _get_qt_app() dialog = MessageDialog(title, message) dialog.open() diff --git a/igniter/bootstrap_repos.py b/igniter/bootstrap_repos.py index 6c7c834062..408764e1a8 100644 --- a/igniter/bootstrap_repos.py +++ b/igniter/bootstrap_repos.py @@ -25,7 +25,8 @@ from .user_settings import ( from .tools import ( get_openpype_global_settings, get_openpype_path_from_settings, - get_expected_studio_version_str + get_expected_studio_version_str, + get_local_openpype_path_from_settings ) @@ -34,6 +35,29 @@ LOG_WARNING = 1 LOG_ERROR = 3 +def sanitize_long_path(path): + """Sanitize long paths (260 characters) when on Windows. + + Long paths are not capatible with ZipFile or reading a file, so we can + shorten the path to use. + + Args: + path (str): path to either directory or file. + + Returns: + str: sanitized path + """ + if platform.system().lower() != "windows": + return path + path = os.path.abspath(path) + + if path.startswith("\\\\"): + path = "\\\\?\\UNC\\" + path[2:] + else: + path = "\\\\?\\" + path + return path + + def sha256sum(filename): """Calculate sha256 for content of the file. @@ -53,6 +77,13 @@ def sha256sum(filename): return h.hexdigest() +class ZipFileLongPaths(ZipFile): + def _extract_member(self, member, targetpath, pwd): + return ZipFile._extract_member( + self, member, sanitize_long_path(targetpath), pwd + ) + + class OpenPypeVersion(semver.VersionInfo): """Class for storing information about OpenPype version. @@ -61,6 +92,8 @@ class OpenPypeVersion(semver.VersionInfo): """ path = None + + _local_openpype_path = None # this should match any string complying with https://semver.org/ _VERSION_REGEX = re.compile(r"(?P0|[1-9]\d*)\.(?P0|[1-9]\d*)\.(?P0|[1-9]\d*)(?:-(?P[a-zA-Z\d\-.]*))?(?:\+(?P[a-zA-Z\d\-.]*))?") # noqa: E501 _installed_version = None @@ -289,6 +322,23 @@ class OpenPypeVersion(semver.VersionInfo): """ return os.getenv("OPENPYPE_PATH") + @classmethod + def get_local_openpype_path(cls): + """Path to unzipped versions. + + By default it should be user appdata, but could be overridden by + settings. + """ + if cls._local_openpype_path: + return cls._local_openpype_path + + settings = get_openpype_global_settings(os.environ["OPENPYPE_MONGO"]) + data_dir = get_local_openpype_path_from_settings(settings) + if not data_dir: + data_dir = Path(user_data_dir("openpype", "pypeclub")) + cls._local_openpype_path = data_dir + return data_dir + @classmethod def openpype_path_is_set(cls): """Path to OpenPype zip directory is set.""" @@ -319,9 +369,8 @@ class OpenPypeVersion(semver.VersionInfo): list: of compatible versions available on the machine. 
""" - # DEPRECATED: backwards compatible way to look for versions in root - dir_to_search = Path(user_data_dir("openpype", "pypeclub")) - versions = OpenPypeVersion.get_versions_from_directory(dir_to_search) + dir_to_search = cls.get_local_openpype_path() + versions = cls.get_versions_from_directory(dir_to_search) return list(sorted(set(versions))) @@ -533,17 +582,15 @@ class BootstrapRepos: """ # vendor and app used to construct user data dir - self._vendor = "pypeclub" - self._app = "openpype" + self._message = message self._log = log.getLogger(str(__class__)) - self.data_dir = Path(user_data_dir(self._app, self._vendor)) + self.set_data_dir(None) self.secure_registry = OpenPypeSecureRegistry("mongodb") self.registry = OpenPypeSettingsRegistry() self.zip_filter = [".pyc", "__pycache__"] self.openpype_filter = [ "openpype", "schema", "LICENSE" ] - self._message = message # dummy progress reporter def empty_progress(x: int): @@ -554,6 +601,13 @@ class BootstrapRepos: progress_callback = empty_progress self._progress_callback = progress_callback + def set_data_dir(self, data_dir): + if not data_dir: + self.data_dir = Path(user_data_dir("openpype", "pypeclub")) + else: + self._print(f"overriding local folder: {data_dir}") + self.data_dir = data_dir + @staticmethod def get_version_path_from_list( version: str, version_list: list) -> Union[Path, None]: @@ -756,7 +810,7 @@ class BootstrapRepos: def _create_openpype_zip(self, zip_path: Path, openpype_path: Path) -> None: """Pack repositories and OpenPype into zip. - We are using :mod:`zipfile` instead :meth:`shutil.make_archive` + We are using :mod:`ZipFile` instead :meth:`shutil.make_archive` because we need to decide what file and directories to include in zip and what not. They are determined by :attr:`zip_filter` on file level and :attr:`openpype_filter` on top level directory in OpenPype @@ -810,7 +864,7 @@ class BootstrapRepos: checksums.append( ( - sha256sum(file.as_posix()), + sha256sum(sanitize_long_path(file.as_posix())), file.resolve().relative_to(openpype_root) ) ) @@ -934,7 +988,9 @@ class BootstrapRepos: if platform.system().lower() == "windows": file_name = file_name.replace("/", "\\") try: - current = sha256sum((path / file_name).as_posix()) + current = sha256sum( + sanitize_long_path((path / file_name).as_posix()) + ) except FileNotFoundError: return False, f"Missing file [ {file_name} ]" @@ -1246,7 +1302,7 @@ class BootstrapRepos: # extract zip there self._print("Extracting zip to destination ...") - with ZipFile(version.path, "r") as zip_ref: + with ZipFileLongPaths(version.path, "r") as zip_ref: zip_ref.extractall(destination) self._print(f"Installed as {version.path.stem}") @@ -1362,7 +1418,7 @@ class BootstrapRepos: # extract zip there self._print("extracting zip to destination ...") - with ZipFile(openpype_version.path, "r") as zip_ref: + with ZipFileLongPaths(openpype_version.path, "r") as zip_ref: self._progress_callback(75) zip_ref.extractall(destination) self._progress_callback(100) diff --git a/igniter/install_thread.py b/igniter/install_thread.py index 4723e6adfb..1d55213de7 100644 --- a/igniter/install_thread.py +++ b/igniter/install_thread.py @@ -14,7 +14,11 @@ from .bootstrap_repos import ( OpenPypeVersion ) -from .tools import validate_mongo_connection +from .tools import ( + get_openpype_global_settings, + get_local_openpype_path_from_settings, + validate_mongo_connection +) class InstallThread(QtCore.QThread): @@ -80,6 +84,15 @@ class InstallThread(QtCore.QThread): return os.environ["OPENPYPE_MONGO"] = 
self._mongo + if not validate_mongo_connection(self._mongo): + self.message.emit(f"Cannot connect to {self._mongo}", True) + self._set_result(-1) + return + + global_settings = get_openpype_global_settings(self._mongo) + data_dir = get_local_openpype_path_from_settings(global_settings) + bs.set_data_dir(data_dir) + self.message.emit( f"Detecting installed OpenPype versions in {bs.data_dir}", False) diff --git a/igniter/tools.py b/igniter/tools.py index 79235b2329..9dea203f0c 100644 --- a/igniter/tools.py +++ b/igniter/tools.py @@ -40,7 +40,7 @@ def should_add_certificate_path_to_mongo_url(mongo_url): add_certificate = False # Check if url 'ssl' or 'tls' are set to 'true' for key in ("ssl", "tls"): - if key in query and "true" in query["ssl"]: + if key in query and "true" in query[key]: add_certificate = True break @@ -73,7 +73,7 @@ def validate_mongo_connection(cnx: str) -> (bool, str): } # Add certificate path if should be required if should_add_certificate_path_to_mongo_url(cnx): - kwargs["ssl_ca_certs"] = certifi.where() + kwargs["tlsCAFile"] = certifi.where() try: client = MongoClient(cnx, **kwargs) @@ -147,7 +147,7 @@ def get_openpype_global_settings(url: str) -> dict: """ kwargs = {} if should_add_certificate_path_to_mongo_url(url): - kwargs["ssl_ca_certs"] = certifi.where() + kwargs["tlsCAFile"] = certifi.where() try: # Create mongo connection @@ -188,6 +188,26 @@ def get_openpype_path_from_settings(settings: dict) -> Union[str, None]: return next((path for path in paths if os.path.exists(path)), None) +def get_local_openpype_path_from_settings(settings: dict) -> Union[Path, None]: + """Get OpenPype local path from global settings. + + Used to download and unzip OP versions. + Args: + settings (dict): settings from DB. + + Returns: + Path to local OpenPype directory or None if not set. + """ + path = ( + settings + .get("local_openpype_path", {}) + .get(platform.system().lower()) + ) + if path: + return Path(path) + return None + + def get_expected_studio_version_str( staging=False, global_settings=None ) -> str: diff --git a/igniter/update_thread.py b/igniter/update_thread.py index e98c95f892..0223477d0a 100644 --- a/igniter/update_thread.py +++ b/igniter/update_thread.py @@ -48,6 +48,8 @@ class UpdateThread(QtCore.QThread): """ bs = BootstrapRepos( progress_callback=self.set_progress, message=self.message) + + bs.set_data_dir(OpenPypeVersion.get_local_openpype_path()) version_path = bs.install_version(self._openpype_version) self._set_result(version_path) diff --git a/openpype/__init__.py b/openpype/__init__.py index 810664707a..e6b77b1853 100644 --- a/openpype/__init__.py +++ b/openpype/__init__.py @@ -3,3 +3,5 @@ import os PACKAGE_DIR = os.path.dirname(os.path.abspath(__file__)) PLUGINS_DIR = os.path.join(PACKAGE_DIR, "plugins") + +AYON_SERVER_ENABLED = os.environ.get("USE_AYON_SERVER") == "1" diff --git a/openpype/action.py b/openpype/action.py deleted file mode 100644 index 6114c65fd4..0000000000 --- a/openpype/action.py +++ /dev/null @@ -1,135 +0,0 @@ -import warnings -import functools -import pyblish.api - - -class ActionDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement."
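Two of the igniter/tools.py fixes above are easy to miss but substantive: the loop used to test `query["ssl"]` even while iterating the `"tls"` key, and `ssl_ca_certs` is the legacy pymongo keyword that newer releases spell `tlsCAFile`. A rough standalone sketch of the corrected flow, assuming `pymongo` and `certifi` are installed (the connection string is illustrative):

import certifi
from urllib.parse import parse_qs, urlparse
from pymongo import MongoClient


def needs_ca_bundle(mongo_url):
    # True when the URL explicitly enables ssl/tls; parse_qs values are lists.
    query = parse_qs(urlparse(mongo_url).query)
    return any("true" in query.get(key, []) for key in ("ssl", "tls"))


def connect(mongo_url):
    kwargs = {"serverSelectionTimeoutMS": 2000}
    if needs_ca_bundle(mongo_url):
        # Modern pymongo spelling of the CA bundle option.
        kwargs["tlsCAFile"] = certifi.where()
    return MongoClient(mongo_url, **kwargs)


client = connect("mongodb://db.example.com:27017/?tls=true")  # placeholder URL
client.server_info()  # raises if the server cannot be reached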
- ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", ActionDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=ActionDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.pipeline.publish.get_errored_instances_from_context") -def get_errored_instances_from_context(context, plugin=None): - """ - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import get_errored_instances_from_context - - return get_errored_instances_from_context(context, plugin=plugin) - - -@deprecated("openpype.pipeline.publish.get_errored_plugins_from_context") -def get_errored_plugins_from_data(context): - """ - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import get_errored_plugins_from_context - - return get_errored_plugins_from_context(context) - - -class RepairAction(pyblish.api.Action): - """Repairs the action - - To process the repairing this requires a static `repair(instance)` method - is available on the plugin. - - Deprecated: - 'RepairAction' and 'RepairContextAction' were moved to - 'openpype.pipeline.publish' please change you imports. - There is no "reasonable" way hot mark these classes as deprecated - to show warning of wrong import. Deprecated since 3.14.* will be - removed in 3.16.* - - """ - label = "Repair" - on = "failed" # This action is only available on a failed plug-in - icon = "wrench" # Icon from Awesome Icon - - def process(self, context, plugin): - - if not hasattr(plugin, "repair"): - raise RuntimeError("Plug-in does not have repair method.") - - # Get the errored instances - self.log.info("Finding failed instances..") - errored_instances = get_errored_instances_from_context(context, - plugin=plugin) - for instance in errored_instances: - plugin.repair(instance) - - -class RepairContextAction(pyblish.api.Action): - """Repairs the action - - To process the repairing this requires a static `repair(instance)` method - is available on the plugin. - - Deprecated: - 'RepairAction' and 'RepairContextAction' were moved to - 'openpype.pipeline.publish' please change you imports. - There is no "reasonable" way hot mark these classes as deprecated - to show warning of wrong import. 
Deprecated since 3.14.* will be - removed in 3.16.* - - """ - label = "Repair" - on = "failed" # This action is only available on a failed plug-in - - def process(self, context, plugin): - - if not hasattr(plugin, "repair"): - raise RuntimeError("Plug-in does not have repair method.") - - # Get the errored instances - self.log.info("Finding failed instances..") - errored_plugins = get_errored_plugins_from_data(context) - - # Apply pyblish.logic to get the instances for the plug-in - if plugin in errored_plugins: - self.log.info("Attempting fix ...") - plugin.repair(context) diff --git a/openpype/cli.py b/openpype/cli.py index 54af42920d..0df277fb0a 100644 --- a/openpype/cli.py +++ b/openpype/cli.py @@ -5,11 +5,25 @@ import sys import code import click -# import sys +from openpype import AYON_SERVER_ENABLED from .pype_commands import PypeCommands -@click.group(invoke_without_command=True) +class AliasedGroup(click.Group): + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + self._aliases = {} + + def set_alias(self, src_name, dst_name): + self._aliases[dst_name] = src_name + + def get_command(self, ctx, cmd_name): + if cmd_name in self._aliases: + cmd_name = self._aliases[cmd_name] + return super().get_command(ctx, cmd_name) + + +@click.group(cls=AliasedGroup, invoke_without_command=True) @click.pass_context @click.option("--use-version", expose_value=False, help="use specified version") @@ -33,7 +47,11 @@ def main(ctx): if ctx.invoked_subcommand is None: # Print help if headless mode is used - if os.environ.get("OPENPYPE_HEADLESS_MODE") == "1": + if AYON_SERVER_ENABLED: + is_headless = os.getenv("AYON_HEADLESS_MODE") == "1" + else: + is_headless = os.getenv("OPENPYPE_HEADLESS_MODE") == "1" + if is_headless: print(ctx.get_help()) sys.exit(0) else: @@ -44,6 +62,9 @@ def main(ctx): @click.option("-d", "--dev", is_flag=True, help="Settings in Dev mode") def settings(dev): """Show Pype Settings UI.""" + + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'settings' command.") PypeCommands().launch_settings_gui(dev) @@ -58,16 +79,20 @@ def tray(): @PypeCommands.add_modules -@main.group(help="Run command line arguments of OpenPype modules") +@main.group(help="Run command line arguments of OpenPype addons") @click.pass_context def module(ctx): - """Module specific commands created dynamically. + """Addon specific commands created dynamically. - These commands are generated dynamically by currently loaded addon/modules. + These commands are generated dynamically by currently loaded addons. """ pass +# Add 'addon' as alias for module +main.set_alias("module", "addon") + + @main.command() @click.option("--ftrack-url", envvar="FTRACK_SERVER", help="Ftrack server url") @@ -93,6 +118,8 @@ def eventserver(ftrack_url, on linux and window service). """ + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'eventserver' command.") PypeCommands().launch_eventservercli( ftrack_url, ftrack_user, @@ -117,6 +144,10 @@ def webpublisherwebserver(executable, upload_dir, host=None, port=None): Expect "pype.club" user created on Ftrack. """ + if AYON_SERVER_ENABLED: + raise RuntimeError( + "AYON does not support 'webpublisherwebserver' command." 
+        )
     PypeCommands().launch_webpublisher_webservercli(
         upload_dir=upload_dir,
         executable=executable,
@@ -165,122 +196,10 @@ def publish(paths, targets, gui):
     PypeCommands.publish(list(paths), targets, gui)
-
-@main.command()
-@click.argument("path")
-@click.option("-h", "--host", help="Host")
-@click.option("-u", "--user", help="User email address")
-@click.option("-p", "--project", help="Project")
-@click.option("-t", "--targets", help="Targets", default=None,
-              multiple=True)
-def remotepublishfromapp(project, path, host, user=None, targets=None):
-    """Start CLI publishing.
-
-    Publish collects json from paths provided as an argument.
-    More than one path is allowed.
-    """
-
-    PypeCommands.remotepublishfromapp(
-        project, path, host, user, targets=targets
-    )
-
-
-@main.command()
-@click.argument("path")
-@click.option("-u", "--user", help="User email address")
-@click.option("-p", "--project", help="Project")
-@click.option("-t", "--targets", help="Targets", default=None,
-              multiple=True)
-def remotepublish(project, path, user=None, targets=None):
-    """Start CLI publishing.
-
-    Publish collects json from paths provided as an argument.
-    More than one path is allowed.
-    """
-
-    PypeCommands.remotepublish(project, path, user, targets=targets)
-
-
-@main.command()
-@click.option("-p", "--project", required=True,
-              help="Name of the project the asset is under")
-@click.option("-a", "--asset", required=True,
-              help="Name of the asset to which we want to copy textures")
-@click.option("--path", required=True,
-              help="Path where textures are found",
-              type=click.Path(exists=True))
-def texturecopy(project, asset, path):
-    """Copy specified textures to provided asset path.
-
-    It validates that the project and asset exist. Then it will use speedcopy
-    to copy all textures found in all directories under --path to the
-    destination folder, determined by the texture template in anatomy. It
-    will use the source filename and automatically raise the version number
-    on the directory.
-
-    Results are copied without directory structure, so the output is flat.
-    Nothing is written to the database.
-    """
-
-    PypeCommands().texture_copy(project, asset, path)
-
-
-@main.command(context_settings={"ignore_unknown_options": True})
-@click.option("--app", help="Registered application name")
-@click.option("--project", help="Project name",
-              default=lambda: os.environ.get('AVALON_PROJECT', ''))
-@click.option("--asset", help="Asset name",
-              default=lambda: os.environ.get('AVALON_ASSET', ''))
-@click.option("--task", help="Task name",
-              default=lambda: os.environ.get('AVALON_TASK', ''))
-@click.option("--tools", help="List of tools to add")
-@click.option("--user", help="Pype user name",
-              default=lambda: os.environ.get('OPENPYPE_USERNAME', ''))
-@click.option("-fs",
-              "--ftrack-server",
-              help="Ftrack server url",
-              default=lambda: os.environ.get('FTRACK_SERVER', ''))
-@click.option("-fu",
-              "--ftrack-user",
-              help="Ftrack API user",
-              default=lambda: os.environ.get('FTRACK_API_USER', ''))
-@click.option("-fk",
-              "--ftrack-key",
-              help="Ftrack API key",
-              default=lambda: os.environ.get('FTRACK_API_KEY', ''))
-@click.argument('arguments', nargs=-1)
-def launch(app, project, asset, task,
-           ftrack_server, ftrack_user, ftrack_key, tools, arguments, user):
-    """Launch a registered application in the Pype context.
-
-    You can define applications in pype-config toml files. Project, asset name
-    and task name must be provided (even if they are not used by the app
-    itself). Optionally you can specify ftrack credentials if needed.
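
The `AliasedGroup` added near the top of this cli.py diff is the piece that lets `openpype_console addon ...` resolve to the existing `module` group. A minimal, self-contained sketch of the same pattern (the standalone `main` group and `hello` output here are illustrative, not part of the PR):

```python
import click


class AliasedGroup(click.Group):
    """Click group that resolves registered aliases to real commands."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._aliases = {}

    def set_alias(self, src_name, dst_name):
        # Map the alias (dst_name) back to the real command (src_name).
        self._aliases[dst_name] = src_name

    def get_command(self, ctx, cmd_name):
        if cmd_name in self._aliases:
            cmd_name = self._aliases[cmd_name]
        return super().get_command(ctx, cmd_name)


@click.group(cls=AliasedGroup)
def main():
    pass


@main.command()
def module():
    click.echo("hello from 'module'")


# 'addon' now dispatches to the same command object as 'module'.
main.set_alias("module", "addon")

if __name__ == "__main__":
    main()
```

Because the alias is resolved inside `get_command`, existing scripts that call `module` keep working while documentation can move to the new `addon` name.
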
- - ARGUMENTS are passed to launched application. - - """ - # TODO: this needs to switch for Settings - if ftrack_server: - os.environ["FTRACK_SERVER"] = ftrack_server - - if ftrack_server: - os.environ["FTRACK_API_USER"] = ftrack_user - - if ftrack_server: - os.environ["FTRACK_API_KEY"] = ftrack_key - - if user: - os.environ["OPENPYPE_USERNAME"] = user - - # test required - if not project or not asset or not task: - print("!!! Missing required arguments") - return - - PypeCommands().run_application(app, project, asset, task, tools, arguments) - - @main.command(context_settings={"ignore_unknown_options": True}) def projectmanager(): + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'projectmanager' command.") PypeCommands().launch_project_manager() @@ -378,12 +297,18 @@ def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant, persist, app_variant, timeout, setup_only) -@main.command() +@main.command(help="DEPRECATED - run sync server") +@click.pass_context @click.option("-a", "--active_site", required=True, - help="Name of active stie") -def syncserver(active_site): + help="Name of active site") +def syncserver(ctx, active_site): """Run sync site server in background. + Deprecated: + This command is deprecated and will be removed in future versions. + Use '~/openpype_console module sync_server syncservice' instead. + + Details: Some Site Sync use cases need to expose site to another one. For example if majority of artists work in studio, they are not using SS at all, but if you want to expose published assets to 'studio' site @@ -397,7 +322,12 @@ def syncserver(active_site): var OPENPYPE_LOCAL_ID set to 'active_site'. """ - PypeCommands().syncserver(active_site) + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'syncserver' command.") + + from openpype.modules.sync_server.sync_server_module import ( + syncservice) + ctx.invoke(syncservice, active_site=active_site) @main.command() @@ -409,6 +339,8 @@ def repack_version(directory): recalculating file checksums. It will try to use version detected in directory name. """ + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'repack-version' command.") PypeCommands().repack_version(directory) @@ -420,6 +352,9 @@ def repack_version(directory): "--dbonly", help="Store only Database data", default=False, is_flag=True) def pack_project(project, dirpath, dbonly): """Create a package of project with all files and database dump.""" + + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'pack-project' command.") PypeCommands().pack_project(project, dirpath, dbonly) @@ -432,6 +367,8 @@ def pack_project(project, dirpath, dbonly): "--dbonly", help="Store only Database data", default=False, is_flag=True) def unpack_project(zipfile, root, dbonly): """Create a package of project with all files and database dump.""" + if AYON_SERVER_ENABLED: + raise RuntimeError("AYON does not support 'unpack-project' command.") PypeCommands().unpack_project(zipfile, root, dbonly) @@ -446,9 +383,17 @@ def interactive(): Executable 'openpype_gui' on Windows won't work. 
""" - from openpype.version import __version__ + if AYON_SERVER_ENABLED: + version = os.environ["AYON_VERSION"] + banner = ( + f"AYON launcher {version}\nPython {sys.version} on {sys.platform}" + ) + else: + from openpype.version import __version__ - banner = f"OpenPype {__version__}\nPython {sys.version} on {sys.platform}" + banner = ( + f"OpenPype {__version__}\nPython {sys.version} on {sys.platform}" + ) code.interact(banner) @@ -457,11 +402,13 @@ def interactive(): is_flag=True, default=False) def version(build): """Print OpenPype version.""" + if AYON_SERVER_ENABLED: + print(os.environ["AYON_VERSION"]) + return from openpype.version import __version__ from igniter.bootstrap_repos import BootstrapRepos, OpenPypeVersion from pathlib import Path - import os if getattr(sys, 'frozen', False): local_version = BootstrapRepos.get_version( diff --git a/openpype/client/entities.py b/openpype/client/entities.py index adbdd7a47c..5d9654c611 100644 --- a/openpype/client/entities.py +++ b/openpype/client/entities.py @@ -1,1553 +1,6 @@ -"""Unclear if these will have public functions like these. +from openpype import AYON_SERVER_ENABLED -Goal is that most of functions here are called on (or with) an object -that has project name as a context (e.g. on 'ProjectEntity'?). - -+ We will need more specific functions doing very specific queries really fast. -""" - -import re -import collections - -import six -from bson.objectid import ObjectId - -from .mongo import get_project_database, get_project_connection - -PatternType = type(re.compile("")) - - -def _prepare_fields(fields, required_fields=None): - if not fields: - return None - - output = { - field: True - for field in fields - } - if "_id" not in output: - output["_id"] = True - - if required_fields: - for key in required_fields: - output[key] = True - return output - - -def convert_id(in_id): - """Helper function for conversion of id from string to ObjectId. - - Args: - in_id (Union[str, ObjectId, Any]): Entity id that should be converted - to right type for queries. - - Returns: - Union[ObjectId, Any]: Converted ids to ObjectId or in type. - """ - - if isinstance(in_id, six.string_types): - return ObjectId(in_id) - return in_id - - -def convert_ids(in_ids): - """Helper function for conversion of ids from string to ObjectId. - - Args: - in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that - should be converted to right type for queries. - - Returns: - List[ObjectId]: Converted ids to ObjectId. - """ - - _output = set() - for in_id in in_ids: - if in_id is not None: - _output.add(convert_id(in_id)) - return list(_output) - - -def get_projects(active=True, inactive=False, fields=None): - """Yield all project entity documents. - - Args: - active (Optional[bool]): Include active projects. Defaults to True. - inactive (Optional[bool]): Include inactive projects. - Defaults to False. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Yields: - dict: Project entity data which can be reduced to specified 'fields'. - None is returned if project with specified filters was not found. 
- """ - mongodb = get_project_database() - for project_name in mongodb.collection_names(): - if project_name in ("system.indexes",): - continue - project_doc = get_project( - project_name, active=active, inactive=inactive, fields=fields - ) - if project_doc is not None: - yield project_doc - - -def get_project(project_name, active=True, inactive=True, fields=None): - """Return project entity document by project name. - - Args: - project_name (str): Name of project. - active (Optional[bool]): Allow active project. Defaults to True. - inactive (Optional[bool]): Allow inactive project. Defaults to True. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Project entity data which can be reduced to - specified 'fields'. None is returned if project with specified - filters was not found. - """ - # Skip if both are disabled - if not active and not inactive: - return None - - query_filter = {"type": "project"} - # Keep query untouched if both should be available - if active and inactive: - pass - - # Add filter to keep only active - elif active: - query_filter["$or"] = [ - {"data.active": {"$exists": False}}, - {"data.active": True}, - ] - - # Add filter to keep only inactive - elif inactive: - query_filter["$or"] = [ - {"data.active": {"$exists": False}}, - {"data.active": False}, - ] - - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_whole_project(project_name): - """Receive all documents from project. - - Helper that can be used to get all document from whole project. For example - for backups etc. - - Returns: - Cursor: Query cursor as iterable which returns all documents from - project collection. - """ - - conn = get_project_connection(project_name) - return conn.find({}) - - -def get_asset_by_id(project_name, asset_id, fields=None): - """Receive asset data by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_id (Union[str, ObjectId]): Asset's id. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Asset entity data which can be reduced to - specified 'fields'. None is returned if asset with specified - filters was not found. - """ - - asset_id = convert_id(asset_id) - if not asset_id: - return None - - query_filter = {"type": "asset", "_id": asset_id} - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_asset_by_name(project_name, asset_name, fields=None): - """Receive asset data by its name. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_name (str): Asset's name. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Asset entity data which can be reduced to - specified 'fields'. None is returned if asset with specified - filters was not found. - """ - - if not asset_name: - return None - - query_filter = {"type": "asset", "name": asset_name} - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -# NOTE this could be just public function? -# - any better variable name instead of 'standard'? 
-# - same approach can be used for rest of types -def _get_assets( - project_name, - asset_ids=None, - asset_names=None, - parent_ids=None, - standard=True, - archived=False, - fields=None -): - """Assets for specified project by passed filters. - - Passed filters (ids and names) are always combined so all conditions must - match. - - To receive all assets from project just keep filters empty. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should - be found. - asset_names (Iterable[str]): Name assets that should be found. - parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. - standard (bool): Query standard assets (type 'asset'). - archived (bool): Query archived assets (type 'archived_asset'). - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Query cursor as iterable which returns asset documents matching - passed filters. - """ - - asset_types = [] - if standard: - asset_types.append("asset") - if archived: - asset_types.append("archived_asset") - - if not asset_types: - return [] - - if len(asset_types) == 1: - query_filter = {"type": asset_types[0]} - else: - query_filter = {"type": {"$in": asset_types}} - - if asset_ids is not None: - asset_ids = convert_ids(asset_ids) - if not asset_ids: - return [] - query_filter["_id"] = {"$in": asset_ids} - - if asset_names is not None: - if not asset_names: - return [] - query_filter["name"] = {"$in": list(asset_names)} - - if parent_ids is not None: - parent_ids = convert_ids(parent_ids) - if not parent_ids: - return [] - query_filter["data.visualParent"] = {"$in": parent_ids} - - conn = get_project_connection(project_name) - - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_assets( - project_name, - asset_ids=None, - asset_names=None, - parent_ids=None, - archived=False, - fields=None -): - """Assets for specified project by passed filters. - - Passed filters (ids and names) are always combined so all conditions must - match. - - To receive all assets from project just keep filters empty. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should - be found. - asset_names (Iterable[str]): Name assets that should be found. - parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. - archived (bool): Add also archived assets. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Query cursor as iterable which returns asset documents matching - passed filters. - """ - - return _get_assets( - project_name, - asset_ids, - asset_names, - parent_ids, - True, - archived, - fields - ) - - -def get_archived_assets( - project_name, - asset_ids=None, - asset_names=None, - parent_ids=None, - fields=None -): - """Archived assets for specified project by passed filters. - - Passed filters (ids and names) are always combined so all conditions must - match. - - To receive all archived assets from project just keep filters empty. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should - be found. - asset_names (Iterable[str]): Name assets that should be found. - parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. 
- fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Query cursor as iterable which returns asset documents matching - passed filters. - """ - - return _get_assets( - project_name, asset_ids, asset_names, parent_ids, False, True, fields - ) - - -def get_asset_ids_with_subsets(project_name, asset_ids=None): - """Find out which assets have existing subsets. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_ids (Iterable[Union[str, ObjectId]]): Look only for entered - asset ids. - - Returns: - Iterable[ObjectId]: Asset ids that have existing subsets. - """ - - subset_query = { - "type": "subset" - } - if asset_ids is not None: - asset_ids = convert_ids(asset_ids) - if not asset_ids: - return [] - subset_query["parent"] = {"$in": asset_ids} - - conn = get_project_connection(project_name) - result = conn.aggregate([ - { - "$match": subset_query - }, - { - "$group": { - "_id": "$parent", - "count": {"$sum": 1} - } - } - ]) - asset_ids_with_subsets = [] - for item in result: - asset_id = item["_id"] - count = item["count"] - if count > 0: - asset_ids_with_subsets.append(asset_id) - return asset_ids_with_subsets - - -def get_subset_by_id(project_name, subset_id, fields=None): - """Single subset entity data by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_id (Union[str, ObjectId]): Id of subset which should be found. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Subset entity data which can be reduced to - specified 'fields'. None is returned if subset with specified - filters was not found. - """ - - subset_id = convert_id(subset_id) - if not subset_id: - return None - - query_filters = {"type": "subset", "_id": subset_id} - conn = get_project_connection(project_name) - return conn.find_one(query_filters, _prepare_fields(fields)) - - -def get_subset_by_name(project_name, subset_name, asset_id, fields=None): - """Single subset entity data by its name and its version id. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_name (str): Name of subset. - asset_id (Union[str, ObjectId]): Id of parent asset. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Subset entity data which can be reduced to - specified 'fields'. None is returned if subset with specified - filters was not found. - """ - if not subset_name: - return None - - asset_id = convert_id(asset_id) - if not asset_id: - return None - - query_filters = { - "type": "subset", - "name": subset_name, - "parent": asset_id - } - conn = get_project_connection(project_name) - return conn.find_one(query_filters, _prepare_fields(fields)) - - -def get_subsets( - project_name, - subset_ids=None, - subset_names=None, - asset_ids=None, - names_by_asset_ids=None, - archived=False, - fields=None -): - """Subset entities data from one project filtered by entered filters. - - Filters are additive (all conditions must pass to return subset). - - Args: - project_name (str): Name of project where to look for queried entities. - subset_ids (Iterable[Union[str, ObjectId]]): Subset ids that should be - queried. Filter ignored if 'None' is passed. - subset_names (Iterable[str]): Subset names that should be queried. 
- Filter ignored if 'None' is passed. - asset_ids (Iterable[Union[str, ObjectId]]): Asset ids under which - should look for the subsets. Filter ignored if 'None' is passed. - names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering - using asset ids and list of subset names under the asset. - archived (bool): Look for archived subsets too. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Iterable cursor yielding all matching subsets. - """ - - subset_types = ["subset"] - if archived: - subset_types.append("archived_subset") - - if len(subset_types) == 1: - query_filter = {"type": subset_types[0]} - else: - query_filter = {"type": {"$in": subset_types}} - - if asset_ids is not None: - asset_ids = convert_ids(asset_ids) - if not asset_ids: - return [] - query_filter["parent"] = {"$in": asset_ids} - - if subset_ids is not None: - subset_ids = convert_ids(subset_ids) - if not subset_ids: - return [] - query_filter["_id"] = {"$in": subset_ids} - - if subset_names is not None: - if not subset_names: - return [] - query_filter["name"] = {"$in": list(subset_names)} - - if names_by_asset_ids is not None: - or_query = [] - for asset_id, names in names_by_asset_ids.items(): - if asset_id and names: - or_query.append({ - "parent": convert_id(asset_id), - "name": {"$in": list(names)} - }) - if not or_query: - return [] - query_filter["$or"] = or_query - - conn = get_project_connection(project_name) - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_subset_families(project_name, subset_ids=None): - """Set of main families of subsets. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_ids (Iterable[Union[str, ObjectId]]): Subset ids that should - be queried. All subsets from project are used if 'None' is passed. - - Returns: - set[str]: Main families of matching subsets. - """ - - subset_filter = { - "type": "subset" - } - if subset_ids is not None: - if not subset_ids: - return set() - subset_filter["_id"] = {"$in": list(subset_ids)} - - conn = get_project_connection(project_name) - result = list(conn.aggregate([ - {"$match": subset_filter}, - {"$project": { - "family": {"$arrayElemAt": ["$data.families", 0]} - }}, - {"$group": { - "_id": "family_group", - "families": {"$addToSet": "$family"} - }} - ])) - if result: - return set(result[0]["families"]) - return set() - - -def get_version_by_id(project_name, version_id, fields=None): - """Single version entity data by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - version_id (Union[str, ObjectId]): Id of version which should be found. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Version entity data which can be reduced to - specified 'fields'. None is returned if version with specified - filters was not found. - """ - - version_id = convert_id(version_id) - if not version_id: - return None - - query_filter = { - "type": {"$in": ["version", "hero_version"]}, - "_id": version_id - } - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_version_by_name(project_name, version, subset_id, fields=None): - """Single version entity data by its name and subset id. - - Args: - project_name (str): Name of project where to look for queried entities. 
- version (int): name of version entity (its version). - subset_id (Union[str, ObjectId]): Id of version which should be found. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Version entity data which can be reduced to - specified 'fields'. None is returned if version with specified - filters was not found. - """ - - subset_id = convert_id(subset_id) - if not subset_id: - return None - - conn = get_project_connection(project_name) - query_filter = { - "type": "version", - "parent": subset_id, - "name": version - } - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def version_is_latest(project_name, version_id): - """Is version the latest from its subset. - - Note: - Hero versions are considered as latest. - - Todo: - Maybe raise exception when version was not found? - - Args: - project_name (str):Name of project where to look for queried entities. - version_id (Union[str, ObjectId]): Version id which is checked. - - Returns: - bool: True if is latest version from subset else False. - """ - - version_id = convert_id(version_id) - if not version_id: - return False - version_doc = get_version_by_id( - project_name, version_id, fields=["_id", "type", "parent"] - ) - # What to do when version is not found? - if not version_doc: - return False - - if version_doc["type"] == "hero_version": - return True - - last_version = get_last_version_by_subset_id( - project_name, version_doc["parent"], fields=["_id"] - ) - return last_version["_id"] == version_id - - -def _get_versions( - project_name, - subset_ids=None, - version_ids=None, - versions=None, - standard=True, - hero=False, - fields=None -): - version_types = [] - if standard: - version_types.append("version") - - if hero: - version_types.append("hero_version") - - if not version_types: - return [] - elif len(version_types) == 1: - query_filter = {"type": version_types[0]} - else: - query_filter = {"type": {"$in": version_types}} - - if subset_ids is not None: - subset_ids = convert_ids(subset_ids) - if not subset_ids: - return [] - query_filter["parent"] = {"$in": subset_ids} - - if version_ids is not None: - version_ids = convert_ids(version_ids) - if not version_ids: - return [] - query_filter["_id"] = {"$in": version_ids} - - if versions is not None: - versions = list(versions) - if not versions: - return [] - - if len(versions) == 1: - query_filter["name"] = versions[0] - else: - query_filter["name"] = {"$in": versions} - - conn = get_project_connection(project_name) - - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_versions( - project_name, - version_ids=None, - subset_ids=None, - versions=None, - hero=False, - fields=None -): - """Version entities data from one project filtered by entered filters. - - Filters are additive (all conditions must pass to return subset). - - Args: - project_name (str): Name of project where to look for queried entities. - version_ids (Iterable[Union[str, ObjectId]]): Version ids that will - be queried. Filter ignored if 'None' is passed. - subset_ids (Iterable[str]): Subset ids that will be queried. - Filter ignored if 'None' is passed. - versions (Iterable[int]): Version names (as integers). - Filter ignored if 'None' is passed. - hero (bool): Look also for hero versions. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Iterable cursor yielding all matching versions. 
- """ - - return _get_versions( - project_name, - subset_ids, - version_ids, - versions, - standard=True, - hero=hero, - fields=fields - ) - - -def get_hero_version_by_subset_id(project_name, subset_id, fields=None): - """Hero version by subset id. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_id (Union[str, ObjectId]): Subset id under which - is hero version. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Hero version entity data which can be reduced to - specified 'fields'. None is returned if hero version with specified - filters was not found. - """ - - subset_id = convert_id(subset_id) - if not subset_id: - return None - - versions = list(_get_versions( - project_name, - subset_ids=[subset_id], - standard=False, - hero=True, - fields=fields - )) - if versions: - return versions[0] - return None - - -def get_hero_version_by_id(project_name, version_id, fields=None): - """Hero version by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - version_id (Union[str, ObjectId]): Hero version id. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Hero version entity data which can be reduced to - specified 'fields'. None is returned if hero version with specified - filters was not found. - """ - - version_id = convert_id(version_id) - if not version_id: - return None - - versions = list(_get_versions( - project_name, - version_ids=[version_id], - standard=False, - hero=True, - fields=fields - )) - if versions: - return versions[0] - return None - - -def get_hero_versions( - project_name, - subset_ids=None, - version_ids=None, - fields=None -): - """Hero version entities data from one project filtered by entered filters. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_ids (Iterable[Union[str, ObjectId]]): Subset ids for which - should look for hero versions. Filter ignored if 'None' is passed. - version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter - ignored if 'None' is passed. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor|list: Iterable yielding hero versions matching passed filters. - """ - - return _get_versions( - project_name, - subset_ids, - version_ids, - standard=False, - hero=True, - fields=fields - ) - - -def get_output_link_versions(project_name, version_id, fields=None): - """Versions where passed version was used as input. - - Question: - Not 100% sure about the usage of the function so the name and docstring - maybe does not match what it does? - - Args: - project_name (str): Name of project where to look for queried entities. - version_id (Union[str, ObjectId]): Version id which can be used - as input link for other versions. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Iterable: Iterable cursor yielding versions that are used as input - links for passed version. - """ - - version_id = convert_id(version_id) - if not version_id: - return [] - - conn = get_project_connection(project_name) - # Does make sense to look for hero versions? 
- query_filter = { - "type": "version", - "data.inputLinks.id": version_id - } - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_last_versions(project_name, subset_ids, active=None, fields=None): - """Latest versions for entered subset_ids. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids. - active (Optional[bool]): If True only active versions are returned. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - dict[ObjectId, int]: Key is subset id and value is last version name. - """ - - subset_ids = convert_ids(subset_ids) - if not subset_ids: - return {} - - if fields is not None: - fields = list(fields) - if not fields: - return {} - - # Avoid double query if only name and _id are requested - name_needed = False - limit_query = False - if fields: - fields_s = set(fields) - if "name" in fields_s: - name_needed = True - fields_s.remove("name") - - for field in ("_id", "parent"): - if field in fields_s: - fields_s.remove(field) - limit_query = len(fields_s) == 0 - - group_item = { - "_id": "$parent", - "_version_id": {"$last": "$_id"} - } - # Add name if name is needed (only for limit query) - if name_needed: - group_item["name"] = {"$last": "$name"} - - aggregate_filter = { - "type": "version", - "parent": {"$in": subset_ids} - } - if active is False: - aggregate_filter["data.active"] = active - elif active is True: - aggregate_filter["$or"] = [ - {"data.active": {"$exists": 0}}, - {"data.active": active}, - ] - - aggregation_pipeline = [ - # Find all versions of those subsets - {"$match": aggregate_filter}, - # Sorting versions all together - {"$sort": {"name": 1}}, - # Group them by "parent", but only take the last - {"$group": group_item} - ] - - conn = get_project_connection(project_name) - aggregate_result = conn.aggregate(aggregation_pipeline) - if limit_query: - output = {} - for item in aggregate_result: - subset_id = item["_id"] - item_data = {"_id": item["_version_id"], "parent": subset_id} - if name_needed: - item_data["name"] = item["name"] - output[subset_id] = item_data - return output - - version_ids = [ - doc["_version_id"] - for doc in aggregate_result - ] - - fields = _prepare_fields(fields, ["parent"]) - - version_docs = get_versions( - project_name, version_ids=version_ids, fields=fields - ) - - return { - version_doc["parent"]: version_doc - for version_doc in version_docs - } - - -def get_last_version_by_subset_id(project_name, subset_id, fields=None): - """Last version for passed subset id. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_id (Union[str, ObjectId]): Id of version which should be found. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Version entity data which can be reduced to - specified 'fields'. None is returned if version with specified - filters was not found. - """ - - subset_id = convert_id(subset_id) - if not subset_id: - return None - - last_versions = get_last_versions( - project_name, subset_ids=[subset_id], fields=fields - ) - return last_versions.get(subset_id) - - -def get_last_version_by_subset_name( - project_name, subset_name, asset_id=None, asset_name=None, fields=None -): - """Last version for passed subset name under asset id/name. - - It is required to pass 'asset_id' or 'asset_name'. 
Asset id is recommended - if is available. - - Args: - project_name (str): Name of project where to look for queried entities. - subset_name (str): Name of subset. - asset_id (Union[str, ObjectId]): Asset id which is parent of passed - subset name. - asset_name (str): Asset name which is parent of passed subset name. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Version entity data which can be reduced to - specified 'fields'. None is returned if version with specified - filters was not found. - """ - - if not asset_id and not asset_name: - return None - - if not asset_id: - asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"]) - if not asset_doc: - return None - asset_id = asset_doc["_id"] - subset_doc = get_subset_by_name( - project_name, subset_name, asset_id, fields=["_id"] - ) - if not subset_doc: - return None - return get_last_version_by_subset_id( - project_name, subset_doc["_id"], fields=fields - ) - - -def get_representation_by_id(project_name, representation_id, fields=None): - """Representation entity data by its id. - - Args: - project_name (str): Name of project where to look for queried entities. - representation_id (Union[str, ObjectId]): Representation id. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Representation entity data which can be reduced to - specified 'fields'. None is returned if representation with - specified filters was not found. - """ - - if not representation_id: - return None - - repre_types = ["representation", "archived_representation"] - query_filter = { - "type": {"$in": repre_types} - } - if representation_id is not None: - query_filter["_id"] = convert_id(representation_id) - - conn = get_project_connection(project_name) - - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_representation_by_name( - project_name, representation_name, version_id, fields=None -): - """Representation entity data by its name and its version id. - - Args: - project_name (str): Name of project where to look for queried entities. - representation_name (str): Representation name. - version_id (Union[str, ObjectId]): Id of parent version entity. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[dict[str, Any], None]: Representation entity data which can be - reduced to specified 'fields'. None is returned if representation - with specified filters was not found. 
- """ - - version_id = convert_id(version_id) - if not version_id or not representation_name: - return None - repre_types = ["representation", "archived_representations"] - query_filter = { - "type": {"$in": repre_types}, - "name": representation_name, - "parent": version_id - } - - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def _flatten_dict(data): - flatten_queue = collections.deque() - flatten_queue.append(data) - output = {} - while flatten_queue: - item = flatten_queue.popleft() - for key, value in item.items(): - if not isinstance(value, dict): - output[key] = value - continue - - tmp = {} - for subkey, subvalue in value.items(): - new_key = "{}.{}".format(key, subkey) - tmp[new_key] = subvalue - flatten_queue.append(tmp) - return output - - -def _regex_filters(filters): - output = [] - for key, value in filters.items(): - regexes = [] - a_values = [] - if isinstance(value, PatternType): - regexes.append(value) - elif isinstance(value, (list, tuple, set)): - for item in value: - if isinstance(item, PatternType): - regexes.append(item) - else: - a_values.append(item) - else: - a_values.append(value) - - key_filters = [] - if len(a_values) == 1: - key_filters.append({key: a_values[0]}) - elif a_values: - key_filters.append({key: {"$in": a_values}}) - - for regex in regexes: - key_filters.append({key: {"$regex": regex}}) - - if len(key_filters) == 1: - output.append(key_filters[0]) - else: - output.append({"$or": key_filters}) - - return output - - -def _get_representations( - project_name, - representation_ids, - representation_names, - version_ids, - context_filters, - names_by_version_ids, - standard, - archived, - fields -): - default_output = [] - repre_types = [] - if standard: - repre_types.append("representation") - if archived: - repre_types.append("archived_representation") - - if not repre_types: - return default_output - - if len(repre_types) == 1: - query_filter = {"type": repre_types[0]} - else: - query_filter = {"type": {"$in": repre_types}} - - if representation_ids is not None: - representation_ids = convert_ids(representation_ids) - if not representation_ids: - return default_output - query_filter["_id"] = {"$in": representation_ids} - - if representation_names is not None: - if not representation_names: - return default_output - query_filter["name"] = {"$in": list(representation_names)} - - if version_ids is not None: - version_ids = convert_ids(version_ids) - if not version_ids: - return default_output - query_filter["parent"] = {"$in": version_ids} - - or_queries = [] - if names_by_version_ids is not None: - or_query = [] - for version_id, names in names_by_version_ids.items(): - if version_id and names: - or_query.append({ - "parent": convert_id(version_id), - "name": {"$in": list(names)} - }) - if not or_query: - return default_output - or_queries.append(or_query) - - if context_filters is not None: - if not context_filters: - return [] - _flatten_filters = _flatten_dict(context_filters) - flatten_filters = {} - for key, value in _flatten_filters.items(): - if not key.startswith("context"): - key = "context.{}".format(key) - flatten_filters[key] = value - - for item in _regex_filters(flatten_filters): - for key, value in item.items(): - if key != "$or": - query_filter[key] = value - - elif value: - or_queries.append(value) - - if len(or_queries) == 1: - query_filter["$or"] = or_queries[0] - elif or_queries: - and_query = [] - for or_query in or_queries: - if isinstance(or_query, list): - or_query 
= {"$or": or_query} - and_query.append(or_query) - query_filter["$and"] = and_query - - conn = get_project_connection(project_name) - - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_representations( - project_name, - representation_ids=None, - representation_names=None, - version_ids=None, - context_filters=None, - names_by_version_ids=None, - archived=False, - standard=True, - fields=None -): - """Representation entities data from one project filtered by filters. - - Filters are additive (all conditions must pass to return subset). - - Args: - project_name (str): Name of project where to look for queried entities. - representation_ids (Iterable[Union[str, ObjectId]]): Representation ids - used as filter. Filter ignored if 'None' is passed. - representation_names (Iterable[str]): Representations names used - as filter. Filter ignored if 'None' is passed. - version_ids (Iterable[str]): Subset ids used as parent filter. Filter - ignored if 'None' is passed. - context_filters (Dict[str, List[str, PatternType]]): Filter by - representation context fields. - names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering - using version ids and list of names under the version. - archived (bool): Output will also contain archived representations. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Iterable cursor yielding all matching representations. - """ - - return _get_representations( - project_name=project_name, - representation_ids=representation_ids, - representation_names=representation_names, - version_ids=version_ids, - context_filters=context_filters, - names_by_version_ids=names_by_version_ids, - standard=standard, - archived=archived, - fields=fields - ) - - -def get_archived_representations( - project_name, - representation_ids=None, - representation_names=None, - version_ids=None, - context_filters=None, - names_by_version_ids=None, - fields=None -): - """Archived representation entities data from project with applied filters. - - Filters are additive (all conditions must pass to return subset). - - Args: - project_name (str): Name of project where to look for queried entities. - representation_ids (Iterable[Union[str, ObjectId]]): Representation ids - used as filter. Filter ignored if 'None' is passed. - representation_names (Iterable[str]): Representations names used - as filter. Filter ignored if 'None' is passed. - version_ids (Iterable[str]): Subset ids used as parent filter. Filter - ignored if 'None' is passed. - context_filters (Dict[str, List[str, PatternType]]): Filter by - representation context fields. - names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering - using version ids and list of names under the version. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Cursor: Iterable cursor yielding all matching representations. - """ - - return _get_representations( - project_name=project_name, - representation_ids=representation_ids, - representation_names=representation_names, - version_ids=version_ids, - context_filters=context_filters, - names_by_version_ids=names_by_version_ids, - standard=False, - archived=True, - fields=fields - ) - - -def get_representations_parents(project_name, representations): - """Prepare parents of representation entities. - - Each item of returned dictionary contains version, subset, asset - and project in that order. 
- - Args: - project_name (str): Name of project where to look for queried entities. - representations (List[dict]): Representation entities with at least - '_id' and 'parent' keys. - - Returns: - dict[ObjectId, tuple]: Parents by representation id. - """ - - repre_docs_by_version_id = collections.defaultdict(list) - version_docs_by_version_id = {} - version_docs_by_subset_id = collections.defaultdict(list) - subset_docs_by_subset_id = {} - subset_docs_by_asset_id = collections.defaultdict(list) - output = {} - for repre_doc in representations: - repre_id = repre_doc["_id"] - version_id = repre_doc["parent"] - output[repre_id] = (None, None, None, None) - repre_docs_by_version_id[version_id].append(repre_doc) - - version_docs = get_versions( - project_name, - version_ids=repre_docs_by_version_id.keys(), - hero=True - ) - for version_doc in version_docs: - version_id = version_doc["_id"] - subset_id = version_doc["parent"] - version_docs_by_version_id[version_id] = version_doc - version_docs_by_subset_id[subset_id].append(version_doc) - - subset_docs = get_subsets( - project_name, subset_ids=version_docs_by_subset_id.keys() - ) - for subset_doc in subset_docs: - subset_id = subset_doc["_id"] - asset_id = subset_doc["parent"] - subset_docs_by_subset_id[subset_id] = subset_doc - subset_docs_by_asset_id[asset_id].append(subset_doc) - - asset_docs = get_assets( - project_name, asset_ids=subset_docs_by_asset_id.keys() - ) - asset_docs_by_id = { - asset_doc["_id"]: asset_doc - for asset_doc in asset_docs - } - - project_doc = get_project(project_name) - - for version_id, repre_docs in repre_docs_by_version_id.items(): - asset_doc = None - subset_doc = None - version_doc = version_docs_by_version_id.get(version_id) - if version_doc: - subset_id = version_doc["parent"] - subset_doc = subset_docs_by_subset_id.get(subset_id) - if subset_doc: - asset_id = subset_doc["parent"] - asset_doc = asset_docs_by_id.get(asset_id) - - for repre_doc in repre_docs: - repre_id = repre_doc["_id"] - output[repre_id] = ( - version_doc, subset_doc, asset_doc, project_doc - ) - return output - - -def get_representation_parents(project_name, representation): - """Prepare parents of representation entity. - - Each item of returned dictionary contains version, subset, asset - and project in that order. - - Args: - project_name (str): Name of project where to look for queried entities. - representation (dict): Representation entities with at least - '_id' and 'parent' keys. - - Returns: - dict[ObjectId, tuple]: Parents by representation id. - """ - - if not representation: - return None - - repre_id = representation["_id"] - parents_by_repre_id = get_representations_parents( - project_name, [representation] - ) - return parents_by_repre_id[repre_id] - - -def get_thumbnail_id_from_source(project_name, src_type, src_id): - """Receive thumbnail id from source entity. - - Args: - project_name (str): Name of project where to look for queried entities. - src_type (str): Type of source entity ('asset', 'version'). - src_id (Union[str, ObjectId]): Id of source entity. - - Returns: - Union[ObjectId, None]: Thumbnail id assigned to entity. If Source - entity does not have any thumbnail id assigned. 
- """ - - if not src_type or not src_id: - return None - - query_filter = {"_id": convert_id(src_id)} - - conn = get_project_connection(project_name) - src_doc = conn.find_one(query_filter, {"data.thumbnail_id"}) - if src_doc: - return src_doc.get("data", {}).get("thumbnail_id") - return None - - -def get_thumbnails(project_name, thumbnail_ids, fields=None): - """Receive thumbnails entity data. - - Thumbnail entity can be used to receive binary content of thumbnail based - on its content and ThumbnailResolvers. - - Args: - project_name (str): Name of project where to look for queried entities. - thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail - entities. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - cursor: Cursor of queried documents. - """ - - if thumbnail_ids: - thumbnail_ids = convert_ids(thumbnail_ids) - - if not thumbnail_ids: - return [] - query_filter = { - "type": "thumbnail", - "_id": {"$in": thumbnail_ids} - } - conn = get_project_connection(project_name) - return conn.find(query_filter, _prepare_fields(fields)) - - -def get_thumbnail(project_name, thumbnail_id, fields=None): - """Receive thumbnail entity data. - - Args: - project_name (str): Name of project where to look for queried entities. - thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Thumbnail entity data which can be reduced to - specified 'fields'.None is returned if thumbnail with specified - filters was not found. - """ - - if not thumbnail_id: - return None - query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)} - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -def get_workfile_info( - project_name, asset_id, task_name, filename, fields=None -): - """Document with workfile information. - - Warning: - Query is based on filename and context which does not meant it will - find always right and expected result. Information have limited usage - and is not recommended to use it as source information about workfile. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_id (Union[str, ObjectId]): Id of asset entity. - task_name (str): Task name on asset. - fields (Optional[Iterable[str]]): Fields that should be returned. All - fields are returned if 'None' is passed. - - Returns: - Union[Dict, None]: Workfile entity data which can be reduced to - specified 'fields'.None is returned if workfile with specified - filters was not found. 
- """ - - if not asset_id or not task_name or not filename: - return None - - query_filter = { - "type": "workfile", - "parent": convert_id(asset_id), - "task_name": task_name, - "filename": filename - } - conn = get_project_connection(project_name) - return conn.find_one(query_filter, _prepare_fields(fields)) - - -""" -## Custom data storage: -- Settings - OP settings overrides and local settings -- Logging - logs from Logger -- Webpublisher - jobs -- Ftrack - events -- Maya - Shaders - - openpype/hosts/maya/api/shader_definition_editor.py - - openpype/hosts/maya/plugins/publish/validate_model_name.py - -## Global publish plugins -- openpype/plugins/publish/extract_hierarchy_avalon.py - Create: - - asset - Update: - - asset - -## Lib -- openpype/lib/avalon_context.py - Update: - - workfile data -- openpype/lib/project_backpack.py - Update: - - project -""" +if not AYON_SERVER_ENABLED: + from .mongo.entities import * +else: + from .server.entities import * diff --git a/openpype/client/entity_links.py b/openpype/client/entity_links.py index b74b4ce7f6..e18970de90 100644 --- a/openpype/client/entity_links.py +++ b/openpype/client/entity_links.py @@ -1,243 +1,6 @@ -from .mongo import get_project_connection -from .entities import ( - get_assets, - get_asset_by_id, - get_version_by_id, - get_representation_by_id, - convert_id, -) +from openpype import AYON_SERVER_ENABLED - -def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None): - """Extract linked asset ids from asset document. - - One of asset document or asset id must be passed. - - Note: - Asset links now works only from asset to assets. - - Args: - asset_doc (dict): Asset document from DB. - - Returns: - List[Union[ObjectId, str]]: Asset ids of input links. - """ - - output = [] - if not asset_doc and not asset_id: - return output - - if not asset_doc: - asset_doc = get_asset_by_id( - project_name, asset_id, fields=["data.inputLinks"] - ) - - input_links = asset_doc["data"].get("inputLinks") - if not input_links: - return output - - for item in input_links: - # Backwards compatibility for "_id" key which was replaced with - # "id" - if "_id" in item: - link_id = item["_id"] - else: - link_id = item["id"] - output.append(link_id) - return output - - -def get_linked_assets( - project_name, asset_doc=None, asset_id=None, fields=None -): - """Return linked assets based on passed asset document. - - One of asset document or asset id must be passed. - - Args: - project_name (str): Name of project where to look for queried entities. - asset_doc (Dict[str, Any]): Asset document from database. - asset_id (Union[ObjectId, str]): Asset id. Can be used instead of - asset document. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. - - Returns: - List[Dict[str, Any]]: Asset documents of input links for passed - asset doc. - """ - - if not asset_doc: - if not asset_id: - return [] - asset_doc = get_asset_by_id( - project_name, - asset_id, - fields=["data.inputLinks"] - ) - if not asset_doc: - return [] - - link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc) - if not link_ids: - return [] - - return list(get_assets(project_name, asset_ids=link_ids, fields=fields)) - - -def get_linked_representation_id( - project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None -): - """Returns list of linked ids of particular type (if provided). - - One of representation document or representation id must be passed. 
- Note: - Representation links now works only from representation through version - back to representations. - - Args: - project_name (str): Name of project where look for links. - repre_doc (Dict[str, Any]): Representation document. - repre_id (Union[ObjectId, str]): Representation id. - link_type (str): Type of link (e.g. 'reference', ...). - max_depth (int): Limit recursion level. Default: 0 - - Returns: - List[ObjectId] Linked representation ids. - """ - - if repre_doc: - repre_id = repre_doc["_id"] - - if repre_id: - repre_id = convert_id(repre_id) - - if not repre_id and not repre_doc: - return [] - - version_id = None - if repre_doc: - version_id = repre_doc.get("parent") - - if not version_id: - repre_doc = get_representation_by_id( - project_name, repre_id, fields=["parent"] - ) - version_id = repre_doc["parent"] - - if not version_id: - return [] - - version_doc = get_version_by_id( - project_name, version_id, fields=["type", "version_id"] - ) - if version_doc["type"] == "hero_version": - version_id = version_doc["version_id"] - - if max_depth is None: - max_depth = 0 - - match = { - "_id": version_id, - # Links are not stored to hero versions at this moment so filter - # is limited to just versions - "type": "version" - } - - graph_lookup = { - "from": project_name, - "startWith": "$data.inputLinks.id", - "connectFromField": "data.inputLinks.id", - "connectToField": "_id", - "as": "outputs_recursive", - "depthField": "depth" - } - if max_depth != 0: - # We offset by -1 since 0 basically means no recursion - # but the recursion only happens after the initial lookup - # for outputs. - graph_lookup["maxDepth"] = max_depth - 1 - - query_pipeline = [ - # Match - {"$match": match}, - # Recursive graph lookup for inputs - {"$graphLookup": graph_lookup} - ] - conn = get_project_connection(project_name) - result = conn.aggregate(query_pipeline) - referenced_version_ids = _process_referenced_pipeline_result( - result, link_type - ) - if not referenced_version_ids: - return [] - - ref_ids = conn.distinct( - "_id", - filter={ - "parent": {"$in": list(referenced_version_ids)}, - "type": "representation" - } - ) - - return list(ref_ids) - - -def _process_referenced_pipeline_result(result, link_type): - """Filters result from pipeline for particular link_type. - - Pipeline cannot use link_type directly in a query. 
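
The `$graphLookup` stage removed here (it lives on in openpype/client/mongo/entity_links.py) is the core of the recursive link walk: starting from one version, it follows `data.inputLinks.id` back to `_id` until the depth limit is hit. A runnable sketch against a local MongoDB; the connection string, database, collection name, and placeholder id are illustrative assumptions:

```python
from bson.objectid import ObjectId
from pymongo import MongoClient

# Illustrative connection; OpenPype resolves this via OpenPypeMongoConnection.
conn = MongoClient("mongodb://localhost:27017")["avalon"]["my_project"]

version_id = ObjectId("64b000000000000000000000")  # placeholder id
max_depth = 2

graph_lookup = {
    "from": "my_project",
    "startWith": "$data.inputLinks.id",
    "connectFromField": "data.inputLinks.id",
    "connectToField": "_id",
    "as": "outputs_recursive",
    "depthField": "depth",
}
if max_depth != 0:
    # maxDepth=0 still runs the first hop, hence the -1 offset in the source.
    graph_lookup["maxDepth"] = max_depth - 1

pipeline = [
    {"$match": {"_id": version_id, "type": "version"}},
    {"$graphLookup": graph_lookup},
]
for doc in conn.aggregate(pipeline):
    # 'outputs_recursive' arrives unordered; the source sorts it by 'depth'.
    linked = sorted(doc.get("outputs_recursive", []), key=lambda o: o["depth"])
    print(doc["_id"], [d["_id"] for d in linked])
```
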
- - Returns: - (list) - """ - - referenced_version_ids = set() - correctly_linked_ids = set() - for item in result: - input_links = item.get("data", {}).get("inputLinks") - if not input_links: - continue - - _filter_input_links( - input_links, - link_type, - correctly_linked_ids - ) - - # outputs_recursive in random order, sort by depth - outputs_recursive = item.get("outputs_recursive") - if not outputs_recursive: - continue - - for output in sorted(outputs_recursive, key=lambda o: o["depth"]): - output_links = output.get("data", {}).get("inputLinks") - if not output_links and output["type"] != "hero_version": - continue - - # Leaf - if output["_id"] not in correctly_linked_ids: - continue - - _filter_input_links( - output_links, - link_type, - correctly_linked_ids - ) - - referenced_version_ids.add(output["_id"]) - - return referenced_version_ids - - -def _filter_input_links(input_links, link_type, correctly_linked_ids): - if not input_links: # to handle hero versions - return - - for input_link in input_links: - if link_type and input_link["type"] != link_type: - continue - - link_id = input_link.get("id") or input_link.get("_id") - if link_id is not None: - correctly_linked_ids.add(link_id) +if not AYON_SERVER_ENABLED: + from .mongo.entity_links import * +else: + from .server.entity_links import * diff --git a/openpype/client/mongo/__init__.py b/openpype/client/mongo/__init__.py new file mode 100644 index 0000000000..9f62d7a9cf --- /dev/null +++ b/openpype/client/mongo/__init__.py @@ -0,0 +1,26 @@ +from .mongo import ( + MongoEnvNotSet, + get_default_components, + should_add_certificate_path_to_mongo_url, + validate_mongo_connection, + OpenPypeMongoConnection, + get_project_database, + get_project_connection, + load_json_file, + replace_project_documents, + store_project_documents, +) + + +__all__ = ( + "MongoEnvNotSet", + "get_default_components", + "should_add_certificate_path_to_mongo_url", + "validate_mongo_connection", + "OpenPypeMongoConnection", + "get_project_database", + "get_project_connection", + "load_json_file", + "replace_project_documents", + "store_project_documents", +) diff --git a/openpype/client/mongo/entities.py b/openpype/client/mongo/entities.py new file mode 100644 index 0000000000..260fde4594 --- /dev/null +++ b/openpype/client/mongo/entities.py @@ -0,0 +1,1555 @@ +"""Unclear if these will have public functions like these. + +Goal is that most of functions here are called on (or with) an object +that has project name as a context (e.g. on 'ProjectEntity'?). + ++ We will need more specific functions doing very specific queries really fast. +""" + +import re +import collections + +import six +from bson.objectid import ObjectId + +from .mongo import get_project_database, get_project_connection + +PatternType = type(re.compile("")) + + +def _prepare_fields(fields, required_fields=None): + if not fields: + return None + + output = { + field: True + for field in fields + } + if "_id" not in output: + output["_id"] = True + + if required_fields: + for key in required_fields: + output[key] = True + return output + + +def convert_id(in_id): + """Helper function for conversion of id from string to ObjectId. + + Args: + in_id (Union[str, ObjectId, Any]): Entity id that should be converted + to right type for queries. + + Returns: + Union[ObjectId, Any]: Converted ids to ObjectId or in type. 
+ """ + + if isinstance(in_id, six.string_types): + return ObjectId(in_id) + return in_id + + +def convert_ids(in_ids): + """Helper function for conversion of ids from string to ObjectId. + + Args: + in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that + should be converted to right type for queries. + + Returns: + List[ObjectId]: Converted ids to ObjectId. + """ + + _output = set() + for in_id in in_ids: + if in_id is not None: + _output.add(convert_id(in_id)) + return list(_output) + + +def get_projects(active=True, inactive=False, fields=None): + """Yield all project entity documents. + + Args: + active (Optional[bool]): Include active projects. Defaults to True. + inactive (Optional[bool]): Include inactive projects. + Defaults to False. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Yields: + dict: Project entity data which can be reduced to specified 'fields'. + None is returned if project with specified filters was not found. + """ + mongodb = get_project_database() + for project_name in mongodb.collection_names(): + if project_name in ("system.indexes",): + continue + project_doc = get_project( + project_name, active=active, inactive=inactive, fields=fields + ) + if project_doc is not None: + yield project_doc + + +def get_project(project_name, active=True, inactive=True, fields=None): + """Return project entity document by project name. + + Args: + project_name (str): Name of project. + active (Optional[bool]): Allow active project. Defaults to True. + inactive (Optional[bool]): Allow inactive project. Defaults to True. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Project entity data which can be reduced to + specified 'fields'. None is returned if project with specified + filters was not found. + """ + # Skip if both are disabled + if not active and not inactive: + return None + + query_filter = {"type": "project"} + # Keep query untouched if both should be available + if active and inactive: + pass + + # Add filter to keep only active + elif active: + query_filter["$or"] = [ + {"data.active": {"$exists": False}}, + {"data.active": True}, + ] + + # Add filter to keep only inactive + elif inactive: + query_filter["$or"] = [ + {"data.active": {"$exists": False}}, + {"data.active": False}, + ] + + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_whole_project(project_name): + """Receive all documents from project. + + Helper that can be used to get all document from whole project. For example + for backups etc. + + Returns: + Cursor: Query cursor as iterable which returns all documents from + project collection. + """ + + conn = get_project_connection(project_name) + return conn.find({}) + + +def get_asset_by_id(project_name, asset_id, fields=None): + """Receive asset data by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_id (Union[str, ObjectId]): Asset's id. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Asset entity data which can be reduced to + specified 'fields'. None is returned if asset with specified + filters was not found. 
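+
+    Example:
+        A minimal usage sketch; the project name and id are placeholders,
+        and the import assumes 'openpype.client' re-exports this helper.
+
+        >>> from openpype.client import get_asset_by_id
+        >>> asset = get_asset_by_id(
+        ...     "demo_project", "5f6b1c7a2c8d4e9f0a1b2c3d", fields=["name"]
+        ... )
+        >>> if asset is not None:
+        ...     print(asset["name"])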
+ """ + + asset_id = convert_id(asset_id) + if not asset_id: + return None + + query_filter = {"type": "asset", "_id": asset_id} + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_asset_by_name(project_name, asset_name, fields=None): + """Receive asset data by its name. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_name (str): Asset's name. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Asset entity data which can be reduced to + specified 'fields'. None is returned if asset with specified + filters was not found. + """ + + if not asset_name: + return None + + query_filter = {"type": "asset", "name": asset_name} + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +# NOTE this could be just public function? +# - any better variable name instead of 'standard'? +# - same approach can be used for rest of types +def _get_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + standard=True, + archived=False, + fields=None +): + """Assets for specified project by passed filters. + + Passed filters (ids and names) are always combined so all conditions must + match. + + To receive all assets from project just keep filters empty. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should + be found. + asset_names (Iterable[str]): Name assets that should be found. + parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. + standard (bool): Query standard assets (type 'asset'). + archived (bool): Query archived assets (type 'archived_asset'). + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Query cursor as iterable which returns asset documents matching + passed filters. + """ + + asset_types = [] + if standard: + asset_types.append("asset") + if archived: + asset_types.append("archived_asset") + + if not asset_types: + return [] + + if len(asset_types) == 1: + query_filter = {"type": asset_types[0]} + else: + query_filter = {"type": {"$in": asset_types}} + + if asset_ids is not None: + asset_ids = convert_ids(asset_ids) + if not asset_ids: + return [] + query_filter["_id"] = {"$in": asset_ids} + + if asset_names is not None: + if not asset_names: + return [] + query_filter["name"] = {"$in": list(asset_names)} + + if parent_ids is not None: + parent_ids = convert_ids(parent_ids) + if not parent_ids: + return [] + query_filter["data.visualParent"] = {"$in": parent_ids} + + conn = get_project_connection(project_name) + + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + archived=False, + fields=None +): + """Assets for specified project by passed filters. + + Passed filters (ids and names) are always combined so all conditions must + match. + + To receive all assets from project just keep filters empty. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should + be found. + asset_names (Iterable[str]): Name assets that should be found. + parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. 
+ archived (bool): Add also archived assets. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Query cursor as iterable which returns asset documents matching + passed filters. + """ + + return _get_assets( + project_name, + asset_ids, + asset_names, + parent_ids, + True, + archived, + fields + ) + + +def get_archived_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + fields=None +): + """Archived assets for specified project by passed filters. + + Passed filters (ids and names) are always combined so all conditions must + match. + + To receive all archived assets from project just keep filters empty. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (Iterable[Union[str, ObjectId]]): Asset ids that should + be found. + asset_names (Iterable[str]): Name assets that should be found. + parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Query cursor as iterable which returns asset documents matching + passed filters. + """ + + return _get_assets( + project_name, asset_ids, asset_names, parent_ids, False, True, fields + ) + + +def get_asset_ids_with_subsets(project_name, asset_ids=None): + """Find out which assets have existing subsets. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (Iterable[Union[str, ObjectId]]): Look only for entered + asset ids. + + Returns: + Iterable[ObjectId]: Asset ids that have existing subsets. + """ + + subset_query = { + "type": "subset" + } + if asset_ids is not None: + asset_ids = convert_ids(asset_ids) + if not asset_ids: + return [] + subset_query["parent"] = {"$in": asset_ids} + + conn = get_project_connection(project_name) + result = conn.aggregate([ + { + "$match": subset_query + }, + { + "$group": { + "_id": "$parent", + "count": {"$sum": 1} + } + } + ]) + asset_ids_with_subsets = [] + for item in result: + asset_id = item["_id"] + count = item["count"] + if count > 0: + asset_ids_with_subsets.append(asset_id) + return asset_ids_with_subsets + + +def get_subset_by_id(project_name, subset_id, fields=None): + """Single subset entity data by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_id (Union[str, ObjectId]): Id of subset which should be found. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Subset entity data which can be reduced to + specified 'fields'. None is returned if subset with specified + filters was not found. + """ + + subset_id = convert_id(subset_id) + if not subset_id: + return None + + query_filters = {"type": "subset", "_id": subset_id} + conn = get_project_connection(project_name) + return conn.find_one(query_filters, _prepare_fields(fields)) + + +def get_subset_by_name(project_name, subset_name, asset_id, fields=None): + """Single subset entity data by its name and its version id. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_name (str): Name of subset. + asset_id (Union[str, ObjectId]): Id of parent asset. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. 
+ + Returns: + Union[Dict, None]: Subset entity data which can be reduced to + specified 'fields'. None is returned if subset with specified + filters was not found. + """ + if not subset_name: + return None + + asset_id = convert_id(asset_id) + if not asset_id: + return None + + query_filters = { + "type": "subset", + "name": subset_name, + "parent": asset_id + } + conn = get_project_connection(project_name) + return conn.find_one(query_filters, _prepare_fields(fields)) + + +def get_subsets( + project_name, + subset_ids=None, + subset_names=None, + asset_ids=None, + names_by_asset_ids=None, + archived=False, + fields=None +): + """Subset entities data from one project filtered by entered filters. + + Filters are additive (all conditions must pass to return subset). + + Args: + project_name (str): Name of project where to look for queried entities. + subset_ids (Iterable[Union[str, ObjectId]]): Subset ids that should be + queried. Filter ignored if 'None' is passed. + subset_names (Iterable[str]): Subset names that should be queried. + Filter ignored if 'None' is passed. + asset_ids (Iterable[Union[str, ObjectId]]): Asset ids under which + should look for the subsets. Filter ignored if 'None' is passed. + names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering + using asset ids and list of subset names under the asset. + archived (bool): Look for archived subsets too. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Iterable cursor yielding all matching subsets. + """ + + subset_types = ["subset"] + if archived: + subset_types.append("archived_subset") + + if len(subset_types) == 1: + query_filter = {"type": subset_types[0]} + else: + query_filter = {"type": {"$in": subset_types}} + + if asset_ids is not None: + asset_ids = convert_ids(asset_ids) + if not asset_ids: + return [] + query_filter["parent"] = {"$in": asset_ids} + + if subset_ids is not None: + subset_ids = convert_ids(subset_ids) + if not subset_ids: + return [] + query_filter["_id"] = {"$in": subset_ids} + + if subset_names is not None: + if not subset_names: + return [] + query_filter["name"] = {"$in": list(subset_names)} + + if names_by_asset_ids is not None: + or_query = [] + for asset_id, names in names_by_asset_ids.items(): + if asset_id and names: + or_query.append({ + "parent": convert_id(asset_id), + "name": {"$in": list(names)} + }) + if not or_query: + return [] + query_filter["$or"] = or_query + + conn = get_project_connection(project_name) + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_subset_families(project_name, subset_ids=None): + """Set of main families of subsets. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_ids (Iterable[Union[str, ObjectId]]): Subset ids that should + be queried. All subsets from project are used if 'None' is passed. + + Returns: + set[str]: Main families of matching subsets. 
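+
+    Example:
+        Sketch only; "demo_project" is a placeholder project name and the
+        import assumes 'openpype.client' re-exports this helper.
+
+        >>> from openpype.client import get_subset_families
+        >>> families = get_subset_families("demo_project")
+        >>> print(sorted(families))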
+ """ + + subset_filter = { + "type": "subset" + } + if subset_ids is not None: + if not subset_ids: + return set() + subset_filter["_id"] = {"$in": list(subset_ids)} + + conn = get_project_connection(project_name) + result = list(conn.aggregate([ + {"$match": subset_filter}, + {"$project": { + "family": {"$arrayElemAt": ["$data.families", 0]} + }}, + {"$group": { + "_id": "family_group", + "families": {"$addToSet": "$family"} + }} + ])) + if result: + return set(result[0]["families"]) + return set() + + +def get_version_by_id(project_name, version_id, fields=None): + """Single version entity data by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + version_id (Union[str, ObjectId]): Id of version which should be found. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. + """ + + version_id = convert_id(version_id) + if not version_id: + return None + + query_filter = { + "type": {"$in": ["version", "hero_version"]}, + "_id": version_id + } + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_version_by_name(project_name, version, subset_id, fields=None): + """Single version entity data by its name and subset id. + + Args: + project_name (str): Name of project where to look for queried entities. + version (int): name of version entity (its version). + subset_id (Union[str, ObjectId]): Id of version which should be found. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. + """ + + subset_id = convert_id(subset_id) + if not subset_id: + return None + + conn = get_project_connection(project_name) + query_filter = { + "type": "version", + "parent": subset_id, + "name": version + } + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def version_is_latest(project_name, version_id): + """Is version the latest from its subset. + + Note: + Hero versions are considered as latest. + + Todo: + Maybe raise exception when version was not found? + + Args: + project_name (str):Name of project where to look for queried entities. + version_id (Union[str, ObjectId]): Version id which is checked. + + Returns: + bool: True if is latest version from subset else False. + """ + + version_id = convert_id(version_id) + if not version_id: + return False + version_doc = get_version_by_id( + project_name, version_id, fields=["_id", "type", "parent"] + ) + # What to do when version is not found? 
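+    # NOTE: A missing version document currently answers False ("not
+    # latest") instead of raising, which can silently hide stale or
+    # invalid version ids passed by callers.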
+ if not version_doc: + return False + + if version_doc["type"] == "hero_version": + return True + + last_version = get_last_version_by_subset_id( + project_name, version_doc["parent"], fields=["_id"] + ) + return last_version["_id"] == version_id + + +def _get_versions( + project_name, + subset_ids=None, + version_ids=None, + versions=None, + standard=True, + hero=False, + fields=None +): + version_types = [] + if standard: + version_types.append("version") + + if hero: + version_types.append("hero_version") + + if not version_types: + return [] + elif len(version_types) == 1: + query_filter = {"type": version_types[0]} + else: + query_filter = {"type": {"$in": version_types}} + + if subset_ids is not None: + subset_ids = convert_ids(subset_ids) + if not subset_ids: + return [] + query_filter["parent"] = {"$in": subset_ids} + + if version_ids is not None: + version_ids = convert_ids(version_ids) + if not version_ids: + return [] + query_filter["_id"] = {"$in": version_ids} + + if versions is not None: + versions = list(versions) + if not versions: + return [] + + if len(versions) == 1: + query_filter["name"] = versions[0] + else: + query_filter["name"] = {"$in": versions} + + conn = get_project_connection(project_name) + + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_versions( + project_name, + version_ids=None, + subset_ids=None, + versions=None, + hero=False, + fields=None +): + """Version entities data from one project filtered by entered filters. + + Filters are additive (all conditions must pass to return subset). + + Args: + project_name (str): Name of project where to look for queried entities. + version_ids (Iterable[Union[str, ObjectId]]): Version ids that will + be queried. Filter ignored if 'None' is passed. + subset_ids (Iterable[str]): Subset ids that will be queried. + Filter ignored if 'None' is passed. + versions (Iterable[int]): Version names (as integers). + Filter ignored if 'None' is passed. + hero (bool): Look also for hero versions. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Iterable cursor yielding all matching versions. + """ + + return _get_versions( + project_name, + subset_ids, + version_ids, + versions, + standard=True, + hero=hero, + fields=fields + ) + + +def get_hero_version_by_subset_id(project_name, subset_id, fields=None): + """Hero version by subset id. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_id (Union[str, ObjectId]): Subset id under which + is hero version. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Hero version entity data which can be reduced to + specified 'fields'. None is returned if hero version with specified + filters was not found. + """ + + subset_id = convert_id(subset_id) + if not subset_id: + return None + + versions = list(_get_versions( + project_name, + subset_ids=[subset_id], + standard=False, + hero=True, + fields=fields + )) + if versions: + return versions[0] + return None + + +def get_hero_version_by_id(project_name, version_id, fields=None): + """Hero version by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + version_id (Union[str, ObjectId]): Hero version id. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. 
+
+    Returns:
+        Union[Dict, None]: Hero version entity data which can be reduced to
+            specified 'fields'. None is returned if hero version with specified
+            filters was not found.
+    """
+
+    version_id = convert_id(version_id)
+    if not version_id:
+        return None
+
+    versions = list(_get_versions(
+        project_name,
+        version_ids=[version_id],
+        standard=False,
+        hero=True,
+        fields=fields
+    ))
+    if versions:
+        return versions[0]
+    return None
+
+
+def get_hero_versions(
+    project_name,
+    subset_ids=None,
+    version_ids=None,
+    fields=None
+):
+    """Hero version entities data from one project filtered by entered filters.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        subset_ids (Iterable[Union[str, ObjectId]]): Subset ids for which
+            hero versions should be found. Filter ignored if 'None' is passed.
+        version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter
+            ignored if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
+
+    Returns:
+        Cursor|list: Iterable yielding hero versions matching passed filters.
+    """
+
+    return _get_versions(
+        project_name,
+        subset_ids,
+        version_ids,
+        standard=False,
+        hero=True,
+        fields=fields
+    )
+
+
+def get_output_link_versions(project_name, version_id, fields=None):
+    """Versions where passed version was used as input.
+
+    Question:
+        Not 100% sure about the usage of this function, so the name and
+        docstring may not match what it actually does.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        version_id (Union[str, ObjectId]): Version id which can be used
+            as input link for other versions.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
+
+    Returns:
+        Iterable: Iterable cursor yielding versions that are used as input
+            links for passed version.
+    """
+
+    version_id = convert_id(version_id)
+    if not version_id:
+        return []
+
+    conn = get_project_connection(project_name)
+    # Does it make sense to look for hero versions too?
+    query_filter = {
+        "type": "version",
+        "data.inputLinks.id": version_id
+    }
+    return conn.find(query_filter, _prepare_fields(fields))
+
+
+def get_last_versions(project_name, subset_ids, active=None, fields=None):
+    """Latest versions for entered subset_ids.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids.
+        active (Optional[bool]): True returns only active versions (versions
+            without the flag are treated as active), False only versions
+            explicitly marked inactive, and None disables the filter.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
+
+    Returns:
+        dict[ObjectId, Dict[str, Any]]: Queried last version documents
+            by subset id.
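+
+    Example:
+        Sketch of the intended call pattern; the project name and subset
+        id are placeholders, and the import assumes 'openpype.client'
+        re-exports this helper.
+
+        >>> from openpype.client import get_last_versions
+        >>> last_by_subset_id = get_last_versions(
+        ...     "demo_project",
+        ...     subset_ids=["5f6b1c7a2c8d4e9f0a1b2c3d"],
+        ...     fields=["name"]
+        ... )
+        >>> for subset_id, version_doc in last_by_subset_id.items():
+        ...     print(subset_id, version_doc["name"])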
+ """ + + subset_ids = convert_ids(subset_ids) + if not subset_ids: + return {} + + if fields is not None: + fields = list(fields) + if not fields: + return {} + + # Avoid double query if only name and _id are requested + name_needed = False + limit_query = False + if fields: + fields_s = set(fields) + if "name" in fields_s: + name_needed = True + fields_s.remove("name") + + for field in ("_id", "parent"): + if field in fields_s: + fields_s.remove(field) + limit_query = len(fields_s) == 0 + + group_item = { + "_id": "$parent", + "_version_id": {"$last": "$_id"} + } + # Add name if name is needed (only for limit query) + if name_needed: + group_item["name"] = {"$last": "$name"} + + aggregate_filter = { + "type": "version", + "parent": {"$in": subset_ids} + } + if active is False: + aggregate_filter["data.active"] = active + elif active is True: + aggregate_filter["$or"] = [ + {"data.active": {"$exists": 0}}, + {"data.active": active}, + ] + + aggregation_pipeline = [ + # Find all versions of those subsets + {"$match": aggregate_filter}, + # Sorting versions all together + {"$sort": {"name": 1}}, + # Group them by "parent", but only take the last + {"$group": group_item} + ] + + conn = get_project_connection(project_name) + aggregate_result = conn.aggregate(aggregation_pipeline) + if limit_query: + output = {} + for item in aggregate_result: + subset_id = item["_id"] + item_data = {"_id": item["_version_id"], "parent": subset_id} + if name_needed: + item_data["name"] = item["name"] + output[subset_id] = item_data + return output + + version_ids = [ + doc["_version_id"] + for doc in aggregate_result + ] + + fields = _prepare_fields(fields, ["parent"]) + + version_docs = get_versions( + project_name, version_ids=version_ids, fields=fields + ) + + return { + version_doc["parent"]: version_doc + for version_doc in version_docs + } + + +def get_last_version_by_subset_id(project_name, subset_id, fields=None): + """Last version for passed subset id. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_id (Union[str, ObjectId]): Id of version which should be found. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. + """ + + subset_id = convert_id(subset_id) + if not subset_id: + return None + + last_versions = get_last_versions( + project_name, subset_ids=[subset_id], fields=fields + ) + return last_versions.get(subset_id) + + +def get_last_version_by_subset_name( + project_name, subset_name, asset_id=None, asset_name=None, fields=None +): + """Last version for passed subset name under asset id/name. + + It is required to pass 'asset_id' or 'asset_name'. Asset id is recommended + if is available. + + Args: + project_name (str): Name of project where to look for queried entities. + subset_name (str): Name of subset. + asset_id (Union[str, ObjectId]): Asset id which is parent of passed + subset name. + asset_name (str): Asset name which is parent of passed subset name. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. 
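+
+    Example:
+        Sketch only; the project, subset and asset names are placeholders.
+
+        >>> from openpype.client import get_last_version_by_subset_name
+        >>> version_doc = get_last_version_by_subset_name(
+        ...     "demo_project",
+        ...     "modelMain",
+        ...     asset_name="hero_character",
+        ...     fields=["name"]
+        ... )
+        >>> if version_doc is not None:
+        ...     print(version_doc["name"])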
+ """ + + if not asset_id and not asset_name: + return None + + if not asset_id: + asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"]) + if not asset_doc: + return None + asset_id = asset_doc["_id"] + subset_doc = get_subset_by_name( + project_name, subset_name, asset_id, fields=["_id"] + ) + if not subset_doc: + return None + return get_last_version_by_subset_id( + project_name, subset_doc["_id"], fields=fields + ) + + +def get_representation_by_id(project_name, representation_id, fields=None): + """Representation entity data by its id. + + Args: + project_name (str): Name of project where to look for queried entities. + representation_id (Union[str, ObjectId]): Representation id. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Representation entity data which can be reduced to + specified 'fields'. None is returned if representation with + specified filters was not found. + """ + + if not representation_id: + return None + + repre_types = ["representation", "archived_representation"] + query_filter = { + "type": {"$in": repre_types} + } + if representation_id is not None: + query_filter["_id"] = convert_id(representation_id) + + conn = get_project_connection(project_name) + + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_representation_by_name( + project_name, representation_name, version_id, fields=None +): + """Representation entity data by its name and its version id. + + Args: + project_name (str): Name of project where to look for queried entities. + representation_name (str): Representation name. + version_id (Union[str, ObjectId]): Id of parent version entity. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[dict[str, Any], None]: Representation entity data which can be + reduced to specified 'fields'. None is returned if representation + with specified filters was not found. 
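+
+    Example:
+        Sketch only; the version id is a placeholder.
+
+        >>> from openpype.client import get_representation_by_name
+        >>> repre_doc = get_representation_by_name(
+        ...     "demo_project", "exr", "5f6b1c7a2c8d4e9f0a1b2c3d"
+        ... )
+        >>> if repre_doc is not None:
+        ...     print(repre_doc["name"])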
+    """
+
+    version_id = convert_id(version_id)
+    if not version_id or not representation_name:
+        return None
+    # NOTE: 'archived_representation' matches the type name used by
+    # 'get_representation_by_id' and '_get_representations' below.
+    repre_types = ["representation", "archived_representation"]
+    query_filter = {
+        "type": {"$in": repre_types},
+        "name": representation_name,
+        "parent": version_id
+    }
+
+    conn = get_project_connection(project_name)
+    return conn.find_one(query_filter, _prepare_fields(fields))
+
+
+def _flatten_dict(data):
+    flatten_queue = collections.deque()
+    flatten_queue.append(data)
+    output = {}
+    while flatten_queue:
+        item = flatten_queue.popleft()
+        for key, value in item.items():
+            if not isinstance(value, dict):
+                output[key] = value
+                continue
+
+            tmp = {}
+            for subkey, subvalue in value.items():
+                new_key = "{}.{}".format(key, subkey)
+                tmp[new_key] = subvalue
+            flatten_queue.append(tmp)
+    return output
+
+
+def _regex_filters(filters):
+    output = []
+    for key, value in filters.items():
+        regexes = []
+        a_values = []
+        if isinstance(value, PatternType):
+            regexes.append(value)
+        elif isinstance(value, (list, tuple, set)):
+            for item in value:
+                if isinstance(item, PatternType):
+                    regexes.append(item)
+                else:
+                    a_values.append(item)
+        else:
+            a_values.append(value)
+
+        key_filters = []
+        if len(a_values) == 1:
+            key_filters.append({key: a_values[0]})
+        elif a_values:
+            key_filters.append({key: {"$in": a_values}})
+
+        for regex in regexes:
+            key_filters.append({key: {"$regex": regex}})
+
+        if len(key_filters) == 1:
+            output.append(key_filters[0])
+        else:
+            output.append({"$or": key_filters})
+
+    return output
+
+
+def _get_representations(
+    project_name,
+    representation_ids,
+    representation_names,
+    version_ids,
+    context_filters,
+    names_by_version_ids,
+    standard,
+    archived,
+    fields
+):
+    default_output = []
+    repre_types = []
+    if standard:
+        repre_types.append("representation")
+    if archived:
+        repre_types.append("archived_representation")
+
+    if not repre_types:
+        return default_output
+
+    if len(repre_types) == 1:
+        query_filter = {"type": repre_types[0]}
+    else:
+        query_filter = {"type": {"$in": repre_types}}
+
+    if representation_ids is not None:
+        representation_ids = convert_ids(representation_ids)
+        if not representation_ids:
+            return default_output
+        query_filter["_id"] = {"$in": representation_ids}
+
+    if representation_names is not None:
+        if not representation_names:
+            return default_output
+        query_filter["name"] = {"$in": list(representation_names)}
+
+    if version_ids is not None:
+        version_ids = convert_ids(version_ids)
+        if not version_ids:
+            return default_output
+        query_filter["parent"] = {"$in": version_ids}
+
+    or_queries = []
+    if names_by_version_ids is not None:
+        or_query = []
+        for version_id, names in names_by_version_ids.items():
+            if version_id and names:
+                or_query.append({
+                    "parent": convert_id(version_id),
+                    "name": {"$in": list(names)}
+                })
+        if not or_query:
+            return default_output
+        or_queries.append(or_query)
+
+    if context_filters is not None:
+        if not context_filters:
+            return []
+        _flatten_filters = _flatten_dict(context_filters)
+        flatten_filters = {}
+        for key, value in _flatten_filters.items():
+            if not key.startswith("context"):
+                key = "context.{}".format(key)
+            flatten_filters[key] = value
+
+        for item in _regex_filters(flatten_filters):
+            for key, value in item.items():
+                if key != "$or":
+                    query_filter[key] = value
+
+                elif value:
+                    or_queries.append(value)
+
+    if len(or_queries) == 1:
+        query_filter["$or"] = or_queries[0]
+    elif or_queries:
+        and_query = []
+        for or_query in or_queries:
+            if isinstance(or_query, list):
+                or_query 
= {"$or": or_query} + and_query.append(or_query) + query_filter["$and"] = and_query + + conn = get_project_connection(project_name) + + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_representations( + project_name, + representation_ids=None, + representation_names=None, + version_ids=None, + context_filters=None, + names_by_version_ids=None, + archived=False, + standard=True, + fields=None +): + """Representation entities data from one project filtered by filters. + + Filters are additive (all conditions must pass to return subset). + + Args: + project_name (str): Name of project where to look for queried entities. + representation_ids (Iterable[Union[str, ObjectId]]): Representation ids + used as filter. Filter ignored if 'None' is passed. + representation_names (Iterable[str]): Representations names used + as filter. Filter ignored if 'None' is passed. + version_ids (Iterable[str]): Subset ids used as parent filter. Filter + ignored if 'None' is passed. + context_filters (Dict[str, List[str, PatternType]]): Filter by + representation context fields. + names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering + using version ids and list of names under the version. + archived (bool): Output will also contain archived representations. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Iterable cursor yielding all matching representations. + """ + + return _get_representations( + project_name=project_name, + representation_ids=representation_ids, + representation_names=representation_names, + version_ids=version_ids, + context_filters=context_filters, + names_by_version_ids=names_by_version_ids, + standard=standard, + archived=archived, + fields=fields + ) + + +def get_archived_representations( + project_name, + representation_ids=None, + representation_names=None, + version_ids=None, + context_filters=None, + names_by_version_ids=None, + fields=None +): + """Archived representation entities data from project with applied filters. + + Filters are additive (all conditions must pass to return subset). + + Args: + project_name (str): Name of project where to look for queried entities. + representation_ids (Iterable[Union[str, ObjectId]]): Representation ids + used as filter. Filter ignored if 'None' is passed. + representation_names (Iterable[str]): Representations names used + as filter. Filter ignored if 'None' is passed. + version_ids (Iterable[str]): Subset ids used as parent filter. Filter + ignored if 'None' is passed. + context_filters (Dict[str, List[str, PatternType]]): Filter by + representation context fields. + names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering + using version ids and list of names under the version. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Cursor: Iterable cursor yielding all matching representations. + """ + + return _get_representations( + project_name=project_name, + representation_ids=representation_ids, + representation_names=representation_names, + version_ids=version_ids, + context_filters=context_filters, + names_by_version_ids=names_by_version_ids, + standard=False, + archived=True, + fields=fields + ) + + +def get_representations_parents(project_name, representations): + """Prepare parents of representation entities. + + Each item of returned dictionary contains version, subset, asset + and project in that order. 
+ + Args: + project_name (str): Name of project where to look for queried entities. + representations (List[dict]): Representation entities with at least + '_id' and 'parent' keys. + + Returns: + dict[ObjectId, tuple]: Parents by representation id. + """ + + repre_docs_by_version_id = collections.defaultdict(list) + version_docs_by_version_id = {} + version_docs_by_subset_id = collections.defaultdict(list) + subset_docs_by_subset_id = {} + subset_docs_by_asset_id = collections.defaultdict(list) + output = {} + for repre_doc in representations: + repre_id = repre_doc["_id"] + version_id = repre_doc["parent"] + output[repre_id] = (None, None, None, None) + repre_docs_by_version_id[version_id].append(repre_doc) + + version_docs = get_versions( + project_name, + version_ids=repre_docs_by_version_id.keys(), + hero=True + ) + for version_doc in version_docs: + version_id = version_doc["_id"] + subset_id = version_doc["parent"] + version_docs_by_version_id[version_id] = version_doc + version_docs_by_subset_id[subset_id].append(version_doc) + + subset_docs = get_subsets( + project_name, subset_ids=version_docs_by_subset_id.keys() + ) + for subset_doc in subset_docs: + subset_id = subset_doc["_id"] + asset_id = subset_doc["parent"] + subset_docs_by_subset_id[subset_id] = subset_doc + subset_docs_by_asset_id[asset_id].append(subset_doc) + + asset_docs = get_assets( + project_name, asset_ids=subset_docs_by_asset_id.keys() + ) + asset_docs_by_id = { + asset_doc["_id"]: asset_doc + for asset_doc in asset_docs + } + + project_doc = get_project(project_name) + + for version_id, repre_docs in repre_docs_by_version_id.items(): + asset_doc = None + subset_doc = None + version_doc = version_docs_by_version_id.get(version_id) + if version_doc: + subset_id = version_doc["parent"] + subset_doc = subset_docs_by_subset_id.get(subset_id) + if subset_doc: + asset_id = subset_doc["parent"] + asset_doc = asset_docs_by_id.get(asset_id) + + for repre_doc in repre_docs: + repre_id = repre_doc["_id"] + output[repre_id] = ( + version_doc, subset_doc, asset_doc, project_doc + ) + return output + + +def get_representation_parents(project_name, representation): + """Prepare parents of representation entity. + + Each item of returned dictionary contains version, subset, asset + and project in that order. + + Args: + project_name (str): Name of project where to look for queried entities. + representation (dict): Representation entities with at least + '_id' and 'parent' keys. + + Returns: + dict[ObjectId, tuple]: Parents by representation id. + """ + + if not representation: + return None + + repre_id = representation["_id"] + parents_by_repre_id = get_representations_parents( + project_name, [representation] + ) + return parents_by_repre_id[repre_id] + + +def get_thumbnail_id_from_source(project_name, src_type, src_id): + """Receive thumbnail id from source entity. + + Args: + project_name (str): Name of project where to look for queried entities. + src_type (str): Type of source entity ('asset', 'version'). + src_id (Union[str, ObjectId]): Id of source entity. + + Returns: + Union[ObjectId, None]: Thumbnail id assigned to entity. If Source + entity does not have any thumbnail id assigned. 
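+
+    Example:
+        Sketch only; the asset id is a placeholder.
+
+        >>> from openpype.client import get_thumbnail_id_from_source
+        >>> thumbnail_id = get_thumbnail_id_from_source(
+        ...     "demo_project", "asset", "5f6b1c7a2c8d4e9f0a1b2c3d"
+        ... )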
+ """ + + if not src_type or not src_id: + return None + + query_filter = {"_id": convert_id(src_id)} + + conn = get_project_connection(project_name) + src_doc = conn.find_one(query_filter, {"data.thumbnail_id"}) + if src_doc: + return src_doc.get("data", {}).get("thumbnail_id") + return None + + +def get_thumbnails(project_name, thumbnail_ids, fields=None): + """Receive thumbnails entity data. + + Thumbnail entity can be used to receive binary content of thumbnail based + on its content and ThumbnailResolvers. + + Args: + project_name (str): Name of project where to look for queried entities. + thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail + entities. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + cursor: Cursor of queried documents. + """ + + if thumbnail_ids: + thumbnail_ids = convert_ids(thumbnail_ids) + + if not thumbnail_ids: + return [] + query_filter = { + "type": "thumbnail", + "_id": {"$in": thumbnail_ids} + } + conn = get_project_connection(project_name) + return conn.find(query_filter, _prepare_fields(fields)) + + +def get_thumbnail( + project_name, thumbnail_id, entity_type, entity_id, fields=None +): + """Receive thumbnail entity data. + + Args: + project_name (str): Name of project where to look for queried entities. + thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Thumbnail entity data which can be reduced to + specified 'fields'.None is returned if thumbnail with specified + filters was not found. + """ + + if not thumbnail_id: + return None + query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)} + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +def get_workfile_info( + project_name, asset_id, task_name, filename, fields=None +): + """Document with workfile information. + + Warning: + Query is based on filename and context which does not meant it will + find always right and expected result. Information have limited usage + and is not recommended to use it as source information about workfile. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_id (Union[str, ObjectId]): Id of asset entity. + task_name (str): Task name on asset. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Workfile entity data which can be reduced to + specified 'fields'.None is returned if workfile with specified + filters was not found. 
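+
+    Example:
+        Sketch only; the asset id, task name and filename are placeholders.
+
+        >>> from openpype.client import get_workfile_info
+        >>> workfile_doc = get_workfile_info(
+        ...     "demo_project",
+        ...     "5f6b1c7a2c8d4e9f0a1b2c3d",
+        ...     "modeling",
+        ...     "demo_hero_modeling_v001.ma"
+        ... )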
+ """ + + if not asset_id or not task_name or not filename: + return None + + query_filter = { + "type": "workfile", + "parent": convert_id(asset_id), + "task_name": task_name, + "filename": filename + } + conn = get_project_connection(project_name) + return conn.find_one(query_filter, _prepare_fields(fields)) + + +""" +## Custom data storage: +- Settings - OP settings overrides and local settings +- Logging - logs from Logger +- Webpublisher - jobs +- Ftrack - events +- Maya - Shaders + - openpype/hosts/maya/api/shader_definition_editor.py + - openpype/hosts/maya/plugins/publish/validate_model_name.py + +## Global publish plugins +- openpype/plugins/publish/extract_hierarchy_avalon.py + Create: + - asset + Update: + - asset + +## Lib +- openpype/lib/avalon_context.py + Update: + - workfile data +- openpype/lib/project_backpack.py + Update: + - project +""" diff --git a/openpype/client/mongo/entity_links.py b/openpype/client/mongo/entity_links.py new file mode 100644 index 0000000000..fd13a2d83b --- /dev/null +++ b/openpype/client/mongo/entity_links.py @@ -0,0 +1,240 @@ +from .mongo import get_project_connection +from .entities import ( + get_assets, + get_asset_by_id, + get_version_by_id, + get_representation_by_id, + convert_id, +) + + +def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None): + """Extract linked asset ids from asset document. + + One of asset document or asset id must be passed. + + Note: + Asset links now works only from asset to assets. + + Args: + asset_doc (dict): Asset document from DB. + + Returns: + List[Union[ObjectId, str]]: Asset ids of input links. + """ + + output = [] + if not asset_doc and not asset_id: + return output + + if not asset_doc: + asset_doc = get_asset_by_id( + project_name, asset_id, fields=["data.inputLinks"] + ) + + input_links = asset_doc["data"].get("inputLinks") + if not input_links: + return output + + for item in input_links: + # Backwards compatibility for "_id" key which was replaced with + # "id" + if "_id" in item: + link_id = item["_id"] + else: + link_id = item["id"] + output.append(link_id) + return output + + +def get_linked_assets( + project_name, asset_doc=None, asset_id=None, fields=None +): + """Return linked assets based on passed asset document. + + One of asset document or asset id must be passed. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_doc (Dict[str, Any]): Asset document from database. + asset_id (Union[ObjectId, str]): Asset id. Can be used instead of + asset document. + fields (Iterable[str]): Fields that should be returned. All fields are + returned if 'None' is passed. + + Returns: + List[Dict[str, Any]]: Asset documents of input links for passed + asset doc. + """ + + if not asset_doc: + if not asset_id: + return [] + asset_doc = get_asset_by_id( + project_name, + asset_id, + fields=["data.inputLinks"] + ) + if not asset_doc: + return [] + + link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc) + if not link_ids: + return [] + + return list(get_assets(project_name, asset_ids=link_ids, fields=fields)) + + +def get_linked_representation_id( + project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None +): + """Returns list of linked ids of particular type (if provided). + + One of representation document or representation id must be passed. + Note: + Representation links now works only from representation through version + back to representations. + + Args: + project_name (str): Name of project where look for links. 
+ repre_doc (Dict[str, Any]): Representation document. + repre_id (Union[ObjectId, str]): Representation id. + link_type (str): Type of link (e.g. 'reference', ...). + max_depth (int): Limit recursion level. Default: 0 + + Returns: + List[ObjectId] Linked representation ids. + """ + + if repre_doc: + repre_id = repre_doc["_id"] + + if repre_id: + repre_id = convert_id(repre_id) + + if not repre_id and not repre_doc: + return [] + + version_id = None + if repre_doc: + version_id = repre_doc.get("parent") + + if not version_id: + repre_doc = get_representation_by_id( + project_name, repre_id, fields=["parent"] + ) + version_id = repre_doc["parent"] + + if not version_id: + return [] + + version_doc = get_version_by_id( + project_name, version_id, fields=["type", "version_id"] + ) + if version_doc["type"] == "hero_version": + version_id = version_doc["version_id"] + + if max_depth is None: + max_depth = 0 + + match = { + "_id": version_id, + # Links are not stored to hero versions at this moment so filter + # is limited to just versions + "type": "version" + } + + graph_lookup = { + "from": project_name, + "startWith": "$data.inputLinks.id", + "connectFromField": "data.inputLinks.id", + "connectToField": "_id", + "as": "outputs_recursive", + "depthField": "depth" + } + if max_depth != 0: + # We offset by -1 since 0 basically means no recursion + # but the recursion only happens after the initial lookup + # for outputs. + graph_lookup["maxDepth"] = max_depth - 1 + + query_pipeline = [ + # Match + {"$match": match}, + # Recursive graph lookup for inputs + {"$graphLookup": graph_lookup} + ] + + conn = get_project_connection(project_name) + result = conn.aggregate(query_pipeline) + referenced_version_ids = _process_referenced_pipeline_result( + result, link_type + ) + if not referenced_version_ids: + return [] + + ref_ids = conn.distinct( + "_id", + filter={ + "parent": {"$in": list(referenced_version_ids)}, + "type": "representation" + } + ) + + return list(ref_ids) + + +def _process_referenced_pipeline_result(result, link_type): + """Filters result from pipeline for particular link_type. + + Pipeline cannot use link_type directly in a query. 
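+    Instead, the '$graphLookup' output is filtered here in Python after
+    the aggregation returns.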
+ + Returns: + (list) + """ + + referenced_version_ids = set() + correctly_linked_ids = set() + for item in result: + input_links = item.get("data", {}).get("inputLinks") + if not input_links: + continue + + _filter_input_links( + input_links, + link_type, + correctly_linked_ids + ) + + # outputs_recursive in random order, sort by depth + outputs_recursive = item.get("outputs_recursive") + if not outputs_recursive: + continue + + for output in sorted(outputs_recursive, key=lambda o: o["depth"]): + # Leaf + if output["_id"] not in correctly_linked_ids: + continue + + _filter_input_links( + output.get("data", {}).get("inputLinks"), + link_type, + correctly_linked_ids + ) + + referenced_version_ids.add(output["_id"]) + + return referenced_version_ids + + +def _filter_input_links(input_links, link_type, correctly_linked_ids): + if not input_links: # to handle hero versions + return + + for input_link in input_links: + if link_type and input_link["type"] != link_type: + continue + + link_id = input_link.get("id") or input_link.get("_id") + if link_id is not None: + correctly_linked_ids.add(link_id) diff --git a/openpype/client/mongo.py b/openpype/client/mongo/mongo.py similarity index 98% rename from openpype/client/mongo.py rename to openpype/client/mongo/mongo.py index 251041c028..2be426efeb 100644 --- a/openpype/client/mongo.py +++ b/openpype/client/mongo/mongo.py @@ -11,6 +11,7 @@ from bson.json_util import ( CANONICAL_JSON_OPTIONS ) +from openpype import AYON_SERVER_ENABLED if sys.version_info[0] == 2: from urlparse import urlparse, parse_qs else: @@ -134,7 +135,7 @@ def should_add_certificate_path_to_mongo_url(mongo_url): add_certificate = False # Check if url 'ssl' or 'tls' are set to 'true' for key in ("ssl", "tls"): - if key in query and "true" in query["ssl"]: + if key in query and "true" in query[key]: add_certificate = True break @@ -206,6 +207,8 @@ class OpenPypeMongoConnection: @classmethod def create_connection(cls, mongo_url, timeout=None, retry_attempts=None): + if AYON_SERVER_ENABLED: + raise RuntimeError("Created mongo connection in AYON mode") parsed = urlparse(mongo_url) # Force validation of scheme if parsed.scheme not in ["mongodb", "mongodb+srv"]: @@ -221,7 +224,7 @@ class OpenPypeMongoConnection: "serverSelectionTimeoutMS": timeout } if should_add_certificate_path_to_mongo_url(mongo_url): - kwargs["ssl_ca_certs"] = certifi.where() + kwargs["tlsCAFile"] = certifi.where() mongo_client = pymongo.MongoClient(mongo_url, **kwargs) diff --git a/openpype/client/mongo/operations.py b/openpype/client/mongo/operations.py new file mode 100644 index 0000000000..3537aa4a3d --- /dev/null +++ b/openpype/client/mongo/operations.py @@ -0,0 +1,632 @@ +import re +import copy +import collections + +from bson.objectid import ObjectId +from pymongo import DeleteOne, InsertOne, UpdateOne + +from openpype.client.operations_base import ( + REMOVED_VALUE, + CreateOperation, + UpdateOperation, + DeleteOperation, + BaseOperationsSession +) +from .mongo import get_project_connection +from .entities import get_project + + +PROJECT_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_" +PROJECT_NAME_REGEX = re.compile( + "^[{}]+$".format(PROJECT_NAME_ALLOWED_SYMBOLS) +) + +CURRENT_PROJECT_SCHEMA = "openpype:project-3.0" +CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0" +CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0" +CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0" +CURRENT_VERSION_SCHEMA = "openpype:version-3.0" +CURRENT_HERO_VERSION_SCHEMA = "openpype:hero_version-1.0" +CURRENT_REPRESENTATION_SCHEMA = 
"openpype:representation-2.0" +CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0" +CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0" + + +def _create_or_convert_to_mongo_id(mongo_id): + if mongo_id is None: + return ObjectId() + return ObjectId(mongo_id) + + +def new_project_document( + project_name, project_code, config, data=None, entity_id=None +): + """Create skeleton data of project document. + + Args: + project_name (str): Name of project. Used as identifier of a project. + project_code (str): Shorter version of projet without spaces and + special characters (in most of cases). Should be also considered + as unique name across projects. + config (Dic[str, Any]): Project config consist of roots, templates, + applications and other project Anatomy related data. + data (Dict[str, Any]): Project data with information about it's + attributes (e.g. 'fps' etc.) or integration specific keys. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of project document. + """ + + if data is None: + data = {} + + data["code"] = project_code + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "name": project_name, + "type": CURRENT_PROJECT_SCHEMA, + "entity_data": data, + "config": config + } + + +def new_asset_document( + name, project_id, parent_id, parents, data=None, entity_id=None +): + """Create skeleton data of asset document. + + Args: + name (str): Is considered as unique identifier of asset in project. + project_id (Union[str, ObjectId]): Id of project doument. + parent_id (Union[str, ObjectId]): Id of parent asset. + parents (List[str]): List of parent assets names. + data (Dict[str, Any]): Asset document data. Empty dictionary is used + if not passed. Value of 'parent_id' is used to fill 'visualParent'. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of asset document. + """ + + if data is None: + data = {} + if parent_id is not None: + parent_id = ObjectId(parent_id) + data["visualParent"] = parent_id + data["parents"] = parents + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "type": "asset", + "name": name, + "parent": ObjectId(project_id), + "data": data, + "schema": CURRENT_ASSET_DOC_SCHEMA + } + + +def new_subset_document(name, family, asset_id, data=None, entity_id=None): + """Create skeleton data of subset document. + + Args: + name (str): Is considered as unique identifier of subset under asset. + family (str): Subset's family. + asset_id (Union[str, ObjectId]): Id of parent asset. + data (Dict[str, Any]): Subset document data. Empty dictionary is used + if not passed. Value of 'family' is used to fill 'family'. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of subset document. + """ + + if data is None: + data = {} + data["family"] = family + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "schema": CURRENT_SUBSET_SCHEMA, + "type": "subset", + "name": name, + "data": data, + "parent": asset_id + } + + +def new_version_doc(version, subset_id, data=None, entity_id=None): + """Create skeleton data of version document. + + Args: + version (int): Is considered as unique identifier of version + under subset. + subset_id (Union[str, ObjectId]): Id of parent subset. + data (Dict[str, Any]): Version document data. 
+ entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of version document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "schema": CURRENT_VERSION_SCHEMA, + "type": "version", + "name": int(version), + "parent": subset_id, + "data": data + } + + +def new_hero_version_doc(version_id, subset_id, data=None, entity_id=None): + """Create skeleton data of hero version document. + + Args: + version_id (ObjectId): Is considered as unique identifier of version + under subset. + subset_id (Union[str, ObjectId]): Id of parent subset. + data (Dict[str, Any]): Version document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of version document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "schema": CURRENT_HERO_VERSION_SCHEMA, + "type": "hero_version", + "version_id": version_id, + "parent": subset_id, + "data": data + } + + +def new_representation_doc( + name, version_id, context, data=None, entity_id=None +): + """Create skeleton data of asset document. + + Args: + version (int): Is considered as unique identifier of version + under subset. + version_id (Union[str, ObjectId]): Id of parent version. + context (Dict[str, Any]): Representation context used for fill template + of to query. + data (Dict[str, Any]): Representation document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of version document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "schema": CURRENT_REPRESENTATION_SCHEMA, + "type": "representation", + "parent": version_id, + "name": name, + "data": data, + + # Imprint shortcut to context for performance reasons. + "context": context + } + + +def new_thumbnail_doc(data=None, entity_id=None): + """Create skeleton data of thumbnail document. + + Args: + data (Dict[str, Any]): Thumbnail document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of thumbnail document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "type": "thumbnail", + "schema": CURRENT_THUMBNAIL_SCHEMA, + "data": data + } + + +def new_workfile_info_doc( + filename, asset_id, task_name, files, data=None, entity_id=None +): + """Create skeleton data of workfile info document. + + Workfile document is at this moment used primarily for artist notes. + + Args: + filename (str): Filename of workfile. + asset_id (Union[str, ObjectId]): Id of asset under which workfile live. + task_name (str): Task under which was workfile created. + files (List[str]): List of rootless filepaths related to workfile. + data (Dict[str, Any]): Additional metadata. + + Returns: + Dict[str, Any]: Skeleton of workfile info document. 
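+
+    Example:
+        Sketch only; the filename, asset id, task name and rootless path
+        are placeholders.
+
+        >>> doc = new_workfile_info_doc(
+        ...     "demo_hero_modeling_v001.ma",
+        ...     "5f6b1c7a2c8d4e9f0a1b2c3d",
+        ...     "modeling",
+        ...     ["{root[work]}/demo/hero/demo_hero_modeling_v001.ma"]
+        ... )
+        >>> doc["type"]
+        'workfile'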
+ """ + + if not data: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "type": "workfile", + "parent": ObjectId(asset_id), + "task_name": task_name, + "filename": filename, + "data": data, + "files": files + } + + +def _prepare_update_data(old_doc, new_doc, replace): + changes = {} + for key, value in new_doc.items(): + if key not in old_doc or value != old_doc[key]: + changes[key] = value + + if replace: + for key in old_doc.keys(): + if key not in new_doc: + changes[key] = REMOVED_VALUE + return changes + + +def prepare_subset_update_data(old_doc, new_doc, replace=True): + """Compare two subset documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_version_update_data(old_doc, new_doc, replace=True): + """Compare two version documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_hero_version_update_data(old_doc, new_doc, replace=True): + """Compare two hero version documents and prepare update data. + + Based on compared values will create update data for 'UpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_representation_update_data(old_doc, new_doc, replace=True): + """Compare two representation documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_workfile_info_update_data(old_doc, new_doc, replace=True): + """Compare two workfile info documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +class MongoCreateOperation(CreateOperation): + """Operation to create an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + data (Dict[str, Any]): Data of entity that will be created. + """ + + operation_name = "create" + + def __init__(self, project_name, entity_type, data): + super(MongoCreateOperation, self).__init__( + project_name, entity_type, data + ) + + if "_id" not in self._data: + self._data["_id"] = ObjectId() + else: + self._data["_id"] = ObjectId(self._data["_id"]) + + @property + def entity_id(self): + return self._data["_id"] + + def to_mongo_operation(self): + return InsertOne(copy.deepcopy(self._data)) + + +class MongoUpdateOperation(UpdateOperation): + """Operation to update an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. 
+            e.g. 'asset', 'representation' etc.
+        entity_id (Union[str, ObjectId]): Identifier of an entity.
+        update_data (Dict[str, Any]): Key -> value changes that will be set in
+            database. If value is set to 'REMOVED_VALUE' the key will be
+            removed. Only first level of dictionary is checked (on purpose).
+    """
+
+    operation_name = "update"
+
+    def __init__(self, project_name, entity_type, entity_id, update_data):
+        super(MongoUpdateOperation, self).__init__(
+            project_name, entity_type, entity_id, update_data
+        )
+
+        self._entity_id = ObjectId(self._entity_id)
+
+    def to_mongo_operation(self):
+        unset_data = {}
+        set_data = {}
+        for key, value in self._update_data.items():
+            if value is REMOVED_VALUE:
+                unset_data[key] = None
+            else:
+                set_data[key] = value
+
+        op_data = {}
+        if unset_data:
+            op_data["$unset"] = unset_data
+        if set_data:
+            op_data["$set"] = set_data
+
+        if not op_data:
+            return None
+
+        return UpdateOne(
+            {"_id": self.entity_id},
+            op_data
+        )
+
+
+class MongoDeleteOperation(DeleteOperation):
+    """Operation to delete an entity.
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'asset', 'representation' etc.
+        entity_id (Union[str, ObjectId]): Entity id that will be removed.
+    """
+
+    operation_name = "delete"
+
+    def __init__(self, project_name, entity_type, entity_id):
+        super(MongoDeleteOperation, self).__init__(
+            project_name, entity_type, entity_id
+        )
+
+        self._entity_id = ObjectId(self._entity_id)
+
+    def to_mongo_operation(self):
+        return DeleteOne({"_id": self.entity_id})
+
+
+class MongoOperationsSession(BaseOperationsSession):
+    """Session storing operations that should happen in an order.
+
+    At this moment the session does not handle anything special and can be
+    considered a plain list of operations that will happen one after
+    another. If the same entity is created multiple times, that is not
+    handled in any way and document values are not validated.
+
+    On commit the operations are grouped by project and sent to each
+    project's collection as a single bulk write.
+    """
+
+    def commit(self):
+        """Commit session operations."""
+
+        operations, self._operations = self._operations, []
+        if not operations:
+            return
+
+        operations_by_project = collections.defaultdict(list)
+        for operation in operations:
+            operations_by_project[operation.project_name].append(operation)
+
+        for project_name, operations in operations_by_project.items():
+            bulk_writes = []
+            for operation in operations:
+                mongo_op = operation.to_mongo_operation()
+                if mongo_op is not None:
+                    bulk_writes.append(mongo_op)
+
+            if bulk_writes:
+                collection = get_project_connection(project_name)
+                collection.bulk_write(bulk_writes)
+
+    def create_entity(self, project_name, entity_type, data):
+        """Fast access to 'MongoCreateOperation'.
+
+        Returns:
+            MongoCreateOperation: Object of create operation.
+        """
+
+        operation = MongoCreateOperation(project_name, entity_type, data)
+        self.add(operation)
+        return operation
+
+    def update_entity(self, project_name, entity_type, entity_id, update_data):
+        """Fast access to 'MongoUpdateOperation'.
+
+        Returns:
+            MongoUpdateOperation: Object of update operation.
+        """
+
+        operation = MongoUpdateOperation(
+            project_name, entity_type, entity_id, update_data
+        )
+        self.add(operation)
+        return operation
+
+    def delete_entity(self, project_name, entity_type, entity_id):
+        """Fast access to 'MongoDeleteOperation'.
+
+        Returns:
+            MongoDeleteOperation: Object of delete operation.
+        """
+
+        operation = MongoDeleteOperation(project_name, entity_type, entity_id)
+        self.add(operation)
+        return operation
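For context, a sketch of how the session is meant to be driven (project name and entity id are hypothetical; `OperationsSession` is the alias that `openpype/client/operations.py` binds to `MongoOperationsSession` when AYON is disabled):

```python
from openpype.client.operations import OperationsSession, REMOVED_VALUE

session = OperationsSession()

# Operations are only staged here; nothing hits MongoDB until commit().
session.update_entity(
    "my_project",                   # hypothetical project name
    "asset",
    "634dcbf87bca1a1e47a34b40",     # hypothetical asset id
    {"data.fps": 25, "data.deprecatedKey": REMOVED_VALUE},
)
session.commit()  # grouped by project and sent as one bulk write
```

Keys mapped to `REMOVED_VALUE` are translated into a `$unset`, everything else into a `$set`, as `to_mongo_operation` above shows.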
+ """ + + operation = MongoDeleteOperation(project_name, entity_type, entity_id) + self.add(operation) + return operation + + +def create_project( + project_name, + project_code, + library_project=False, +): + """Create project using OpenPype settings. + + This project creation function is not validating project document on + creation. It is because project document is created blindly with only + minimum required information about project which is it's name, code, type + and schema. + + Entered project name must be unique and project must not exist yet. + + Note: + This function is here to be OP v4 ready but in v3 has more logic + to do. That's why inner imports are in the body. + + Args: + project_name(str): New project name. Should be unique. + project_code(str): Project's code should be unique too. + library_project(bool): Project is library project. + + Raises: + ValueError: When project name already exists in MongoDB. + + Returns: + dict: Created project document. + """ + + from openpype.settings import ProjectSettings, SaveWarningExc + from openpype.pipeline.schema import validate + + if get_project(project_name, fields=["name"]): + raise ValueError("Project with name \"{}\" already exists".format( + project_name + )) + + if not PROJECT_NAME_REGEX.match(project_name): + raise ValueError(( + "Project name \"{}\" contain invalid characters" + ).format(project_name)) + + project_doc = { + "type": "project", + "name": project_name, + "data": { + "code": project_code, + "library_project": library_project + }, + "schema": CURRENT_PROJECT_SCHEMA + } + + op_session = MongoOperationsSession() + # Insert document with basic data + create_op = op_session.create_entity( + project_name, project_doc["type"], project_doc + ) + op_session.commit() + + # Load ProjectSettings for the project and save it to store all attributes + # and Anatomy + try: + project_settings_entity = ProjectSettings(project_name) + project_settings_entity.save() + except SaveWarningExc as exc: + print(str(exc)) + except Exception: + op_session.delete_entity( + project_name, project_doc["type"], create_op.entity_id + ) + op_session.commit() + raise + + project_doc = get_project(project_name) + + try: + # Validate created project document + validate(project_doc) + except Exception: + # Remove project if is not valid + op_session.delete_entity( + project_name, project_doc["type"], create_op.entity_id + ) + op_session.commit() + raise + + return project_doc diff --git a/openpype/client/operations.py b/openpype/client/operations.py index e8c9d28636..8bc09dffd3 100644 --- a/openpype/client/operations.py +++ b/openpype/client/operations.py @@ -1,797 +1,24 @@ -import re -import uuid -import copy -import collections -from abc import ABCMeta, abstractmethod, abstractproperty +from openpype import AYON_SERVER_ENABLED -import six -from bson.objectid import ObjectId -from pymongo import DeleteOne, InsertOne, UpdateOne +from .operations_base import REMOVED_VALUE +if not AYON_SERVER_ENABLED: + from .mongo.operations import * + OperationsSession = MongoOperationsSession -from .mongo import get_project_connection -from .entities import get_project - -REMOVED_VALUE = object() - -PROJECT_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_" -PROJECT_NAME_REGEX = re.compile( - "^[{}]+$".format(PROJECT_NAME_ALLOWED_SYMBOLS) -) - -CURRENT_PROJECT_SCHEMA = "openpype:project-3.0" -CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0" -CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0" -CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0" -CURRENT_VERSION_SCHEMA = 
"openpype:version-3.0" -CURRENT_HERO_VERSION_SCHEMA = "openpype:hero_version-1.0" -CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0" -CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0" -CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0" - - -def _create_or_convert_to_mongo_id(mongo_id): - if mongo_id is None: - return ObjectId() - return ObjectId(mongo_id) - - -def new_project_document( - project_name, project_code, config, data=None, entity_id=None -): - """Create skeleton data of project document. - - Args: - project_name (str): Name of project. Used as identifier of a project. - project_code (str): Shorter version of projet without spaces and - special characters (in most of cases). Should be also considered - as unique name across projects. - config (Dic[str, Any]): Project config consist of roots, templates, - applications and other project Anatomy related data. - data (Dict[str, Any]): Project data with information about it's - attributes (e.g. 'fps' etc.) or integration specific keys. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of project document. - """ - - if data is None: - data = {} - - data["code"] = project_code - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "name": project_name, - "type": CURRENT_PROJECT_SCHEMA, - "entity_data": data, - "config": config - } - - -def new_asset_document( - name, project_id, parent_id, parents, data=None, entity_id=None -): - """Create skeleton data of asset document. - - Args: - name (str): Is considered as unique identifier of asset in project. - project_id (Union[str, ObjectId]): Id of project doument. - parent_id (Union[str, ObjectId]): Id of parent asset. - parents (List[str]): List of parent assets names. - data (Dict[str, Any]): Asset document data. Empty dictionary is used - if not passed. Value of 'parent_id' is used to fill 'visualParent'. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of asset document. - """ - - if data is None: - data = {} - if parent_id is not None: - parent_id = ObjectId(parent_id) - data["visualParent"] = parent_id - data["parents"] = parents - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "type": "asset", - "name": name, - "parent": ObjectId(project_id), - "data": data, - "schema": CURRENT_ASSET_DOC_SCHEMA - } - - -def new_subset_document(name, family, asset_id, data=None, entity_id=None): - """Create skeleton data of subset document. - - Args: - name (str): Is considered as unique identifier of subset under asset. - family (str): Subset's family. - asset_id (Union[str, ObjectId]): Id of parent asset. - data (Dict[str, Any]): Subset document data. Empty dictionary is used - if not passed. Value of 'family' is used to fill 'family'. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of subset document. - """ - - if data is None: - data = {} - data["family"] = family - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "schema": CURRENT_SUBSET_SCHEMA, - "type": "subset", - "name": name, - "data": data, - "parent": asset_id - } - - -def new_version_doc(version, subset_id, data=None, entity_id=None): - """Create skeleton data of version document. - - Args: - version (int): Is considered as unique identifier of version - under subset. 
- subset_id (Union[str, ObjectId]): Id of parent subset. - data (Dict[str, Any]): Version document data. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of version document. - """ - - if data is None: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "schema": CURRENT_VERSION_SCHEMA, - "type": "version", - "name": int(version), - "parent": subset_id, - "data": data - } - - -def new_hero_version_doc(version_id, subset_id, data=None, entity_id=None): - """Create skeleton data of hero version document. - - Args: - version_id (ObjectId): Is considered as unique identifier of version - under subset. - subset_id (Union[str, ObjectId]): Id of parent subset. - data (Dict[str, Any]): Version document data. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of version document. - """ - - if data is None: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "schema": CURRENT_HERO_VERSION_SCHEMA, - "type": "hero_version", - "version_id": version_id, - "parent": subset_id, - "data": data - } - - -def new_representation_doc( - name, version_id, context, data=None, entity_id=None -): - """Create skeleton data of asset document. - - Args: - version (int): Is considered as unique identifier of version - under subset. - version_id (Union[str, ObjectId]): Id of parent version. - context (Dict[str, Any]): Representation context used for fill template - of to query. - data (Dict[str, Any]): Representation document data. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of version document. - """ - - if data is None: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "schema": CURRENT_REPRESENTATION_SCHEMA, - "type": "representation", - "parent": version_id, - "name": name, - "data": data, - # Imprint shortcut to context for performance reasons. - "context": context - } - - -def new_thumbnail_doc(data=None, entity_id=None): - """Create skeleton data of thumbnail document. - - Args: - data (Dict[str, Any]): Thumbnail document data. - entity_id (Union[str, ObjectId]): Predefined id of document. New id is - created if not passed. - - Returns: - Dict[str, Any]: Skeleton of thumbnail document. - """ - - if data is None: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "type": "thumbnail", - "schema": CURRENT_THUMBNAIL_SCHEMA, - "data": data - } - - -def new_workfile_info_doc( - filename, asset_id, task_name, files, data=None, entity_id=None -): - """Create skeleton data of workfile info document. - - Workfile document is at this moment used primarily for artist notes. - - Args: - filename (str): Filename of workfile. - asset_id (Union[str, ObjectId]): Id of asset under which workfile live. - task_name (str): Task under which was workfile created. - files (List[str]): List of rootless filepaths related to workfile. - data (Dict[str, Any]): Additional metadata. - - Returns: - Dict[str, Any]: Skeleton of workfile info document. 
- """ - - if not data: - data = {} - - return { - "_id": _create_or_convert_to_mongo_id(entity_id), - "type": "workfile", - "parent": ObjectId(asset_id), - "task_name": task_name, - "filename": filename, - "data": data, - "files": files - } - - -def _prepare_update_data(old_doc, new_doc, replace): - changes = {} - for key, value in new_doc.items(): - if key not in old_doc or value != old_doc[key]: - changes[key] = value - - if replace: - for key in old_doc.keys(): - if key not in new_doc: - changes[key] = REMOVED_VALUE - return changes - - -def prepare_subset_update_data(old_doc, new_doc, replace=True): - """Compare two subset documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -def prepare_version_update_data(old_doc, new_doc, replace=True): - """Compare two version documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -def prepare_hero_version_update_data(old_doc, new_doc, replace=True): - """Compare two hero version documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -def prepare_representation_update_data(old_doc, new_doc, replace=True): - """Compare two representation documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -def prepare_workfile_info_update_data(old_doc, new_doc, replace=True): - """Compare two workfile info documents and prepare update data. - - Based on compared values will create update data for 'UpdateOperation'. - - Empty output means that documents are identical. - - Returns: - Dict[str, Any]: Changes between old and new document. - """ - - return _prepare_update_data(old_doc, new_doc, replace) - - -@six.add_metaclass(ABCMeta) -class AbstractOperation(object): - """Base operation class. - - Operation represent a call into database. The call can create, change or - remove data. - - Args: - project_name (str): On which project operation will happen. - entity_type (str): Type of entity on which change happens. - e.g. 'asset', 'representation' etc. 
- """ - - def __init__(self, project_name, entity_type): - self._project_name = project_name - self._entity_type = entity_type - self._id = str(uuid.uuid4()) - - @property - def project_name(self): - return self._project_name - - @property - def id(self): - """Identifier of operation.""" - - return self._id - - @property - def entity_type(self): - return self._entity_type - - @abstractproperty - def operation_name(self): - """Stringified type of operation.""" - - pass - - @abstractmethod - def to_mongo_operation(self): - """Convert operation to Mongo batch operation.""" - - pass - - def to_data(self): - """Convert operation to data that can be converted to json or others. - - Warning: - Current state returns ObjectId objects which cannot be parsed by - json. - - Returns: - Dict[str, Any]: Description of operation. - """ - - return { - "id": self._id, - "entity_type": self.entity_type, - "project_name": self.project_name, - "operation": self.operation_name - } - - -class CreateOperation(AbstractOperation): - """Operation to create an entity. - - Args: - project_name (str): On which project operation will happen. - entity_type (str): Type of entity on which change happens. - e.g. 'asset', 'representation' etc. - data (Dict[str, Any]): Data of entity that will be created. - """ - - operation_name = "create" - - def __init__(self, project_name, entity_type, data): - super(CreateOperation, self).__init__(project_name, entity_type) - - if not data: - data = {} - else: - data = copy.deepcopy(dict(data)) - - if "_id" not in data: - data["_id"] = ObjectId() - else: - data["_id"] = ObjectId(data["_id"]) - - self._entity_id = data["_id"] - self._data = data - - def __setitem__(self, key, value): - self.set_value(key, value) - - def __getitem__(self, key): - return self.data[key] - - def set_value(self, key, value): - self.data[key] = value - - def get(self, key, *args, **kwargs): - return self.data.get(key, *args, **kwargs) - - @property - def entity_id(self): - return self._entity_id - - @property - def data(self): - return self._data - - def to_mongo_operation(self): - return InsertOne(copy.deepcopy(self._data)) - - def to_data(self): - output = super(CreateOperation, self).to_data() - output["data"] = copy.deepcopy(self.data) - return output - - -class UpdateOperation(AbstractOperation): - """Operation to update an entity. - - Args: - project_name (str): On which project operation will happen. - entity_type (str): Type of entity on which change happens. - e.g. 'asset', 'representation' etc. - entity_id (Union[str, ObjectId]): Identifier of an entity. - update_data (Dict[str, Any]): Key -> value changes that will be set in - database. If value is set to 'REMOVED_VALUE' the key will be - removed. Only first level of dictionary is checked (on purpose). 
- """ - - operation_name = "update" - - def __init__(self, project_name, entity_type, entity_id, update_data): - super(UpdateOperation, self).__init__(project_name, entity_type) - - self._entity_id = ObjectId(entity_id) - self._update_data = update_data - - @property - def entity_id(self): - return self._entity_id - - @property - def update_data(self): - return self._update_data - - def to_mongo_operation(self): - unset_data = {} - set_data = {} - for key, value in self._update_data.items(): - if value is REMOVED_VALUE: - unset_data[key] = None - else: - set_data[key] = value - - op_data = {} - if unset_data: - op_data["$unset"] = unset_data - if set_data: - op_data["$set"] = set_data - - if not op_data: - return None - - return UpdateOne( - {"_id": self.entity_id}, - op_data - ) - - def to_data(self): - changes = {} - for key, value in self._update_data.items(): - if value is REMOVED_VALUE: - value = None - changes[key] = value - - output = super(UpdateOperation, self).to_data() - output.update({ - "entity_id": self.entity_id, - "changes": changes - }) - return output - - -class DeleteOperation(AbstractOperation): - """Operation to delete an entity. - - Args: - project_name (str): On which project operation will happen. - entity_type (str): Type of entity on which change happens. - e.g. 'asset', 'representation' etc. - entity_id (Union[str, ObjectId]): Entity id that will be removed. - """ - - operation_name = "delete" - - def __init__(self, project_name, entity_type, entity_id): - super(DeleteOperation, self).__init__(project_name, entity_type) - - self._entity_id = ObjectId(entity_id) - - @property - def entity_id(self): - return self._entity_id - - def to_mongo_operation(self): - return DeleteOne({"_id": self.entity_id}) - - def to_data(self): - output = super(DeleteOperation, self).to_data() - output["entity_id"] = self.entity_id - return output - - -class OperationsSession(object): - """Session storing operations that should happen in an order. - - At this moment does not handle anything special can be sonsidered as - stupid list of operations that will happen after each other. If creation - of same entity is there multiple times it's handled in any way and document - values are not validated. - - All operations must be related to single project. - - Args: - project_name (str): Project name to which are operations related. - """ - - def __init__(self): - self._operations = [] - - def add(self, operation): - """Add operation to be processed. - - Args: - operation (BaseOperation): Operation that should be processed. - """ - if not isinstance( - operation, - (CreateOperation, UpdateOperation, DeleteOperation) - ): - raise TypeError("Expected Operation object got {}".format( - str(type(operation)) - )) - - self._operations.append(operation) - - def append(self, operation): - """Add operation to be processed. - - Args: - operation (BaseOperation): Operation that should be processed. - """ - - self.add(operation) - - def extend(self, operations): - """Add operations to be processed. - - Args: - operations (List[BaseOperation]): Operations that should be - processed. 
- """ - - for operation in operations: - self.add(operation) - - def remove(self, operation): - """Remove operation.""" - - self._operations.remove(operation) - - def clear(self): - """Clear all registered operations.""" - - self._operations = [] - - def to_data(self): - return [ - operation.to_data() - for operation in self._operations - ] - - def commit(self): - """Commit session operations.""" - - operations, self._operations = self._operations, [] - if not operations: - return - - operations_by_project = collections.defaultdict(list) - for operation in operations: - operations_by_project[operation.project_name].append(operation) - - for project_name, operations in operations_by_project.items(): - bulk_writes = [] - for operation in operations: - mongo_op = operation.to_mongo_operation() - if mongo_op is not None: - bulk_writes.append(mongo_op) - - if bulk_writes: - collection = get_project_connection(project_name) - collection.bulk_write(bulk_writes) - - def create_entity(self, project_name, entity_type, data): - """Fast access to 'CreateOperation'. - - Returns: - CreateOperation: Object of update operation. - """ - - operation = CreateOperation(project_name, entity_type, data) - self.add(operation) - return operation - - def update_entity(self, project_name, entity_type, entity_id, update_data): - """Fast access to 'UpdateOperation'. - - Returns: - UpdateOperation: Object of update operation. - """ - - operation = UpdateOperation( - project_name, entity_type, entity_id, update_data - ) - self.add(operation) - return operation - - def delete_entity(self, project_name, entity_type, entity_id): - """Fast access to 'DeleteOperation'. - - Returns: - DeleteOperation: Object of delete operation. - """ - - operation = DeleteOperation(project_name, entity_type, entity_id) - self.add(operation) - return operation - - -def create_project( - project_name, - project_code, - library_project=False, -): - """Create project using OpenPype settings. - - This project creation function is not validating project document on - creation. It is because project document is created blindly with only - minimum required information about project which is it's name, code, type - and schema. - - Entered project name must be unique and project must not exist yet. - - Note: - This function is here to be OP v4 ready but in v3 has more logic - to do. That's why inner imports are in the body. - - Args: - project_name(str): New project name. Should be unique. - project_code(str): Project's code should be unique too. - library_project(bool): Project is library project. - - Raises: - ValueError: When project name already exists in MongoDB. - - Returns: - dict: Created project document. 
- """ - - from openpype.settings import ProjectSettings, SaveWarningExc - from openpype.pipeline.schema import validate - - if get_project(project_name, fields=["name"]): - raise ValueError("Project with name \"{}\" already exists".format( - project_name - )) - - if not PROJECT_NAME_REGEX.match(project_name): - raise ValueError(( - "Project name \"{}\" contain invalid characters" - ).format(project_name)) - - project_doc = { - "type": "project", - "name": project_name, - "data": { - "code": project_code, - "library_project": library_project, - }, - "schema": CURRENT_PROJECT_SCHEMA - } - - op_session = OperationsSession() - # Insert document with basic data - create_op = op_session.create_entity( - project_name, project_doc["type"], project_doc +else: + from ayon_api.server_api import ( + PROJECT_NAME_ALLOWED_SYMBOLS, + PROJECT_NAME_REGEX, + ) + from .server.operations import * + from .mongo.operations import ( + CURRENT_PROJECT_SCHEMA, + CURRENT_PROJECT_CONFIG_SCHEMA, + CURRENT_ASSET_DOC_SCHEMA, + CURRENT_SUBSET_SCHEMA, + CURRENT_VERSION_SCHEMA, + CURRENT_HERO_VERSION_SCHEMA, + CURRENT_REPRESENTATION_SCHEMA, + CURRENT_WORKFILE_INFO_SCHEMA, + CURRENT_THUMBNAIL_SCHEMA ) - op_session.commit() - - # Load ProjectSettings for the project and save it to store all attributes - # and Anatomy - try: - project_settings_entity = ProjectSettings(project_name) - project_settings_entity.save() - except SaveWarningExc as exc: - print(str(exc)) - except Exception: - op_session.delete_entity( - project_name, project_doc["type"], create_op.entity_id - ) - op_session.commit() - raise - - project_doc = get_project(project_name) - - try: - # Validate created project document - validate(project_doc) - except Exception: - # Remove project if is not valid - op_session.delete_entity( - project_name, project_doc["type"], create_op.entity_id - ) - op_session.commit() - raise - - return project_doc diff --git a/openpype/client/operations_base.py b/openpype/client/operations_base.py new file mode 100644 index 0000000000..887b237b1c --- /dev/null +++ b/openpype/client/operations_base.py @@ -0,0 +1,289 @@ +import uuid +import copy +from abc import ABCMeta, abstractmethod, abstractproperty +import six + +REMOVED_VALUE = object() + + +@six.add_metaclass(ABCMeta) +class AbstractOperation(object): + """Base operation class. + + Operation represent a call into database. The call can create, change or + remove data. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + """ + + def __init__(self, project_name, entity_type): + self._project_name = project_name + self._entity_type = entity_type + self._id = str(uuid.uuid4()) + + @property + def project_name(self): + return self._project_name + + @property + def id(self): + """Identifier of operation.""" + + return self._id + + @property + def entity_type(self): + return self._entity_type + + @abstractproperty + def operation_name(self): + """Stringified type of operation.""" + + pass + + def to_data(self): + """Convert operation to data that can be converted to json or others. + + Warning: + Current state returns ObjectId objects which cannot be parsed by + json. + + Returns: + Dict[str, Any]: Description of operation. + """ + + return { + "id": self._id, + "entity_type": self.entity_type, + "project_name": self.project_name, + "operation": self.operation_name + } + + +class CreateOperation(AbstractOperation): + """Operation to create an entity. 
+ + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + data (Dict[str, Any]): Data of entity that will be created. + """ + + operation_name = "create" + + def __init__(self, project_name, entity_type, data): + super(CreateOperation, self).__init__(project_name, entity_type) + + if not data: + data = {} + else: + data = copy.deepcopy(dict(data)) + self._data = data + + def __setitem__(self, key, value): + self.set_value(key, value) + + def __getitem__(self, key): + return self.data[key] + + def set_value(self, key, value): + self.data[key] = value + + def get(self, key, *args, **kwargs): + return self.data.get(key, *args, **kwargs) + + @abstractproperty + def entity_id(self): + pass + + @property + def data(self): + return self._data + + def to_data(self): + output = super(CreateOperation, self).to_data() + output["data"] = copy.deepcopy(self.data) + return output + + +class UpdateOperation(AbstractOperation): + """Operation to update an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + entity_id (Union[str, ObjectId]): Identifier of an entity. + update_data (Dict[str, Any]): Key -> value changes that will be set in + database. If value is set to 'REMOVED_VALUE' the key will be + removed. Only first level of dictionary is checked (on purpose). + """ + + operation_name = "update" + + def __init__(self, project_name, entity_type, entity_id, update_data): + super(UpdateOperation, self).__init__(project_name, entity_type) + + self._entity_id = entity_id + self._update_data = update_data + + @property + def entity_id(self): + return self._entity_id + + @property + def update_data(self): + return self._update_data + + def to_data(self): + changes = {} + for key, value in self._update_data.items(): + if value is REMOVED_VALUE: + value = None + changes[key] = value + + output = super(UpdateOperation, self).to_data() + output.update({ + "entity_id": self.entity_id, + "changes": changes + }) + return output + + +class DeleteOperation(AbstractOperation): + """Operation to delete an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + entity_id (Union[str, ObjectId]): Entity id that will be removed. + """ + + operation_name = "delete" + + def __init__(self, project_name, entity_type, entity_id): + super(DeleteOperation, self).__init__(project_name, entity_type) + + self._entity_id = entity_id + + @property + def entity_id(self): + return self._entity_id + + def to_data(self): + output = super(DeleteOperation, self).to_data() + output["entity_id"] = self.entity_id + return output + + +class BaseOperationsSession(object): + """Session storing operations that should happen in an order. + + At this moment does not handle anything special can be considered as + stupid list of operations that will happen after each other. If creation + of same entity is there multiple times it's handled in any way and document + values are not validated. + """ + + def __init__(self): + self._operations = [] + + def __len__(self): + return len(self._operations) + + def add(self, operation): + """Add operation to be processed. + + Args: + operation (BaseOperation): Operation that should be processed. 
+ """ + if not isinstance( + operation, + (CreateOperation, UpdateOperation, DeleteOperation) + ): + raise TypeError("Expected Operation object got {}".format( + str(type(operation)) + )) + + self._operations.append(operation) + + def append(self, operation): + """Add operation to be processed. + + Args: + operation (BaseOperation): Operation that should be processed. + """ + + self.add(operation) + + def extend(self, operations): + """Add operations to be processed. + + Args: + operations (List[BaseOperation]): Operations that should be + processed. + """ + + for operation in operations: + self.add(operation) + + def remove(self, operation): + """Remove operation.""" + + self._operations.remove(operation) + + def clear(self): + """Clear all registered operations.""" + + self._operations = [] + + def to_data(self): + return [ + operation.to_data() + for operation in self._operations + ] + + @abstractmethod + def commit(self): + """Commit session operations.""" + pass + + def create_entity(self, project_name, entity_type, data): + """Fast access to 'CreateOperation'. + + Returns: + CreateOperation: Object of update operation. + """ + + operation = CreateOperation(project_name, entity_type, data) + self.add(operation) + return operation + + def update_entity(self, project_name, entity_type, entity_id, update_data): + """Fast access to 'UpdateOperation'. + + Returns: + UpdateOperation: Object of update operation. + """ + + operation = UpdateOperation( + project_name, entity_type, entity_id, update_data + ) + self.add(operation) + return operation + + def delete_entity(self, project_name, entity_type, entity_id): + """Fast access to 'DeleteOperation'. + + Returns: + DeleteOperation: Object of delete operation. + """ + + operation = DeleteOperation(project_name, entity_type, entity_id) + self.add(operation) + return operation diff --git a/openpype/hosts/celaction/hooks/__init__.py b/openpype/client/server/__init__.py similarity index 100% rename from openpype/hosts/celaction/hooks/__init__.py rename to openpype/client/server/__init__.py diff --git a/openpype/client/server/constants.py b/openpype/client/server/constants.py new file mode 100644 index 0000000000..1d3f94c702 --- /dev/null +++ b/openpype/client/server/constants.py @@ -0,0 +1,18 @@ +# --- Folders --- +DEFAULT_FOLDER_FIELDS = { + "id", + "name", + "path", + "parentId", + "active", + "parents", + "thumbnailId" +} + +REPRESENTATION_FILES_FIELDS = { + "files.name", + "files.hash", + "files.id", + "files.path", + "files.size", +} diff --git a/openpype/client/server/conversion_utils.py b/openpype/client/server/conversion_utils.py new file mode 100644 index 0000000000..a6c190a0fc --- /dev/null +++ b/openpype/client/server/conversion_utils.py @@ -0,0 +1,1332 @@ +import os +import arrow +import collections +import json + +import six + +from openpype.client.operations_base import REMOVED_VALUE +from openpype.client.mongo.operations import ( + CURRENT_PROJECT_SCHEMA, + CURRENT_ASSET_DOC_SCHEMA, + CURRENT_SUBSET_SCHEMA, + CURRENT_VERSION_SCHEMA, + CURRENT_HERO_VERSION_SCHEMA, + CURRENT_REPRESENTATION_SCHEMA, + CURRENT_WORKFILE_INFO_SCHEMA, +) +from .constants import REPRESENTATION_FILES_FIELDS +from .utils import create_entity_id, prepare_entity_changes + +# --- Project entity --- +PROJECT_FIELDS_MAPPING_V3_V4 = { + "_id": {"name"}, + "name": {"name"}, + "data": {"data", "code"}, + "data.library_project": {"library"}, + "data.code": {"code"}, + "data.active": {"active"}, +} + +# TODO this should not be hardcoded but received from server!!! 
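The mapping tables that follow drive the v3 → v4 field-name translation used by the `*_fields_v3_to_v4` helpers further below: each v3 (Mongo) field expands to one or more v4 (AYON) fields. A minimal sketch of the lookup, using the project mapping defined above:

```python
# Translate a v3 field selection to the v4 names the server understands.
requested_v3_fields = {"_id", "data.code", "data.library_project"}

v4_fields = set()
for field in requested_v3_fields:
    v4_fields |= PROJECT_FIELDS_MAPPING_V3_V4[field]

# v4_fields == {"name", "code", "library"}
```

The real helpers add attribute handling on top of this plain lookup: `data.*` keys that match server attributes become `attrib.*` fields.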
+# --- Folder entity --- +FOLDER_FIELDS_MAPPING_V3_V4 = { + "_id": {"id"}, + "name": {"name"}, + "label": {"label"}, + "data": { + "parentId", "parents", "active", "tasks", "thumbnailId" + }, + "data.visualParent": {"parentId"}, + "data.parents": {"parents"}, + "data.active": {"active"}, + "data.thumbnail_id": {"thumbnailId"}, + "data.entityType": {"folderType"} +} + +# --- Subset entity --- +SUBSET_FIELDS_MAPPING_V3_V4 = { + "_id": {"id"}, + "name": {"name"}, + "data.active": {"active"}, + "parent": {"folderId"} +} + +# --- Version entity --- +VERSION_FIELDS_MAPPING_V3_V4 = { + "_id": {"id"}, + "name": {"version"}, + "parent": {"productId"} +} + +# --- Representation entity --- +REPRESENTATION_FIELDS_MAPPING_V3_V4 = { + "_id": {"id"}, + "name": {"name"}, + "parent": {"versionId"}, + "context": {"context"}, + "files": {"files"}, +} + + +def project_fields_v3_to_v4(fields, con): + """Convert project fields from v3 to v4 structure. + + Args: + fields (Union[Iterable(str), None]): fields to be converted. + + Returns: + Union[Set(str), None]: Converted fields to v4 fields. + """ + + # TODO config fields + # - config.apps + # - config.groups + if not fields: + return None + + project_attribs = con.get_attributes_for_type("project") + output = set() + for field in fields: + # If config is needed the rest api call must be used + if field.startswith("config"): + return None + + if field in PROJECT_FIELDS_MAPPING_V3_V4: + output |= PROJECT_FIELDS_MAPPING_V3_V4[field] + if field == "data": + output |= { + "attrib.{}".format(attr) + for attr in project_attribs + } + + elif field.startswith("data"): + field_parts = field.split(".") + field_parts.pop(0) + data_key = ".".join(field_parts) + if data_key in project_attribs: + output.add("attrib.{}".format(data_key)) + else: + output.add("data") + print("Requested specific key from data {}".format(data_key)) + + else: + raise ValueError("Unknown field mapping for {}".format(field)) + + if "name" not in output: + output.add("name") + return output + + +def _get_default_template_name(templates): + default_template = None + for name, template in templates.items(): + if name == "default": + return "default" + + if default_template is None: + default_template = name + + return default_template + + +def _template_replacements_to_v3(template): + return ( + template + .replace("{product[name]}", "{subset}") + .replace("{product[type]}", "{family}") + ) + + +def _convert_template_item(template): + # Others won't have 'directory' + if "directory" not in template: + return + folder = _template_replacements_to_v3(template.pop("directory")) + template["folder"] = folder + template["file"] = _template_replacements_to_v3(template["file"]) + template["path"] = "/".join( + (folder, template["file"]) + ) + + +def _fill_template_category(templates, cat_templates, cat_key): + default_template_name = _get_default_template_name(cat_templates) + for template_name, cat_template in cat_templates.items(): + _convert_template_item(cat_template) + if template_name == default_template_name: + templates[cat_key] = cat_template + else: + new_name = "{}_{}".format(cat_key, template_name) + templates["others"][new_name] = cat_template + + +def convert_v4_project_to_v3(project): + """Convert Project entity data from v4 structure to v3 structure. + + Args: + project (Dict[str, Any]): Project entity queried from v4 server. + + Returns: + Dict[str, Any]: Project converted to v3 structure. 
+ """ + + if not project: + return project + + project_name = project["name"] + output = { + "_id": project_name, + "name": project_name, + "schema": CURRENT_PROJECT_SCHEMA, + "type": "project" + } + + data = project.get("data") or {} + attribs = project.get("attrib") or {} + apps_attr = attribs.pop("applications", None) or [] + applications = [ + {"name": app_name} + for app_name in apps_attr + ] + data.update(attribs) + if "tools" in data: + data["tools_env"] = data.pop("tools") + + data["entityType"] = "Project" + + config = {} + project_config = project.get("config") + + if project_config: + config["apps"] = applications + config["roots"] = project_config["roots"] + + templates = project_config["templates"] + templates["defaults"] = templates.pop("common", None) or {} + + others_templates = templates.pop("others", None) or {} + new_others_templates = {} + templates["others"] = new_others_templates + for name, template in others_templates.items(): + _convert_template_item(template) + new_others_templates[name] = template + + for key in ( + "work", + "publish", + "hero" + ): + cat_templates = templates.pop(key) + _fill_template_category(templates, cat_templates, key) + + delivery_templates = templates.pop("delivery", None) or {} + new_delivery_templates = {} + for name, delivery_template in delivery_templates.items(): + new_delivery_templates[name] = "/".join( + (delivery_template["directory"], delivery_template["file"]) + ) + templates["delivery"] = new_delivery_templates + + config["templates"] = templates + + if "taskTypes" in project: + task_types = project["taskTypes"] + new_task_types = {} + for task_type in task_types: + name = task_type.pop("name") + new_task_types[name] = task_type + + config["tasks"] = new_task_types + + if config: + output["config"] = config + + for data_key, key in ( + ("library_project", "library"), + ("code", "code"), + ("active", "active") + ): + if key in project: + data[data_key] = project[key] + + if "attrib" in project: + for key, value in project["attrib"].items(): + data[key] = value + + if data: + output["data"] = data + return output + + +def folder_fields_v3_to_v4(fields, con): + """Convert folder fields from v3 to v4 structure. + + Args: + fields (Union[Iterable(str), None]): fields to be converted. + + Returns: + Union[Set(str), None]: Converted fields to v4 fields. + """ + + if not fields: + return None + + folder_attributes = con.get_attributes_for_type("folder") + output = set() + for field in fields: + if field in ("schema", "type", "parent"): + continue + + if field in FOLDER_FIELDS_MAPPING_V3_V4: + output |= FOLDER_FIELDS_MAPPING_V3_V4[field] + if field == "data": + output |= { + "attrib.{}".format(attr) + for attr in folder_attributes + } + + elif field.startswith("data"): + field_parts = field.split(".") + field_parts.pop(0) + data_key = ".".join(field_parts) + if data_key == "label": + output.add("name") + + elif data_key in ("icon", "color"): + continue + + elif data_key.startswith("tasks"): + output.add("tasks") + + elif data_key in folder_attributes: + output.add("attrib.{}".format(data_key)) + + else: + output.add("data") + print("Requested specific key from data {}".format(data_key)) + + else: + raise ValueError("Unknown field mapping for {}".format(field)) + + if "id" not in output: + output.add("id") + return output + + +def convert_v4_tasks_to_v3(tasks): + """Convert v4 task item to v3 task. + + Args: + tasks (List[Dict[str, Any]]): Task entites. + + Returns: + Dict[str, Dict[str, Any]]: Tasks in v3 variant ready for v3 asset. 
+ """ + + output = {} + for task in tasks: + task_name = task["name"] + new_task = { + "type": task["taskType"] + } + output[task_name] = new_task + return output + + +def convert_v4_folder_to_v3(folder, project_name): + """Convert v4 folder to v3 asset. + + Args: + folder (Dict[str, Any]): Folder entity data. + project_name (str): Project name from which folder was queried. + + Returns: + Dict[str, Any]: Converted v4 folder to v3 asset. + """ + + output = { + "_id": folder["id"], + "parent": project_name, + "type": "asset", + "schema": CURRENT_ASSET_DOC_SCHEMA + } + + output_data = folder.get("data") or {} + + if "name" in folder: + output["name"] = folder["name"] + output_data["label"] = folder["name"] + + if "folderType" in folder: + output_data["entityType"] = folder["folderType"] + + for src_key, dst_key in ( + ("parentId", "visualParent"), + ("active", "active"), + ("thumbnailId", "thumbnail_id"), + ("parents", "parents"), + ): + if src_key in folder: + output_data[dst_key] = folder[src_key] + + if "attrib" in folder: + output_data.update(folder["attrib"]) + + if "tools" in output_data: + output_data["tools_env"] = output_data.pop("tools") + + if "tasks" in folder: + output_data["tasks"] = convert_v4_tasks_to_v3(folder["tasks"]) + + output["data"] = output_data + + return output + + +def subset_fields_v3_to_v4(fields, con): + """Convert subset fields from v3 to v4 structure. + + Args: + fields (Union[Iterable(str), None]): fields to be converted. + + Returns: + Union[Set(str), None]: Converted fields to v4 fields. + """ + + if not fields: + return None + + product_attributes = con.get_attributes_for_type("product") + + output = set() + for field in fields: + if field in ("schema", "type"): + continue + + if field in SUBSET_FIELDS_MAPPING_V3_V4: + output |= SUBSET_FIELDS_MAPPING_V3_V4[field] + + elif field == "data": + output.add("productType") + output.add("active") + output |= { + "attrib.{}".format(attr) + for attr in product_attributes + } + + elif field.startswith("data"): + field_parts = field.split(".") + field_parts.pop(0) + data_key = ".".join(field_parts) + if data_key in ("family", "families"): + output.add("productType") + + elif data_key in product_attributes: + output.add("attrib.{}".format(data_key)) + + else: + output.add("data") + print("Requested specific key from data {}".format(data_key)) + + else: + raise ValueError("Unknown field mapping for {}".format(field)) + + if "id" not in output: + output.add("id") + return output + + +def convert_v4_subset_to_v3(subset): + output = { + "_id": subset["id"], + "type": "subset", + "schema": CURRENT_SUBSET_SCHEMA + } + if "folderId" in subset: + output["parent"] = subset["folderId"] + + output_data = subset.get("data") or {} + + if "name" in subset: + output["name"] = subset["name"] + + if "active" in subset: + output_data["active"] = subset["active"] + + if "attrib" in subset: + attrib = subset["attrib"] + if "productGroup" in attrib: + attrib["subsetGroup"] = attrib.pop("productGroup") + output_data.update(attrib) + + family = subset.get("productType") + if family: + output_data["family"] = family + output_data["families"] = [family] + + output["data"] = output_data + + return output + + +def version_fields_v3_to_v4(fields, con): + """Convert version fields from v3 to v4 structure. + + Args: + fields (Union[Iterable(str), None]): fields to be converted. + + Returns: + Union[Set(str), None]: Converted fields to v4 fields. 
+ """ + + if not fields: + return None + + version_attributes = con.get_attributes_for_type("version") + + output = set() + for field in fields: + if field in ("type", "schema", "version_id"): + continue + + if field in VERSION_FIELDS_MAPPING_V3_V4: + output |= VERSION_FIELDS_MAPPING_V3_V4[field] + + elif field == "data": + output |= { + "attrib.{}".format(attr) + for attr in version_attributes + } + output |= { + "author", + "createdAt", + "thumbnailId", + } + + elif field.startswith("data"): + field_parts = field.split(".") + field_parts.pop(0) + data_key = ".".join(field_parts) + if data_key in version_attributes: + output.add("attrib.{}".format(data_key)) + + elif data_key == "thumbnail_id": + output.add("thumbnailId") + + elif data_key == "time": + output.add("createdAt") + + elif data_key == "author": + output.add("author") + + elif data_key in ("tags", ): + continue + + else: + output.add("data") + print("Requested specific key from data {}".format(data_key)) + + else: + raise ValueError("Unknown field mapping for {}".format(field)) + + if "id" not in output: + output.add("id") + return output + + +def convert_v4_version_to_v3(version): + """Convert v4 version entity to v4 version. + + Args: + version (Dict[str, Any]): Queried v4 version entity. + + Returns: + Dict[str, Any]: Conveted version entity to v3 structure. + """ + + version_num = version["version"] + if version_num < 0: + output = { + "_id": version["id"], + "type": "hero_version", + "schema": CURRENT_HERO_VERSION_SCHEMA, + } + if "productId" in version: + output["parent"] = version["productId"] + + if "data" in version: + output["data"] = version["data"] + return output + + output = { + "_id": version["id"], + "type": "version", + "name": version_num, + "schema": CURRENT_VERSION_SCHEMA + } + if "productId" in version: + output["parent"] = version["productId"] + + output_data = version.get("data") or {} + if "attrib" in version: + output_data.update(version["attrib"]) + + for src_key, dst_key in ( + ("active", "active"), + ("thumbnailId", "thumbnail_id"), + ("author", "author") + ): + if src_key in version: + output_data[dst_key] = version[src_key] + + if "createdAt" in version: + created_at = arrow.get(version["createdAt"]) + output_data["time"] = created_at.strftime("%Y%m%dT%H%M%SZ") + + output["data"] = output_data + + return output + + +def representation_fields_v3_to_v4(fields, con): + """Convert representation fields from v3 to v4 structure. + + Args: + fields (Union[Iterable(str), None]): fields to be converted. + + Returns: + Union[Set(str), None]: Converted fields to v4 fields. + """ + + if not fields: + return None + + representation_attributes = con.get_attributes_for_type("representation") + + output = set() + for field in fields: + if field in ("type", "schema"): + continue + + if field in REPRESENTATION_FIELDS_MAPPING_V3_V4: + output |= REPRESENTATION_FIELDS_MAPPING_V3_V4[field] + + elif field.startswith("context"): + output.add("context") + + # TODO: 'files' can have specific attributes but the keys in v3 and v4 + # are not the same (content is not the same) + elif field.startswith("files"): + output |= REPRESENTATION_FILES_FIELDS + + elif field.startswith("data"): + output |= { + "attrib.{}".format(attr) + for attr in representation_attributes + } + + else: + raise ValueError("Unknown field mapping for {}".format(field)) + + if "id" not in output: + output.add("id") + return output + + +def convert_v4_representation_to_v3(representation): + """Convert v4 representation to v3 representation. 
+
+
+def convert_v4_representation_to_v3(representation):
+    """Convert v4 representation to v3 representation.
+
+    Args:
+        representation (Dict[str, Any]): Queried representation from v4
+            server.
+
+    Returns:
+        Dict[str, Any]: Converted representation to v3 structure.
+    """
+
+    output = {
+        "type": "representation",
+        "schema": CURRENT_REPRESENTATION_SCHEMA,
+    }
+    if "id" in representation:
+        output["_id"] = representation["id"]
+
+    for v3_key, v4_key in (
+        ("name", "name"),
+        ("parent", "versionId")
+    ):
+        if v4_key in representation:
+            output[v3_key] = representation[v4_key]
+
+    if "context" in representation:
+        context = representation["context"]
+        if isinstance(context, six.string_types):
+            context = json.loads(context)
+
+        if "folder" in context:
+            _c_folder = context.pop("folder")
+            context["asset"] = _c_folder["name"]
+
+        if "product" in context:
+            _c_product = context.pop("product")
+            context["family"] = _c_product["type"]
+            context["subset"] = _c_product["name"]
+
+        output["context"] = context
+
+    if "files" in representation:
+        files = representation["files"]
+        new_files = []
+        # From GraphQl is list
+        if isinstance(files, list):
+            for file_info in files:
+                file_info["_id"] = file_info["id"]
+                new_files.append(file_info)
+
+        # From RestPoint is dictionary
+        elif isinstance(files, dict):
+            for file_id, file_info in files.items():
+                file_info["_id"] = file_id
+                new_files.append(file_info)
+
+        for file_info in new_files:
+            if not file_info.get("sites"):
+                file_info["sites"] = [{
+                    "name": "studio"
+                }]
+
+        output["files"] = new_files
+
+    if representation.get("active") is False:
+        output["type"] = "archived_representation"
+        output["old_id"] = output["_id"]
+
+    output_data = representation.get("data") or {}
+    if "attrib" in representation:
+        output_data.update(representation["attrib"])
+
+    for key, data_key in (
+        ("active", "active"),
+    ):
+        if key in representation:
+            output_data[data_key] = representation[key]
+
+    if "template" in output_data:
+        output_data["template"] = (
+            output_data["template"]
+            .replace("{product[name]}", "{subset}")
+            .replace("{product[type]}", "{family}")
+        )
+
+    output["data"] = output_data
+
+    return output
+
+
+def workfile_info_fields_v3_to_v4(fields):
+    if not fields:
+        return None
+
+    new_fields = set()
+    fields = set(fields)
+    for v3_key, v4_key in (
+        ("_id", "id"),
+        ("files", "path"),
+        ("filename", "name"),
+        ("data", "data"),
+    ):
+        if v3_key in fields:
+            new_fields.add(v4_key)
+
+    if "parent" in fields or "task_name" in fields:
+        new_fields.add("taskId")
+
+    return new_fields
+
+
+def convert_v4_workfile_info_to_v3(workfile_info, task):
+    output = {
+        "type": "workfile",
+        "schema": CURRENT_WORKFILE_INFO_SCHEMA,
+    }
+    if "id" in workfile_info:
+        output["_id"] = workfile_info["id"]
+
+    if "path" in workfile_info:
+        output["files"] = [workfile_info["path"]]
+
+    if "name" in workfile_info:
+        output["filename"] = workfile_info["name"]
+
+    if "taskId" in workfile_info:
+        output["task_name"] = task["name"]
+        output["parent"] = task["folderId"]
+
+    return output
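The representation converter above also renames context keys: v4's `folder` and `product` entries collapse back into v3's `asset`, `family` and `subset`. A small sketch with a hypothetical, JSON-encoded context as GraphQL may return it (ids and names are placeholders):

```python
repre = {
    "id": "1234",
    "name": "exr",
    "versionId": "abcd",
    "context": (
        '{"folder": {"name": "sh010"},'
        ' "product": {"name": "renderMain", "type": "render"}}'
    ),
}

v3_repre = convert_v4_representation_to_v3(repre)
# v3_repre["context"]
# == {"asset": "sh010", "family": "render", "subset": "renderMain"}
```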
attribs[key] = value + + if attribs: + folder["attrib"] = attribs + + if data: + folder["data"] = data + return folder + + +def convert_create_task_to_v4(task, project, con): + if not project["taskTypes"]: + raise ValueError( + "Project \"{}\" does not have any task types".format( + project["name"])) + + task_type = task["type"] + if task_type not in project["taskTypes"]: + task_type = tuple(project["taskTypes"].keys())[0] + + return { + "name": task["name"], + "taskType": task_type, + "folderId": task["folderId"] + } + + +def convert_create_subset_to_v4(subset, con): + product_attributes = con.get_attributes_for_type("product") + + subset_data = subset["data"] + product_type = subset_data.get("family") + if not product_type: + product_type = subset_data["families"][0] + + converted_product = { + "name": subset["name"], + "productType": product_type, + "folderId": subset["parent"], + } + entity_id = subset.get("_id") + if entity_id: + converted_product["id"] = entity_id + + attribs = {} + data = {} + if "subsetGroup" in subset_data: + subset_data["productGroup"] = subset_data.pop("subsetGroup") + for key, value in subset_data.items(): + if key not in product_attributes: + data[key] = value + elif value is not None: + attribs[key] = value + + if attribs: + converted_product["attrib"] = attribs + + if data: + converted_product["data"] = data + + return converted_product + + +def convert_create_version_to_v4(version, con): + version_attributes = con.get_attributes_for_type("version") + converted_version = { + "version": version["name"], + "productId": version["parent"], + } + entity_id = version.get("_id") + if entity_id: + converted_version["id"] = entity_id + + version_data = version["data"] + attribs = {} + data = {} + for key, value in version_data.items(): + if key not in version_attributes: + data[key] = value + elif value is not None: + attribs[key] = value + + if attribs: + converted_version["attrib"] = attribs + + if data: + converted_version["data"] = attribs + + return converted_version + + +def convert_create_hero_version_to_v4(hero_version, project_name, con): + if "version_id" in hero_version: + version_id = hero_version["version_id"] + version = con.get_version_by_id(project_name, version_id) + version["version"] = - version["version"] + + for auto_key in ( + "name", + "createdAt", + "updatedAt", + "author", + ): + version.pop(auto_key, None) + + return version + + version_attributes = con.get_attributes_for_type("version") + converted_version = { + "version": hero_version["version"], + "productId": hero_version["parent"], + } + entity_id = hero_version.get("_id") + if entity_id: + converted_version["id"] = entity_id + + version_data = hero_version["data"] + attribs = {} + data = {} + for key, value in version_data.items(): + if key not in version_attributes: + data[key] = value + elif value is not None: + attribs[key] = value + + if attribs: + converted_version["attrib"] = attribs + + if data: + converted_version["data"] = attribs + + return converted_version + + +def convert_create_representation_to_v4(representation, con): + representation_attributes = con.get_attributes_for_type("representation") + + converted_representation = { + "name": representation["name"], + "versionId": representation["parent"], + } + entity_id = representation.get("_id") + if entity_id: + converted_representation["id"] = entity_id + + if representation.get("type") == "archived_representation": + converted_representation["active"] = False + + new_files = [] + for file_item in representation["files"]: 
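+        # Only the 'hash', 'path' and 'size' keys of a v3 file item are
+        # kept below; a fresh id is generated and 'hash_type' is set to
+        # "op3" so the origin of the hash stays identifiable.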
+ new_file_item = { + key: value + for key, value in file_item.items() + if key in ("hash", "path", "size") + } + new_file_item.update({ + "id": create_entity_id(), + "hash_type": "op3", + "name": os.path.basename(new_file_item["path"]) + }) + new_files.append(new_file_item) + + converted_representation["files"] = new_files + + context = representation["context"] + context["folder"] = { + "name": context.pop("asset", None) + } + context["product"] = { + "type": context.pop("family", None), + "name": context.pop("subset", None), + } + + attribs = {} + data = { + "context": context, + } + + representation_data = representation["data"] + representation_data["template"] = ( + representation_data["template"] + .replace("{subset}", "{product[name]}") + .replace("{family}", "{product[type]}") + ) + + for key, value in representation_data.items(): + if key not in representation_attributes: + data[key] = value + elif value is not None: + attribs[key] = value + + if attribs: + converted_representation["attrib"] = attribs + + if data: + converted_representation["data"] = data + + return converted_representation + + +def convert_create_workfile_info_to_v4(data, project_name, con): + folder_id = data["parent"] + task_name = data["task_name"] + task = con.get_task_by_name(project_name, folder_id, task_name) + if not task: + return None + + workfile_attributes = con.get_attributes_for_type("workfile") + filename = data["filename"] + possible_attribs = { + "extension": os.path.splitext(filename)[-1] + } + attribs = {} + for attr in workfile_attributes: + if attr in possible_attribs: + attribs[attr] = possible_attribs[attr] + + output = { + "path": data["files"][0], + "name": filename, + "taskId": task["id"] + } + if "_id" in data: + output["id"] = data["_id"] + + if attribs: + output["attrib"] = attribs + + output_data = data.get("data") + if output_data: + output["data"] = output_data + return output + + +def _from_flat_dict(data): + output = {} + for key, value in data.items(): + output_value = output + subkeys = key.split(".") + last_key = subkeys.pop(-1) + for subkey in subkeys: + if subkey not in output_value: + output_value[subkey] = {} + output_value = output_value[subkey] + + output_value[last_key] = value + return output + + +def _to_flat_dict(data): + output = {} + flat_queue = collections.deque() + flat_queue.append(([], data)) + while flat_queue: + item = flat_queue.popleft() + parent_keys, data = item + for key, value in data.items(): + keys = list(parent_keys) + keys.append(key) + if isinstance(value, dict): + flat_queue.append((keys, value)) + else: + full_key = ".".join(keys) + output[full_key] = value + + return output + + +def convert_update_folder_to_v4(project_name, asset_id, update_data, con): + new_update_data = {} + + folder_attributes = con.get_attributes_for_type("folder") + full_update_data = _from_flat_dict(update_data) + data = full_update_data.get("data") + + has_new_parent = False + has_task_changes = False + parent_id = None + tasks = None + new_data = {} + attribs = full_update_data.pop("attrib", {}) + if "type" in update_data: + new_update_data["active"] = update_data["type"] == "asset" + + if data: + if "thumbnail_id" in data: + new_update_data["thumbnailId"] = data.pop("thumbnail_id") + + if "tasks" in data: + tasks = data.pop("tasks") + has_task_changes = True + + if "visualParent" in data: + has_new_parent = True + parent_id = data.pop("visualParent") + + for key, value in data.items(): + if key in folder_attributes: + attribs[key] = value + else: + new_data[key] = 
value + + if "name" in update_data: + new_update_data["name"] = update_data["name"] + + if "type" in update_data: + new_type = update_data["type"] + if new_type == "asset": + new_update_data["active"] = True + elif new_type == "archived_asset": + new_update_data["active"] = False + + if has_new_parent: + new_update_data["parentId"] = parent_id + + if new_data: + print("Folder has new data: {}".format(new_data)) + new_update_data["data"] = new_data + + if attribs: + new_update_data["attrib"] = attribs + + if has_task_changes: + raise ValueError("Task changes of folder are not implemented") + + return _to_flat_dict(new_update_data) + + +def convert_update_subset_to_v4(project_name, subset_id, update_data, con): + new_update_data = {} + + product_attributes = con.get_attributes_for_type("product") + full_update_data = _from_flat_dict(update_data) + data = full_update_data.get("data") + new_data = {} + attribs = full_update_data.pop("attrib", {}) + if data: + if "family" in data: + family = data.pop("family") + new_update_data["productType"] = family + + if "families" in data: + families = data.pop("families") + if "productType" not in new_update_data: + new_update_data["productType"] = families[0] + + if "subsetGroup" in data: + data["productGroup"] = data.pop("subsetGroup") + for key, value in data.items(): + if key in product_attributes: + if value is REMOVED_VALUE: + value = None + attribs[key] = value + + elif value is not REMOVED_VALUE: + new_data[key] = value + + if "name" in update_data: + new_update_data["name"] = update_data["name"] + + if "type" in update_data: + new_type = update_data["type"] + if new_type == "subset": + new_update_data["active"] = True + elif new_type == "archived_subset": + new_update_data["active"] = False + + if "parent" in update_data: + new_update_data["folderId"] = update_data["parent"] + + flat_data = _to_flat_dict(new_update_data) + if attribs: + flat_data["attrib"] = attribs + + if new_data: + print("Subset has new data: {}".format(new_data)) + flat_data["data"] = new_data + + return flat_data + + +def convert_update_version_to_v4(project_name, version_id, update_data, con): + new_update_data = {} + + version_attributes = con.get_attributes_for_type("version") + full_update_data = _from_flat_dict(update_data) + data = full_update_data.get("data") + new_data = {} + attribs = full_update_data.pop("attrib", {}) + if data: + if "author" in data: + new_update_data["author"] = data.pop("author") + + if "thumbnail_id" in data: + new_update_data["thumbnailId"] = data.pop("thumbnail_id") + + for key, value in data.items(): + if key in version_attributes: + if value is REMOVED_VALUE: + value = None + attribs[key] = value + + elif value is not REMOVED_VALUE: + new_data[key] = value + + if "name" in update_data: + new_update_data["version"] = update_data["name"] + + if "type" in update_data: + new_type = update_data["type"] + if new_type == "version": + new_update_data["active"] = True + elif new_type == "archived_version": + new_update_data["active"] = False + + if "parent" in update_data: + new_update_data["productId"] = update_data["parent"] + + flat_data = _to_flat_dict(new_update_data) + if attribs: + flat_data["attrib"] = attribs + + if new_data: + print("Version has new data: {}".format(new_data)) + flat_data["data"] = new_data + return flat_data + + +def convert_update_hero_version_to_v4( + project_name, hero_version_id, update_data, con +): + if "version_id" not in update_data: + return None + + version_id = update_data["version_id"] + hero_version = 
con.get_hero_version_by_id(project_name, hero_version_id)
+    version = con.get_version_by_id(project_name, version_id)
+    version["version"] = -version["version"]
+    version["id"] = hero_version_id
+
+    for auto_key in (
+        "name",
+        "createdAt",
+        "updatedAt",
+        "author",
+    ):
+        version.pop(auto_key, None)
+
+    return prepare_entity_changes(hero_version, version)
+
+
+def convert_update_representation_to_v4(
+    project_name, repre_id, update_data, con
+):
+    new_update_data = {}
+
+    representation_attributes = con.get_attributes_for_type("representation")
+    full_update_data = _from_flat_dict(update_data)
+    data = full_update_data.get("data")
+
+    new_data = {}
+    attribs = full_update_data.pop("attrib", {})
+    if data:
+        for key, value in data.items():
+            if key in representation_attributes:
+                attribs[key] = value
+            else:
+                new_data[key] = value
+
+    if "template" in attribs:
+        attribs["template"] = (
+            attribs["template"]
+            .replace("{family}", "{product[type]}")
+            .replace("{subset}", "{product[name]}")
+        )
+
+    if "name" in update_data:
+        new_update_data["name"] = update_data["name"]
+
+    if "type" in update_data:
+        new_type = update_data["type"]
+        if new_type == "representation":
+            new_update_data["active"] = True
+        elif new_type == "archived_representation":
+            new_update_data["active"] = False
+
+    if "parent" in update_data:
+        new_update_data["versionId"] = update_data["parent"]
+
+    if "context" in update_data:
+        context = update_data["context"]
+        if "asset" in context:
+            context["folder"] = {"name": context.pop("asset")}
+
+        if "family" in context or "subset" in context:
+            context["product"] = {
+                "name": context.pop("subset", None),
+                "type": context.pop("family", None),
+            }
+        new_data["context"] = context
+
+    if "files" in update_data:
+        new_files = update_data["files"]
+        if isinstance(new_files, dict):
+            new_files = list(new_files.values())
+
+        for item in new_files:
+            for key in tuple(item.keys()):
+                if key not in ("hash", "path", "size"):
+                    item.pop(key)
+            item.update({
+                "id": create_entity_id(),
+                "name": os.path.basename(item["path"]),
+                "hash_type": "op3",
+            })
+        new_update_data["files"] = new_files
+
+    flat_data = _to_flat_dict(new_update_data)
+    if attribs:
+        flat_data["attrib"] = attribs
+
+    if new_data:
+        print("Representation has new data: {}".format(new_data))
+        flat_data["data"] = new_data
+
+    return flat_data
+
+
+def convert_update_workfile_info_to_v4(
+    project_name, workfile_id, update_data, con
+):
+    return {
+        key: value
+        for key, value in update_data.items()
+        if key.startswith("data")
+    }
diff --git a/openpype/client/server/entities.py b/openpype/client/server/entities.py
new file mode 100644
index 0000000000..9579f13add
--- /dev/null
+++ b/openpype/client/server/entities.py
@@ -0,0 +1,694 @@
+import collections
+
+from ayon_api import get_server_api_connection
+
+from openpype.client.mongo.operations import CURRENT_THUMBNAIL_SCHEMA
+
+from .openpype_comp import get_folders_with_tasks
+from .conversion_utils import (
+    project_fields_v3_to_v4,
+    convert_v4_project_to_v3,
+
+    folder_fields_v3_to_v4,
+    convert_v4_folder_to_v3,
+
+    subset_fields_v3_to_v4,
+    convert_v4_subset_to_v3,
+
+    version_fields_v3_to_v4,
+    convert_v4_version_to_v3,
+
+    representation_fields_v3_to_v4,
+    convert_v4_representation_to_v3,
+
+    workfile_info_fields_v3_to_v4,
+    convert_v4_workfile_info_to_v3,
+)
+
+
+def get_projects(active=True, inactive=False, library=None, fields=None):
+    if not active and not inactive:
+        return
+
+    if active and inactive:
+        active = None
+    elif active:
+        active = True
+    elif inactive:
+        active = False
+
con = get_server_api_connection() + fields = project_fields_v3_to_v4(fields, con) + for project in con.get_projects(active, library, fields=fields): + yield convert_v4_project_to_v3(project) + + +def get_project(project_name, active=True, inactive=False, fields=None): + # Skip if both are disabled + con = get_server_api_connection() + fields = project_fields_v3_to_v4(fields, con) + return convert_v4_project_to_v3( + con.get_project(project_name, fields=fields) + ) + + +def get_whole_project(*args, **kwargs): + raise NotImplementedError("'get_whole_project' not implemented") + + +def _get_subsets( + project_name, + subset_ids=None, + subset_names=None, + folder_ids=None, + names_by_folder_ids=None, + archived=False, + fields=None +): + # Convert fields and add minimum required fields + con = get_server_api_connection() + fields = subset_fields_v3_to_v4(fields, con) + if fields is not None: + for key in ( + "id", + "active" + ): + fields.add(key) + + active = None + if archived: + active = False + + for subset in con.get_products( + project_name, + subset_ids, + subset_names, + folder_ids, + names_by_folder_ids, + active, + fields + ): + yield convert_v4_subset_to_v3(subset) + + +def _get_versions( + project_name, + version_ids=None, + subset_ids=None, + versions=None, + hero=True, + standard=True, + latest=None, + active=None, + fields=None +): + con = get_server_api_connection() + + fields = version_fields_v3_to_v4(fields, con) + + # Make sure 'productId' and 'version' are available when hero versions + # are queried + if fields and hero: + fields = set(fields) + fields |= {"productId", "version"} + + queried_versions = con.get_versions( + project_name, + version_ids, + subset_ids, + versions, + hero, + standard, + latest, + active=active, + fields=fields + ) + + versions = [] + hero_versions = [] + for version in queried_versions: + if version["version"] < 0: + hero_versions.append(version) + else: + versions.append(convert_v4_version_to_v3(version)) + + if hero_versions: + subset_ids = set() + versions_nums = set() + for hero_version in hero_versions: + versions_nums.add(abs(hero_version["version"])) + subset_ids.add(hero_version["productId"]) + + hero_eq_versions = con.get_versions( + project_name, + product_ids=subset_ids, + versions=versions_nums, + hero=False, + fields=["id", "version", "productId"] + ) + hero_eq_by_subset_id = collections.defaultdict(list) + for version in hero_eq_versions: + hero_eq_by_subset_id[version["productId"]].append(version) + + for hero_version in hero_versions: + abs_version = abs(hero_version["version"]) + subset_id = hero_version["productId"] + version_id = None + for version in hero_eq_by_subset_id.get(subset_id, []): + if version["version"] == abs_version: + version_id = version["id"] + break + conv_hero = convert_v4_version_to_v3(hero_version) + conv_hero["version_id"] = version_id + versions.append(conv_hero) + + return versions + + +def get_asset_by_id(project_name, asset_id, fields=None): + assets = get_assets( + project_name, asset_ids=[asset_id], fields=fields + ) + for asset in assets: + return asset + return None + + +def get_asset_by_name(project_name, asset_name, fields=None): + assets = get_assets( + project_name, asset_names=[asset_name], fields=fields + ) + for asset in assets: + return asset + return None + + +def get_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + archived=False, + fields=None +): + if not project_name: + return + + active = True + if archived: + active = False + + con = 
get_server_api_connection() + fields = folder_fields_v3_to_v4(fields, con) + kwargs = dict( + folder_ids=asset_ids, + folder_names=asset_names, + parent_ids=parent_ids, + active=active, + fields=fields + ) + + if fields is None or "tasks" in fields: + folders = get_folders_with_tasks(con, project_name, **kwargs) + + else: + folders = con.get_folders(project_name, **kwargs) + + for folder in folders: + yield convert_v4_folder_to_v3(folder, project_name) + + +def get_archived_assets( + project_name, + asset_ids=None, + asset_names=None, + parent_ids=None, + fields=None +): + return get_assets( + project_name, + asset_ids, + asset_names, + parent_ids, + True, + fields + ) + + +def get_asset_ids_with_subsets(project_name, asset_ids=None): + con = get_server_api_connection() + return con.get_folder_ids_with_products(project_name, asset_ids) + + +def get_subset_by_id(project_name, subset_id, fields=None): + subsets = get_subsets( + project_name, subset_ids=[subset_id], fields=fields + ) + for subset in subsets: + return subset + return None + + +def get_subset_by_name(project_name, subset_name, asset_id, fields=None): + subsets = get_subsets( + project_name, + subset_names=[subset_name], + asset_ids=[asset_id], + fields=fields + ) + for subset in subsets: + return subset + return None + + +def get_subsets( + project_name, + subset_ids=None, + subset_names=None, + asset_ids=None, + names_by_asset_ids=None, + archived=False, + fields=None +): + return _get_subsets( + project_name, + subset_ids, + subset_names, + asset_ids, + names_by_asset_ids, + archived, + fields=fields + ) + + +def get_subset_families(project_name, subset_ids=None): + con = get_server_api_connection() + return con.get_product_type_names(project_name, subset_ids) + + +def get_version_by_id(project_name, version_id, fields=None): + versions = get_versions( + project_name, + version_ids=[version_id], + fields=fields, + hero=True + ) + for version in versions: + return version + return None + + +def get_version_by_name(project_name, version, subset_id, fields=None): + versions = get_versions( + project_name, + subset_ids=[subset_id], + versions=[version], + fields=fields + ) + for version in versions: + return version + return None + + +def get_versions( + project_name, + version_ids=None, + subset_ids=None, + versions=None, + hero=False, + fields=None +): + return _get_versions( + project_name, + version_ids, + subset_ids, + versions, + hero=hero, + standard=True, + fields=fields + ) + + +def get_hero_version_by_id(project_name, version_id, fields=None): + versions = get_hero_versions( + project_name, + version_ids=[version_id], + fields=fields + ) + for version in versions: + return version + return None + + +def get_hero_version_by_subset_id( + project_name, subset_id, fields=None +): + versions = get_hero_versions( + project_name, + subset_ids=[subset_id], + fields=fields + ) + for version in versions: + return version + return None + + +def get_hero_versions( + project_name, subset_ids=None, version_ids=None, fields=None +): + return _get_versions( + project_name, + version_ids=version_ids, + subset_ids=subset_ids, + hero=True, + standard=False, + fields=fields + ) + + +def get_last_versions(project_name, subset_ids, active=None, fields=None): + if fields: + fields = set(fields) + fields.add("parent") + + versions = _get_versions( + project_name, + subset_ids=subset_ids, + latest=True, + hero=False, + active=active, + fields=fields + ) + return { + version["parent"]: version + for version in versions + } + + +def 
get_last_version_by_subset_id(project_name, subset_id, fields=None):
+    versions = _get_versions(
+        project_name,
+        subset_ids=[subset_id],
+        latest=True,
+        hero=False,
+        fields=fields
+    )
+    if not versions:
+        return None
+    return versions[0]
+
+
+def get_last_version_by_subset_name(
+    project_name,
+    subset_name,
+    asset_id=None,
+    asset_name=None,
+    fields=None
+):
+    if not asset_id and not asset_name:
+        return None
+
+    if not asset_id:
+        asset = get_asset_by_name(
+            project_name, asset_name, fields=["_id"]
+        )
+        if not asset:
+            return None
+        asset_id = asset["_id"]
+
+    subset = get_subset_by_name(
+        project_name, subset_name, asset_id, fields=["_id"]
+    )
+    if not subset:
+        return None
+    return get_last_version_by_subset_id(
+        project_name, subset["_id"], fields=fields
+    )
+
+
+def get_output_link_versions(project_name, version_id, fields=None):
+    if not version_id:
+        return []
+
+    con = get_server_api_connection()
+    version_links = con.get_version_links(
+        project_name, version_id, link_direction="out")
+
+    version_ids = {
+        link["entityId"]
+        for link in version_links
+        if link["entityType"] == "version"
+    }
+    if not version_ids:
+        return []
+
+    return get_versions(project_name, version_ids=version_ids, fields=fields)
+
+
+def version_is_latest(project_name, version_id):
+    con = get_server_api_connection()
+    return con.version_is_latest(project_name, version_id)
+
+
+def get_representation_by_id(project_name, representation_id, fields=None):
+    representations = get_representations(
+        project_name,
+        representation_ids=[representation_id],
+        fields=fields
+    )
+    for representation in representations:
+        return representation
+    return None
+
+
+def get_representation_by_name(
+    project_name, representation_name, version_id, fields=None
+):
+    representations = get_representations(
+        project_name,
+        representation_names=[representation_name],
+        version_ids=[version_id],
+        fields=fields
+    )
+    for representation in representations:
+        return representation
+    return None
+
+
+def get_representations(
+    project_name,
+    representation_ids=None,
+    representation_names=None,
+    version_ids=None,
+    context_filters=None,
+    names_by_version_ids=None,
+    archived=False,
+    standard=True,
+    fields=None
+):
+    if context_filters is not None:
+        # TODO should we add the support? 
+ # - there was ability to fitler using regex + raise ValueError("OP v4 can't filter by representation context.") + + if not archived and not standard: + return + + if archived and not standard: + active = False + elif not archived and standard: + active = True + else: + active = None + + con = get_server_api_connection() + fields = representation_fields_v3_to_v4(fields, con) + if fields and active is not None: + fields.add("active") + + representations = con.get_representations( + project_name, + representation_ids, + representation_names, + version_ids, + names_by_version_ids, + active, + fields=fields + ) + for representation in representations: + yield convert_v4_representation_to_v3(representation) + + +def get_representation_parents(project_name, representation): + if not representation: + return None + + repre_id = representation["_id"] + parents_by_repre_id = get_representations_parents( + project_name, [representation] + ) + return parents_by_repre_id[repre_id] + + +def get_representations_parents(project_name, representations): + repre_ids = { + repre["_id"] + for repre in representations + } + con = get_server_api_connection() + parents_by_repre_id = con.get_representations_parents(project_name, + repre_ids) + folder_ids = set() + for parents in parents_by_repre_id .values(): + folder_ids.add(parents[2]["id"]) + + tasks_by_folder_id = {} + + new_parents = {} + for repre_id, parents in parents_by_repre_id .items(): + version, subset, folder, project = parents + folder_tasks = tasks_by_folder_id.get(folder["id"]) or {} + folder["tasks"] = folder_tasks + new_parents[repre_id] = ( + convert_v4_version_to_v3(version), + convert_v4_subset_to_v3(subset), + convert_v4_folder_to_v3(folder, project_name), + project + ) + return new_parents + + +def get_archived_representations( + project_name, + representation_ids=None, + representation_names=None, + version_ids=None, + context_filters=None, + names_by_version_ids=None, + fields=None +): + return get_representations( + project_name, + representation_ids=representation_ids, + representation_names=representation_names, + version_ids=version_ids, + context_filters=context_filters, + names_by_version_ids=names_by_version_ids, + archived=True, + standard=False, + fields=fields + ) + + +def get_thumbnail( + project_name, thumbnail_id, entity_type, entity_id, fields=None +): + """Receive thumbnail entity data. + + Args: + project_name (str): Name of project where to look for queried entities. + thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity. + entity_type (str): Type of entity for which the thumbnail should be + received. + entity_id (str): Id of entity for which the thumbnail should be + received. + fields (Iterable[str]): Fields that should be returned. All fields are + returned if 'None' is passed. + + Returns: + None: If thumbnail with specified id was not found. + Dict: Thumbnail entity data which can be reduced to specified 'fields'. + """ + + if not thumbnail_id or not entity_type or not entity_id: + return None + + if entity_type == "asset": + entity_type = "folder" + + elif entity_type == "hero_version": + entity_type = "version" + + return { + "_id": thumbnail_id, + "type": "thumbnail", + "schema": CURRENT_THUMBNAIL_SCHEMA, + "data": { + "entity_type": entity_type, + "entity_id": entity_id + } + } + + +def get_thumbnails(project_name, thumbnail_contexts, fields=None): + """Get thumbnail entities. + + Warning: + This function is not OpenPype compatible. 
There is no usage of this
+        function in the codebase, so there is nothing to convert. The
+        previous implementation cannot be AYON compatible without entity
+        types.
+    """
+
+    thumbnail_items = []
+    for thumbnail_context in thumbnail_contexts:
+        thumbnail_id, entity_type, entity_id = thumbnail_context
+        thumbnail_item = get_thumbnail(
+            project_name, thumbnail_id, entity_type, entity_id
+        )
+        if thumbnail_item:
+            thumbnail_items.append(thumbnail_item)
+    return thumbnail_items
+
+
+def get_thumbnail_id_from_source(project_name, src_type, src_id):
+    """Receive thumbnail id from source entity.
+
+    Args:
+        project_name (str): Name of project where to look for queried
+            entities.
+        src_type (str): Type of source entity ('asset', 'version').
+        src_id (Union[str, ObjectId]): Id of source entity.
+
+    Returns:
+        ObjectId: Thumbnail id assigned to entity.
+        None: If source entity does not have any thumbnail id assigned.
+    """
+
+    if not src_type or not src_id:
+        return None
+
+    if src_type == "version":
+        version = get_version_by_id(
+            project_name, src_id, fields=["data.thumbnail_id"]
+        ) or {}
+        return version.get("data", {}).get("thumbnail_id")
+
+    if src_type == "asset":
+        asset = get_asset_by_id(
+            project_name, src_id, fields=["data.thumbnail_id"]
+        ) or {}
+        return asset.get("data", {}).get("thumbnail_id")
+
+    return None
+
+
+def get_workfile_info(
+    project_name, asset_id, task_name, filename, fields=None
+):
+    if not asset_id or not task_name or not filename:
+        return None
+
+    con = get_server_api_connection()
+    task = con.get_task_by_name(
+        project_name, asset_id, task_name, fields=["id", "name", "folderId"]
+    )
+    if not task:
+        return None
+
+    fields = workfile_info_fields_v3_to_v4(fields)
+
+    for workfile_info in con.get_workfiles_info(
+        project_name, task_ids=[task["id"]], fields=fields
+    ):
+        if workfile_info["name"] == filename:
+            return convert_v4_workfile_info_to_v3(workfile_info, task)
+    return None
diff --git a/openpype/client/server/entity_links.py b/openpype/client/server/entity_links.py
new file mode 100644
index 0000000000..d8395aabe7
--- /dev/null
+++ b/openpype/client/server/entity_links.py
@@ -0,0 +1,156 @@
+import ayon_api
+from ayon_api import get_folder_links, get_versions_links
+
+from .entities import get_assets, get_representation_by_id
+
+
+def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None):
+    """Extract linked asset ids from asset document.
+
+    One of asset document or asset id must be passed.
+
+    Note:
+        Asset links now work only from asset to assets.
+
+    Args:
+        project_name (str): Project where to look for asset.
+        asset_doc (dict): Asset document from DB.
+        asset_id (str): Asset id to find its document.
+
+    Returns:
+        List[Union[ObjectId, str]]: Asset ids of input links.
+    """
+
+    if not asset_doc and not asset_id:
+        return []
+
+    if not asset_id:
+        asset_id = asset_doc["_id"]
+
+    links = get_folder_links(project_name, asset_id, link_direction="in")
+    return [
+        link["entityId"]
+        for link in links
+        if link["entityType"] == "folder"
+    ]
+
+
+def get_linked_assets(
+    project_name, asset_doc=None, asset_id=None, fields=None
+):
+    """Return linked assets based on passed asset document.
+
+    One of asset document or asset id must be passed.
+
+    Args:
+        project_name (str): Name of project where to look for queried
+            entities.
+        asset_doc (Dict[str, Any]): Asset document from database.
+        asset_id (Union[ObjectId, str]): Asset id. Can be used instead of
+            asset document.
+        fields (Iterable[str]): Fields that should be returned. 
All fields are + returned if 'None' is passed. + + Returns: + List[Dict[str, Any]]: Asset documents of input links for passed + asset doc. + """ + + link_ids = get_linked_asset_ids(project_name, asset_doc, asset_id) + if not link_ids: + return [] + return list(get_assets(project_name, asset_ids=link_ids, fields=fields)) + + + +def get_linked_representation_id( + project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None +): + """Returns list of linked ids of particular type (if provided). + + One of representation document or representation id must be passed. + Note: + Representation links now works only from representation through version + back to representations. + + Todos: + Missing depth query. Not sure how it did find more representations in + depth, probably links to version? + + Args: + project_name (str): Name of project where look for links. + repre_doc (Dict[str, Any]): Representation document. + repre_id (Union[ObjectId, str]): Representation id. + link_type (str): Type of link (e.g. 'reference', ...). + max_depth (int): Limit recursion level. Default: 0 + + Returns: + List[ObjectId] Linked representation ids. + """ + + if repre_doc: + repre_id = repre_doc["_id"] + + if not repre_id and not repre_doc: + return [] + + version_id = None + if repre_doc: + version_id = repre_doc.get("parent") + + if not version_id: + repre_doc = get_representation_by_id( + project_name, repre_id, fields=["parent"] + ) + if repre_doc: + version_id = repre_doc["parent"] + + if not version_id: + return [] + + if max_depth is None or max_depth == 0: + max_depth = 1 + + link_types = None + if link_type: + link_types = [link_type] + + # Store already found version ids to avoid recursion, and also to store + # output -> Don't forget to remove 'version_id' at the end!!! 
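+    # Traversal sketch: each pass of the loop below queries outgoing version
+    # links only for the versions discovered in the previous pass, so
+    # 'max_depth' bounds how many link hops are followed from the source
+    # version.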
+ linked_version_ids = {version_id} + # Each loop of depth will reset this variable + versions_to_check = {version_id} + for _ in range(max_depth): + if not versions_to_check: + break + + links = get_versions_links( + project_name, + versions_to_check, + link_types=link_types, + link_direction="out") + + versions_to_check = set() + for link in links: + # Care only about version links + if link["entityType"] != "version": + continue + entity_id = link["entityId"] + # Skip already found linked version ids + if entity_id in linked_version_ids: + continue + linked_version_ids.add(entity_id) + versions_to_check.add(entity_id) + + linked_version_ids.remove(version_id) + if not linked_version_ids: + return [] + + representations = ayon_api.get_representations( + project_name, + version_ids=linked_version_ids, + fields=["id"]) + return [ + repre["id"] + for repre in representations + ] diff --git a/openpype/client/server/openpype_comp.py b/openpype/client/server/openpype_comp.py new file mode 100644 index 0000000000..a123fe3167 --- /dev/null +++ b/openpype/client/server/openpype_comp.py @@ -0,0 +1,156 @@ +import collections +from ayon_api.graphql import GraphQlQuery, FIELD_VALUE, fields_to_dict + +from .constants import DEFAULT_FOLDER_FIELDS + + +def folders_tasks_graphql_query(fields): + query = GraphQlQuery("FoldersQuery") + project_name_var = query.add_variable("projectName", "String!") + folder_ids_var = query.add_variable("folderIds", "[String!]") + parent_folder_ids_var = query.add_variable("parentFolderIds", "[String!]") + folder_paths_var = query.add_variable("folderPaths", "[String!]") + folder_names_var = query.add_variable("folderNames", "[String!]") + has_products_var = query.add_variable("folderHasProducts", "Boolean!") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + folders_field = project_field.add_field_with_edges("folders") + folders_field.set_filter("ids", folder_ids_var) + folders_field.set_filter("parentIds", parent_folder_ids_var) + folders_field.set_filter("names", folder_names_var) + folders_field.set_filter("paths", folder_paths_var) + folders_field.set_filter("hasProducts", has_products_var) + + fields = set(fields) + fields.discard("tasks") + tasks_field = folders_field.add_field_with_edges("tasks") + tasks_field.add_field("name") + tasks_field.add_field("taskType") + + nested_fields = fields_to_dict(fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, folders_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def get_folders_with_tasks( + con, + project_name, + folder_ids=None, + folder_paths=None, + folder_names=None, + parent_ids=None, + active=True, + fields=None +): + """Query folders with tasks from server. + + This is for v4 compatibility where tasks were stored on assets. This is + an inefficient way how folders and tasks are queried so it was added only + as compatibility function. + + Todos: + Folder name won't be unique identifier, so we should add folder path + filtering. + + Notes: + Filter 'active' don't have direct filter in GraphQl. + + Args: + con (ServerAPI): Connection to server. + project_name (str): Name of project where folders are. + folder_ids (Iterable[str]): Folder ids to filter. 
+ folder_paths (Iterable[str]): Folder paths used for filtering. + folder_names (Iterable[str]): Folder names used for filtering. + parent_ids (Iterable[str]): Ids of folder parents. Use 'None' + if folder is direct child of project. + active (Union[bool, None]): Filter active/inactive folders. Both + are returned if is set to None. + fields (Union[Iterable(str), None]): Fields to be queried + for folder. All possible folder fields are returned if 'None' + is passed. + + Returns: + List[Dict[str, Any]]: Queried folder entities. + """ + + if not project_name: + return [] + + filters = { + "projectName": project_name + } + if folder_ids is not None: + folder_ids = set(folder_ids) + if not folder_ids: + return [] + filters["folderIds"] = list(folder_ids) + + if folder_paths is not None: + folder_paths = set(folder_paths) + if not folder_paths: + return [] + filters["folderPaths"] = list(folder_paths) + + if folder_names is not None: + folder_names = set(folder_names) + if not folder_names: + return [] + filters["folderNames"] = list(folder_names) + + if parent_ids is not None: + parent_ids = set(parent_ids) + if not parent_ids: + return [] + if None in parent_ids: + # Replace 'None' with '"root"' which is used during GraphQl + # query for parent ids filter for folders without folder + # parent + parent_ids.remove(None) + parent_ids.add("root") + + if project_name in parent_ids: + # Replace project name with '"root"' which is used during + # GraphQl query for parent ids filter for folders without + # folder parent + parent_ids.remove(project_name) + parent_ids.add("root") + + filters["parentFolderIds"] = list(parent_ids) + + if fields: + fields = set(fields) + else: + fields = con.get_default_fields_for_type("folder") + fields |= DEFAULT_FOLDER_FIELDS + + if active is not None: + fields.add("active") + + query = folders_tasks_graphql_query(fields) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + parsed_data = query.query(con) + folders = parsed_data["project"]["folders"] + if active is None: + return folders + return [ + folder + for folder in folders + if folder["active"] is active + ] diff --git a/openpype/client/server/operations.py b/openpype/client/server/operations.py new file mode 100644 index 0000000000..eeb55784e1 --- /dev/null +++ b/openpype/client/server/operations.py @@ -0,0 +1,881 @@ +import copy +import json +import collections +import uuid +import datetime + +from bson.objectid import ObjectId +from ayon_api import get_server_api_connection + +from openpype.client.operations_base import ( + REMOVED_VALUE, + CreateOperation, + UpdateOperation, + DeleteOperation, + BaseOperationsSession +) + +from openpype.client.mongo.operations import ( + CURRENT_THUMBNAIL_SCHEMA, + CURRENT_REPRESENTATION_SCHEMA, + CURRENT_HERO_VERSION_SCHEMA, + CURRENT_VERSION_SCHEMA, + CURRENT_SUBSET_SCHEMA, + CURRENT_ASSET_DOC_SCHEMA, + CURRENT_PROJECT_SCHEMA, +) + +from .conversion_utils import ( + convert_create_asset_to_v4, + convert_create_task_to_v4, + convert_create_subset_to_v4, + convert_create_version_to_v4, + convert_create_hero_version_to_v4, + convert_create_representation_to_v4, + convert_create_workfile_info_to_v4, + + convert_update_folder_to_v4, + convert_update_subset_to_v4, + convert_update_version_to_v4, + convert_update_hero_version_to_v4, + convert_update_representation_to_v4, + convert_update_workfile_info_to_v4, +) +from .utils import create_entity_id + + +def _create_or_convert_to_id(entity_id=None): + if entity_id is None: + return 
create_entity_id() + + if isinstance(entity_id, ObjectId): + raise TypeError("Type of 'ObjectId' is not supported anymore.") + + # Validate if can be converted to uuid + uuid.UUID(entity_id) + return entity_id + + +def new_project_document( + project_name, project_code, config, data=None, entity_id=None +): + """Create skeleton data of project document. + + Args: + project_name (str): Name of project. Used as identifier of a project. + project_code (str): Shorter version of projet without spaces and + special characters (in most of cases). Should be also considered + as unique name across projects. + config (Dic[str, Any]): Project config consist of roots, templates, + applications and other project Anatomy related data. + data (Dict[str, Any]): Project data with information about it's + attributes (e.g. 'fps' etc.) or integration specific keys. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of project document. + """ + + if data is None: + data = {} + + data["code"] = project_code + + return { + "_id": _create_or_convert_to_id(entity_id), + "name": project_name, + "type": CURRENT_PROJECT_SCHEMA, + "entity_data": data, + "config": config + } + + +def new_asset_document( + name, project_id, parent_id, parents, data=None, entity_id=None +): + """Create skeleton data of asset document. + + Args: + name (str): Is considered as unique identifier of asset in project. + project_id (Union[str, ObjectId]): Id of project doument. + parent_id (Union[str, ObjectId]): Id of parent asset. + parents (List[str]): List of parent assets names. + data (Dict[str, Any]): Asset document data. Empty dictionary is used + if not passed. Value of 'parent_id' is used to fill 'visualParent'. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of asset document. + """ + + if data is None: + data = {} + if parent_id is not None: + parent_id = _create_or_convert_to_id(parent_id) + data["visualParent"] = parent_id + data["parents"] = parents + + return { + "_id": _create_or_convert_to_id(entity_id), + "type": "asset", + "name": name, + # This will be ignored + "parent": project_id, + "data": data, + "schema": CURRENT_ASSET_DOC_SCHEMA + } + + +def new_subset_document(name, family, asset_id, data=None, entity_id=None): + """Create skeleton data of subset document. + + Args: + name (str): Is considered as unique identifier of subset under asset. + family (str): Subset's family. + asset_id (Union[str, ObjectId]): Id of parent asset. + data (Dict[str, Any]): Subset document data. Empty dictionary is used + if not passed. Value of 'family' is used to fill 'family'. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of subset document. + """ + + if data is None: + data = {} + data["family"] = family + return { + "_id": _create_or_convert_to_id(entity_id), + "schema": CURRENT_SUBSET_SCHEMA, + "type": "subset", + "name": name, + "data": data, + "parent": _create_or_convert_to_id(asset_id) + } + + +def new_version_doc(version, subset_id, data=None, entity_id=None): + """Create skeleton data of version document. + + Args: + version (int): Is considered as unique identifier of version + under subset. + subset_id (Union[str, ObjectId]): Id of parent subset. + data (Dict[str, Any]): Version document data. + entity_id (Union[str, ObjectId]): Predefined id of document. 
New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of version document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_id(entity_id), + "schema": CURRENT_VERSION_SCHEMA, + "type": "version", + "name": int(version), + "parent": _create_or_convert_to_id(subset_id), + "data": data + } + + +def new_hero_version_doc(subset_id, data, version=None, entity_id=None): + """Create skeleton data of hero version document. + + Args: + subset_id (Union[str, ObjectId]): Id of parent subset. + data (Dict[str, Any]): Version document data. + version (int): Version of source version. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of version document. + """ + + if version is None: + version = -1 + elif version > 0: + version = -version + + return { + "_id": _create_or_convert_to_id(entity_id), + "schema": CURRENT_HERO_VERSION_SCHEMA, + "type": "hero_version", + "version": version, + "parent": _create_or_convert_to_id(subset_id), + "data": data + } + + +def new_representation_doc( + name, version_id, context, data=None, entity_id=None +): + """Create skeleton data of representation document. + + Args: + name (str): Representation name considered as unique identifier + of representation under version. + version_id (Union[str, ObjectId]): Id of parent version. + context (Dict[str, Any]): Representation context used for fill template + of to query. + data (Dict[str, Any]): Representation document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of version document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_id(entity_id), + "schema": CURRENT_REPRESENTATION_SCHEMA, + "type": "representation", + "parent": _create_or_convert_to_id(version_id), + "name": name, + "data": data, + + # Imprint shortcut to context for performance reasons. + "context": context + } + + +def new_thumbnail_doc(data=None, entity_id=None): + """Create skeleton data of thumbnail document. + + Args: + data (Dict[str, Any]): Thumbnail document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of thumbnail document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_id(entity_id), + "type": "thumbnail", + "schema": CURRENT_THUMBNAIL_SCHEMA, + "data": data + } + + +def new_workfile_info_doc( + filename, asset_id, task_name, files, data=None, entity_id=None +): + """Create skeleton data of workfile info document. + + Workfile document is at this moment used primarily for artist notes. + + Args: + filename (str): Filename of workfile. + asset_id (Union[str, ObjectId]): Id of asset under which workfile live. + task_name (str): Task under which was workfile created. + files (List[str]): List of rootless filepaths related to workfile. + data (Dict[str, Any]): Additional metadata. + + Returns: + Dict[str, Any]: Skeleton of workfile info document. 
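+
+    Example:
+        Illustrative call only; the asset id and the rootless path are
+        made up.
+
+        >>> new_workfile_info_doc(
+        ...     "sh010_layout_v001.ma",
+        ...     asset_id,
+        ...     "layout",
+        ...     ["{root[work]}/project/sh010/work/sh010_layout_v001.ma"]
+        ... )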
+ """ + + if not data: + data = {} + + return { + "_id": _create_or_convert_to_id(entity_id), + "type": "workfile", + "parent": _create_or_convert_to_id(asset_id), + "task_name": task_name, + "filename": filename, + "data": data, + "files": files + } + + +def _prepare_update_data(old_doc, new_doc, replace): + changes = {} + for key, value in new_doc.items(): + if key not in old_doc or value != old_doc[key]: + changes[key] = value + + if replace: + for key in old_doc.keys(): + if key not in new_doc: + changes[key] = REMOVED_VALUE + return changes + + +def prepare_subset_update_data(old_doc, new_doc, replace=True): + """Compare two subset documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_version_update_data(old_doc, new_doc, replace=True): + """Compare two version documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +def prepare_hero_version_update_data(old_doc, new_doc, replace=True): + """Compare two hero version documents and prepare update data. + + Based on compared values will create update data for 'UpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + changes = _prepare_update_data(old_doc, new_doc, replace) + changes.pop("version_id", None) + return changes + + +def prepare_representation_update_data(old_doc, new_doc, replace=True): + """Compare two representation documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + changes = _prepare_update_data(old_doc, new_doc, replace) + context = changes.get("data", {}).get("context") + # Make sure that both 'family' and 'subset' are in changes if + # one of them changed (they'll both become 'product'). + if ( + context + and ("family" in context or "subset" in context) + ): + context["family"] = new_doc["data"]["context"]["family"] + context["subset"] = new_doc["data"]["context"]["subset"] + + return changes + + +def prepare_workfile_info_update_data(old_doc, new_doc, replace=True): + """Compare two workfile info documents and prepare update data. + + Based on compared values will create update data for + 'MongoUpdateOperation'. + + Empty output means that documents are identical. + + Returns: + Dict[str, Any]: Changes between old and new document. + """ + + return _prepare_update_data(old_doc, new_doc, replace) + + +class FailedOperations(Exception): + pass + + +def entity_data_json_default(value): + if isinstance(value, datetime.datetime): + return int(value.timestamp()) + + raise TypeError( + "Object of type {} is not JSON serializable".format(str(type(value))) + ) + + +def failed_json_default(value): + return "< Failed value {} > {}".format(type(value), str(value)) + + +class ServerCreateOperation(CreateOperation): + """Opeartion to create an entity. + + Args: + project_name (str): On which project operation will happen. 
+ entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + data (Dict[str, Any]): Data of entity that will be created. + """ + + def __init__(self, project_name, entity_type, data, session): + self._session = session + + if not data: + data = {} + data = copy.deepcopy(data) + if entity_type == "project": + raise ValueError("Project cannot be created using operations") + + tasks = None + if entity_type in "asset": + # TODO handle tasks + entity_type = "folder" + if "data" in data: + tasks = data["data"].get("tasks") + + project = self._session.get_project(project_name) + new_data = convert_create_asset_to_v4(data, project, self.con) + + elif entity_type == "task": + project = self._session.get_project(project_name) + new_data = convert_create_task_to_v4(data, project, self.con) + + elif entity_type == "subset": + new_data = convert_create_subset_to_v4(data, self.con) + entity_type = "product" + + elif entity_type == "version": + new_data = convert_create_version_to_v4(data, self.con) + + elif entity_type == "hero_version": + new_data = convert_create_hero_version_to_v4( + data, project_name, self.con + ) + entity_type = "version" + + elif entity_type in ("representation", "archived_representation"): + new_data = convert_create_representation_to_v4(data, self.con) + entity_type = "representation" + + elif entity_type == "workfile": + new_data = convert_create_workfile_info_to_v4( + data, project_name, self.con + ) + + else: + raise ValueError( + "Unhandled entity type \"{}\"".format(entity_type) + ) + + # Simple check if data can be dumped into json + # - should raise error on 'ObjectId' object + try: + new_data = json.loads( + json.dumps(new_data, default=entity_data_json_default) + ) + + except: + raise ValueError("Couldn't json parse body: {}".format( + json.dumps(new_data, default=failed_json_default) + )) + + super(ServerCreateOperation, self).__init__( + project_name, entity_type, new_data + ) + + if "id" not in self._data: + self._data["id"] = create_entity_id() + + if tasks: + copied_tasks = copy.deepcopy(tasks) + for task_name, task in copied_tasks.items(): + task["name"] = task_name + task["folderId"] = self._data["id"] + self.session.create_entity( + project_name, "task", task, nested_id=self.id + ) + + @property + def con(self): + return self.session.con + + @property + def session(self): + return self._session + + @property + def entity_id(self): + return self._data["id"] + + def to_server_operation(self): + return { + "id": self.id, + "type": "create", + "entityType": self.entity_type, + "entityId": self.entity_id, + "data": self._data + } + + +class ServerUpdateOperation(UpdateOperation): + """Operation to update an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + entity_id (Union[str, ObjectId]): Identifier of an entity. + update_data (Dict[str, Any]): Key -> value changes that will be set in + database. If value is set to 'REMOVED_VALUE' the key will be + removed. Only first level of dictionary is checked (on purpose). 
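+
+    Example:
+        Illustrative values only; 'version_id' and 'session' are assumed
+        to exist in the calling code and 'comment' to be a known version
+        attribute.
+
+        >>> op = ServerUpdateOperation(
+        ...     "my_project",
+        ...     "version",
+        ...     version_id,
+        ...     {"data.comment": "Approved"},
+        ...     session,
+        ... )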
+ """ + + def __init__( + self, project_name, entity_type, entity_id, update_data, session + ): + self._session = session + + update_data = copy.deepcopy(update_data) + if entity_type == "project": + raise ValueError("Project cannot be created using operations") + + if entity_type in ("asset", "archived_asset"): + new_update_data = convert_update_folder_to_v4( + project_name, entity_id, update_data, self.con + ) + entity_type = "folder" + + elif entity_type == "subset": + new_update_data = convert_update_subset_to_v4( + project_name, entity_id, update_data, self.con + ) + entity_type = "product" + + elif entity_type == "version": + new_update_data = convert_update_version_to_v4( + project_name, entity_id, update_data, self.con + ) + + elif entity_type == "hero_version": + new_update_data = convert_update_hero_version_to_v4( + project_name, entity_id, update_data, self.con + ) + entity_type = "version" + + elif entity_type in ("representation", "archived_representation"): + new_update_data = convert_update_representation_to_v4( + project_name, entity_id, update_data, self.con + ) + entity_type = "representation" + + elif entity_type == "workfile": + new_update_data = convert_update_workfile_info_to_v4( + project_name, entity_id, update_data, self.con + ) + + else: + raise ValueError( + "Unhandled entity type \"{}\"".format(entity_type) + ) + + try: + new_update_data = json.loads( + json.dumps(new_update_data, default=entity_data_json_default) + ) + + except: + raise ValueError("Couldn't json parse body: {}".format( + json.dumps(new_update_data, default=failed_json_default) + )) + + super(ServerUpdateOperation, self).__init__( + project_name, entity_type, entity_id, new_update_data + ) + + @property + def con(self): + return self.session.con + + @property + def session(self): + return self._session + + def to_server_operation(self): + if not self._update_data: + return None + + update_data = {} + for key, value in self._update_data.items(): + if value is REMOVED_VALUE: + value = None + update_data[key] = value + + return { + "id": self.id, + "type": "update", + "entityType": self.entity_type, + "entityId": self.entity_id, + "data": update_data + } + + +class ServerDeleteOperation(DeleteOperation): + """Opeartion to delete an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'asset', 'representation' etc. + entity_id (Union[str, ObjectId]): Entity id that will be removed. 
+    """
+
+    def __init__(self, project_name, entity_type, entity_id, session):
+        self._session = session
+
+        if entity_type == "asset":
+            entity_type = "folder"
+
+        elif entity_type == "hero_version":
+            entity_type = "version"
+
+        elif entity_type == "subset":
+            entity_type = "product"
+
+        super(ServerDeleteOperation, self).__init__(
+            project_name, entity_type, entity_id
+        )
+
+    @property
+    def con(self):
+        return self.session.con
+
+    @property
+    def session(self):
+        return self._session
+
+    def to_server_operation(self):
+        return {
+            "id": self.id,
+            "type": self.operation_name,
+            "entityId": self.entity_id,
+            "entityType": self.entity_type,
+        }
+
+
+class OperationsSession(BaseOperationsSession):
+    def __init__(self, con=None, *args, **kwargs):
+        super(OperationsSession, self).__init__(*args, **kwargs)
+        if con is None:
+            con = get_server_api_connection()
+        self._con = con
+        self._project_cache = {}
+        self._nested_operations = collections.defaultdict(list)
+
+    @property
+    def con(self):
+        return self._con
+
+    def get_project(self, project_name):
+        if project_name not in self._project_cache:
+            self._project_cache[project_name] = self.con.get_project(
+                project_name)
+        return copy.deepcopy(self._project_cache[project_name])
+
+    def commit(self):
+        """Commit session operations."""
+
+        operations, self._operations = self._operations, []
+        if not operations:
+            return
+
+        operations_by_project = collections.defaultdict(list)
+        for operation in operations:
+            operations_by_project[operation.project_name].append(operation)
+
+        body_by_id = {}
+        results = []
+        for project_name, operations in operations_by_project.items():
+            operations_body = []
+            for operation in operations:
+                body = operation.to_server_operation()
+                if body is not None:
+                    try:
+                        json.dumps(body)
+                    except Exception:
+                        raise ValueError("Couldn't json parse body: {}".format(
+                            json.dumps(
+                                body, indent=4, default=failed_json_default
+                            )
+                        ))
+
+                    body_by_id[operation.id] = body
+                    operations_body.append(body)
+
+            if operations_body:
+                result = self._con.post(
+                    "projects/{}/operations".format(project_name),
+                    operations=operations_body,
+                    canFail=False
+                )
+                results.append(result.data)
+
+        for result in results:
+            if result.get("success"):
+                continue
+
+            if "operations" not in result:
+                raise FailedOperations(
+                    "Operation failed. Content: {}".format(str(result))
+                )
+
+            for op_result in result["operations"]:
+                if not op_result["success"]:
+                    operation_id = op_result["id"]
+                    raise FailedOperations((
+                        "Operation \"{}\" failed with data:\n{}\nError: {}."
+                    ).format(
+                        operation_id,
+                        json.dumps(body_by_id[operation_id], indent=4),
+                        op_result.get("error", "unknown"),
+                    ))
+
+    def create_entity(self, project_name, entity_type, data, nested_id=None):
+        """Fast access to 'ServerCreateOperation'.
+
+        Args:
+            project_name (str): On which project the creation happens.
+            entity_type (str): Which entity type will be created.
+            data (Dict[str, Any]): Entity data.
+            nested_id (str): Id of the operation that triggered this one.
+                Operations can trigger suboperations, but those must be
+                added to the operations list after their parent.
+
+        Returns:
+            ServerCreateOperation: Object of create operation. 
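+
+        Example:
+            Illustrative values only; the parent folder id is made up.
+
+            >>> session = OperationsSession()
+            >>> session.create_entity(
+            ...     "my_project",
+            ...     "asset",
+            ...     {"name": "sh010", "data": {"visualParent": parent_id}}
+            ... )
+            >>> session.commit()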
+ """ + + operation = ServerCreateOperation( + project_name, entity_type, data, self + ) + + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + + return operation + + def update_entity( + self, project_name, entity_type, entity_id, update_data, nested_id=None + ): + """Fast access to 'ServerUpdateOperation'. + + Returns: + ServerUpdateOperation: Object of update operation. + """ + + operation = ServerUpdateOperation( + project_name, entity_type, entity_id, update_data, self + ) + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + return operation + + def delete_entity( + self, project_name, entity_type, entity_id, nested_id=None + ): + """Fast access to 'ServerDeleteOperation'. + + Returns: + ServerDeleteOperation: Object of delete operation. + """ + + operation = ServerDeleteOperation( + project_name, entity_type, entity_id, self + ) + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + return operation + + +def create_project( + project_name, + project_code, + library_project=False, + preset_name=None, + con=None +): + """Create project using OpenPype settings. + + This project creation function is not validating project document on + creation. It is because project document is created blindly with only + minimum required information about project which is it's name, code, type + and schema. + + Entered project name must be unique and project must not exist yet. + + Note: + This function is here to be OP v4 ready but in v3 has more logic + to do. That's why inner imports are in the body. + + Args: + project_name (str): New project name. Should be unique. + project_code (str): Project's code should be unique too. + library_project (bool): Project is library project. + preset_name (str): Name of anatomy preset. Default is used if not + passed. + con (ServerAPI): Connection to server with logged user. + + Raises: + ValueError: When project name already exists in MongoDB. + + Returns: + dict: Created project document. + """ + + if con is None: + con = get_server_api_connection() + + return con.create_project( + project_name, + project_code, + library_project, + preset_name + ) + + +def delete_project(project_name, con=None): + if con is None: + con = get_server_api_connection() + + return con.delete_project(project_name) + + +def create_thumbnail(project_name, src_filepath, thumbnail_id=None, con=None): + if con is None: + con = get_server_api_connection() + return con.create_thumbnail(project_name, src_filepath, thumbnail_id) diff --git a/openpype/client/server/thumbnails.py b/openpype/client/server/thumbnails.py new file mode 100644 index 0000000000..dc649b9651 --- /dev/null +++ b/openpype/client/server/thumbnails.py @@ -0,0 +1,229 @@ +"""Cache of thumbnails downloaded from AYON server. + +Thumbnails are cached to appdirs to predefined directory. + +This should be moved to thumbnails logic in pipeline but because it would +overflow OpenPype logic it's here for now. 
+""" + +import os +import time +import collections + +import appdirs + +FileInfo = collections.namedtuple( + "FileInfo", + ("path", "size", "modification_time") +) + + +class AYONThumbnailCache: + """Cache of thumbnails on local storage. + + Thumbnails are cached to appdirs to predefined directory. Each project has + own subfolder with thumbnails -> that's because each project has own + thumbnail id validation and file names are thumbnail ids with matching + extension. Extensions are predefined (.png and .jpeg). + + Cache has cleanup mechanism which is triggered on initialized by default. + + The cleanup has 2 levels: + 1. soft cleanup which remove all files that are older then 'days_alive' + 2. max size cleanup which remove all files until the thumbnails folder + contains less then 'max_filesize' + - this is time consuming so it's not triggered automatically + + Args: + cleanup (bool): Trigger soft cleanup (Cleanup expired thumbnails). + """ + + # Lifetime of thumbnails (in seconds) + # - default 3 days + days_alive = 3 + # Max size of thumbnail directory (in bytes) + # - default 2 Gb + max_filesize = 2 * 1024 * 1024 * 1024 + + def __init__(self, cleanup=True): + self._thumbnails_dir = None + self._days_alive_secs = self.days_alive * 24 * 60 * 60 + if cleanup: + self.cleanup() + + def get_thumbnails_dir(self): + """Root directory where thumbnails are stored. + + Returns: + str: Path to thumbnails root. + """ + + if self._thumbnails_dir is None: + # TODO use generic function + directory = appdirs.user_data_dir("AYON", "Ynput") + self._thumbnails_dir = os.path.join(directory, "thumbnails") + return self._thumbnails_dir + + thumbnails_dir = property(get_thumbnails_dir) + + def get_thumbnails_dir_file_info(self): + """Get information about all files in thumbnails directory. + + Returns: + List[FileInfo]: List of file information about all files. + """ + + thumbnails_dir = self.thumbnails_dir + files_info = [] + if not os.path.exists(thumbnails_dir): + return files_info + + for root, _, filenames in os.walk(thumbnails_dir): + for filename in filenames: + path = os.path.join(root, filename) + files_info.append(FileInfo( + path, os.path.getsize(path), os.path.getmtime(path) + )) + return files_info + + def get_thumbnails_dir_size(self, files_info=None): + """Got full size of thumbnail directory. + + Args: + files_info (List[FileInfo]): Prepared file information about + files in thumbnail directory. + + Returns: + int: File size of all files in thumbnail directory. + """ + + if files_info is None: + files_info = self.get_thumbnails_dir_file_info() + + if not files_info: + return 0 + + return sum( + file_info.size + for file_info in files_info + ) + + def cleanup(self, check_max_size=False): + """Cleanup thumbnails directory. + + Args: + check_max_size (bool): Also cleanup files to match max size of + thumbnails directory. 
+ """ + + thumbnails_dir = self.get_thumbnails_dir() + # Skip if thumbnails dir does not exists yet + if not os.path.exists(thumbnails_dir): + return + + self._soft_cleanup(thumbnails_dir) + if check_max_size: + self._max_size_cleanup(thumbnails_dir) + + def _soft_cleanup(self, thumbnails_dir): + current_time = time.time() + for root, _, filenames in os.walk(thumbnails_dir): + for filename in filenames: + path = os.path.join(root, filename) + modification_time = os.path.getmtime(path) + if current_time - modification_time > self._days_alive_secs: + os.remove(path) + + def _max_size_cleanup(self, thumbnails_dir): + files_info = self.get_thumbnails_dir_file_info() + size = self.get_thumbnails_dir_size(files_info) + if size < self.max_filesize: + return + + sorted_file_info = collections.deque( + sorted(files_info, key=lambda item: item.modification_time) + ) + diff = size - self.max_filesize + while diff > 0: + if not sorted_file_info: + break + + file_info = sorted_file_info.popleft() + diff -= file_info.size + os.remove(file_info.path) + + def get_thumbnail_filepath(self, project_name, thumbnail_id): + """Get thumbnail by thumbnail id. + + Args: + project_name (str): Name of project. + thumbnail_id (str): Thumbnail id. + + Returns: + Union[str, None]: Path to thumbnail image or None if thumbnail + is not cached yet. + """ + + if not thumbnail_id: + return None + + for ext in ( + ".png", + ".jpeg", + ): + filepath = os.path.join( + self.thumbnails_dir, project_name, thumbnail_id + ext + ) + if os.path.exists(filepath): + return filepath + return None + + def get_project_dir(self, project_name): + """Path to root directory for specific project. + + Args: + project_name (str): Name of project for which root directory path + should be returned. + + Returns: + str: Path to root of project's thumbnails. + """ + + return os.path.join(self.thumbnails_dir, project_name) + + def make_sure_project_dir_exists(self, project_name): + project_dir = self.get_project_dir(project_name) + if not os.path.exists(project_dir): + os.makedirs(project_dir) + return project_dir + + def store_thumbnail(self, project_name, thumbnail_id, content, mime_type): + """Store thumbnail to cache folder. + + Args: + project_name (str): Project where the thumbnail belong to. + thumbnail_id (str): Id of thumbnail. + content (bytes): Byte content of thumbnail file. + mime_data (str): Type of content. + + Returns: + str: Path to cached thumbnail image file. + """ + + if mime_type == "image/png": + ext = ".png" + elif mime_type == "image/jpeg": + ext = ".jpeg" + else: + raise ValueError( + "Unknown mime type for thumbnail \"{}\"".format(mime_type)) + + project_dir = self.make_sure_project_dir_exists(project_name) + thumbnail_path = os.path.join(project_dir, thumbnail_id + ext) + with open(thumbnail_path, "wb") as stream: + stream.write(content) + + current_time = time.time() + os.utime(thumbnail_path, (current_time, current_time)) + + return thumbnail_path diff --git a/openpype/client/server/utils.py b/openpype/client/server/utils.py new file mode 100644 index 0000000000..ed128cfad9 --- /dev/null +++ b/openpype/client/server/utils.py @@ -0,0 +1,109 @@ +import uuid + +from openpype.client.operations_base import REMOVED_VALUE + + +def create_entity_id(): + return uuid.uuid1().hex + + +def prepare_attribute_changes(old_entity, new_entity, replace=False): + """Prepare changes of attributes on entities. + + Compare 'attrib' of old and new entity data to prepare only changed + values that should be sent to server for update. 
+
+    Example:
+        >>> # Limited entity data to 'attrib'
+        >>> old_entity = {
+        ...     "attrib": {"attr_1": 1, "attr_2": "MyString", "attr_3": True}
+        ... }
+        >>> new_entity = {
+        ...     "attrib": {"attr_1": 2, "attr_3": True, "attr_4": 3}
+        ... }
+        >>> # Changes if replacement should not happen
+        >>> expected_changes = {
+        ...     "attr_1": 2,
+        ...     "attr_4": 3
+        ... }
+        >>> changes = prepare_attribute_changes(old_entity, new_entity)
+        >>> changes == expected_changes
+        True
+
+        >>> # Changes if replacement should happen
+        >>> expected_changes_replace = {
+        ...     "attr_1": 2,
+        ...     "attr_2": REMOVED_VALUE,
+        ...     "attr_4": 3
+        ... }
+        >>> changes_replace = prepare_attribute_changes(
+        ...     old_entity, new_entity, True)
+        >>> changes_replace == expected_changes_replace
+        True
+
+    Args:
+        old_entity (dict[str, Any]): Data of entity queried from server.
+        new_entity (dict[str, Any]): Entity data with applied changes.
+        replace (bool): New entity should fully replace all old entity values.
+
+    Returns:
+        Dict[str, Any]: Values from new entity only if value has changed.
+    """
+
+    attrib_changes = {}
+    new_attrib = new_entity.get("attrib")
+    old_attrib = old_entity.get("attrib")
+    if new_attrib is None:
+        if not replace:
+            return attrib_changes
+        new_attrib = {}
+
+    if old_attrib is None:
+        return new_attrib
+
+    for attr, new_attr_value in new_attrib.items():
+        old_attr_value = old_attrib.get(attr)
+        if old_attr_value != new_attr_value:
+            attrib_changes[attr] = new_attr_value
+
+    if replace:
+        for attr in old_attrib:
+            if attr not in new_attrib:
+                attrib_changes[attr] = REMOVED_VALUE
+
+    return attrib_changes
+
+
+def prepare_entity_changes(old_entity, new_entity, replace=False):
+    """Prepare changes of AYON entities.
+
+    Compare old and new entity to filter values from new data that changed.
+
+    Args:
+        old_entity (dict[str, Any]): Data of entity queried from server.
+        new_entity (dict[str, Any]): Entity data with applied changes.
+        replace (bool): All attributes should be replaced by new values. So
+            all attribute values that are not on new entity will be removed.
+
+    Returns:
+        Dict[str, Any]: Only values from new entity that changed.
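+
+    Example:
+        >>> # Values are illustrative only.
+        >>> old_entity = {"name": "chars", "attrib": {"fps": 25}}
+        >>> new_entity = {"name": "chars", "attrib": {"fps": 24}}
+        >>> changes = prepare_entity_changes(old_entity, new_entity)
+        >>> changes == {"attrib": {"fps": 24}}
+        True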
+ """ + + changes = {} + for key, new_value in new_entity.items(): + if key == "attrib": + continue + + old_value = old_entity.get(key) + if old_value != new_value: + changes[key] = new_value + + if replace: + for key in old_entity: + if key not in new_entity: + changes[key] = REMOVED_VALUE + + attr_changes = prepare_attribute_changes(old_entity, new_entity, replace) + if attr_changes: + changes["attrib"] = attr_changes + return changes diff --git a/openpype/hooks/pre_add_last_workfile_arg.py b/openpype/hooks/pre_add_last_workfile_arg.py index c54acbc203..1418bc210b 100644 --- a/openpype/hooks/pre_add_last_workfile_arg.py +++ b/openpype/hooks/pre_add_last_workfile_arg.py @@ -1,6 +1,6 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class AddLastWorkfileToLaunchArgs(PreLaunchHook): @@ -13,8 +13,8 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): # Execute after workfile template copy order = 10 - app_groups = [ - "3dsmax", + app_groups = { + "3dsmax", "adsk_3dsmax", "maya", "nuke", "nukex", @@ -26,8 +26,9 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): "photoshop", "tvpaint", "substancepainter", - "aftereffects" - ] + "aftereffects", + } + launch_types = {LaunchTypes.local} def execute(self): if not self.data.get("start_last_workfile"): diff --git a/openpype/hooks/pre_copy_template_workfile.py b/openpype/hooks/pre_copy_template_workfile.py index 70c549919f..2203ff4396 100644 --- a/openpype/hooks/pre_copy_template_workfile.py +++ b/openpype/hooks/pre_copy_template_workfile.py @@ -1,7 +1,7 @@ import os import shutil -from openpype.lib import PreLaunchHook from openpype.settings import get_project_settings +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.pipeline.workfile import ( get_custom_workfile_template, get_custom_workfile_template_by_string_context @@ -19,7 +19,8 @@ class CopyTemplateWorkfile(PreLaunchHook): # Before `AddLastWorkfileToLaunchArgs` order = 0 - app_groups = ["blender", "photoshop", "tvpaint", "aftereffects"] + app_groups = {"blender", "photoshop", "tvpaint", "aftereffects"} + launch_types = {LaunchTypes.local} def execute(self): """Check if can copy template for context and do it if possible. 
diff --git a/openpype/hooks/pre_create_extra_workdir_folders.py b/openpype/hooks/pre_create_extra_workdir_folders.py index 8856281120..4c9d08b375 100644 --- a/openpype/hooks/pre_create_extra_workdir_folders.py +++ b/openpype/hooks/pre_create_extra_workdir_folders.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.pipeline.workfile import create_workdir_extra_folders @@ -14,6 +14,7 @@ class CreateWorkdirExtraFolders(PreLaunchHook): # Execute after workfile template copy order = 15 + launch_types = {LaunchTypes.local} def execute(self): if not self.application.is_host: diff --git a/openpype/hooks/pre_foundry_apps.py b/openpype/hooks/pre_foundry_apps.py index 21ec8e7881..7536df4c16 100644 --- a/openpype/hooks/pre_foundry_apps.py +++ b/openpype/hooks/pre_foundry_apps.py @@ -1,5 +1,5 @@ import subprocess -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class LaunchFoundryAppsWindows(PreLaunchHook): @@ -13,8 +13,9 @@ class LaunchFoundryAppsWindows(PreLaunchHook): # Should be as last hook because must change launch arguments to string order = 1000 - app_groups = ["nuke", "nukeassist", "nukex", "hiero", "nukestudio"] - platforms = ["windows"] + app_groups = {"nuke", "nukeassist", "nukex", "hiero", "nukestudio"} + platforms = {"windows"} + launch_types = {LaunchTypes.local} def execute(self): # Change `creationflags` to CREATE_NEW_CONSOLE diff --git a/openpype/hooks/pre_global_host_data.py b/openpype/hooks/pre_global_host_data.py index 8a178915fb..813df24af0 100644 --- a/openpype/hooks/pre_global_host_data.py +++ b/openpype/hooks/pre_global_host_data.py @@ -1,15 +1,16 @@ from openpype.client import get_project, get_asset_by_name -from openpype.lib import ( +from openpype.lib.applications import ( PreLaunchHook, EnvironmentPrepData, prepare_app_environments, prepare_context_environments ) -from openpype.pipeline import AvalonMongoDB, Anatomy +from openpype.pipeline import Anatomy class GlobalHostDataHook(PreLaunchHook): order = -100 + launch_types = set() def execute(self): """Prepare global objects to `data` that will be used for sure.""" @@ -26,7 +27,6 @@ class GlobalHostDataHook(PreLaunchHook): "app": app, - "dbcon": self.data["dbcon"], "project_doc": self.data["project_doc"], "asset_doc": self.data["asset_doc"], @@ -62,13 +62,6 @@ class GlobalHostDataHook(PreLaunchHook): # Anatomy self.data["anatomy"] = Anatomy(project_name) - # Mongo connection - dbcon = AvalonMongoDB() - dbcon.Session["AVALON_PROJECT"] = project_name - dbcon.install() - - self.data["dbcon"] = dbcon - # Project document project_doc = get_project(project_name) self.data["project_doc"] = project_doc diff --git a/openpype/hooks/pre_mac_launch.py b/openpype/hooks/pre_mac_launch.py index f85557a4f0..402e9a5517 100644 --- a/openpype/hooks/pre_mac_launch.py +++ b/openpype/hooks/pre_mac_launch.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class LaunchWithTerminal(PreLaunchHook): @@ -12,7 +12,8 @@ class LaunchWithTerminal(PreLaunchHook): """ order = 1000 - platforms = ["darwin"] + platforms = {"darwin"} + launch_types = {LaunchTypes.local} def execute(self): executable = str(self.launch_context.executable) diff --git a/openpype/hooks/pre_non_python_host_launch.py b/openpype/hooks/pre_non_python_host_launch.py index 043cb3c7f6..d9e912c826 100644 --- a/openpype/hooks/pre_non_python_host_launch.py 
+++ b/openpype/hooks/pre_non_python_host_launch.py @@ -1,10 +1,11 @@ import os -from openpype.lib import ( +from openpype.lib import get_openpype_execute_args +from openpype.lib.applications import ( + get_non_python_host_kwargs, PreLaunchHook, - get_openpype_execute_args + LaunchTypes, ) -from openpype.lib.applications import get_non_python_host_kwargs from openpype import PACKAGE_DIR as OPENPYPE_DIR @@ -16,9 +17,10 @@ class NonPythonHostHook(PreLaunchHook): python script which launch the host. For these cases it is necessary to prepend python (or openpype) executable and script path before application's. """ - app_groups = ["harmony", "photoshop", "aftereffects"] + app_groups = {"harmony", "photoshop", "aftereffects"} order = 20 + launch_types = {LaunchTypes.local} def execute(self): # Pop executable @@ -54,4 +56,3 @@ class NonPythonHostHook(PreLaunchHook): self.launch_context.kwargs = \ get_non_python_host_kwargs(self.launch_context.kwargs) - diff --git a/openpype/hooks/pre_ocio_hook.py b/openpype/hooks/pre_ocio_hook.py index 8f462665bc..1307ed9f76 100644 --- a/openpype/hooks/pre_ocio_hook.py +++ b/openpype/hooks/pre_ocio_hook.py @@ -1,8 +1,6 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook -from openpype.pipeline.colorspace import ( - get_imageio_config -) +from openpype.pipeline.colorspace import get_imageio_config from openpype.pipeline.template_data import get_template_data_with_names @@ -10,7 +8,7 @@ class OCIOEnvHook(PreLaunchHook): """Set OCIO environment variable for hosts that use OpenColorIO.""" order = 0 - hosts = [ + hosts = { "substancepainter", "fusion", "blender", @@ -20,8 +18,9 @@ class OCIOEnvHook(PreLaunchHook): "maya", "nuke", "hiero", - "resolve" - ] + "resolve", + } + launch_types = set() def execute(self): """Hook entry method.""" @@ -39,7 +38,8 @@ class OCIOEnvHook(PreLaunchHook): host_name=self.host_name, project_settings=self.data["project_settings"], anatomy_data=template_data, - anatomy=self.data["anatomy"] + anatomy=self.data["anatomy"], + env=self.launch_context.env, ) if config_data: diff --git a/openpype/host/dirmap.py b/openpype/host/dirmap.py index 42bf80ecec..96a98e808e 100644 --- a/openpype/host/dirmap.py +++ b/openpype/host/dirmap.py @@ -32,19 +32,26 @@ class HostDirmap(object): """ def __init__( - self, host_name, project_name, project_settings=None, sync_module=None + self, + host_name, + project_name, + project_settings=None, + sync_module=None ): self.host_name = host_name self.project_name = project_name self._project_settings = project_settings - self._sync_module = sync_module # to limit reinit of Modules + self._sync_module = sync_module + # to limit reinit of Modules + self._sync_module_discovered = sync_module is not None self._log = None @property def sync_module(self): - if self._sync_module is None: + if not self._sync_module_discovered: + self._sync_module_discovered = True manager = ModulesManager() - self._sync_module = manager["sync_server"] + self._sync_module = manager.get("sync_server") return self._sync_module @property @@ -149,23 +156,27 @@ class HostDirmap(object): Returns: dict : { "source-path": [XXX], "destination-path": [YYYY]} """ - project_name = os.getenv("AVALON_PROJECT") + project_name = self.project_name + sync_module = self.sync_module mapping = {} - if (not self.sync_module.enabled or - project_name not in self.sync_module.get_enabled_projects()): + if ( + sync_module is None + or not sync_module.enabled + or project_name not in sync_module.get_enabled_projects() + ): 
return mapping - active_site = self.sync_module.get_local_normalized_site( - self.sync_module.get_active_site(project_name)) - remote_site = self.sync_module.get_local_normalized_site( - self.sync_module.get_remote_site(project_name)) + active_site = sync_module.get_local_normalized_site( + sync_module.get_active_site(project_name)) + remote_site = sync_module.get_local_normalized_site( + sync_module.get_remote_site(project_name)) self.log.debug( "active {} - remote {}".format(active_site, remote_site) ) if active_site == "local" and active_site != remote_site: - sync_settings = self.sync_module.get_sync_project_setting( + sync_settings = sync_module.get_sync_project_setting( project_name, exclude_locals=False, cached=False) @@ -179,7 +190,7 @@ class HostDirmap(object): self.log.debug("remote overrides {}".format(remote_overrides)) current_platform = platform.system().lower() - remote_provider = self.sync_module.get_provider_for_site( + remote_provider = sync_module.get_provider_for_site( project_name, remote_site ) # dirmap has sense only with regular disk provider, in the workfile diff --git a/openpype/hosts/aftereffects/api/launch_logic.py b/openpype/hosts/aftereffects/api/launch_logic.py index ea71122042..e90c3dc5b8 100644 --- a/openpype/hosts/aftereffects/api/launch_logic.py +++ b/openpype/hosts/aftereffects/api/launch_logic.py @@ -13,13 +13,13 @@ from wsrpc_aiohttp import ( WebSocketAsync ) -from qtpy import QtCore, QtWidgets +from qtpy import QtCore from openpype.lib import Logger -from openpype.tools.utils import host_tools from openpype.tests.lib import is_in_tests from openpype.pipeline import install_host, legacy_io from openpype.modules import ModulesManager +from openpype.tools.utils import host_tools, get_openpype_qt_app from openpype.tools.adobe_webserver.app import WebServerTool from .ws_stub import get_stub @@ -43,7 +43,7 @@ def main(*subprocess_args): install_host(host) os.environ["OPENPYPE_LOG_NO_COLORS"] = "False" - app = QtWidgets.QApplication([]) + app = get_openpype_qt_app() app.setQuitOnLastWindowClosed(False) launcher = ProcessLauncher(subprocess_args) diff --git a/openpype/hosts/aftereffects/api/pipeline.py b/openpype/hosts/aftereffects/api/pipeline.py index 5566ca9e5b..8fc7a70dd8 100644 --- a/openpype/hosts/aftereffects/api/pipeline.py +++ b/openpype/hosts/aftereffects/api/pipeline.py @@ -23,6 +23,7 @@ from openpype.host import ( ILoadHost, IPublishHost ) +from openpype.tools.utils import get_openpype_qt_app from .launch_logic import get_stub from .ws_stub import ConnectionNotEstablishedYet @@ -236,10 +237,7 @@ def check_inventory(): return # Warn about outdated containers. 
- _app = QtWidgets.QApplication.instance() - if not _app: - print("Starting new QApplication..") - _app = QtWidgets.QApplication([]) + _app = get_openpype_qt_app() message_box = QtWidgets.QMessageBox() message_box.setIcon(QtWidgets.QMessageBox.Warning) diff --git a/openpype/hosts/aftereffects/plugins/create/create_render.py b/openpype/hosts/aftereffects/plugins/create/create_render.py index fa79fac78f..dcf424b44f 100644 --- a/openpype/hosts/aftereffects/plugins/create/create_render.py +++ b/openpype/hosts/aftereffects/plugins/create/create_render.py @@ -28,7 +28,6 @@ class RenderCreator(Creator): create_allow_context_change = True # Settings - default_variants = [] mark_for_review = True def create(self, subset_name_from_ui, data, pre_create_data): @@ -171,6 +170,10 @@ class RenderCreator(Creator): ) self.mark_for_review = plugin_settings["mark_for_review"] + self.default_variants = plugin_settings.get( + "default_variants", + plugin_settings.get("defaults") or [] + ) def get_detail_description(self): return """Creator for Render instances diff --git a/openpype/hosts/aftereffects/plugins/load/load_background.py b/openpype/hosts/aftereffects/plugins/load/load_background.py index e7c29fee5a..16f45074aa 100644 --- a/openpype/hosts/aftereffects/plugins/load/load_background.py +++ b/openpype/hosts/aftereffects/plugins/load/load_background.py @@ -33,9 +33,10 @@ class BackgroundLoader(api.AfterEffectsLoader): existing_items, "{}_{}".format(context["asset"]["name"], name)) - layers = get_background_layers(self.fname) + path = self.filepath_from_context(context) + layers = get_background_layers(path) if not layers: - raise ValueError("No layers found in {}".format(self.fname)) + raise ValueError("No layers found in {}".format(path)) comp = stub.import_background(None, stub.LOADED_ICON + comp_name, layers) diff --git a/openpype/hosts/aftereffects/plugins/load/load_file.py b/openpype/hosts/aftereffects/plugins/load/load_file.py index 33a86aa505..def7c927ab 100644 --- a/openpype/hosts/aftereffects/plugins/load/load_file.py +++ b/openpype/hosts/aftereffects/plugins/load/load_file.py @@ -29,32 +29,32 @@ class FileLoader(api.AfterEffectsLoader): import_options = {} - file = self.fname + path = self.filepath_from_context(context) repr_cont = context["representation"]["context"] - if "#" not in file: + if "#" not in path: frame = repr_cont.get("frame") if frame: padding = len(frame) - file = file.replace(frame, "#" * padding) + path = path.replace(frame, "#" * padding) import_options['sequence'] = True - if not file: + if not path: repr_id = context["representation"]["_id"] self.log.warning( "Representation id `{}` is failing to load".format(repr_id)) return - file = file.replace("\\", "/") - if '.psd' in file: + path = path.replace("\\", "/") + if '.psd' in path: import_options['ImportAsType'] = 'ImportAsType.COMP' - comp = stub.import_file(self.fname, stub.LOADED_ICON + comp_name, + comp = stub.import_file(path, stub.LOADED_ICON + comp_name, import_options) if not comp: self.log.warning( - "Representation id `{}` is failing to load".format(file)) + "Representation `{}` is failing to load".format(path)) self.log.warning("Check host app for alert error.") return diff --git a/openpype/hosts/aftereffects/plugins/publish/closeAE.py b/openpype/hosts/aftereffects/plugins/publish/closeAE.py index eff2573e8f..0be20d9f05 100644 --- a/openpype/hosts/aftereffects/plugins/publish/closeAE.py +++ b/openpype/hosts/aftereffects/plugins/publish/closeAE.py @@ -15,7 +15,7 @@ class CloseAE(pyblish.api.ContextPlugin): active 
= True hosts = ["aftereffects"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): self.log.info("CloseAE") diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py b/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py index c21c3623c3..dc557f67fc 100644 --- a/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py +++ b/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py @@ -1,7 +1,6 @@ import os import pyblish.api -from openpype.pipeline import legacy_io from openpype.pipeline.create import get_subset_name @@ -44,7 +43,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin): instance.data["publish"] = instance.data["active"] # for DL def _get_new_instance(self, context, scene_file): - task = legacy_io.Session["AVALON_TASK"] + task = context.data["task"] version = context.data["version"] asset_entity = context.data["assetEntity"] project_entity = context.data["projectEntity"] diff --git a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py index c70aa41dbe..bdb48e11f8 100644 --- a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py @@ -1,11 +1,5 @@ import os -import sys -import six -from openpype.lib import ( - get_ffmpeg_tool_path, - run_subprocess, -) from openpype.pipeline import publish from openpype.hosts.aftereffects.api import get_stub diff --git a/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py b/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py index 6c36136b20..36f6035d23 100644 --- a/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py +++ b/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py @@ -1,6 +1,6 @@ import pyblish.api -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_asset_name from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, @@ -30,7 +30,7 @@ class ValidateInstanceAssetRepair(pyblish.api.Action): for instance in instances: data = stub.read(instance[0]) - data["asset"] = legacy_io.Session["AVALON_ASSET"] + data["asset"] = get_current_asset_name() stub.imprint(instance[0].instance_id, data) @@ -54,7 +54,7 @@ class ValidateInstanceAsset(pyblish.api.InstancePlugin): def process(self, instance): instance_asset = instance.data["asset"] - current_asset = legacy_io.Session["AVALON_ASSET"] + current_asset = get_current_asset_name() msg = ( f"Instance asset {instance_asset} is not the same " f"as current context {current_asset}." diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index 91cbfe524f..62d7987b47 100644 --- a/openpype/hosts/blender/api/ops.py +++ b/openpype/hosts/blender/api/ops.py @@ -16,10 +16,11 @@ import bpy import bpy.utils.previews from openpype import style -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_asset_name, get_current_task_name from openpype.tools.utils import host_tools from .workio import OpenFileCacher +from . 
import pipeline PREVIEW_COLLECTIONS: Dict = dict() @@ -283,7 +284,7 @@ class LaunchLoader(LaunchQtApp): def before_window_show(self): self._window.set_context( - {"asset": legacy_io.Session["AVALON_ASSET"]}, + {"asset": get_current_asset_name()}, refresh=True ) @@ -331,8 +332,8 @@ class LaunchWorkFiles(LaunchQtApp): def execute(self, context): result = super().execute(context) self._window.set_context({ - "asset": legacy_io.Session["AVALON_ASSET"], - "task": legacy_io.Session["AVALON_TASK"] + "asset": get_current_asset_name(), + "task": get_current_task_name() }) return result @@ -344,6 +345,26 @@ class LaunchWorkFiles(LaunchQtApp): self._window.refresh() +class SetFrameRange(bpy.types.Operator): + bl_idname = "wm.ayon_set_frame_range" + bl_label = "Set Frame Range" + + def execute(self, context): + data = pipeline.get_asset_data() + pipeline.set_frame_range(data) + return {"FINISHED"} + + +class SetResolution(bpy.types.Operator): + bl_idname = "wm.ayon_set_resolution" + bl_label = "Set Resolution" + + def execute(self, context): + data = pipeline.get_asset_data() + pipeline.set_resolution(data) + return {"FINISHED"} + + class TOPBAR_MT_avalon(bpy.types.Menu): """Avalon menu.""" @@ -362,8 +383,8 @@ class TOPBAR_MT_avalon(bpy.types.Menu): else: pyblish_menu_icon_id = 0 - asset = legacy_io.Session['AVALON_ASSET'] - task = legacy_io.Session['AVALON_TASK'] + asset = get_current_asset_name() + task = get_current_task_name() context_label = f"{asset}, {task}" context_label_item = layout.row() context_label_item.operator( @@ -381,9 +402,11 @@ class TOPBAR_MT_avalon(bpy.types.Menu): layout.operator(LaunchManager.bl_idname, text="Manage...") layout.operator(LaunchLibrary.bl_idname, text="Library...") layout.separator() + layout.operator(SetFrameRange.bl_idname, text="Set Frame Range") + layout.operator(SetResolution.bl_idname, text="Set Resolution") + layout.separator() layout.operator(LaunchWorkFiles.bl_idname, text="Work Files...") - # TODO (jasper): maybe add 'Reload Pipeline', 'Set Frame Range' and - # 'Set Resolution'? 
+ # TODO (jasper): maybe add 'Reload Pipeline' def draw_avalon_menu(self, context): @@ -399,6 +422,8 @@ classes = [ LaunchManager, LaunchLibrary, LaunchWorkFiles, + SetFrameRange, + SetResolution, TOPBAR_MT_avalon, ] @@ -411,6 +436,7 @@ def register(): pcoll.load("pyblish_menu_icon", str(pyblish_icon_file.absolute()), 'IMAGE') PREVIEW_COLLECTIONS["avalon"] = pcoll + BlenderApplication.get_app() for cls in classes: bpy.utils.register_class(cls) bpy.types.TOPBAR_MT_editor_menus.append(draw_avalon_menu) diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py index 0f756d8cb6..29339a512c 100644 --- a/openpype/hosts/blender/api/pipeline.py +++ b/openpype/hosts/blender/api/pipeline.py @@ -14,6 +14,8 @@ from openpype.client import get_asset_by_name from openpype.pipeline import ( schema, legacy_io, + get_current_project_name, + get_current_asset_name, register_loader_plugin_path, register_creator_plugin_path, deregister_loader_plugin_path, @@ -111,22 +113,21 @@ def message_window(title, message): _process_app_events() -def set_start_end_frames(): - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] +def get_asset_data(): + project_name = get_current_project_name() + asset_name = get_current_asset_name() asset_doc = get_asset_by_name(project_name, asset_name) + return asset_doc.get("data") + + +def set_frame_range(data): scene = bpy.context.scene # Default scene settings frameStart = scene.frame_start frameEnd = scene.frame_end fps = scene.render.fps / scene.render.fps_base - resolution_x = scene.render.resolution_x - resolution_y = scene.render.resolution_y - - # Check if settings are set - data = asset_doc.get("data") if not data: return @@ -137,26 +138,47 @@ def set_start_end_frames(): frameEnd = data.get("frameEnd") if data.get("fps"): fps = data.get("fps") - if data.get("resolutionWidth"): - resolution_x = data.get("resolutionWidth") - if data.get("resolutionHeight"): - resolution_y = data.get("resolutionHeight") scene.frame_start = frameStart scene.frame_end = frameEnd scene.render.fps = round(fps) scene.render.fps_base = round(fps) / fps + + +def set_resolution(data): + scene = bpy.context.scene + + # Default scene settings + resolution_x = scene.render.resolution_x + resolution_y = scene.render.resolution_y + + if not data: + return + + if data.get("resolutionWidth"): + resolution_x = data.get("resolutionWidth") + if data.get("resolutionHeight"): + resolution_y = data.get("resolutionHeight") + scene.render.resolution_x = resolution_x scene.render.resolution_y = resolution_y def on_new(): - set_start_end_frames() - project = os.environ.get("AVALON_PROJECT") - settings = get_project_settings(project) + settings = get_project_settings(project).get("blender") - unit_scale_settings = settings.get("blender").get("unit_scale_settings") + set_resolution_startup = settings.get("set_resolution_startup") + set_frames_startup = settings.get("set_frames_startup") + + data = get_asset_data() + + if set_resolution_startup: + set_resolution(data) + if set_frames_startup: + set_frame_range(data) + + unit_scale_settings = settings.get("unit_scale_settings") unit_scale_enabled = unit_scale_settings.get("enabled") if unit_scale_enabled: unit_scale = unit_scale_settings.get("base_file_unit_scale") @@ -164,12 +186,20 @@ def on_new(): def on_open(): - set_start_end_frames() - project = os.environ.get("AVALON_PROJECT") - settings = get_project_settings(project) + settings = get_project_settings(project).get("blender") - 
unit_scale_settings = settings.get("blender").get("unit_scale_settings") + set_resolution_startup = settings.get("set_resolution_startup") + set_frames_startup = settings.get("set_frames_startup") + + data = get_asset_data() + + if set_resolution_startup: + set_resolution(data) + if set_frames_startup: + set_frame_range(data) + + unit_scale_settings = settings.get("unit_scale_settings") unit_scale_enabled = unit_scale_settings.get("enabled") apply_on_opening = unit_scale_settings.get("apply_on_opening") if unit_scale_enabled and apply_on_opening: diff --git a/openpype/hosts/blender/api/plugin.py b/openpype/hosts/blender/api/plugin.py index 1274795c6b..fb87d08cce 100644 --- a/openpype/hosts/blender/api/plugin.py +++ b/openpype/hosts/blender/api/plugin.py @@ -243,7 +243,8 @@ class AssetLoader(LoaderPlugin): """ # TODO (jasper): make it possible to add the asset several times by # just re-using the collection - assert Path(self.fname).exists(), f"{self.fname} doesn't exist." + filepath = self.filepath_from_context(context) + assert Path(filepath).exists(), f"{filepath} doesn't exist." asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py index 559e9ae0ce..68c9bfdd57 100644 --- a/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py +++ b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py @@ -1,6 +1,6 @@ from pathlib import Path -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class AddPythonScriptToLaunchArgs(PreLaunchHook): @@ -8,9 +8,8 @@ class AddPythonScriptToLaunchArgs(PreLaunchHook): # Append after file argument order = 15 - app_groups = [ - "blender", - ] + app_groups = {"blender"} + launch_types = {LaunchTypes.local} def execute(self): if not self.launch_context.data.get("python_scripts"): diff --git a/openpype/hosts/blender/hooks/pre_pyside_install.py b/openpype/hosts/blender/hooks/pre_pyside_install.py index e5f66d2a26..777e383215 100644 --- a/openpype/hosts/blender/hooks/pre_pyside_install.py +++ b/openpype/hosts/blender/hooks/pre_pyside_install.py @@ -2,7 +2,7 @@ import os import re import subprocess from platform import system -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class InstallPySideToBlender(PreLaunchHook): @@ -16,7 +16,8 @@ class InstallPySideToBlender(PreLaunchHook): blender's python packages. 
""" - app_groups = ["blender"] + app_groups = {"blender"} + launch_types = {LaunchTypes.local} def execute(self): # Prelaunch hook is not crucial diff --git a/openpype/hosts/blender/hooks/pre_windows_console.py b/openpype/hosts/blender/hooks/pre_windows_console.py index d6be45b225..2161b7a2f5 100644 --- a/openpype/hosts/blender/hooks/pre_windows_console.py +++ b/openpype/hosts/blender/hooks/pre_windows_console.py @@ -1,5 +1,5 @@ import subprocess -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class BlenderConsoleWindows(PreLaunchHook): @@ -13,8 +13,9 @@ class BlenderConsoleWindows(PreLaunchHook): # Should be as last hook because must change launch arguments to string order = 1000 - app_groups = ["blender"] - platforms = ["windows"] + app_groups = {"blender"} + platforms = {"windows"} + launch_types = {LaunchTypes.local} def execute(self): # Change `creationflags` to CREATE_NEW_CONSOLE diff --git a/openpype/hosts/blender/plugins/create/create_action.py b/openpype/hosts/blender/plugins/create/create_action.py index 54b3a501a7..0203ba74c0 100644 --- a/openpype/hosts/blender/plugins/create/create_action.py +++ b/openpype/hosts/blender/plugins/create/create_action.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name import openpype.hosts.blender.api.plugin from openpype.hosts.blender.api import lib @@ -22,7 +22,7 @@ class CreateAction(openpype.hosts.blender.api.plugin.Creator): name = openpype.hosts.blender.api.plugin.asset_name(asset, subset) collection = bpy.data.collections.new(name=name) bpy.context.scene.collection.children.link(collection) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(collection, self.data) if (self.options or {}).get("useSelection"): diff --git a/openpype/hosts/blender/plugins/create/create_animation.py b/openpype/hosts/blender/plugins/create/create_animation.py index a0e9e5e399..bc2840952b 100644 --- a/openpype/hosts/blender/plugins/create/create_animation.py +++ b/openpype/hosts/blender/plugins/create/create_animation.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -37,7 +37,7 @@ class CreateAnimation(plugin.Creator): # asset_group.empty_display_type = 'SINGLE_ARROW' asset_group = bpy.data.collections.new(name=name) instances.children.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(asset_group, self.data) if (self.options or {}).get("useSelection"): diff --git a/openpype/hosts/blender/plugins/create/create_blendScene.py b/openpype/hosts/blender/plugins/create/create_blendScene.py new file mode 100644 index 0000000000..63bcf212ff --- /dev/null +++ b/openpype/hosts/blender/plugins/create/create_blendScene.py @@ -0,0 +1,51 @@ +"""Create a Blender scene asset.""" + +import bpy + +from openpype.pipeline import get_current_task_name +from openpype.hosts.blender.api import plugin, lib, ops +from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES + + +class CreateBlendScene(plugin.Creator): + """Generic group of assets""" + + name = "blendScene" + label = "Blender Scene" + family = "blendScene" + icon = "cubes" + + def process(self): + """ Run the creator on Blender main thread""" + mti = 
ops.MainThreadItem(self._process) + ops.execute_in_main_thread(mti) + + def _process(self): + # Get Instance Container or create it if it does not exist + instances = bpy.data.collections.get(AVALON_INSTANCES) + if not instances: + instances = bpy.data.collections.new(name=AVALON_INSTANCES) + bpy.context.scene.collection.children.link(instances) + + # Create instance object + asset = self.data["asset"] + subset = self.data["subset"] + name = plugin.asset_name(asset, subset) + asset_group = bpy.data.objects.new(name=name, object_data=None) + asset_group.empty_display_type = 'SINGLE_ARROW' + instances.objects.link(asset_group) + self.data['task'] = get_current_task_name() + lib.imprint(asset_group, self.data) + + # Add selected objects to instance + if (self.options or {}).get("useSelection"): + bpy.context.view_layer.objects.active = asset_group + selected = lib.get_selection() + for obj in selected: + if obj.parent in selected: + obj.select_set(False) + continue + selected.append(asset_group) + bpy.ops.object.parent_set(keep_transform=True) + + return asset_group diff --git a/openpype/hosts/blender/plugins/create/create_camera.py b/openpype/hosts/blender/plugins/create/create_camera.py index ada512d7ac..7a770a3e77 100644 --- a/openpype/hosts/blender/plugins/create/create_camera.py +++ b/openpype/hosts/blender/plugins/create/create_camera.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -35,7 +35,7 @@ class CreateCamera(plugin.Creator): asset_group = bpy.data.objects.new(name=name, object_data=None) asset_group.empty_display_type = 'SINGLE_ARROW' instances.objects.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() print(f"self.data: {self.data}") lib.imprint(asset_group, self.data) @@ -43,7 +43,9 @@ class CreateCamera(plugin.Creator): bpy.context.view_layer.objects.active = asset_group selected = lib.get_selection() for obj in selected: - obj.select_set(True) + if obj.parent in selected: + obj.select_set(False) + continue selected.append(asset_group) bpy.ops.object.parent_set(keep_transform=True) else: diff --git a/openpype/hosts/blender/plugins/create/create_layout.py b/openpype/hosts/blender/plugins/create/create_layout.py index 5949a4b86e..73ed683256 100644 --- a/openpype/hosts/blender/plugins/create/create_layout.py +++ b/openpype/hosts/blender/plugins/create/create_layout.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -34,7 +34,7 @@ class CreateLayout(plugin.Creator): asset_group = bpy.data.objects.new(name=name, object_data=None) asset_group.empty_display_type = 'SINGLE_ARROW' instances.objects.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(asset_group, self.data) # Add selected objects to instance @@ -42,7 +42,9 @@ class CreateLayout(plugin.Creator): bpy.context.view_layer.objects.active = asset_group selected = lib.get_selection() for obj in selected: - obj.select_set(True) + if obj.parent in selected: + obj.select_set(False) + continue selected.append(asset_group) bpy.ops.object.parent_set(keep_transform=True) diff --git 
a/openpype/hosts/blender/plugins/create/create_model.py b/openpype/hosts/blender/plugins/create/create_model.py index fedc708943..51fc6683f6 100644 --- a/openpype/hosts/blender/plugins/create/create_model.py +++ b/openpype/hosts/blender/plugins/create/create_model.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -34,7 +34,7 @@ class CreateModel(plugin.Creator): asset_group = bpy.data.objects.new(name=name, object_data=None) asset_group.empty_display_type = 'SINGLE_ARROW' instances.objects.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(asset_group, self.data) # Add selected objects to instance @@ -42,7 +42,9 @@ class CreateModel(plugin.Creator): bpy.context.view_layer.objects.active = asset_group selected = lib.get_selection() for obj in selected: - obj.select_set(True) + if obj.parent in selected: + obj.select_set(False) + continue selected.append(asset_group) bpy.ops.object.parent_set(keep_transform=True) diff --git a/openpype/hosts/blender/plugins/create/create_pointcache.py b/openpype/hosts/blender/plugins/create/create_pointcache.py index 38707fd3b1..6220f68dc5 100644 --- a/openpype/hosts/blender/plugins/create/create_pointcache.py +++ b/openpype/hosts/blender/plugins/create/create_pointcache.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name import openpype.hosts.blender.api.plugin from openpype.hosts.blender.api import lib @@ -22,7 +22,7 @@ class CreatePointcache(openpype.hosts.blender.api.plugin.Creator): name = openpype.hosts.blender.api.plugin.asset_name(asset, subset) collection = bpy.data.collections.new(name=name) bpy.context.scene.collection.children.link(collection) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(collection, self.data) if (self.options or {}).get("useSelection"): diff --git a/openpype/hosts/blender/plugins/create/create_review.py b/openpype/hosts/blender/plugins/create/create_review.py index bf4ea6a7cd..914f249891 100644 --- a/openpype/hosts/blender/plugins/create/create_review.py +++ b/openpype/hosts/blender/plugins/create/create_review.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES @@ -33,7 +33,7 @@ class CreateReview(plugin.Creator): name = plugin.asset_name(asset, subset) asset_group = bpy.data.collections.new(name=name) instances.children.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(asset_group, self.data) if (self.options or {}).get("useSelection"): diff --git a/openpype/hosts/blender/plugins/create/create_rig.py b/openpype/hosts/blender/plugins/create/create_rig.py index 0abd306c6b..08cc46ee3e 100644 --- a/openpype/hosts/blender/plugins/create/create_rig.py +++ b/openpype/hosts/blender/plugins/create/create_rig.py @@ -2,7 +2,7 @@ import bpy -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name from openpype.hosts.blender.api import plugin, lib, ops from openpype.hosts.blender.api.pipeline import 
AVALON_INSTANCES @@ -34,7 +34,7 @@ class CreateRig(plugin.Creator): asset_group = bpy.data.objects.new(name=name, object_data=None) asset_group.empty_display_type = 'SINGLE_ARROW' instances.objects.link(asset_group) - self.data['task'] = legacy_io.Session.get('AVALON_TASK') + self.data['task'] = get_current_task_name() lib.imprint(asset_group, self.data) # Add selected objects to instance @@ -42,7 +42,9 @@ class CreateRig(plugin.Creator): bpy.context.view_layer.objects.active = asset_group selected = lib.get_selection() for obj in selected: - obj.select_set(True) + if obj.parent in selected: + obj.select_set(False) + continue selected.append(asset_group) bpy.ops.object.parent_set(keep_transform=True) diff --git a/openpype/hosts/blender/plugins/load/import_workfile.py b/openpype/hosts/blender/plugins/load/import_workfile.py index bbdf1c7ea0..4f5016d422 100644 --- a/openpype/hosts/blender/plugins/load/import_workfile.py +++ b/openpype/hosts/blender/plugins/load/import_workfile.py @@ -52,7 +52,8 @@ class AppendBlendLoader(plugin.AssetLoader): color = "#775555" def load(self, context, name=None, namespace=None, data=None): - append_workfile(context, self.fname, False) + path = self.filepath_from_context(context) + append_workfile(context, path, False) # We do not containerize imported content, it remains unmanaged return @@ -76,7 +77,8 @@ class ImportBlendLoader(plugin.AssetLoader): color = "#775555" def load(self, context, name=None, namespace=None, data=None): - append_workfile(context, self.fname, True) + path = self.filepath_from_context(context) + append_workfile(context, path, True) # We do not containerize imported content, it remains unmanaged return diff --git a/openpype/hosts/blender/plugins/load/load_abc.py b/openpype/hosts/blender/plugins/load/load_abc.py index c1d73eff02..292925c833 100644 --- a/openpype/hosts/blender/plugins/load/load_abc.py +++ b/openpype/hosts/blender/plugins/load/load_abc.py @@ -111,7 +111,7 @@ class CacheModelLoader(plugin.AssetLoader): options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_action.py b/openpype/hosts/blender/plugins/load/load_action.py index 3c8fe988f0..3447e67ebf 100644 --- a/openpype/hosts/blender/plugins/load/load_action.py +++ b/openpype/hosts/blender/plugins/load/load_action.py @@ -43,7 +43,7 @@ class BlendActionLoader(openpype.hosts.blender.api.plugin.AssetLoader): options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] lib_container = openpype.hosts.blender.api.plugin.asset_name(asset, subset) diff --git a/openpype/hosts/blender/plugins/load/load_animation.py b/openpype/hosts/blender/plugins/load/load_animation.py index 6b8d4abd04..3e7f808903 100644 --- a/openpype/hosts/blender/plugins/load/load_animation.py +++ b/openpype/hosts/blender/plugins/load/load_animation.py @@ -34,7 +34,7 @@ class BlendAnimationLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) with bpy.data.libraries.load( libpath, link=True, relative=False diff --git a/openpype/hosts/blender/plugins/load/load_audio.py b/openpype/hosts/blender/plugins/load/load_audio.py index 3f4fcc17de..ac8f363316 100644 --- 
a/openpype/hosts/blender/plugins/load/load_audio.py +++ b/openpype/hosts/blender/plugins/load/load_audio.py @@ -38,7 +38,7 @@ class AudioLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_blend.py b/openpype/hosts/blender/plugins/load/load_blend.py new file mode 100644 index 0000000000..99f291a5a7 --- /dev/null +++ b/openpype/hosts/blender/plugins/load/load_blend.py @@ -0,0 +1,257 @@ +from typing import Dict, List, Optional +from pathlib import Path + +import bpy + +from openpype.pipeline import ( + legacy_create, + get_representation_path, + AVALON_CONTAINER_ID, +) +from openpype.pipeline.create import get_legacy_creator_by_name +from openpype.hosts.blender.api import plugin +from openpype.hosts.blender.api.lib import imprint +from openpype.hosts.blender.api.pipeline import ( + AVALON_CONTAINERS, + AVALON_PROPERTY, +) + + +class BlendLoader(plugin.AssetLoader): + """Load assets from a .blend file.""" + + families = ["model", "rig", "layout", "camera", "blendScene"] + representations = ["blend"] + + label = "Append Blend" + icon = "code-fork" + color = "orange" + + @staticmethod + def _get_asset_container(objects): + empties = [obj for obj in objects if obj.type == 'EMPTY'] + + for empty in empties: + if empty.get(AVALON_PROPERTY): + return empty + + return None + + @staticmethod + def get_all_container_parents(asset_group): + parent_containers = [] + parent = asset_group.parent + while parent: + if parent.get(AVALON_PROPERTY): + parent_containers.append(parent) + parent = parent.parent + + return parent_containers + + def _post_process_layout(self, container, asset, representation): + rigs = [ + obj for obj in container.children_recursive + if ( + obj.type == 'EMPTY' and + obj.get(AVALON_PROPERTY) and + obj.get(AVALON_PROPERTY).get('family') == 'rig' + ) + ] + + for rig in rigs: + creator_plugin = get_legacy_creator_by_name("CreateAnimation") + legacy_create( + creator_plugin, + name=rig.name.split(':')[-1] + "_animation", + asset=asset, + options={ + "useSelection": False, + "asset_group": rig + }, + data={ + "dependencies": representation + } + ) + + def _process_data(self, libpath, group_name): + # Append all the data from the .blend file + with bpy.data.libraries.load( + libpath, link=False, relative=False + ) as (data_from, data_to): + for attr in dir(data_to): + setattr(data_to, attr, getattr(data_from, attr)) + + members = [] + + # Rename the object to add the asset name + for attr in dir(data_to): + for data in getattr(data_to, attr): + data.name = f"{group_name}:{data.name}" + members.append(data) + + container = self._get_asset_container(data_to.objects) + assert container, "No asset group found" + + container.name = group_name + container.empty_display_type = 'SINGLE_ARROW' + + # Link the collection to the scene + bpy.context.scene.collection.objects.link(container) + + # Link all the container children to the collection + for obj in container.children_recursive: + bpy.context.scene.collection.objects.link(obj) + + # Remove the library from the blend file + library = bpy.data.libraries.get(bpy.path.basename(libpath)) + bpy.data.libraries.remove(library) + + return container, members + + def process_asset( + self, context: dict, name: str, namespace: Optional[str] = None, + options: Optional[Dict] = None + ) -> 
Optional[List]:
+        """
+        Arguments:
+            name: Use pre-defined name
+            namespace: Use pre-defined namespace
+            context: Full parenthood of representation to load
+            options: Additional settings dictionary
+        """
+        libpath = self.fname
+        asset = context["asset"]["name"]
+        subset = context["subset"]["name"]
+
+        try:
+            family = context["representation"]["context"]["family"]
+        except KeyError:
+            family = "model"
+
+        representation = str(context["representation"]["_id"])
+
+        asset_name = plugin.asset_name(asset, subset)
+        unique_number = plugin.get_unique_number(asset, subset)
+        group_name = plugin.asset_name(asset, subset, unique_number)
+        namespace = namespace or f"{asset}_{unique_number}"
+
+        avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
+        if not avalon_container:
+            avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
+            bpy.context.scene.collection.children.link(avalon_container)
+
+        container, members = self._process_data(libpath, group_name)
+
+        if family == "layout":
+            self._post_process_layout(container, asset, representation)
+
+        avalon_container.objects.link(container)
+
+        data = {
+            "schema": "openpype:container-2.0",
+            "id": AVALON_CONTAINER_ID,
+            "name": name,
+            "namespace": namespace or '',
+            "loader": str(self.__class__.__name__),
+            "representation": str(context["representation"]["_id"]),
+            "libpath": libpath,
+            "asset_name": asset_name,
+            "parent": str(context["representation"]["parent"]),
+            "family": context["representation"]["context"]["family"],
+            "objectName": group_name,
+            "members": members,
+        }
+
+        container[AVALON_PROPERTY] = data
+
+        objects = [
+            obj for obj in bpy.data.objects
+            if obj.name.startswith(f"{group_name}:")
+        ]
+
+        self[:] = objects
+        return objects
+
+    def exec_update(self, container: Dict, representation: Dict):
+        """
+        Update the loaded asset.
+        """
+        group_name = container["objectName"]
+        asset_group = bpy.data.objects.get(group_name)
+        libpath = Path(get_representation_path(representation)).as_posix()
+
+        assert asset_group, (
+            f"The asset is not loaded: {container['objectName']}"
+        )
+
+        transform = asset_group.matrix_basis.copy()
+        old_data = dict(asset_group.get(AVALON_PROPERTY))
+        parent = asset_group.parent
+
+        self.exec_remove(container)
+
+        asset_group, members = self._process_data(libpath, group_name)
+
+        avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
+        avalon_container.objects.link(asset_group)
+
+        asset_group.matrix_basis = transform
+        asset_group.parent = parent
+
+        # Restore the old data, but reset members, as they don't exist anymore
+        # This avoids a crash, because the memory addresses of those members
+        # are not valid anymore
+        old_data["members"] = []
+        asset_group[AVALON_PROPERTY] = old_data
+
+        new_data = {
+            "libpath": libpath,
+            "representation": str(representation["_id"]),
+            "parent": str(representation["parent"]),
+            "members": members,
+        }
+
+        imprint(asset_group, new_data)
+
+        # We need to update all the parent container members
+        parent_containers = self.get_all_container_parents(asset_group)
+
+        for parent_container in parent_containers:
+            parent_members = parent_container[AVALON_PROPERTY]["members"]
+            parent_container[AVALON_PROPERTY]["members"] = (
+                parent_members + members)
+
+    def exec_remove(self, container: Dict) -> bool:
+        """
+        Remove an existing container from a Blender scene.
diff --git a/openpype/hosts/blender/plugins/load/load_camera_abc.py b/openpype/hosts/blender/plugins/load/load_camera_abc.py
index 21b48f409f..e5afecff66 100644
--- a/openpype/hosts/blender/plugins/load/load_camera_abc.py
+++ b/openpype/hosts/blender/plugins/load/load_camera_abc.py
@@ -81,7 +81,9 @@ class AbcCameraLoader(plugin.AssetLoader):
             context: Full parenthood of representation to load
             options: Additional settings dictionary
         """
-        libpath = self.fname
+
+        libpath = self.filepath_from_context(context)
+
         asset = context["asset"]["name"]
         subset = context["subset"]["name"]
diff --git a/openpype/hosts/blender/plugins/load/load_camera_blend.py b/openpype/hosts/blender/plugins/load/load_camera_blend.py
deleted file mode 100644
index f00027f0b4..0000000000
--- a/openpype/hosts/blender/plugins/load/load_camera_blend.py
+++ /dev/null
@@ -1,256 +0,0 @@
-"""Load a camera asset in Blender."""
-
-import logging
-from pathlib import Path
-from pprint import pformat
-from typing import Dict, List, Optional
-
-import bpy
-
-from openpype.pipeline import (
-    get_representation_path,
-    AVALON_CONTAINER_ID,
-)
-from openpype.hosts.blender.api import plugin
-from openpype.hosts.blender.api.pipeline import (
-    AVALON_CONTAINERS,
-    AVALON_PROPERTY,
-)
-
-logger = logging.getLogger("openpype").getChild(
-    "blender").getChild("load_camera")
-
-
-class BlendCameraLoader(plugin.AssetLoader):
-    """Load a camera from a .blend file.
-
-    Warning:
-        Loading the same asset more then once is not properly supported at the
-        moment.
- """ - - families = ["camera"] - representations = ["blend"] - - label = "Link Camera (Blend)" - icon = "code-fork" - color = "orange" - - def _remove(self, asset_group): - objects = list(asset_group.children) - - for obj in objects: - if obj.type == 'CAMERA': - bpy.data.cameras.remove(obj.data) - - def _process(self, libpath, asset_group, group_name): - with bpy.data.libraries.load( - libpath, link=True, relative=False - ) as (data_from, data_to): - data_to.objects = data_from.objects - - parent = bpy.context.scene.collection - - empties = [obj for obj in data_to.objects if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if empty.get(AVALON_PROPERTY): - container = empty - break - - assert container, "No asset group found" - - # Children must be linked before parents, - # otherwise the hierarchy will break - objects = [] - nodes = list(container.children) - - for obj in nodes: - obj.parent = asset_group - - for obj in nodes: - objects.append(obj) - nodes.extend(list(obj.children)) - - objects.reverse() - - for obj in objects: - parent.objects.link(obj) - - for obj in objects: - local_obj = plugin.prepare_data(obj, group_name) - - if local_obj.type != 'EMPTY': - plugin.prepare_data(local_obj.data, group_name) - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - objects.reverse() - - bpy.data.orphans_purge(do_local_ids=False) - - plugin.deselect_all() - - return objects - - def process_asset( - self, context: dict, name: str, namespace: Optional[str] = None, - options: Optional[Dict] = None - ) -> Optional[List]: - """ - Arguments: - name: Use pre-defined name - namespace: Use pre-defined namespace - context: Full parenthood of representation to load - options: Additional settings dictionary - """ - libpath = self.fname - asset = context["asset"]["name"] - subset = context["subset"]["name"] - - asset_name = plugin.asset_name(asset, subset) - unique_number = plugin.get_unique_number(asset, subset) - group_name = plugin.asset_name(asset, subset, unique_number) - namespace = namespace or f"{asset}_{unique_number}" - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) - - asset_group = bpy.data.objects.new(group_name, object_data=None) - asset_group.empty_display_type = 'SINGLE_ARROW' - avalon_container.objects.link(asset_group) - - objects = self._process(libpath, asset_group, group_name) - - bpy.context.scene.collection.objects.link(asset_group) - - asset_group[AVALON_PROPERTY] = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, - "name": name, - "namespace": namespace or '', - "loader": str(self.__class__.__name__), - "representation": str(context["representation"]["_id"]), - "libpath": libpath, - "asset_name": asset_name, - "parent": str(context["representation"]["parent"]), - "family": context["representation"]["context"]["family"], - "objectName": group_name - } - - self[:] = objects - return objects - - def exec_update(self, container: Dict, representation: Dict): - """Update the loaded asset. - - This will remove all children of the asset group, load the new ones - and add them as children of the group. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = Path(get_representation_path(representation)) - extension = libpath.suffix.lower() - - self.log.info( - "Container: %s\nRepresentation: %s", - pformat(container, indent=2), - pformat(representation, indent=2), - ) - - assert asset_group, ( - f"The asset is not loaded: {container['objectName']}" - ) - assert libpath, ( - "No existing library file found for {container['objectName']}" - ) - assert libpath.is_file(), ( - f"The file doesn't exist: {libpath}" - ) - assert extension in plugin.VALID_EXTENSIONS, ( - f"Unsupported file: {libpath}" - ) - - metadata = asset_group.get(AVALON_PROPERTY) - group_libpath = metadata["libpath"] - - normalized_group_libpath = ( - str(Path(bpy.path.abspath(group_libpath)).resolve()) - ) - normalized_libpath = ( - str(Path(bpy.path.abspath(str(libpath))).resolve()) - ) - self.log.debug( - "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s", - normalized_group_libpath, - normalized_libpath, - ) - if normalized_group_libpath == normalized_libpath: - self.log.info("Library already loaded, not updating...") - return - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath: - count += 1 - - mat = asset_group.matrix_basis.copy() - - self._remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - if library: - bpy.data.libraries.remove(library) - - self._process(str(libpath), asset_group, object_name) - - asset_group.matrix_basis = mat - - metadata["libpath"] = str(libpath) - metadata["representation"] = str(representation["_id"]) - metadata["parent"] = str(representation["parent"]) - - def exec_remove(self, container: Dict) -> bool: - """Remove an existing container from a Blender scene. - - Arguments: - container (openpype:container-1.0): Container to remove, - from `host.ls()`. - - Returns: - bool: Whether the container was deleted. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = asset_group.get(AVALON_PROPERTY).get('libpath') - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == libpath: - count += 1 - - if not asset_group: - return False - - self._remove(asset_group) - - bpy.data.objects.remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(libpath)) - bpy.data.libraries.remove(library) - - return True diff --git a/openpype/hosts/blender/plugins/load/load_camera_fbx.py b/openpype/hosts/blender/plugins/load/load_camera_fbx.py index 97f844e610..b9d05dda0a 100644 --- a/openpype/hosts/blender/plugins/load/load_camera_fbx.py +++ b/openpype/hosts/blender/plugins/load/load_camera_fbx.py @@ -86,7 +86,7 @@ class FbxCameraLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_fbx.py b/openpype/hosts/blender/plugins/load/load_fbx.py index ee2e7d175c..e129ea6754 100644 --- a/openpype/hosts/blender/plugins/load/load_fbx.py +++ b/openpype/hosts/blender/plugins/load/load_fbx.py @@ -130,7 +130,7 @@ class FbxModelLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_layout_blend.py b/openpype/hosts/blender/plugins/load/load_layout_blend.py deleted file mode 100644 index 7d2fd23444..0000000000 --- a/openpype/hosts/blender/plugins/load/load_layout_blend.py +++ /dev/null @@ -1,469 +0,0 @@ -"""Load a layout in Blender.""" - -from pathlib import Path -from pprint import pformat -from typing import Dict, List, Optional - -import bpy - -from openpype.pipeline import ( - legacy_create, - get_representation_path, - AVALON_CONTAINER_ID, -) -from openpype.pipeline.create import get_legacy_creator_by_name -from openpype.hosts.blender.api import plugin -from openpype.hosts.blender.api.pipeline import ( - AVALON_CONTAINERS, - AVALON_PROPERTY, -) - - -class BlendLayoutLoader(plugin.AssetLoader): - """Load layout from a .blend file.""" - - families = ["layout"] - representations = ["blend"] - - label = "Link Layout" - icon = "code-fork" - color = "orange" - - def _remove(self, asset_group): - objects = list(asset_group.children) - - for obj in objects: - if obj.type == 'MESH': - for material_slot in list(obj.material_slots): - if material_slot.material: - bpy.data.materials.remove(material_slot.material) - bpy.data.meshes.remove(obj.data) - elif obj.type == 'ARMATURE': - objects.extend(obj.children) - bpy.data.armatures.remove(obj.data) - elif obj.type == 'CURVE': - bpy.data.curves.remove(obj.data) - elif obj.type == 'EMPTY': - objects.extend(obj.children) - bpy.data.objects.remove(obj) - - def _remove_asset_and_library(self, asset_group): - if not asset_group.get(AVALON_PROPERTY): - return - - libpath = asset_group.get(AVALON_PROPERTY).get('libpath') - - if not libpath: - return - - # Check how many assets use the same library - count = 0 - for obj in 
bpy.data.collections.get(AVALON_CONTAINERS).all_objects: - if obj.get(AVALON_PROPERTY).get('libpath') == libpath: - count += 1 - - self._remove(asset_group) - - bpy.data.objects.remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(libpath)) - if library: - bpy.data.libraries.remove(library) - - def _process( - self, libpath, asset_group, group_name, asset, representation, - actions, anim_instances - ): - with bpy.data.libraries.load( - libpath, link=True, relative=False - ) as (data_from, data_to): - data_to.objects = data_from.objects - - parent = bpy.context.scene.collection - - empties = [obj for obj in data_to.objects if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if (empty.get(AVALON_PROPERTY) and - empty.get(AVALON_PROPERTY).get('family') == 'layout'): - container = empty - break - - assert container, "No asset group found" - - # Children must be linked before parents, - # otherwise the hierarchy will break - objects = [] - nodes = list(container.children) - - allowed_types = ['ARMATURE', 'MESH', 'EMPTY'] - - for obj in nodes: - if obj.type in allowed_types: - obj.parent = asset_group - - for obj in nodes: - if obj.type in allowed_types: - objects.append(obj) - nodes.extend(list(obj.children)) - - objects.reverse() - - constraints = [] - - armatures = [obj for obj in objects if obj.type == 'ARMATURE'] - - for armature in armatures: - for bone in armature.pose.bones: - for constraint in bone.constraints: - if hasattr(constraint, 'target'): - constraints.append(constraint) - - for obj in objects: - parent.objects.link(obj) - - for obj in objects: - local_obj = plugin.prepare_data(obj) - - action = None - - if actions: - action = actions.get(local_obj.name, None) - - if local_obj.type == 'MESH': - plugin.prepare_data(local_obj.data) - - if obj != local_obj: - for constraint in constraints: - if constraint.target == obj: - constraint.target = local_obj - - for material_slot in local_obj.material_slots: - if material_slot.material: - plugin.prepare_data(material_slot.material) - elif local_obj.type == 'ARMATURE': - plugin.prepare_data(local_obj.data) - - if action: - if local_obj.animation_data is None: - local_obj.animation_data_create() - local_obj.animation_data.action = action - elif (local_obj.animation_data and - local_obj.animation_data.action): - plugin.prepare_data( - local_obj.animation_data.action) - - # Set link the drivers to the local object - if local_obj.data.animation_data: - for d in local_obj.data.animation_data.drivers: - for v in d.driver.variables: - for t in v.targets: - t.id = local_obj - - elif local_obj.type == 'EMPTY': - if (not anim_instances or - (anim_instances and - local_obj.name not in anim_instances.keys())): - avalon = local_obj.get(AVALON_PROPERTY) - if avalon and avalon.get('family') == 'rig': - creator_plugin = get_legacy_creator_by_name( - "CreateAnimation") - if not creator_plugin: - raise ValueError( - "Creator plugin \"CreateAnimation\" was " - "not found.") - - legacy_create( - creator_plugin, - name=local_obj.name.split(':')[-1] + "_animation", - asset=asset, - options={"useSelection": False, - "asset_group": local_obj}, - data={"dependencies": representation} - ) - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - objects.reverse() - - armatures = [ - obj for obj in bpy.data.objects - if 
obj.type == 'ARMATURE' and obj.library is None] - arm_act = {} - - # The armatures with an animation need to be at the center of the - # scene to be hooked correctly by the curves modifiers. - for armature in armatures: - if armature.animation_data and armature.animation_data.action: - arm_act[armature] = armature.animation_data.action - armature.animation_data.action = None - armature.location = (0.0, 0.0, 0.0) - for bone in armature.pose.bones: - bone.location = (0.0, 0.0, 0.0) - bone.rotation_euler = (0.0, 0.0, 0.0) - - curves = [obj for obj in data_to.objects if obj.type == 'CURVE'] - - for curve in curves: - curve_name = curve.name.split(':')[0] - curve_obj = bpy.data.objects.get(curve_name) - - local_obj = plugin.prepare_data(curve) - plugin.prepare_data(local_obj.data) - - # Curves need to reset the hook, but to do that they need to be - # in the view layer. - parent.objects.link(local_obj) - plugin.deselect_all() - local_obj.select_set(True) - bpy.context.view_layer.objects.active = local_obj - if local_obj.library is None: - bpy.ops.object.mode_set(mode='EDIT') - bpy.ops.object.hook_reset() - bpy.ops.object.mode_set(mode='OBJECT') - parent.objects.unlink(local_obj) - - local_obj.use_fake_user = True - - for mod in local_obj.modifiers: - mod.object = bpy.data.objects.get(f"{mod.object.name}") - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - local_obj.parent = curve_obj - objects.append(local_obj) - - for armature in armatures: - if arm_act.get(armature): - armature.animation_data.action = arm_act[armature] - - while bpy.data.orphans_purge(do_local_ids=False): - pass - - plugin.deselect_all() - - return objects - - def process_asset( - self, context: dict, name: str, namespace: Optional[str] = None, - options: Optional[Dict] = None - ) -> Optional[List]: - """ - Arguments: - name: Use pre-defined name - namespace: Use pre-defined namespace - context: Full parenthood of representation to load - options: Additional settings dictionary - """ - libpath = self.fname - asset = context["asset"]["name"] - subset = context["subset"]["name"] - representation = str(context["representation"]["_id"]) - - asset_name = plugin.asset_name(asset, subset) - unique_number = plugin.get_unique_number(asset, subset) - group_name = plugin.asset_name(asset, subset, unique_number) - namespace = namespace or f"{asset}_{unique_number}" - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) - - asset_group = bpy.data.objects.new(group_name, object_data=None) - asset_group.empty_display_type = 'SINGLE_ARROW' - avalon_container.objects.link(asset_group) - - objects = self._process( - libpath, asset_group, group_name, asset, representation, - None, None) - - for child in asset_group.children: - if child.get(AVALON_PROPERTY): - avalon_container.objects.link(child) - - bpy.context.scene.collection.objects.link(asset_group) - - asset_group[AVALON_PROPERTY] = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, - "name": name, - "namespace": namespace or '', - "loader": str(self.__class__.__name__), - "representation": str(context["representation"]["_id"]), - "libpath": libpath, - "asset_name": asset_name, - "parent": str(context["representation"]["parent"]), - "family": 
context["representation"]["context"]["family"], - "objectName": group_name - } - - self[:] = objects - return objects - - def update(self, container: Dict, representation: Dict): - """Update the loaded asset. - - This will remove all objects of the current collection, load the new - ones and add them to the collection. - If the objects of the collection are used in another collection they - will not be removed, only unlinked. Normally this should not be the - case though. - - Warning: - No nested collections are supported at the moment! - """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = Path(get_representation_path(representation)) - extension = libpath.suffix.lower() - - self.log.info( - "Container: %s\nRepresentation: %s", - pformat(container, indent=2), - pformat(representation, indent=2), - ) - - assert asset_group, ( - f"The asset is not loaded: {container['objectName']}" - ) - assert libpath, ( - "No existing library file found for {container['objectName']}" - ) - assert libpath.is_file(), ( - f"The file doesn't exist: {libpath}" - ) - assert extension in plugin.VALID_EXTENSIONS, ( - f"Unsupported file: {libpath}" - ) - - metadata = asset_group.get(AVALON_PROPERTY) - group_libpath = metadata["libpath"] - - normalized_group_libpath = ( - str(Path(bpy.path.abspath(group_libpath)).resolve()) - ) - normalized_libpath = ( - str(Path(bpy.path.abspath(str(libpath))).resolve()) - ) - self.log.debug( - "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s", - normalized_group_libpath, - normalized_libpath, - ) - if normalized_group_libpath == normalized_libpath: - self.log.info("Library already loaded, not updating...") - return - - actions = {} - anim_instances = {} - - for obj in asset_group.children: - obj_meta = obj.get(AVALON_PROPERTY) - if obj_meta.get('family') == 'rig': - # Get animation instance - collections = list(obj.users_collection) - for c in collections: - avalon = c.get(AVALON_PROPERTY) - if avalon and avalon.get('family') == 'animation': - anim_instances[obj.name] = c.name - break - - # Get armature's action - rig = None - for child in obj.children: - if child.type == 'ARMATURE': - rig = child - break - if not rig: - raise Exception("No armature in the rig asset group.") - if rig.animation_data and rig.animation_data.action: - instance_name = obj_meta.get('instance_name') - actions[instance_name] = rig.animation_data.action - - mat = asset_group.matrix_basis.copy() - - # Remove the children of the asset_group first - for child in list(asset_group.children): - self._remove_asset_and_library(child) - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath: - count += 1 - - self._remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - if library: - bpy.data.libraries.remove(library) - - asset = container.get("asset_name").split("_")[0] - - self._process( - str(libpath), asset_group, object_name, asset, - str(representation.get("_id")), actions, anim_instances - ) - - # Link the new objects to the animation collection - for inst in anim_instances.keys(): - try: - obj = bpy.data.objects[inst] - bpy.data.collections[anim_instances[inst]].objects.link(obj) - except KeyError: - self.log.info(f"Object {inst} does not exist anymore.") - coll = 
bpy.data.collections.get(anim_instances[inst]) - if (coll): - bpy.data.collections.remove(coll) - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - for child in asset_group.children: - if child.get(AVALON_PROPERTY): - avalon_container.objects.link(child) - - asset_group.matrix_basis = mat - - metadata["libpath"] = str(libpath) - metadata["representation"] = str(representation["_id"]) - - def exec_remove(self, container: Dict) -> bool: - """Remove an existing container from a Blender scene. - - Arguments: - container (openpype:container-1.0): Container to remove, - from `host.ls()`. - - Returns: - bool: Whether the container was deleted. - - Warning: - No nested collections are supported at the moment! - """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - - if not asset_group: - return False - - # Remove the children of the asset_group first - for child in list(asset_group.children): - self._remove_asset_and_library(child) - - self._remove_asset_and_library(asset_group) - - return True diff --git a/openpype/hosts/blender/plugins/load/load_layout_json.py b/openpype/hosts/blender/plugins/load/load_layout_json.py index eca098627e..81683b8de8 100644 --- a/openpype/hosts/blender/plugins/load/load_layout_json.py +++ b/openpype/hosts/blender/plugins/load/load_layout_json.py @@ -144,7 +144,7 @@ class JsonLayoutLoader(plugin.AssetLoader): context: Full parenthood of representation to load options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_look.py b/openpype/hosts/blender/plugins/load/load_look.py index 70d1b95f02..c121f55633 100644 --- a/openpype/hosts/blender/plugins/load/load_look.py +++ b/openpype/hosts/blender/plugins/load/load_look.py @@ -92,7 +92,7 @@ class BlendLookLoader(plugin.AssetLoader): options: Additional settings dictionary """ - libpath = self.fname + libpath = self.filepath_from_context(context) asset = context["asset"]["name"] subset = context["subset"]["name"] diff --git a/openpype/hosts/blender/plugins/load/load_model.py b/openpype/hosts/blender/plugins/load/load_model.py deleted file mode 100644 index 0a5d98ffa0..0000000000 --- a/openpype/hosts/blender/plugins/load/load_model.py +++ /dev/null @@ -1,296 +0,0 @@ -"""Load a model asset in Blender.""" - -from pathlib import Path -from pprint import pformat -from typing import Dict, List, Optional - -import bpy - -from openpype.pipeline import ( - get_representation_path, - AVALON_CONTAINER_ID, -) -from openpype.hosts.blender.api import plugin -from openpype.hosts.blender.api.pipeline import ( - AVALON_CONTAINERS, - AVALON_PROPERTY, -) - - -class BlendModelLoader(plugin.AssetLoader): - """Load models from a .blend file. - - Because they come from a .blend file we can simply link the collection that - contains the model. There is no further need to 'containerise' it. 
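The link-based `.blend` loaders deleted in this PR (camera, layout, model, rig) all repeat one cleanup idiom: before removing a library datablock, count how many loaded asset groups still point at it, and only remove it with the last user. A condensed sketch of that pattern, reusing the pipeline constants from the surrounding code:

```python
import bpy

from openpype.hosts.blender.api.pipeline import (
    AVALON_CONTAINERS,
    AVALON_PROPERTY,
)


def remove_library_if_unused(libpath):
    """Drop the library datablock once no container references it."""
    containers = bpy.data.collections.get(AVALON_CONTAINERS)
    count = sum(
        1 for obj in containers.objects
        if obj.get(AVALON_PROPERTY, {}).get("libpath") == libpath
    )
    # Only the last user of the library removes it.
    if count == 1:
        library = bpy.data.libraries.get(bpy.path.basename(libpath))
        if library:
            bpy.data.libraries.remove(library)
```

The new `BlendLoader` sidesteps this bookkeeping entirely: it appends instead of linking and removes the library right after import.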
- """ - - families = ["model"] - representations = ["blend"] - - label = "Link Model" - icon = "code-fork" - color = "orange" - - def _remove(self, asset_group): - objects = list(asset_group.children) - - for obj in objects: - if obj.type == 'MESH': - for material_slot in list(obj.material_slots): - bpy.data.materials.remove(material_slot.material) - bpy.data.meshes.remove(obj.data) - elif obj.type == 'EMPTY': - objects.extend(obj.children) - bpy.data.objects.remove(obj) - - def _process(self, libpath, asset_group, group_name): - with bpy.data.libraries.load( - libpath, link=True, relative=False - ) as (data_from, data_to): - data_to.objects = data_from.objects - - parent = bpy.context.scene.collection - - empties = [obj for obj in data_to.objects if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if empty.get(AVALON_PROPERTY): - container = empty - break - - assert container, "No asset group found" - - # Children must be linked before parents, - # otherwise the hierarchy will break - objects = [] - nodes = list(container.children) - - for obj in nodes: - obj.parent = asset_group - - for obj in nodes: - objects.append(obj) - nodes.extend(list(obj.children)) - - objects.reverse() - - for obj in objects: - parent.objects.link(obj) - - for obj in objects: - local_obj = plugin.prepare_data(obj, group_name) - if local_obj.type != 'EMPTY': - plugin.prepare_data(local_obj.data, group_name) - - for material_slot in local_obj.material_slots: - if material_slot.material: - plugin.prepare_data(material_slot.material, group_name) - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - objects.reverse() - - bpy.data.orphans_purge(do_local_ids=False) - - plugin.deselect_all() - - return objects - - def process_asset( - self, context: dict, name: str, namespace: Optional[str] = None, - options: Optional[Dict] = None - ) -> Optional[List]: - """ - Arguments: - name: Use pre-defined name - namespace: Use pre-defined namespace - context: Full parenthood of representation to load - options: Additional settings dictionary - """ - libpath = self.fname - asset = context["asset"]["name"] - subset = context["subset"]["name"] - - asset_name = plugin.asset_name(asset, subset) - unique_number = plugin.get_unique_number(asset, subset) - group_name = plugin.asset_name(asset, subset, unique_number) - namespace = namespace or f"{asset}_{unique_number}" - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) - - asset_group = bpy.data.objects.new(group_name, object_data=None) - asset_group.empty_display_type = 'SINGLE_ARROW' - avalon_container.objects.link(asset_group) - - plugin.deselect_all() - - if options is not None: - parent = options.get('parent') - transform = options.get('transform') - - if parent and transform: - location = transform.get('translation') - rotation = transform.get('rotation') - scale = transform.get('scale') - - asset_group.location = ( - location.get('x'), - location.get('y'), - location.get('z') - ) - asset_group.rotation_euler = ( - rotation.get('x'), - rotation.get('y'), - rotation.get('z') - ) - asset_group.scale = ( - scale.get('x'), - scale.get('y'), - scale.get('z') - ) - - bpy.context.view_layer.objects.active = parent - asset_group.select_set(True) - - 
bpy.ops.object.parent_set(keep_transform=True) - - plugin.deselect_all() - - objects = self._process(libpath, asset_group, group_name) - - bpy.context.scene.collection.objects.link(asset_group) - - asset_group[AVALON_PROPERTY] = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, - "name": name, - "namespace": namespace or '', - "loader": str(self.__class__.__name__), - "representation": str(context["representation"]["_id"]), - "libpath": libpath, - "asset_name": asset_name, - "parent": str(context["representation"]["parent"]), - "family": context["representation"]["context"]["family"], - "objectName": group_name - } - - self[:] = objects - return objects - - def exec_update(self, container: Dict, representation: Dict): - """Update the loaded asset. - - This will remove all objects of the current collection, load the new - ones and add them to the collection. - If the objects of the collection are used in another collection they - will not be removed, only unlinked. Normally this should not be the - case though. - """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = Path(get_representation_path(representation)) - extension = libpath.suffix.lower() - - self.log.info( - "Container: %s\nRepresentation: %s", - pformat(container, indent=2), - pformat(representation, indent=2), - ) - - assert asset_group, ( - f"The asset is not loaded: {container['objectName']}" - ) - assert libpath, ( - "No existing library file found for {container['objectName']}" - ) - assert libpath.is_file(), ( - f"The file doesn't exist: {libpath}" - ) - assert extension in plugin.VALID_EXTENSIONS, ( - f"Unsupported file: {libpath}" - ) - - metadata = asset_group.get(AVALON_PROPERTY) - group_libpath = metadata["libpath"] - - normalized_group_libpath = ( - str(Path(bpy.path.abspath(group_libpath)).resolve()) - ) - normalized_libpath = ( - str(Path(bpy.path.abspath(str(libpath))).resolve()) - ) - self.log.debug( - "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s", - normalized_group_libpath, - normalized_libpath, - ) - if normalized_group_libpath == normalized_libpath: - self.log.info("Library already loaded, not updating...") - return - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath: - count += 1 - - mat = asset_group.matrix_basis.copy() - - self._remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - if library: - bpy.data.libraries.remove(library) - - self._process(str(libpath), asset_group, object_name) - - asset_group.matrix_basis = mat - - metadata["libpath"] = str(libpath) - metadata["representation"] = str(representation["_id"]) - metadata["parent"] = str(representation["parent"]) - - def exec_remove(self, container: Dict) -> bool: - """Remove an existing container from a Blender scene. - - Arguments: - container (openpype:container-1.0): Container to remove, - from `host.ls()`. - - Returns: - bool: Whether the container was deleted. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = asset_group.get(AVALON_PROPERTY).get('libpath') - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == libpath: - count += 1 - - if not asset_group: - return False - - self._remove(asset_group) - - bpy.data.objects.remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(libpath)) - bpy.data.libraries.remove(library) - - return True diff --git a/openpype/hosts/blender/plugins/load/load_rig.py b/openpype/hosts/blender/plugins/load/load_rig.py deleted file mode 100644 index 1d23a70061..0000000000 --- a/openpype/hosts/blender/plugins/load/load_rig.py +++ /dev/null @@ -1,417 +0,0 @@ -"""Load a rig asset in Blender.""" - -from pathlib import Path -from pprint import pformat -from typing import Dict, List, Optional - -import bpy - -from openpype.pipeline import ( - legacy_create, - get_representation_path, - AVALON_CONTAINER_ID, -) -from openpype.pipeline.create import get_legacy_creator_by_name -from openpype.hosts.blender.api import ( - plugin, - get_selection, -) -from openpype.hosts.blender.api.pipeline import ( - AVALON_CONTAINERS, - AVALON_PROPERTY, -) - - -class BlendRigLoader(plugin.AssetLoader): - """Load rigs from a .blend file.""" - - families = ["rig"] - representations = ["blend"] - - label = "Link Rig" - icon = "code-fork" - color = "orange" - - def _remove(self, asset_group): - objects = list(asset_group.children) - - for obj in objects: - if obj.type == 'MESH': - for material_slot in list(obj.material_slots): - if material_slot.material: - bpy.data.materials.remove(material_slot.material) - bpy.data.meshes.remove(obj.data) - elif obj.type == 'ARMATURE': - objects.extend(obj.children) - bpy.data.armatures.remove(obj.data) - elif obj.type == 'CURVE': - bpy.data.curves.remove(obj.data) - elif obj.type == 'EMPTY': - objects.extend(obj.children) - bpy.data.objects.remove(obj) - - def _process(self, libpath, asset_group, group_name, action): - with bpy.data.libraries.load( - libpath, link=True, relative=False - ) as (data_from, data_to): - data_to.objects = data_from.objects - - parent = bpy.context.scene.collection - - empties = [obj for obj in data_to.objects if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if empty.get(AVALON_PROPERTY): - container = empty - break - - assert container, "No asset group found" - - # Children must be linked before parents, - # otherwise the hierarchy will break - objects = [] - nodes = list(container.children) - - allowed_types = ['ARMATURE', 'MESH'] - - for obj in nodes: - if obj.type in allowed_types: - obj.parent = asset_group - - for obj in nodes: - if obj.type in allowed_types: - objects.append(obj) - nodes.extend(list(obj.children)) - - objects.reverse() - - constraints = [] - - armatures = [obj for obj in objects if obj.type == 'ARMATURE'] - - for armature in armatures: - for bone in armature.pose.bones: - for constraint in bone.constraints: - if hasattr(constraint, 'target'): - constraints.append(constraint) - - for obj in objects: - parent.objects.link(obj) - - for obj in objects: - local_obj = plugin.prepare_data(obj, group_name) - - if local_obj.type == 'MESH': - plugin.prepare_data(local_obj.data, group_name) - - if obj != local_obj: - for constraint in constraints: - if constraint.target == obj: - 
constraint.target = local_obj - - for material_slot in local_obj.material_slots: - if material_slot.material: - plugin.prepare_data(material_slot.material, group_name) - elif local_obj.type == 'ARMATURE': - plugin.prepare_data(local_obj.data, group_name) - - if action is not None: - if local_obj.animation_data is None: - local_obj.animation_data_create() - local_obj.animation_data.action = action - elif (local_obj.animation_data and - local_obj.animation_data.action is not None): - plugin.prepare_data( - local_obj.animation_data.action, group_name) - - # Set link the drivers to the local object - if local_obj.data.animation_data: - for d in local_obj.data.animation_data.drivers: - for v in d.driver.variables: - for t in v.targets: - t.id = local_obj - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - objects.reverse() - - curves = [obj for obj in data_to.objects if obj.type == 'CURVE'] - - for curve in curves: - local_obj = plugin.prepare_data(curve, group_name) - plugin.prepare_data(local_obj.data, group_name) - - local_obj.use_fake_user = True - - for mod in local_obj.modifiers: - mod_target_name = mod.object.name - mod.object = bpy.data.objects.get( - f"{group_name}:{mod_target_name}") - - if not local_obj.get(AVALON_PROPERTY): - local_obj[AVALON_PROPERTY] = dict() - - avalon_info = local_obj[AVALON_PROPERTY] - avalon_info.update({"container_name": group_name}) - - local_obj.parent = asset_group - objects.append(local_obj) - - while bpy.data.orphans_purge(do_local_ids=False): - pass - - plugin.deselect_all() - - return objects - - def process_asset( - self, context: dict, name: str, namespace: Optional[str] = None, - options: Optional[Dict] = None - ) -> Optional[List]: - """ - Arguments: - name: Use pre-defined name - namespace: Use pre-defined namespace - context: Full parenthood of representation to load - options: Additional settings dictionary - """ - libpath = self.fname - asset = context["asset"]["name"] - subset = context["subset"]["name"] - - asset_name = plugin.asset_name(asset, subset) - unique_number = plugin.get_unique_number(asset, subset) - group_name = plugin.asset_name(asset, subset, unique_number) - namespace = namespace or f"{asset}_{unique_number}" - - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) - - asset_group = bpy.data.objects.new(group_name, object_data=None) - asset_group.empty_display_type = 'SINGLE_ARROW' - avalon_container.objects.link(asset_group) - - action = None - - plugin.deselect_all() - - create_animation = False - anim_file = None - - if options is not None: - parent = options.get('parent') - transform = options.get('transform') - action = options.get('action') - create_animation = options.get('create_animation') - anim_file = options.get('animation_file') - - if parent and transform: - location = transform.get('translation') - rotation = transform.get('rotation') - scale = transform.get('scale') - - asset_group.location = ( - location.get('x'), - location.get('y'), - location.get('z') - ) - asset_group.rotation_euler = ( - rotation.get('x'), - rotation.get('y'), - rotation.get('z') - ) - asset_group.scale = ( - scale.get('x'), - scale.get('y'), - scale.get('z') - ) - - bpy.context.view_layer.objects.active = parent - asset_group.select_set(True) 
- - bpy.ops.object.parent_set(keep_transform=True) - - plugin.deselect_all() - - objects = self._process(libpath, asset_group, group_name, action) - - if create_animation: - creator_plugin = get_legacy_creator_by_name("CreateAnimation") - if not creator_plugin: - raise ValueError("Creator plugin \"CreateAnimation\" was " - "not found.") - - asset_group.select_set(True) - - animation_asset = options.get('animation_asset') - - legacy_create( - creator_plugin, - name=namespace + "_animation", - # name=f"{unique_number}_{subset}_animation", - asset=animation_asset, - options={"useSelection": False, "asset_group": asset_group}, - data={"dependencies": str(context["representation"]["_id"])} - ) - - plugin.deselect_all() - - if anim_file: - bpy.ops.import_scene.fbx(filepath=anim_file, anim_offset=0.0) - - imported = get_selection() - - armature = [ - o for o in asset_group.children if o.type == 'ARMATURE'][0] - - imported_group = [ - o for o in imported if o.type == 'EMPTY'][0] - - for obj in imported: - if obj.type == 'ARMATURE': - if not armature.animation_data: - armature.animation_data_create() - armature.animation_data.action = obj.animation_data.action - - self._remove(imported_group) - bpy.data.objects.remove(imported_group) - - bpy.context.scene.collection.objects.link(asset_group) - - asset_group[AVALON_PROPERTY] = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, - "name": name, - "namespace": namespace or '', - "loader": str(self.__class__.__name__), - "representation": str(context["representation"]["_id"]), - "libpath": libpath, - "asset_name": asset_name, - "parent": str(context["representation"]["parent"]), - "family": context["representation"]["context"]["family"], - "objectName": group_name - } - - self[:] = objects - return objects - - def exec_update(self, container: Dict, representation: Dict): - """Update the loaded asset. - - This will remove all children of the asset group, load the new ones - and add them as children of the group. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = Path(get_representation_path(representation)) - extension = libpath.suffix.lower() - - self.log.info( - "Container: %s\nRepresentation: %s", - pformat(container, indent=2), - pformat(representation, indent=2), - ) - - assert asset_group, ( - f"The asset is not loaded: {container['objectName']}" - ) - assert libpath, ( - "No existing library file found for {container['objectName']}" - ) - assert libpath.is_file(), ( - f"The file doesn't exist: {libpath}" - ) - assert extension in plugin.VALID_EXTENSIONS, ( - f"Unsupported file: {libpath}" - ) - - metadata = asset_group.get(AVALON_PROPERTY) - group_libpath = metadata["libpath"] - - normalized_group_libpath = ( - str(Path(bpy.path.abspath(group_libpath)).resolve()) - ) - normalized_libpath = ( - str(Path(bpy.path.abspath(str(libpath))).resolve()) - ) - self.log.debug( - "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s", - normalized_group_libpath, - normalized_libpath, - ) - if normalized_group_libpath == normalized_libpath: - self.log.info("Library already loaded, not updating...") - return - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath: - count += 1 - - # Get the armature of the rig - objects = asset_group.children - armature = [obj for obj in objects if obj.type == 'ARMATURE'][0] - - action = None - if armature.animation_data and armature.animation_data.action: - action = armature.animation_data.action - - mat = asset_group.matrix_basis.copy() - - self._remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(group_libpath)) - bpy.data.libraries.remove(library) - - self._process(str(libpath), asset_group, object_name, action) - - asset_group.matrix_basis = mat - - metadata["libpath"] = str(libpath) - metadata["representation"] = str(representation["_id"]) - metadata["parent"] = str(representation["parent"]) - - def exec_remove(self, container: Dict) -> bool: - """Remove an existing asset group from a Blender scene. - - Arguments: - container (openpype:container-1.0): Container to remove, - from `host.ls()`. - - Returns: - bool: Whether the asset group was deleted. 
- """ - object_name = container["objectName"] - asset_group = bpy.data.objects.get(object_name) - libpath = asset_group.get(AVALON_PROPERTY).get('libpath') - - # Check how many assets use the same library - count = 0 - for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects: - if obj.get(AVALON_PROPERTY).get('libpath') == libpath: - count += 1 - - if not asset_group: - return False - - self._remove(asset_group) - - bpy.data.objects.remove(asset_group) - - # If it is the last object to use that library, remove it - if count == 1: - library = bpy.data.libraries.get(bpy.path.basename(libpath)) - bpy.data.libraries.remove(library) - - return True diff --git a/openpype/hosts/blender/plugins/publish/collect_current_file.py b/openpype/hosts/blender/plugins/publish/collect_current_file.py index c3097a0694..c2d8a96a18 100644 --- a/openpype/hosts/blender/plugins/publish/collect_current_file.py +++ b/openpype/hosts/blender/plugins/publish/collect_current_file.py @@ -2,7 +2,7 @@ import os import bpy import pyblish.api -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_task_name, get_current_asset_name from openpype.hosts.blender.api import workio @@ -37,7 +37,7 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin): folder, file = os.path.split(current_file) filename, ext = os.path.splitext(file) - task = legacy_io.Session["AVALON_TASK"] + task = get_current_task_name() data = {} @@ -47,7 +47,7 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin): data.update({ "subset": subset, - "asset": os.getenv("AVALON_ASSET", None), + "asset": get_current_asset_name(), "label": subset, "publish": True, "family": "workfile", diff --git a/openpype/hosts/blender/plugins/publish/collect_review.py b/openpype/hosts/blender/plugins/publish/collect_review.py index d6abd9d967..6459927015 100644 --- a/openpype/hosts/blender/plugins/publish/collect_review.py +++ b/openpype/hosts/blender/plugins/publish/collect_review.py @@ -1,7 +1,6 @@ import bpy import pyblish.api -from openpype.pipeline import legacy_io class CollectReview(pyblish.api.InstancePlugin): @@ -30,6 +29,8 @@ class CollectReview(pyblish.api.InstancePlugin): camera = cameras[0].name self.log.debug(f"camera: {camera}") + focal_length = cameras[0].data.lens + # get isolate objects list from meshes instance members . 
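The collector hunks in this section consistently replace `legacy_io.Session` lookups with the current-context helpers from `openpype.pipeline`, or read the task straight from the publish context data. A minimal sketch of both patterns (the plugin itself is hypothetical):

```python
import pyblish.api

from openpype.pipeline import get_current_asset_name, get_current_task_name


class CollectExampleContext(pyblish.api.ContextPlugin):
    """Hypothetical collector showing the migrated context access."""

    order = pyblish.api.CollectorOrder
    label = "Collect Example Context"

    def process(self, context):
        # Replaces legacy_io.Session["AVALON_TASK"] / ["AVALON_ASSET"]
        task_name = get_current_task_name()
        asset_name = get_current_asset_name()

        # Publish plugins can also read the task from the context data,
        # as collect_review.py does below.
        task_from_context = context.data.get("task")

        self.log.debug(
            "task=%s asset=%s (context task=%s)",
            task_name, asset_name, task_from_context
        )
```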
isolate_objects = [ obj @@ -39,7 +40,11 @@ class CollectReview(pyblish.api.InstancePlugin): if not instance.data.get("remove"): - task = legacy_io.Session.get("AVALON_TASK") + task = instance.context.data["task"] + + # Store focal length in `burninDataMembers` + burninData = instance.data.setdefault("burninDataMembers", {}) + burninData["focalLength"] = focal_length instance.data.update({ "subset": f"{task}Review", diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py index 1cab9d225b..f4babc94d3 100644 --- a/openpype/hosts/blender/plugins/publish/extract_abc.py +++ b/openpype/hosts/blender/plugins/publish/extract_abc.py @@ -22,8 +22,6 @@ class ExtractABC(publish.Extractor): filepath = os.path.join(stagingdir, filename) context = bpy.context - scene = context.scene - view_layer = context.view_layer # Perform extraction self.log.info("Performing extraction..") @@ -31,24 +29,25 @@ class ExtractABC(publish.Extractor): plugin.deselect_all() selected = [] - asset_group = None + active = None for obj in instance: obj.select_set(True) selected.append(obj) + # Set as active the asset group if obj.get(AVALON_PROPERTY): - asset_group = obj + active = obj context = plugin.create_blender_context( - active=asset_group, selected=selected) + active=active, selected=selected) - # We export the abc - bpy.ops.wm.alembic_export( - context, - filepath=filepath, - selected=True, - flatten=False - ) + with bpy.context.temp_override(**context): + # We export the abc + bpy.ops.wm.alembic_export( + filepath=filepath, + selected=True, + flatten=False + ) plugin.deselect_all() diff --git a/openpype/hosts/blender/plugins/publish/extract_blend.py b/openpype/hosts/blender/plugins/publish/extract_blend.py index 6a001b6f65..d4f26b4f3c 100644 --- a/openpype/hosts/blender/plugins/publish/extract_blend.py +++ b/openpype/hosts/blender/plugins/publish/extract_blend.py @@ -10,7 +10,7 @@ class ExtractBlend(publish.Extractor): label = "Extract Blend" hosts = ["blender"] - families = ["model", "camera", "rig", "action", "layout"] + families = ["model", "camera", "rig", "action", "layout", "blendScene"] optional = True def process(self, instance): diff --git a/openpype/hosts/blender/plugins/publish/extract_camera_abc.py b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py new file mode 100644 index 0000000000..a21a59b151 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py @@ -0,0 +1,73 @@ +import os + +import bpy + +from openpype.pipeline import publish +from openpype.hosts.blender.api import plugin +from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY + + +class ExtractCameraABC(publish.Extractor): + """Extract camera as ABC.""" + + label = "Extract Camera (ABC)" + hosts = ["blender"] + families = ["camera"] + optional = True + + def process(self, instance): + # Define extract output file path + stagingdir = self.staging_dir(instance) + filename = f"{instance.name}.abc" + filepath = os.path.join(stagingdir, filename) + + context = bpy.context + + # Perform extraction + self.log.info("Performing extraction..") + + plugin.deselect_all() + + selected = [] + active = None + + asset_group = None + for obj in instance: + if obj.get(AVALON_PROPERTY): + asset_group = obj + break + assert asset_group, "No asset group found" + + # Need to cast to list because children is a tuple + selected = list(asset_group.children) + active = selected[0] + + for obj in selected: + obj.select_set(True) + + context = 
plugin.create_blender_context( + active=active, selected=selected) + + with bpy.context.temp_override(**context): + # We export the abc + bpy.ops.wm.alembic_export( + filepath=filepath, + selected=True, + flatten=True + ) + + plugin.deselect_all() + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'abc', + 'ext': 'abc', + 'files': filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance '%s' to: %s", + instance.name, representation) diff --git a/openpype/hosts/blender/plugins/publish/extract_camera.py b/openpype/hosts/blender/plugins/publish/extract_camera_fbx.py similarity index 98% rename from openpype/hosts/blender/plugins/publish/extract_camera.py rename to openpype/hosts/blender/plugins/publish/extract_camera_fbx.py index 9fd181825c..315994140e 100644 --- a/openpype/hosts/blender/plugins/publish/extract_camera.py +++ b/openpype/hosts/blender/plugins/publish/extract_camera_fbx.py @@ -9,7 +9,7 @@ from openpype.hosts.blender.api import plugin class ExtractCamera(publish.Extractor): """Extract as the camera as FBX.""" - label = "Extract Camera" + label = "Extract Camera (FBX)" hosts = ["blender"] families = ["camera"] optional = True diff --git a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py index 963ca1398f..27fa4baf28 100644 --- a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py +++ b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py @@ -9,7 +9,7 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin): label = "Increment Workfile Version" optional = True hosts = ["blender"] - families = ["animation", "model", "rig", "action", "layout"] + families = ["animation", "model", "rig", "action", "layout", "blendScene"] def process(self, context): diff --git a/openpype/hosts/celaction/hooks/pre_celaction_setup.py b/openpype/hosts/celaction/hooks/pre_celaction_setup.py index 96e784875c..83aeab7c58 100644 --- a/openpype/hosts/celaction/hooks/pre_celaction_setup.py +++ b/openpype/hosts/celaction/hooks/pre_celaction_setup.py @@ -2,20 +2,18 @@ import os import shutil import winreg import subprocess -from openpype.lib import PreLaunchHook, get_openpype_execute_args -from openpype.hosts.celaction import scripts - -CELACTION_SCRIPTS_DIR = os.path.dirname( - os.path.abspath(scripts.__file__) -) +from openpype.lib import get_openpype_execute_args +from openpype.lib.applications import PreLaunchHook, LaunchTypes +from openpype.hosts.celaction import CELACTION_ROOT_DIR class CelactionPrelaunchHook(PreLaunchHook): """ Bootstrap celacion with pype """ - app_groups = ["celaction"] - platforms = ["windows"] + app_groups = {"celaction"} + platforms = {"windows"} + launch_types = {LaunchTypes.local} def execute(self): asset_doc = self.data["asset_doc"] @@ -37,7 +35,9 @@ class CelactionPrelaunchHook(PreLaunchHook): winreg.KEY_ALL_ACCESS ) - path_to_cli = os.path.join(CELACTION_SCRIPTS_DIR, "publish_cli.py") + path_to_cli = os.path.join( + CELACTION_ROOT_DIR, "scripts", "publish_cli.py" + ) subprocess_args = get_openpype_execute_args("run", path_to_cli) openpype_executable = subprocess_args.pop(0) workfile_settings = self.get_workfile_settings() @@ -122,9 +122,8 @@ class CelactionPrelaunchHook(PreLaunchHook): if not os.path.exists(workfile_path): # TODO add ability to set different template workfile path via # settings - 
openpype_celaction_dir = os.path.dirname(CELACTION_SCRIPTS_DIR) template_path = os.path.join( - openpype_celaction_dir, + CELACTION_ROOT_DIR, "resources", "celaction_template_scene.scn" ) diff --git a/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py b/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py index 35ac7fc264..c815c1edd4 100644 --- a/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py +++ b/openpype/hosts/celaction/plugins/publish/collect_celaction_instances.py @@ -1,6 +1,5 @@ import os import pyblish.api -from openpype.pipeline import legacy_io class CollectCelactionInstances(pyblish.api.ContextPlugin): @@ -10,7 +9,7 @@ class CollectCelactionInstances(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder + 0.1 def process(self, context): - task = legacy_io.Session["AVALON_TASK"] + task = context.data["task"] current_file = context.data["currentFile"] staging_dir = os.path.dirname(current_file) scene_file = os.path.basename(current_file) diff --git a/openpype/hosts/flame/api/menu.py b/openpype/hosts/flame/api/menu.py index 5f9dc57a61..e8bdf32ebd 100644 --- a/openpype/hosts/flame/api/menu.py +++ b/openpype/hosts/flame/api/menu.py @@ -1,7 +1,9 @@ -import os -from qtpy import QtWidgets from copy import deepcopy from pprint import pformat + +from qtpy import QtWidgets + +from openpype.pipeline import get_current_project_name from openpype.tools.utils.host_tools import HostToolsHelper menu_group_name = 'OpenPype' @@ -61,10 +63,10 @@ class _FlameMenuApp(object): self.framework.prefs_global, self.name) self.mbox = QtWidgets.QMessageBox() - + project_name = get_current_project_name() self.menu = { "actions": [{ - 'name': os.getenv("AVALON_PROJECT", "project"), + 'name': project_name or "project", 'isEnabled': False }], "name": self.menu_group_name diff --git a/openpype/hosts/flame/hooks/pre_flame_setup.py b/openpype/hosts/flame/hooks/pre_flame_setup.py index 83110bb6b5..850569cfdd 100644 --- a/openpype/hosts/flame/hooks/pre_flame_setup.py +++ b/openpype/hosts/flame/hooks/pre_flame_setup.py @@ -6,13 +6,10 @@ import socket from pprint import pformat from openpype.lib import ( - PreLaunchHook, get_openpype_username, run_subprocess, ) -from openpype.lib.applications import ( - ApplicationLaunchFailed -) +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts import flame as opflame @@ -22,11 +19,12 @@ class FlamePrelaunch(PreLaunchHook): Will make sure flame_script_dirs are copied to user's folder defined in environment var FLAME_SCRIPT_DIR. 
""" - app_groups = ["flame"] + app_groups = {"flame"} permissions = 0o777 wtc_script_path = os.path.join( opflame.HOST_DIR, "api", "scripts", "wiretap_com.py") + launch_types = {LaunchTypes.local} def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) diff --git a/openpype/hosts/flame/plugins/load/load_clip.py b/openpype/hosts/flame/plugins/load/load_clip.py index dfb2d2b6f0..338833b449 100644 --- a/openpype/hosts/flame/plugins/load/load_clip.py +++ b/openpype/hosts/flame/plugins/load/load_clip.py @@ -82,8 +82,9 @@ class LoadClip(opfapi.ClipLoader): os.makedirs(openclip_dir) # prepare clip data from context ad send it to openClipLoader + path = self.filepath_from_context(context) loading_context = { - "path": self.fname.replace("\\", "/"), + "path": path.replace("\\", "/"), "colorspace": colorspace, "version": "v{:0>3}".format(version_name), "layer_rename_template": self.layer_rename_template, diff --git a/openpype/hosts/flame/plugins/load/load_clip_batch.py b/openpype/hosts/flame/plugins/load/load_clip_batch.py index 5c5a77f0d0..ca43b94ee9 100644 --- a/openpype/hosts/flame/plugins/load/load_clip_batch.py +++ b/openpype/hosts/flame/plugins/load/load_clip_batch.py @@ -81,9 +81,10 @@ class LoadClipBatch(opfapi.ClipLoader): if not os.path.exists(openclip_dir): os.makedirs(openclip_dir) - # prepare clip data from context ad send it to openClipLoader + # prepare clip data from context and send it to openClipLoader + path = self.filepath_from_context(context) loading_context = { - "path": self.fname.replace("\\", "/"), + "path": path.replace("\\", "/"), "colorspace": colorspace, "version": "v{:0>3}".format(version_name), "layer_rename_template": self.layer_rename_template, diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py b/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py index 917041e053..f8cfa9e963 100644 --- a/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py +++ b/openpype/hosts/flame/plugins/publish/collect_timeline_otio.py @@ -2,7 +2,6 @@ import pyblish.api import openpype.hosts.flame.api as opfapi from openpype.hosts.flame.otio import flame_export -from openpype.pipeline import legacy_io from openpype.pipeline.create import get_subset_name @@ -19,7 +18,7 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin): # main asset_doc = context.data["assetEntity"] - task_name = legacy_io.Session["AVALON_TASK"] + task_name = context.data["task"] project = opfapi.get_current_project() sequence = opfapi.get_current_sequence(opfapi.CTX.selection) diff --git a/openpype/hosts/fusion/api/action.py b/openpype/hosts/fusion/api/action.py index 347d552108..66b787c2f1 100644 --- a/openpype/hosts/fusion/api/action.py +++ b/openpype/hosts/fusion/api/action.py @@ -18,8 +18,10 @@ class SelectInvalidAction(pyblish.api.Action): icon = "search" # Icon from Awesome Icon def process(self, context, plugin): - errored_instances = get_errored_instances_from_context(context, - plugin=plugin) + errored_instances = get_errored_instances_from_context( + context, + plugin=plugin, + ) # Get the invalid nodes for the plug-ins self.log.info("Finding invalid nodes..") @@ -51,6 +53,7 @@ class SelectInvalidAction(pyblish.api.Action): names = set() for tool in invalid: flow.Select(tool, True) + comp.SetActiveTool(tool) names.add(tool.Name) self.log.info( "Selecting invalid tools: %s" % ", ".join(sorted(names)) diff --git a/openpype/hosts/fusion/api/lib.py b/openpype/hosts/fusion/api/lib.py index cba8c38c2f..d96557571b 100644 --- a/openpype/hosts/fusion/api/lib.py +++ 
b/openpype/hosts/fusion/api/lib.py @@ -14,7 +14,7 @@ from openpype.client import ( ) from openpype.pipeline import ( switch_container, - legacy_io, + get_current_project_name, ) from openpype.pipeline.context_tools import get_current_project_asset @@ -206,7 +206,7 @@ def switch_item(container, # Collect any of current asset, subset and representation if not provided # so we can use the original name from those. - project_name = legacy_io.active_project() + project_name = get_current_project_name() if any(not x for x in [asset_name, subset_name, representation_name]): repre_id = container["representation"] representation = get_representation_by_id(project_name, repre_id) diff --git a/openpype/hosts/fusion/api/menu.py b/openpype/hosts/fusion/api/menu.py index 92f38a64c2..50250a6656 100644 --- a/openpype/hosts/fusion/api/menu.py +++ b/openpype/hosts/fusion/api/menu.py @@ -12,7 +12,7 @@ from openpype.hosts.fusion.api.lib import ( set_asset_framerange, set_asset_resolution, ) -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_asset_name from openpype.resources import get_openpype_icon_filepath from .pipeline import FusionEventHandler @@ -125,7 +125,7 @@ class OpenPypeMenu(QtWidgets.QWidget): def on_task_changed(self): # Update current context label - label = legacy_io.Session["AVALON_ASSET"] + label = get_current_asset_name() self.asset_label.setText(label) def register_callback(self, name, fn): diff --git a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py index f08dc0bf2c..87322235f5 100644 --- a/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py +++ b/openpype/hosts/fusion/deploy/Scripts/Comp/OpenPype/switch_ui.py @@ -11,7 +11,7 @@ from openpype.client import get_assets from openpype import style from openpype.pipeline import ( install_host, - legacy_io, + get_current_project_name, ) from openpype.hosts.fusion import api from openpype.pipeline.context_tools import get_workdir_from_session @@ -167,7 +167,7 @@ class App(QtWidgets.QWidget): return items def collect_asset_names(self): - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_docs = get_assets(project_name, fields=["name"]) asset_names = { asset_doc["name"] diff --git a/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py b/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py index fd726ccda1..66b0f803aa 100644 --- a/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py +++ b/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py @@ -2,12 +2,16 @@ import os import shutil import platform from pathlib import Path -from openpype.lib import PreLaunchHook, ApplicationLaunchFailed from openpype.hosts.fusion import ( FUSION_HOST_DIR, FUSION_VERSIONS_DICT, get_fusion_version, ) +from openpype.lib.applications import ( + PreLaunchHook, + LaunchTypes, + ApplicationLaunchFailed, +) class FusionCopyPrefsPrelaunch(PreLaunchHook): @@ -21,8 +25,9 @@ class FusionCopyPrefsPrelaunch(PreLaunchHook): Master.prefs is defined in openpype/hosts/fusion/deploy/fusion_shared.prefs """ - app_groups = ["fusion"] + app_groups = {"fusion"} order = 2 + launch_types = {LaunchTypes.local} def get_fusion_profile_name(self, profile_version) -> str: # Returns 'Default', unless FUSION16_PROFILE is set diff --git a/openpype/hosts/fusion/hooks/pre_fusion_setup.py b/openpype/hosts/fusion/hooks/pre_fusion_setup.py index f27cd1674b..576628e876 100644 --- a/openpype/hosts/fusion/hooks/pre_fusion_setup.py +++ 
b/openpype/hosts/fusion/hooks/pre_fusion_setup.py @@ -1,5 +1,9 @@ import os -from openpype.lib import PreLaunchHook, ApplicationLaunchFailed +from openpype.lib.applications import ( + PreLaunchHook, + LaunchTypes, + ApplicationLaunchFailed, +) from openpype.hosts.fusion import ( FUSION_HOST_DIR, FUSION_VERSIONS_DICT, @@ -17,8 +21,9 @@ class FusionPrelaunch(PreLaunchHook): Fusion 18 : Python 3.6 - 3.10 """ - app_groups = ["fusion"] + app_groups = {"fusion"} order = 1 + launch_types = {LaunchTypes.local} def execute(self): # making sure python 3 is installed at provided path diff --git a/openpype/hosts/fusion/plugins/create/create_workfile.py b/openpype/hosts/fusion/plugins/create/create_workfile.py index 40721ea88a..8acaaa172f 100644 --- a/openpype/hosts/fusion/plugins/create/create_workfile.py +++ b/openpype/hosts/fusion/plugins/create/create_workfile.py @@ -5,7 +5,6 @@ from openpype.client import get_asset_by_name from openpype.pipeline import ( AutoCreator, CreatedInstance, - legacy_io, ) @@ -64,10 +63,10 @@ class FusionWorkfileCreator(AutoCreator): existing_instance = instance break - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] - host_name = legacy_io.Session["AVALON_APP"] + project_name = self.create_context.get_current_project_name() + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.create_context.host_name if existing_instance is None: asset_doc = get_asset_by_name(project_name, asset_name) diff --git a/openpype/hosts/fusion/plugins/load/load_alembic.py b/openpype/hosts/fusion/plugins/load/load_alembic.py index 11bf59af12..9b6d1e12b4 100644 --- a/openpype/hosts/fusion/plugins/load/load_alembic.py +++ b/openpype/hosts/fusion/plugins/load/load_alembic.py @@ -32,7 +32,7 @@ class FusionLoadAlembicMesh(load.LoaderPlugin): comp = get_current_comp() with comp_lock_and_undo_chunk(comp, "Create tool"): - path = self.fname + path = self.filepath_from_context(context) args = (-32768, -32768) tool = comp.AddTool(self.tool_type, *args) diff --git a/openpype/hosts/fusion/plugins/load/load_fbx.py b/openpype/hosts/fusion/plugins/load/load_fbx.py index c73ad78394..d15d2c33d7 100644 --- a/openpype/hosts/fusion/plugins/load/load_fbx.py +++ b/openpype/hosts/fusion/plugins/load/load_fbx.py @@ -45,7 +45,7 @@ class FusionLoadFBXMesh(load.LoaderPlugin): # Create the Loader with the filename path set comp = get_current_comp() with comp_lock_and_undo_chunk(comp, "Create tool"): - path = self.fname + path = self.filepath_from_context(context) args = (-32768, -32768) tool = comp.AddTool(self.tool_type, *args) diff --git a/openpype/hosts/fusion/plugins/load/load_sequence.py b/openpype/hosts/fusion/plugins/load/load_sequence.py index 552e282587..20be5faaba 100644 --- a/openpype/hosts/fusion/plugins/load/load_sequence.py +++ b/openpype/hosts/fusion/plugins/load/load_sequence.py @@ -1,10 +1,7 @@ import contextlib import openpype.pipeline.load as load -from openpype.pipeline.load import ( - get_representation_context, - get_representation_path_from_context, -) +from openpype.pipeline.load import get_representation_context from openpype.hosts.fusion.api import ( imprint_container, get_current_comp, @@ -157,7 +154,7 @@ class FusionLoadSequence(load.LoaderPlugin): namespace = context["asset"]["name"] # Use the first file for now - path = get_representation_path_from_context(context) + path = self.filepath_from_context(context) # 
Create the Loader with the filename path set comp = get_current_comp() @@ -228,7 +225,7 @@ class FusionLoadSequence(load.LoaderPlugin): comp = tool.Comp() context = get_representation_context(representation) - path = get_representation_path_from_context(context) + path = self.filepath_from_context(context) # Get start frame from version data start = self._get_start(context["version"], tool) diff --git a/openpype/hosts/fusion/plugins/load/load_workfile.py b/openpype/hosts/fusion/plugins/load/load_workfile.py index b49d104a15..14e36ca8fd 100644 --- a/openpype/hosts/fusion/plugins/load/load_workfile.py +++ b/openpype/hosts/fusion/plugins/load/load_workfile.py @@ -27,6 +27,7 @@ class FusionLoadWorkfile(load.LoaderPlugin): # Get needed elements bmd = get_bmd_library() comp = get_current_comp() + path = self.filepath_from_context(context) # Paste the content of the file into the current comp - comp.Paste(bmd.readfile(self.fname)) + comp.Paste(bmd.readfile(path)) diff --git a/openpype/hosts/harmony/api/README.md b/openpype/hosts/harmony/api/README.md index 12f21f551a..be3920fe29 100644 --- a/openpype/hosts/harmony/api/README.md +++ b/openpype/hosts/harmony/api/README.md @@ -610,7 +610,7 @@ class ImageSequenceLoader(load.LoaderPlugin): def update(self, container, representation): node = container.pop("node") - project_name = legacy_io.active_project() + project_name = get_current_project_name() version = get_version_by_id(project_name, representation["parent"]) files = [] for f in version["data"]["files"]: diff --git a/openpype/hosts/harmony/plugins/load/load_background.py b/openpype/hosts/harmony/plugins/load/load_background.py index c28a87791e..853d347c2e 100644 --- a/openpype/hosts/harmony/plugins/load/load_background.py +++ b/openpype/hosts/harmony/plugins/load/load_background.py @@ -238,7 +238,8 @@ class BackgroundLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): - with open(self.fname) as json_file: + path = self.filepath_from_context(context) + with open(path) as json_file: data = json.load(json_file) layers = list() @@ -251,7 +252,7 @@ class BackgroundLoader(load.LoaderPlugin): if layer.get("filename"): layers.append(layer["filename"]) - bg_folder = os.path.dirname(self.fname) + bg_folder = os.path.dirname(path) subset_name = context["subset"]["name"] # read_node_name += "_{}".format(uuid.uuid4()) diff --git a/openpype/hosts/harmony/plugins/load/load_imagesequence.py b/openpype/hosts/harmony/plugins/load/load_imagesequence.py index b95d25f507..754f82e5d5 100644 --- a/openpype/hosts/harmony/plugins/load/load_imagesequence.py +++ b/openpype/hosts/harmony/plugins/load/load_imagesequence.py @@ -34,7 +34,7 @@ class ImageSequenceLoader(load.LoaderPlugin): data (dict, optional): Additional data passed into loader. 
""" - fname = Path(self.fname) + fname = Path(self.filepath_from_context(context)) self_name = self.__class__.__name__ collections, remainder = clique.assemble( os.listdir(fname.parent.as_posix()) diff --git a/openpype/hosts/harmony/plugins/publish/collect_farm_render.py b/openpype/hosts/harmony/plugins/publish/collect_farm_render.py index f6b26eb3e8..5e9b9094a7 100644 --- a/openpype/hosts/harmony/plugins/publish/collect_farm_render.py +++ b/openpype/hosts/harmony/plugins/publish/collect_farm_render.py @@ -5,7 +5,6 @@ from pathlib import Path import attr from openpype.lib import get_formatted_current_time -from openpype.pipeline import legacy_io from openpype.pipeline import publish from openpype.pipeline.publish import RenderInstance import openpype.hosts.harmony.api as harmony @@ -99,6 +98,8 @@ class CollectFarmRender(publish.AbstractCollectRender): self_name = self.__class__.__name__ + asset_name = context.data["asset"] + for node in context.data["allNodes"]: data = harmony.read(node) @@ -141,7 +142,7 @@ class CollectFarmRender(publish.AbstractCollectRender): source=context.data["currentFile"], label=node.split("/")[1], subset=subset_name, - asset=legacy_io.Session["AVALON_ASSET"], + asset=asset_name, task=task_name, attachTo=False, setMembers=[node], diff --git a/openpype/hosts/harmony/plugins/publish/collect_palettes.py b/openpype/hosts/harmony/plugins/publish/collect_palettes.py index bbd60d1c55..e19057e302 100644 --- a/openpype/hosts/harmony/plugins/publish/collect_palettes.py +++ b/openpype/hosts/harmony/plugins/publish/collect_palettes.py @@ -1,6 +1,5 @@ # -*- coding: utf-8 -*- """Collect palettes from Harmony.""" -import os import json import re @@ -32,6 +31,7 @@ class CollectPalettes(pyblish.api.ContextPlugin): if (not any([re.search(pattern, task_name) for pattern in self.allowed_tasks])): return + asset_name = context.data["asset"] for name, id in palettes.items(): instance = context.create_instance(name) @@ -39,7 +39,7 @@ class CollectPalettes(pyblish.api.ContextPlugin): "id": id, "family": "harmony.palette", 'families': [], - "asset": os.environ["AVALON_ASSET"], + "asset": asset_name, "subset": "{}{}".format("palette", name) }) self.log.info( diff --git a/openpype/hosts/harmony/plugins/publish/collect_workfile.py b/openpype/hosts/harmony/plugins/publish/collect_workfile.py index 3624147435..4492ab37a5 100644 --- a/openpype/hosts/harmony/plugins/publish/collect_workfile.py +++ b/openpype/hosts/harmony/plugins/publish/collect_workfile.py @@ -36,5 +36,5 @@ class CollectWorkfile(pyblish.api.ContextPlugin): "family": family, "families": [family], "representations": [], - "asset": os.environ["AVALON_ASSET"] + "asset": context.data["asset"] }) diff --git a/openpype/hosts/harmony/plugins/publish/extract_render.py b/openpype/hosts/harmony/plugins/publish/extract_render.py index 38b09902c1..5825d95a4a 100644 --- a/openpype/hosts/harmony/plugins/publish/extract_render.py +++ b/openpype/hosts/harmony/plugins/publish/extract_render.py @@ -94,15 +94,14 @@ class ExtractRender(pyblish.api.InstancePlugin): # Generate thumbnail. 
thumbnail_path = os.path.join(path, "thumbnail.png") - ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg") - args = [ - ffmpeg_path, + args = openpype.lib.get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", os.path.join(path, list(collections[0])[0]), "-vf", "scale=300:-1", "-vframes", "1", thumbnail_path - ] + ) process = subprocess.Popen( args, stdout=subprocess.PIPE, diff --git a/openpype/hosts/harmony/plugins/publish/extract_template.py b/openpype/hosts/harmony/plugins/publish/extract_template.py index 458bf25a3c..e75459fe1e 100644 --- a/openpype/hosts/harmony/plugins/publish/extract_template.py +++ b/openpype/hosts/harmony/plugins/publish/extract_template.py @@ -75,7 +75,7 @@ class ExtractTemplate(publish.Extractor): instance.data["representations"] = [representation] instance.data["version_name"] = "{}_{}".format( - instance.data["subset"], os.environ["AVALON_TASK"]) + instance.data["subset"], instance.context.data["task"]) def get_backdrops(self, node: str) -> list: """Get backdrops for the node. diff --git a/openpype/hosts/harmony/plugins/publish/validate_instances.py b/openpype/hosts/harmony/plugins/publish/validate_instances.py index ac367082ef..7183de6048 100644 --- a/openpype/hosts/harmony/plugins/publish/validate_instances.py +++ b/openpype/hosts/harmony/plugins/publish/validate_instances.py @@ -1,8 +1,7 @@ -import os - import pyblish.api import openpype.hosts.harmony.api as harmony +from openpype.pipeline import get_current_asset_name from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, @@ -30,7 +29,7 @@ class ValidateInstanceRepair(pyblish.api.Action): for instance in instances: data = harmony.read(instance.data["setMembers"][0]) - data["asset"] = os.environ["AVALON_ASSET"] + data["asset"] = get_current_asset_name() harmony.imprint(instance.data["setMembers"][0], data) @@ -44,7 +43,7 @@ class ValidateInstance(pyblish.api.InstancePlugin): def process(self, instance): instance_asset = instance.data["asset"] - current_asset = os.environ["AVALON_ASSET"] + current_asset = get_current_asset_name() msg = ( "Instance asset is not the same as current asset:" f"\nInstance: {instance_asset}\nCurrent: {current_asset}" diff --git a/openpype/hosts/harmony/plugins/publish/validate_scene_settings.py b/openpype/hosts/harmony/plugins/publish/validate_scene_settings.py index 6e4c6955e4..866f12076a 100644 --- a/openpype/hosts/harmony/plugins/publish/validate_scene_settings.py +++ b/openpype/hosts/harmony/plugins/publish/validate_scene_settings.py @@ -67,7 +67,9 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin): expected_settings["frameEndHandle"] = expected_settings["frameEnd"] +\ expected_settings["handleEnd"] - if (any(re.search(pattern, os.getenv('AVALON_TASK')) + task_name = instance.context.data["task"] + + if (any(re.search(pattern, task_name) for pattern in self.skip_resolution_check)): self.log.info("Skipping resolution check because of " "task name and pattern {}".format( diff --git a/openpype/hosts/hiero/api/lib.py b/openpype/hosts/hiero/api/lib.py index 09d73f5cc2..bf719160d1 100644 --- a/openpype/hosts/hiero/api/lib.py +++ b/openpype/hosts/hiero/api/lib.py @@ -22,9 +22,7 @@ except ImportError: from openpype.client import get_project from openpype.settings import get_project_settings -from openpype.pipeline import ( - get_current_project_name, legacy_io, Anatomy -) +from openpype.pipeline import Anatomy, get_current_project_name from openpype.pipeline.load import filter_containers from openpype.lib import Logger from . 
import tags @@ -626,7 +624,7 @@ def get_publish_attribute(tag): def sync_avalon_data_to_workfile(): # import session to get project dir - project_name = legacy_io.Session["AVALON_PROJECT"] + project_name = get_current_project_name() anatomy = Anatomy(project_name) work_template = anatomy.templates["work"]["path"] @@ -821,7 +819,7 @@ class PublishAction(QtWidgets.QAction): # # create root node and save all metadata # root_node = hiero.core.nuke.RootNode() # -# anatomy = Anatomy(os.environ["AVALON_PROJECT"]) +# anatomy = Anatomy(get_current_project_name()) # work_template = anatomy.templates["work"]["path"] # root_path = anatomy.root_value_for_template(work_template) # @@ -1041,7 +1039,7 @@ def _set_hrox_project_knobs(doc, **knobs): def apply_colorspace_project(): - project_name = os.getenv("AVALON_PROJECT") + project_name = get_current_project_name() # get path the the active projects project = get_current_project(remove_untitled=True) current_file = project.path() @@ -1110,7 +1108,7 @@ def apply_colorspace_project(): def apply_colorspace_clips(): - project_name = os.getenv("AVALON_PROJECT") + project_name = get_current_project_name() project = get_current_project(remove_untitled=True) clips = project.clips() @@ -1264,7 +1262,7 @@ def check_inventory_versions(track_items=None): if not containers: return - project_name = legacy_io.active_project() + project_name = get_current_project_name() filter_result = filter_containers(containers, project_name) for container in filter_result.latest: set_track_color(container["_item"], clip_color_last) diff --git a/openpype/hosts/hiero/api/menu.py b/openpype/hosts/hiero/api/menu.py index 6baeb38cc0..9967e9c875 100644 --- a/openpype/hosts/hiero/api/menu.py +++ b/openpype/hosts/hiero/api/menu.py @@ -4,12 +4,18 @@ import sys import hiero.core from hiero.ui import findMenuAction +from qtpy import QtGui + from openpype.lib import Logger -from openpype.pipeline import legacy_io from openpype.tools.utils import host_tools +from openpype.settings import get_project_settings +from openpype.pipeline import ( + get_current_project_name, + get_current_asset_name, + get_current_task_name +) from . import tags -from openpype.settings import get_project_settings log = Logger.get_logger(__name__) @@ -17,6 +23,13 @@ self = sys.modules[__name__] self._change_context_menu = None +def get_context_label(): + return "{}, {}".format( + get_current_asset_name(), + get_current_task_name() + ) + + def update_menu_task_label(): """Update the task label in Avalon menu to current session""" @@ -27,10 +40,7 @@ def update_menu_task_label(): log.warning("Can't find menuItem: {}".format(object_name)) return - label = "{}, {}".format( - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"] - ) + label = get_context_label() menu = found_menu.menu() self._change_context_menu = label @@ -43,7 +53,6 @@ def menu_install(): """ - from qtpy import QtGui from . 
import ( publish, launch_workfiles_app, reload_config, apply_colorspace_project, apply_colorspace_clips @@ -56,10 +65,7 @@ def menu_install(): menu_name = os.environ['AVALON_LABEL'] - context_label = "{0}, {1}".format( - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"] - ) + context_label = get_context_label() self._change_context_menu = context_label @@ -154,7 +160,7 @@ def add_scripts_menu(): return # load configuration of custom menu - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_settings = get_project_settings(get_current_project_name()) config = project_settings["hiero"]["scriptsmenu"]["definition"] _menu = project_settings["hiero"]["scriptsmenu"]["name"] diff --git a/openpype/hosts/hiero/api/plugin.py b/openpype/hosts/hiero/api/plugin.py index a3f8a6c524..65a4009756 100644 --- a/openpype/hosts/hiero/api/plugin.py +++ b/openpype/hosts/hiero/api/plugin.py @@ -12,6 +12,7 @@ from openpype.settings import get_current_project_settings from openpype.lib import Logger from openpype.pipeline import LoaderPlugin, LegacyCreator from openpype.pipeline.context_tools import get_current_project_asset +from openpype.pipeline.load import get_representation_path_from_context from . import lib log = Logger.get_logger(__name__) @@ -393,7 +394,7 @@ class ClipLoader: active_bin = None data = dict() - def __init__(self, cls, context, **options): + def __init__(self, cls, context, path, **options): """ Initialize object Arguments: @@ -406,6 +407,7 @@ class ClipLoader: self.__dict__.update(cls.__dict__) self.context = context self.active_project = lib.get_current_project() + self.fname = path # try to get value from options or evaluate key value for `handles` self.with_handles = options.get("handles") or bool( @@ -467,7 +469,7 @@ class ClipLoader: self.data["track_name"] = "_".join([subset, representation]) self.data["versionData"] = self.context["version"]["data"] # gets file path - file = self.fname + file = get_representation_path_from_context(self.context) if not file: repr_id = repr["_id"] log.warning( diff --git a/openpype/hosts/hiero/api/tags.py b/openpype/hosts/hiero/api/tags.py index cb7bc14edb..02d8205414 100644 --- a/openpype/hosts/hiero/api/tags.py +++ b/openpype/hosts/hiero/api/tags.py @@ -5,7 +5,7 @@ import hiero from openpype.client import get_project, get_assets from openpype.lib import Logger -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name log = Logger.get_logger(__name__) @@ -142,7 +142,7 @@ def add_tags_to_workfile(): nks_pres_tags = tag_data() # Get project task types. 
- project_name = legacy_io.active_project() + project_name = get_current_project_name() project_doc = get_project(project_name) tasks = project_doc["config"]["tasks"] nks_pres_tags["[Tasks]"] = {} diff --git a/openpype/hosts/hiero/plugins/load/load_clip.py b/openpype/hosts/hiero/plugins/load/load_clip.py index c9bebfa8b2..05bd12d185 100644 --- a/openpype/hosts/hiero/plugins/load/load_clip.py +++ b/openpype/hosts/hiero/plugins/load/load_clip.py @@ -3,8 +3,8 @@ from openpype.client import ( get_last_version_by_subset_id ) from openpype.pipeline import ( - legacy_io, get_representation_path, + get_current_project_name, ) from openpype.lib.transcoding import ( VIDEO_EXTENSIONS, @@ -87,7 +87,8 @@ class LoadClip(phiero.SequenceLoader): }) # load clip to timeline and get main variables - track_item = phiero.ClipLoader(self, context, **options).load() + path = self.filepath_from_context(context) + track_item = phiero.ClipLoader(self, context, path, **options).load() namespace = namespace or track_item.name() version = context['version'] version_data = version.get("data", {}) @@ -147,7 +148,7 @@ class LoadClip(phiero.SequenceLoader): track_item = phiero.get_track_items( track_item_name=namespace).pop() - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) version_data = version_doc.get("data", {}) @@ -210,7 +211,7 @@ class LoadClip(phiero.SequenceLoader): @classmethod def set_item_color(cls, track_item, version_doc): - project_name = legacy_io.active_project() + project_name = get_current_project_name() last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] ) diff --git a/openpype/hosts/hiero/plugins/load/load_effects.py b/openpype/hosts/hiero/plugins/load/load_effects.py index b61cca9731..31147d013f 100644 --- a/openpype/hosts/hiero/plugins/load/load_effects.py +++ b/openpype/hosts/hiero/plugins/load/load_effects.py @@ -9,8 +9,8 @@ from openpype.client import ( from openpype.pipeline import ( AVALON_CONTAINER_ID, load, - legacy_io, - get_representation_path + get_representation_path, + get_current_project_name ) from openpype.hosts.hiero import api as phiero from openpype.lib import Logger @@ -59,7 +59,8 @@ class LoadEffects(load.LoaderPlugin): } # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context) + file = file.replace("\\", "/") if self._shared_loading( file, @@ -167,7 +168,7 @@ class LoadEffects(load.LoaderPlugin): namespace = container['namespace'] # get timeline in out data - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) version_data = version_doc["data"] clip_in = version_data["clipIn"] diff --git a/openpype/hosts/hiero/plugins/publish/extract_frames.py b/openpype/hosts/hiero/plugins/publish/extract_frames.py index f865d2fb39..803c338766 100644 --- a/openpype/hosts/hiero/plugins/publish/extract_frames.py +++ b/openpype/hosts/hiero/plugins/publish/extract_frames.py @@ -2,7 +2,7 @@ import os import pyblish.api from openpype.lib import ( - get_oiio_tools_path, + get_oiio_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -18,7 +18,7 @@ class ExtractFrames(publish.Extractor): movie_extensions = ["mov", "mp4"] def process(self, instance): - oiio_tool_path = get_oiio_tools_path() + oiio_tool_args = get_oiio_tool_args("oiiotool") staging_dir = self.staging_dir(instance) 
output_template = os.path.join(staging_dir, instance.data["name"]) sequence = instance.context.data["activeTimeline"] @@ -36,7 +36,7 @@ class ExtractFrames(publish.Extractor): output_path = output_template output_path += ".{:04d}.{}".format(int(frame), output_ext) - args = [oiio_tool_path] + args = list(oiio_tool_args) ext = os.path.splitext(input_path)[1][1:] if ext in self.movie_extensions: diff --git a/openpype/hosts/hiero/plugins/publish/precollect_workfile.py b/openpype/hosts/hiero/plugins/publish/precollect_workfile.py index 1f477c1639..5a66581531 100644 --- a/openpype/hosts/hiero/plugins/publish/precollect_workfile.py +++ b/openpype/hosts/hiero/plugins/publish/precollect_workfile.py @@ -7,7 +7,6 @@ from qtpy.QtGui import QPixmap import hiero.ui -from openpype.pipeline import legacy_io from openpype.hosts.hiero.api.otio import hiero_export @@ -19,7 +18,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin): def process(self, context): - asset = legacy_io.Session["AVALON_ASSET"] + asset = context.data["asset"] subset = "workfile" active_timeline = hiero.ui.activeSequence() project = active_timeline.project() diff --git a/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py index 5f96533052..767f7c30f7 100644 --- a/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py +++ b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py @@ -1,6 +1,5 @@ from pyblish import api from openpype.client import get_assets -from openpype.pipeline import legacy_io class CollectAssetBuilds(api.ContextPlugin): @@ -18,7 +17,7 @@ class CollectAssetBuilds(api.ContextPlugin): hosts = ["hiero"] def process(self, context): - project_name = legacy_io.active_project() + project_name = context.data["projectName"] asset_builds = {} for asset in get_assets(project_name): if asset["data"]["entityType"] == "AssetBuild": diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py index a32e9d8d61..b03f8c8fc1 100644 --- a/openpype/hosts/houdini/api/lib.py +++ b/openpype/hosts/houdini/api/lib.py @@ -10,7 +10,7 @@ import json import six from openpype.client import get_asset_by_name -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name, get_current_asset_name from openpype.pipeline.context_tools import get_current_project_asset import hou @@ -78,8 +78,8 @@ def generate_ids(nodes, asset_id=None): """ if asset_id is None: - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] + project_name = get_current_project_name() + asset_name = get_current_asset_name() # Get the asset ID from the database for the asset of current context asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"]) @@ -474,8 +474,8 @@ def maintained_selection(): def reset_framerange(): """Set frame range to current asset""" - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] + project_name = get_current_project_name() + asset_name = get_current_asset_name() # Get the asset ID from the database for the asset of current context asset_doc = get_asset_by_name(project_name, asset_name) asset_data = asset_doc["data"] diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py index 1e7eaa7e22..70c837205e 100644 --- a/openpype/hosts/houdini/api/plugin.py +++ b/openpype/hosts/houdini/api/plugin.py @@ -167,9 +167,12 @@ class 
HoudiniCreatorBase(object): class HoudiniCreator(NewCreator, HoudiniCreatorBase): """Base class for most of the Houdini creator plugins.""" selected_nodes = [] + settings_name = None def create(self, subset_name, instance_data, pre_create_data): try: + self.selected_nodes = [] + if pre_create_data.get("use_selection"): self.selected_nodes = hou.selectedNodes() @@ -292,3 +295,21 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): """ return [hou.ropNodeTypeCategory()] + + def apply_settings(self, project_settings, system_settings): + """Method called on initialization of plugin to apply settings.""" + + settings_name = self.settings_name + if settings_name is None: + settings_name = self.__class__.__name__ + + settings = project_settings["houdini"]["create"] + settings = settings.get(settings_name) + if settings is None: + self.log.debug( + "No settings found for {}".format(self.__class__.__name__) + ) + return + + for key, value in settings.items(): + setattr(self, key, value) diff --git a/openpype/hosts/houdini/api/shelves.py b/openpype/hosts/houdini/api/shelves.py index 6e0f367f62..21e44e494a 100644 --- a/openpype/hosts/houdini/api/shelves.py +++ b/openpype/hosts/houdini/api/shelves.py @@ -4,6 +4,7 @@ import logging import platform from openpype.settings import get_project_settings +from openpype.pipeline import get_current_project_name import hou @@ -17,7 +18,8 @@ def generate_shelves(): current_os = platform.system().lower() # load configuration of houdini shelves - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_name = get_current_project_name() + project_settings = get_project_settings(project_name) shelves_set_config = project_settings["houdini"]["shelves"] if not shelves_set_config: diff --git a/openpype/hosts/houdini/hooks/set_paths.py b/openpype/hosts/houdini/hooks/set_paths.py index 04a33b1643..b23659e23b 100644 --- a/openpype/hosts/houdini/hooks/set_paths.py +++ b/openpype/hosts/houdini/hooks/set_paths.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class SetPath(PreLaunchHook): @@ -6,7 +6,8 @@ class SetPath(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. 
""" - app_groups = ["houdini"] + app_groups = {"houdini"} + launch_types = {LaunchTypes.local} def execute(self): workdir = self.launch_context.env.get("AVALON_WORKDIR", "") diff --git a/openpype/hosts/houdini/plugins/create/convert_legacy.py b/openpype/hosts/houdini/plugins/create/convert_legacy.py index e549c9dc26..86103e3369 100644 --- a/openpype/hosts/houdini/plugins/create/convert_legacy.py +++ b/openpype/hosts/houdini/plugins/create/convert_legacy.py @@ -69,6 +69,8 @@ class HoudiniLegacyConvertor(SubsetConvertorPlugin): "creator_identifier": self.family_to_id[family], "instance_node": subset.path() } + if family == "pointcache": + data["families"] = ["abc"] self.log.info("Converting {} to {}".format( subset.path(), self.family_to_id[family])) imprint(subset, data) diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_ass.py b/openpype/hosts/houdini/plugins/create/create_arnold_ass.py index 8b310753d0..12d08f7d83 100644 --- a/openpype/hosts/houdini/plugins/create/create_arnold_ass.py +++ b/openpype/hosts/houdini/plugins/create/create_arnold_ass.py @@ -10,9 +10,10 @@ class CreateArnoldAss(plugin.HoudiniCreator): label = "Arnold ASS" family = "ass" icon = "magic" - defaults = ["Main"] # Default extension: `.ass` or `.ass.gz` + # however calling HoudiniCreator.create() + # will override it by the value in the project settings ext = ".ass" def create(self, subset_name, instance_data, pre_create_data): diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py index bddf26dbd5..b58c377a20 100644 --- a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py @@ -1,5 +1,5 @@ from openpype.hosts.houdini.api import plugin -from openpype.lib import EnumDef +from openpype.lib import EnumDef, BoolDef class CreateArnoldRop(plugin.HoudiniCreator): @@ -9,7 +9,6 @@ class CreateArnoldRop(plugin.HoudiniCreator): label = "Arnold ROP" family = "arnold_rop" icon = "magic" - defaults = ["master"] # Default extension ext = "exr" @@ -24,7 +23,7 @@ class CreateArnoldRop(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 1 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateArnoldRop, self).create( subset_name, @@ -64,6 +63,9 @@ class CreateArnoldRop(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default=self.ext, diff --git a/openpype/hosts/houdini/plugins/create/create_bgeo.py b/openpype/hosts/houdini/plugins/create/create_bgeo.py new file mode 100644 index 0000000000..a3f31e7e94 --- /dev/null +++ b/openpype/hosts/houdini/plugins/create/create_bgeo.py @@ -0,0 +1,92 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating pointcache bgeo files.""" +from openpype.hosts.houdini.api import plugin +from openpype.pipeline import CreatedInstance, CreatorError +from openpype.lib import EnumDef + + +class CreateBGEO(plugin.HoudiniCreator): + """BGEO pointcache creator.""" + identifier = "io.openpype.creators.houdini.bgeo" + label = "PointCache (Bgeo)" + family = "pointcache" + icon = "gears" + + def create(self, subset_name, instance_data, pre_create_data): + import hou + + instance_data.pop("active", None) + + instance_data.update({"node_type": "geometry"}) + + instance = super(CreateBGEO, self).create( + subset_name, + instance_data, + 
pre_create_data) # type: CreatedInstance + + instance_node = hou.node(instance.get("instance_node")) + + file_path = "{}{}".format( + hou.text.expandString("$HIP/pyblish/"), + "{}.$F4.{}".format( + subset_name, + pre_create_data.get("bgeo_type") or "bgeo.sc") + ) + parms = { + "sopoutput": file_path + } + + instance_node.parm("trange").set(1) + if self.selected_nodes: + # if selection is on SOP level, use it + if isinstance(self.selected_nodes[0], hou.SopNode): + parms["soppath"] = self.selected_nodes[0].path() + else: + # try to find output node with the lowest index + outputs = [ + child for child in self.selected_nodes[0].children() + if child.type().name() == "output" + ] + if not outputs: + instance_node.setParms(parms) + raise CreatorError(( + "Missing output node in SOP level for the selection. " + "Please select correct SOP path in created instance." + )) + outputs.sort(key=lambda output: output.evalParm("outputidx")) + parms["soppath"] = outputs[0].path() + + instance_node.setParms(parms) + + def get_pre_create_attr_defs(self): + attrs = super().get_pre_create_attr_defs() + bgeo_enum = [ + { + "value": "bgeo", + "label": "uncompressed bgeo (.bgeo)" + }, + { + "value": "bgeosc", + "label": "BLOSC compressed bgeo (.bgeosc)" + }, + { + "value": "bgeo.sc", + "label": "BLOSC compressed bgeo (.bgeo.sc)" + }, + { + "value": "bgeo.gz", + "label": "GZ compressed bgeo (.bgeo.gz)" + }, + { + "value": "bgeo.lzma", + "label": "LZMA compressed bgeo (.bgeo.lzma)" + }, + { + "value": "bgeo.bz2", + "label": "BZip2 compressed bgeo (.bgeo.bz2)" + } + ] + + return attrs + [ + EnumDef("bgeo_type", bgeo_enum, label="BGEO Options"), + ] diff --git a/openpype/hosts/houdini/plugins/create/create_hda.py b/openpype/hosts/houdini/plugins/create/create_hda.py index 5f95b2efb4..c4093bfbc6 100644 --- a/openpype/hosts/houdini/plugins/create/create_hda.py +++ b/openpype/hosts/houdini/plugins/create/create_hda.py @@ -4,7 +4,6 @@ from openpype.client import ( get_asset_by_name, get_subsets, ) -from openpype.pipeline import legacy_io from openpype.hosts.houdini.api import plugin @@ -21,7 +20,7 @@ class CreateHDA(plugin.HoudiniCreator): # type: (str) -> bool """Check if existing subset name versions already exists.""" # Get all subsets of the current asset - project_name = legacy_io.active_project() + project_name = self.project_name asset_doc = get_asset_by_name( project_name, self.data["asset"], fields=["_id"] ) diff --git a/openpype/hosts/houdini/plugins/create/create_karma_rop.py b/openpype/hosts/houdini/plugins/create/create_karma_rop.py index edfb992e1a..4e1360ca45 100644 --- a/openpype/hosts/houdini/plugins/create/create_karma_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_karma_rop.py @@ -11,7 +11,6 @@ class CreateKarmaROP(plugin.HoudiniCreator): label = "Karma ROP" family = "karma_rop" icon = "magic" - defaults = ["master"] def create(self, subset_name, instance_data, pre_create_data): import hou # noqa @@ -21,7 +20,7 @@ class CreateKarmaROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateKarmaROP, self).create( subset_name, @@ -67,6 +66,7 @@ class CreateKarmaROP(plugin.HoudiniCreator): camera = None for node in self.selected_nodes: if node.type().name() == "cam": + camera = node.path() has_camera = pre_create_data.get("cam_res") if has_camera: res_x = node.evalParm("resx") @@ -96,6 +96,9 @@ class 
CreateKarmaROP(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default="exr", diff --git a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py index 5ca53e96de..d2f0e735a8 100644 --- a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py @@ -11,7 +11,6 @@ class CreateMantraROP(plugin.HoudiniCreator): label = "Mantra ROP" family = "mantra_rop" icon = "magic" - defaults = ["master"] def create(self, subset_name, instance_data, pre_create_data): import hou # noqa @@ -21,7 +20,7 @@ class CreateMantraROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateMantraROP, self).create( subset_name, @@ -76,6 +75,9 @@ class CreateMantraROP(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default="exr", diff --git a/openpype/hosts/houdini/plugins/create/create_pointcache.py b/openpype/hosts/houdini/plugins/create/create_pointcache.py index 554d5f2016..7eaf2aff2b 100644 --- a/openpype/hosts/houdini/plugins/create/create_pointcache.py +++ b/openpype/hosts/houdini/plugins/create/create_pointcache.py @@ -8,7 +8,7 @@ import hou class CreatePointCache(plugin.HoudiniCreator): """Alembic ROP to pointcache""" identifier = "io.openpype.creators.houdini.pointcache" - label = "Point Cache" + label = "PointCache (Abc)" family = "pointcache" icon = "gears" diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py b/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py index 8b6a68437b..b814dd9d57 100644 --- a/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py +++ b/openpype/hosts/houdini/plugins/create/create_redshift_proxy.py @@ -33,7 +33,7 @@ class CreateRedshiftProxy(plugin.HoudiniCreator): instance_node = hou.node(instance.get("instance_node")) parms = { - "RS_archive_file": '$HIP/pyblish/`{}.$F4.rs'.format(subset_name), + "RS_archive_file": '$HIP/pyblish/{}.$F4.rs'.format(subset_name), } if self.selected_nodes: diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py index 4576e9a721..1b8826a932 100644 --- a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py @@ -3,7 +3,7 @@ import hou # noqa from openpype.hosts.houdini.api import plugin -from openpype.lib import EnumDef +from openpype.lib import EnumDef, BoolDef class CreateRedshiftROP(plugin.HoudiniCreator): @@ -13,7 +13,6 @@ class CreateRedshiftROP(plugin.HoudiniCreator): label = "Redshift ROP" family = "redshift_rop" icon = "magic" - defaults = ["master"] ext = "exr" def create(self, subset_name, instance_data, pre_create_data): @@ -23,7 +22,7 @@ class CreateRedshiftROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateRedshiftROP, self).create( subset_name, @@ -100,6 +99,9 @@ class CreateRedshiftROP(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + 
default=True), EnumDef("image_format", image_format_enum, default=self.ext, diff --git a/openpype/hosts/houdini/plugins/create/create_vray_rop.py b/openpype/hosts/houdini/plugins/create/create_vray_rop.py index 1de9be4ed6..793a544fdf 100644 --- a/openpype/hosts/houdini/plugins/create/create_vray_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_vray_rop.py @@ -14,8 +14,6 @@ class CreateVrayROP(plugin.HoudiniCreator): label = "VRay ROP" family = "vray_rop" icon = "magic" - defaults = ["master"] - ext = "exr" def create(self, subset_name, instance_data, pre_create_data): @@ -25,7 +23,7 @@ class CreateVrayROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateVrayROP, self).create( subset_name, @@ -139,6 +137,9 @@ class CreateVrayROP(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default=self.ext, diff --git a/openpype/hosts/houdini/plugins/create/create_workfile.py b/openpype/hosts/houdini/plugins/create/create_workfile.py index 1a8537adcd..cc45a6c2a8 100644 --- a/openpype/hosts/houdini/plugins/create/create_workfile.py +++ b/openpype/hosts/houdini/plugins/create/create_workfile.py @@ -4,7 +4,6 @@ from openpype.hosts.houdini.api import plugin from openpype.hosts.houdini.api.lib import read, imprint from openpype.hosts.houdini.api.pipeline import CONTEXT_CONTAINER from openpype.pipeline import CreatedInstance, AutoCreator -from openpype.pipeline import legacy_io from openpype.client import get_asset_by_name import hou @@ -27,9 +26,9 @@ class CreateWorkfile(plugin.HoudiniCreatorBase, AutoCreator): ), None) project_name = self.project_name - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] - host_name = legacy_io.Session["AVALON_APP"] + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.host_name if current_instance is None: asset_doc = get_asset_by_name(project_name, asset_name) diff --git a/openpype/hosts/houdini/plugins/load/load_alembic.py b/openpype/hosts/houdini/plugins/load/load_alembic.py index c6f0ebf2f9..48bd730ebe 100644 --- a/openpype/hosts/houdini/plugins/load/load_alembic.py +++ b/openpype/hosts/houdini/plugins/load/load_alembic.py @@ -20,7 +20,8 @@ class AbcLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_alembic_archive.py b/openpype/hosts/houdini/plugins/load/load_alembic_archive.py index 47d2e1b896..3a577f72b4 100644 --- a/openpype/hosts/houdini/plugins/load/load_alembic_archive.py +++ b/openpype/hosts/houdini/plugins/load/load_alembic_archive.py @@ -21,7 +21,8 @@ class AbcArchiveLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_bgeo.py b/openpype/hosts/houdini/plugins/load/load_bgeo.py index 
86e8675c02..22680178c0 100644 --- a/openpype/hosts/houdini/plugins/load/load_bgeo.py +++ b/openpype/hosts/houdini/plugins/load/load_bgeo.py @@ -43,9 +43,10 @@ class BgeoLoader(load.LoaderPlugin): file_node.destroy() # Explicitly create a file node + path = self.filepath_from_context(context) file_node = container.createNode("file", node_name=node_name) file_node.setParms( - {"file": self.format_path(self.fname, context["representation"])}) + {"file": self.format_path(path, context["representation"])}) # Set display on last node file_node.setDisplayFlag(True) diff --git a/openpype/hosts/houdini/plugins/load/load_camera.py b/openpype/hosts/houdini/plugins/load/load_camera.py index 6365508f4e..7b4a04809e 100644 --- a/openpype/hosts/houdini/plugins/load/load_camera.py +++ b/openpype/hosts/houdini/plugins/load/load_camera.py @@ -94,7 +94,8 @@ class CameraLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_hda.py b/openpype/hosts/houdini/plugins/load/load_hda.py index 2438570c6e..57edc341a3 100644 --- a/openpype/hosts/houdini/plugins/load/load_hda.py +++ b/openpype/hosts/houdini/plugins/load/load_hda.py @@ -21,7 +21,8 @@ class HdaLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_image.py b/openpype/hosts/houdini/plugins/load/load_image.py index 26bc569c53..663a93e48b 100644 --- a/openpype/hosts/houdini/plugins/load/load_image.py +++ b/openpype/hosts/houdini/plugins/load/load_image.py @@ -55,7 +55,8 @@ class ImageLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") file_path = self._get_file_sequence(file_path) diff --git a/openpype/hosts/houdini/plugins/load/load_usd_layer.py b/openpype/hosts/houdini/plugins/load/load_usd_layer.py index 1f0ec25128..1528cf549f 100644 --- a/openpype/hosts/houdini/plugins/load/load_usd_layer.py +++ b/openpype/hosts/houdini/plugins/load/load_usd_layer.py @@ -26,7 +26,8 @@ class USDSublayerLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path = file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_usd_reference.py b/openpype/hosts/houdini/plugins/load/load_usd_reference.py index f66d05395e..8402ad072c 100644 --- a/openpype/hosts/houdini/plugins/load/load_usd_reference.py +++ b/openpype/hosts/houdini/plugins/load/load_usd_reference.py @@ -26,7 +26,8 @@ class USDReferenceLoader(load.LoaderPlugin): import hou # Format file name, Houdini only wants forward slashes - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) file_path 
= file_path.replace("\\", "/") # Get the root node diff --git a/openpype/hosts/houdini/plugins/load/load_vdb.py b/openpype/hosts/houdini/plugins/load/load_vdb.py index 87900502c5..bcc4f200d3 100644 --- a/openpype/hosts/houdini/plugins/load/load_vdb.py +++ b/openpype/hosts/houdini/plugins/load/load_vdb.py @@ -40,8 +40,9 @@ class VdbLoader(load.LoaderPlugin): # Explicitly create a file node file_node = container.createNode("file", node_name=node_name) + path = self.filepath_from_context(context) file_node.setParms( - {"file": self.format_path(self.fname, context["representation"])}) + {"file": self.format_path(path, context["representation"])}) # Set display on last node file_node.setDisplayFlag(True) diff --git a/openpype/hosts/houdini/plugins/load/show_usdview.py b/openpype/hosts/houdini/plugins/load/show_usdview.py index 2737bc40fa..7b03a0738a 100644 --- a/openpype/hosts/houdini/plugins/load/show_usdview.py +++ b/openpype/hosts/houdini/plugins/load/show_usdview.py @@ -20,7 +20,8 @@ class ShowInUsdview(load.LoaderPlugin): usdview = find_executable("usdview") - filepath = os.path.normpath(self.fname) + filepath = self.filepath_from_context(context) + filepath = os.path.normpath(filepath) filepath = filepath.replace("\\", "/") if not os.path.exists(filepath): diff --git a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py index 614785487f..43b8428c60 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py @@ -50,7 +50,7 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin): num_aovs = rop.evalParm("ar_aovs") for index in range(1, num_aovs + 1): # Skip disabled AOVs - if not rop.evalParm("ar_enable_aovP{}".format(index)): + if not rop.evalParm("ar_enable_aov{}".format(index)): continue if rop.evalParm("ar_aov_exr_enable_layer_name{}".format(index)): diff --git a/openpype/hosts/houdini/plugins/publish/collect_frames.py b/openpype/hosts/houdini/plugins/publish/collect_frames.py index 91a3d9d170..01df809d4c 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_frames.py +++ b/openpype/hosts/houdini/plugins/publish/collect_frames.py @@ -13,7 +13,8 @@ class CollectFrames(pyblish.api.InstancePlugin): order = pyblish.api.CollectorOrder + 0.01 label = "Collect Frames" - families = ["vdbcache", "imagesequence", "ass", "redshiftproxy", "review"] + families = ["vdbcache", "imagesequence", "ass", + "redshiftproxy", "review", "bgeo"] def process(self, instance): @@ -32,9 +33,9 @@ class CollectFrames(pyblish.api.InstancePlugin): output = output_parm.eval() _, ext = lib.splitext( - output, - allowed_multidot_extensions=[".ass.gz"] - ) + output, allowed_multidot_extensions=[ + ".ass.gz", ".bgeo.sc", ".bgeo.gz", + ".bgeo.lzma", ".bgeo.bz2"]) file_name = os.path.basename(output) result = file_name @@ -76,7 +77,7 @@ class CollectFrames(pyblish.api.InstancePlugin): frame = match.group(1) padding = len(frame) - # Get the parts of the filename surrounding the frame number + # Get the parts of the filename surrounding the frame number, # so we can put our own frame numbers in. 
span = match.span(1) prefix = match.string[: span[0]] diff --git a/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py b/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py new file mode 100644 index 0000000000..3323e97c20 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py @@ -0,0 +1,21 @@ +"""Collector for pointcache types. + +This will add an additional family to the pointcache instance based on +the creator_identifier parameter. +""" +import pyblish.api + + +class CollectPointcacheType(pyblish.api.InstancePlugin): + """Collect data type for pointcache instance.""" + + order = pyblish.api.CollectorOrder + hosts = ["houdini"] + families = ["pointcache"] + label = "Collect type of pointcache" + + def process(self, instance): + if instance.data["creator_identifier"] == "io.openpype.creators.houdini.bgeo": # noqa: E501 + instance.data["families"] += ["bgeo"] + elif instance.data["creator_identifier"] == "io.openpype.creators.houdini.pointcache": # noqa: E501 + instance.data["families"] += ["abc"] diff --git a/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py b/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py index 81274c670e..14a8e3c056 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py +++ b/openpype/hosts/houdini/plugins/publish/collect_usd_bootstrap.py @@ -1,7 +1,6 @@ import pyblish.api from openpype.client import get_subset_by_name, get_asset_by_name -from openpype.pipeline import legacy_io import openpype.lib.usdlib as usdlib @@ -51,7 +50,7 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin): self.log.debug("Add bootstrap for: %s" % bootstrap) - project_name = legacy_io.active_project() + project_name = instance.context.data["projectName"] asset = get_asset_by_name(project_name, instance.data["asset"]) assert asset, "Asset must exist: %s" % asset diff --git a/openpype/hosts/houdini/plugins/publish/extract_alembic.py b/openpype/hosts/houdini/plugins/publish/extract_alembic.py index cb2d4ef424..bdd19b23d4 100644 --- a/openpype/hosts/houdini/plugins/publish/extract_alembic.py +++ b/openpype/hosts/houdini/plugins/publish/extract_alembic.py @@ -13,7 +13,7 @@ class ExtractAlembic(publish.Extractor): order = pyblish.api.ExtractorOrder label = "Extract Alembic" hosts = ["houdini"] - families = ["pointcache", "camera"] + families = ["abc", "camera"] def process(self, instance): diff --git a/openpype/hosts/houdini/plugins/publish/extract_bgeo.py b/openpype/hosts/houdini/plugins/publish/extract_bgeo.py new file mode 100644 index 0000000000..c9625ec880 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/extract_bgeo.py @@ -0,0 +1,53 @@ +import os + +import pyblish.api + +from openpype.pipeline import publish +from openpype.hosts.houdini.api.lib import render_rop +from openpype.hosts.houdini.api import lib + +import hou + + +class ExtractBGEO(publish.Extractor): + + order = pyblish.api.ExtractorOrder + label = "Extract BGEO" + hosts = ["houdini"] + families = ["bgeo"] + + def process(self, instance): + + ropnode = hou.node(instance.data["instance_node"]) + + # Get the filename from the filename parameter + output = ropnode.evalParm("sopoutput") + staging_dir, file_name = os.path.split(output) + instance.data["stagingDir"] = staging_dir + + # We run the render + self.log.info("Writing bgeo files '{}' to '{}'.".format( + file_name, staging_dir)) + + # write files + render_rop(ropnode) + + output = instance.data["frames"] + + _, ext = lib.splitext( + output[0], 
allowed_multidot_extensions=[ + ".ass.gz", ".bgeo.sc", ".bgeo.gz", + ".bgeo.lzma", ".bgeo.bz2"]) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + "name": "bgeo", + "ext": ext.lstrip("."), + "files": output, + "stagingDir": staging_dir, + "frameStart": instance.data["frameStart"], + "frameEnd": instance.data["frameEnd"] + } + instance.data["representations"].append(representation) diff --git a/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py b/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py index 8422a3bc3e..d6193f13c1 100644 --- a/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py +++ b/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py @@ -14,7 +14,6 @@ from openpype.client import ( ) from openpype.pipeline import ( get_representation_path, - legacy_io, publish, ) import openpype.hosts.houdini.api.usd as hou_usdlib @@ -250,7 +249,7 @@ class ExtractUSDLayered(publish.Extractor): # Set up the dependency for publish if they have new content # compared to previous publishes - project_name = legacy_io.active_project() + project_name = instance.context.data["projectName"] for dependency in active_dependencies: dependency_fname = dependency.data["usdFilename"] diff --git a/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py b/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py index bef8db45a4..af9e080466 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py +++ b/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py @@ -17,7 +17,7 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin): """ order = pyblish.api.ValidatorOrder + 0.1 - families = ["pointcache"] + families = ["abc"] hosts = ["houdini"] label = "Validate Primitive to Detail (Abc)" diff --git a/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py b/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py index 44d58cfa36..40114bc40e 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py +++ b/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py @@ -18,7 +18,7 @@ class ValidateAlembicROPFaceSets(pyblish.api.InstancePlugin): """ order = pyblish.api.ValidatorOrder + 0.1 - families = ["pointcache"] + families = ["abc"] hosts = ["houdini"] label = "Validate Alembic ROP Face Sets" diff --git a/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py b/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py index b0cf4cdc58..47c47e4ea2 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py @@ -14,7 +14,7 @@ class ValidateAlembicInputNode(pyblish.api.InstancePlugin): """ order = pyblish.api.ValidatorOrder + 0.1 - families = ["pointcache"] + families = ["abc"] hosts = ["houdini"] label = "Validate Input Node (Abc)" diff --git a/openpype/hosts/houdini/plugins/publish/validate_file_extension.py b/openpype/hosts/houdini/plugins/publish/validate_file_extension.py index 4584e78f4f..6594d10851 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_file_extension.py +++ b/openpype/hosts/houdini/plugins/publish/validate_file_extension.py @@ -19,12 +19,11 @@ class ValidateFileExtension(pyblish.api.InstancePlugin): """ order = pyblish.api.ValidatorOrder - families = ["pointcache", "camera", "vdbcache"] + families = ["camera", 
"vdbcache"] hosts = ["houdini"] label = "Output File Extension" family_extensions = { - "pointcache": ".abc", "camera": ".abc", "vdbcache": ".vdb", } diff --git a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py index cd5e724ab3..471fa5b6d1 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py +++ b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py @@ -1,10 +1,19 @@ # -*- coding: utf-8 -*- import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder from openpype.pipeline import PublishValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + RepairAction, +) + import hou +class AddDefaultPathAction(RepairAction): + label = "Add a default path attribute" + icon = "mdi.pencil-plus-outline" + + class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): """Validate all primitives build hierarchy from attribute when enabled. @@ -15,15 +24,17 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): """ order = ValidateContentsOrder + 0.1 - families = ["pointcache"] + families = ["abc"] hosts = ["houdini"] label = "Validate Prims Hierarchy Path" + actions = [AddDefaultPathAction] def process(self, instance): invalid = self.get_invalid(instance) if invalid: + nodes = [n.path() for n in invalid] raise PublishValidationError( - "See log for details. " "Invalid nodes: {0}".format(invalid), + "See log for details. " "Invalid nodes: {0}".format(nodes), title=self.label ) @@ -36,10 +47,10 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): if output_node is None: cls.log.error( "SOP Output node in '%s' does not exist. " - "Ensure a valid SOP output path is set." % rop_node.path() + "Ensure a valid SOP output path is set.", rop_node.path() ) - return [rop_node.path()] + return [rop_node] build_from_path = rop_node.parm("build_from_path").eval() if not build_from_path: @@ -56,9 +67,9 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): "value set, but 'Build Hierarchy from Attribute'" "is enabled." ) - return [rop_node.path()] + return [rop_node] - cls.log.debug("Checking for attribute: %s" % path_attr) + cls.log.debug("Checking for attribute: %s", path_attr) if not hasattr(output_node, "geometry"): # In the case someone has explicitly set an Object @@ -89,17 +100,17 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): if not attrib: cls.log.info( "Geometry Primitives are missing " - "path attribute: `%s`" % path_attr + "path attribute: `%s`", path_attr ) - return [output_node.path()] + return [output_node] # Ensure at least a single string value is present if not attrib.strings(): cls.log.info( "Primitive path attribute has no " - "string values: %s" % path_attr + "string values: %s", path_attr ) - return [output_node.path()] + return [output_node] paths = geo.primStringAttribValues(path_attr) # Ensure all primitives are set to a valid path @@ -109,6 +120,65 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): num_prims = len(geo.iterPrims()) # faster than len(geo.prims()) cls.log.info( "Prims have no value for attribute `%s` " - "(%s of %s prims)" % (path_attr, len(invalid_prims), num_prims) + "(%s of %s prims)", path_attr, len(invalid_prims), num_prims + ) + return [output_node] + + @classmethod + def repair(cls, instance): + """Add a default path attribute Action. 
+ +        It is more a helper action than a repair action, +        used to add a default single value for the path. +        """ + +        rop_node = hou.node(instance.data["instance_node"]) +        output_node = rop_node.parm("sop_path").evalAsNode() + +        if not output_node: +            cls.log.debug( +                "Action isn't performed, invalid SOP Path on %s", +                rop_node +            ) +            return + +        # This check prevents the action from running multiple times. +        # get_invalid only returns [output_node] when +        # the path attribute is the problem +        if cls.get_invalid(instance) != [output_node]: +            return + +        path_attr = rop_node.parm("path_attrib").eval() + +        path_node = output_node.parent().createNode("name", "AUTO_PATH") +        path_node.parm("attribname").set(path_attr) +        path_node.parm("name1").set('`opname("..")`/`opname("..")`Shape') + +        cls.log.debug( +            "'%s' was created. It adds '%s' with a default single value", +            path_node, path_attr +        ) + +        path_node.setGenericFlag(hou.nodeFlag.DisplayComment, True) +        path_node.setComment( +            'Auto path node was created automatically by ' +            '"Add a default path attribute"' +            '\nFeel free to modify or replace it.' +        ) + +        if output_node.type().name() in ["null", "output"]: +            # Connect before +            path_node.setFirstInput(output_node.input(0)) +            path_node.moveToGoodPosition() +            output_node.setFirstInput(path_node) +            output_node.moveToGoodPosition() +        else: +            # Connect after +            path_node.setFirstInput(output_node) +            rop_node.parm("sop_path").set(path_node.path()) +            path_node.moveToGoodPosition() + +        cls.log.debug( +            "SOP path on '%s' updated to new output node '%s'", +            rop_node, path_node         ) -        return [output_node.path()] diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py index c4f118ac3b..0db782d545 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py @@ -4,7 +4,6 @@ import re import pyblish.api from openpype.client import get_subset_by_name -from openpype.pipeline import legacy_io from openpype.pipeline.publish import ValidateContentsOrder from openpype.pipeline import PublishValidationError @@ -18,7 +17,7 @@ class ValidateUSDShadeModelExists(pyblish.api.InstancePlugin): label = "USD Shade model exists" def process(self, instance): -        project_name = legacy_io.active_project() +        project_name = instance.context.data["projectName"] asset_name = instance.data["asset"] subset = instance.data["subset"] diff --git a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py index 543c8e1407..afe05e3173 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py +++ b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py @@ -7,8 +7,6 @@ from openpype.pipeline import ( ) from openpype.pipeline.publish import RepairAction -from openpype.pipeline.publish import RepairAction - class ValidateWorkfilePaths( pyblish.api.InstancePlugin, OptionalPyblishPluginMixin): diff --git a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py index 48019e0a82..310d057a11 100644 --- a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py +++ b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py @@ -5,7 +5,7 @@ import husdoutputprocessors.base as base import colorbleed.usdlib as usdlib 
from openpype.client import get_asset_by_name -from openpype.pipeline import legacy_io, Anatomy +from openpype.pipeline import Anatomy, get_current_project_name class AvalonURIOutputProcessor(base.OutputProcessorBase): @@ -122,7 +122,7 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase): """ - PROJECT = legacy_io.Session["AVALON_PROJECT"] + PROJECT = get_current_project_name() anatomy = Anatomy(PROJECT) asset_doc = get_asset_by_name(PROJECT, asset) if not asset_doc: diff --git a/openpype/hosts/max/api/lib_renderproducts.py b/openpype/hosts/max/api/lib_renderproducts.py index 3074f8e170..90608737c2 100644 --- a/openpype/hosts/max/api/lib_renderproducts.py +++ b/openpype/hosts/max/api/lib_renderproducts.py @@ -7,15 +7,18 @@ import os from pymxs import runtime as rt from openpype.hosts.max.api.lib import get_current_renderer -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name from openpype.settings import get_project_settings class RenderProducts(object): def __init__(self, project_settings=None): - self._project_settings = project_settings or get_project_settings( - legacy_io.Session["AVALON_PROJECT"]) + self._project_settings = project_settings + if not self._project_settings: + self._project_settings = get_project_settings( + get_current_project_name() + ) def get_beauty(self, container): render_dir = os.path.dirname(rt.rendOutputFilename) diff --git a/openpype/hosts/max/api/lib_rendersettings.py b/openpype/hosts/max/api/lib_rendersettings.py index 91e4a5bf9b..1b62edabee 100644 --- a/openpype/hosts/max/api/lib_rendersettings.py +++ b/openpype/hosts/max/api/lib_rendersettings.py @@ -2,7 +2,7 @@ import os from pymxs import runtime as rt from openpype.lib import Logger from openpype.settings import get_project_settings -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name from openpype.pipeline.context_tools import get_current_project_asset from openpype.hosts.max.api.lib import ( @@ -31,7 +31,7 @@ class RenderSettings(object): self._project_settings = project_settings if not self._project_settings: self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] + get_current_project_name() ) def set_render_camera(self, selection): diff --git a/openpype/hosts/max/api/plugin.py b/openpype/hosts/max/api/plugin.py index 14b0653f40..3389447cb0 100644 --- a/openpype/hosts/max/api/plugin.py +++ b/openpype/hosts/max/api/plugin.py @@ -15,6 +15,7 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" parameters main rollout:OPparams ( all_handles type:#maxObjectTab tabSize:0 tabSizeVariable:on + sel_list type:#stringTab tabSize:0 tabSizeVariable:on ) rollout OPparams "OP Parameters" @@ -30,11 +31,42 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" handle_name = obj_name + "<" + handle as string + ">" return handle_name ) + fn nodes_to_add node = + ( + sceneObjs = #() + if classOf node == Container do return false + n = node as string + for obj in Objects do + ( + tmp_obj = obj as string + append sceneObjs tmp_obj + ) + if sel_list != undefined do + ( + for obj in sel_list do + ( + idx = findItem sceneObjs obj + if idx do + ( + deleteItem sceneObjs idx + ) + ) + ) + idx = findItem sceneObjs n + if idx then return true else false + ) + + fn nodes_to_rmv node = + ( + n = node as string + idx = findItem sel_list n + if idx then return true else false + ) on button_add pressed do ( current_selection = selectByName title:"Select Objects to add to - the Container" buttontext:"Add" + the 
Container" buttontext:"Add" filter:nodes_to_add if current_selection == undefined then return False temp_arr = #() i_node_arr = #() @@ -46,8 +78,10 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" if idx do ( continue ) + name = c as string append temp_arr handle_name append i_node_arr node_ref + append sel_list name ) all_handles = join i_node_arr all_handles list_node.items = join temp_arr list_node.items @@ -56,7 +90,7 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" on button_del pressed do ( current_selection = selectByName title:"Select Objects to remove - from the Container" buttontext:"Remove" + from the Container" buttontext:"Remove" filter: nodes_to_rmv if current_selection == undefined then return False temp_arr = #() i_node_arr = #() @@ -67,6 +101,7 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" ( node_ref = NodeTransformMonitor node:c as string handle_name = node_to_name c + n = c as string tmp_all_handles = #() for i in all_handles do ( @@ -84,6 +119,11 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" ( new_temp_arr = DeleteItem list_node.items idx ) + idx = finditem sel_list n + if idx do + ( + sel_list = DeleteItem sel_list idx + ) ) all_handles = join i_node_arr new_i_node_arr list_node.items = join temp_arr new_temp_arr @@ -96,6 +136,7 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" temp_arr = #() for x in all_handles do ( + if x.node == undefined do continue handle_name = node_to_name x.node append temp_arr handle_name ) @@ -145,7 +186,10 @@ class MaxCreatorBase(object): node = rt.Container(name=node) attrs = rt.Execute(MS_CUSTOM_ATTRIB) - rt.custAttributes.add(node.baseObject, attrs) + modifier = rt.EmptyModifier() + rt.addModifier(node, modifier) + node.modifiers[0].name = "OP Data" + rt.custAttributes.add(node.modifiers[0], attrs) return node @@ -169,13 +213,19 @@ class MaxCreator(Creator, MaxCreatorBase): if pre_create_data.get("use_selection"): node_list = [] + sel_list = [] for i in self.selected_nodes: node_ref = rt.NodeTransformMonitor(node=i) node_list.append(node_ref) + sel_list.append(str(i)) # Setting the property rt.setProperty( - instance_node.openPypeData, "all_handles", node_list) + instance_node.modifiers[0].openPypeData, + "all_handles", node_list) + rt.setProperty( + instance_node.modifiers[0].openPypeData, + "sel_list", sel_list) self._add_instance_to_context(instance) imprint(instance_node.name, instance.data_to_store()) @@ -214,8 +264,8 @@ class MaxCreator(Creator, MaxCreatorBase): instance_node = rt.GetNodeByName( instance.data.get("instance_node")) if instance_node: - count = rt.custAttributes.count(instance_node) - rt.custAttributes.delete(instance_node, count) + count = rt.custAttributes.count(instance_node.modifiers[0]) + rt.custAttributes.delete(instance_node.modifiers[0], count) rt.Delete(instance_node) self._remove_instance_from_context(instance) diff --git a/openpype/hosts/max/hooks/force_startup_script.py b/openpype/hosts/max/hooks/force_startup_script.py index 4fcf4fef21..5fb8334d4b 100644 --- a/openpype/hosts/max/hooks/force_startup_script.py +++ b/openpype/hosts/max/hooks/force_startup_script.py @@ -1,7 +1,8 @@ # -*- coding: utf-8 -*- """Pre-launch to force 3ds max startup script.""" -from openpype.lib import PreLaunchHook import os +from openpype.hosts.max import MAX_HOST_DIR +from openpype.lib.applications import PreLaunchHook, LaunchTypes class ForceStartupScript(PreLaunchHook): @@ -13,12 +14,14 @@ class ForceStartupScript(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. 
""" - app_groups = ["3dsmax"] + app_groups = {"3dsmax", "adsk_3dsmax"} order = 11 + launch_types = {LaunchTypes.local} def execute(self): startup_args = [ "-U", "MAXScript", - f"{os.getenv('OPENPYPE_ROOT')}\\openpype\\hosts\\max\\startup\\startup.ms"] # noqa + os.path.join(MAX_HOST_DIR, "startup", "startup.ms"), + ] self.launch_context.launch_args.append(startup_args) diff --git a/openpype/hosts/max/hooks/inject_python.py b/openpype/hosts/max/hooks/inject_python.py index d9753ccbd8..e9dddbf710 100644 --- a/openpype/hosts/max/hooks/inject_python.py +++ b/openpype/hosts/max/hooks/inject_python.py @@ -1,7 +1,7 @@ # -*- coding: utf-8 -*- """Pre-launch hook to inject python environment.""" -from openpype.lib import PreLaunchHook import os +from openpype.lib.applications import PreLaunchHook, LaunchTypes class InjectPythonPath(PreLaunchHook): @@ -13,7 +13,8 @@ class InjectPythonPath(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["3dsmax"] + app_groups = {"3dsmax", "adsk_3dsmax"} + launch_types = {LaunchTypes.local} def execute(self): self.launch_context.env["MAX_PYTHONPATH"] = os.environ["PYTHONPATH"] diff --git a/openpype/hosts/max/hooks/set_paths.py b/openpype/hosts/max/hooks/set_paths.py index 3db5306344..4b961fa91e 100644 --- a/openpype/hosts/max/hooks/set_paths.py +++ b/openpype/hosts/max/hooks/set_paths.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class SetPath(PreLaunchHook): @@ -6,7 +6,8 @@ class SetPath(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["max"] + app_groups = {"max"} + launch_types = {LaunchTypes.local} def execute(self): workdir = self.launch_context.env.get("AVALON_WORKDIR", "") diff --git a/openpype/hosts/max/plugins/load/load_camera_fbx.py b/openpype/hosts/max/plugins/load/load_camera_fbx.py index c51900dbb7..62284b23d9 100644 --- a/openpype/hosts/max/plugins/load/load_camera_fbx.py +++ b/openpype/hosts/max/plugins/load/load_camera_fbx.py @@ -17,7 +17,8 @@ class FbxLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - filepath = os.path.normpath(self.fname) + filepath = self.filepath_from_context(context) + filepath = os.path.normpath(filepath) rt.FBXImporterSetParam("Animation", True) rt.FBXImporterSetParam("Camera", True) rt.FBXImporterSetParam("AxisConversionMethod", True) diff --git a/openpype/hosts/max/plugins/load/load_max_scene.py b/openpype/hosts/max/plugins/load/load_max_scene.py index e3fb34f5bc..76cd3bf367 100644 --- a/openpype/hosts/max/plugins/load/load_max_scene.py +++ b/openpype/hosts/max/plugins/load/load_max_scene.py @@ -19,7 +19,9 @@ class MaxSceneLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - path = os.path.normpath(self.fname) + + path = self.filepath_from_context(context) + path = os.path.normpath(path) # import the max scene by using "merge file" path = path.replace('\\', '/') rt.MergeMaxFile(path) diff --git a/openpype/hosts/max/plugins/load/load_model.py b/openpype/hosts/max/plugins/load/load_model.py index 58c6d3c889..cff82a593c 100644 --- a/openpype/hosts/max/plugins/load/load_model.py +++ b/openpype/hosts/max/plugins/load/load_model.py @@ -18,7 +18,7 @@ class ModelAbcLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - file_path = 
os.path.normpath(self.fname) + file_path = os.path.normpath(self.filepath_from_context(context)) abc_before = { c diff --git a/openpype/hosts/max/plugins/load/load_model_fbx.py b/openpype/hosts/max/plugins/load/load_model_fbx.py index 663f79f9f5..12f526ab95 100644 --- a/openpype/hosts/max/plugins/load/load_model_fbx.py +++ b/openpype/hosts/max/plugins/load/load_model_fbx.py @@ -17,7 +17,7 @@ class FbxModelLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - filepath = os.path.normpath(self.fname) + filepath = os.path.normpath(self.filepath_from_context(context)) rt.FBXImporterSetParam("Animation", False) rt.FBXImporterSetParam("Cameras", False) rt.FBXImporterSetParam("Preserveinstances", True) diff --git a/openpype/hosts/max/plugins/load/load_model_obj.py b/openpype/hosts/max/plugins/load/load_model_obj.py index 77d4e08cfb..18a19414fa 100644 --- a/openpype/hosts/max/plugins/load/load_model_obj.py +++ b/openpype/hosts/max/plugins/load/load_model_obj.py @@ -18,7 +18,7 @@ class ObjLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - filepath = os.path.normpath(self.fname) + filepath = os.path.normpath(self.filepath_from_context(context)) self.log.debug("Executing command to import..") rt.Execute(f'importFile @"{filepath}" #noPrompt using:ObjImp') diff --git a/openpype/hosts/max/plugins/load/load_model_usd.py b/openpype/hosts/max/plugins/load/load_model_usd.py index 2b34669278..48b50b9b18 100644 --- a/openpype/hosts/max/plugins/load/load_model_usd.py +++ b/openpype/hosts/max/plugins/load/load_model_usd.py @@ -20,7 +20,7 @@ class ModelUSDLoader(load.LoaderPlugin): from pymxs import runtime as rt # asset_filepath - filepath = os.path.normpath(self.fname) + filepath = os.path.normpath(self.filepath_from_context(context)) import_options = rt.USDImporter.CreateOptions() base_filename = os.path.basename(filepath) filename, ext = os.path.splitext(base_filename) diff --git a/openpype/hosts/max/plugins/load/load_pointcache.py b/openpype/hosts/max/plugins/load/load_pointcache.py index cadbe7cac2..290503e053 100644 --- a/openpype/hosts/max/plugins/load/load_pointcache.py +++ b/openpype/hosts/max/plugins/load/load_pointcache.py @@ -23,7 +23,8 @@ class AbcLoader(load.LoaderPlugin): def load(self, context, name=None, namespace=None, data=None): from pymxs import runtime as rt - file_path = os.path.normpath(self.fname) + file_path = self.filepath_from_context(context) + file_path = os.path.normpath(file_path) abc_before = { c diff --git a/openpype/hosts/max/plugins/load/load_pointcloud.py b/openpype/hosts/max/plugins/load/load_pointcloud.py index 8634e1d51f..2a1175167a 100644 --- a/openpype/hosts/max/plugins/load/load_pointcloud.py +++ b/openpype/hosts/max/plugins/load/load_pointcloud.py @@ -18,7 +18,7 @@ class PointCloudLoader(load.LoaderPlugin): """load point cloud by tyCache""" from pymxs import runtime as rt - filepath = os.path.normpath(self.fname) + filepath = os.path.normpath(self.filepath_from_context(context)) obj = rt.tyCache() obj.filename = filepath diff --git a/openpype/hosts/max/plugins/publish/collect_members.py b/openpype/hosts/max/plugins/publish/collect_members.py index 812d82ff26..2970cf0e24 100644 --- a/openpype/hosts/max/plugins/publish/collect_members.py +++ b/openpype/hosts/max/plugins/publish/collect_members.py @@ -17,6 +17,6 @@ class CollectMembers(pyblish.api.InstancePlugin): container = rt.GetNodeByName(instance.data["instance_node"]) 
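        # The openPypeData custom attributes now live on the "OP Data" modifier (added in MaxCreatorBase), so members are read from the container's first modifier rather than from the container node itself.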
instance.data["members"] = [ member.node for member - in container.openPypeData.all_handles + in container.modifiers[0].openPypeData.all_handles ] self.log.debug("{}".format(instance.data["members"])) diff --git a/openpype/hosts/max/plugins/publish/collect_workfile.py b/openpype/hosts/max/plugins/publish/collect_workfile.py index 3500b2735c..0eb4bb731e 100644 --- a/openpype/hosts/max/plugins/publish/collect_workfile.py +++ b/openpype/hosts/max/plugins/publish/collect_workfile.py @@ -4,7 +4,6 @@ import os import pyblish.api from pymxs import runtime as rt -from openpype.pipeline import legacy_io class CollectWorkfile(pyblish.api.ContextPlugin): @@ -26,7 +25,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin): filename, ext = os.path.splitext(file) - task = legacy_io.Session["AVALON_TASK"] + task = context.data["task"] data = {} @@ -36,7 +35,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin): data.update({ "subset": subset, - "asset": os.getenv("AVALON_ASSET", None), + "asset": context.data["asset"], "label": subset, "publish": True, "family": 'workfile', diff --git a/openpype/hosts/max/plugins/publish/extract_pointcache.py b/openpype/hosts/max/plugins/publish/extract_pointcache.py index 6d1e8d03b4..5a99a8b845 100644 --- a/openpype/hosts/max/plugins/publish/extract_pointcache.py +++ b/openpype/hosts/max/plugins/publish/extract_pointcache.py @@ -56,7 +56,7 @@ class ExtractAlembic(publish.Extractor): container = instance.data["instance_node"] - self.log.info("Extracting pointcache ...") + self.log.debug("Extracting pointcache ...") parent_dir = self.staging_dir(instance) file_name = "{name}.abc".format(**instance.data) diff --git a/openpype/hosts/max/plugins/publish/validate_pointcloud.py b/openpype/hosts/max/plugins/publish/validate_pointcloud.py index 1ff6eb126f..295a23f1f6 100644 --- a/openpype/hosts/max/plugins/publish/validate_pointcloud.py +++ b/openpype/hosts/max/plugins/publish/validate_pointcloud.py @@ -1,15 +1,6 @@ import pyblish.api from openpype.pipeline import PublishValidationError from pymxs import runtime as rt -from openpype.settings import get_project_settings -from openpype.pipeline import legacy_io - - -def get_setting(project_setting=None): - project_setting = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] - ) - return project_setting["max"]["PointCloud"] class ValidatePointCloud(pyblish.api.InstancePlugin): @@ -108,6 +99,9 @@ class ValidatePointCloud(pyblish.api.InstancePlugin): f"Validating tyFlow custom attributes for {container}") selection_list = instance.data["members"] + + project_setting = instance.data["project_setting"] + attr_settings = project_setting["max"]["PointCloud"]["attribute"] for sel in selection_list: obj = sel.baseobject anim_names = rt.GetSubAnimNames(obj) @@ -118,8 +112,7 @@ class ValidatePointCloud(pyblish.api.InstancePlugin): event_name = sub_anim.name opt = "${0}.{1}.export_particles".format(sel.name, event_name) - attributes = get_setting()["attribute"] - for key, value in attributes.items(): + for key, value in attr_settings.items(): custom_attr = "{0}.PRTChannels_{1}".format(opt, value) try: diff --git a/openpype/hosts/maya/api/action.py b/openpype/hosts/maya/api/action.py index 3b8e2c1848..277f4cc238 100644 --- a/openpype/hosts/maya/api/action.py +++ b/openpype/hosts/maya/api/action.py @@ -4,7 +4,6 @@ from __future__ import absolute_import import pyblish.api from openpype.client import get_asset_by_name -from openpype.pipeline import legacy_io from openpype.pipeline.publish import get_errored_instances_from_context @@ -80,7 
+79,7 @@ class GenerateUUIDsOnInvalidAction(pyblish.api.Action): asset_doc = instance.data.get("assetEntity") if not asset_doc: asset_name = instance.data["asset"] -            project_name = legacy_io.active_project() +            project_name = instance.context.data["projectName"] self.log.info(( "Asset is not stored on instance." " Querying by name \"{}\" from project \"{}\"" diff --git a/openpype/hosts/maya/api/commands.py b/openpype/hosts/maya/api/commands.py index 3e31875fd8..46494413b7 100644 --- a/openpype/hosts/maya/api/commands.py +++ b/openpype/hosts/maya/api/commands.py @@ -3,7 +3,7 @@ from maya import cmds from openpype.client import get_asset_by_name, get_project -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name, get_current_asset_name class ToolWindows: @@ -85,8 +85,8 @@ def reset_resolution(): resolution_height = 1080 # Get resolution from asset -    project_name = legacy_io.active_project() -    asset_name = legacy_io.Session["AVALON_ASSET"] +    project_name = get_current_project_name() +    asset_name = get_current_asset_name() asset_doc = get_asset_by_name(project_name, asset_name) resolution = _resolution_from_document(asset_doc) # Try get resolution from project diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index fca4410ede..40b3419e73 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -25,23 +25,18 @@ from openpype.client import ( ) from openpype.settings import get_project_settings from openpype.pipeline import ( -    legacy_io, +    get_current_project_name, +    get_current_asset_name, +    get_current_task_name, discover_loader_plugins, loaders_from_representation, get_representation_path, load_container, -    registered_host, -) -from openpype.pipeline.create import ( -    legacy_create, -    get_legacy_creator_by_name, -) -from openpype.pipeline.context_tools import ( -    get_current_asset_name, -    get_current_project_asset, -    get_current_project_name, -    get_current_task_name +    registered_host ) +from openpype.lib import NumberDef +from openpype.pipeline.context_tools import get_current_project_asset +from openpype.pipeline.create import CreateContext from openpype.lib.profiles_filtering import filter_profiles @@ -122,16 +117,14 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94} RENDERLIKE_INSTANCE_FAMILIES = ["rendering", "vrayscene"] -DISPLAY_LIGHTS_VALUES = [ -    "project_settings", "default", "all", "selected", "flat", "none" -] -DISPLAY_LIGHTS_LABELS = [ -    "Use Project Settings", -    "Default Lighting", -    "All Lights", -    "Selected Lights", -    "Flat Lighting", -    "No Lights" + +DISPLAY_LIGHTS_ENUM = [ +    {"label": "Use Project Settings", "value": "project_settings"}, +    {"label": "Default Lighting", "value": "default"}, +    {"label": "All Lights", "value": "all"}, +    {"label": "Selected Lights", "value": "selected"}, +    {"label": "Flat Lighting", "value": "flat"}, +    {"label": "No Lights", "value": "none"} ] @@ -343,8 +336,8 @@ def pairwise(iterable): return zip(a, a) -def collect_animation_data(fps=False): -    """Get the basic animation data +def collect_animation_defs(fps=False): +    """Get the basic animation attribute definitions for the publisher. 
Returns: OrderedDict @@ -363,17 +356,42 @@ def collect_animation_data(fps=False): handle_end = frame_end_handle - frame_end # build attributes - data = OrderedDict() - data["frameStart"] = frame_start - data["frameEnd"] = frame_end - data["handleStart"] = handle_start - data["handleEnd"] = handle_end - data["step"] = 1.0 + defs = [ + NumberDef("frameStart", + label="Frame Start", + default=frame_start, + decimals=0), + NumberDef("frameEnd", + label="Frame End", + default=frame_end, + decimals=0), + NumberDef("handleStart", + label="Handle Start", + default=handle_start, + decimals=0), + NumberDef("handleEnd", + label="Handle End", + default=handle_end, + decimals=0), + NumberDef("step", + label="Step size", + tooltip="A smaller step size means more samples and larger " + "output files.\n" + "A 1.0 step size is a single sample every frame.\n" + "A 0.5 step size is two samples per frame.\n" + "A 0.2 step size is five samples per frame.", + default=1.0, + decimals=3), + ] if fps: - data["fps"] = mel.eval('currentTimeUnitToFPS()') + current_fps = mel.eval('currentTimeUnitToFPS()') + fps_def = NumberDef( + "fps", label="FPS", default=current_fps, decimals=5 + ) + defs.append(fps_def) - return data + return defs def imprint(node, data): @@ -459,10 +477,10 @@ def lsattrs(attrs): attrs (dict): Name and value pairs of expected matches Example: - >> # Return nodes with an `age` of five. - >> lsattr({"age": "five"}) - >> # Return nodes with both `age` and `color` of five and blue. - >> lsattr({"age": "five", "color": "blue"}) + >>> # Return nodes with an `age` of five. + >>> lsattrs({"age": "five"}) + >>> # Return nodes with both `age` and `color` of five and blue. + >>> lsattrs({"age": "five", "color": "blue"}) Return: list: matching nodes. @@ -1392,8 +1410,8 @@ def generate_ids(nodes, asset_id=None): if asset_id is None: # Get the asset ID from the database for the asset of current context - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] + project_name = get_current_project_name() + asset_name = get_current_asset_name() asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"]) assert asset_doc, "No current asset found in Session" asset_id = asset_doc['_id'] @@ -1593,17 +1611,15 @@ def get_container_members(container): # region LOOKDEV -def list_looks(asset_id): +def list_looks(project_name, asset_id): """Return all look subsets for the given asset This assumes all look subsets start with "look*" in their names. 
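    Args:
        project_name (str): Name of the project to query.
        asset_id: Id of the asset to get the looks for.

    Returns:
        list: Look subset documents for the given asset.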
""" - # # get all subsets with look leading in # the name associated with the asset # TODO this should probably look for family 'look' instead of checking # subset name that can not start with family - project_name = legacy_io.active_project() subset_docs = get_subsets(project_name, asset_ids=[asset_id]) return [ subset_doc @@ -1625,7 +1641,7 @@ def assign_look_by_version(nodes, version_id): None """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() # Get representations of shader file and relationships look_representation = get_representation_by_name( @@ -1691,7 +1707,7 @@ def assign_look(nodes, subset="lookDefault"): parts = pype_id.split(":", 1) grouped[parts[0]].append(node) - project_name = legacy_io.active_project() + project_name = get_current_project_name() subset_docs = get_subsets( project_name, subset_names=[subset], asset_ids=grouped.keys() ) @@ -2205,6 +2221,35 @@ def set_scene_resolution(width, height, pixelAspect): cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect) +def get_fps_for_current_context(): + """Get fps that should be set for current context. + + Todos: + - Skip project value. + - Merge logic with 'get_frame_range' and 'reset_scene_resolution' -> + all the values in the functions can be collected at one place as + they have same requirements. + + Returns: + Union[int, float]: FPS value. + """ + + project_name = get_current_project_name() + asset_name = get_current_asset_name() + asset_doc = get_asset_by_name( + project_name, asset_name, fields=["data.fps"] + ) or {} + fps = asset_doc.get("data", {}).get("fps") + if not fps: + project_doc = get_project(project_name, fields=["data.fps"]) or {} + fps = project_doc.get("data", {}).get("fps") + + if not fps: + fps = 25 + + return convert_to_maya_fps(fps) + + def get_frame_range(include_animation_range=False): """Get the current assets frame range and handles. @@ -2279,10 +2324,7 @@ def reset_frame_range(playback=True, render=True, fps=True): fps (bool, Optional): Whether to set scene FPS. Defaults to True. 
""" if fps: - fps = convert_to_maya_fps( - float(legacy_io.Session.get("AVALON_FPS", 25)) - ) - set_scene_fps(fps) + set_scene_fps(get_fps_for_current_context()) frame_range = get_frame_range(include_animation_range=True) if not frame_range: @@ -2318,7 +2360,7 @@ def reset_scene_resolution(): None """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() project_doc = get_project(project_name) project_data = project_doc["data"] asset_data = get_current_project_asset()["data"] @@ -2351,19 +2393,9 @@ def set_context_settings(): None """ - # Todo (Wijnand): apply renderer and resolution of project - project_name = legacy_io.active_project() - project_doc = get_project(project_name) - project_data = project_doc["data"] - asset_doc = get_current_project_asset(fields=["data.fps"]) - asset_data = asset_doc.get("data", {}) # Set project fps - fps = convert_to_maya_fps( - asset_data.get("fps", project_data.get("fps", 25)) - ) - legacy_io.Session["AVALON_FPS"] = str(fps) - set_scene_fps(fps) + set_scene_fps(get_fps_for_current_context()) reset_scene_resolution() @@ -2383,9 +2415,7 @@ def validate_fps(): """ - expected_fps = convert_to_maya_fps( - get_current_project_asset(fields=["data.fps"])["data"]["fps"] - ) + expected_fps = get_fps_for_current_context() current_fps = mel.eval('currentTimeUnitToFPS()') fps_match = current_fps == expected_fps @@ -4086,12 +4116,10 @@ def create_rig_animation_instance( ) assert roots, "No root nodes in rig, this is a bug." - asset = legacy_io.Session["AVALON_ASSET"] - dependency = str(context["representation"]["_id"]) - custom_subset = options.get("animationSubsetName") if custom_subset: formatting_data = { + # TODO remove 'asset_type' and replace 'asset_name' with 'asset' "asset_name": context['asset']['name'], "asset_type": context['asset']['type'], "subset": context['subset']['name'], @@ -4109,14 +4137,17 @@ def create_rig_animation_instance( if log: log.info("Creating subset: {}".format(namespace)) + # Fill creator identifier + creator_identifier = "io.openpype.creators.maya.animation" + + host = registered_host() + create_context = CreateContext(host) + # Create the animation instance - creator_plugin = get_legacy_creator_by_name("CreateAnimation") with maintained_selection(): cmds.select([output, controls] + roots, noExpand=True) - legacy_create( - creator_plugin, - name=namespace, - asset=asset, - options={"useSelection": True}, - data={"dependencies": dependency} + create_context.create( + creator_identifier=creator_identifier, + variant=namespace, + pre_create_data={"use_selection": True} ) diff --git a/openpype/hosts/maya/api/lib_renderproducts.py b/openpype/hosts/maya/api/lib_renderproducts.py index 7bfb53d500..b5b71a5a36 100644 --- a/openpype/hosts/maya/api/lib_renderproducts.py +++ b/openpype/hosts/maya/api/lib_renderproducts.py @@ -177,7 +177,7 @@ def get(layer, render_instance=None): }.get(renderer_name.lower(), None) if renderer is None: raise UnsupportedRendererException( - "unsupported {}".format(renderer_name) + "Unsupported renderer: {}".format(renderer_name) ) return renderer(layer, render_instance) diff --git a/openpype/hosts/maya/api/lib_rendersettings.py b/openpype/hosts/maya/api/lib_rendersettings.py index eaa728a2f6..f54633c04d 100644 --- a/openpype/hosts/maya/api/lib_rendersettings.py +++ b/openpype/hosts/maya/api/lib_rendersettings.py @@ -6,13 +6,9 @@ import six import sys from openpype.lib import Logger -from openpype.settings import ( - get_project_settings, - get_current_project_settings -) +from 
openpype.settings import get_project_settings -from openpype.pipeline import legacy_io -from openpype.pipeline import CreatorError +from openpype.pipeline import CreatorError, get_current_project_name from openpype.pipeline.context_tools import get_current_project_asset from openpype.hosts.maya.api.lib import reset_frame_range @@ -27,21 +23,6 @@ class RenderSettings(object): 'mayahardware2': 'defaultRenderGlobals.imageFilePrefix' } - _image_prefixes = { - 'vray': get_current_project_settings()["maya"]["RenderSettings"]["vray_renderer"]["image_prefix"], # noqa - 'arnold': get_current_project_settings()["maya"]["RenderSettings"]["arnold_renderer"]["image_prefix"], # noqa - 'renderman': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["image_prefix"], # noqa - 'redshift': get_current_project_settings()["maya"]["RenderSettings"]["redshift_renderer"]["image_prefix"] # noqa - } - - # Renderman only - _image_dir = { - 'renderman': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["image_dir"], # noqa - 'cryptomatte': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["cryptomatte_dir"], # noqa - 'imageDisplay': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["imageDisplay_dir"], # noqa - "watermark": get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["watermark_dir"] # noqa - } - _aov_chars = { "dot": ".", "dash": "-", @@ -55,11 +36,30 @@ class RenderSettings(object): return cls._image_prefix_nodes[renderer] def __init__(self, project_settings=None): - self._project_settings = project_settings - if not self._project_settings: - self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] + if not project_settings: + project_settings = get_project_settings( + get_current_project_name() ) + render_settings = project_settings["maya"]["RenderSettings"] + image_prefixes = { + "vray": render_settings["vray_renderer"]["image_prefix"], + "arnold": render_settings["arnold_renderer"]["image_prefix"], + "renderman": render_settings["renderman_renderer"]["image_prefix"], + "redshift": render_settings["redshift_renderer"]["image_prefix"] + } + + # TODO probably should be stored to more explicit attribute + # Renderman only + renderman_settings = render_settings["renderman_renderer"] + _image_dir = { + "renderman": renderman_settings["image_dir"], + "cryptomatte": renderman_settings["cryptomatte_dir"], + "imageDisplay": renderman_settings["imageDisplay_dir"], + "watermark": renderman_settings["watermark_dir"] + } + self._image_prefixes = image_prefixes + self._image_dir = _image_dir + self._project_settings = project_settings def set_default_renderer_settings(self, renderer=None): """Set basic settings based on renderer.""" diff --git a/openpype/hosts/maya/api/menu.py b/openpype/hosts/maya/api/menu.py index 5284c0249d..715f54686c 100644 --- a/openpype/hosts/maya/api/menu.py +++ b/openpype/hosts/maya/api/menu.py @@ -7,7 +7,11 @@ import maya.utils import maya.cmds as cmds from openpype.settings import get_project_settings -from openpype.pipeline import legacy_io +from openpype.pipeline import ( + get_current_project_name, + get_current_asset_name, + get_current_task_name +) from openpype.pipeline.workfile import BuildWorkfile from openpype.tools.utils import host_tools from openpype.hosts.maya.api import lib, lib_rendersettings @@ -35,6 +39,13 @@ def _get_menu(menu_name=None): return widgets.get(menu_name) +def get_context_label(): + return "{}, 
{}".format( + get_current_asset_name(), + get_current_task_name() + ) + + def install(): if cmds.about(batch=True): log.info("Skipping openpype.menu initialization in batch mode..") @@ -45,19 +56,15 @@ def install(): parent_widget = get_main_window() cmds.menu( MENU_NAME, - label=legacy_io.Session["AVALON_LABEL"], + label=os.environ.get("AVALON_LABEL") or "OpenPype", tearOff=True, parent="MayaWindow" ) # Create context menu - context_label = "{}, {}".format( - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"] - ) cmds.menuItem( "currentContext", - label=context_label, + label=get_context_label(), parent=MENU_NAME, enable=False ) @@ -66,10 +73,12 @@ def install(): cmds.menuItem(divider=True) - # Create default items cmds.menuItem( "Create...", - command=lambda *args: host_tools.show_creator(parent=parent_widget) + command=lambda *args: host_tools.show_publisher( + parent=parent_widget, + tab="create" + ) ) cmds.menuItem( @@ -82,8 +91,9 @@ def install(): cmds.menuItem( "Publish...", - command=lambda *args: host_tools.show_publish( - parent=parent_widget + command=lambda *args: host_tools.show_publisher( + parent=parent_widget, + tab="publish" ), image=pyblish_icon ) @@ -192,7 +202,8 @@ def install(): return # load configuration of custom menu - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_name = get_current_project_name() + project_settings = get_project_settings(project_name) config = project_settings["maya"]["scriptsmenu"]["definition"] _menu = project_settings["maya"]["scriptsmenu"]["name"] @@ -249,8 +260,5 @@ def update_menu_task_label(): log.warning("Can't find menuItem: {}".format(object_name)) return - label = "{}, {}".format( - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"] - ) + label = get_context_label() cmds.menuItem(object_name, edit=True, label=label) diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index e2d00b5bd7..60495ac652 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -1,3 +1,5 @@ +import json +import base64 import os import errno import logging @@ -14,6 +16,7 @@ from openpype.host import ( HostBase, IWorkfileHost, ILoadHost, + IPublishHost, HostDirmap, ) from openpype.tools.utils import host_tools @@ -24,6 +27,9 @@ from openpype.lib import ( ) from openpype.pipeline import ( legacy_io, + get_current_project_name, + get_current_asset_name, + get_current_task_name, register_loader_plugin_path, register_inventory_action_path, register_creator_plugin_path, @@ -64,7 +70,7 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") AVALON_CONTAINERS = ":AVALON_CONTAINERS" -class MayaHost(HostBase, IWorkfileHost, ILoadHost): +class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): name = "maya" def __init__(self): @@ -72,7 +78,7 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): self._op_events = {} def install(self): - project_name = legacy_io.active_project() + project_name = get_current_project_name() project_settings = get_project_settings(project_name) # process path mapping dirmap_processor = MayaDirmap("maya", project_name, project_settings) @@ -150,6 +156,20 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): with lib.maintained_selection(): yield + def get_context_data(self): + data = cmds.fileInfo("OpenPypeContext", query=True) + if not data: + return {} + + data = data[0] # Maya seems to return a list + decoded = base64.b64decode(data).decode("utf-8") + return json.loads(decoded) + + def 
update_context_data(self, data, changes): +        json_str = json.dumps(data) +        encoded = base64.b64encode(json_str.encode("utf-8")) +        return cmds.fileInfo("OpenPypeContext", encoded) + def _register_callbacks(self): for handler, event in self._op_events.copy().items(): if event is None: @@ -303,7 +323,7 @@ def _remove_workfile_lock(): def handle_workfile_locks(): if lib.IS_HEADLESS: return False -    project_name = legacy_io.active_project() +    project_name = get_current_project_name() return is_workfile_lock_enabled(MayaHost.name, project_name) @@ -640,9 +660,9 @@ def on_task_changed(): lib.update_content_on_context_change() msg = " project: {}\n asset: {}\n task:{}".format( -        legacy_io.active_project(), -        legacy_io.Session["AVALON_ASSET"], -        legacy_io.Session["AVALON_TASK"] +        get_current_project_name(), +        get_current_asset_name(), +        get_current_task_name() ) lib.show_message( @@ -657,7 +677,7 @@ def before_workfile_open(): def before_workfile_save(event): -    project_name = legacy_io.active_project() +    project_name = get_current_project_name() if handle_workfile_locks(): _remove_workfile_lock() workdir_path = event["workdir_path"] diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index 0971251469..00d6602ef9 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -1,29 +1,50 @@ +import json import os - -from maya import cmds +from abc import ABCMeta import qargparse +import six +from maya import cmds +from maya.app.renderSetup.model import renderSetup -from openpype.lib import Logger +from openpype.lib import BoolDef, Logger +from openpype.settings import get_project_settings from openpype.pipeline import ( +    AVALON_CONTAINER_ID, +    Anatomy, + +    CreatedInstance, +    Creator as NewCreator, +    AutoCreator, +    HiddenCreator, + +    CreatorError, LegacyCreator, LoaderPlugin, get_representation_path, -    AVALON_CONTAINER_ID, -    Anatomy, ) from openpype.pipeline.load import LoadError -from openpype.settings import get_project_settings -from .pipeline import containerise -from . import lib +from openpype.client import get_asset_by_name +from openpype.pipeline.create import get_subset_name +from . import lib +from .lib import imprint, read +from .pipeline import containerise log = Logger.get_logger() +def _get_attr(node, attr, default=None): +    """Helper to get an attribute value, allowing the attribute to not exist.""" +    if not cmds.attributeQuery(attr, node=node, exists=True): +        return default +    return cmds.getAttr("{}.{}".format(node, attr)) + + # Backwards compatibility: these functions has been moved to lib. def get_reference_node(*args, **kwargs): -    """ +    """Get the reference node from the container members + Deprecated: This function was moved and will be removed in 3.16.x. """ @@ -60,9 +81,520 @@ class Creator(LegacyCreator): return instance +@six.add_metaclass(ABCMeta) +class MayaCreatorBase(object): + +    @staticmethod +    def cache_subsets(shared_data): +        """Cache instances for Creators to shared data. + +        Create `maya_cached_subsets` key when needed in shared data and +        fill it with all collected instances from the scene under its +        respective creator identifiers. + +        If legacy instances are detected in the scene, create +        `maya_cached_legacy_subsets` there and fill it with +        all legacy subsets under their family as a key. + +        Args: +            shared_data (Dict[str, Any]): Shared data. + +        Returns: +            Dict[str, Any]: Shared data dictionary. 
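+ +        Example: +            >>> shared_data = {} +            >>> MayaCreatorBase.cache_subsets(shared_data)  # doctest: +SKIP +            >>> # "maya_cached_subsets" now maps each creator identifier to +            >>> # its objectSet nodes (identifiers and names vary per scene).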
+ + """ + if shared_data.get("maya_cached_subsets") is None: + cache = dict() + cache_legacy = dict() + + for node in cmds.ls(type="objectSet"): + + if _get_attr(node, attr="id") != "pyblish.avalon.instance": + continue + + creator_id = _get_attr(node, attr="creator_identifier") + if creator_id is not None: + # creator instance + cache.setdefault(creator_id, []).append(node) + else: + # legacy instance + family = _get_attr(node, attr="family") + if family is None: + # must be a broken instance + continue + + cache_legacy.setdefault(family, []).append(node) + + shared_data["maya_cached_subsets"] = cache + shared_data["maya_cached_legacy_subsets"] = cache_legacy + return shared_data + + def imprint_instance_node(self, node, data): + + # We never store the instance_node as value on the node since + # it's the node name itself + data.pop("instance_node", None) + + # We store creator attributes at the root level and assume they + # will not clash in names with `subset`, `task`, etc. and other + # default names. This is just so these attributes in many cases + # are still editable in the maya UI by artists. + # pop to move to end of dict to sort attributes last on the node + creator_attributes = data.pop("creator_attributes", {}) + data.update(creator_attributes) + + # We know the "publish_attributes" will be complex data of + # settings per plugins, we'll store this as a flattened json structure + # pop to move to end of dict to sort attributes last on the node + data["publish_attributes"] = json.dumps( + data.pop("publish_attributes", {}) + ) + + # Since we flattened the data structure for creator attributes we want + # to correctly detect which flattened attributes should end back in the + # creator attributes when reading the data from the node, so we store + # the relevant keys as a string + data["__creator_attributes_keys"] = ",".join(creator_attributes.keys()) + + # Kill any existing attributes just so we can imprint cleanly again + for attr in data.keys(): + if cmds.attributeQuery(attr, node=node, exists=True): + cmds.deleteAttr("{}.{}".format(node, attr)) + + return imprint(node, data) + + def read_instance_node(self, node): + node_data = read(node) + + # Never care about a cbId attribute on the object set + # being read as 'data' + node_data.pop("cbId", None) + + # Move the relevant attributes into "creator_attributes" that + # we flattened originally + node_data["creator_attributes"] = {} + creator_attribute_keys = node_data.pop("__creator_attributes_keys", + "").split(",") + for key in creator_attribute_keys: + if key in node_data: + node_data["creator_attributes"][key] = node_data.pop(key) + + publish_attributes = node_data.get("publish_attributes") + if publish_attributes: + node_data["publish_attributes"] = json.loads(publish_attributes) + + # Explicitly re-parse the node name + node_data["instance_node"] = node + + return node_data + + def _default_collect_instances(self): + self.cache_subsets(self.collection_shared_data) + cached_subsets = self.collection_shared_data["maya_cached_subsets"] + for node in cached_subsets.get(self.identifier, []): + node_data = self.read_instance_node(node) + + created_instance = CreatedInstance.from_existing(node_data, self) + self._add_instance_to_context(created_instance) + + def _default_update_instances(self, update_list): + for created_inst, _changes in update_list: + data = created_inst.data_to_store() + node = data.get("instance_node") + + self.imprint_instance_node(node, data) + + def _default_remove_instances(self, instances): + """Remove 
specified instance from the scene. + +        This only removes the `id` parameter so the instance is no longer +        an instance, because it might contain valuable data for the artist. + +        """ +        for instance in instances: +            node = instance.data.get("instance_node") +            if node: +                cmds.delete(node) + +            self._remove_instance_from_context(instance) + + +@six.add_metaclass(ABCMeta) +class MayaCreator(NewCreator, MayaCreatorBase): + +    settings_name = None + +    def create(self, subset_name, instance_data, pre_create_data): + +        members = list() +        if pre_create_data.get("use_selection"): +            members = cmds.ls(selection=True) + +        with lib.undo_chunk(): +            instance_node = cmds.sets(members, name=subset_name) +            instance_data["instance_node"] = instance_node +            instance = CreatedInstance( +                self.family, +                subset_name, +                instance_data, +                self) +            self._add_instance_to_context(instance) + +            self.imprint_instance_node(instance_node, +                                       data=instance.data_to_store()) +            return instance + +    def collect_instances(self): +        return self._default_collect_instances() + +    def update_instances(self, update_list): +        return self._default_update_instances(update_list) + +    def remove_instances(self, instances): +        return self._default_remove_instances(instances) + +    def get_pre_create_attr_defs(self): +        return [ +            BoolDef("use_selection", +                    label="Use selection", +                    default=True) +        ] + +    def apply_settings(self, project_settings, system_settings): +        """Method called on initialization of plugin to apply settings.""" + +        settings_name = self.settings_name +        if settings_name is None: +            settings_name = self.__class__.__name__ + +        settings = project_settings["maya"]["create"] +        settings = settings.get(settings_name) +        if settings is None: +            self.log.debug( +                "No settings found for {}".format(self.__class__.__name__) +            ) +            return + +        for key, value in settings.items(): +            setattr(self, key, value) + + +class MayaAutoCreator(AutoCreator, MayaCreatorBase): +    """Automatically triggered creator for Maya. + +    The plugin is not visible in the UI, and its 'create' method does not +    expect any arguments. +    """ + +    def collect_instances(self): +        return self._default_collect_instances() + +    def update_instances(self, update_list): +        return self._default_update_instances(update_list) + +    def remove_instances(self, instances): +        return self._default_remove_instances(instances) + + +class MayaHiddenCreator(HiddenCreator, MayaCreatorBase): +    """Hidden creator for Maya. + +    The plugin is not visible in the UI, and it does not have strictly defined +    arguments for the 'create' method. +    """ + +    def create(self, *args, **kwargs): +        return MayaCreator.create(self, *args, **kwargs) + +    def collect_instances(self): +        return self._default_collect_instances() + +    def update_instances(self, update_list): +        return self._default_update_instances(update_list) + +    def remove_instances(self, instances): +        return self._default_remove_instances(instances) + + +def ensure_namespace(namespace): +    """Make sure the namespace exists. + +    Args: +        namespace (str): The preferred namespace name. + +    Returns: +        str: The generated or existing namespace + +    """ +    exists = cmds.namespace(exists=namespace) +    if exists: +        return namespace +    else: +        return cmds.namespace(add=namespace) + + +class RenderlayerCreator(NewCreator, MayaCreatorBase): +    """Creator which creates an instance per renderlayer in the workfile. + +    Creates and manages a renderlayer subset per renderLayer in the workfile. 
+ +    This generates a singleton node in the scene which, if it exists, tells the +    Creator to collect Maya rendersetup renderlayers as individual instances. +    As such, triggering create doesn't actually create the instance node per +    layer but only the node which tells the Creator it may now collect +    an instance per renderlayer. + +    """ + +    # These are required to be overridden in a subclass +    singleton_node_name = "" + +    # These are optional to be overridden in a subclass +    layer_instance_prefix = None + +    def _get_singleton_node(self, return_all=False): +        nodes = lib.lsattr("pre_creator_identifier", self.identifier) +        if nodes: +            return nodes if return_all else nodes[0] + +    def create(self, subset_name, instance_data, pre_create_data): +        # A Renderlayer is never explicitly created using the create method. +        # Instead, renderlayers from the scene are collected. Thus "create" +        # would only ever be called to say, 'hey, please refresh collect' +        self.create_singleton_node() + +        # If no render layers are present, create a default one with +        # an asterisk selector +        rs = renderSetup.instance() +        if not rs.getRenderLayers(): +            render_layer = rs.createRenderLayer("Main") +            collection = render_layer.createCollection("defaultCollection") +            collection.getSelector().setPattern('*') + +        # By RenderlayerCreator.create we make it so that the renderlayer +        # instances directly appear even though it just collects scene +        # renderlayers. This doesn't actually 'create' any scene contents. +        self.collect_instances() + +    def create_singleton_node(self): +        if self._get_singleton_node(): +            raise CreatorError("A Render instance already exists - only " +                               "one can be configured.") + +        with lib.undo_chunk(): +            node = cmds.sets(empty=True, name=self.singleton_node_name) +            lib.imprint(node, data={ +                "pre_creator_identifier": self.identifier +            }) + +        return node + +    def collect_instances(self): + +        # We only collect if the global render instance exists +        if not self._get_singleton_node(): +            return + +        rs = renderSetup.instance() +        layers = rs.getRenderLayers() +        for layer in layers: +            layer_instance_node = self.find_layer_instance_node(layer) +            if layer_instance_node: +                data = self.read_instance_node(layer_instance_node) +                instance = CreatedInstance.from_existing(data, creator=self) +            else: +                # No existing scene instance node for this layer. Note that +                # this instance will not have the `instance_node` data yet +                # until it's been saved/persisted at least once. 
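+                # Seed the new instance from the current context; the +                # renderlayer's name becomes the variant for the subset name.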
+                project_name = self.create_context.get_current_project_name() + +                instance_data = { +                    "asset": self.create_context.get_current_asset_name(), +                    "task": self.create_context.get_current_task_name(), +                    "variant": layer.name(), +                } +                asset_doc = get_asset_by_name(project_name, +                                              instance_data["asset"]) +                subset_name = self.get_subset_name( +                    layer.name(), +                    instance_data["task"], +                    asset_doc, +                    project_name) + +                instance = CreatedInstance( +                    family=self.family, +                    subset_name=subset_name, +                    data=instance_data, +                    creator=self +                ) + +            instance.transient_data["layer"] = layer +            self._add_instance_to_context(instance) + +    def find_layer_instance_node(self, layer): +        connected_sets = cmds.listConnections( +            "{}.message".format(layer.name()), +            source=False, +            destination=True, +            type="objectSet" +        ) or [] + +        for node in connected_sets: +            if not cmds.attributeQuery("creator_identifier", +                                       node=node, +                                       exists=True): +                continue + +            creator_identifier = cmds.getAttr(node + ".creator_identifier") +            if creator_identifier == self.identifier: +                self.log.info("Found node: {}".format(node)) +                return node + +    def _create_layer_instance_node(self, layer): + +        # We can only create the layer instance node if a CreateRender +        # instance exists +        create_render_set = self._get_singleton_node() +        if not create_render_set: +            raise CreatorError("Creating a renderlayer instance node is not " +                               "allowed if no 'CreateRender' instance exists") + +        namespace = "_{}".format(self.singleton_node_name) +        namespace = ensure_namespace(namespace) + +        name = "{}:{}".format(namespace, layer.name()) +        render_set = cmds.sets(name=name, empty=True) + +        # Keep an active link with the renderlayer so we can retrieve it +        # later by a physical maya connection instead of relying on the layer +        # name +        cmds.addAttr(render_set, longName="renderlayer", at="message") +        cmds.connectAttr("{}.message".format(layer.name()), +                         "{}.renderlayer".format(render_set), force=True) + +        # Add the set to the 'CreateRender' set. +        cmds.sets(render_set, forceElement=create_render_set) + +        return render_set + +    def update_instances(self, update_list): +        # We only generate the persisting layer data into the scene once +        # we save with the UI on e.g. validate or publish +        for instance, _changes in update_list: +            instance_node = instance.data.get("instance_node") + +            # Ensure a node exists to persist the data to +            if not instance_node: +                layer = instance.transient_data["layer"] +                instance_node = self._create_layer_instance_node(layer) +                instance.data["instance_node"] = instance_node + +            self.imprint_instance_node(instance_node, +                                       data=instance.data_to_store()) + +    def imprint_instance_node(self, node, data): +        # Do not ever try to update the `renderlayer` since it'll try +        # to remove the attribute and recreate it but fail to keep it a +        # message attribute link. We only ever imprint that on the initial +        # node creation. +        # TODO: Improve how this is handled +        data.pop("renderlayer", None) +        data.get("creator_attributes", {}).pop("renderlayer", None) + +        return super(RenderlayerCreator, self).imprint_instance_node(node, +                                                                     data=data) + +    def remove_instances(self, instances): +        """Remove specified instances from the scene. + +        This only removes the `id` parameter so the instance is no longer +        an instance, because it might contain valuable data for the artist. + +        """ +        # Instead of removing the single instance or renderlayers, we remove +        # the CreateRender node this creator relies on to decide whether +        # it should collect anything at all. 
+ nodes = self._get_singleton_node(return_all=True) + if nodes: + cmds.delete(nodes) + + # Remove ALL the instances even if only one gets deleted + for instance in list(self.create_context.instances): + if instance.get("creator_identifier") == self.identifier: + self._remove_instance_from_context(instance) + + # Remove the stored settings per renderlayer too + node = instance.data.get("instance_node") + if node and cmds.objExists(node): + cmds.delete(node) + + def get_subset_name( + self, + variant, + task_name, + asset_doc, + project_name, + host_name=None, + instance=None + ): + # Use the layer instance prefix (e.g. 'render') for the subset name + # instead of the creator's family ('renderlayer') + return get_subset_name(self.layer_instance_prefix, + variant, + task_name, + asset_doc, + project_name) + + class Loader(LoaderPlugin): hosts = ["maya"] + def get_custom_namespace_and_group(self, context, options, loader_key): + """Query Settings for custom namespace and group name templates. + + The group name template might be empty; in that case the loaded + items are not wrapped in a separate group. + + Args: + context (dict): Loaded representation context (contains the + asset, subset and project documents). + options (dict): Artist-modifiable options from the loader dialog. + loader_key (str): Key to get separate configuration from Settings + ('reference_loader'|'import_loader'). + """ + options["attach_to_root"] = True + + asset = context['asset'] + subset = context['subset'] + settings = get_project_settings(context['project']['name']) + custom_naming = settings['maya']['load'][loader_key] + + if not custom_naming['namespace']: + raise LoadError("No namespace specified in " + "Maya ReferenceLoader settings") + elif not custom_naming['group_name']: + self.log.debug("No custom group_name, no group will be created.") + options["attach_to_root"] = False + + formatting_data = { + "asset_name": asset['name'], + "asset_type": asset['type'], + "folder": { + "name": asset["name"], + }, + "subset": subset['name'], + "family": ( + subset['data'].get('family') or + subset['data']['families'][0] + ) + } + + custom_namespace = custom_naming['namespace'].format( + **formatting_data + ) + + custom_group_name = custom_naming['group_name'].format( + **formatting_data + ) + + return custom_group_name, custom_namespace, options + class ReferenceLoader(Loader): """A basic ReferenceLoader for Maya @@ -102,41 +634,16 @@ class ReferenceLoader(Loader): namespace=None, options=None ): - assert os.path.exists(self.fname), "%s does not exist." % self.fname + path = self.filepath_from_context(context) + assert os.path.exists(path), "%s does not exist."
% path - asset = context['asset'] - subset = context['subset'] - settings = get_project_settings(context['project']['name']) - custom_naming = settings['maya']['load']['reference_loader'] - loaded_containers = [] - - if not custom_naming['namespace']: - raise LoadError("No namespace specified in " - "Maya ReferenceLoader settings") - elif not custom_naming['group_name']: - raise LoadError("No group name specified in " - "Maya ReferenceLoader settings") - - formatting_data = { - "asset_name": asset['name'], - "asset_type": asset['type'], - "subset": subset['name'], - "family": ( - subset['data'].get('family') or - subset['data']['families'][0] - ) - } - - custom_namespace = custom_naming['namespace'].format( - **formatting_data - ) - - custom_group_name = custom_naming['group_name'].format( - **formatting_data - ) + custom_group_name, custom_namespace, options = \ + self.get_custom_namespace_and_group(context, options, + "reference_loader") count = options.get("count") or 1 + loaded_containers = [] for c in range(0, count): namespace = lib.get_custom_namespace(custom_namespace) group_name = "{}:{}".format( @@ -186,6 +693,7 @@ class ReferenceLoader(Loader): def update(self, container, representation): from maya import cmds + from openpype.hosts.maya.api.lib import get_container_members node = container["objectName"] diff --git a/openpype/hosts/maya/api/setdress.py b/openpype/hosts/maya/api/setdress.py index 0bb1f186eb..7624aacd0f 100644 --- a/openpype/hosts/maya/api/setdress.py +++ b/openpype/hosts/maya/api/setdress.py @@ -18,13 +18,13 @@ from openpype.client import ( ) from openpype.pipeline import ( schema, - legacy_io, discover_loader_plugins, loaders_from_representation, load_container, update_container, remove_container, get_representation_path, + get_current_project_name, ) from openpype.hosts.maya.api.lib import ( matrix_equals, @@ -289,7 +289,7 @@ def update_package_version(container, version): """ # Versioning (from `core.maya.pipeline`) - project_name = legacy_io.active_project() + project_name = get_current_project_name() current_representation = get_representation_by_id( project_name, container["representation"] ) @@ -332,7 +332,7 @@ def update_package(set_container, representation): """ # Load the original package data - project_name = legacy_io.active_project() + project_name = get_current_project_name() current_representation = get_representation_by_id( project_name, set_container["representation"] ) @@ -380,7 +380,7 @@ def update_scene(set_container, containers, current_data, new_data, new_file): """ set_namespace = set_container['namespace'] - project_name = legacy_io.active_project() + project_name = get_current_project_name() # Update the setdress hierarchy alembic set_root = get_container_transforms(set_container, root=True) diff --git a/openpype/hosts/maya/hooks/pre_auto_load_plugins.py b/openpype/hosts/maya/hooks/pre_auto_load_plugins.py index 689d7adb4f..4b1ea698a6 100644 --- a/openpype/hosts/maya/hooks/pre_auto_load_plugins.py +++ b/openpype/hosts/maya/hooks/pre_auto_load_plugins.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class MayaPreAutoLoadPlugins(PreLaunchHook): @@ -6,7 +6,8 @@ class MayaPreAutoLoadPlugins(PreLaunchHook): # Before AddLastWorkfileToLaunchArgs order = 9 - app_groups = ["maya"] + app_groups = {"maya"} + launch_types = {LaunchTypes.local} def execute(self): diff --git a/openpype/hosts/maya/hooks/pre_copy_mel.py b/openpype/hosts/maya/hooks/pre_copy_mel.py index 
6f90af4b7c..0fb5af149a 100644 --- a/openpype/hosts/maya/hooks/pre_copy_mel.py +++ b/openpype/hosts/maya/hooks/pre_copy_mel.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts.maya.lib import create_workspace_mel @@ -7,13 +7,15 @@ class PreCopyMel(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = ["maya"] + app_groups = {"maya"} + launch_types = {LaunchTypes.local} def execute(self): - project_name = self.launch_context.env.get("AVALON_PROJECT") + project_doc = self.data["project_doc"] workdir = self.launch_context.env.get("AVALON_WORKDIR") if not workdir: self.log.warning("BUG: Workdir is not filled.") return - create_workspace_mel(workdir, project_name) + project_settings = self.data["project_settings"] + create_workspace_mel(workdir, project_doc["name"], project_settings) diff --git a/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py b/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py index 7582ce0591..1fe3c3ca2c 100644 --- a/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py +++ b/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py @@ -1,4 +1,4 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class MayaPreOpenWorkfilePostInitialization(PreLaunchHook): @@ -6,7 +6,8 @@ class MayaPreOpenWorkfilePostInitialization(PreLaunchHook): # Before AddLastWorkfileToLaunchArgs. order = 9 - app_groups = ["maya"] + app_groups = {"maya"} + launch_types = {LaunchTypes.local} def execute(self): diff --git a/openpype/hosts/maya/lib.py b/openpype/hosts/maya/lib.py index ffb2f0b27c..765c60381b 100644 --- a/openpype/hosts/maya/lib.py +++ b/openpype/hosts/maya/lib.py @@ -3,7 +3,7 @@ from openpype.settings import get_project_settings from openpype.lib import Logger -def create_workspace_mel(workdir, project_name): +def create_workspace_mel(workdir, project_name, project_settings=None): dst_filepath = os.path.join(workdir, "workspace.mel") if os.path.exists(dst_filepath): return @@ -11,8 +11,9 @@ if not os.path.exists(workdir): os.makedirs(workdir) - project_setting = get_project_settings(project_name) - mel_script = project_setting["maya"].get("mel_workspace") + if not project_settings: + project_settings = get_project_settings(project_name) + mel_script = project_settings["maya"].get("mel_workspace") # Skip if mel script in settings is empty if not mel_script: diff --git a/openpype/hosts/maya/plugins/create/convert_legacy.py b/openpype/hosts/maya/plugins/create/convert_legacy.py new file mode 100644 index 0000000000..cd8faf291b --- /dev/null +++ b/openpype/hosts/maya/plugins/create/convert_legacy.py @@ -0,0 +1,178 @@ +from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin +from openpype.hosts.maya.api import plugin +from openpype.hosts.maya.api.lib import read + +from openpype.client import get_asset_by_name + +from maya import cmds +from maya.app.renderSetup.model import renderSetup + + +class MayaLegacyConvertor(SubsetConvertorPlugin, + plugin.MayaCreatorBase): +    """Find and convert any legacy subsets in the scene. + + This Convertor will find all legacy subsets in the scene and will + transform them to the current system. Since the old subsets don't + retain any information about their original creators, the only mapping + we can do is based on their families.
+ + Its limitation is that you can have multiple creators creating subsets + of the same family and there is no way to handle that case. This code should + nevertheless cover all creators that came with OpenPype. + + """ + identifier = "io.openpype.creators.maya.legacy" + + # Cases where the identifier or new family doesn't correspond to the + # original family on the legacy instances + special_family_conversions = { + "rendering": "io.openpype.creators.maya.renderlayer", + } + + def find_instances(self): + + self.cache_subsets(self.collection_shared_data) + legacy = self.collection_shared_data.get("maya_cached_legacy_subsets") + if not legacy: + return + + self.add_convertor_item("Convert legacy instances") + + def convert(self): + self.remove_convertor_item() + + # We can't use the collected shared data cache here, so + # we re-query it directly to convert everything found. + cache = {} + self.cache_subsets(cache) + legacy = cache.get("maya_cached_legacy_subsets") + if not legacy: + return + + # From all current new style manual creators find the mapping + # from family to identifier + family_to_id = {} + for identifier, creator in self.create_context.creators.items(): + family = getattr(creator, "family", None) + if not family: + continue + + if family in family_to_id: + # We have a clash of family -> identifier. Multiple + # new style creators use the same family + self.log.warning("Clash on family->identifier: " + "{}".format(identifier)) + family_to_id[family] = identifier + + family_to_id.update(self.special_family_conversions) + + # We also embed the current 'task' into the instance since legacy + # instances didn't store that data on the instances. The old-style + # logic was thus effectively always live to the current task. + data = dict() + data["task"] = self.create_context.get_current_task_name() + for family, instance_nodes in legacy.items(): + if family not in family_to_id: + self.log.warning( + "Unable to convert legacy instance with family '{}'" + " because no new-style creator matches that family" + "".format(family) + ) + continue + + creator_id = family_to_id[family] + creator = self.create_context.creators[creator_id] + data["creator_identifier"] = creator_id + + if isinstance(creator, plugin.RenderlayerCreator): + self._convert_per_renderlayer(instance_nodes, data, creator) + else: + self._convert_regular(instance_nodes, data) + + def _convert_regular(self, instance_nodes, data): + # We only imprint the creator identifier for it to identify + # as the new style creator + for instance_node in instance_nodes: + self.imprint_instance_node(instance_node, + data=data.copy()) + + def _convert_per_renderlayer(self, instance_nodes, data, creator): + # Split the instance into an instance per layer + rs = renderSetup.instance() + layers = rs.getRenderLayers() + if not layers: + self.log.error( + "Can't convert legacy renderlayer instance because no existing" + " renderSetup layers exist in the scene."
+ ) + return + + creator_attribute_names = { + attr_def.key for attr_def in creator.get_instance_attr_defs() + } + + for instance_node in instance_nodes: + + # Ensure we have the new style singleton node generated + # TODO: Make function public + singleton_node = creator._get_singleton_node() + if singleton_node: + self.log.error( + "Can't convert legacy renderlayer instance '{}' because" + " new style instance '{}' already exists".format( + instance_node, + singleton_node + ) + ) + continue + + creator.create_singleton_node() + + # We are creating new nodes to replace the original instance + # Copy the attributes of the original instance to the new node + original_data = read(instance_node) + + # The family gets converted to the new family (this is due to + # "rendering" family being converted to "renderlayer" family) + original_data["family"] = creator.family + + # Recreate the subset name, as otherwise it would end up as + # `renderingMain` instead of the correct `renderMain` + project_name = self.create_context.get_current_project_name() + asset_doc = get_asset_by_name(project_name, + original_data["asset"]) + subset_name = creator.get_subset_name( + original_data["variant"], + data["task"], + asset_doc, + project_name) + original_data["subset"] = subset_name + + # Convert to creator attributes when relevant + creator_attributes = {} + for key in list(original_data.keys()): + # Iterate in order of the original attributes to preserve order + # in the output creator attributes + if key in creator_attribute_names: + creator_attributes[key] = original_data.pop(key) + original_data["creator_attributes"] = creator_attributes + + # Imprint an instance node per renderSetup layer + for layer in layers: + layer_instance_node = creator.find_layer_instance_node(layer) + if not layer_instance_node: + # TODO: Make function public + layer_instance_node = creator._create_layer_instance_node( + layer + ) + + # Transfer the main attributes of the original instance + layer_data = original_data.copy() + layer_data.update(data) + + self.imprint_instance_node(layer_instance_node, + data=layer_data) + + # Delete the legacy instance node + cmds.delete(instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_animation.py b/openpype/hosts/maya/plugins/create/create_animation.py index 095cbcdd64..214ac18aef 100644 --- a/openpype/hosts/maya/plugins/create/create_animation.py +++ b/openpype/hosts/maya/plugins/create/create_animation.py @@ -2,59 +2,90 @@ from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import ( + BoolDef, + TextDef ) -class CreateAnimation(plugin.Creator): - """Animation output for character rigs""" - - # We hide the animation creator from the UI since the creation of it - # is automated upon loading a rig. There's an inventory action to recreate - # it for loaded rigs if by chance someone deleted the animation instance. - # Note: This setting is actually applied from project settings - enabled = False +class CreateAnimation(plugin.MayaHiddenCreator): + """Animation output for character rigs + We hide the animation creator from the UI since the creation of it is + automated upon loading a rig. There's an inventory action to recreate it + for loaded rigs if by chance someone deleted the animation instance.
+ """ + identifier = "io.openpype.creators.maya.animation" name = "animationDefault" label = "Animation" family = "animation" icon = "male" + write_color_sets = False write_face_sets = False include_parent_hierarchy = False include_user_defined_attributes = False - def __init__(self, *args, **kwargs): - super(CreateAnimation, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # create an ordered dict with the existing data first + defs = lib.collect_animation_defs() - # get basic animation data : start / end / handles / steps - for key, value in lib.collect_animation_data().items(): - self.data[key] = value - - # Write vertex colors with the geometry. - self.data["writeColorSets"] = self.write_color_sets - self.data["writeFaceSets"] = self.write_face_sets - - # Include only renderable visible shapes. - # Skips locators and empty transforms - self.data["renderableOnly"] = False - - # Include only nodes that are visible at least once during the - # frame range. - self.data["visibleOnly"] = False - - # Include the groups above the out_SET content - self.data["includeParentHierarchy"] = self.include_parent_hierarchy - - # Default to exporting world-space - self.data["worldSpace"] = True + defs.extend([ + BoolDef("writeColorSets", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=self.write_color_sets), + BoolDef("writeFaceSets", + label="Write face sets", + tooltip="Write face sets with the geometry", + default=self.write_face_sets), + BoolDef("writeNormals", + label="Write normals", + tooltip="Write normals with the deforming geometry", + default=True), + BoolDef("renderableOnly", + label="Renderable Only", + tooltip="Only export renderable visible shapes", + default=False), + BoolDef("visibleOnly", + label="Visible Only", + tooltip="Only export dag objects visible during " + "frame range", + default=False), + BoolDef("includeParentHierarchy", + label="Include Parent Hierarchy", + tooltip="Whether to include parent hierarchy of nodes in " + "the publish instance", + default=self.include_parent_hierarchy), + BoolDef("worldSpace", + label="World-Space Export", + default=True), + BoolDef("includeUserDefinedAttributes", + label="Include User Defined Attributes", + default=self.include_user_defined_attributes), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + placeholder="prefix1, prefix2") + ]) + # TODO: Implement these on a Deadline plug-in instead? + """ # Default to not send to farm. self.data["farm"] = False self.data["priority"] = 50 + """ - # Default to write normals. - self.data["writeNormals"] = True + return defs - value = self.include_user_defined_attributes - self.data["includeUserDefinedAttributes"] = value + def apply_settings(self, project_settings, system_settings): + super(CreateAnimation, self).apply_settings( + project_settings, system_settings + ) + # Hardcoding creator to be enabled due to existing settings would + # disable the creator causing the creator plugin to not be + # discoverable. 
+ self.enabled = True diff --git a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py index 2afb897e94..1ef132725f 100644 --- a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py +++ b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py @@ -2,17 +2,21 @@ from openpype.hosts.maya.api import ( lib, plugin ) - -from maya import cmds +from openpype.lib import ( + NumberDef, + BoolDef ) -class CreateArnoldSceneSource(plugin.Creator): +class CreateArnoldSceneSource(plugin.MayaCreator): """Arnold Scene Source""" - name = "ass" + identifier = "io.openpype.creators.maya.ass" label = "Arnold Scene Source" family = "ass" icon = "cube" + settings_name = "CreateAss" + expandProcedurals = False motionBlur = True motionBlurKeys = 2 @@ -28,39 +32,71 @@ class CreateArnoldSceneSource(plugin.Creator): maskColor_manager = False maskOperator = False - def __init__(self, *args, **kwargs): - super(CreateArnoldSceneSource, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data - self.data.update(lib.collect_animation_data()) + defs = lib.collect_animation_defs() - self.data["expandProcedurals"] = self.expandProcedurals - self.data["motionBlur"] = self.motionBlur - self.data["motionBlurKeys"] = self.motionBlurKeys - self.data["motionBlurLength"] = self.motionBlurLength + defs.extend([ + BoolDef("expandProcedurals", + label="Expand Procedurals", + default=self.expandProcedurals), + BoolDef("motionBlur", + label="Motion Blur", + default=self.motionBlur), + NumberDef("motionBlurKeys", + label="Motion Blur Keys", + decimals=0, + default=self.motionBlurKeys), + NumberDef("motionBlurLength", + label="Motion Blur Length", + decimals=3, + default=self.motionBlurLength), - # Masks - self.data["maskOptions"] = self.maskOptions - self.data["maskCamera"] = self.maskCamera - self.data["maskLight"] = self.maskLight - self.data["maskShape"] = self.maskShape - self.data["maskShader"] = self.maskShader - self.data["maskOverride"] = self.maskOverride - self.data["maskDriver"] = self.maskDriver - self.data["maskFilter"] = self.maskFilter - self.data["maskColor_manager"] = self.maskColor_manager - self.data["maskOperator"] = self.maskOperator + # Masks + BoolDef("maskOptions", + label="Export Options", + default=self.maskOptions), + BoolDef("maskCamera", + label="Export Cameras", + default=self.maskCamera), + BoolDef("maskLight", + label="Export Lights", + default=self.maskLight), + BoolDef("maskShape", + label="Export Shapes", + default=self.maskShape), + BoolDef("maskShader", + label="Export Shaders", + default=self.maskShader), + BoolDef("maskOverride", + label="Export Override Nodes", + default=self.maskOverride), + BoolDef("maskDriver", + label="Export Drivers", + default=self.maskDriver), + BoolDef("maskFilter", + label="Export Filters", + default=self.maskFilter), + BoolDef("maskOperator", + label="Export Operators", + default=self.maskOperator), + BoolDef("maskColor_manager", + label="Export Color Managers", + default=self.maskColor_manager), + ]) - def process(self): - instance = super(CreateArnoldSceneSource, self).process() + return defs - nodes = [] + def create(self, subset_name, instance_data, pre_create_data): - if (self.options or {}).get("useSelection"): - nodes = cmds.ls(selection=True) + from maya import cmds - cmds.sets(nodes, rm=instance) + instance = super(CreateArnoldSceneSource, self).create( + subset_name, instance_data, pre_create_data + ) - assContent =
cmds.sets(name=instance + "_content_SET") - assProxy = cmds.sets(name=instance + "_proxy_SET", empty=True) - cmds.sets([assContent, assProxy], forceElement=instance) + instance_node = instance.get("instance_node") + + content = cmds.sets(name=instance_node + "_content_SET", empty=True) + proxy = cmds.sets(name=instance_node + "_proxy_SET", empty=True) + cmds.sets([content, proxy], forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_assembly.py b/openpype/hosts/maya/plugins/create/create_assembly.py index ff5e1d45c4..813fe4da04 100644 --- a/openpype/hosts/maya/plugins/create/create_assembly.py +++ b/openpype/hosts/maya/plugins/create/create_assembly.py @@ -1,10 +1,10 @@ from openpype.hosts.maya.api import plugin -class CreateAssembly(plugin.Creator): +class CreateAssembly(plugin.MayaCreator): """A grouped package of loaded content""" - name = "assembly" + identifier = "io.openpype.creators.maya.assembly" label = "Assembly" family = "assembly" icon = "cubes" diff --git a/openpype/hosts/maya/plugins/create/create_camera.py b/openpype/hosts/maya/plugins/create/create_camera.py index 8b2c881036..0219f56330 100644 --- a/openpype/hosts/maya/plugins/create/create_camera.py +++ b/openpype/hosts/maya/plugins/create/create_camera.py @@ -2,33 +2,35 @@ from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import BoolDef -class CreateCamera(plugin.Creator): +class CreateCamera(plugin.MayaCreator): """Single baked camera""" - name = "cameraMain" + identifier = "io.openpype.creators.maya.camera" label = "Camera" family = "camera" icon = "video-camera" - def __init__(self, *args, **kwargs): - super(CreateCamera, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # get basic animation data : start / end / handles / steps - animation_data = lib.collect_animation_data() - for key, value in animation_data.items(): - self.data[key] = value + defs = lib.collect_animation_defs() - # Bake to world space by default, when this is False it will also - # include the parent hierarchy in the baked results - self.data['bakeToWorldSpace'] = True + defs.extend([ + BoolDef("bakeToWorldSpace", + label="Bake to World-Space", + tooltip="Bake to World-Space", + default=True), + ]) + + return defs -class CreateCameraRig(plugin.Creator): +class CreateCameraRig(plugin.MayaCreator): """Complex hierarchy with camera.""" - name = "camerarigMain" + identifier = "io.openpype.creators.maya.camerarig" label = "Camera Rig" family = "camerarig" icon = "video-camera" diff --git a/openpype/hosts/maya/plugins/create/create_layout.py b/openpype/hosts/maya/plugins/create/create_layout.py index 1768a3d49e..168743d4dc 100644 --- a/openpype/hosts/maya/plugins/create/create_layout.py +++ b/openpype/hosts/maya/plugins/create/create_layout.py @@ -1,16 +1,21 @@ from openpype.hosts.maya.api import plugin +from openpype.lib import BoolDef -class CreateLayout(plugin.Creator): +class CreateLayout(plugin.MayaCreator): """A grouped package of loaded content""" - name = "layoutMain" + identifier = "io.openpype.creators.maya.layout" label = "Layout" family = "layout" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateLayout, self).__init__(*args, **kwargs) - # enable this when you want to - # publish group of loaded asset - self.data["groupLoadedAssets"] = False + def get_instance_attr_defs(self): + + return [ + BoolDef("groupLoadedAssets", + label="Group Loaded Assets", + tooltip="Enable this when you want to publish a group of " + "loaded assets", + default=False) + ] diff
--git a/openpype/hosts/maya/plugins/create/create_look.py b/openpype/hosts/maya/plugins/create/create_look.py index 51b0b8819a..385ae81e01 100644 --- a/openpype/hosts/maya/plugins/create/create_look.py +++ b/openpype/hosts/maya/plugins/create/create_look.py @@ -1,29 +1,53 @@ from openpype.hosts.maya.api import ( - lib, - plugin + plugin, + lib ) +from openpype.lib import ( + BoolDef, + TextDef ) -class CreateLook(plugin.Creator): +class CreateLook(plugin.MayaCreator): """Shader connections defining shape look""" - name = "look" + identifier = "io.openpype.creators.maya.look" label = "Look" family = "look" icon = "paint-brush" + make_tx = True rs_tex = False - def __init__(self, *args, **kwargs): - super(CreateLook, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - self.data["renderlayer"] = lib.get_current_renderlayer() + return [ + # TODO: This value should actually get set on create! + TextDef("renderLayer", + # TODO: Bug: Hidden attribute's label is still shown in UI? + hidden=True, + default=lib.get_current_renderlayer(), + label="Renderlayer", + tooltip="Renderlayer to extract the look from"), + BoolDef("maketx", + label="MakeTX", + tooltip="Whether to generate .tx files for your textures", + default=self.make_tx), + BoolDef("rstex", + label="Convert textures to .rstex", + tooltip="Whether to generate Redshift .rstex files for " + "your textures", + default=self.rs_tex), + BoolDef("forceCopy", + label="Force Copy", + tooltip="Enable users to force a copy instead of a hardlink." + "\nNote: On Windows copy is always forced due to " + "bugs in Windows' implementation of hardlinks.", + default=False) + ] - # Whether to automatically convert the textures to .tx upon publish. - self.data["maketx"] = self.make_tx - # Whether to automatically convert the textures to .rstex upon publish. - self.data["rstex"] = self.rs_tex - # Enable users to force a copy.
- # - on Windows is "forceCopy" always changed to `True` because of - # windows implementation of hardlinks - self.data["forceCopy"] = False + def get_pre_create_attr_defs(self): + # Show same attributes on create but include use selection + defs = super(CreateLook, self).get_pre_create_attr_defs() + defs.extend(self.get_instance_attr_defs()) + return defs diff --git a/openpype/hosts/maya/plugins/create/create_mayaascii.py b/openpype/hosts/maya/plugins/create/create_mayascene.py similarity index 65% rename from openpype/hosts/maya/plugins/create/create_mayaascii.py rename to openpype/hosts/maya/plugins/create/create_mayascene.py index f54f2df812..b61c97aebf 100644 --- a/openpype/hosts/maya/plugins/create/create_mayaascii.py +++ b/openpype/hosts/maya/plugins/create/create_mayascene.py @@ -1,9 +1,10 @@ from openpype.hosts.maya.api import plugin -class CreateMayaScene(plugin.Creator): +class CreateMayaScene(plugin.MayaCreator): """Raw Maya Scene file export""" + identifier = "io.openpype.creators.maya.mayascene" name = "mayaScene" label = "Maya Scene" family = "mayaScene" diff --git a/openpype/hosts/maya/plugins/create/create_model.py b/openpype/hosts/maya/plugins/create/create_model.py index 520e962f74..5c3dd04af0 100644 --- a/openpype/hosts/maya/plugins/create/create_model.py +++ b/openpype/hosts/maya/plugins/create/create_model.py @@ -1,26 +1,43 @@ from openpype.hosts.maya.api import plugin +from openpype.lib import ( + BoolDef, + TextDef +) -class CreateModel(plugin.Creator): +class CreateModel(plugin.MayaCreator): """Polygonal static geometry""" - name = "modelMain" + identifier = "io.openpype.creators.maya.model" label = "Model" family = "model" icon = "cube" - defaults = ["Main", "Proxy", "_MD", "_HD", "_LD"] + default_variants = ["Main", "Proxy", "_MD", "_HD", "_LD"] + write_color_sets = False write_face_sets = False - def __init__(self, *args, **kwargs): - super(CreateModel, self).__init__(*args, **kwargs) - # Vertex colors with the geometry - self.data["writeColorSets"] = self.write_color_sets - self.data["writeFaceSets"] = self.write_face_sets + def get_instance_attr_defs(self): - # Include attributes by attribute name or prefix - self.data["attr"] = "" - self.data["attrPrefix"] = "" - - # Whether to include parent hierarchy of nodes in the instance - self.data["includeParentHierarchy"] = False + return [ + BoolDef("writeColorSets", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=self.write_color_sets), + BoolDef("writeFaceSets", + label="Write face sets", + tooltip="Write face sets with the geometry", + default=self.write_face_sets), + BoolDef("includeParentHierarchy", + label="Include Parent Hierarchy", + tooltip="Whether to include parent hierarchy of nodes in " + "the publish instance", + default=False), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + placeholder="prefix1, prefix2") + ] diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_look.py b/openpype/hosts/maya/plugins/create/create_multiverse_look.py index f47c88a93b..f27eb57fc1 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_look.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_look.py @@ -1,15 +1,27 @@ from openpype.hosts.maya.api import plugin +from openpype.lib import ( + BoolDef, + EnumDef +) -class CreateMultiverseLook(plugin.Creator): +class CreateMultiverseLook(plugin.MayaCreator): """Create Multiverse Look""" - name = 
"mvLook" + identifier = "io.openpype.creators.maya.mvlook" label = "Multiverse Look" family = "mvLook" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateMultiverseLook, self).__init__(*args, **kwargs) - self.data["fileFormat"] = ["usda", "usd"] - self.data["publishMipMap"] = True + def get_instance_attr_defs(self): + + return [ + EnumDef("fileFormat", + label="File Format", + tooltip="USD export file format", + items=["usda", "usd"], + default="usda"), + BoolDef("publishMipMap", + label="Publish MipMap", + default=True), + ] diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py index 8cd76b5f40..0b0ad3bccb 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py @@ -1,53 +1,135 @@ from openpype.hosts.maya.api import plugin, lib +from openpype.lib import ( + BoolDef, + NumberDef, + TextDef, + EnumDef +) -class CreateMultiverseUsd(plugin.Creator): +class CreateMultiverseUsd(plugin.MayaCreator): """Create Multiverse USD Asset""" - name = "mvUsdMain" + identifier = "io.openpype.creators.maya.mvusdasset" label = "Multiverse USD Asset" family = "usd" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateMultiverseUsd, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data first, since it maintains order. - self.data.update(lib.collect_animation_data(True)) + defs = lib.collect_animation_defs(fps=True) + defs.extend([ + EnumDef("fileFormat", + label="File format", + items=["usd", "usda", "usdz"], + default="usd"), + BoolDef("stripNamespaces", + label="Strip Namespaces", + default=True), + BoolDef("mergeTransformAndShape", + label="Merge Transform and Shape", + default=False), + BoolDef("writeAncestors", + label="Write Ancestors", + default=True), + BoolDef("flattenParentXforms", + label="Flatten Parent Xforms", + default=False), + BoolDef("writeSparseOverrides", + label="Write Sparse Overrides", + default=False), + BoolDef("useMetaPrimPath", + label="Use Meta Prim Path", + default=False), + TextDef("customRootPath", + label="Custom Root Path", + default=''), + TextDef("customAttributes", + label="Custom Attributes", + tooltip="Comma-separated list of attribute names", + default=''), + TextDef("nodeTypesToIgnore", + label="Node Types to Ignore", + tooltip="Comma-separated list of node types to be ignored", + default=''), + BoolDef("writeMeshes", + label="Write Meshes", + default=True), + BoolDef("writeCurves", + label="Write Curves", + default=True), + BoolDef("writeParticles", + label="Write Particles", + default=True), + BoolDef("writeCameras", + label="Write Cameras", + default=False), + BoolDef("writeLights", + label="Write Lights", + default=False), + BoolDef("writeJoints", + label="Write Joints", + default=False), + BoolDef("writeCollections", + label="Write Collections", + default=False), + BoolDef("writePositions", + label="Write Positions", + default=True), + BoolDef("writeNormals", + label="Write Normals", + default=True), + BoolDef("writeUVs", + label="Write UVs", + default=True), + BoolDef("writeColorSets", + label="Write Color Sets", + default=False), + BoolDef("writeTangents", + label="Write Tangents", + default=False), + BoolDef("writeRefPositions", + label="Write Ref Positions", + default=True), + BoolDef("writeBlendShapes", + label="Write BlendShapes", + default=False), + BoolDef("writeDisplayColor", + label="Write Display Color", + default=True), + 
BoolDef("writeSkinWeights", + label="Write Skin Weights", + default=False), + BoolDef("writeMaterialAssignment", + label="Write Material Assignment", + default=False), + BoolDef("writeHardwareShader", + label="Write Hardware Shader", + default=False), + BoolDef("writeShadingNetworks", + label="Write Shading Networks", + default=False), + BoolDef("writeTransformMatrix", + label="Write Transform Matrix", + default=True), + BoolDef("writeUsdAttributes", + label="Write USD Attributes", + default=True), + BoolDef("writeInstancesAsReferences", + label="Write Instances as References", + default=False), + BoolDef("timeVaryingTopology", + label="Time Varying Topology", + default=False), + TextDef("customMaterialNamespace", + label="Custom Material Namespace", + default=''), + NumberDef("numTimeSamples", + label="Num Time Samples", + default=1), + NumberDef("timeSamplesSpan", + label="Time Samples Span", + default=0.0), + ]) - self.data["fileFormat"] = ["usd", "usda", "usdz"] - self.data["stripNamespaces"] = True - self.data["mergeTransformAndShape"] = False - self.data["writeAncestors"] = True - self.data["flattenParentXforms"] = False - self.data["writeSparseOverrides"] = False - self.data["useMetaPrimPath"] = False - self.data["customRootPath"] = '' - self.data["customAttributes"] = '' - self.data["nodeTypesToIgnore"] = '' - self.data["writeMeshes"] = True - self.data["writeCurves"] = True - self.data["writeParticles"] = True - self.data["writeCameras"] = False - self.data["writeLights"] = False - self.data["writeJoints"] = False - self.data["writeCollections"] = False - self.data["writePositions"] = True - self.data["writeNormals"] = True - self.data["writeUVs"] = True - self.data["writeColorSets"] = False - self.data["writeTangents"] = False - self.data["writeRefPositions"] = True - self.data["writeBlendShapes"] = False - self.data["writeDisplayColor"] = True - self.data["writeSkinWeights"] = False - self.data["writeMaterialAssignment"] = False - self.data["writeHardwareShader"] = False - self.data["writeShadingNetworks"] = False - self.data["writeTransformMatrix"] = True - self.data["writeUsdAttributes"] = True - self.data["writeInstancesAsReferences"] = False - self.data["timeVaryingTopology"] = False - self.data["customMaterialNamespace"] = '' - self.data["numTimeSamples"] = 1 - self.data["timeSamplesSpan"] = 0.0 + return defs diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py index ed466a8068..66ddd83eda 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_comp.py @@ -1,26 +1,48 @@ from openpype.hosts.maya.api import plugin, lib +from openpype.lib import ( + BoolDef, + NumberDef, + EnumDef +) -class CreateMultiverseUsdComp(plugin.Creator): +class CreateMultiverseUsdComp(plugin.MayaCreator): """Create Multiverse USD Composition""" - name = "mvUsdCompositionMain" + identifier = "io.openpype.creators.maya.mvusdcomposition" label = "Multiverse USD Composition" family = "mvUsdComposition" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateMultiverseUsdComp, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data first, since it maintains order. 
- self.data.update(lib.collect_animation_data(True)) + defs = lib.collect_animation_defs(fps=True) + defs.extend([ + EnumDef("fileFormat", + label="File format", + items=["usd", "usda"], + default="usd"), + BoolDef("stripNamespaces", + label="Strip Namespaces", + default=False), + BoolDef("mergeTransformAndShape", + label="Merge Transform and Shape", + default=False), + BoolDef("flattenContent", + label="Flatten Content", + default=False), + BoolDef("writeAsCompoundLayers", + label="Write As Compound Layers", + default=False), + BoolDef("writePendingOverrides", + label="Write Pending Overrides", + default=False), + NumberDef("numTimeSamples", + label="Num Time Samples", + default=1), + NumberDef("timeSamplesSpan", + label="Time Samples Span", + default=0.0), + ]) - # Order of `fileFormat` must match extract_multiverse_usd_comp.py - self.data["fileFormat"] = ["usda", "usd"] - self.data["stripNamespaces"] = False - self.data["mergeTransformAndShape"] = False - self.data["flattenContent"] = False - self.data["writeAsCompoundLayers"] = False - self.data["writePendingOverrides"] = False - self.data["numTimeSamples"] = 1 - self.data["timeSamplesSpan"] = 0.0 + return defs diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py index 06e22df295..e1534dd68c 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd_over.py @@ -1,30 +1,59 @@ from openpype.hosts.maya.api import plugin, lib +from openpype.lib import ( + BoolDef, + NumberDef, + EnumDef +) -class CreateMultiverseUsdOver(plugin.Creator): +class CreateMultiverseUsdOver(plugin.MayaCreator): """Create Multiverse USD Override""" - name = "mvUsdOverrideMain" + identifier = "io.openpype.creators.maya.mvusdoverride" label = "Multiverse USD Override" family = "mvUsdOverride" icon = "cubes" - def __init__(self, *args, **kwargs): - super(CreateMultiverseUsdOver, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): + defs = lib.collect_animation_defs(fps=True) + defs.extend([ + EnumDef("fileFormat", + label="File format", + items=["usd", "usda"], + default="usd"), + BoolDef("writeAll", + label="Write All", + default=False), + BoolDef("writeTransforms", + label="Write Transforms", + default=True), + BoolDef("writeVisibility", + label="Write Visibility", + default=True), + BoolDef("writeAttributes", + label="Write Attributes", + default=True), + BoolDef("writeMaterials", + label="Write Materials", + default=True), + BoolDef("writeVariants", + label="Write Variants", + default=True), + BoolDef("writeVariantsDefinition", + label="Write Variants Definition", + default=True), + BoolDef("writeActiveState", + label="Write Active State", + default=True), + BoolDef("writeNamespaces", + label="Write Namespaces", + default=False), + NumberDef("numTimeSamples", + label="Num Time Samples", + default=1), + NumberDef("timeSamplesSpan", + label="Time Samples Span", + default=0.0), + ]) - # Add animation data first, since it maintains order.
- self.data.update(lib.collect_animation_data(True)) - - # Order of `fileFormat` must match extract_multiverse_usd_over.py - self.data["fileFormat"] = ["usda", "usd"] - self.data["writeAll"] = False - self.data["writeTransforms"] = True - self.data["writeVisibility"] = True - self.data["writeAttributes"] = True - self.data["writeMaterials"] = True - self.data["writeVariants"] = True - self.data["writeVariantsDefinition"] = True - self.data["writeActiveState"] = True - self.data["writeNamespaces"] = False - self.data["numTimeSamples"] = 1 - self.data["timeSamplesSpan"] = 0.0 + return defs diff --git a/openpype/hosts/maya/plugins/create/create_pointcache.py b/openpype/hosts/maya/plugins/create/create_pointcache.py index 1b8d5e6850..f4e8cbfc9a 100644 --- a/openpype/hosts/maya/plugins/create/create_pointcache.py +++ b/openpype/hosts/maya/plugins/create/create_pointcache.py @@ -4,47 +4,85 @@ from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import ( + BoolDef, + TextDef +) -class CreatePointCache(plugin.Creator): +class CreatePointCache(plugin.MayaCreator): """Alembic pointcache for animated data""" - name = "pointcache" - label = "Point Cache" + identifier = "io.openpype.creators.maya.pointcache" + label = "Pointcache" family = "pointcache" icon = "gears" write_color_sets = False write_face_sets = False include_user_defined_attributes = False - def __init__(self, *args, **kwargs): - super(CreatePointCache, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data - self.data.update(lib.collect_animation_data()) + defs = lib.collect_animation_defs() - # Vertex colors with the geometry. - self.data["writeColorSets"] = self.write_color_sets - # Vertex colors with the geometry. - self.data["writeFaceSets"] = self.write_face_sets - self.data["renderableOnly"] = False # Only renderable visible shapes - self.data["visibleOnly"] = False # only nodes that are visible - self.data["includeParentHierarchy"] = False # Include parent groups - self.data["worldSpace"] = True # Default to exporting world-space - self.data["refresh"] = False # Default to suspend refresh. 
- - # Add options for custom attributes - value = self.include_user_defined_attributes - self.data["includeUserDefinedAttributes"] = value - self.data["attr"] = "" - self.data["attrPrefix"] = "" + defs.extend([ + BoolDef("writeColorSets", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=False), + BoolDef("writeFaceSets", + label="Write face sets", + tooltip="Write face sets with the geometry", + default=False), + BoolDef("renderableOnly", + label="Renderable Only", + tooltip="Only export renderable visible shapes", + default=False), + BoolDef("visibleOnly", + label="Visible Only", + tooltip="Only export dag objects visible during " + "frame range", + default=False), + BoolDef("includeParentHierarchy", + label="Include Parent Hierarchy", + tooltip="Whether to include parent hierarchy of nodes in " + "the publish instance", + default=False), + BoolDef("worldSpace", + label="World-Space Export", + default=True), + BoolDef("refresh", + label="Refresh viewport during export", + default=False), + BoolDef("includeUserDefinedAttributes", + label="Include User Defined Attributes", + default=self.include_user_defined_attributes), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + default="", + placeholder="prefix1, prefix2") + ]) + # TODO: Implement these on a Deadline plug-in instead? + """ # Default to not send to farm. self.data["farm"] = False self.data["priority"] = 50 + """ - def process(self): - instance = super(CreatePointCache, self).process() + return defs - assProxy = cmds.sets(name=instance + "_proxy_SET", empty=True) - cmds.sets(assProxy, forceElement=instance) + def create(self, subset_name, instance_data, pre_create_data): + + instance = super(CreatePointCache, self).create( + subset_name, instance_data, pre_create_data + ) + instance_node = instance.get("instance_node") + + # For Arnold standin proxy + proxy_set = cmds.sets(name=instance_node + "_proxy_SET", empty=True) + cmds.sets(proxy_set, forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_proxy_abc.py b/openpype/hosts/maya/plugins/create/create_proxy_abc.py index 2946f7b530..d89470ebee 100644 --- a/openpype/hosts/maya/plugins/create/create_proxy_abc.py +++ b/openpype/hosts/maya/plugins/create/create_proxy_abc.py @@ -2,34 +2,49 @@ from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import ( + BoolDef, + TextDef +) -class CreateProxyAlembic(plugin.Creator): +class CreateProxyAlembic(plugin.MayaCreator): """Proxy Alembic for animated data""" - name = "proxyAbcMain" + identifier = "io.openpype.creators.maya.proxyabc" label = "Proxy Alembic" family = "proxyAbc" icon = "gears" write_color_sets = False write_face_sets = False - def __init__(self, *args, **kwargs): - super(CreateProxyAlembic, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - # Add animation data - self.data.update(lib.collect_animation_data()) + defs = lib.collect_animation_defs() - # Vertex colors with the geometry. - self.data["writeColorSets"] = self.write_color_sets - # Vertex colors with the geometry. 
- self.data["writeFaceSets"] = self.write_face_sets - # Default to exporting world-space - self.data["worldSpace"] = True + defs.extend([ + BoolDef("writeColorSets", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=self.write_color_sets), + BoolDef("writeFaceSets", + label="Write face sets", + tooltip="Write face sets with the geometry", + default=self.write_face_sets), + BoolDef("worldSpace", + label="World-Space Export", + default=True), + TextDef("nameSuffix", + label="Name Suffix for Bounding Box", + default="_BBox", + placeholder="_BBox"), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + placeholder="prefix1, prefix2") + ]) - # name suffix for the bounding box - self.data["nameSuffix"] = "_BBox" - - # Add options for custom attributes - self.data["attr"] = "" - self.data["attrPrefix"] = "" + return defs diff --git a/openpype/hosts/maya/plugins/create/create_redshift_proxy.py b/openpype/hosts/maya/plugins/create/create_redshift_proxy.py index 419a8d99d4..2490738e8f 100644 --- a/openpype/hosts/maya/plugins/create/create_redshift_proxy.py +++ b/openpype/hosts/maya/plugins/create/create_redshift_proxy.py @@ -2,22 +2,24 @@ """Creator of Redshift proxy subset types.""" from openpype.hosts.maya.api import plugin, lib +from openpype.lib import BoolDef -class CreateRedshiftProxy(plugin.Creator): +class CreateRedshiftProxy(plugin.MayaCreator): """Create instance of Redshift Proxy subset.""" - name = "redshiftproxy" + identifier = "io.openpype.creators.maya.redshiftproxy" label = "Redshift Proxy" family = "redshiftproxy" icon = "gears" - def __init__(self, *args, **kwargs): - super(CreateRedshiftProxy, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - animation_data = lib.collect_animation_data() + defs = [ + BoolDef("animation", + label="Export animation", + default=False) + ] - self.data["animation"] = False - self.data["proxyFrameStart"] = animation_data["frameStart"] - self.data["proxyFrameEnd"] = animation_data["frameEnd"] - self.data["proxyFrameStep"] = animation_data["step"] + defs.extend(lib.collect_animation_defs()) + return defs diff --git a/openpype/hosts/maya/plugins/create/create_render.py b/openpype/hosts/maya/plugins/create/create_render.py index 4681175808..cc5c1eb205 100644 --- a/openpype/hosts/maya/plugins/create/create_render.py +++ b/openpype/hosts/maya/plugins/create/create_render.py @@ -1,425 +1,108 @@ # -*- coding: utf-8 -*- """Create ``Render`` instance in Maya.""" -import json -import os -import appdirs -import requests - -from maya import cmds -from maya.app.renderSetup.model import renderSetup - -from openpype.settings import ( - get_system_settings, - get_project_settings, -) -from openpype.lib import requests_get -from openpype.modules import ModulesManager -from openpype.pipeline import legacy_io from openpype.hosts.maya.api import ( - lib, lib_rendersettings, plugin ) +from openpype.pipeline import CreatorError +from openpype.lib import ( + BoolDef, + NumberDef, +) -class CreateRender(plugin.Creator): - """Create *render* instance. +class CreateRenderlayer(plugin.RenderlayerCreator): + """Create and manages renderlayer subset per renderLayer in workfile. - Render instances are not actually published, they hold options for - collecting of render data. 
It render instance is present, it will trigger - collection of render layers, AOVs, cameras for either direct submission - to render farm or export as various standalone formats (like V-Rays - ``vrscenes`` or Arnolds ``ass`` files) and then submitting them to render - farm. - - Instance has following attributes:: - - primaryPool (list of str): Primary list of slave machine pool to use. - secondaryPool (list of str): Optional secondary list of slave pools. - suspendPublishJob (bool): Suspend the job after it is submitted. - extendFrames (bool): Use already existing frames from previous version - to extend current render. - overrideExistingFrame (bool): Overwrite already existing frames. - priority (int): Submitted job priority - framesPerTask (int): How many frames per task to render. This is - basically job division on render farm. - whitelist (list of str): White list of slave machines - machineList (list of str): Specific list of slave machines to use - useMayaBatch (bool): Use Maya batch mode to render as opposite to - Maya interactive mode. This consumes different licenses. - vrscene (bool): Submit as ``vrscene`` file for standalone V-Ray - renderer. - ass (bool): Submit as ``ass`` file for standalone Arnold renderer. - tileRendering (bool): Instance is set to tile rendering mode. We - won't submit actual render, but we'll make publish job to wait - for Tile Assembly job done and then publish. - strict_error_checking (bool): Enable/disable error checking on DL - - See Also: - https://pype.club/docs/artist_hosts_maya#creating-basic-render-setup + This generates a single node in the scene which, if it exists, tells the + Creator to collect Maya rendersetup renderlayers as individual instances. + As such, triggering create doesn't actually create the instance node per + layer but only the node which tells the Creator it may now collect + the renderlayers.
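+ + Only one such singleton node can exist at a time; trying to create it + again raises a CreatorError.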
""" + identifier = "io.openpype.creators.maya.renderlayer" + family = "renderlayer" label = "Render" - family = "rendering" icon = "eye" - _token = None - _user = None - _password = None - _project_settings = None + layer_instance_prefix = "render" + singleton_node_name = "renderingMain" - def __init__(self, *args, **kwargs): - """Constructor.""" - super(CreateRender, self).__init__(*args, **kwargs) + render_settings = {} - # Defaults - self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"]) - if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]: # noqa + @classmethod + def apply_settings(cls, project_settings, system_settings): + cls.render_settings = project_settings["maya"]["RenderSettings"] + + def create(self, subset_name, instance_data, pre_create_data): + # Only allow a single render instance to exist + if self._get_singleton_node(): + raise CreatorError("A Render instance already exists - only " + "one can be configured.") + + # Apply default project render settings on create + if self.render_settings.get("apply_render_settings"): lib_rendersettings.RenderSettings().set_default_renderer_settings() - # Deadline-only - manager = ModulesManager() - deadline_settings = get_system_settings()["modules"]["deadline"] - if not deadline_settings["enabled"]: - self.deadline_servers = {} - return - self.deadline_module = manager.modules_by_name["deadline"] - try: - default_servers = deadline_settings["deadline_urls"] - project_servers = ( - self._project_settings["deadline"]["deadline_servers"] - ) - self.deadline_servers = { - k: default_servers[k] - for k in project_servers - if k in default_servers - } + super(CreateRenderlayer, self).create(subset_name, + instance_data, + pre_create_data) - if not self.deadline_servers: - self.deadline_servers = default_servers - - except AttributeError: - # Handle situation were we had only one url for deadline. - # get default deadline webservice url from deadline module - self.deadline_servers = self.deadline_module.deadline_urls - - def process(self): - """Entry point.""" - exists = cmds.ls(self.name) - if exists: - cmds.warning("%s already exists." 
% exists[0]) - return - - use_selection = self.options.get("useSelection") - with lib.undo_chunk(): - self._create_render_settings() - self.instance = super(CreateRender, self).process() - # create namespace with instance - index = 1 - namespace_name = "_{}".format(str(self.instance)) - try: - cmds.namespace(rm=namespace_name) - except RuntimeError: - # namespace is not empty, so we leave it untouched - pass - - while cmds.namespace(exists=namespace_name): - namespace_name = "_{}{}".format(str(self.instance), index) - index += 1 - - namespace = cmds.namespace(add=namespace_name) - - # add Deadline server selection list - if self.deadline_servers: - cmds.scriptJob( - attributeChange=[ - "{}.deadlineServers".format(self.instance), - self._deadline_webservice_changed - ]) - - cmds.setAttr("{}.machineList".format(self.instance), lock=True) - rs = renderSetup.instance() - layers = rs.getRenderLayers() - if use_selection: - self.log.info("Processing existing layers") - sets = [] - for layer in layers: - self.log.info(" - creating set for {}:{}".format( - namespace, layer.name())) - render_set = cmds.sets( - n="{}:{}".format(namespace, layer.name())) - sets.append(render_set) - cmds.sets(sets, forceElement=self.instance) - - # if no render layers are present, create default one with - # asterisk selector - if not layers: - render_layer = rs.createRenderLayer('Main') - collection = render_layer.createCollection("defaultCollection") - collection.getSelector().setPattern('*') - - return self.instance - - def _deadline_webservice_changed(self): - """Refresh Deadline server dependent options.""" - # get selected server - webservice = self.deadline_servers[ - self.server_aliases[ - cmds.getAttr("{}.deadlineServers".format(self.instance)) - ] - ] - pools = self.deadline_module.get_deadline_pools(webservice, self.log) - cmds.deleteAttr("{}.primaryPool".format(self.instance)) - cmds.deleteAttr("{}.secondaryPool".format(self.instance)) - - pool_setting = (self._project_settings["deadline"] - ["publish"] - ["CollectDeadlinePools"]) - - primary_pool = pool_setting["primary_pool"] - sorted_pools = self._set_default_pool(list(pools), primary_pool) - cmds.addAttr( - self.instance, - longName="primaryPool", - attributeType="enum", - enumName=":".join(sorted_pools) - ) - cmds.setAttr( - "{}.primaryPool".format(self.instance), - 0, - keyable=False, - channelBox=True - ) - - pools = ["-"] + pools - secondary_pool = pool_setting["secondary_pool"] - sorted_pools = self._set_default_pool(list(pools), secondary_pool) - cmds.addAttr( - self.instance, - longName="secondaryPool", - attributeType="enum", - enumName=":".join(sorted_pools) - ) - cmds.setAttr( - "{}.secondaryPool".format(self.instance), - 0, - keyable=False, - channelBox=True - ) - - def _create_render_settings(self): + def get_instance_attr_defs(self): """Create instance settings.""" - # get pools (slave machines of the render farm) - pool_names = [] - default_priority = 50 - self.data["suspendPublishJob"] = False - self.data["review"] = True - self.data["extendFrames"] = False - self.data["overrideExistingFrame"] = True - # self.data["useLegacyRenderLayers"] = True - self.data["priority"] = default_priority - self.data["tile_priority"] = default_priority - self.data["framesPerTask"] = 1 - self.data["whitelist"] = False - self.data["machineList"] = "" - self.data["useMayaBatch"] = False - self.data["tileRendering"] = False - self.data["tilesX"] = 2 - self.data["tilesY"] = 2 - self.data["convertToScanline"] = False - self.data["useReferencedAovs"] = False - 
self.data["renderSetupIncludeLights"] = ( - self._project_settings.get( - "maya", {}).get( - "RenderSettings", {}).get( - "enable_all_lights", False) - ) - # Disable for now as this feature is not working yet - # self.data["assScene"] = False + return [ + BoolDef("review", + label="Review", + tooltip="Mark as reviewable", + default=True), + BoolDef("extendFrames", + label="Extend Frames", + tooltip="Extends the frames on top of the previous " + "publish.\nIf the previous was 1001-1050 and you " + "would now submit 1020-1070 only the new frames " + "1051-1070 would be rendered and published " + "together with the previously rendered frames.\n" + "If 'overrideExistingFrame' is enabled it *will* " + "render any existing frames.", + default=False), + BoolDef("overrideExistingFrame", + label="Override Existing Frame", + tooltip="Override existing rendered frames " + "(if they exist).", + default=True), - system_settings = get_system_settings()["modules"] + # TODO: Should these move to submit_maya_deadline plugin? + # Tile rendering + BoolDef("tileRendering", + label="Enable tiled rendering", + default=False), + NumberDef("tilesX", + label="Tiles X", + default=2, + minimum=1, + decimals=0), + NumberDef("tilesY", + label="Tiles Y", + default=2, + minimum=1, + decimals=0), - deadline_enabled = system_settings["deadline"]["enabled"] - muster_enabled = system_settings["muster"]["enabled"] - muster_url = system_settings["muster"]["MUSTER_REST_URL"] + # Additional settings + BoolDef("convertToScanline", + label="Convert to Scanline", + tooltip="Convert the output images to scanline images", + default=False), + BoolDef("useReferencedAovs", + label="Use Referenced AOVs", + tooltip="Consider the AOVs from referenced scenes as well", + default=False), - if deadline_enabled and muster_enabled: - self.log.error( - "Both Deadline and Muster are enabled. " "Cannot support both." - ) - raise RuntimeError("Both Deadline and Muster are enabled") - - if deadline_enabled: - self.server_aliases = list(self.deadline_servers.keys()) - self.data["deadlineServers"] = self.server_aliases - - try: - deadline_url = self.deadline_servers["default"] - except KeyError: - # if 'default' server is not between selected, - # use first one for initial list of pools. - deadline_url = next(iter(self.deadline_servers.values())) - # Uses function to get pool machines from the assigned deadline - # url in settings - pool_names = self.deadline_module.get_deadline_pools(deadline_url, - self.log) - maya_submit_dl = self._project_settings.get( - "deadline", {}).get( - "publish", {}).get( - "MayaSubmitDeadline", {}) - priority = maya_submit_dl.get("priority", default_priority) - self.data["priority"] = priority - - tile_priority = maya_submit_dl.get("tile_priority", - default_priority) - self.data["tile_priority"] = tile_priority - - strict_error_checking = maya_submit_dl.get("strict_error_checking", - True) - self.data["strict_error_checking"] = strict_error_checking - - # Pool attributes should be last since they will be recreated when - # the deadline server changes. 
- pool_setting = (self._project_settings["deadline"] - ["publish"] - ["CollectDeadlinePools"]) - primary_pool = pool_setting["primary_pool"] - self.data["primaryPool"] = self._set_default_pool(pool_names, - primary_pool) - # We add a string "-" to allow the user to not - # set any secondary pools - pool_names = ["-"] + pool_names - secondary_pool = pool_setting["secondary_pool"] - self.data["secondaryPool"] = self._set_default_pool(pool_names, - secondary_pool) - - if muster_enabled: - self.log.info(">>> Loading Muster credentials ...") - self._load_credentials() - self.log.info(">>> Getting pools ...") - pools = [] - try: - pools = self._get_muster_pools() - except requests.exceptions.HTTPError as e: - if e.startswith("401"): - self.log.warning("access token expired") - self._show_login() - raise RuntimeError("Access token expired") - except requests.exceptions.ConnectionError: - self.log.error("Cannot connect to Muster API endpoint.") - raise RuntimeError("Cannot connect to {}".format(muster_url)) - for pool in pools: - self.log.info(" - pool: {}".format(pool["name"])) - pool_names.append(pool["name"]) - - self.options = {"useSelection": False} # Force no content - - def _set_default_pool(self, pool_names, pool_value): - """Reorder pool names, default should come first""" - if pool_value and pool_value in pool_names: - pool_names.remove(pool_value) - pool_names = [pool_value] + pool_names - return pool_names - - def _load_credentials(self): - """Load Muster credentials. - - Load Muster credentials from file and set ``MUSTER_USER``, - ``MUSTER_PASSWORD``, ``MUSTER_REST_URL`` is loaded from settings. - - Raises: - RuntimeError: If loaded credentials are invalid. - AttributeError: If ``MUSTER_REST_URL`` is not set. - - """ - app_dir = os.path.normpath(appdirs.user_data_dir("pype-app", "pype")) - file_name = "muster_cred.json" - fpath = os.path.join(app_dir, file_name) - file = open(fpath, "r") - muster_json = json.load(file) - self._token = muster_json.get("token", None) - if not self._token: - self._show_login() - raise RuntimeError("Invalid access token for Muster") - file.close() - self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL") - if not self.MUSTER_REST_URL: - raise AttributeError("Muster REST API url not set") - - def _get_muster_pools(self): - """Get render pools from Muster. - - Raises: - Exception: If pool list cannot be obtained from Muster. - - """ - params = {"authToken": self._token} - api_entry = "/api/pools/list" - response = requests_get(self.MUSTER_REST_URL + api_entry, - params=params) - if response.status_code != 200: - if response.status_code == 401: - self.log.warning("Authentication token expired.") - self._show_login() - else: - self.log.error( - ("Cannot get pools from " - "Muster: {}").format(response.status_code) - ) - raise Exception("Cannot get pools from Muster") - try: - pools = response.json()["ResponseData"]["pools"] - except ValueError as e: - self.log.error("Invalid response from Muster server {}".format(e)) - raise Exception("Invalid response from Muster server") - - return pools - - def _show_login(self): - # authentication token expired so we need to login to Muster - # again to get it. We use Pype API call to show login window. 
- api_url = "{}/muster/show_login".format( - os.environ["OPENPYPE_WEBSERVER_URL"]) - self.log.debug(api_url) - login_response = requests_get(api_url, timeout=1) - if login_response.status_code != 200: - self.log.error("Cannot show login form to Muster") - raise Exception("Cannot show login form to Muster") - - def _requests_post(self, *args, **kwargs): - """Wrap request post method. - - Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment - variable is found. This is useful when Deadline or Muster server are - running with self-signed certificates and their certificate is not - added to trusted certificates on client machines. - - Warning: - Disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. - - """ - if "verify" not in kwargs: - kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) - return requests.post(*args, **kwargs) - - def _requests_get(self, *args, **kwargs): - """Wrap request get method. - - Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment - variable is found. This is useful when Deadline or Muster server are - running with self-signed certificates and their certificate is not - added to trusted certificates on client machines. - - Warning: - Disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. - - """ - if "verify" not in kwargs: - kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) - return requests.get(*args, **kwargs) + BoolDef("renderSetupIncludeLights", + label="Render Setup Include Lights", + default=self.render_settings.get("enable_all_lights", + False)) + ] diff --git a/openpype/hosts/maya/plugins/create/create_rendersetup.py b/openpype/hosts/maya/plugins/create/create_rendersetup.py index 494f90d87b..dd64a0a842 100644 --- a/openpype/hosts/maya/plugins/create/create_rendersetup.py +++ b/openpype/hosts/maya/plugins/create/create_rendersetup.py @@ -1,55 +1,31 @@ -from openpype.hosts.maya.api import ( - lib, - plugin -) -from maya import cmds +from openpype.hosts.maya.api import plugin +from openpype.pipeline import CreatorError -class CreateRenderSetup(plugin.Creator): +class CreateRenderSetup(plugin.MayaCreator): """Create rendersetup template json data""" - name = "rendersetup" + identifier = "io.openpype.creators.maya.rendersetup" label = "Render Setup Preset" family = "rendersetup" icon = "tablet" - def __init__(self, *args, **kwargs): - super(CreateRenderSetup, self).__init__(*args, **kwargs) + def get_pre_create_attr_defs(self): + # Do not show the "use_selection" setting from parent class + return [] - # here we can pre-create renderSetup layers, possibly utlizing - # settings for it. 
+ def create(self, subset_name, instance_data, pre_create_data): - # _____ - # / __\__ - # | / __\__ - # | | / \ - # | | | | - # \__| | | - # \__| | - # \_____/ + existing_instance = None + for instance in self.create_context.instances: + if instance.family == self.family: + existing_instance = instance + break - # from pype.api import get_project_settings - # import maya.app.renderSetup.model.renderSetup as renderSetup - # settings = get_project_settings(os.environ['AVALON_PROJECT']) - # layer = settings['maya']['create']['renderSetup']["layer"] + if existing_instance: + raise CreatorError("A RenderSetup instance already exists - only " + "one can be configured.") - # rs = renderSetup.instance() - # rs.createRenderLayer(layer) - - self.options = {"useSelection": False} # Force no content - - def process(self): - exists = cmds.ls(self.name) - assert len(exists) <= 1, ( - "More than one renderglobal exists, this is a bug" - ) - - if exists: - return cmds.warning("%s already exists." % exists[0]) - - with lib.undo_chunk(): - instance = super(CreateRenderSetup, self).process() - - self.data["renderSetup"] = "42" - null = cmds.sets(name="null_SET", empty=True) - cmds.sets([null], forceElement=instance) + super(CreateRenderSetup, self).create(subset_name, + instance_data, + pre_create_data) diff --git a/openpype/hosts/maya/plugins/create/create_review.py b/openpype/hosts/maya/plugins/create/create_review.py index 40ae99b57c..f60e2406bc 100644 --- a/openpype/hosts/maya/plugins/create/create_review.py +++ b/openpype/hosts/maya/plugins/create/create_review.py @@ -1,76 +1,142 @@ -import os -from collections import OrderedDict import json +from maya import cmds + from openpype.hosts.maya.api import ( lib, plugin ) -from openpype.settings import get_project_settings -from openpype.pipeline import get_current_project_name, get_current_task_name +from openpype.lib import ( + BoolDef, + NumberDef, + EnumDef +) +from openpype.pipeline import CreatedInstance from openpype.client import get_asset_by_name +TRANSPARENCIES = [ + "preset", + "simple", + "object sorting", + "weighted average", + "depth peeling", + "alpha cut" +] -class CreateReview(plugin.Creator): - """Single baked camera""" - name = "reviewDefault" +class CreateReview(plugin.MayaCreator): + """Playblast reviewable""" + + identifier = "io.openpype.creators.maya.review" label = "Review" family = "review" icon = "video-camera" - keepImages = False - isolate = False - imagePlane = True - Width = 0 - Height = 0 - transparency = [ - "preset", - "simple", - "object sorting", - "weighted average", - "depth peeling", - "alpha cut" - ] + useMayaTimeline = True panZoom = False - def __init__(self, *args, **kwargs): - super(CreateReview, self).__init__(*args, **kwargs) - data = OrderedDict(**self.data) + # Overriding "create" method to prefill values from settings. 
+ def create(self, subset_name, instance_data, pre_create_data): - project_name = get_current_project_name() - asset_doc = get_asset_by_name(project_name, data["asset"]) - task_name = get_current_task_name() + members = list() + if pre_create_data.get("use_selection"): + members = cmds.ls(selection=True) + + project_name = self.project_name + asset_doc = get_asset_by_name(project_name, instance_data["asset"]) + task_name = instance_data["task"] preset = lib.get_capture_preset( task_name, asset_doc["data"]["tasks"][task_name]["type"], - data["subset"], - get_project_settings(project_name), + subset_name, + self.project_settings, self.log ) - if os.environ.get("OPENPYPE_DEBUG") == "1": - self.log.debug( - "Using preset: {}".format( - json.dumps(preset, indent=4, sort_keys=True) - ) + self.log.debug( + "Using preset: {}".format( + json.dumps(preset, indent=4, sort_keys=True) ) + ) + + with lib.undo_chunk(): + instance_node = cmds.sets(members, name=subset_name) + instance_data["instance_node"] = instance_node + instance = CreatedInstance( + self.family, + subset_name, + instance_data, + self) + + creator_attribute_defs_by_key = { + x.key: x for x in instance.creator_attribute_defs + } + mapping = { + "review_width": preset["Resolution"]["width"], + "review_height": preset["Resolution"]["height"], + "isolate": preset["Generic"]["isolate_view"], + "imagePlane": preset["Viewport Options"]["imagePlane"], + "panZoom": preset["Generic"]["pan_zoom"] + } + for key, value in mapping.items(): + creator_attribute_defs_by_key[key].default = value + + self._add_instance_to_context(instance) + + self.imprint_instance_node(instance_node, + data=instance.data_to_store()) + return instance + + def get_instance_attr_defs(self): + + defs = lib.collect_animation_defs() # Option for using Maya or asset frame range in settings. 
- frame_range = lib.get_frame_range() - if self.useMayaTimeline: - frame_range = lib.collect_animation_data(fps=True) - for key, value in frame_range.items(): - data[key] = value + if not self.useMayaTimeline: + # Update the defaults to be the asset frame range + frame_range = lib.get_frame_range() + defs_by_key = {attr_def.key: attr_def for attr_def in defs} + for key, value in frame_range.items(): + if key not in defs_by_key: + raise RuntimeError("Attribute definition not found to be " + "updated for key: {}".format(key)) + attr_def = defs_by_key[key] + attr_def.default = value - data["fps"] = lib.collect_animation_data(fps=True)["fps"] + defs.extend([ + NumberDef("review_width", + label="Review width", + tooltip="A value of zero will use the asset resolution.", + decimals=0, + minimum=0, + default=0), + NumberDef("review_height", + label="Review height", + tooltip="A value of zero will use the asset resolution.", + decimals=0, + minimum=0, + default=0), + BoolDef("keepImages", + label="Keep Images", + tooltip="Whether to also publish along the image sequence " + "next to the video reviewable.", + default=False), + BoolDef("isolate", + label="Isolate render members of instance", + tooltip="When enabled only the members of the instance " + "will be included in the playblast review.", + default=False), + BoolDef("imagePlane", + label="Show Image Plane", + default=True), + EnumDef("transparency", + label="Transparency", + items=TRANSPARENCIES), + BoolDef("panZoom", + label="Enable camera pan/zoom", + default=True), + EnumDef("displayLights", + label="Display Lights", + items=lib.DISPLAY_LIGHTS_ENUM), + ]) - data["keepImages"] = self.keepImages - data["transparency"] = self.transparency - data["review_width"] = preset["Resolution"]["width"] - data["review_height"] = preset["Resolution"]["height"] - data["isolate"] = preset["Generic"]["isolate_view"] - data["imagePlane"] = preset["Viewport Options"]["imagePlane"] - data["panZoom"] = preset["Generic"]["pan_zoom"] - data["displayLights"] = lib.DISPLAY_LIGHTS_LABELS - - self.data = data + return defs diff --git a/openpype/hosts/maya/plugins/create/create_rig.py b/openpype/hosts/maya/plugins/create/create_rig.py index 8032e5fbbd..345ab6c00d 100644 --- a/openpype/hosts/maya/plugins/create/create_rig.py +++ b/openpype/hosts/maya/plugins/create/create_rig.py @@ -1,25 +1,25 @@ from maya import cmds -from openpype.hosts.maya.api import ( - lib, - plugin -) +from openpype.hosts.maya.api import plugin -class CreateRig(plugin.Creator): +class CreateRig(plugin.MayaCreator): """Artist-friendly rig with controls to direct motion""" - name = "rigDefault" + identifier = "io.openpype.creators.maya.rig" label = "Rig" family = "rig" icon = "wheelchair" - def process(self): + def create(self, subset_name, instance_data, pre_create_data): - with lib.undo_chunk(): - instance = super(CreateRig, self).process() + instance = super(CreateRig, self).create(subset_name, + instance_data, + pre_create_data) - self.log.info("Creating Rig instance set up ...") - controls = cmds.sets(name="controls_SET", empty=True) - pointcache = cmds.sets(name="out_SET", empty=True) - cmds.sets([controls, pointcache], forceElement=instance) + instance_node = instance.get("instance_node") + + self.log.info("Creating Rig instance set up ...") + controls = cmds.sets(name=subset_name + "_controls_SET", empty=True) + pointcache = cmds.sets(name=subset_name + "_out_SET", empty=True) + cmds.sets([controls, pointcache], forceElement=instance_node) diff --git 
a/openpype/hosts/maya/plugins/create/create_setdress.py b/openpype/hosts/maya/plugins/create/create_setdress.py index 4246183fdb..23a706380a 100644 --- a/openpype/hosts/maya/plugins/create/create_setdress.py +++ b/openpype/hosts/maya/plugins/create/create_setdress.py @@ -1,16 +1,19 @@ from openpype.hosts.maya.api import plugin +from openpype.lib import BoolDef -class CreateSetDress(plugin.Creator): +class CreateSetDress(plugin.MayaCreator): """A grouped package of loaded content""" - name = "setdressMain" + identifier = "io.openpype.creators.maya.setdress" label = "Set Dress" family = "setdress" icon = "cubes" - defaults = ["Main", "Anim"] + default_variants = ["Main", "Anim"] - def __init__(self, *args, **kwargs): - super(CreateSetDress, self).__init__(*args, **kwargs) - - self.data["exactSetMembersOnly"] = True + def get_instance_attr_defs(self): + return [ + BoolDef("exactSetMembersOnly", + label="Exact Set Members Only", + default=True) + ] diff --git a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py index 6e72bf5324..4e2a99eced 100644 --- a/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py +++ b/openpype/hosts/maya/plugins/create/create_unreal_skeletalmesh.py @@ -1,47 +1,63 @@ # -*- coding: utf-8 -*- """Creator for Unreal Skeletal Meshes.""" from openpype.hosts.maya.api import plugin, lib -from openpype.pipeline import legacy_io +from openpype.lib import ( + BoolDef, + TextDef +) + from maya import cmds # noqa -class CreateUnrealSkeletalMesh(plugin.Creator): +class CreateUnrealSkeletalMesh(plugin.MayaCreator): """Unreal Static Meshes with collisions.""" - name = "staticMeshMain" + + identifier = "io.openpype.creators.maya.unrealskeletalmesh" label = "Unreal - Skeletal Mesh" family = "skeletalMesh" icon = "thumbs-up" dynamic_subset_keys = ["asset"] - joint_hints = [] + # Defined in settings + joint_hints = set() - def __init__(self, *args, **kwargs): - """Constructor.""" - super(CreateUnrealSkeletalMesh, self).__init__(*args, **kwargs) - - @classmethod - def get_dynamic_data( - cls, variant, task_name, asset_id, project_name, host_name - ): - dynamic_data = super(CreateUnrealSkeletalMesh, cls).get_dynamic_data( - variant, task_name, asset_id, project_name, host_name + def apply_settings(self, project_settings, system_settings): + """Apply project settings to creator""" + settings = ( + project_settings["maya"]["create"]["CreateUnrealSkeletalMesh"] ) - dynamic_data["asset"] = legacy_io.Session.get("AVALON_ASSET") + self.joint_hints = set(settings.get("joint_hints", [])) + + def get_dynamic_data( + self, variant, task_name, asset_doc, project_name, host_name, instance + ): + """ + The default subset name templates for Unreal include {asset} and thus + we should pass that along as dynamic data. 
+ """ + dynamic_data = super(CreateUnrealSkeletalMesh, self).get_dynamic_data( + variant, task_name, asset_doc, project_name, host_name, instance + ) + dynamic_data["asset"] = asset_doc["name"] return dynamic_data - def process(self): - self.name = "{}_{}".format(self.family, self.name) - with lib.undo_chunk(): - instance = super(CreateUnrealSkeletalMesh, self).process() - content = cmds.sets(instance, query=True) + def create(self, subset_name, instance_data, pre_create_data): + + with lib.undo_chunk(): + instance = super(CreateUnrealSkeletalMesh, self).create( + subset_name, instance_data, pre_create_data) + instance_node = instance.get("instance_node") + + # We reorganize the geometry that was originally added into the + # set into either 'joints_SET' or 'geometry_SET' based on the + # joint_hints from project settings + members = cmds.sets(instance_node, query=True) + cmds.sets(clear=instance_node) - # empty set and process its former content - cmds.sets(content, rm=instance) geometry_set = cmds.sets(name="geometry_SET", empty=True) joints_set = cmds.sets(name="joints_SET", empty=True) - cmds.sets([geometry_set, joints_set], forceElement=instance) - members = cmds.ls(content) or [] + cmds.sets([geometry_set, joints_set], forceElement=instance_node) for node in members: if node in self.joint_hints: @@ -49,20 +65,38 @@ class CreateUnrealSkeletalMesh(plugin.Creator): else: cmds.sets(node, forceElement=geometry_set) - # Add animation data - self.data.update(lib.collect_animation_data()) + def get_instance_attr_defs(self): - # Only renderable visible shapes - self.data["renderableOnly"] = False - # only nodes that are visible - self.data["visibleOnly"] = False - # Include parent groups - self.data["includeParentHierarchy"] = False - # Default to exporting world-space - self.data["worldSpace"] = True - # Default to suspend refresh. 
- self.data["refresh"] = False + defs = lib.collect_animation_defs() - # Add options for custom attributes - self.data["attr"] = "" - self.data["attrPrefix"] = "" + defs.extend([ + BoolDef("renderableOnly", + label="Renderable Only", + tooltip="Only export renderable visible shapes", + default=False), + BoolDef("visibleOnly", + label="Visible Only", + tooltip="Only export dag objects visible during " + "frame range", + default=False), + BoolDef("includeParentHierarchy", + label="Include Parent Hierarchy", + tooltip="Whether to include parent hierarchy of nodes in " + "the publish instance", + default=False), + BoolDef("worldSpace", + label="World-Space Export", + default=True), + BoolDef("refresh", + label="Refresh viewport during export", + default=False), + TextDef("attr", + label="Custom Attributes", + default="", + placeholder="attr1, attr2"), + TextDef("attrPrefix", + label="Custom Attributes Prefix", + placeholder="prefix1, prefix2") + ]) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py index 44cbee0502..3f96d91a54 100644 --- a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py +++ b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py @@ -1,58 +1,90 @@ # -*- coding: utf-8 -*- """Creator for Unreal Static Meshes.""" from openpype.hosts.maya.api import plugin, lib -from openpype.settings import get_project_settings -from openpype.pipeline import legacy_io from maya import cmds # noqa -class CreateUnrealStaticMesh(plugin.Creator): +class CreateUnrealStaticMesh(plugin.MayaCreator): """Unreal Static Meshes with collisions.""" - name = "staticMeshMain" + + identifier = "io.openpype.creators.maya.unrealstaticmesh" label = "Unreal - Static Mesh" family = "staticMesh" icon = "cube" dynamic_subset_keys = ["asset"] - def __init__(self, *args, **kwargs): - """Constructor.""" - super(CreateUnrealStaticMesh, self).__init__(*args, **kwargs) - self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"]) + # Defined in settings + collision_prefixes = [] + + def apply_settings(self, project_settings, system_settings): + """Apply project settings to creator""" + settings = project_settings["maya"]["create"]["CreateUnrealStaticMesh"] + self.collision_prefixes = settings["collision_prefixes"] - @classmethod def get_dynamic_data( - cls, variant, task_name, asset_id, project_name, host_name + self, variant, task_name, asset_doc, project_name, host_name, instance ): - dynamic_data = super(CreateUnrealStaticMesh, cls).get_dynamic_data( - variant, task_name, asset_id, project_name, host_name + """ + The default subset name templates for Unreal include {asset} and thus + we should pass that along as dynamic data. 
+ """ + dynamic_data = super(CreateUnrealStaticMesh, self).get_dynamic_data( + variant, task_name, asset_doc, project_name, host_name, instance ) - dynamic_data["asset"] = legacy_io.Session.get("AVALON_ASSET") + dynamic_data["asset"] = asset_doc["name"] return dynamic_data - def process(self): - self.name = "{}_{}".format(self.family, self.name) - with lib.undo_chunk(): - instance = super(CreateUnrealStaticMesh, self).process() - content = cmds.sets(instance, query=True) + def create(self, subset_name, instance_data, pre_create_data): + + with lib.undo_chunk(): + instance = super(CreateUnrealStaticMesh, self).create( + subset_name, instance_data, pre_create_data) + instance_node = instance.get("instance_node") + + # We reorganize the geometry that was originally added into the + # set into either 'collision_SET' or 'geometry_SET' based on the + # collision_prefixes from project settings + members = cmds.sets(instance_node, query=True) + cmds.sets(clear=instance_node) - # empty set and process its former content - cmds.sets(content, rm=instance) geometry_set = cmds.sets(name="geometry_SET", empty=True) collisions_set = cmds.sets(name="collisions_SET", empty=True) - cmds.sets([geometry_set, collisions_set], forceElement=instance) + cmds.sets([geometry_set, collisions_set], + forceElement=instance_node) - members = cmds.ls(content, long=True) or [] + members = cmds.ls(members, long=True) or [] children = cmds.listRelatives(members, allDescendents=True, fullPath=True) or [] - children = cmds.ls(children, type="transform") - for node in children: - if cmds.listRelatives(node, type="shape"): - if [ - n for n in self.collision_prefixes - if node.startswith(n) - ]: - cmds.sets(node, forceElement=collisions_set) - else: - cmds.sets(node, forceElement=geometry_set) + transforms = cmds.ls(members + children, type="transform") + for transform in transforms: + + if not cmds.listRelatives(transform, + type="shape", + noIntermediate=True): + # Exclude all transforms that have no direct shapes + continue + + if self.has_collision_prefix(transform): + cmds.sets(transform, forceElement=collisions_set) + else: + cmds.sets(transform, forceElement=geometry_set) + + def has_collision_prefix(self, node_path): + """Return whether node name of path matches collision prefix. + + If the node name matches the collision prefix we add it to the + `collisions_SET` instead of the `geometry_SET`. + + Args: + node_path (str): Maya node path. + + Returns: + bool: Whether the node should be considered a collision mesh. 
+ + """ + node_name = node_path.rsplit("|", 1)[-1] + for prefix in self.collision_prefixes: + if node_name.startswith(prefix): + return True + return False diff --git a/openpype/hosts/maya/plugins/create/create_vrayproxy.py b/openpype/hosts/maya/plugins/create/create_vrayproxy.py index d135073e82..b0a95538e1 100644 --- a/openpype/hosts/maya/plugins/create/create_vrayproxy.py +++ b/openpype/hosts/maya/plugins/create/create_vrayproxy.py @@ -1,10 +1,14 @@ -from openpype.hosts.maya.api import plugin +from openpype.hosts.maya.api import ( + plugin, + lib +) +from openpype.lib import BoolDef -class CreateVrayProxy(plugin.Creator): +class CreateVrayProxy(plugin.MayaCreator): """Alembic pointcache for animated data""" - name = "vrayproxy" + identifier = "io.openpype.creators.maya.vrayproxy" label = "VRay Proxy" family = "vrayproxy" icon = "gears" @@ -12,15 +16,35 @@ class CreateVrayProxy(plugin.Creator): vrmesh = True alembic = True - def __init__(self, *args, **kwargs): - super(CreateVrayProxy, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - self.data["animation"] = False - self.data["frameStart"] = 1 - self.data["frameEnd"] = 1 + defs = [ + BoolDef("animation", + label="Export Animation", + default=False) + ] - # Write vertex colors - self.data["vertexColors"] = False + # Add time range attributes but remove some attributes + # which this instance actually doesn't use + defs.extend(lib.collect_animation_defs()) + remove = {"handleStart", "handleEnd", "step"} + defs = [attr_def for attr_def in defs if attr_def.key not in remove] - self.data["vrmesh"] = self.vrmesh - self.data["alembic"] = self.alembic + defs.extend([ + BoolDef("vertexColors", + label="Write vertex colors", + tooltip="Write vertex colors with the geometry", + default=False), + BoolDef("vrmesh", + label="Export VRayMesh", + tooltip="Publish a .vrmesh (VRayMesh) file for " + "this VRayProxy", + default=self.vrmesh), + BoolDef("alembic", + label="Export Alembic", + tooltip="Publish a .abc (Alembic) file for " + "this VRayProxy", + default=self.alembic), + ]) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_vrayscene.py b/openpype/hosts/maya/plugins/create/create_vrayscene.py index 59d80e6d5b..d601dceb54 100644 --- a/openpype/hosts/maya/plugins/create/create_vrayscene.py +++ b/openpype/hosts/maya/plugins/create/create_vrayscene.py @@ -1,266 +1,52 @@ # -*- coding: utf-8 -*- """Create instance of vrayscene.""" -import os -import json -import appdirs -import requests - -from maya import cmds -import maya.app.renderSetup.model.renderSetup as renderSetup from openpype.hosts.maya.api import ( - lib, + lib_rendersettings, plugin ) -from openpype.settings import ( - get_system_settings, - get_project_settings -) - -from openpype.lib import requests_get -from openpype.pipeline import ( - CreatorError, - legacy_io, -) -from openpype.modules import ModulesManager +from openpype.pipeline import CreatorError +from openpype.lib import BoolDef -class CreateVRayScene(plugin.Creator): +class CreateVRayScene(plugin.RenderlayerCreator): """Create Vray Scene.""" - label = "VRay Scene" + identifier = "io.openpype.creators.maya.vrayscene" + family = "vrayscene" + label = "VRay Scene" icon = "cubes" - _project_settings = None + render_settings = {} + singleton_node_name = "vraysceneMain" - def __init__(self, *args, **kwargs): - """Entry.""" - super(CreateVRayScene, self).__init__(*args, **kwargs) - self._rs = renderSetup.instance() - self.data["exportOnFarm"] = False - deadline_settings = 
get_system_settings()["modules"]["deadline"] + @classmethod + def apply_settings(cls, project_settings, system_settings): + cls.render_settings = project_settings["maya"]["RenderSettings"] - manager = ModulesManager() - self.deadline_module = manager.modules_by_name["deadline"] + def create(self, subset_name, instance_data, pre_create_data): + # Only allow a single render instance to exist + if self._get_singleton_node(): + raise CreatorError("A Render instance already exists - only " + "one can be configured.") - if not deadline_settings["enabled"]: - self.deadline_servers = {} - return - self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"]) + super(CreateVRayScene, self).create(subset_name, + instance_data, + pre_create_data) - try: - default_servers = deadline_settings["deadline_urls"] - project_servers = ( - self._project_settings["deadline"]["deadline_servers"] - ) - self.deadline_servers = { - k: default_servers[k] - for k in project_servers - if k in default_servers - } + # Apply default project render settings on create + if self.render_settings.get("apply_render_settings"): + lib_rendersettings.RenderSettings().set_default_renderer_settings() - if not self.deadline_servers: - self.deadline_servers = default_servers + def get_instance_attr_defs(self): + """Create instance settings.""" - except AttributeError: - # Handle situation were we had only one url for deadline. - # get default deadline webservice url from deadline module - self.deadline_servers = self.deadline_module.deadline_urls - - def process(self): - """Entry point.""" - exists = cmds.ls(self.name) - if exists: - return cmds.warning("%s already exists." % exists[0]) - - use_selection = self.options.get("useSelection") - with lib.undo_chunk(): - self._create_vray_instance_settings() - self.instance = super(CreateVRayScene, self).process() - - index = 1 - namespace_name = "_{}".format(str(self.instance)) - try: - cmds.namespace(rm=namespace_name) - except RuntimeError: - # namespace is not empty, so we leave it untouched - pass - - while(cmds.namespace(exists=namespace_name)): - namespace_name = "_{}{}".format(str(self.instance), index) - index += 1 - - namespace = cmds.namespace(add=namespace_name) - - # add Deadline server selection list - if self.deadline_servers: - cmds.scriptJob( - attributeChange=[ - "{}.deadlineServers".format(self.instance), - self._deadline_webservice_changed - ]) - - # create namespace with instance - layers = self._rs.getRenderLayers() - if use_selection: - print(">>> processing existing layers") - sets = [] - for layer in layers: - print(" - creating set for {}".format(layer.name())) - render_set = cmds.sets( - n="{}:{}".format(namespace, layer.name())) - sets.append(render_set) - cmds.sets(sets, forceElement=self.instance) - - # if no render layers are present, create default one with - # asterix selector - if not layers: - render_layer = self._rs.createRenderLayer('Main') - collection = render_layer.createCollection("defaultCollection") - collection.getSelector().setPattern('*') - - def _deadline_webservice_changed(self): - """Refresh Deadline server dependent options.""" - # get selected server - from maya import cmds - webservice = self.deadline_servers[ - self.server_aliases[ - cmds.getAttr("{}.deadlineServers".format(self.instance)) - ] + return [ + BoolDef("vraySceneMultipleFiles", + label="V-Ray Scene Multiple Files", + default=False), + BoolDef("exportOnFarm", + label="Export on farm", + default=False) ] - pools = 
self.deadline_module.get_deadline_pools(webservice) - cmds.deleteAttr("{}.primaryPool".format(self.instance)) - cmds.deleteAttr("{}.secondaryPool".format(self.instance)) - cmds.addAttr(self.instance, longName="primaryPool", - attributeType="enum", - enumName=":".join(pools)) - cmds.addAttr(self.instance, longName="secondaryPool", - attributeType="enum", - enumName=":".join(["-"] + pools)) - - def _create_vray_instance_settings(self): - # get pools - pools = [] - - system_settings = get_system_settings()["modules"] - - deadline_enabled = system_settings["deadline"]["enabled"] - muster_enabled = system_settings["muster"]["enabled"] - muster_url = system_settings["muster"]["MUSTER_REST_URL"] - - if deadline_enabled and muster_enabled: - self.log.error( - "Both Deadline and Muster are enabled. " "Cannot support both." - ) - raise CreatorError("Both Deadline and Muster are enabled") - - self.server_aliases = self.deadline_servers.keys() - self.data["deadlineServers"] = self.server_aliases - - if deadline_enabled: - # if default server is not between selected, use first one for - # initial list of pools. - try: - deadline_url = self.deadline_servers["default"] - except KeyError: - deadline_url = [ - self.deadline_servers[k] - for k in self.deadline_servers.keys() - ][0] - - pool_names = self.deadline_module.get_deadline_pools(deadline_url) - - if muster_enabled: - self.log.info(">>> Loading Muster credentials ...") - self._load_credentials() - self.log.info(">>> Getting pools ...") - try: - pools = self._get_muster_pools() - except requests.exceptions.HTTPError as e: - if e.startswith("401"): - self.log.warning("access token expired") - self._show_login() - raise CreatorError("Access token expired") - except requests.exceptions.ConnectionError: - self.log.error("Cannot connect to Muster API endpoint.") - raise CreatorError("Cannot connect to {}".format(muster_url)) - pool_names = [] - for pool in pools: - self.log.info(" - pool: {}".format(pool["name"])) - pool_names.append(pool["name"]) - - self.data["primaryPool"] = pool_names - - self.data["suspendPublishJob"] = False - self.data["priority"] = 50 - self.data["whitelist"] = False - self.data["machineList"] = "" - self.data["vraySceneMultipleFiles"] = False - self.options = {"useSelection": False} # Force no content - - def _load_credentials(self): - """Load Muster credentials. - - Load Muster credentials from file and set ``MUSTER_USER``, - ``MUSTER_PASSWORD``, ``MUSTER_REST_URL`` is loaded from presets. - - Raises: - CreatorError: If loaded credentials are invalid. - AttributeError: If ``MUSTER_REST_URL`` is not set. - - """ - app_dir = os.path.normpath(appdirs.user_data_dir("pype-app", "pype")) - file_name = "muster_cred.json" - fpath = os.path.join(app_dir, file_name) - file = open(fpath, "r") - muster_json = json.load(file) - self._token = muster_json.get("token", None) - if not self._token: - self._show_login() - raise CreatorError("Invalid access token for Muster") - file.close() - self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL") - if not self.MUSTER_REST_URL: - raise AttributeError("Muster REST API url not set") - - def _get_muster_pools(self): - """Get render pools from Muster. - - Raises: - CreatorError: If pool list cannot be obtained from Muster. 
- - """ - params = {"authToken": self._token} - api_entry = "/api/pools/list" - response = requests_get(self.MUSTER_REST_URL + api_entry, - params=params) - if response.status_code != 200: - if response.status_code == 401: - self.log.warning("Authentication token expired.") - self._show_login() - else: - self.log.error( - ("Cannot get pools from " - "Muster: {}").format(response.status_code) - ) - raise CreatorError("Cannot get pools from Muster") - try: - pools = response.json()["ResponseData"]["pools"] - except ValueError as e: - self.log.error("Invalid response from Muster server {}".format(e)) - raise CreatorError("Invalid response from Muster server") - - return pools - - def _show_login(self): - # authentication token expired so we need to login to Muster - # again to get it. We use Pype API call to show login window. - api_url = "{}/muster/show_login".format( - os.environ["OPENPYPE_WEBSERVER_URL"]) - self.log.debug(api_url) - login_response = requests_get(api_url, timeout=1) - if login_response.status_code != 200: - self.log.error("Cannot show login form to Muster") - raise CreatorError("Cannot show login form to Muster") diff --git a/openpype/hosts/maya/plugins/create/create_workfile.py b/openpype/hosts/maya/plugins/create/create_workfile.py new file mode 100644 index 0000000000..d84753cd7f --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_workfile.py @@ -0,0 +1,88 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating workfiles.""" +from openpype.pipeline import CreatedInstance, AutoCreator +from openpype.client import get_asset_by_name +from openpype.hosts.maya.api import plugin +from maya import cmds + + +class CreateWorkfile(plugin.MayaCreatorBase, AutoCreator): + """Workfile auto-creator.""" + identifier = "io.openpype.creators.maya.workfile" + label = "Workfile" + family = "workfile" + icon = "fa5.file" + + default_variant = "Main" + + def create(self): + + variant = self.default_variant + current_instance = next( + ( + instance for instance in self.create_context.instances + if instance.creator_identifier == self.identifier + ), None) + + project_name = self.project_name + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.create_context.host_name + + if current_instance is None: + asset_doc = get_asset_by_name(project_name, asset_name) + subset_name = self.get_subset_name( + variant, task_name, asset_doc, project_name, host_name + ) + data = { + "asset": asset_name, + "task": task_name, + "variant": variant + } + data.update( + self.get_dynamic_data( + variant, task_name, asset_doc, + project_name, host_name, current_instance) + ) + self.log.info("Auto-creating workfile instance...") + current_instance = CreatedInstance( + self.family, subset_name, data, self + ) + self._add_instance_to_context(current_instance) + elif ( + current_instance["asset"] != asset_name + or current_instance["task"] != task_name + ): + # Update instance context if is not the same + asset_doc = get_asset_by_name(project_name, asset_name) + subset_name = self.get_subset_name( + variant, task_name, asset_doc, project_name, host_name + ) + current_instance["asset"] = asset_name + current_instance["task"] = task_name + current_instance["subset"] = subset_name + + def collect_instances(self): + self.cache_subsets(self.collection_shared_data) + cached_subsets = self.collection_shared_data["maya_cached_subsets"] + for node in cached_subsets.get(self.identifier, []): + node_data = 
self.read_instance_node(node) + + created_instance = CreatedInstance.from_existing(node_data, self) + self._add_instance_to_context(created_instance) + + def update_instances(self, update_list): + for created_inst, _changes in update_list: + data = created_inst.data_to_store() + node = data.get("instance_node") + if not node: + node = self.create_node() + created_inst["instance_node"] = node + data = created_inst.data_to_store() + + self.imprint_instance_node(node, data) + + def create_node(self): + node = cmds.sets(empty=True, name="workfileMain") + cmds.setAttr(node + ".hiddenInOutliner", True) + return node diff --git a/openpype/hosts/maya/plugins/create/create_xgen.py b/openpype/hosts/maya/plugins/create/create_xgen.py index 70e23cf47b..eaafb0959a 100644 --- a/openpype/hosts/maya/plugins/create/create_xgen.py +++ b/openpype/hosts/maya/plugins/create/create_xgen.py @@ -1,10 +1,10 @@ from openpype.hosts.maya.api import plugin -class CreateXgen(plugin.Creator): +class CreateXgen(plugin.MayaCreator): """Xgen""" - name = "xgen" + identifier = "io.openpype.creators.maya.xgen" label = "Xgen" family = "xgen" icon = "pagelines" diff --git a/openpype/hosts/maya/plugins/create/create_yeti_cache.py b/openpype/hosts/maya/plugins/create/create_yeti_cache.py index e8c3203f21..395aa62325 100644 --- a/openpype/hosts/maya/plugins/create/create_yeti_cache.py +++ b/openpype/hosts/maya/plugins/create/create_yeti_cache.py @@ -1,15 +1,14 @@ -from collections import OrderedDict - from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.lib import NumberDef -class CreateYetiCache(plugin.Creator): +class CreateYetiCache(plugin.MayaCreator): """Output for procedural plugin nodes of Yeti """ - name = "yetiDefault" + identifier = "io.openpype.creators.maya.yeticache" label = "Yeti Cache" family = "yeticache" icon = "pagelines" @@ -17,14 +16,23 @@ - def __init__(self, *args, **kwargs): - super(CreateYetiCache, self).__init__(*args, **kwargs) + def get_instance_attr_defs(self): - self.data["preroll"] = 0 + defs = [ + NumberDef("preroll", + label="Preroll", + minimum=0, + default=0, + decimals=0) + ] # Add animation data without step and handles - anim_data = lib.collect_animation_data() - anim_data.pop("step") - anim_data.pop("handleStart") - anim_data.pop("handleEnd") - self.data.update(anim_data) + defs.extend(lib.collect_animation_defs()) + remove = {"step", "handleStart", "handleEnd"} + defs = [attr_def for attr_def in defs if attr_def.key not in remove] - # Add samples - self.data["samples"] = 3 + # Add samples after frame range + defs.append( + NumberDef("samples", + label="Samples", + default=3, + decimals=0) + ) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_yeti_rig.py b/openpype/hosts/maya/plugins/create/create_yeti_rig.py index 7abe2988cd..445bcf46d8 100644 --- a/openpype/hosts/maya/plugins/create/create_yeti_rig.py +++ b/openpype/hosts/maya/plugins/create/create_yeti_rig.py @@ -6,18 +6,22 @@ from openpype.hosts.maya.api import ( ) -class CreateYetiRig(plugin.Creator): +class CreateYetiRig(plugin.MayaCreator): """Output for procedural plugin nodes ( Yeti / XGen / etc)""" + identifier = "io.openpype.creators.maya.yetirig" label = "Yeti Rig" family = "yetiRig" icon = "usb" - def process(self): + def create(self, subset_name, instance_data, pre_create_data): with lib.undo_chunk(): - instance = super(CreateYetiRig, self).process() + instance = super(CreateYetiRig, self).create(subset_name, + instance_data, + pre_create_data) + instance_node = instance.get("instance_node")
self.log.info("Creating Rig instance set up ...") input_meshes = cmds.sets(name="input_SET", empty=True) - cmds.sets(input_meshes, forceElement=instance) + cmds.sets(input_meshes, forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/inventory/import_modelrender.py b/openpype/hosts/maya/plugins/inventory/import_modelrender.py index 8a7390bc8d..4db8c4f2f6 100644 --- a/openpype/hosts/maya/plugins/inventory/import_modelrender.py +++ b/openpype/hosts/maya/plugins/inventory/import_modelrender.py @@ -8,7 +8,7 @@ from openpype.client import ( from openpype.pipeline import ( InventoryAction, get_representation_context, - legacy_io, + get_current_project_name, ) from openpype.hosts.maya.api.lib import ( maintained_selection, @@ -35,7 +35,7 @@ class ImportModelRender(InventoryAction): def process(self, containers): from maya import cmds - project_name = legacy_io.active_project() + project_name = get_current_project_name() for container in containers: con_name = container["objectName"] nodes = [] @@ -68,7 +68,7 @@ class ImportModelRender(InventoryAction): from maya import cmds - project_name = legacy_io.active_project() + project_name = get_current_project_name() repre_docs = get_representations( project_name, version_ids=[version_id], fields=["_id", "name"] ) diff --git a/openpype/hosts/maya/plugins/inventory/import_reference.py b/openpype/hosts/maya/plugins/inventory/import_reference.py index afb1e0e17f..ecc424209d 100644 --- a/openpype/hosts/maya/plugins/inventory/import_reference.py +++ b/openpype/hosts/maya/plugins/inventory/import_reference.py @@ -1,7 +1,7 @@ from maya import cmds from openpype.pipeline import InventoryAction -from openpype.hosts.maya.api.plugin import get_reference_node +from openpype.hosts.maya.api.lib import get_reference_node class ImportReference(InventoryAction): diff --git a/openpype/hosts/maya/plugins/load/_load_animation.py b/openpype/hosts/maya/plugins/load/_load_animation.py index 2ba5fe6b64..981b9ef434 100644 --- a/openpype/hosts/maya/plugins/load/_load_animation.py +++ b/openpype/hosts/maya/plugins/load/_load_animation.py @@ -33,15 +33,23 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): suffix="_abc" ) + attach_to_root = options.get("attach_to_root", True) + group_name = options["group_name"] + + # no group shall be created + if not attach_to_root: + group_name = namespace + # hero_001 (abc) # asset_counter{optional} - file_url = self.prepare_root_value(self.fname, + path = self.filepath_from_context(context) + file_url = self.prepare_root_value(path, context["project"]["name"]) nodes = cmds.file(file_url, namespace=namespace, sharedReferenceFile=False, - groupReference=True, - groupName=options['group_name'], + groupReference=attach_to_root, + groupName=group_name, reference=True, returnNewNodes=True) diff --git a/openpype/hosts/maya/plugins/load/actions.py b/openpype/hosts/maya/plugins/load/actions.py index 4855f3eed0..d347ef0d08 100644 --- a/openpype/hosts/maya/plugins/load/actions.py +++ b/openpype/hosts/maya/plugins/load/actions.py @@ -5,8 +5,9 @@ import qargparse from openpype.pipeline import load from openpype.hosts.maya.api.lib import ( maintained_selection, - unique_namespace + get_custom_namespace ) +import openpype.hosts.maya.api.plugin class SetFrameRangeLoader(load.LoaderPlugin): @@ -83,7 +84,7 @@ class SetFrameRangeWithHandlesLoader(load.LoaderPlugin): animationEndTime=end) -class ImportMayaLoader(load.LoaderPlugin): +class ImportMayaLoader(openpype.hosts.maya.api.plugin.Loader): """Import action for Maya (unmanaged) 
Warning: @@ -130,22 +131,25 @@ class ImportMayaLoader(load.LoaderPlugin): if choice is False: return - asset = context['asset'] + custom_group_name, custom_namespace, options = \ + self.get_custom_namespace_and_group(context, data, + "import_loader") - namespace = namespace or unique_namespace( - asset["name"] + "_", - prefix="_" if asset["name"][0].isdigit() else "", - suffix="_", - ) + namespace = get_custom_namespace(custom_namespace) + if not options.get("attach_to_root", True): + custom_group_name = namespace + + path = self.filepath_from_context(context) with maintained_selection(): - nodes = cmds.file(self.fname, + nodes = cmds.file(path, i=True, preserveReferences=True, namespace=namespace, returnNewNodes=True, - groupReference=True, - groupName="{}:{}".format(namespace, name)) + groupReference=options.get("attach_to_root", + True), + groupName=custom_group_name) if data.get("clean_import", False): remove_attributes = ["cbId"] diff --git a/openpype/hosts/maya/plugins/load/load_arnold_standin.py b/openpype/hosts/maya/plugins/load/load_arnold_standin.py index 29215bc5c2..b5cc4d629b 100644 --- a/openpype/hosts/maya/plugins/load/load_arnold_standin.py +++ b/openpype/hosts/maya/plugins/load/load_arnold_standin.py @@ -89,11 +89,12 @@ class ArnoldStandinLoader(load.LoaderPlugin): cmds.parent(standin, root) # Set the standin filepath + repre_path = self.filepath_from_context(context) path, operator = self._setup_proxy( - standin_shape, self.fname, namespace + standin_shape, repre_path, namespace ) cmds.setAttr(standin_shape + ".dso", path, type="string") - sequence = is_sequence(os.listdir(os.path.dirname(self.fname))) + sequence = is_sequence(os.listdir(os.path.dirname(repre_path))) cmds.setAttr(standin_shape + ".useFrameExtension", sequence) fps = float(version["data"].get("fps"))or get_current_session_fps() diff --git a/openpype/hosts/maya/plugins/load/load_assembly.py b/openpype/hosts/maya/plugins/load/load_assembly.py index 275f21be5d..0a2733e03c 100644 --- a/openpype/hosts/maya/plugins/load/load_assembly.py +++ b/openpype/hosts/maya/plugins/load/load_assembly.py @@ -30,7 +30,7 @@ class AssemblyLoader(load.LoaderPlugin): ) containers = setdress.load_package( - filepath=self.fname, + filepath=self.filepath_from_context(context), name=name, namespace=namespace ) diff --git a/openpype/hosts/maya/plugins/load/load_audio.py b/openpype/hosts/maya/plugins/load/load_audio.py index 9e7fd96bdb..265b15f4ae 100644 --- a/openpype/hosts/maya/plugins/load/load_audio.py +++ b/openpype/hosts/maya/plugins/load/load_audio.py @@ -6,7 +6,7 @@ from openpype.client import ( get_version_by_id, ) from openpype.pipeline import ( - legacy_io, + get_current_project_name, load, get_representation_path, ) @@ -68,7 +68,7 @@ class AudioLoader(load.LoaderPlugin): ) # Set frame range. 
- project_name = legacy_io.active_project() + project_name = get_current_project_name() version = get_version_by_id( project_name, representation["parent"], fields=["parent"] ) diff --git a/openpype/hosts/maya/plugins/load/load_gpucache.py b/openpype/hosts/maya/plugins/load/load_gpucache.py index 794b21eb5d..344f2fd060 100644 --- a/openpype/hosts/maya/plugins/load/load_gpucache.py +++ b/openpype/hosts/maya/plugins/load/load_gpucache.py @@ -37,7 +37,8 @@ class GpuCacheLoader(load.LoaderPlugin): label = "{}:{}".format(namespace, name) root = cmds.group(name=label, empty=True) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get('model') if c is not None: @@ -56,7 +57,8 @@ class GpuCacheLoader(load.LoaderPlugin): name="{0}Shape".format(transform_name)) # Set the cache filepath - cmds.setAttr(cache + '.cacheFileName', self.fname, type="string") + path = self.filepath_from_context(context) + cmds.setAttr(cache + '.cacheFileName', path, type="string") cmds.setAttr(cache + '.cacheGeomPath', "|", type="string") # root # Lock parenting of the transform and cache diff --git a/openpype/hosts/maya/plugins/load/load_image.py b/openpype/hosts/maya/plugins/load/load_image.py index 552bcc33af..3b1f5442ce 100644 --- a/openpype/hosts/maya/plugins/load/load_image.py +++ b/openpype/hosts/maya/plugins/load/load_image.py @@ -4,7 +4,8 @@ import copy from openpype.lib import EnumDef from openpype.pipeline import ( load, - get_representation_context + get_representation_context, + get_current_host_name, ) from openpype.pipeline.load.utils import get_representation_path_from_context from openpype.pipeline.colorspace import ( @@ -266,7 +267,7 @@ class FileNodeLoader(load.LoaderPlugin): # Assume colorspace from filepath based on project settings project_name = context["project"]["name"] - host_name = os.environ.get("AVALON_APP") + host_name = get_current_host_name() project_settings = get_project_settings(project_name) config_data = get_imageio_config( diff --git a/openpype/hosts/maya/plugins/load/load_image_plane.py b/openpype/hosts/maya/plugins/load/load_image_plane.py index bf13708e9b..117f4f4202 100644 --- a/openpype/hosts/maya/plugins/load/load_image_plane.py +++ b/openpype/hosts/maya/plugins/load/load_image_plane.py @@ -6,9 +6,9 @@ from openpype.client import ( get_version_by_id, ) from openpype.pipeline import ( - legacy_io, load, - get_representation_path + get_representation_path, + get_current_project_name, ) from openpype.hosts.maya.api.pipeline import containerise from openpype.hosts.maya.api.lib import ( @@ -221,7 +221,7 @@ class ImagePlaneLoader(load.LoaderPlugin): type="string") # Set frame range. 
- project_name = legacy_io.active_project() + project_name = get_current_project_name() version = get_version_by_id( project_name, representation["parent"], fields=["parent"] ) diff --git a/openpype/hosts/maya/plugins/load/load_look.py b/openpype/hosts/maya/plugins/load/load_look.py index 8f3e017658..20617c77bf 100644 --- a/openpype/hosts/maya/plugins/load/load_look.py +++ b/openpype/hosts/maya/plugins/load/load_look.py @@ -7,14 +7,14 @@ from qtpy import QtWidgets from openpype.client import get_representation_by_name from openpype.pipeline import ( - legacy_io, + get_current_project_name, get_representation_path, ) import openpype.hosts.maya.api.plugin from openpype.hosts.maya.api import lib from openpype.widgets.message_window import ScrollMessageBox -from openpype.hosts.maya.api.plugin import get_reference_node +from openpype.hosts.maya.api.lib import get_reference_node class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): @@ -29,11 +29,13 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): color = "orange" def process_reference(self, context, name, namespace, options): - import maya.cmds as cmds + from maya import cmds with lib.maintained_selection(): - file_url = self.prepare_root_value(self.fname, - context["project"]["name"]) + file_url = self.prepare_root_value( + file_url=self.filepath_from_context(context), + project_name=context["project"]["name"] + ) nodes = cmds.file(file_url, namespace=namespace, reference=True, @@ -76,7 +78,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): shader_nodes = cmds.ls(members, type='shadingEngine') nodes = set(self._get_nodes_with_shader(shader_nodes)) - project_name = legacy_io.active_project() + project_name = get_current_project_name() json_representation = get_representation_by_name( project_name, "json", representation["parent"] ) @@ -113,8 +115,8 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): # region compute lookup nodes_by_id = defaultdict(list) - for n in nodes: - nodes_by_id[lib.get_id(n)].append(n) + for node in nodes: + nodes_by_id[lib.get_id(node)].append(node) lib.apply_attributes(attributes, nodes_by_id) def _get_nodes_with_shader(self, shader_nodes): @@ -125,14 +127,16 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): Returns node names """ - import maya.cmds as cmds + from maya import cmds - nodes_list = [] for shader in shader_nodes: - connections = cmds.listConnections(cmds.listHistory(shader, f=1), + future = cmds.listHistory(shader, future=True) + connections = cmds.listConnections(future, type='mesh') if connections: - for connection in connections: - nodes_list.extend(cmds.listRelatives(connection, - shapes=True)) - return nodes_list + # Ensure unique entries only to optimize query and results + connections = list(set(connections)) + return cmds.listRelatives(connections, + shapes=True, + fullPath=True) or [] + return [] diff --git a/openpype/hosts/maya/plugins/load/load_matchmove.py b/openpype/hosts/maya/plugins/load/load_matchmove.py index ee3332bd09..46d1be8300 100644 --- a/openpype/hosts/maya/plugins/load/load_matchmove.py +++ b/openpype/hosts/maya/plugins/load/load_matchmove.py @@ -1,7 +1,6 @@ from maya import mel from openpype.pipeline import load - class MatchmoveLoader(load.LoaderPlugin): """ This will run matchmove script to create track in scene. 
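Two mechanical migrations repeat through all of the loader hunks above and below: `self.fname` is replaced by `self.filepath_from_context(context)`, and `legacy_io.active_project()` is replaced by `get_current_project_name()`. A minimal sketch of both patterns together (the loader class and its families are hypothetical; only the two calls come from this diff):

```python
from openpype.pipeline import load, get_current_project_name


class ExampleLoader(load.LoaderPlugin):
    """Hypothetical loader, shown only to illustrate the migrated calls."""

    families = ["example"]
    representations = ["ma"]

    def load(self, context, name, namespace, options):
        # Resolve the representation path from the load context instead
        # of the deprecated `self.fname` attribute.
        path = self.filepath_from_context(context)

        # Query the active project without going through `legacy_io`.
        project_name = get_current_project_name()
        self.log.info("Loading %s from project %s", path, project_name)
```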
@@ -18,11 +17,12 @@ class MatchmoveLoader(load.LoaderPlugin): color = "orange" def load(self, context, name, namespace, data): - if self.fname.lower().endswith(".py"): - exec(open(self.fname).read()) + path = self.filepath_from_context(context) + if path.lower().endswith(".py"): + exec(open(path).read()) - elif self.fname.lower().endswith(".mel"): - mel.eval('source "{}"'.format(self.fname)) + elif path.lower().endswith(".mel"): + mel.eval('source "{}"'.format(path)) else: self.log.error("Unsupported script type") diff --git a/openpype/hosts/maya/plugins/load/load_multiverse_usd.py b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py index 9e0d38df46..d08fcd904e 100644 --- a/openpype/hosts/maya/plugins/load/load_multiverse_usd.py +++ b/openpype/hosts/maya/plugins/load/load_multiverse_usd.py @@ -36,6 +36,8 @@ class MultiverseUsdLoader(load.LoaderPlugin): suffix="_", ) + path = self.filepath_from_context(context) + # Make sure we can load the plugin cmds.loadPlugin("MultiverseForMaya", quiet=True) import multiverse @@ -46,7 +48,7 @@ class MultiverseUsdLoader(load.LoaderPlugin): with maintained_selection(): cmds.namespace(addNamespace=namespace) with namespaced(namespace, new=False): - shape = multiverse.CreateUsdCompound(self.fname) + shape = multiverse.CreateUsdCompound(path) transform = cmds.listRelatives( shape, parent=True, fullPath=True)[0] diff --git a/openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py b/openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py index 8a25508ac2..be8d78607b 100644 --- a/openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py +++ b/openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py @@ -50,9 +50,10 @@ class MultiverseUsdOverLoader(load.LoaderPlugin): cmds.loadPlugin("MultiverseForMaya", quiet=True) import multiverse + path = self.filepath_from_context(context) nodes = current_usd with maintained_selection(): - multiverse.AddUsdCompoundAssetPath(current_usd[0], self.fname) + multiverse.AddUsdCompoundAssetPath(current_usd[0], path) namespace = current_usd[0].split("|")[1].split(":")[0] diff --git a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py index c288e23ded..b3fbfb2ed9 100644 --- a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py +++ b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py @@ -46,18 +46,19 @@ class RedshiftProxyLoader(load.LoaderPlugin): # Ensure Redshift for Maya is loaded. 
cmds.loadPlugin("redshift4maya", quiet=True) + path = self.filepath_from_context(context) with maintained_selection(): cmds.namespace(addNamespace=namespace) with namespaced(namespace, new=False): - nodes, group_node = self.create_rs_proxy( - name, self.fname) + nodes, group_node = self.create_rs_proxy(name, path) self[:] = nodes if not nodes: return # colour the group node - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) if c is not None: diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py index 74ca27ff3c..91767249e0 100644 --- a/openpype/hosts/maya/plugins/load/load_reference.py +++ b/openpype/hosts/maya/plugins/load/load_reference.py @@ -118,15 +118,20 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): except ValueError: family = "model" + project_name = context["project"]["name"] # True by default to keep legacy behaviours attach_to_root = options.get("attach_to_root", True) group_name = options["group_name"] + # no group shall be created + if not attach_to_root: + group_name = namespace + + path = self.filepath_from_context(context) with maintained_selection(): cmds.loadPlugin("AbcImport.mll", quiet=True) - file_url = self.prepare_root_value(self.fname, - context["project"]["name"]) + file_url = self.prepare_root_value(path, project_name) nodes = cmds.file(file_url, namespace=namespace, sharedReferenceFile=False, @@ -147,11 +152,10 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): if current_namespace != ":": group_name = current_namespace + ":" + group_name - group_name = "|" + group_name - self[:] = new_nodes if attach_to_root: + group_name = "|" + group_name roots = cmds.listRelatives(group_name, children=True, fullPath=True) or [] @@ -162,7 +166,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): with parent_nodes(roots, parent=None): cmds.xform(group_name, zeroTransformPivots=True) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + settings = get_project_settings(project_name) display_handle = settings['maya']['load'].get( 'reference_loader', {} @@ -204,6 +208,11 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): self._post_process_rig(name, namespace, context, options) else: if "translate" in options: + if not attach_to_root and new_nodes: + root_nodes = cmds.ls(new_nodes, assemblies=True, + long=True) + # we assume only a single root is ever loaded + group_name = root_nodes[0] cmds.setAttr("{}.translate".format(group_name), *options["translate"]) return new_nodes @@ -221,6 +230,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): self._lock_camera_transforms(members) def _post_process_rig(self, name, namespace, context, options): + nodes = self[:] create_rig_animation_instance( nodes, context, namespace, options=options, log=self.log diff --git a/openpype/hosts/maya/plugins/load/load_rendersetup.py b/openpype/hosts/maya/plugins/load/load_rendersetup.py index 7a2d8b1002..8b85f11958 100644 --- a/openpype/hosts/maya/plugins/load/load_rendersetup.py +++ b/openpype/hosts/maya/plugins/load/load_rendersetup.py @@ -43,8 +43,9 @@ class RenderSetupLoader(load.LoaderPlugin): prefix="_" if asset[0].isdigit() else "", suffix="_", ) - self.log.info(">>> loading json [ {} ]".format(self.fname)) - with open(self.fname, "r") as 
file: + path = self.filepath_from_context(context) + self.log.info(">>> loading json [ {} ]".format(path)) + with open(path, "r") as file: renderSetup.instance().decode( json.load(file), renderSetup.DECODE_AND_OVERWRITE, None) diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py b/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py index 8a386cecfd..0f674a69c4 100644 --- a/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py +++ b/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py @@ -48,7 +48,8 @@ class LoadVDBtoArnold(load.LoaderPlugin): label = "{}:{}".format(namespace, name) root = cmds.group(name=label, empty=True) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) @@ -65,8 +66,9 @@ class LoadVDBtoArnold(load.LoaderPlugin): name="{}Shape".format(root), parent=root) + path = self.filepath_from_context(context) self._set_path(grid_node, - path=self.fname, + path=path, representation=context["representation"]) # Lock the shape node so the user can't delete the transform/shape diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py b/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py index 1f02321dc8..28cfdc7129 100644 --- a/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py +++ b/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py @@ -67,7 +67,8 @@ class LoadVDBtoRedShift(load.LoaderPlugin): label = "{}:{}".format(namespace, name) root = cmds.createNode("transform", name=label) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) @@ -85,7 +86,7 @@ class LoadVDBtoRedShift(load.LoaderPlugin): parent=root) self._set_path(volume_node, - path=self.fname, + path=self.filepath_from_context(context), representation=context["representation"]) nodes = [root, volume_node] diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py b/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py index 9267c59c02..46f2dd674d 100644 --- a/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py +++ b/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py @@ -88,8 +88,9 @@ class LoadVDBtoVRay(load.LoaderPlugin): from openpype.hosts.maya.api.lib import unique_namespace from openpype.hosts.maya.api.pipeline import containerise - assert os.path.exists(self.fname), ( - "Path does not exist: %s" % self.fname + path = self.filepath_from_context(context) + assert os.path.exists(path), ( + "Path does not exist: %s" % path ) try: @@ -126,7 +127,8 @@ class LoadVDBtoVRay(load.LoaderPlugin): label = "{}:{}_VDB".format(namespace, name) root = cmds.group(name=label, empty=True) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) @@ -146,7 +148,7 @@ class LoadVDBtoVRay(load.LoaderPlugin): cmds.connectAttr("time1.outTime", grid_node + ".currentTime") # Set path - self._set_path(grid_node, self.fname, show_preset_popup=True) + self._set_path(grid_node, path, show_preset_popup=True) # Lock the shape node so the user can't delete the transform/shape # as if it was referenced diff --git a/openpype/hosts/maya/plugins/load/load_vrayproxy.py 
b/openpype/hosts/maya/plugins/load/load_vrayproxy.py index 64184f9e7b..9d926a33ed 100644 --- a/openpype/hosts/maya/plugins/load/load_vrayproxy.py +++ b/openpype/hosts/maya/plugins/load/load_vrayproxy.py @@ -12,9 +12,9 @@ import maya.cmds as cmds from openpype.client import get_representation_by_name from openpype.settings import get_project_settings from openpype.pipeline import ( - legacy_io, load, - get_representation_path + get_current_project_name, + get_representation_path, ) from openpype.hosts.maya.api.lib import ( maintained_selection, @@ -53,7 +53,9 @@ class VRayProxyLoader(load.LoaderPlugin): family = "vrayproxy" # get all representations for this version - self.fname = self._get_abc(context["version"]["_id"]) or self.fname + filename = self._get_abc(context["version"]["_id"]) + if not filename: + filename = self.filepath_from_context(context) asset_name = context['asset']["name"] namespace = namespace or unique_namespace( @@ -69,14 +71,15 @@ class VRayProxyLoader(load.LoaderPlugin): cmds.namespace(addNamespace=namespace) with namespaced(namespace, new=False): nodes, group_node = self.create_vray_proxy( - name, filename=self.fname) + name, filename=filename) self[:] = nodes if not nodes: return # colour the group node - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) if c is not None: @@ -185,12 +188,12 @@ class VRayProxyLoader(load.LoaderPlugin): """ self.log.debug( "Looking for abc in published representations of this version.") - project_name = legacy_io.active_project() + project_name = get_current_project_name() abc_rep = get_representation_by_name(project_name, "abc", version_id) if abc_rep: self.log.debug("Found, we'll link alembic to vray proxy.") file_name = get_representation_path(abc_rep) - self.log.debug("File: {}".format(self.fname)) + self.log.debug("File: {}".format(file_name)) return file_name return "" diff --git a/openpype/hosts/maya/plugins/load/load_vrayscene.py b/openpype/hosts/maya/plugins/load/load_vrayscene.py index d87992f9a7..3a2c3a47f2 100644 --- a/openpype/hosts/maya/plugins/load/load_vrayscene.py +++ b/openpype/hosts/maya/plugins/load/load_vrayscene.py @@ -46,15 +46,18 @@ class VRaySceneLoader(load.LoaderPlugin): with maintained_selection(): cmds.namespace(addNamespace=namespace) with namespaced(namespace, new=False): - nodes, root_node = self.create_vray_scene(name, - filename=self.fname) + nodes, root_node = self.create_vray_scene( + name, + filename=self.filepath_from_context(context) + ) self[:] = nodes if not nodes: return # colour the group node - settings = get_project_settings(os.environ['AVALON_PROJECT']) + project_name = context["project"]["name"] + settings = get_project_settings(project_name) colors = settings['maya']['load']['colors'] c = colors.get(family) if c is not None: diff --git a/openpype/hosts/maya/plugins/load/load_xgen.py b/openpype/hosts/maya/plugins/load/load_xgen.py index 16f2e8e842..323f8d7eda 100644 --- a/openpype/hosts/maya/plugins/load/load_xgen.py +++ b/openpype/hosts/maya/plugins/load/load_xgen.py @@ -48,7 +48,8 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): return maya_filepath = self.prepare_root_value( - self.fname, context["project"]["name"] + file_url=self.filepath_from_context(context), + project_name=context["project"]["name"] ) # Reference xgen. Xgen does not like being referenced in under a group. 
diff --git a/openpype/hosts/maya/plugins/load/load_yeti_cache.py b/openpype/hosts/maya/plugins/load/load_yeti_cache.py
index 5ba381050a..5cded13d4e 100644
--- a/openpype/hosts/maya/plugins/load/load_yeti_cache.py
+++ b/openpype/hosts/maya/plugins/load/load_yeti_cache.py
@@ -60,15 +60,17 @@ class YetiCacheLoader(load.LoaderPlugin):
         cmds.loadPlugin("pgYetiMaya", quiet=True)

         # Create Yeti cache nodes according to settings
-        settings = self.read_settings(self.fname)
+        path = self.filepath_from_context(context)
+        settings = self.read_settings(path)
         nodes = []
         for node in settings["nodes"]:
             nodes.extend(self.create_node(namespace, node))

         group_name = "{}:{}".format(namespace, name)
         group_node = cmds.group(nodes, name=group_name)
+        project_name = context["project"]["name"]

-        settings = get_project_settings(os.environ['AVALON_PROJECT'])
+        settings = get_project_settings(project_name)
         colors = settings['maya']['load']['colors']

         c = colors.get(family)
diff --git a/openpype/hosts/maya/plugins/load/load_yeti_rig.py b/openpype/hosts/maya/plugins/load/load_yeti_rig.py
index b8066871b0..6cfcffe27d 100644
--- a/openpype/hosts/maya/plugins/load/load_yeti_rig.py
+++ b/openpype/hosts/maya/plugins/load/load_yeti_rig.py
@@ -19,17 +19,25 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
     def process_reference(
         self, context, name=None, namespace=None, options=None
     ):
-        group_name = options['group_name']
+        path = self.filepath_from_context(context)
+
+        attach_to_root = options.get("attach_to_root", True)
+        group_name = options["group_name"]
+
+        # no group shall be created
+        if not attach_to_root:
+            group_name = namespace
+
         with lib.maintained_selection():
             file_url = self.prepare_root_value(
-                self.fname, context["project"]["name"]
+                path, context["project"]["name"]
             )
             nodes = cmds.file(
                 file_url,
                 namespace=namespace,
                 reference=True,
                 returnNewNodes=True,
-                groupReference=True,
+                groupReference=attach_to_root,
                 groupName=group_name
             )
diff --git a/openpype/hosts/maya/plugins/publish/collect_current_file.py b/openpype/hosts/maya/plugins/publish/collect_current_file.py
new file mode 100644
index 0000000000..c7105a7f3c
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/collect_current_file.py
@@ -0,0 +1,16 @@
+
+import pyblish.api
+
+from maya import cmds
+
+
+class CollectCurrentFile(pyblish.api.ContextPlugin):
+    """Inject the current working file."""
+
+    order = pyblish.api.CollectorOrder - 0.4
+    label = "Maya Current File"
+    hosts = ['maya']
+
+    def process(self, context):
+        """Inject the current working file"""
+        context.data['currentFile'] = cmds.file(query=True, sceneName=True)
diff --git a/openpype/hosts/maya/plugins/publish/collect_inputs.py b/openpype/hosts/maya/plugins/publish/collect_inputs.py
index 895c92762b..30ed21da9c 100644
--- a/openpype/hosts/maya/plugins/publish/collect_inputs.py
+++ b/openpype/hosts/maya/plugins/publish/collect_inputs.py
@@ -172,7 +172,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
         """Collects inputs from nodes in renderlayer, incl.
shaders + camera"""

         # Get the renderlayer
-        renderlayer = instance.data.get("setMembers")
+        renderlayer = instance.data.get("renderlayer")

         if renderlayer == "defaultRenderLayer":
             # Assume all loaded containers in the scene are inputs
diff --git a/openpype/hosts/maya/plugins/publish/collect_instances.py b/openpype/hosts/maya/plugins/publish/collect_instances.py
index 74bdc11a2c..5f914b40d7 100644
--- a/openpype/hosts/maya/plugins/publish/collect_instances.py
+++ b/openpype/hosts/maya/plugins/publish/collect_instances.py
@@ -1,12 +1,11 @@
 from maya import cmds

 import pyblish.api
-import json

 from openpype.hosts.maya.api.lib import get_all_children


-class CollectInstances(pyblish.api.ContextPlugin):
-    """Gather instances by objectSet and pre-defined attribute
+class CollectNewInstances(pyblish.api.InstancePlugin):
+    """Gather members for instances and pre-defined attribute

     This collector takes into account assets that are associated with
     an objectSet and marked with a unique identifier;
@@ -25,134 +24,70 @@

     """

-    label = "Collect Instances"
+    label = "Collect New Instance Data"
     order = pyblish.api.CollectorOrder
     hosts = ["maya"]

-    def process(self, context):
+    def process(self, instance):

-        objectset = cmds.ls("*.id", long=True, type="objectSet",
-                            recursive=True, objectsOnly=True)
+        objset = instance.data.get("instance_node")
+        if not objset:
+            self.log.debug("Instance has no `instance_node` data")

-        context.data['objectsets'] = objectset
-        for objset in objectset:
-
-            if not cmds.attributeQuery("id", node=objset, exists=True):
-                continue
-
-            id_attr = "{}.id".format(objset)
-            if cmds.getAttr(id_attr) != "pyblish.avalon.instance":
-                continue
-
-            # The developer is responsible for specifying
-            # the family of each instance.
-            has_family = cmds.attributeQuery("family",
-                                             node=objset,
-                                             exists=True)
-            assert has_family, "\"%s\" was missing a family" % objset
-
-            members = cmds.sets(objset, query=True)
-            if members is None:
-                self.log.warning("Skipped empty instance: \"%s\" " % objset)
-                continue
-
-            self.log.info("Creating instance for {}".format(objset))
-
-            data = dict()
-
-            # Apply each user defined attribute as data
-            for attr in cmds.listAttr(objset, userDefined=True) or list():
-                try:
-                    value = cmds.getAttr("%s.%s" % (objset, attr))
-                except Exception:
-                    # Some attributes cannot be read directly,
-                    # such as mesh and color attributes. These
-                    # are considered non-essential to this
-                    # particular publishing pipeline.
-                    value = None
-                data[attr] = value
-
-            # temporarily translation of `active` to `publish` till issue has
-            # been resolved, https://github.com/pyblish/pyblish-base/issues/307
-            if "active" in data:
-                data["publish"] = data["active"]
+        # TODO: We might not want to do this in the future
+        # Merge creator attributes into instance.data just so backwards
+        # compatible code still runs as expected
+        creator_attributes = instance.data.get("creator_attributes", {})
+        if creator_attributes:
+            instance.data.update(creator_attributes)

+        members = cmds.sets(objset, query=True) or []
+        if members:
             # Collect members
             members = cmds.ls(members, long=True) or []

             dag_members = cmds.ls(members, type="dagNode", long=True)
             children = get_all_children(dag_members)
             children = cmds.ls(children, noIntermediate=True, long=True)
-
-            parents = []
-            if data.get("includeParentHierarchy", True):
-                # If `includeParentHierarchy` then include the parents
-                # so they will also be picked up in the instance by validators
-                parents = self.get_all_parents(members)
+            parents = (
+                self.get_all_parents(members)
+                if creator_attributes.get("includeParentHierarchy", True)
+                else []
+            )

             members_hierarchy = list(set(members + children + parents))

-            if 'families' not in data:
-                data['families'] = [data.get('family')]
-
-            # Create the instance
-            instance = context.create_instance(objset)
             instance[:] = members_hierarchy
-            instance.data["objset"] = objset

-            # Store the exact members of the object set
-            instance.data["setMembers"] = members
+        elif instance.data["family"] != "workfile":
+            self.log.warning("Empty instance: \"%s\" " % objset)

+        # Store the exact members of the object set
+        instance.data["setMembers"] = members

-            # Define nice label
-            name = cmds.ls(objset, long=False)[0]  # use short name
-            label = "{0} ({1})".format(name, data["asset"])
+        # TODO: This might make more sense as a separate collector
+        # Convert frame values to integers
+        for attr_name in (
+            "handleStart", "handleEnd", "frameStart", "frameEnd",
+        ):
+            value = instance.data.get(attr_name)
+            if value is not None:
+                instance.data[attr_name] = int(value)

-            # Convert frame values to integers
-            for attr_name in (
-                "handleStart", "handleEnd", "frameStart", "frameEnd",
-            ):
-                value = data.get(attr_name)
-                if value is not None:
-                    data[attr_name] = int(value)
+        # Collect the frame range with handles if a frame range is present
+        if "frameStart" in instance.data and "frameEnd" in instance.data:
+            # Take handles from context if not set locally on the instance
+            for key in ["handleStart", "handleEnd"]:
+                if key not in instance.data:
+                    value = instance.context.data[key]
+                    if value is not None:
+                        value = int(value)
+                    instance.data[key] = value

-            # Append start frame and end frame to label if present
-            if "frameStart" in data and "frameEnd" in data:
-                # Take handles from context if not set locally on the instance
-                for key in ["handleStart", "handleEnd"]:
-                    if key not in data:
-                        value = context.data[key]
-                        if value is not None:
-                            value = int(value)
-                        data[key] = value
-
-                data["frameStartHandle"] = int(
-                    data["frameStart"] - data["handleStart"]
-                )
-                data["frameEndHandle"] = int(
-                    data["frameEnd"] + data["handleEnd"]
-                )
-
-                label += " [{0}-{1}]".format(
-                    data["frameStartHandle"], data["frameEndHandle"]
-                )
-
-            instance.data["label"] = label
-            instance.data.update(data)
-            self.log.debug("{}".format(instance.data))
-
-            # Produce diagnostic message for any graphical
-            # user interface interested in visualising it.
- self.log.info("Found: \"%s\" " % instance.data["name"]) - self.log.debug( - "DATA: {} ".format(json.dumps(instance.data, indent=4))) - - def sort_by_family(instance): - """Sort by family""" - return instance.data.get("families", instance.data.get("family")) - - # Sort/grouped by family (preserving local index) - context[:] = sorted(context, key=sort_by_family) - - return context + instance.data["frameStartHandle"] = int( + instance.data["frameStart"] - instance.data["handleStart"] + ) + instance.data["frameEndHandle"] = int( + instance.data["frameEnd"] + instance.data["handleEnd"] + ) def get_all_parents(self, nodes): """Get all parents by using string operations (optimization) diff --git a/openpype/hosts/maya/plugins/publish/collect_look.py b/openpype/hosts/maya/plugins/publish/collect_look.py index 287ddc228b..b3da920566 100644 --- a/openpype/hosts/maya/plugins/publish/collect_look.py +++ b/openpype/hosts/maya/plugins/publish/collect_look.py @@ -285,17 +285,17 @@ class CollectLook(pyblish.api.InstancePlugin): instance: Instance to collect. """ - self.log.info("Looking for look associations " + self.log.debug("Looking for look associations " "for %s" % instance.data['name']) # Discover related object sets - self.log.info("Gathering sets ...") + self.log.debug("Gathering sets ...") sets = self.collect_sets(instance) # Lookup set (optimization) instance_lookup = set(cmds.ls(instance, long=True)) - self.log.info("Gathering set relations ...") + self.log.debug("Gathering set relations ...") # Ensure iteration happen in a list so we can remove keys from the # dict within the loop @@ -308,7 +308,7 @@ class CollectLook(pyblish.api.InstancePlugin): # if node is specified as renderer node type, it will be # serialized with its attributes. if cmds.nodeType(obj_set) in RENDERER_NODE_TYPES: - self.log.info("- {} is {}".format( + self.log.debug("- {} is {}".format( obj_set, cmds.nodeType(obj_set))) node_attrs = [] @@ -354,13 +354,13 @@ class CollectLook(pyblish.api.InstancePlugin): # Remove sets that didn't have any members assigned in the end # Thus the data will be limited to only what we need. - self.log.info("obj_set {}".format(sets[obj_set])) + self.log.debug("obj_set {}".format(sets[obj_set])) if not sets[obj_set]["members"]: self.log.info( "Removing redundant set information: {}".format(obj_set)) sets.pop(obj_set, None) - self.log.info("Gathering attribute changes to instance members..") + self.log.debug("Gathering attribute changes to instance members..") attributes = self.collect_attributes_changed(instance) # Store data on the instance @@ -433,14 +433,14 @@ class CollectLook(pyblish.api.InstancePlugin): for node_type in all_supported_nodes: files.extend(cmds.ls(history, type=node_type, long=True)) - self.log.info("Collected file nodes:\n{}".format(files)) + self.log.debug("Collected file nodes:\n{}".format(files)) # Collect textures if any file nodes are found instance.data["resources"] = [] for n in files: for res in self.collect_resources(n): instance.data["resources"].append(res) - self.log.info("Collected resources: {}".format( + self.log.debug("Collected resources: {}".format( instance.data["resources"])) # Log warning when no relevant sets were retrieved for the look. 
@@ -536,7 +536,7 @@ class CollectLook(pyblish.api.InstancePlugin): # Collect changes to "custom" attributes node_attrs = get_look_attrs(node) - self.log.info( + self.log.debug( "Node \"{0}\" attributes: {1}".format(node, node_attrs) ) diff --git a/openpype/hosts/maya/plugins/publish/collect_pointcache.py b/openpype/hosts/maya/plugins/publish/collect_pointcache.py index d0430c5612..bb9065792f 100644 --- a/openpype/hosts/maya/plugins/publish/collect_pointcache.py +++ b/openpype/hosts/maya/plugins/publish/collect_pointcache.py @@ -16,14 +16,16 @@ class CollectPointcache(pyblish.api.InstancePlugin): instance.data["families"].append("publish.farm") proxy_set = None - for node in instance.data["setMembers"]: - if cmds.nodeType(node) != "objectSet": - continue - members = cmds.sets(node, query=True) - if members is None: - self.log.warning("Skipped empty objectset: \"%s\" " % node) - continue + for node in cmds.ls(instance.data["setMembers"], + exactType="objectSet"): + # Find proxy_SET objectSet in the instance for proxy meshes if node.endswith("proxy_SET"): + members = cmds.sets(node, query=True) + if members is None: + self.log.debug("Skipped empty proxy_SET: \"%s\" " % node) + continue + self.log.debug("Found proxy set: {}".format(node)) + proxy_set = node instance.data["proxy"] = [] instance.data["proxyRoots"] = [] @@ -36,8 +38,9 @@ class CollectPointcache(pyblish.api.InstancePlugin): cmds.listRelatives(member, shapes=True, fullPath=True) ) self.log.debug( - "proxy members: {}".format(instance.data["proxy"]) + "Found proxy members: {}".format(instance.data["proxy"]) ) + break if proxy_set: instance.remove(proxy_set) diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py index babd494758..c17a8789e4 100644 --- a/openpype/hosts/maya/plugins/publish/collect_render.py +++ b/openpype/hosts/maya/plugins/publish/collect_render.py @@ -39,27 +39,29 @@ Provides: instance -> pixelAspect """ -import re import os import platform import json from maya import cmds -import maya.app.renderSetup.model.renderSetup as renderSetup import pyblish.api +from openpype.pipeline import KnownPublishError from openpype.lib import get_formatted_current_time -from openpype.pipeline import legacy_io -from openpype.hosts.maya.api.lib_renderproducts import get as get_layer_render_products # noqa: E501 +from openpype.hosts.maya.api.lib_renderproducts import ( + get as get_layer_render_products, + UnsupportedRendererException +) from openpype.hosts.maya.api import lib -class CollectMayaRender(pyblish.api.ContextPlugin): +class CollectMayaRender(pyblish.api.InstancePlugin): """Gather all publishable render layers from renderSetup.""" order = pyblish.api.CollectorOrder + 0.01 hosts = ["maya"] + families = ["renderlayer"] label = "Collect Render Layers" sync_workfile_version = False @@ -69,388 +71,258 @@ class CollectMayaRender(pyblish.api.ContextPlugin): "underscore": "_" } - def process(self, context): - """Entry point to collector.""" - render_instance = None + def process(self, instance): - for instance in context: - if "rendering" in instance.data["families"]: - render_instance = instance - render_instance.data["remove"] = True + # TODO: Re-add force enable of workfile instance? 
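+        # NOTE: This collector now runs per "renderlayer" instance created by
+        # the new publisher (see the Creator TODOs above) instead of
+        # collecting render layers from the scene itself.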
+ # TODO: Re-add legacy layer support with LAYER_ prefix but in Creator + # TODO: Set and collect active state of RenderLayer in Creator using + # renderlayer.isRenderable() + context = instance.context - # make sure workfile instance publishing is enabled - if "workfile" in instance.data["families"]: - instance.data["publish"] = True - - if not render_instance: - self.log.info( - "No render instance found, skipping render " - "layer collection." - ) - return - - render_globals = render_instance - collected_render_layers = render_instance.data["setMembers"] + layer = instance.data["transientData"]["layer"] + objset = instance.data.get("instance_node") filepath = context.data["currentFile"].replace("\\", "/") - asset = legacy_io.Session["AVALON_ASSET"] workspace = context.data["workspaceDir"] - # Retrieve render setup layers - rs = renderSetup.instance() - maya_render_layers = { - layer.name(): layer for layer in rs.getRenderLayers() + # check if layer is renderable + if not layer.isRenderable(): + msg = "Render layer [ {} ] is not " "renderable".format( + layer.name() + ) + self.log.warning(msg) + + # detect if there are sets (subsets) to attach render to + sets = cmds.sets(objset, query=True) or [] + attach_to = [] + for s in sets: + if not cmds.attributeQuery("family", node=s, exists=True): + continue + + attach_to.append( + { + "version": None, # we need integrator for that + "subset": s, + "family": cmds.getAttr("{}.family".format(s)), + } + ) + self.log.info(" -> attach render to: {}".format(s)) + + layer_name = layer.name() + + # collect all frames we are expecting to be rendered + # return all expected files for all cameras and aovs in given + # frame range + try: + layer_render_products = get_layer_render_products(layer.name()) + except UnsupportedRendererException as exc: + raise KnownPublishError(exc) + render_products = layer_render_products.layer_data.products + assert render_products, "no render products generated" + expected_files = [] + multipart = False + for product in render_products: + if product.multipart: + multipart = True + product_name = product.productName + if product.camera and layer_render_products.has_camera_token(): + product_name = "{}{}".format( + product.camera, + "_{}".format(product_name) if product_name else "") + expected_files.append( + { + product_name: layer_render_products.get_files( + product) + }) + + has_cameras = any(product.camera for product in render_products) + assert has_cameras, "No render cameras found." + + self.log.info("multipart: {}".format( + multipart)) + assert expected_files, "no file names were generated, this is a bug" + self.log.info( + "expected files: {}".format( + json.dumps(expected_files, indent=4, sort_keys=True) + ) + ) + + # if we want to attach render to subset, check if we have AOV's + # in expectedFiles. If so, raise error as we cannot attach AOV + # (considered to be subset on its own) to another subset + if attach_to: + assert isinstance(expected_files, list), ( + "attaching multiple AOVs or renderable cameras to " + "subset is not supported" + ) + + # append full path + aov_dict = {} + default_render_folder = context.data.get("project_settings")\ + .get("maya")\ + .get("RenderSettings")\ + .get("default_render_image_folder") or "" + # replace relative paths with absolute. Render products are + # returned as list of dictionaries. 
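+        # NOTE: publish_meta_path tracks the directory of the last expected
+        # file so the render metadata folder can be derived from it below.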
+ publish_meta_path = None + for aov in expected_files: + full_paths = [] + aov_first_key = list(aov.keys())[0] + for file in aov[aov_first_key]: + full_path = os.path.join(workspace, default_render_folder, + file) + full_path = full_path.replace("\\", "/") + full_paths.append(full_path) + publish_meta_path = os.path.dirname(full_path) + aov_dict[aov_first_key] = full_paths + full_exp_files = [aov_dict] + self.log.info(full_exp_files) + + if publish_meta_path is None: + raise KnownPublishError("Unable to detect any expected output " + "images for: {}. Make sure you have a " + "renderable camera and a valid frame " + "range set for your renderlayer." + "".format(instance.name)) + + frame_start_render = int(self.get_render_attribute( + "startFrame", layer=layer_name)) + frame_end_render = int(self.get_render_attribute( + "endFrame", layer=layer_name)) + + if (int(context.data["frameStartHandle"]) == frame_start_render + and int(context.data["frameEndHandle"]) == frame_end_render): # noqa: W503, E501 + + handle_start = context.data["handleStart"] + handle_end = context.data["handleEnd"] + frame_start = context.data["frameStart"] + frame_end = context.data["frameEnd"] + frame_start_handle = context.data["frameStartHandle"] + frame_end_handle = context.data["frameEndHandle"] + else: + handle_start = 0 + handle_end = 0 + frame_start = frame_start_render + frame_end = frame_end_render + frame_start_handle = frame_start_render + frame_end_handle = frame_end_render + + # find common path to store metadata + # so if image prefix is branching to many directories + # metadata file will be located in top-most common + # directory. + # TODO: use `os.path.commonpath()` after switch to Python 3 + publish_meta_path = os.path.normpath(publish_meta_path) + common_publish_meta_path = os.path.splitdrive( + publish_meta_path)[0] + if common_publish_meta_path: + common_publish_meta_path += os.path.sep + for part in publish_meta_path.replace( + common_publish_meta_path, "").split(os.path.sep): + common_publish_meta_path = os.path.join( + common_publish_meta_path, part) + if part == layer_name: + break + + # TODO: replace this terrible linux hotfix with real solution :) + if platform.system().lower() in ["linux", "darwin"]: + common_publish_meta_path = "/" + common_publish_meta_path + + self.log.info( + "Publish meta path: {}".format(common_publish_meta_path)) + + # Get layer specific settings, might be overrides + colorspace_data = lib.get_color_management_preferences() + data = { + "farm": True, + "attachTo": attach_to, + + "multipartExr": multipart, + "review": instance.data.get("review") or False, + + # Frame range + "handleStart": handle_start, + "handleEnd": handle_end, + "frameStart": frame_start, + "frameEnd": frame_end, + "frameStartHandle": frame_start_handle, + "frameEndHandle": frame_end_handle, + "byFrameStep": int( + self.get_render_attribute("byFrameStep", + layer=layer_name)), + + # Renderlayer + "renderer": self.get_render_attribute( + "currentRenderer", layer=layer_name).lower(), + "setMembers": layer._getLegacyNodeName(), # legacy renderlayer + "renderlayer": layer_name, + + # todo: is `time` and `author` still needed? 
+ "time": get_formatted_current_time(), + "author": context.data["user"], + + # Add source to allow tracing back to the scene from + # which was submitted originally + "source": filepath, + "expectedFiles": full_exp_files, + "publishRenderMetadataFolder": common_publish_meta_path, + "renderProducts": layer_render_products, + "resolutionWidth": lib.get_attr_in_layer( + "defaultResolution.width", layer=layer_name + ), + "resolutionHeight": lib.get_attr_in_layer( + "defaultResolution.height", layer=layer_name + ), + "pixelAspect": lib.get_attr_in_layer( + "defaultResolution.pixelAspect", layer=layer_name + ), + + # todo: Following are likely not needed due to collecting from the + # instance itself if they are attribute definitions + "tileRendering": instance.data.get("tileRendering") or False, # noqa: E501 + "tilesX": instance.data.get("tilesX") or 2, + "tilesY": instance.data.get("tilesY") or 2, + "convertToScanline": instance.data.get( + "convertToScanline") or False, + "useReferencedAovs": instance.data.get( + "useReferencedAovs") or instance.data.get( + "vrayUseReferencedAovs") or False, + "aovSeparator": layer_render_products.layer_data.aov_separator, # noqa: E501 + "renderSetupIncludeLights": instance.data.get( + "renderSetupIncludeLights" + ), + "colorspaceConfig": colorspace_data["config"], + "colorspaceDisplay": colorspace_data["display"], + "colorspaceView": colorspace_data["view"], } - for layer in collected_render_layers: - if layer.startswith("LAYER_"): - # this is support for legacy mode where render layers - # started with `LAYER_` prefix. - layer_name_pattern = r"^LAYER_(.*)" - else: - # new way is to prefix render layer name with instance - # namespace. - layer_name_pattern = r"^.+:(.*)" + rr_settings = ( + context.data["system_settings"]["modules"]["royalrender"] + ) + if rr_settings["enabled"]: + data["rrPathName"] = instance.data.get("rrPathName") + self.log.info(data["rrPathName"]) - # todo: We should have a more explicit way to link the renderlayer - match = re.match(layer_name_pattern, layer) - if not match: - msg = "Invalid layer name in set [ {} ]".format(layer) - self.log.warning(msg) - continue + if self.sync_workfile_version: + data["version"] = context.data["version"] + for _instance in context: + if _instance.data['family'] == "workfile": + _instance.data["version"] = context.data["version"] - expected_layer_name = match.group(1) - self.log.info("Processing '{}' as layer [ {} ]" - "".format(layer, expected_layer_name)) - - # check if layer is part of renderSetup - if expected_layer_name not in maya_render_layers: - msg = "Render layer [ {} ] is not in " "Render Setup".format( - expected_layer_name - ) - self.log.warning(msg) - continue - - # check if layer is renderable - if not maya_render_layers[expected_layer_name].isRenderable(): - msg = "Render layer [ {} ] is not " "renderable".format( - expected_layer_name - ) - self.log.warning(msg) - continue - - # detect if there are sets (subsets) to attach render to - sets = cmds.sets(layer, query=True) or [] - attach_to = [] - for s in sets: - if not cmds.attributeQuery("family", node=s, exists=True): - continue - - attach_to.append( - { - "version": None, # we need integrator for that - "subset": s, - "family": cmds.getAttr("{}.family".format(s)), - } - ) - self.log.info(" -> attach render to: {}".format(s)) - - layer_name = "rs_{}".format(expected_layer_name) - - # collect all frames we are expecting to be rendered - # return all expected files for all cameras and aovs in given - # frame range - 
layer_render_products = get_layer_render_products(layer_name) - render_products = layer_render_products.layer_data.products - assert render_products, "no render products generated" - exp_files = [] - multipart = False - for product in render_products: - if product.multipart: - multipart = True - product_name = product.productName - if product.camera and layer_render_products.has_camera_token(): - product_name = "{}{}".format( - product.camera, - "_" + product_name if product_name else "") - exp_files.append( - { - product_name: layer_render_products.get_files( - product) - }) - - has_cameras = any(product.camera for product in render_products) - assert has_cameras, "No render cameras found." - - self.log.info("multipart: {}".format( - multipart)) - assert exp_files, "no file names were generated, this is bug" - self.log.info( - "expected files: {}".format( - json.dumps(exp_files, indent=4, sort_keys=True) - ) - ) - - # if we want to attach render to subset, check if we have AOV's - # in expectedFiles. If so, raise error as we cannot attach AOV - # (considered to be subset on its own) to another subset - if attach_to: - assert isinstance(exp_files, list), ( - "attaching multiple AOVs or renderable cameras to " - "subset is not supported" - ) - - # append full path - aov_dict = {} - default_render_file = context.data.get('project_settings')\ - .get('maya')\ - .get('RenderSettings')\ - .get('default_render_image_folder') or "" - # replace relative paths with absolute. Render products are - # returned as list of dictionaries. - publish_meta_path = None - for aov in exp_files: - full_paths = [] - aov_first_key = list(aov.keys())[0] - for file in aov[aov_first_key]: - full_path = os.path.join(workspace, default_render_file, - file) - full_path = full_path.replace("\\", "/") - full_paths.append(full_path) - publish_meta_path = os.path.dirname(full_path) - aov_dict[aov_first_key] = full_paths - full_exp_files = [aov_dict] - - frame_start_render = int(self.get_render_attribute( - "startFrame", layer=layer_name)) - frame_end_render = int(self.get_render_attribute( - "endFrame", layer=layer_name)) - - if (int(context.data['frameStartHandle']) == frame_start_render - and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501 - - handle_start = context.data['handleStart'] - handle_end = context.data['handleEnd'] - frame_start = context.data['frameStart'] - frame_end = context.data['frameEnd'] - frame_start_handle = context.data['frameStartHandle'] - frame_end_handle = context.data['frameEndHandle'] - else: - handle_start = 0 - handle_end = 0 - frame_start = frame_start_render - frame_end = frame_end_render - frame_start_handle = frame_start_render - frame_end_handle = frame_end_render - - # find common path to store metadata - # so if image prefix is branching to many directories - # metadata file will be located in top-most common - # directory. 
- # TODO: use `os.path.commonpath()` after switch to Python 3 - publish_meta_path = os.path.normpath(publish_meta_path) - common_publish_meta_path = os.path.splitdrive( - publish_meta_path)[0] - if common_publish_meta_path: - common_publish_meta_path += os.path.sep - for part in publish_meta_path.replace( - common_publish_meta_path, "").split(os.path.sep): - common_publish_meta_path = os.path.join( - common_publish_meta_path, part) - if part == expected_layer_name: - break - - # TODO: replace this terrible linux hotfix with real solution :) - if platform.system().lower() in ["linux", "darwin"]: - common_publish_meta_path = "/" + common_publish_meta_path - - self.log.info( - "Publish meta path: {}".format(common_publish_meta_path)) - - self.log.info(full_exp_files) - self.log.info("collecting layer: {}".format(layer_name)) - # Get layer specific settings, might be overrides - colorspace_data = lib.get_color_management_preferences() - data = { - "subset": expected_layer_name, - "attachTo": attach_to, - "setMembers": layer_name, - "multipartExr": multipart, - "review": render_instance.data.get("review") or False, - "publish": True, - - "handleStart": handle_start, - "handleEnd": handle_end, - "frameStart": frame_start, - "frameEnd": frame_end, - "frameStartHandle": frame_start_handle, - "frameEndHandle": frame_end_handle, - "byFrameStep": int( - self.get_render_attribute("byFrameStep", - layer=layer_name)), - "renderer": self.get_render_attribute( - "currentRenderer", layer=layer_name).lower(), - # instance subset - "family": "renderlayer", - "families": ["renderlayer"], - "asset": asset, - "time": get_formatted_current_time(), - "author": context.data["user"], - # Add source to allow tracing back to the scene from - # which was submitted originally - "source": filepath, - "expectedFiles": full_exp_files, - "publishRenderMetadataFolder": common_publish_meta_path, - "renderProducts": layer_render_products, - "resolutionWidth": lib.get_attr_in_layer( - "defaultResolution.width", layer=layer_name - ), - "resolutionHeight": lib.get_attr_in_layer( - "defaultResolution.height", layer=layer_name - ), - "pixelAspect": lib.get_attr_in_layer( - "defaultResolution.pixelAspect", layer=layer_name - ), - "tileRendering": render_instance.data.get("tileRendering") or False, # noqa: E501 - "tilesX": render_instance.data.get("tilesX") or 2, - "tilesY": render_instance.data.get("tilesY") or 2, - "priority": render_instance.data.get("priority"), - "convertToScanline": render_instance.data.get( - "convertToScanline") or False, - "useReferencedAovs": render_instance.data.get( - "useReferencedAovs") or render_instance.data.get( - "vrayUseReferencedAovs") or False, - "aovSeparator": layer_render_products.layer_data.aov_separator, # noqa: E501 - "renderSetupIncludeLights": render_instance.data.get( - "renderSetupIncludeLights" - ), - "colorspaceConfig": colorspace_data["config"], - "colorspaceDisplay": colorspace_data["display"], - "colorspaceView": colorspace_data["view"], - "strict_error_checking": render_instance.data.get( - "strict_error_checking", True - ) - } - - # Collect Deadline url if Deadline module is enabled - deadline_settings = ( - context.data["system_settings"]["modules"]["deadline"] - ) - if deadline_settings["enabled"]: - data["deadlineUrl"] = render_instance.data["deadlineUrl"] - - if self.sync_workfile_version: - data["version"] = context.data["version"] - - for instance in context: - if instance.data['family'] == "workfile": - instance.data["version"] = context.data["version"] - - # handle 
standalone renderers - if render_instance.data.get("vrayScene") is True: - data["families"].append("vrayscene_render") - - if render_instance.data.get("assScene") is True: - data["families"].append("assscene_render") - - # Include (optional) global settings - # Get global overrides and translate to Deadline values - overrides = self.parse_options(str(render_globals)) - data.update(**overrides) - - # get string values for pools - primary_pool = overrides["renderGlobals"]["Pool"] - secondary_pool = overrides["renderGlobals"].get("SecondaryPool") - data["primaryPool"] = primary_pool - data["secondaryPool"] = secondary_pool - - # Define nice label - label = "{0} ({1})".format(expected_layer_name, data["asset"]) - label += " [{0}-{1}]".format( - int(data["frameStartHandle"]), int(data["frameEndHandle"]) - ) - - instance = context.create_instance(expected_layer_name) - instance.data["label"] = label - instance.data["farm"] = True - instance.data.update(data) - - def parse_options(self, render_globals): - """Get all overrides with a value, skip those without. - - Here's the kicker. These globals override defaults in the submission - integrator, but an empty value means no overriding is made. - Otherwise, Frames would override the default frames set under globals. - - Args: - render_globals (str): collection of render globals - - Returns: - dict: only overrides with values - - """ - attributes = lib.read(render_globals) - - options = {"renderGlobals": {}} - options["renderGlobals"]["Priority"] = attributes["priority"] - - # Check for specific pools - pool_a, pool_b = self._discover_pools(attributes) - options["renderGlobals"].update({"Pool": pool_a}) - if pool_b: - options["renderGlobals"].update({"SecondaryPool": pool_b}) - - # Machine list - machine_list = attributes["machineList"] - if machine_list: - key = "Whitelist" if attributes["whitelist"] else "Blacklist" - options["renderGlobals"][key] = machine_list - - # Suspend publish job - state = "Suspended" if attributes["suspendPublishJob"] else "Active" - options["publishJobState"] = state - - chunksize = attributes.get("framesPerTask", 1) - options["renderGlobals"]["ChunkSize"] = chunksize + # Define nice label + label = "{0} ({1})".format(layer_name, instance.data["asset"]) + label += " [{0}-{1}]".format( + int(data["frameStartHandle"]), int(data["frameEndHandle"]) + ) + data["label"] = label # Override frames should be False if extendFrames is False. 
This is
         # to ensure it doesn't go off doing crazy unpredictable things
-        override_frames = False
-        extend_frames = attributes.get("extendFrames", False)
-        if extend_frames:
-            override_frames = attributes.get("overrideExistingFrame", False)
+        extend_frames = instance.data.get("extendFrames", False)
+        if not extend_frames:
+            instance.data["overrideExistingFrame"] = False

-        options["extendFrames"] = extend_frames
-        options["overrideExistingFrame"] = override_frames
-
-        maya_render_plugin = "MayaBatch"
-
-        options["mayaRenderPlugin"] = maya_render_plugin
-
-        return options
-
-    def _discover_pools(self, attributes):
-
-        pool_a = None
-        pool_b = None
-
-        # Check for specific pools
-        pool_b = []
-        if "primaryPool" in attributes:
-            pool_a = attributes["primaryPool"]
-            if "secondaryPool" in attributes:
-                pool_b = attributes["secondaryPool"]
-
-        else:
-            # Backwards compatibility
-            pool_str = attributes.get("pools", None)
-            if pool_str:
-                pool_a, pool_b = pool_str.split(";")
-
-        # Ensure empty entry token is caught
-        if pool_b == "-":
-            pool_b = None
-
-        return pool_a, pool_b
+        # Update the instance
+        instance.data.update(data)

     @staticmethod
     def get_render_attribute(attr, layer):
diff --git a/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py b/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
index 9666499c42..c3dc31ead9 100644
--- a/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
+++ b/openpype/hosts/maya/plugins/publish/collect_render_layer_aovs.py
@@ -50,7 +50,7 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):
         result = []

         # Collect all AOVs / Render Elements
-        layer = instance.data["setMembers"]
+        layer = instance.data["renderlayer"]

         node_type = rp_node_types[renderer]
         render_elements = cmds.ls(type=node_type)
diff --git a/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py b/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
index 93a37d8693..d1c3cf3b2c 100644
--- a/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
+++ b/openpype/hosts/maya/plugins/publish/collect_renderable_camera.py
@@ -19,7 +19,7 @@ class CollectRenderableCamera(pyblish.api.InstancePlugin):
         if "vrayscene_layer" in instance.data.get("families", []):
             layer = instance.data.get("layer")
         else:
-            layer = instance.data["setMembers"]
+            layer = instance.data["renderlayer"]

         self.log.info("layer: {}".format(layer))
         cameras = cmds.ls(type="camera", long=True)
diff --git a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py
index 5c190a4a7b..586939a3b8 100644
--- a/openpype/hosts/maya/plugins/publish/collect_review.py
+++ b/openpype/hosts/maya/plugins/publish/collect_review.py
@@ -18,14 +18,10 @@ class CollectReview(pyblish.api.InstancePlugin):

     def process(self, instance):

-        self.log.debug('instance: {}'.format(instance))
-
-        task = legacy_io.Session["AVALON_TASK"]
-
         # Get panel.
         instance.data["panel"] = cmds.playblast(
             activeEditor=True
-        ).split("|")[-1]
+        ).rsplit("|", 1)[-1]

         # get cameras
         members = instance.data['setMembers']
@@ -34,11 +30,12 @@
         camera = cameras[0] if cameras else None

         context = instance.context
-        objectset = context.data['objectsets']
+        objectset = {
+            i.data.get("instance_node") for i in context
+        }

-        # Convert enum attribute index to string for Display Lights.
-        index = instance.data.get("displayLights", 0)
-        display_lights = lib.DISPLAY_LIGHTS_VALUES[index]
+        # Collect display lights.
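+        # NOTE: A value of "project_settings" is resolved from the
+        # ExtractPlayblast project settings just below.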
+ display_lights = instance.data.get("displayLights", "default") if display_lights == "project_settings": settings = instance.context.data["project_settings"] settings = settings["maya"]["publish"]["ExtractPlayblast"] @@ -60,7 +57,7 @@ class CollectReview(pyblish.api.InstancePlugin): burninDataMembers["focalLength"] = focal_length # Account for nested instances like model. - reviewable_subsets = list(set(members) & set(objectset)) + reviewable_subsets = list(set(members) & objectset) if reviewable_subsets: if len(reviewable_subsets) > 1: raise KnownPublishError( @@ -97,7 +94,11 @@ class CollectReview(pyblish.api.InstancePlugin): data["frameStart"] = instance.data["frameStart"] data["frameEnd"] = instance.data["frameEnd"] data['step'] = instance.data['step'] - data['fps'] = instance.data['fps'] + # this (with other time related data) should be set on + # representations. Once plugins like Extract Review start + # using representations, this should be removed from here + # as Extract Playblast is already adding fps to representation. + data['fps'] = context.data['fps'] data['review_width'] = instance.data['review_width'] data['review_height'] = instance.data['review_height'] data["isolate"] = instance.data["isolate"] @@ -106,12 +107,16 @@ class CollectReview(pyblish.api.InstancePlugin): data["displayLights"] = display_lights data["burninDataMembers"] = burninDataMembers + for key, value in instance.data["publish_attributes"].items(): + data["publish_attributes"][key] = value + # The review instance must be active cmds.setAttr(str(instance) + '.active', 1) instance.data['remove'] = True else: + task = legacy_io.Session["AVALON_TASK"] legacy_subset_name = task + 'Review' asset_doc = instance.context.data['assetEntity'] project_name = legacy_io.active_project() @@ -133,6 +138,11 @@ class CollectReview(pyblish.api.InstancePlugin): instance.data["frameEndHandle"] instance.data["displayLights"] = display_lights instance.data["burninDataMembers"] = burninDataMembers + # this (with other time related data) should be set on + # representations. Once plugins like Extract Review start + # using representations, this should be removed from here + # as Extract Playblast is already adding fps to representation. 
+            instance.data["fps"] = instance.context.data["fps"]

             # make ftrack publishable
             instance.data.setdefault("families", []).append('ftrack')
diff --git a/openpype/hosts/maya/plugins/publish/collect_vrayscene.py b/openpype/hosts/maya/plugins/publish/collect_vrayscene.py
index 0bae9656f3..b9181337a9 100644
--- a/openpype/hosts/maya/plugins/publish/collect_vrayscene.py
+++ b/openpype/hosts/maya/plugins/publish/collect_vrayscene.py
@@ -24,129 +24,91 @@ class CollectVrayScene(pyblish.api.InstancePlugin):

     def process(self, instance):
         """Collector entry point."""
-        collected_render_layers = instance.data["setMembers"]
-        instance.data["remove"] = True
+        context = instance.context

-        _rs = renderSetup.instance()
-        # current_layer = _rs.getVisibleRenderLayer()
+        layer = instance.data["transientData"]["layer"]
+        layer_name = layer.name()
+
+        renderer = self.get_render_attribute("currentRenderer",
+                                             layer=layer_name)
+        if renderer != "vray":
+            self.log.warning("Layer '{}' renderer is not set to V-Ray".format(
+                layer_name
+            ))

         # collect all frames we are expecting to be rendered
-        renderer = cmds.getAttr(
-            "defaultRenderGlobals.currentRenderer"
-        ).lower()
+        frame_start_render = int(self.get_render_attribute(
+            "startFrame", layer=layer_name))
+        frame_end_render = int(self.get_render_attribute(
+            "endFrame", layer=layer_name))

-        if renderer != "vray":
-            raise AssertionError("Vray is not enabled.")
+        if (int(context.data['frameStartHandle']) == frame_start_render
+                and int(context.data['frameEndHandle']) == frame_end_render):  # noqa: W503, E501

-        maya_render_layers = {
-            layer.name(): layer for layer in _rs.getRenderLayers()
+            handle_start = context.data['handleStart']
+            handle_end = context.data['handleEnd']
+            frame_start = context.data['frameStart']
+            frame_end = context.data['frameEnd']
+            frame_start_handle = context.data['frameStartHandle']
+            frame_end_handle = context.data['frameEndHandle']
+        else:
+            handle_start = 0
+            handle_end = 0
+            frame_start = frame_start_render
+            frame_end = frame_end_render
+            frame_start_handle = frame_start_render
+            frame_end_handle = frame_end_render
+
+        # Get layer specific settings, might be overrides
+        data = {
+            "subset": layer_name,
+            "layer": layer_name,
+            # TODO: This likely needs fixing now
+            # Before refactor: cmds.sets(layer, q=True) or ["*"]
+            "setMembers": ["*"],
+            "review": False,
+            "publish": True,
+            "handleStart": handle_start,
+            "handleEnd": handle_end,
+            "frameStart": frame_start,
+            "frameEnd": frame_end,
+            "frameStartHandle": frame_start_handle,
+            "frameEndHandle": frame_end_handle,
+            "byFrameStep": int(
+                self.get_render_attribute("byFrameStep",
+                                          layer=layer_name)),
+            "renderer": renderer,
+            # instance subset
+            "family": "vrayscene_layer",
+            "families": ["vrayscene_layer"],
+            "time": get_formatted_current_time(),
+            "author": context.data["user"],
+            # Add source to allow tracing back to the scene from
+            # which was submitted originally
+            "source": context.data["currentFile"].replace("\\", "/"),
+            "resolutionWidth": lib.get_attr_in_layer(
+                "defaultResolution.width", layer=layer_name
+            ),
+            "resolutionHeight": lib.get_attr_in_layer(
+                "defaultResolution.height", layer=layer_name
+            ),
+            "pixelAspect": lib.get_attr_in_layer(
+                "defaultResolution.pixelAspect", layer=layer_name
+            ),
+            "priority": instance.data.get("priority"),
+            "useMultipleSceneFiles": instance.data.get(
+                "vraySceneMultipleFiles")
         }

-        layer_list = []
-        for layer in collected_render_layers:
-            # every layer in set should start with `LAYER_` prefix
-            try:
-                expected_layer_name = 
re.search(r"^.+:(.*)", layer).group(1) - except IndexError: - msg = "Invalid layer name in set [ {} ]".format(layer) - self.log.warning(msg) - continue + instance.data.update(data) - self.log.info("processing %s" % layer) - # check if layer is part of renderSetup - if expected_layer_name not in maya_render_layers: - msg = "Render layer [ {} ] is not in " "Render Setup".format( - expected_layer_name - ) - self.log.warning(msg) - continue - - # check if layer is renderable - if not maya_render_layers[expected_layer_name].isRenderable(): - msg = "Render layer [ {} ] is not " "renderable".format( - expected_layer_name - ) - self.log.warning(msg) - continue - - layer_name = "rs_{}".format(expected_layer_name) - - self.log.debug(expected_layer_name) - layer_list.append(expected_layer_name) - - frame_start_render = int(self.get_render_attribute( - "startFrame", layer=layer_name)) - frame_end_render = int(self.get_render_attribute( - "endFrame", layer=layer_name)) - - if (int(context.data['frameStartHandle']) == frame_start_render - and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501 - - handle_start = context.data['handleStart'] - handle_end = context.data['handleEnd'] - frame_start = context.data['frameStart'] - frame_end = context.data['frameEnd'] - frame_start_handle = context.data['frameStartHandle'] - frame_end_handle = context.data['frameEndHandle'] - else: - handle_start = 0 - handle_end = 0 - frame_start = frame_start_render - frame_end = frame_end_render - frame_start_handle = frame_start_render - frame_end_handle = frame_end_render - - # Get layer specific settings, might be overrides - data = { - "subset": expected_layer_name, - "layer": layer_name, - "setMembers": cmds.sets(layer, q=True) or ["*"], - "review": False, - "publish": True, - "handleStart": handle_start, - "handleEnd": handle_end, - "frameStart": frame_start, - "frameEnd": frame_end, - "frameStartHandle": frame_start_handle, - "frameEndHandle": frame_end_handle, - "byFrameStep": int( - self.get_render_attribute("byFrameStep", - layer=layer_name)), - "renderer": self.get_render_attribute("currentRenderer", - layer=layer_name), - # instance subset - "family": "vrayscene_layer", - "families": ["vrayscene_layer"], - "asset": legacy_io.Session["AVALON_ASSET"], - "time": get_formatted_current_time(), - "author": context.data["user"], - # Add source to allow tracing back to the scene from - # which was submitted originally - "source": context.data["currentFile"].replace("\\", "/"), - "resolutionWidth": lib.get_attr_in_layer( - "defaultResolution.height", layer=layer_name - ), - "resolutionHeight": lib.get_attr_in_layer( - "defaultResolution.width", layer=layer_name - ), - "pixelAspect": lib.get_attr_in_layer( - "defaultResolution.pixelAspect", layer=layer_name - ), - "priority": instance.data.get("priority"), - "useMultipleSceneFiles": instance.data.get( - "vraySceneMultipleFiles") - } - - # Define nice label - label = "{0} ({1})".format(expected_layer_name, data["asset"]) - label += " [{0}-{1}]".format( - int(data["frameStartHandle"]), int(data["frameEndHandle"]) - ) - - instance = context.create_instance(expected_layer_name) - instance.data["label"] = label - instance.data.update(data) + # Define nice label + label = "{0} ({1})".format(layer_name, instance.data["asset"]) + label += " [{0}-{1}]".format( + int(data["frameStartHandle"]), int(data["frameEndHandle"]) + ) + instance.data["label"] = label def get_render_attribute(self, attr, layer): """Get attribute from render options. 
diff --git a/openpype/hosts/maya/plugins/publish/collect_workfile.py b/openpype/hosts/maya/plugins/publish/collect_workfile.py
index 12d86869ea..e2b64f1ebd 100644
--- a/openpype/hosts/maya/plugins/publish/collect_workfile.py
+++ b/openpype/hosts/maya/plugins/publish/collect_workfile.py
@@ -1,46 +1,30 @@
 import os
 import pyblish.api
-from maya import cmds
-from openpype.pipeline import legacy_io
-
-class CollectWorkfile(pyblish.api.ContextPlugin):
-    """Inject the current working file into context"""
+class CollectWorkfileData(pyblish.api.InstancePlugin):
+    """Inject data into Workfile instance"""
 
     order = pyblish.api.CollectorOrder - 0.01
     label = "Maya Workfile"
    hosts = ['maya']
+    families = ["workfile"]
 
-    def process(self, context):
+    def process(self, instance):
         """Inject the current working file"""
-        current_file = cmds.file(query=True, sceneName=True)
-        context.data['currentFile'] = current_file
+        context = instance.context
+        current_file = instance.context.data['currentFile']
         folder, file = os.path.split(current_file)
         filename, ext = os.path.splitext(file)
 
-        task = legacy_io.Session["AVALON_TASK"]
-
-        data = {}
-
-        # create instance
-        instance = context.create_instance(name=filename)
-        subset = 'workfile' + task.capitalize()
-
-        data.update({
-            "subset": subset,
-            "asset": os.getenv("AVALON_ASSET", None),
-            "label": subset,
-            "publish": True,
-            "family": 'workfile',
-            "families": ['workfile'],
+        data = {  # noqa
             "setMembers": [current_file],
             "frameStart": context.data['frameStart'],
             "frameEnd": context.data['frameEnd'],
             "handleStart": context.data['handleStart'],
             "handleEnd": context.data['handleEnd']
-        })
+        }
 
         data['representations'] = [{
             'name': ext.lstrip("."),
@@ -50,8 +34,3 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
         }]
 
         instance.data.update(data)
-
-        self.log.info('Collected instance: {}'.format(file))
-        self.log.info('Scene path: {}'.format(current_file))
-        self.log.info('staging Dir: {}'.format(folder))
-        self.log.info('subset: {}'.format(subset))
diff --git a/openpype/hosts/maya/plugins/publish/extract_assembly.py b/openpype/hosts/maya/plugins/publish/extract_assembly.py
index 35932003ee..9b2978d192 100644
--- a/openpype/hosts/maya/plugins/publish/extract_assembly.py
+++ b/openpype/hosts/maya/plugins/publish/extract_assembly.py
@@ -31,7 +31,7 @@ class ExtractAssembly(publish.Extractor):
         with open(json_path, "w") as filepath:
             json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)
 
-        self.log.info("Extracting point cache ..")
+        self.log.debug("Extracting pointcache ..")
         cmds.select(instance.data["nodesHierarchy"])
 
         # Run basic alembic exporter
diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
index 7467fa027d..30e6b89f2f 100644
--- a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
+++ b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py
@@ -106,7 +106,7 @@ class ExtractCameraMayaScene(publish.Extractor):
             instance.context.data["project_settings"]["maya"]["ext_mapping"]
         )
         if ext_mapping:
-            self.log.info("Looking in settings for scene type ...")
+            self.log.debug("Looking in settings for scene type ...")
             # use extension mapping for first family found
             for family in self.families:
                 try:
diff --git a/openpype/hosts/maya/plugins/publish/extract_import_reference.py b/openpype/hosts/maya/plugins/publish/extract_import_reference.py
index 51c82dde92..8bb82be9b6 100644
--- a/openpype/hosts/maya/plugins/publish/extract_import_reference.py
+++ b/openpype/hosts/maya/plugins/publish/extract_import_reference.py
@@ -8,10 +8,12 @@ import tempfile
 
 from openpype.lib import run_subprocess
 from openpype.pipeline import publish
+from openpype.pipeline.publish import OptionalPyblishPluginMixin
 from openpype.hosts.maya.api import lib
 
-class ExtractImportReference(publish.Extractor):
+class ExtractImportReference(publish.Extractor,
+                             OptionalPyblishPluginMixin):
     """
 
     Extract the scene with imported reference.
@@ -32,11 +34,14 @@ class ExtractImportReference(publish.Extractor):
         cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"]  # noqa
 
     def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
         ext_mapping = (
             instance.context.data["project_settings"]["maya"]["ext_mapping"]
         )
         if ext_mapping:
-            self.log.info("Looking in settings for scene type ...")
+            self.log.debug("Looking in settings for scene type ...")
             # use extension mapping for first family found
             for family in self.families:
                 try:
diff --git a/openpype/hosts/maya/plugins/publish/extract_layout.py b/openpype/hosts/maya/plugins/publish/extract_layout.py
index 7921fca069..bf5b4fc0e7 100644
--- a/openpype/hosts/maya/plugins/publish/extract_layout.py
+++ b/openpype/hosts/maya/plugins/publish/extract_layout.py
@@ -6,7 +6,7 @@ from maya import cmds
 from maya.api import OpenMaya as om
 
 from openpype.client import get_representation_by_id
-from openpype.pipeline import legacy_io, publish
+from openpype.pipeline import publish
 
 class ExtractLayout(publish.Extractor):
@@ -30,7 +30,7 @@ class ExtractLayout(publish.Extractor):
         json_data = []
         # TODO representation queries can be refactored to be faster
-        project_name = legacy_io.active_project()
+        project_name = instance.context.data["projectName"]
 
         for asset in cmds.sets(str(instance), query=True):
             # Find the container
diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py
index 3cc95a0b2e..b13568c781 100644
--- a/openpype/hosts/maya/plugins/publish/extract_look.py
+++ b/openpype/hosts/maya/plugins/publish/extract_look.py
@@ -15,8 +15,14 @@ import pyblish.api
 
 from maya import cmds  # noqa
 
-from openpype.lib.vendor_bin_utils import find_executable
-from openpype.lib import source_hash, run_subprocess, get_oiio_tools_path
+from openpype.lib import (
+    find_executable,
+    source_hash,
+    run_subprocess,
+    get_oiio_tool_args,
+    ToolNotFoundError,
+)
+
 from openpype.pipeline import legacy_io, publish, KnownPublishError
 from openpype.hosts.maya.api import lib
 
@@ -267,12 +273,11 @@ class MakeTX(TextureProcessor):
 
         """
-        maketx_path = get_oiio_tools_path("maketx")
-
-        if not maketx_path:
-            raise AssertionError(
-                "OIIO 'maketx' tool not found. Result: {}".format(maketx_path)
-            )
+        try:
+            maketx_args = get_oiio_tool_args("maketx")
+        except ToolNotFoundError:
+            raise KnownPublishError(
+                "OpenImageIO is not available on the machine")
 
         # Define .tx filepath in staging if source file is not .tx
         fname, ext = os.path.splitext(os.path.basename(source))
@@ -328,8 +333,7 @@ class MakeTX(TextureProcessor):
 
         self.log.info("Generating .tx file for %s .." % source)
 
-        subprocess_args = [
-            maketx_path,
+        subprocess_args = maketx_args + [
             "-v",  # verbose
             "-u",  # update mode
             # --checknan doesn't influence the output file but aborts the
@@ -412,7 +416,7 @@ class ExtractLook(publish.Extractor):
             instance.context.data["project_settings"]["maya"]["ext_mapping"]
         )
         if ext_mapping:
-            self.log.info("Looking in settings for scene type ...")
+            self.log.debug("Looking in settings for scene type ...")
             # use extension mapping for first family found
             for family in self.families:
                 try:
@@ -444,12 +448,12 @@ class ExtractLook(publish.Extractor):
 
         # Remove all members of the sets so they are not included in the
         # exported file by accident
-        self.log.info("Processing sets..")
+        self.log.debug("Processing sets..")
         lookdata = instance.data["lookData"]
         relationships = lookdata["relationships"]
         sets = list(relationships.keys())
         if not sets:
-            self.log.info("No sets found")
+            self.log.info("No sets found for the look")
             return
 
         # Specify texture processing executables to activate
diff --git a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
index c2411ca651..d87d6c208a 100644
--- a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
+++ b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py
@@ -29,7 +29,7 @@ class ExtractMayaSceneRaw(publish.Extractor):
             instance.context.data["project_settings"]["maya"]["ext_mapping"]
         )
         if ext_mapping:
-            self.log.info("Looking in settings for scene type ...")
+            self.log.debug("Looking in settings for scene type ...")
             # use extension mapping for first family found
             for family in self.families:
                 try:
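Several extractors above repeat the same `ext_mapping` lookup to resolve the output scene type for the first matching family. A simplified standalone sketch of that resolution (the plug-ins above catch `AttributeError`; a plain dict raises `KeyError`, so that is what this sketch handles):

```python
def resolve_scene_type(ext_mapping, families, default="ma"):
    """Sketch of the ext_mapping lookup repeated by the extractors above.

    Returns the mapped extension for the first family found, falling
    back to a default. Plain dict/list stand-ins, not the real plug-in.
    """
    for family in families:
        try:
            return ext_mapping[family]
        except KeyError:
            # No mapping for this family, try the next one.
            continue
    return default


print(resolve_scene_type({"model": "mb"}, ["model", "pointcache"]))  # mb
print(resolve_scene_type({}, ["rig"]))                               # ma
```
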
diff --git a/openpype/hosts/maya/plugins/publish/extract_model.py b/openpype/hosts/maya/plugins/publish/extract_model.py
index 7c8c3a2981..5137dffd94 100644
--- a/openpype/hosts/maya/plugins/publish/extract_model.py
+++ b/openpype/hosts/maya/plugins/publish/extract_model.py
@@ -8,7 +8,8 @@ from openpype.pipeline import publish
 from openpype.hosts.maya.api import lib
 
-class ExtractModel(publish.Extractor):
+class ExtractModel(publish.Extractor,
+                   publish.OptionalPyblishPluginMixin):
     """Extract as Model (Maya Scene).
 
     Only extracts contents based on the original "setMembers" data to ensure
@@ -31,11 +32,14 @@ class ExtractModel(publish.Extractor):
 
     def process(self, instance):
         """Plugin entry point."""
+        if not self.is_active(instance.data):
+            return
+
         ext_mapping = (
             instance.context.data["project_settings"]["maya"]["ext_mapping"]
         )
         if ext_mapping:
-            self.log.info("Looking in settings for scene type ...")
+            self.log.debug("Looking in settings for scene type ...")
             # use extension mapping for first family found
             for family in self.families:
                 try:
diff --git a/openpype/hosts/maya/plugins/publish/extract_pointcache.py b/openpype/hosts/maya/plugins/publish/extract_pointcache.py
index f44c13767c..9537a11ee4 100644
--- a/openpype/hosts/maya/plugins/publish/extract_pointcache.py
+++ b/openpype/hosts/maya/plugins/publish/extract_pointcache.py
@@ -45,7 +45,7 @@ class ExtractAlembic(publish.Extractor):
         attr_prefixes = instance.data.get("attrPrefix", "").split(";")
         attr_prefixes = [value for value in attr_prefixes if value.strip()]
 
-        self.log.info("Extracting pointcache..")
+        self.log.debug("Extracting pointcache..")
 
         dirname = self.staging_dir(instance)
         parent_dir = self.staging_dir(instance)
@@ -86,7 +86,6 @@ class ExtractAlembic(publish.Extractor):
                 end=end))
 
         suspend = not instance.data.get("refresh", False)
-        self.log.info(nodes)
         with suspended_refresh(suspend=suspend):
             with maintained_selection():
                 cmds.select(nodes, noExpand=True)
diff --git a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
index 4377275635..834b335fc5 100644
--- a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
+++ b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
@@ -29,15 +29,21 @@ class ExtractRedshiftProxy(publish.Extractor):
         if not anim_on:
             # Remove animation information because it is not required for
             # non-animated subsets
-            instance.data.pop("proxyFrameStart", None)
-            instance.data.pop("proxyFrameEnd", None)
+            keys = ["frameStart",
+                    "frameEnd",
+                    "handleStart",
+                    "handleEnd",
+                    "frameStartHandle",
+                    "frameEndHandle"]
+            for key in keys:
+                instance.data.pop(key, None)
 
         else:
-            start_frame = instance.data["proxyFrameStart"]
-            end_frame = instance.data["proxyFrameEnd"]
+            start_frame = instance.data["frameStartHandle"]
+            end_frame = instance.data["frameEndHandle"]
             rs_options = "{}startFrame={};endFrame={};frameStep={};".format(
                 rs_options, start_frame,
-                end_frame, instance.data["proxyFrameStep"]
+                end_frame, instance.data["step"]
             )
 
         root, ext = os.path.splitext(file_path)
@@ -48,7 +54,7 @@ class ExtractRedshiftProxy(publish.Extractor):
                 for frame in range(
                     int(start_frame),
                     int(end_frame) + 1,
-                    int(instance.data["proxyFrameStep"]),
+                    int(instance.data["step"]),
                 )]
 
         # vertex_colors = instance.data.get("vertexColors", False)
@@ -74,8 +80,6 @@ class ExtractRedshiftProxy(publish.Extractor):
             'files': repr_files,
             "stagingDir": staging_dir,
         }
-        if anim_on:
-            representation["frameStart"] = instance.data["proxyFrameStart"]
 
         instance.data["representations"].append(representation)
 
         self.log.info("Extracted instance '%s' to: %s"
diff --git a/openpype/hosts/maya/plugins/publish/extract_rig.py b/openpype/hosts/maya/plugins/publish/extract_rig.py
index c71a2f710d..be57b9de07 100644
--- a/openpype/hosts/maya/plugins/publish/extract_rig.py
+++ b/openpype/hosts/maya/plugins/publish/extract_rig.py
@@ -22,13 +22,13 @@ class ExtractRig(publish.Extractor):
             instance.context.data["project_settings"]["maya"]["ext_mapping"]
         )
         if ext_mapping:
-            self.log.info("Looking in settings for scene type ...")
+            self.log.debug("Looking in settings for scene type ...")
             # use extension mapping for first family found
             for family in self.families:
                 try:
                     self.scene_type = ext_mapping[family]
                     self.log.info(
-                        "Using {} as scene type".format(self.scene_type))
+                        "Using '.{}' as scene type".format(self.scene_type))
                     break
                 except AttributeError:
                     # no preset found
diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
index e1f847f31a..4a797eb462 100644
--- a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
+++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh_abc.py
@@ -32,7 +32,7 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
     optional = True
 
     def process(self, instance):
-        self.log.info("Extracting pointcache..")
+        self.log.debug("Extracting pointcache..")
 
         geo = cmds.listRelatives(
             instance.data.get("geometry"), allDescendents=True, fullPath=True)
diff --git a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
index 1d0c5e88c3..9a46c31177 100644
--- a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
+++ b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py
@@ -104,7 +104,7 @@ class ExtractYetiRig(publish.Extractor):
             instance.context.data["project_settings"]["maya"]["ext_mapping"]
         )
         if ext_mapping:
-            self.log.info("Looking in settings for scene type ...")
+            self.log.debug("Looking in settings for scene type ...")
             # use extension mapping for first family found
             for family in self.families:
                 try:
diff --git a/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml b/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml
new file mode 100644
index 0000000000..40169b28f9
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/help/validate_maya_units.xml
@@ -0,0 +1,21 @@
+
+
+
+Maya scene units
+## Invalid maya scene units
+
+Detected invalid maya scene units:
+
+{issues}
+
+
+
+### How to repair?
+
+You can automatically repair the scene units by clicking the Repair action on
+the right.
+
+After that restart publishing with Reload button.
+
+
+
diff --git a/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml b/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml
new file mode 100644
index 0000000000..2ef4bc95c2
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/help/validate_node_ids.xml
@@ -0,0 +1,29 @@
+
+
+
+Missing node ids
+## Nodes found with missing `cbId`
+
+Nodes were detected in your scene which are missing required `cbId`
+attributes for identification further in the pipeline.
+
+### How to repair?
+
+The node ids are auto-generated on scene save, and thus the easiest fix is to
+save your scene again.
+
+After that restart publishing with Reload button.
+
+
+### Invalid nodes
+
+{nodes}
+
+
+### How could this happen?
+
+This often happens if you've generated new nodes but haven't saved your scene
+after creating the new nodes.
+
+
+
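The `{issues}` and `{nodes}` placeholders in the two help files above are filled at validation time: `PublishXmlValidationError` receives a `formatting_data` dict and formats it into the help text (see the `validate_maya_units.py` diff further below). A reduced sketch of that substitution using plain `str.format`, with the description shortened to a single line and the issue strings invented for illustration:

```python
# Sketch: how the {issues} placeholder in the help XML above is filled.
description = "Detected invalid maya scene units:\n\n{issues}"
issues = "\n".join([
    "- Linear units must be cm. Your scene is set to m",
    "- FPS must be 25.0. Your scene is set to 24.0",
])
print(description.format(issues=issues))
```
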
diff --git a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
index 1a6463fb9d..8e219eae85 100644
--- a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
+++ b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
@@ -265,6 +265,8 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
 
         context = instance.context
         workspace = context.data["workspaceDir"]
+        project_name = context.data["projectName"]
+        asset_name = context.data["asset"]
 
         filepath = None
 
@@ -288,7 +290,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
         comment = context.data.get("comment", "")
         scene = os.path.splitext(filename)[0]
         dirname = os.path.join(workspace, "renders")
-        renderlayer = instance.data['setMembers']       # rs_beauty
+        renderlayer = instance.data['renderlayer']      # rs_beauty
         renderlayer_name = instance.data['subset']      # beauty
         renderglobals = instance.data["renderGlobals"]
         # legacy_layers = renderlayer_globals["UseLegacyRenderLayers"]
@@ -371,8 +373,8 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
             "jobId": -1,
             "startOn": 0,
             "parentId": -1,
-            "project": os.environ.get('AVALON_PROJECT') or scene,
-            "shot": os.environ.get('AVALON_ASSET') or scene,
+            "project": project_name or scene,
+            "shot": asset_name or scene,
             "camera": instance.data.get("cameras")[0],
             "dependMode": 0,
             "packetSize": 4,
@@ -546,3 +548,9 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
                 "%f=%d was rounded off to nearest integer"
                 % (value, int(value))
             )
+
+
+# TODO: Remove hack to avoid this plug-in in new publisher
+# This plug-in should actually be in dedicated module
+if not os.environ.get("MUSTER_REST_URL"):
+    del MayaSubmitMuster
diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_content.py b/openpype/hosts/maya/plugins/publish/validate_animation_content.py
index 9dbb09a046..99acdc7b8f 100644
--- a/openpype/hosts/maya/plugins/publish/validate_animation_content.py
+++ b/openpype/hosts/maya/plugins/publish/validate_animation_content.py
@@ -1,6 +1,9 @@
 import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    PublishValidationError,
+    ValidateContentsOrder
+)
 
 class ValidateAnimationContent(pyblish.api.InstancePlugin):
@@ -47,4 +50,5 @@ class ValidateAnimationContent(pyblish.api.InstancePlugin):
     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Animation content is invalid. See log.")
+            raise PublishValidationError(
+                "Animation content is invalid. See log.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
index 5a527031be..6f5f03ab39 100644
--- a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
@@ -6,6 +6,7 @@ from openpype.hosts.maya.api import lib
 from openpype.pipeline.publish import (
     RepairAction,
     ValidateContentsOrder,
+    PublishValidationError
 )
 
@@ -35,8 +36,10 @@ class ValidateOutRelatedNodeIds(pyblish.api.InstancePlugin):
         # if a deformer has been created on the shape
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Nodes found with mismatching "
-                               "IDs: {0}".format(invalid))
+            # TODO: Message formatting can be improved
+            raise PublishValidationError("Nodes found with mismatching "
+                                         "IDs: {0}".format(invalid),
+                                         title="Invalid node ids")
 
     @classmethod
     def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py b/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py
index 6975d583bb..49913fa42b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py
+++ b/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py
@@ -23,11 +23,13 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
 
     def process(self, instance):
         # we cannot ask this until user open render settings as
-        # `defaultArnoldRenderOptions` doesn't exists
+        # `defaultArnoldRenderOptions` doesn't exist
+        errors = []
+
         try:
-            relative_texture = cmds.getAttr(
+            absolute_texture = cmds.getAttr(
                 "defaultArnoldRenderOptions.absolute_texture_paths")
-            relative_procedural = cmds.getAttr(
+            absolute_procedural = cmds.getAttr(
                 "defaultArnoldRenderOptions.absolute_procedural_paths")
             texture_search_path = cmds.getAttr(
                 "defaultArnoldRenderOptions.tspath"
@@ -42,10 +44,11 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
         scene_dir, scene_basename = os.path.split(cmds.file(q=True, loc=True))
         scene_name, _ = os.path.splitext(scene_basename)
-        assert self.maya_is_true(relative_texture) is not True, \
-            ("Texture path is set to be absolute")
-        assert self.maya_is_true(relative_procedural) is not True, \
-            ("Procedural path is set to be absolute")
+
+        if self.maya_is_true(absolute_texture):
+            errors.append("Texture path is set to be absolute")
+        if self.maya_is_true(absolute_procedural):
+            errors.append("Procedural path is set to be absolute")
 
         anatomy = instance.context.data["anatomy"]
 
@@ -57,15 +60,20 @@ class ValidateAssRelativePaths(pyblish.api.InstancePlugin):
         for k in keys:
             paths.append("[{}]".format(k))
 
-        self.log.info("discovered roots: {}".format(":".join(paths)))
+        self.log.debug("discovered roots: {}".format(":".join(paths)))
 
-        assert ":".join(paths) in texture_search_path, (
-            "Project roots are not in texture_search_path"
-        )
+        if ":".join(paths) not in texture_search_path:
+            errors.append((
+                "Project roots {} are not in texture_search_path: {}"
+            ).format(paths, texture_search_path))
 
-        assert ":".join(paths) in procedural_search_path, (
-            "Project roots are not in procedural_search_path"
-        )
+        if ":".join(paths) not in procedural_search_path:
+            errors.append((
+                "Project roots {} are not in procedural_search_path: {}"
+            ).format(paths, procedural_search_path))
+
+        if errors:
+            raise PublishValidationError("\n".join(errors))
 
     @classmethod
     def repair(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_name.py b/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
index 02464b2302..bcc40760e0 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_name.py
@@ -1,6 +1,9 @@
 import pyblish.api
 import maya.cmds as cmds
 import openpype.hosts.maya.api.action
+from openpype.pipeline.publish import (
+    PublishValidationError
+)
 
 class ValidateAssemblyName(pyblish.api.InstancePlugin):
@@ -47,5 +50,5 @@ class ValidateAssemblyName(pyblish.api.InstancePlugin):
         invalid = self.get_invalid(instance)
 
         if invalid:
-            raise RuntimeError("Found {} invalid named assembly "
+            raise PublishValidationError("Found {} invalid named assembly "
                                "items".format(len(invalid)))
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
index 229da63c42..41ef78aab4 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
@@ -1,6 +1,8 @@
 import pyblish.api
 import openpype.hosts.maya.api.action
-
+from openpype.pipeline.publish import (
+    PublishValidationError
+)
 
 class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin):
     """Ensure namespaces are not nested
@@ -23,7 +25,7 @@ class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin):
 
         self.log.info("Checking namespace for %s" % instance.name)
         if self.get_invalid(instance):
-            raise RuntimeError("Nested namespaces found")
+            raise PublishValidationError("Nested namespaces found")
 
     @classmethod
     def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
index d1bca4091b..a24455ebaa 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
@@ -1,9 +1,8 @@
 import pyblish.api
-
 from maya import cmds
 
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import RepairAction
+from openpype.pipeline.publish import PublishValidationError, RepairAction
 
 class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
@@ -38,8 +37,9 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Found {} invalid transforms of assembly "
-                               "items".format(len(invalid)))
+            raise PublishValidationError(
+                ("Found {} invalid transforms of assembly "
+                 "items").format(len(invalid)))
 
     @classmethod
     def get_invalid(cls, instance):
@@ -90,6 +90,7 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
 
         """
         from qtpy import QtWidgets
+
         from openpype.hosts.maya.api import lib
 
         # Store namespace in variable, cosmetics thingy
diff --git a/openpype/hosts/maya/plugins/publish/validate_attributes.py b/openpype/hosts/maya/plugins/publish/validate_attributes.py
index 7ebd9d7d03..c76d979fbf 100644
--- a/openpype/hosts/maya/plugins/publish/validate_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_attributes.py
@@ -1,17 +1,16 @@
 from collections import defaultdict
 
-from maya import cmds
-
 import pyblish.api
+from maya import cmds
 
 from openpype.hosts.maya.api.lib import set_attribute
 from openpype.pipeline.publish import (
-    RepairAction,
-    ValidateContentsOrder,
-)
+    OptionalPyblishPluginMixin, PublishValidationError, RepairAction,
+    ValidateContentsOrder)
 
-class ValidateAttributes(pyblish.api.InstancePlugin):
+class ValidateAttributes(pyblish.api.InstancePlugin,
+                         OptionalPyblishPluginMixin):
     """Ensure attributes are consistent.
 
     Attributes to validate and their values comes from the
@@ -32,13 +31,16 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
     attributes = None
 
     def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
         # Check for preset existence.
         if not self.attributes:
             return
 
         invalid = self.get_invalid(instance, compute=True)
         if invalid:
-            raise RuntimeError(
+            raise PublishValidationError(
                 "Found attributes with invalid values: {}".format(invalid)
             )
diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
index 13ea53a357..e5745612e9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
@@ -1,8 +1,9 @@
+import pyblish.api
 from maya import cmds
 
-import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    PublishValidationError, ValidateContentsOrder)
 
 class ValidateCameraAttributes(pyblish.api.InstancePlugin):
@@ -65,4 +66,5 @@ class ValidateCameraAttributes(pyblish.api.InstancePlugin):
         invalid = self.get_invalid(instance)
 
         if invalid:
-            raise RuntimeError("Invalid camera attributes: %s" % invalid)
+            raise PublishValidationError(
+                "Invalid camera attributes: {}".format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
index 1ce8026fc2..767ac55718 100644
--- a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
@@ -1,8 +1,9 @@
+import pyblish.api
 from maya import cmds
 
-import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    PublishValidationError, ValidateContentsOrder)
 
 class ValidateCameraContents(pyblish.api.InstancePlugin):
@@ -34,7 +35,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
         cameras = cmds.ls(shapes, type='camera', long=True)
         if len(cameras) != 1:
             cls.log.error("Camera instance must have a single camera. "
-                           "Found {0}: {1}".format(len(cameras), cameras))
+                          "Found {0}: {1}".format(len(cameras), cameras))
             invalid.extend(cameras)
 
             # We need to check this edge case because returning an extended
@@ -48,10 +49,12 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
                                "members: {}".format(members))
                 return members
 
-            raise RuntimeError("No cameras found in empty instance.")
+            raise PublishValidationError(
+                "No cameras found in empty instance.")
 
         if not cls.validate_shapes:
-            cls.log.info("not validating shapes in the content")
+            cls.log.debug("Not validating shapes in the camera content"
+                          " because 'validate shapes' is disabled")
             return invalid
 
         # non-camera shapes
@@ -60,13 +63,10 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
         if shapes:
             shapes = list(shapes)
             cls.log.error("Camera instance should only contain camera "
-                           "shapes. Found: {0}".format(shapes))
+                          "shapes. Found: {0}".format(shapes))
 
             invalid.extend(shapes)
-
-        invalid = list(set(invalid))
-
         return invalid
 
     def process(self, instance):
@@ -74,5 +74,5 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
         invalid = self.get_invalid(instance)
 
         if invalid:
-            raise RuntimeError("Invalid camera contents: "
+            raise PublishValidationError("Invalid camera contents: "
                                "{0}".format(invalid))
diff --git a/openpype/hosts/maya/plugins/publish/validate_color_sets.py b/openpype/hosts/maya/plugins/publish/validate_color_sets.py
index 7ce3cca61a..766124cd9e 100644
--- a/openpype/hosts/maya/plugins/publish/validate_color_sets.py
+++ b/openpype/hosts/maya/plugins/publish/validate_color_sets.py
@@ -5,10 +5,12 @@ import openpype.hosts.maya.api.action
 from openpype.pipeline.publish import (
     RepairAction,
     ValidateMeshOrder,
+    OptionalPyblishPluginMixin
 )
 
-class ValidateColorSets(pyblish.api.Validator):
+class ValidateColorSets(pyblish.api.Validator,
+                        OptionalPyblishPluginMixin):
     """Validate all meshes in the instance have unlocked normals
 
     These can be removed manually through:
@@ -40,6 +42,8 @@ class ValidateColorSets(pyblish.api.Validator):
 
     def process(self, instance):
         """Raise invalid when any of the meshes have ColorSets"""
+        if not self.is_active(instance.data):
+            return
 
         invalid = self.get_invalid(instance)
diff --git a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
index 210ee4127c..24da091246 100644
--- a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
+++ b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
@@ -1,13 +1,14 @@
-from maya import cmds
-
 import pyblish.api
+from maya import cmds
 
 import openpype.hosts.maya.api.action
 from openpype.hosts.maya.api.lib import maintained_selection
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder)
 
-class ValidateCycleError(pyblish.api.InstancePlugin):
+class ValidateCycleError(pyblish.api.InstancePlugin,
+                         OptionalPyblishPluginMixin):
     """Validate nodes produce no cycle errors."""
 
     order = ValidateContentsOrder + 0.05
@@ -18,9 +19,13 @@ class ValidateCycleError(pyblish.api.InstancePlugin):
     optional = True
 
     def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Nodes produce a cycle error: %s" % invalid)
+            raise PublishValidationError(
+                "Nodes produce a cycle error: {}".format(invalid))
 
     @classmethod
     def get_invalid(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_frame_range.py b/openpype/hosts/maya/plugins/publish/validate_frame_range.py
index ccb351c880..c6184ed348 100644
--- a/openpype/hosts/maya/plugins/publish/validate_frame_range.py
+++ b/openpype/hosts/maya/plugins/publish/validate_frame_range.py
@@ -4,7 +4,8 @@ from maya import cmds
 from openpype.pipeline.publish import (
     RepairAction,
     ValidateContentsOrder,
-    PublishValidationError
+    PublishValidationError,
+    OptionalPyblishPluginMixin
 )
 from openpype.hosts.maya.api.lib_rendersetup import (
     get_attr_overrides,
@@ -13,7 +14,8 @@ from openpype.hosts.maya.api.lib_rendersetup import (
 from maya.app.renderSetup.model.override import AbsOverride
 
-class ValidateFrameRange(pyblish.api.InstancePlugin):
+class ValidateFrameRange(pyblish.api.InstancePlugin,
+                         OptionalPyblishPluginMixin):
     """Validates the frame ranges.
 
     This is an optional validator checking if the frame range on instance
@@ -40,6 +42,9 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
     exclude_families = []
 
     def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
         context = instance.context
         if instance.data.get("tileRendering"):
             self.log.info((
@@ -102,10 +107,12 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
                     "({}).".format(label.title(), values[1], values[0])
                 )
 
-        for e in errors:
-            self.log.error(e)
+        if errors:
+            report = "Frame range settings are incorrect.\n\n"
+            for error in errors:
+                report += "- {}\n\n".format(error)
 
-        assert len(errors) == 0, ("Frame range settings are incorrect")
+            raise PublishValidationError(report, title="Frame Range incorrect")
 
     @classmethod
     def repair(cls, instance):
@@ -150,7 +157,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
     def repair_renderlayer(cls, instance):
         """Apply frame range in render settings"""
 
-        layer = instance.data["setMembers"]
+        layer = instance.data["renderlayer"]
         context = instance.context
 
         start_attr = "defaultRenderGlobals.startFrame"
diff --git a/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py b/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py
index 53c2cf548a..da065fcf94 100644
--- a/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py
+++ b/openpype/hosts/maya/plugins/publish/validate_glsl_plugin.py
@@ -4,7 +4,8 @@ from maya import cmds
 import pyblish.api
 from openpype.pipeline.publish import (
     RepairAction,
-    ValidateContentsOrder
+    ValidateContentsOrder,
+    PublishValidationError
 )
 
@@ -21,7 +22,7 @@ class ValidateGLSLPlugin(pyblish.api.InstancePlugin):
 
     def process(self, instance):
         if not cmds.pluginInfo("maya2glTF", query=True, loaded=True):
-            raise RuntimeError("maya2glTF is not loaded")
+            raise PublishValidationError("maya2glTF is not loaded")
 
     @classmethod
     def repair(cls, instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
index 63849cfd12..7234f5a025 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
@@ -1,6 +1,9 @@
 import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    PublishValidationError
+)
 
 class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
@@ -14,18 +17,23 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
     @classmethod
     def get_invalid(cls, instance):
         invalid = list()
-        if not instance.data["setMembers"]:
+        if not instance.data.get("setMembers"):
             objectset_name = instance.data['name']
             invalid.append(objectset_name)
 
         return invalid
 
     def process(self, instance):
-        # Allow renderlayer and workfile to be empty
-        skip_families = ["workfile", "renderlayer", "rendersetup"]
+        # Allow renderlayer, rendersetup and workfile to be empty
+        skip_families = {"workfile", "renderlayer", "rendersetup"}
         if instance.data.get("family") in skip_families:
             return
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Empty instances found: {0}".format(invalid))
+            # Invalid will always be a single entry, we log the single name
+            name = invalid[0]
+            raise PublishValidationError(
+                title="Empty instance",
+                message="Instance '{0}' is empty".format(name)
+            )
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py
index 41bb414829..b257add7e8 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py
@@ -3,7 +3,9 @@ from __future__ import absolute_import
 
 import pyblish.api
 
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder, PublishValidationError
+)
 from maya import cmds
 
@@ -108,4 +110,5 @@ class ValidateInstanceInContext(pyblish.api.InstancePlugin):
         asset = instance.data.get("asset")
         context_asset = instance.context.data["assetEntity"]["name"]
         msg = "{} has asset {}".format(instance.name, asset)
-        assert asset == context_asset, msg
+        if asset != context_asset:
+            raise PublishValidationError(msg)
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_subset.py b/openpype/hosts/maya/plugins/publish/validate_instance_subset.py
index bb3dde761c..69e16efe57 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instance_subset.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instance_subset.py
@@ -2,7 +2,10 @@ import pyblish.api
 import string
 
 import six
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    PublishValidationError
+)
 
 # Allow only characters, numbers and underscore
 allowed = set(string.ascii_lowercase +
@@ -28,7 +31,7 @@ class ValidateSubsetName(pyblish.api.InstancePlugin):
 
         # Ensure subset data
         if subset is None:
-            raise RuntimeError("Instance is missing subset "
+            raise PublishValidationError("Instance is missing subset "
                                "name: {0}".format(subset))
 
         if not isinstance(subset, six.string_types):
diff --git a/openpype/hosts/maya/plugins/publish/validate_instancer_content.py b/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
index 32abe91f48..2f14693ef2 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instancer_content.py
@@ -1,7 +1,8 @@
 import maya.cmds as cmds
-
 import pyblish.api
+
 from openpype.hosts.maya.api import lib
+from openpype.pipeline.publish import PublishValidationError
 
 class ValidateInstancerContent(pyblish.api.InstancePlugin):
@@ -52,7 +53,8 @@ class ValidateInstancerContent(pyblish.api.InstancePlugin):
             error = True
 
         if error:
-            raise RuntimeError("Instancer Content is invalid. See log.")
+            raise PublishValidationError(
+                "Instancer Content is invalid. See log.")
 
     def check_geometry_hidden(self, export_members):
diff --git a/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py b/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
index 3514cf0a98..fcfcdce8b6 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instancer_frame_ranges.py
@@ -1,7 +1,10 @@
 import os
 import re
+
 import pyblish.api
 
+from openpype.pipeline.publish import PublishValidationError
+
 VERBOSE = False
 
@@ -164,5 +167,6 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin):
 
         if invalid:
             self.log.error("Invalid nodes: {0}".format(invalid))
-            raise RuntimeError("Invalid particle caches in instance. "
-                               "See logs for details.")
+            raise PublishValidationError(
+                ("Invalid particle caches in instance. "
+                 "See logs for details."))
diff --git a/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py b/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py
index 624074aaf9..eac13053db 100644
--- a/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py
+++ b/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py
@@ -2,7 +2,10 @@ import os
 import pyblish.api
 import maya.cmds as cmds
 
-from openpype.pipeline.publish import RepairContextAction
+from openpype.pipeline.publish import (
+    RepairContextAction,
+    PublishValidationError
+)
 
 class ValidateLoadedPlugin(pyblish.api.ContextPlugin):
@@ -35,7 +38,7 @@ class ValidateLoadedPlugin(pyblish.api.ContextPlugin):
 
         invalid = self.get_invalid(context)
         if invalid:
-            raise RuntimeError(
+            raise PublishValidationError(
                 "Found forbidden plugin name: {}".format(", ".join(invalid))
             )
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_contents.py b/openpype/hosts/maya/plugins/publish/validate_look_contents.py
index 2d38099f0f..433d997840 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_contents.py
@@ -1,6 +1,11 @@
 import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    PublishValidationError,
+    ValidateContentsOrder
+)
+
+
 from maya import cmds  # noqa
 
@@ -28,19 +33,16 @@ class ValidateLookContents(pyblish.api.InstancePlugin):
         """Process all the nodes in the instance"""
 
         if not instance[:]:
-            raise RuntimeError("Instance is empty")
+            raise PublishValidationError("Instance is empty")
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("'{}' has invalid look "
+            raise PublishValidationError("'{}' has invalid look "
                                "content".format(instance.name))
 
     @classmethod
     def get_invalid(cls, instance):
         """Get all invalid nodes"""
 
-        cls.log.info("Validating look content for "
-                     "'{}'".format(instance.name))
-
         # check if data has the right attributes and content
         attributes = cls.validate_lookdata_attributes(instance)
         # check the looks for ID
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py b/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py
index 20f561a892..0109f6ebd5 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py
@@ -1,7 +1,10 @@
 from maya import cmds
 
 import pyblish.api
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    PublishValidationError
+)
 
 class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin):
@@ -56,4 +59,4 @@ class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin):
                 invalid.append(plug)
 
         if invalid:
-            raise RuntimeError("Invalid connections.")
+            raise PublishValidationError("Invalid connections.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
index a266a0fd74..5075d4050d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
@@ -6,6 +6,7 @@ import openpype.hosts.maya.api.action
 from openpype.pipeline.publish import (
     RepairAction,
     ValidateContentsOrder,
+    PublishValidationError
 )
 
@@ -30,7 +31,7 @@ class ValidateLookIdReferenceEdits(pyblish.api.InstancePlugin):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Invalid nodes %s" % (invalid,))
+            raise PublishValidationError("Invalid nodes %s" % (invalid,))
 
     @staticmethod
     def get_invalid(instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
index f81e511ff3..4e01b55249 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
@@ -1,8 +1,10 @@
 from collections import defaultdict
 
 import pyblish.api
+
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidatePipelineOrder
+from openpype.pipeline.publish import (
+    PublishValidationError, ValidatePipelineOrder)
 
 class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
@@ -33,8 +35,9 @@ class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Members found without non-unique IDs: "
-                               "{0}".format(invalid))
+            raise PublishValidationError(
+                ("Members found without non-unique IDs: "
+                 "{0}").format(invalid))
 
     @staticmethod
     def get_invalid(instance):
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
index db6aadae8d..231331411b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
@@ -2,7 +2,10 @@ from maya import cmds
 
 import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    PublishValidationError
+)
 
 class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin):
@@ -37,7 +40,7 @@ class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Invalid node relationships found: "
+            raise PublishValidationError("Invalid node relationships found: "
                                "{0}".format(invalid))
 
     @classmethod
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_sets.py b/openpype/hosts/maya/plugins/publish/validate_look_sets.py
index 8434ddde04..657bab0479 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_sets.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_sets.py
@@ -1,7 +1,10 @@
 import pyblish.api
 import openpype.hosts.maya.api.action
 from openpype.hosts.maya.api import lib
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    PublishValidationError
+)
 
 class ValidateLookSets(pyblish.api.InstancePlugin):
@@ -48,16 +51,13 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("'{}' has invalid look "
+            raise PublishValidationError("'{}' has invalid look "
                                "content".format(instance.name))
 
     @classmethod
     def get_invalid(cls, instance):
         """Get all invalid nodes"""
 
-        cls.log.info("Validating look content for "
-                     "'{}'".format(instance.name))
-
         relationships = instance.data["lookData"]["relationships"]
         invalid = []
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
index 9b57b06ee7..dbe7a70e6a 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
@@ -5,6 +5,7 @@ import openpype.hosts.maya.api.action
 from openpype.pipeline.publish import (
     RepairAction,
     ValidateContentsOrder,
+    PublishValidationError
 )
 
@@ -27,7 +28,7 @@ class ValidateShadingEngine(pyblish.api.InstancePlugin):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError(
+            raise PublishValidationError(
                 "Found shading engines with incorrect naming:"
                 "\n{}".format(invalid)
             )
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
index 788e440d12..acd761a944 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
@@ -1,8 +1,9 @@
+import pyblish.api
 from maya import cmds
 
-import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    PublishValidationError, ValidateContentsOrder)
 
 class ValidateSingleShader(pyblish.api.InstancePlugin):
@@ -23,9 +24,9 @@ class ValidateSingleShader(pyblish.api.InstancePlugin):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Found shapes which don't have a single shader "
-                               "assigned: "
-                               "\n{}".format(invalid))
+            raise PublishValidationError(
+                ("Found shapes which don't have a single shader "
+                 "assigned:\n{}").format(invalid))
 
     @classmethod
     def get_invalid(cls, instance):
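The `validate_maya_units.py` diff that follows introduces an `apply_settings` classmethod, the hook OpenPype calls with the project settings so a plug-in can override its class defaults before publishing. A reduced sketch under the same settings layout; the `SomeValidator` class here is a stand-in, not the real plug-in:

```python
# Sketch of the apply_settings() hook added below to ValidateMayaUnits.
# The nested dict layout mirrors the diff; values are illustrative.
class SomeValidator(object):
    validate_fps = True
    linear_units = "cm"

    @classmethod
    def apply_settings(cls, project_settings, system_settings):
        settings = project_settings["maya"]["publish"]["ValidateMayaUnits"]
        # .get() keeps the class default when a key is absent from settings.
        cls.validate_fps = settings.get("validate_fps", cls.validate_fps)
        cls.linear_units = settings.get("linear_units", cls.linear_units)


SomeValidator.apply_settings(
    {"maya": {"publish": {"ValidateMayaUnits": {"linear_units": "m"}}}},
    system_settings={})
print(SomeValidator.linear_units)  # m
```
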
diff --git a/openpype/hosts/maya/plugins/publish/validate_maya_units.py b/openpype/hosts/maya/plugins/publish/validate_maya_units.py
index 011df0846c..1d5619795f 100644
--- a/openpype/hosts/maya/plugins/publish/validate_maya_units.py
+++ b/openpype/hosts/maya/plugins/publish/validate_maya_units.py
@@ -7,6 +7,7 @@ from openpype.pipeline.context_tools import get_current_project_asset
 from openpype.pipeline.publish import (
     RepairContextAction,
     ValidateSceneOrder,
+    PublishXmlValidationError
 )
 
@@ -26,6 +27,30 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
 
     validate_fps = True
 
+    nice_message_format = (
+        "- {setting} must be {required_value}. "
+        "Your scene is set to {current_value}"
+    )
+    log_message_format = (
+        "Maya scene {setting} must be '{required_value}'. "
+        "Current value is '{current_value}'."
+    )
+
+    @classmethod
+    def apply_settings(cls, project_settings, system_settings):
+        """Apply project settings to creator"""
+        settings = (
+            project_settings["maya"]["publish"]["ValidateMayaUnits"]
+        )
+
+        cls.validate_linear_units = settings.get("validate_linear_units",
+                                                 cls.validate_linear_units)
+        cls.linear_units = settings.get("linear_units", cls.linear_units)
+        cls.validate_angular_units = settings.get("validate_angular_units",
+                                                  cls.validate_angular_units)
+        cls.angular_units = settings.get("angular_units", cls.angular_units)
+        cls.validate_fps = settings.get("validate_fps", cls.validate_fps)
+
     def process(self, context):
 
         # Collected units
@@ -34,15 +59,14 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
 
         fps = context.data.get('fps')
 
-        # TODO replace query with using 'context.data["assetEntity"]'
-        asset_doc = get_current_project_asset()
+        asset_doc = context.data["assetEntity"]
         asset_fps = mayalib.convert_to_maya_fps(asset_doc["data"]["fps"])
 
         self.log.info('Units (linear): {0}'.format(linearunits))
         self.log.info('Units (angular): {0}'.format(angularunits))
         self.log.info('Units (time): {0} FPS'.format(fps))
 
-        valid = True
+        invalid = []
 
         # Check if units are correct
         if (
@@ -50,26 +74,43 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
             and linearunits
             and linearunits != self.linear_units
         ):
-            self.log.error("Scene linear units must be {}".format(
-                self.linear_units))
-            valid = False
+            invalid.append({
+                "setting": "Linear units",
+                "required_value": self.linear_units,
+                "current_value": linearunits
+            })
 
         if (
             self.validate_angular_units
             and angularunits
             and angularunits != self.angular_units
         ):
-            self.log.error("Scene angular units must be {}".format(
-                self.angular_units))
-            valid = False
+            invalid.append({
+                "setting": "Angular units",
+                "required_value": self.angular_units,
+                "current_value": angularunits
+            })
 
         if self.validate_fps and fps and fps != asset_fps:
-            self.log.error(
-                "Scene must be {} FPS (now is {})".format(asset_fps, fps))
-            valid = False
+            invalid.append({
+                "setting": "FPS",
+                "required_value": asset_fps,
+                "current_value": fps
+            })
 
-        if not valid:
-            raise RuntimeError("Invalid units set.")
+        if invalid:
+
+            issues = []
+            for data in invalid:
+                self.log.error(self.log_message_format.format(**data))
+                issues.append(self.nice_message_format.format(**data))
+            issues = "\n".join(issues)
+
+            raise PublishXmlValidationError(
+                plugin=self,
+                message="Invalid maya scene units",
+                formatting_data={"issues": issues}
+            )
 
     @classmethod
     def repair(cls, context):
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
index a580a1c787..55624726ea 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
@@ -10,12 +10,15 @@ from openpype.hosts.maya.api.lib import (
     set_attribute
 )
 from openpype.pipeline.publish import (
+    OptionalPyblishPluginMixin,
     RepairAction,
     ValidateMeshOrder,
+    PublishValidationError
 )
 
-class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
+class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin,
+                                   OptionalPyblishPluginMixin):
     """Validate the mesh has default Arnold attributes.
 
     It compares all Arnold attributes from a default mesh. This is to ensure
@@ -30,12 +33,14 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
         openpype.hosts.maya.api.action.SelectInvalidAction,
         RepairAction
     ]
+    optional = True
 
-    if cmds.getAttr(
-            "defaultRenderGlobals.currentRenderer").lower() == "arnold":
-        active = True
-    else:
-        active = False
+
+    @classmethod
+    def apply_settings(cls, project_settings, system_settings):
+        # todo: this should not be done this way
+        attr = "defaultRenderGlobals.currentRenderer"
+        cls.active = cmds.getAttr(attr).lower() == "arnold"
 
     @classmethod
     def get_default_attributes(cls):
@@ -50,7 +55,7 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
             plug = "{}.{}".format(mesh, attr)
             try:
                 defaults[attr] = get_attribute(plug)
-            except RuntimeError:
+            except PublishValidationError:
                 cls.log.debug("Ignoring arnold attribute: {}".format(attr))
 
         return defaults
@@ -101,10 +106,12 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
             )
 
     def process(self, instance):
+        if not self.is_active(instance.data):
+            return
 
         invalid = self.get_invalid_attributes(instance, compute=True)
         if invalid:
-            raise RuntimeError(
+            raise PublishValidationError(
                 "Non-default Arnold attributes found in instance:"
                 " {0}".format(invalid)
             )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py b/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py
index 848d66c4ae..c3264f3d98 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_empty.py
@@ -4,7 +4,8 @@ import pyblish.api
 import openpype.hosts.maya.api.action
 from openpype.pipeline.publish import (
     RepairAction,
-    ValidateMeshOrder
+    ValidateMeshOrder,
+    PublishValidationError
 )
 
@@ -49,6 +50,6 @@ class ValidateMeshEmpty(pyblish.api.InstancePlugin):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError(
+            raise PublishValidationError(
                 "Meshes found in instance without any vertices: %s" % invalid
             )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
index b7836b3e92..c382d1b983 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
@@ -2,11 +2,16 @@ from maya import cmds
 
 import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+    ValidateMeshOrder,
+    OptionalPyblishPluginMixin,
+    PublishValidationError
+)
 from openpype.hosts.maya.api.lib import len_flattened
 
-class ValidateMeshHasUVs(pyblish.api.InstancePlugin):
+class ValidateMeshHasUVs(pyblish.api.InstancePlugin,
+                         OptionalPyblishPluginMixin):
     """Validate the current mesh has UVs.
 
     It validates whether the current UV set has non-zero UVs and
@@ -66,8 +71,19 @@ class ValidateMeshHasUVs(pyblish.api.InstancePlugin):
         return invalid
 
     def process(self, instance):
+        if not self.is_active(instance.data):
+            return
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Meshes found in instance without "
-                               "valid UVs: {0}".format(invalid))
+
+            names = "\n".join(
+                "- {}".format(node) for node in invalid
+            )
+
+            raise PublishValidationError(
+                title="Mesh has missing UVs",
+                message="Model meshes are required to have UVs.\n\n"
+                        "Meshes detected with invalid or missing UVs:\n"
+                        "{0}".format(names)
+            )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
index 664e2b5772..48b4d0f557 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
@@ -2,7 +2,17 @@ from maya import cmds
 
 import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+    ValidateMeshOrder,
+    PublishValidationError
+)
+
+
+def _as_report_list(values, prefix="- ", suffix="\n"):
+    """Return list as bullet point list for a report"""
+    if not values:
+        return ""
+    return prefix + (suffix + prefix).join(values)
 
 class ValidateMeshNoNegativeScale(pyblish.api.Validator):
@@ -46,5 +56,9 @@ class ValidateMeshNoNegativeScale(pyblish.api.Validator):
         invalid = self.get_invalid(instance)
 
         if invalid:
-            raise ValueError("Meshes found with negative "
-                             "scale: {0}".format(invalid))
+            raise PublishValidationError(
+                "Meshes found with negative scale:\n\n{0}".format(
+                    _as_report_list(sorted(invalid))
+                ),
+                title="Negative scale"
+            )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
index d7711da722..6fd63fb29f 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
@@ -2,7 +2,17 @@ from maya import cmds
 
 import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+    ValidateMeshOrder,
+    PublishValidationError
+)
+
+
+def _as_report_list(values, prefix="- ", suffix="\n"):
+    """Return list as bullet point list for a report"""
+    if not values:
+        return ""
+    return prefix + (suffix + prefix).join(values)
 
 class ValidateMeshNonManifold(pyblish.api.Validator):
@@ -16,7 +26,7 @@ class ValidateMeshNonManifold(pyblish.api.Validator):
     order = ValidateMeshOrder
     hosts = ['maya']
     families = ['model']
-    label = 'Mesh Non-Manifold Vertices/Edges'
+    label = 'Mesh Non-Manifold Edges/Vertices'
     actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
 
     @staticmethod
@@ -38,5 +48,9 @@ class ValidateMeshNonManifold(pyblish.api.Validator):
         invalid = self.get_invalid(instance)
 
         if invalid:
-            raise ValueError("Meshes found with non-manifold "
-                             "edges/vertices: {0}".format(invalid))
+            raise PublishValidationError(
+                "Meshes found with non-manifold edges/vertices:\n\n{0}".format(
+                    _as_report_list(sorted(invalid))
+                ),
+                title="Non-Manifold Edges/Vertices"
+            )
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
index b49ba85648..5ec6e5779b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
@@ -3,10 +3,15 @@ from maya import cmds
 
 import pyblish.api
 import openpype.hosts.maya.api.action
 from openpype.hosts.maya.api import lib
-from openpype.pipeline.publish import ValidateMeshOrder
+from openpype.pipeline.publish import (
+    ValidateMeshOrder,
+    OptionalPyblishPluginMixin,
+    PublishValidationError
+)
 
-class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin):
+class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin,
+                                    OptionalPyblishPluginMixin):
"""Validate meshes don't have edges with a zero length. Based on Maya's polyCleanup 'Edges with zero length'. @@ -65,7 +70,14 @@ class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin): def process(self, instance): """Process all meshes""" + if not self.is_active(instance.data): + return + invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Meshes found with zero " - "edge length: {0}".format(invalid)) + label = "Meshes found with zero edge length" + raise PublishValidationError( + message="{}: {}".format(label, invalid), + title=label, + description="{}:\n- ".format(label) + "\n- ".join(invalid) + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py index 1b754a9829..7855e79119 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py @@ -6,10 +6,20 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateMeshOrder, + OptionalPyblishPluginMixin, + PublishValidationError ) -class ValidateMeshNormalsUnlocked(pyblish.api.Validator): +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + +class ValidateMeshNormalsUnlocked(pyblish.api.Validator, + OptionalPyblishPluginMixin): """Validate all meshes in the instance have unlocked normals These can be unlocked manually through: @@ -47,12 +57,18 @@ class ValidateMeshNormalsUnlocked(pyblish.api.Validator): def process(self, instance): """Raise invalid when any of the meshes have locked normals""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise ValueError("Meshes found with " - "locked normals: {0}".format(invalid)) + raise PublishValidationError( + "Meshes found with locked normals:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Locked normals" + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py index 7dd66eed6c..88e1507dd3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py @@ -6,7 +6,18 @@ import maya.api.OpenMaya as om import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateMeshOrder +from openpype.pipeline.publish import ( + ValidateMeshOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) + + +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) class GetOverlappingUVs(object): @@ -225,7 +236,8 @@ class GetOverlappingUVs(object): return faces -class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin): +class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """ Validate the current mesh overlapping UVs. It validates whether the current UVs are overlapping or not. 
@@ -281,9 +293,14 @@ class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin): return instance.data.get("overlapping_faces", []) def process(self, instance): + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance, compute=True) if invalid: - raise RuntimeError( - "Meshes found with overlapping UVs: {0}".format(invalid) + raise PublishValidationError( + "Meshes found with overlapping UVs:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Overlapping UVs" ) diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py index 2a0abe975c..1db7613999 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py @@ -5,6 +5,7 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateMeshOrder, + PublishValidationError ) @@ -102,7 +103,7 @@ class ValidateMeshShaderConnections(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Shapes found with invalid shader " + raise PublishValidationError("Shapes found with invalid shader " "connections: {0}".format(invalid)) @staticmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py index faa360380e..46364735b9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py @@ -6,10 +6,12 @@ from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( RepairAction, ValidateMeshOrder, + OptionalPyblishPluginMixin ) -class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin): +class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Warn on multiple UV sets existing for each polygon mesh. On versions prior to Maya 2017 this will force no multiple uv sets because @@ -47,6 +49,8 @@ class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin): def process(self, instance): """Process all the nodes in the instance 'objectSet'""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py index 40ddb916ca..116fecbcba 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py @@ -5,10 +5,12 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateMeshOrder, + OptionalPyblishPluginMixin ) -class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin): +class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate model's default set exists and is named 'map1'. 
In Maya meshes by default have a uv set named "map1" that cannot be @@ -48,6 +50,8 @@ class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin): def process(self, instance): """Process all the nodes in the instance 'objectSet'""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py index d885158004..7167859444 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py @@ -1,12 +1,10 @@ +import pyblish.api from maya import cmds -import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ( - RepairAction, - ValidateMeshOrder, -) from openpype.hosts.maya.api.lib import len_flattened +from openpype.pipeline.publish import ( + PublishValidationError, RepairAction, ValidateMeshOrder) class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin): @@ -40,8 +38,9 @@ class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin): # This fix only works in Maya 2016 EXT2 and newer if float(cmds.about(version=True)) <= 2016.0: - raise RuntimeError("Repair not supported in Maya version below " - "2016 EXT 2") + raise PublishValidationError( + ("Repair not supported in Maya version below " + "2016 EXT 2")) invalid = cls.get_invalid(instance) for node in invalid: @@ -76,5 +75,6 @@ class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Meshes found in instance with vertices that " - "have no edges: %s" % invalid) + raise PublishValidationError( + ("Meshes found in instance with vertices that " + "have no edges: {}").format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_model_content.py b/openpype/hosts/maya/plugins/publish/validate_model_content.py index 723346a285..19373efad9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_model_content.py +++ b/openpype/hosts/maya/plugins/publish/validate_model_content.py @@ -3,7 +3,10 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateModelContent(pyblish.api.InstancePlugin): @@ -28,7 +31,7 @@ class ValidateModelContent(pyblish.api.InstancePlugin): content_instance = instance.data.get("setMembers", None) if not content_instance: cls.log.error("Instance has no nodes!") - return True + return [instance.data["name"]] # All children will be included in the extracted export so we also # validate *all* descendents of the set members and we skip any @@ -60,15 +63,10 @@ class ValidateModelContent(pyblish.api.InstancePlugin): return True # Top group - assemblies = cmds.ls(content_instance, assemblies=True, long=True) - if len(assemblies) != 1 and cls.validate_top_group: + top_parents = set([x.split("|")[1] for x in content_instance]) + if cls.validate_top_group and len(top_parents) != 1: cls.log.error("Must have exactly one top group") - return assemblies - if len(assemblies) == 0: - cls.log.warning("No top group found. " - "(Are there objects in the instance?" 
- " Or is it parented in another group?)") - return assemblies or True + return top_parents def _is_visible(node): """Return whether node is visible""" @@ -79,11 +77,11 @@ class ValidateModelContent(pyblish.api.InstancePlugin): visibility=True) # The roots must be visible (the assemblies) - for assembly in assemblies: - if not _is_visible(assembly): - cls.log.error("Invisible assembly (root node) is not " - "allowed: {0}".format(assembly)) - invalid.add(assembly) + for parent in top_parents: + if not _is_visible(parent): + cls.log.error("Invisible parent (root node) is not " + "allowed: {0}".format(parent)) + invalid.add(parent) # Ensure at least one shape is visible if not any(_is_visible(shape) for shape in shapes): @@ -97,4 +95,7 @@ class ValidateModelContent(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Model content is invalid. See log.") + raise PublishValidationError( + title="Model content is invalid", + message="See log for more details" + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_model_name.py b/openpype/hosts/maya/plugins/publish/validate_model_name.py index 0e7adc640f..6948dcf724 100644 --- a/openpype/hosts/maya/plugins/publish/validate_model_name.py +++ b/openpype/hosts/maya/plugins/publish/validate_model_name.py @@ -1,22 +1,24 @@ # -*- coding: utf-8 -*- """Validate model nodes names.""" import os -import re import platform +import re +import gridfs +import pyblish.api from maya import cmds -import pyblish.api -from openpype.pipeline import legacy_io -from openpype.pipeline.publish import ValidateContentsOrder import openpype.hosts.maya.api.action +from openpype.client.mongo import OpenPypeMongoConnection from openpype.hosts.maya.api.shader_definition_editor import ( DEFINITION_FILENAME) -from openpype.client.mongo import OpenPypeMongoConnection -import gridfs +from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ( + OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder) -class ValidateModelName(pyblish.api.InstancePlugin): +class ValidateModelName(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate name of model starts with (somename)_###_(materialID)_GEO @@ -148,7 +150,11 @@ class ValidateModelName(pyblish.api.InstancePlugin): def process(self, instance): """Plugin entry point.""" + if not self.is_active(instance.data): + return + invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Model naming is invalid. See the log.") + raise PublishValidationError( + "Model naming is invalid. 
See the log.") diff --git a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py index 04db5a061b..68784a165d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py @@ -1,14 +1,19 @@ import os import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) COLOUR_SPACES = ['sRGB', 'linear', 'auto'] MIPMAP_EXTENSIONS = ['tdl'] -class ValidateMvLookContents(pyblish.api.InstancePlugin): +class ValidateMvLookContents(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): order = ValidateContentsOrder families = ['mvLook'] hosts = ['maya'] @@ -23,6 +28,9 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin): enforced_intents = ['-', 'Final'] def process(self, instance): + if not self.is_active(instance.data): + return + intent = instance.context.data['intent']['value'] publishMipMap = instance.data["publishMipMap"] enforced = True @@ -35,7 +43,7 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin): .format(intent)) if not instance[:]: - raise RuntimeError("Instance is empty") + raise PublishValidationError("Instance is empty") invalid = set() @@ -62,12 +70,12 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin): if enforced: invalid.add(node) self.log.error(msg) - raise RuntimeError(msg) + raise PublishValidationError(msg) else: self.log.warning(msg) if invalid: - raise RuntimeError("'{}' has invalid look " + raise PublishValidationError("'{}' has invalid look " "content".format(instance.name)) def valid_file(self, fname): diff --git a/openpype/hosts/maya/plugins/publish/validate_no_animation.py b/openpype/hosts/maya/plugins/publish/validate_no_animation.py index 2e7cafe4ab..9ff189cf83 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_animation.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_animation.py @@ -2,10 +2,22 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) -class ValidateNoAnimation(pyblish.api.Validator): +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + +class ValidateNoAnimation(pyblish.api.Validator, + OptionalPyblishPluginMixin): """Ensure no keyframes on nodes in the Instance. 
Even though a Model would extract without animCurves correctly this avoids @@ -22,10 +34,17 @@ class ValidateNoAnimation(pyblish.api.Validator): actions = [openpype.hosts.maya.api.action.SelectInvalidAction] def process(self, instance): + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Keyframes found: {0}".format(invalid)) + raise PublishValidationError( + "Keyframes found on:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Keyframes on model" + ) @staticmethod def get_invalid(instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py index a4fb938d43..f0aa9261f7 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py @@ -2,7 +2,17 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) + + +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) class ValidateNoDefaultCameras(pyblish.api.InstancePlugin): @@ -28,4 +38,10 @@ class ValidateNoDefaultCameras(pyblish.api.InstancePlugin): def process(self, instance): """Process all the cameras in the instance""" invalid = self.get_invalid(instance) - assert not invalid, "Default cameras found: {0}".format(invalid) + if invalid: + raise PublishValidationError( + "Default cameras found:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Default cameras" + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py index 0ff03f9165..13eeae5859 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py @@ -4,11 +4,19 @@ import pyblish.api from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) import openpype.hosts.maya.api.action +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + def get_namespace(node_name): # ensure only node's name (not parent path) node_name = node_name.rsplit("|", 1)[-1] @@ -36,7 +44,12 @@ class ValidateNoNamespace(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise ValueError("Namespaces found: {0}".format(invalid)) + raise PublishValidationError( + "Namespaces found:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Namespaces in model" + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py index f77fc81dc1..187135fdf3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py @@ -5,9 +5,17 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as 
bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + def has_shape_children(node): # Check if any descendants allDescendents = cmds.listRelatives(node, @@ -64,7 +72,12 @@ class ValidateNoNullTransforms(pyblish.api.InstancePlugin): """Process all the transform nodes in the instance """ invalid = self.get_invalid(instance) if invalid: - raise ValueError("Empty transforms found: {0}".format(invalid)) + raise PublishValidationError( + "Empty transforms found without shapes:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Empty transforms" + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py index 2cfdc28128..6ae634be24 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py @@ -2,10 +2,22 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) -class ValidateNoUnknownNodes(pyblish.api.InstancePlugin): +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) + + +class ValidateNoUnknownNodes(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Checks to see if there are any unknown nodes in the instance. This often happens if nodes from plug-ins are used but are not available @@ -29,7 +41,14 @@ class ValidateNoUnknownNodes(pyblish.api.InstancePlugin): def process(self, instance): """Process all the nodes in the instance""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise ValueError("Unknown nodes found: {0}".format(invalid)) + raise PublishValidationError( + "Unknown nodes found:\n\n{0}".format( + _as_report_list(sorted(invalid)) + ), + title="Unknown nodes" + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py b/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py index 27e5e6a006..22fd1edc29 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_vraymesh.py @@ -1,5 +1,13 @@ import pyblish.api from maya import cmds +from openpype.pipeline.publish import PublishValidationError + + +def _as_report_list(values, prefix="- ", suffix="\n"): + """Return list as bullet point list for a report""" + if not values: + return "" + return prefix + (suffix + prefix).join(values) class ValidateNoVRayMesh(pyblish.api.InstancePlugin): @@ -11,6 +19,9 @@ class ValidateNoVRayMesh(pyblish.api.InstancePlugin): def process(self, instance): + if not cmds.pluginInfo("vrayformaya", query=True, loaded=True): + return + shapes = cmds.ls(instance, shapes=True, type="mesh") @@ -20,5 +31,11 @@ class ValidateNoVRayMesh(pyblish.api.InstancePlugin): source=True) or [] vray_meshes = cmds.ls(inputs, type='VRayMesh') if vray_meshes: - raise RuntimeError("Meshes that are VRayMeshes shouldn't " - "be pointcached: {0}".format(vray_meshes)) + raise PublishValidationError( + "Meshes that are V-Ray Proxies should not be in an Alembic " + "pointcache.\n" + "Found V-Ray proxies:\n\n{}".format( + _as_report_list(sorted(vray_meshes)) + 
), + title="V-Ray Proxies in pointcache" + ) diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_node_ids.py index 796f4c8d76..0c7d647014 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids.py @@ -1,6 +1,9 @@ import pyblish.api -from openpype.pipeline.publish import ValidatePipelineOrder +from openpype.pipeline.publish import ( + ValidatePipelineOrder, + PublishXmlValidationError +) import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -34,8 +37,14 @@ class ValidateNodeIDs(pyblish.api.InstancePlugin): # Ensure all nodes have a cbId invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes found without " - "IDs: {0}".format(invalid)) + names = "\n".join( + "- {}".format(node) for node in invalid + ) + raise PublishXmlValidationError( + plugin=self, + message="Nodes found without IDs: {}".format(invalid), + formatting_data={"nodes": names} + ) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py index 68c47f3a96..643c970463 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py @@ -1,12 +1,10 @@ +import pyblish.api from maya import cmds -import pyblish.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( - RepairAction, - ValidateContentsOrder, -) + PublishValidationError, RepairAction, ValidateContentsOrder) class ValidateNodeIdsDeformedShape(pyblish.api.InstancePlugin): @@ -35,8 +33,9 @@ class ValidateNodeIdsDeformedShape(pyblish.api.InstancePlugin): # if a deformer has been created on the shape invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Shapes found that are considered 'Deformed'" - "without object ids: {0}".format(invalid)) + raise PublishValidationError( + ("Shapes found that are considered 'Deformed'" + "without object ids: {0}").format(invalid)) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py index b2f28fd4e5..f15aa2efa8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py @@ -1,10 +1,11 @@ import pyblish.api -from openpype.client import get_assets -from openpype.pipeline import legacy_io -from openpype.pipeline.publish import ValidatePipelineOrder import openpype.hosts.maya.api.action +from openpype.client import get_assets from openpype.hosts.maya.api import lib +from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ( + PublishValidationError, ValidatePipelineOrder) class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin): @@ -29,9 +30,9 @@ class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Found asset IDs which are not related to " - "current project in instance: " - "`%s`" % instance.name) + raise PublishValidationError( + ("Found asset IDs which are not related to " + "current project in instance: `{}`").format(instance.name)) @classmethod def 
get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py index f901dc58c4..52e706fec9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py @@ -1,11 +1,13 @@ import pyblish.api -from openpype.pipeline.publish import ValidatePipelineOrder import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + OptionalPyblishPluginMixin, PublishValidationError, ValidatePipelineOrder) -class ValidateNodeIDsRelated(pyblish.api.InstancePlugin): +class ValidateNodeIDsRelated(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate nodes have a related Colorbleed Id to the instance.data[asset] """ @@ -23,12 +25,15 @@ class ValidateNodeIDsRelated(pyblish.api.InstancePlugin): def process(self, instance): """Process all nodes in instance (including hierarchy)""" + if not self.is_active(instance.data): + return + # Ensure all nodes have a cbId invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes IDs found that are not related to asset " - "'{}' : {}".format(instance.data['asset'], - invalid)) + raise PublishValidationError( + ("Nodes IDs found that are not related to asset " + "'{}' : {}").format(instance.data['asset'], invalid)) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py index f7a5e6e292..61386fc939 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py @@ -1,7 +1,10 @@ from collections import defaultdict import pyblish.api -from openpype.pipeline.publish import ValidatePipelineOrder +from openpype.pipeline.publish import ( + ValidatePipelineOrder, + PublishValidationError +) import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -29,8 +32,13 @@ class ValidateNodeIdsUnique(pyblish.api.InstancePlugin): # Ensure all nodes have a cbId invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes found with non-unique " - "asset IDs: {0}".format(invalid)) + label = "Nodes found with non-unique asset IDs" + raise PublishValidationError( + message="{}: {}".format(label, invalid), + title="Non-unique asset ids on nodes", + description="{}\n- {}".format(label, + "\n- ".join(sorted(invalid))) + ) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py index 6135c9c695..78334cd01f 100644 --- a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py @@ -4,7 +4,10 @@ from maya import cmds import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidatePluginPathAttributes(pyblish.api.InstancePlugin): @@ -48,5 +51,5 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin): """Process all directories Set as Filenames in Non-Maya Nodes""" invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Non-existent Path " + raise PublishValidationError("Non-existent Path " 
"found: {0}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py index 78bb022785..f9aa7f82d0 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py @@ -1,10 +1,8 @@ +import pyblish.api from maya import cmds -import pyblish.api from openpype.pipeline.publish import ( - RepairAction, - ValidateContentsOrder, -) + PublishValidationError, RepairAction, ValidateContentsOrder) class ValidateRenderImageRule(pyblish.api.InstancePlugin): @@ -27,12 +25,12 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin): required_images_rule = self.get_default_render_image_folder(instance) current_images_rule = cmds.workspace(fileRuleEntry="images") - assert current_images_rule == required_images_rule, ( - "Invalid workspace `images` file rule value: '{}'. " - "Must be set to: '{}'".format( - current_images_rule, required_images_rule - ) - ) + if current_images_rule != required_images_rule: + raise PublishValidationError( + ( + "Invalid workspace `images` file rule value: '{}'. " + "Must be set to: '{}'" + ).format(current_images_rule, required_images_rule)) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py index 67ece75af8..9d4410186b 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py @@ -3,7 +3,10 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError, +) class ValidateRenderNoDefaultCameras(pyblish.api.InstancePlugin): @@ -31,5 +34,7 @@ class ValidateRenderNoDefaultCameras(pyblish.api.InstancePlugin): """Process all the cameras in the instance""" invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Renderable default cameras " - "found: {0}".format(invalid)) + raise PublishValidationError( + title="Rendering default cameras", + message="Renderable default cameras " + "found: {0}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py index 77322fefd5..2c0d604175 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py @@ -5,7 +5,10 @@ from maya import cmds import openpype.hosts.maya.api.action from openpype.hosts.maya.api.lib_rendersettings import RenderSettings -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateRenderSingleCamera(pyblish.api.InstancePlugin): @@ -28,7 +31,7 @@ class ValidateRenderSingleCamera(pyblish.api.InstancePlugin): """Process all the cameras in the instance""" invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Invalid cameras for render.") + raise PublishValidationError("Invalid cameras for render.") @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py 
b/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
index 7919a6eaa1..f8de983e06 100644
--- a/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
+++ b/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
@@ -1,8 +1,9 @@
 import pyblish.api
 
-from openpype.client import get_subset_by_name
 import openpype.hosts.maya.api.action
+from openpype.client import get_subset_by_name
 from openpype.pipeline import legacy_io
+from openpype.pipeline.publish import PublishValidationError
 
 
 class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):
@@ -30,7 +31,8 @@ class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):
     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Found unregistered subsets: {}".format(invalid))
+            raise PublishValidationError(
+                "Found unregistered subsets: {}".format(invalid))
 
     def get_invalid(self, instance):
         invalid = []
diff --git a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
index 71b91b8e54..dccb4ade78 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
@@ -9,6 +9,7 @@ import pyblish.api
 from openpype.pipeline.publish import (
     RepairAction,
     ValidateContentsOrder,
+    PublishValidationError,
 )
 from openpype.hosts.maya.api import lib
 
@@ -112,17 +113,20 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
 
     def process(self, instance):
 
         invalid = self.get_invalid(instance)
-        assert invalid is False, ("Invalid render settings "
-                                  "found for '{}'!".format(instance.name))
+        if invalid:
+            raise PublishValidationError(
+                title="Invalid Render Settings",
+                message=("Invalid render settings found "
+                         "for '{}'!".format(instance.name))
+            )
 
     @classmethod
     def get_invalid(cls, instance):
 
         invalid = False
-        multipart = False
 
         renderer = instance.data['renderer']
-        layer = instance.data['setMembers']
+        layer = instance.data['renderlayer']
         cameras = instance.data.get("cameras", [])
 
         # Get the node attributes for current renderer
@@ -280,7 +284,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
                 render_value = cmds.getAttr(
                     "{}.{}".format(node, data["attribute"])
                 )
             except RuntimeError:
                 invalid = True
                 cls.log.error(
                     "Cannot get value of {}.{}".format(
diff --git a/openpype/hosts/maya/plugins/publish/validate_resources.py b/openpype/hosts/maya/plugins/publish/validate_resources.py
index b7bd47ad0a..7d894a2bef 100644
--- a/openpype/hosts/maya/plugins/publish/validate_resources.py
+++ b/openpype/hosts/maya/plugins/publish/validate_resources.py
@@ -2,7 +2,10 @@ import os
 from collections import defaultdict
 
 import pyblish.api
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    PublishValidationError
+)
 
 
 class ValidateResources(pyblish.api.InstancePlugin):
@@ -54,4 +57,4 @@ class ValidateResources(pyblish.api.InstancePlugin):
         )
 
         if invalid_resources:
-            raise RuntimeError("Invalid resources in instance.")
+            raise PublishValidationError("Invalid resources in instance.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
index 1096c95486..7b5392f8f9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py
@@ -1,7 +1,8 @@
+import pyblish.api
 from maya import 
cmds -import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + PublishValidationError, ValidateContentsOrder) class ValidateRigContents(pyblish.api.InstancePlugin): @@ -31,8 +32,9 @@ class ValidateRigContents(pyblish.api.InstancePlugin): # in the rig instance set_members = instance.data['setMembers'] if not cmds.ls(set_members, type="dagNode", long=True): - raise RuntimeError("No dag nodes in the pointcache instance. " - "(Empty instance?)") + raise PublishValidationError( + ("No dag nodes in the pointcache instance. " + "(Empty instance?)")) # Ensure contents in sets and retrieve long path for all objects output_content = cmds.sets("out_SET", query=True) or [] @@ -79,7 +81,8 @@ class ValidateRigContents(pyblish.api.InstancePlugin): error = True if error: - raise RuntimeError("Invalid rig content. See log for details.") + raise PublishValidationError( + "Invalid rig content. See log for details.") def validate_geometry(self, set_members): """Check if the out set passes the validations diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py index 1e42abdcd9..7bbf4257ab 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py @@ -5,6 +5,7 @@ import pyblish.api from openpype.pipeline.publish import ( ValidateContentsOrder, RepairAction, + PublishValidationError ) import openpype.hosts.maya.api.action from openpype.hosts.maya.api.lib import undo_chunk @@ -51,7 +52,7 @@ class ValidateRigControllers(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError('{} failed, see log ' + raise PublishValidationError('{} failed, see log ' 'information'.format(self.label)) @classmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py index 55b2ebd6d8..842c1de01b 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py @@ -5,7 +5,9 @@ import pyblish.api from openpype.pipeline.publish import ( ValidateContentsOrder, RepairAction, + PublishValidationError ) + from openpype.hosts.maya.api import lib import openpype.hosts.maya.api.action @@ -48,7 +50,7 @@ class ValidateRigControllersArnoldAttributes(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError('{} failed, see log ' + raise PublishValidationError('{} failed, see log ' 'information'.format(self.label)) @classmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py index 03ba381f8d..39f0941faa 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py @@ -7,6 +7,7 @@ from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) @@ -37,7 +38,7 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin): # if a deformer has been created on the shape invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes found with 
mismatching " + raise PublishValidationError("Nodes found with mismatching " "IDs: {0}".format(invalid)) @classmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py index cba70a21b7..cbc750bace 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py @@ -9,6 +9,7 @@ from openpype.hosts.maya.api.lib import get_id, set_id from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) @@ -34,7 +35,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance, compute=True) if invalid: - raise RuntimeError("Found nodes with mismatched IDs.") + raise PublishValidationError("Found nodes with mismatched IDs.") @classmethod def get_invalid(cls, instance, compute=False): @@ -46,7 +47,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): invalid = {} if compute: - out_set = next(x for x in instance if x.endswith("out_SET")) + out_set = next(x for x in instance if "out_SET" in x) instance_nodes = cmds.sets(out_set, query=True, nodesOnly=True) instance_nodes = cmds.ls(instance_nodes, long=True) @@ -107,7 +108,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): set_id(instance_node, id_to_set, overwrite=True) if multiple_ids_match: - raise RuntimeError( + raise PublishValidationError( "Multiple matched ids found. Please repair manually: " "{}".format(multiple_ids_match) ) diff --git a/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py b/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py index f1fa4d3c4c..b48d67e416 100644 --- a/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py +++ b/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py @@ -1,10 +1,10 @@ import os import maya.cmds as cmds - import pyblish.api -from openpype.pipeline.publish import ValidatePipelineOrder +from openpype.pipeline.publish import ( + PublishValidationError, ValidatePipelineOrder) def is_subdir(path, root_dir): @@ -37,10 +37,11 @@ class ValidateSceneSetWorkspace(pyblish.api.ContextPlugin): scene_name = cmds.file(query=True, sceneName=True) if not scene_name: - raise RuntimeError("Scene hasn't been saved. Workspace can't be " - "validated.") + raise PublishValidationError( + "Scene hasn't been saved. Workspace can't be validated.") root_dir = cmds.workspace(query=True, rootDirectory=True) if not is_subdir(scene_name, root_dir): - raise RuntimeError("Maya workspace is not set correctly.") + raise PublishValidationError( + "Maya workspace is not set correctly.") diff --git a/openpype/hosts/maya/plugins/publish/validate_shader_name.py b/openpype/hosts/maya/plugins/publish/validate_shader_name.py index 034db471da..36bb2c1fee 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shader_name.py +++ b/openpype/hosts/maya/plugins/publish/validate_shader_name.py @@ -1,13 +1,15 @@ import re -from maya import cmds import pyblish.api +from maya import cmds import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + OptionalPyblishPluginMixin, PublishValidationError, ValidateContentsOrder) -class ValidateShaderName(pyblish.api.InstancePlugin): +class ValidateShaderName(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validate shader name assigned. 
It should be _<*>_SHD @@ -23,12 +25,14 @@ class ValidateShaderName(pyblish.api.InstancePlugin): # The default connections to check def process(self, instance): + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Found shapes with invalid shader names " - "assigned: " - "\n{}".format(invalid)) + raise PublishValidationError( + ("Found shapes with invalid shader names " + "assigned:\n{}").format(invalid)) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py index 4ab669f46b..d8ad366ed8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py +++ b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py @@ -8,6 +8,7 @@ import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( ValidateContentsOrder, RepairAction, + OptionalPyblishPluginMixin ) @@ -15,7 +16,8 @@ def short_name(node): return node.rsplit("|", 1)[-1].rsplit(":", 1)[-1] -class ValidateShapeDefaultNames(pyblish.api.InstancePlugin): +class ValidateShapeDefaultNames(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): """Validates that Shape names are using Maya's default format. When you create a new polygon cube Maya will name the transform @@ -77,6 +79,8 @@ class ValidateShapeDefaultNames(pyblish.api.InstancePlugin): def process(self, instance): """Process all the shape nodes in the instance""" + if not self.is_active(instance.data): + return invalid = self.get_invalid(instance) if invalid: diff --git a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py index c0a9ddcf69..701c80a8af 100644 --- a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py +++ b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_triangulated.py @@ -7,8 +7,10 @@ from openpype.hosts.maya.api.action import ( from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, + PublishValidationError ) + from maya import cmds @@ -28,7 +30,7 @@ class ValidateSkeletalMeshTriangulated(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError( + raise PublishValidationError( "The following objects needs to be triangulated: " "{}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_step_size.py b/openpype/hosts/maya/plugins/publish/validate_step_size.py index 294458f63c..493a6ee65c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_step_size.py +++ b/openpype/hosts/maya/plugins/publish/validate_step_size.py @@ -1,7 +1,10 @@ import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + PublishValidationError, + ValidateContentsOrder +) class ValidateStepSize(pyblish.api.InstancePlugin): @@ -40,4 +43,5 @@ class ValidateStepSize(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Invalid instances found: {0}".format(invalid)) + raise PublishValidationError( + "Invalid instances found: {0}".format(invalid)) diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py index b2a83a80fb..cbc7ee9d5c 100644 --- 
a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py
+++ b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py
@@ -5,10 +5,15 @@ from maya import cmds
 
 import pyblish.api
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    OptionalPyblishPluginMixin,
+    PublishValidationError
+)
 
 
-class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
+class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin,
+                                    OptionalPyblishPluginMixin):
     """Validates transform suffix based on the type of its children shapes.
 
     Suffices must be:
@@ -47,8 +52,8 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
     def get_table_for_invalid(cls):
         ss = []
         for k, v in cls.SUFFIX_NAMING_TABLE.items():
-            ss.append(" - {}: {}".format(k, ", ".join(v)))
-        return "\n".join(ss)
+            ss.append(" - **{}**: {}".format(k, ", ".join(v)))
+        return "<br/>".join(ss)
 
     @staticmethod
     def is_valid_name(node_name, shape_type,
@@ -110,9 +115,20 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
             instance (:class:`pyblish.api.Instance`): published instance.
 
         """
+        if not self.is_active(instance.data):
+            return
+
         invalid = self.get_invalid(instance)
         if invalid:
             valid = self.get_table_for_invalid()
+
+            names = "<br/>".join(
+                " - {}".format(node) for node in invalid
+            )
+            valid = valid.replace("\n", "<br/>")
-            raise ValueError("Incorrectly named geometry "
-                             "transforms: {0}, accepted suffixes are: "
-                             "\n{1}".format(invalid, valid))
+
+            raise PublishValidationError(
+                title="Invalid naming suffix",
+                message="Valid suffixes are:<br/>{0}<br/><br/>"
+                        "Incorrectly named geometry transforms:<br/>{1}"
+                        "".format(valid, names))
diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
index abd9e00af1..906ff17ec9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
+++ b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
@@ -3,7 +3,10 @@ from maya import cmds
 import pyblish.api
 
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    PublishValidationError
+)
 
 
 class ValidateTransformZero(pyblish.api.Validator):
@@ -62,5 +65,14 @@ class ValidateTransformZero(pyblish.api.Validator):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise ValueError("Nodes found with transform "
-                             "values: {0}".format(invalid))
+
+            names = "<br/>".join(
+                " - {}".format(node) for node in invalid
+            )
+
+            raise PublishValidationError(
+                title="Transform Zero",
+                message="The model publish allows no transformations. You must"
+                        " freeze transformations to continue.<br/><br/>"
+                        "Nodes found with transform values: "
+                        "{0}".format(names))
diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
index 1425190b82..b2cb2ebda2 100644
--- a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
@@ -7,10 +7,15 @@ import pyblish.api
 
 import openpype.hosts.maya.api.action
 from openpype.pipeline import legacy_io
 from openpype.settings import get_project_settings
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    OptionalPyblishPluginMixin,
+    PublishValidationError
+)
 
 
-class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):
+class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin,
+                                   OptionalPyblishPluginMixin):
     """Validate name of Unreal Static Mesh
 
     Unreals naming convention states that staticMesh should start with `SM`
@@ -131,6 +136,9 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):
         return invalid
 
     def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
         if not self.validate_mesh and not self.validate_collision:
             self.log.info("Validation of both mesh and collision names"
                           "is disabled.")
@@ -143,4 +151,4 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError("Model naming is invalid. See log.")
+            raise PublishValidationError("Model naming is invalid. See log.")
diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py b/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py
index dd699735d9..a420dcb900 100644
--- a/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py
@@ -6,10 +6,12 @@ import pyblish.api
 from openpype.pipeline.publish import (
     ValidateContentsOrder,
     RepairAction,
+    OptionalPyblishPluginMixin
 )
 
 
-class ValidateUnrealUpAxis(pyblish.api.ContextPlugin):
+class ValidateUnrealUpAxis(pyblish.api.ContextPlugin,
+                           OptionalPyblishPluginMixin):
     """Validate if Z is set as up axis in Maya"""
 
     optional = True
@@ -21,6 +23,9 @@ class ValidateUnrealUpAxis(pyblish.api.ContextPlugin):
     actions = [RepairAction]
 
     def process(self, context):
+        if not self.is_active(context.data):
+            return
+
         assert cmds.upAxis(q=True, axis=True) == "z", (
             "Invalid axis set as up axis"
         )
diff --git a/openpype/hosts/maya/plugins/publish/validate_visible_only.py b/openpype/hosts/maya/plugins/publish/validate_visible_only.py
index faf634f258..e72782e552 100644
--- a/openpype/hosts/maya/plugins/publish/validate_visible_only.py
+++ b/openpype/hosts/maya/plugins/publish/validate_visible_only.py
@@ -2,7 +2,10 @@ import pyblish.api
 
 from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range
 import openpype.hosts.maya.api.action
-from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    ValidateContentsOrder,
+    PublishValidationError
+)
 
 
 class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin):
@@ -27,7 +30,7 @@ class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin):
         invalid = self.get_invalid(instance)
         if invalid:
             start, end = self.get_frame_range(instance)
-            raise RuntimeError("No visible nodes found in "
+            raise PublishValidationError("No visible nodes found in "
                                "frame range {}-{}.".format(start, end))
 
     @classmethod
diff --git 
a/openpype/hosts/maya/plugins/publish/validate_vray.py b/openpype/hosts/maya/plugins/publish/validate_vray.py index 045ac258a1..bef5967cc9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray.py @@ -1,7 +1,7 @@ from maya import cmds import pyblish.api -from openpype.pipeline import PublishValidationError +from openpype.pipeline.publish import PublishValidationError class ValidateVray(pyblish.api.InstancePlugin): diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py index 366f3bd10e..a71849da00 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py @@ -1,11 +1,9 @@ import pyblish.api +from maya import cmds + from openpype.hosts.maya.api import lib from openpype.pipeline.publish import ( - ValidateContentsOrder, - RepairAction, -) - -from maya import cmds + PublishValidationError, RepairAction, ValidateContentsOrder) class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin): @@ -36,7 +34,7 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin): vray_settings = cmds.ls("vraySettings", type="VRaySettingsNode") assert vray_settings, "Please ensure a VRay Settings Node is present" - renderlayer = instance.data['setMembers'] + renderlayer = instance.data['renderlayer'] if not lib.get_attr_in_layer(self.enabled_attr, layer=renderlayer): # If not distributed rendering enabled, ignore.. @@ -45,13 +43,14 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin): # If distributed rendering is enabled but it is *not* set to ignore # during batch mode we invalidate the instance if not lib.get_attr_in_layer(self.ignored_attr, layer=renderlayer): - raise RuntimeError("Renderlayer has distributed rendering enabled " - "but is not set to ignore in batch mode.") + raise PublishValidationError( + ("Renderlayer has distributed rendering enabled " + "but is not set to ignore in batch mode.")) @classmethod def repair(cls, instance): - renderlayer = instance.data.get("setMembers") + renderlayer = instance.data.get("renderlayer") with lib.renderlayer(renderlayer): cls.log.info("Enabling Distributed Rendering " "ignore in batch mode..") diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py b/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py index f49811c2c0..4474f08ba4 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py @@ -5,6 +5,7 @@ from openpype.pipeline.publish import ( context_plugin_should_run, RepairContextAction, ValidateContentsOrder, + PublishValidationError ) from maya import cmds @@ -26,7 +27,10 @@ class ValidateVRayTranslatorEnabled(pyblish.api.ContextPlugin): invalid = self.get_invalid(context) if invalid: - raise RuntimeError("Found invalid VRay Translator settings!") + raise PublishValidationError( + message="Found invalid VRay Translator settings", + title=self.label + ) @classmethod def get_invalid(cls, context): @@ -35,7 +39,11 @@ class ValidateVRayTranslatorEnabled(pyblish.api.ContextPlugin): # Get vraySettings node vray_settings = cmds.ls(type="VRaySettingsNode") - assert vray_settings, "Please ensure a VRay Settings Node is present" + if not vray_settings: + raise PublishValidationError( + "Please ensure 
a VRay Settings Node is present", + title=cls.label + ) node = vray_settings[0] diff --git a/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py b/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py index 855a96e6b9..7b726de3a8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py +++ b/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py @@ -3,6 +3,10 @@ import pyblish.api from maya import cmds import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + PublishValidationError +) + class ValidateVrayProxyMembers(pyblish.api.InstancePlugin): @@ -19,7 +23,7 @@ class ValidateVrayProxyMembers(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("'%s' is invalid VRay Proxy for " + raise PublishValidationError("'%s' is invalid VRay Proxy for " "export!" % instance.name) @classmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py index 4842134b12..2b7249ad94 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py @@ -1,7 +1,11 @@ import pyblish.api import maya.cmds as cmds import openpype.hosts.maya.api.action -from openpype.pipeline.publish import RepairAction +from openpype.pipeline.publish import ( + RepairAction, + PublishValidationError +) + class ValidateYetiRigCacheState(pyblish.api.InstancePlugin): @@ -23,7 +27,7 @@ class ValidateYetiRigCacheState(pyblish.api.InstancePlugin): def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Nodes have incorrect I/O settings") + raise PublishValidationError("Nodes have incorrect I/O settings") @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py index ebef44774d..96fb475a0a 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py @@ -3,7 +3,10 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishValidationError +) class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator): @@ -19,7 +22,7 @@ class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Yeti Rig has invalid input meshes") + raise PublishValidationError("Yeti Rig has invalid input meshes") @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py index 9914277721..455bf5291a 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_settings.py @@ -1,5 +1,7 @@ import pyblish.api +from openpype.pipeline.publish import PublishValidationError + class ValidateYetiRigSettings(pyblish.api.InstancePlugin): """Validate Yeti Rig Settings have collected input connections. 
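# ---------------------------------------------------------------------------
# Editorial note (illustrative sketch, not part of the diff): the hunks above
# and below all swap bare RuntimeError for PublishValidationError so failures
# render as proper validation reports in the Publisher UI. The recurring
# pattern, assuming the same import path the diff itself uses:
import pyblish.api
from openpype.pipeline.publish import PublishValidationError


class ValidateExample(pyblish.api.InstancePlugin):
    """Hypothetical validator showing the error-reporting pattern."""

    label = "Validate Example"

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                message="Found invalid nodes: {}".format(invalid),
                title=self.label)

    @classmethod
    def get_invalid(cls, instance):
        # placeholder check; real validators inspect scene nodes here
        return []
# ---------------------------------------------------------------------------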
@@ -18,8 +20,9 @@ class ValidateYetiRigSettings(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError("Detected invalid Yeti Rig data. (See log) " - "Tip: Save the scene") + raise PublishValidationError( + ("Detected invalid Yeti Rig data. (See log) " + "Tip: Save the scene")) @classmethod def get_invalid(cls, instance): diff --git a/openpype/hosts/maya/startup/userSetup.py b/openpype/hosts/maya/startup/userSetup.py index ae6a999d98..f2899cdb37 100644 --- a/openpype/hosts/maya/startup/userSetup.py +++ b/openpype/hosts/maya/startup/userSetup.py @@ -1,7 +1,7 @@ import os from openpype.settings import get_project_settings -from openpype.pipeline import install_host +from openpype.pipeline import install_host, get_current_project_name from openpype.hosts.maya.api import MayaHost from maya import cmds @@ -12,10 +12,11 @@ install_host(host) print("Starting OpenPype usersetup...") -project_settings = get_project_settings(os.environ['AVALON_PROJECT']) +project_name = get_current_project_name() +settings = get_project_settings(project_name) # Loading plugins explicitly. -explicit_plugins_loading = project_settings["maya"]["explicit_plugins_loading"] +explicit_plugins_loading = settings["maya"]["explicit_plugins_loading"] if explicit_plugins_loading["enabled"]: def _explicit_load_plugins(): for plugin in explicit_plugins_loading["plugins_to_load"]: @@ -46,17 +47,16 @@ if bool(int(os.environ.get(key, "0"))): ) # Build a shelf. -shelf_preset = project_settings['maya'].get('project_shelf') - +shelf_preset = settings['maya'].get('project_shelf') if shelf_preset: - project = os.environ["AVALON_PROJECT"] - - icon_path = os.path.join(os.environ['OPENPYPE_PROJECT_SCRIPTS'], - project, "icons") + icon_path = os.path.join( + os.environ['OPENPYPE_PROJECT_SCRIPTS'], + project_name, + "icons") icon_path = os.path.abspath(icon_path) for i in shelf_preset['imports']: - import_string = "from {} import {}".format(project, i) + import_string = "from {} import {}".format(project_name, i) print(import_string) exec(import_string) diff --git a/openpype/hosts/maya/tools/mayalookassigner/app.py b/openpype/hosts/maya/tools/mayalookassigner/app.py index 64fc04dfc4..b5ce7ada34 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/app.py +++ b/openpype/hosts/maya/tools/mayalookassigner/app.py @@ -4,9 +4,9 @@ import logging from qtpy import QtWidgets, QtCore -from openpype.client import get_last_version_by_subset_id from openpype import style -from openpype.pipeline import legacy_io +from openpype.client import get_last_version_by_subset_id +from openpype.pipeline import get_current_project_name from openpype.tools.utils.lib import qt_app_context from openpype.hosts.maya.api.lib import ( assign_look_by_version, @@ -216,7 +216,7 @@ class MayaLookAssignerWindow(QtWidgets.QWidget): selection = self.assign_selected.isChecked() asset_nodes = self.asset_outliner.get_nodes(selection=selection) - project_name = legacy_io.active_project() + project_name = get_current_project_name() start = time.time() for i, (asset, item) in enumerate(asset_nodes.items()): diff --git a/openpype/hosts/maya/tools/mayalookassigner/commands.py b/openpype/hosts/maya/tools/mayalookassigner/commands.py index c5e6c973cf..a1290aa68d 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/commands.py +++ b/openpype/hosts/maya/tools/mayalookassigner/commands.py @@ -1,14 +1,14 @@ -from collections import defaultdict -import logging import os +import logging +from collections import defaultdict import maya.cmds as cmds 
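# ---------------------------------------------------------------------------
# Editorial note (illustrative sketch, not part of the diff): the commands.py
# hunk below replaces one get_asset_by_id() query per id hash with a single
# bulk get_assets() query indexed by id. The pattern in isolation, assuming
# get_assets(project_name, asset_ids, fields) yields asset documents:
from openpype.client import get_assets


def index_assets_by_id(project_name, asset_ids):
    # One database round-trip instead of len(asset_ids) separate queries.
    return {
        str(asset_doc["_id"]): asset_doc
        for asset_doc in get_assets(project_name, asset_ids, fields=["name"])
    }
# ---------------------------------------------------------------------------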
-from openpype.client import get_asset_by_id +from openpype.client import get_assets from openpype.pipeline import ( - legacy_io, remove_container, registered_host, + get_current_project_name, ) from openpype.hosts.maya.api import lib @@ -126,18 +126,24 @@ def create_items_from_nodes(nodes): log.warning("No id hashes") return asset_view_items - project_name = legacy_io.active_project() - for _id, id_nodes in id_hashes.items(): - asset = get_asset_by_id(project_name, _id, fields=["name"]) + project_name = get_current_project_name() + asset_ids = set(id_hashes.keys()) + asset_docs = get_assets(project_name, asset_ids, fields=["name"]) + asset_docs_by_id = { + str(asset_doc["_id"]): asset_doc + for asset_doc in asset_docs + } + for asset_id, id_nodes in id_hashes.items(): + asset_doc = asset_docs_by_id.get(asset_id) # Skip if asset id is not found - if not asset: + if not asset_doc: log.warning("Id not found in the database, skipping '%s'." % _id) log.warning("Nodes: %s" % id_nodes) continue # Collect available look subsets for this asset - looks = lib.list_looks(asset["_id"]) + looks = lib.list_looks(project_name, asset_doc["_id"]) # Collect namespaces the asset is found in namespaces = set() @@ -146,8 +152,8 @@ def create_items_from_nodes(nodes): namespaces.add(namespace) asset_view_items.append({ - "label": asset["name"], - "asset": asset, + "label": asset_doc["name"], + "asset": asset_doc, "looks": looks, "namespaces": namespaces }) diff --git a/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py b/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py index c875fec7f0..97fb832f71 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py +++ b/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py @@ -6,7 +6,7 @@ import logging from maya import cmds from openpype.client import get_last_version_by_subset_name -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_project_name import openpype.hosts.maya.lib as maya_lib from . import lib from .alembic import get_alembic_ids_cache @@ -76,7 +76,7 @@ def vrayproxy_assign_look(vrayproxy, subset="lookDefault"): asset_id = node_id.split(":", 1)[0] node_ids_by_asset_id[asset_id].add(node_id) - project_name = legacy_io.active_project() + project_name = get_current_project_name() for asset_id, node_ids in node_ids_by_asset_id.items(): # Get latest look version diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 0efc46edaf..4a1e109b17 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -42,8 +42,10 @@ from openpype.pipeline.template_data import get_template_data_with_names from openpype.pipeline import ( get_current_project_name, discover_legacy_creator_plugins, - legacy_io, Anatomy, + get_current_host_name, + get_current_project_name, + get_current_asset_name, ) from openpype.pipeline.context_tools import ( get_current_project_asset, @@ -422,10 +424,13 @@ def add_publish_knob(node): return node -@deprecated +@deprecated("openpype.hosts.nuke.api.lib.set_node_data") def set_avalon_knob_data(node, data=None, prefix="avalon:"): """[DEPRECATED] Sets data into nodes's avalon knob + This function is still used but soon will be deprecated. + Use `set_node_data` instead. 
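# ---------------------------------------------------------------------------
# Editorial note (illustrative sketch, not part of the diff): the decorator
# now receives the suggested replacement as an argument. A minimal stand-in
# showing the intended behaviour (the real openpype.lib `deprecated` helper
# may differ in detail):
import functools
import warnings


def deprecated(replacement):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "'{}' is deprecated, use '{}' instead.".format(
                    func.__name__, replacement),
                DeprecationWarning,
                stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator
# ---------------------------------------------------------------------------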
+ Arguments: node (nuke.Node): Nuke node to imprint with data, data (dict, optional): Data to be imprinted into AvalonTab @@ -485,10 +490,13 @@ def set_avalon_knob_data(node, data=None, prefix="avalon:"): return node -@deprecated +@deprecated("openpype.hosts.nuke.api.lib.get_node_data") def get_avalon_knob_data(node, prefix="avalon:", create=True): """[DEPRECATED] Gets a data from nodes's avalon knob + This function is still used but soon will be deprecated. + Use `get_node_data` instead. + Arguments: node (obj): Nuke node to search for data, prefix (str, optional): filtering prefix @@ -970,7 +978,7 @@ def check_inventory_versions(): if not repre_ids: return - project_name = legacy_io.active_project() + project_name = get_current_project_name() # Find representations based on found containers repre_docs = get_representations( project_name, @@ -1128,11 +1136,15 @@ def format_anatomy(data): anatomy = Anatomy() log.debug("__ anatomy.templates: {}".format(anatomy.templates)) - padding = int( - anatomy.templates["render"].get( - "frame_padding" + padding = None + if "frame_padding" in anatomy.templates.keys(): + padding = int(anatomy.templates["frame_padding"]) + elif "render" in anatomy.templates.keys(): + padding = int( + anatomy.templates["render"].get( + "frame_padding" + ) ) - ) version = data.get("version", None) if not version: @@ -1142,7 +1154,7 @@ def format_anatomy(data): project_name = anatomy.project_name asset_name = data["asset"] task_name = data["task"] - host_name = os.environ["AVALON_APP"] + host_name = get_current_host_name() context_data = get_template_data_with_names( project_name, asset_name, task_name, host_name ) @@ -1470,7 +1482,7 @@ def create_write_node_legacy( if knob["name"] == "file_type": representation = knob["value"] - host_name = os.environ.get("AVALON_APP") + host_name = get_current_host_name() try: data.update({ "app": host_name, @@ -1693,7 +1705,7 @@ def create_write_node_legacy( knob_value = float(knob_value) if knob_type == "bool": knob_value = bool(knob_value) - if knob_type in ["2d_vector", "3d_vector"]: + if knob_type in ["2d_vector", "3d_vector", "color", "box"]: knob_value = list(knob_value) GN[knob_name].setValue(knob_value) @@ -1709,7 +1721,7 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): Args: node (nuke.Node): nuke node knob_settings (list): list of dict. 
Keys are `type`, `name`, `value` - kwargs (dict)[optional]: keys for formatable knob settings + kwargs (dict)[optional]: keys for formattable knob settings """ for knob in knob_settings: log.debug("__ knob: {}".format(pformat(knob))) @@ -1726,7 +1738,7 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): ) continue - # first deal with formatable knob settings + # first deal with formattable knob settings if knob_type == "formatable": template = knob["template"] to_type = knob["to_type"] @@ -1735,8 +1747,8 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs): **kwargs ) except KeyError as msg: - log.warning("__ msg: {}".format(msg)) - raise KeyError(msg) + raise KeyError( + "Not able to format expression: {}".format(msg)) # convert value to correct type if to_type == "2d_vector": @@ -1775,8 +1787,8 @@ def convert_knob_value_to_correct_type(knob_type, knob_value): knob_value = knob_value elif knob_type == "color_gui": knob_value = color_gui_to_int(knob_value) - elif knob_type in ["2d_vector", "3d_vector", "color"]: - knob_value = [float(v) for v in knob_value] + elif knob_type in ["2d_vector", "3d_vector", "color", "box"]: + knob_value = [float(val_) for val_ in knob_value] return knob_value @@ -1929,15 +1941,18 @@ class WorkfileSettings(object): def __init__(self, root_node=None, nodes=None, **kwargs): project_doc = kwargs.get("project") if project_doc is None: - project_name = legacy_io.active_project() + project_name = get_current_project_name() project_doc = get_project(project_name) + else: + project_name = project_doc["name"] Context._project_doc = project_doc + self._project_name = project_name self._asset = ( kwargs.get("asset_name") - or legacy_io.Session["AVALON_ASSET"] + or get_current_asset_name() ) - self._asset_entity = get_current_project_asset(self._asset) + self._asset_entity = get_asset_by_name(project_name, self._asset) self._root_node = root_node or nuke.root() self._nodes = self.get_nodes(nodes=nodes) @@ -2061,9 +2076,16 @@ class WorkfileSettings(object): str(workfile_settings["OCIO_config"])) else: - # set values to root + # OCIO config path is defined from prelaunch hook self._root_node["colorManagement"].setValue("OCIO") + # print previous settings in case some were found in workfile + residual_path = self._root_node["customOCIOConfigPath"].value() + if residual_path: + log.info("Residual OCIO config path found: `{}`".format( + residual_path + )) + # we dont need the key anymore workfile_settings.pop("customOCIOConfigPath", None) workfile_settings.pop("colorManagement", None) @@ -2085,9 +2107,35 @@ class WorkfileSettings(object): # set ocio config path if config_data: - current_ocio_path = os.getenv("OCIO") - if current_ocio_path != config_data["path"]: - message = """ + log.info("OCIO config path found: `{}`".format( + config_data["path"])) + + # check if there's a mismatch between environment and settings + correct_settings = self._is_settings_matching_environment( + config_data) + + # if there's no mismatch between environment and settings + if correct_settings: + self._set_ocio_config_path_to_workfile(config_data) + + def _is_settings_matching_environment(self, config_data): + """ Check if OCIO config path is different from environment + + Args: + config_data (dict): OCIO config data from settings + + Returns: + bool: True if settings are matching environment, False otherwise + """ + current_ocio_path = os.environ["OCIO"] + settings_ocio_path = config_data["path"] + + # normalize all paths to forward slashes + current_ocio_path = 
current_ocio_path.replace("\\", "/") + settings_ocio_path = settings_ocio_path.replace("\\", "/") + + if current_ocio_path != settings_ocio_path: + message = """ It seems like there's a mismatch between the OCIO config path set in your Nuke settings and the actual path set in your OCIO environment. @@ -2105,12 +2153,118 @@ Please note the paths for your reference: Reopening Nuke should synchronize these paths and resolve any discrepancies. """ - nuke.message( - message.format( - env_path=current_ocio_path, - settings_path=config_data["path"] - ) + nuke.message( + message.format( + env_path=current_ocio_path, + settings_path=settings_ocio_path ) + ) + return False + + return True + + def _set_ocio_config_path_to_workfile(self, config_data): + """ Set OCIO config path to workfile + + Path set into nuke workfile. It is trying to replace path with + environment variable if possible. If not, it will set it as it is. + It also saves the script to apply the change, but only if it's not + empty Untitled script. + + Args: + config_data (dict): OCIO config data from settings + + """ + # replace path with env var if possible + ocio_path = self._replace_ocio_path_with_env_var(config_data) + + log.info("Setting OCIO config path to: `{}`".format( + ocio_path)) + + self._root_node["customOCIOConfigPath"].setValue( + ocio_path + ) + self._root_node["OCIO_config"].setValue("custom") + + # only save script if it's not empty + if self._root_node["name"].value() != "": + log.info("Saving script to apply OCIO config path change.") + nuke.scriptSave() + + def _get_included_vars(self, config_template): + """ Get all environment variables included in template + + Args: + config_template (str): OCIO config template from settings + + Returns: + list: list of environment variables included in template + """ + # resolve all environments for whitelist variables + included_vars = [ + "BUILTIN_OCIO_ROOT", + ] + + # include all project root related env vars + for env_var in os.environ: + if env_var.startswith("OPENPYPE_PROJECT_ROOT_"): + included_vars.append(env_var) + + # use regex to find env var in template with format {ENV_VAR} + # this way we make sure only template used env vars are included + env_var_regex = r"\{([A-Z0-9_]+)\}" + env_var = re.findall(env_var_regex, config_template) + if env_var: + included_vars.append(env_var[0]) + + return included_vars + + def _replace_ocio_path_with_env_var(self, config_data): + """ Replace OCIO config path with environment variable + + Environment variable is added as TCL expression to path. TCL expression + is also replacing backward slashes found in path for windows + formatted values. 
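# ---------------------------------------------------------------------------
# Editorial note (illustrative sketch, not part of the diff): a worked example
# of the substitution this method performs; the env var name is hypothetical.
# Given OPENPYPE_PROJECT_ROOT_WORK="C:/projects/work" and the config path
# "C:/projects/work/configs/aces.ocio", the stored workfile value becomes a
# TCL expression Nuke resolves on load, with regsub normalizing any Windows
# backslashes in the variable's value to forward slashes:
env_var = "OPENPYPE_PROJECT_ROOT_WORK"
resub_expr = "[regsub -all {{\\\\}} [getenv {}] \"/\"]".format(env_var)
new_path = "C:/projects/work/configs/aces.ocio".replace(
    "C:/projects/work", resub_expr)
# new_path == '[regsub -all {\\} [getenv OPENPYPE_PROJECT_ROOT_WORK] "/"]/configs/aces.ocio'
# ---------------------------------------------------------------------------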
+ + Args: + config_data (str): OCIO config dict from settings + + Returns: + str: OCIO config path with environment variable TCL expression + """ + config_path = config_data["path"] + config_template = config_data["template"] + + included_vars = self._get_included_vars(config_template) + + # make sure we return original path if no env var is included + new_path = config_path + + for env_var in included_vars: + env_path = os.getenv(env_var) + if not env_path: + continue + + # it has to be directory current process can see + if not os.path.isdir(env_path): + continue + + # make sure paths are in same format + env_path = env_path.replace("\\", "/") + path = config_path.replace("\\", "/") + + # check if env_path is in path and replace to first found positive + if env_path in path: + # with regsub we make sure path format of slashes is correct + resub_expr = ( + "[regsub -all {{\\\\}} [getenv {}] \"/\"]").format(env_var) + + new_path = path.replace( + env_path, resub_expr + ) + break + + return new_path def set_writes_colorspace(self): ''' Adds correct colorspace to write node dict @@ -2195,7 +2349,6 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. continue preset_clrsp = input["colorspace"] - log.debug(preset_clrsp) if preset_clrsp is not None: current = n["colorspace"].value() future = str(preset_clrsp) @@ -2225,7 +2378,7 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. knobs["to"])) def set_colorspace(self): - ''' Setting colorpace following presets + ''' Setting colorspace following presets ''' # get imageio nuke_colorspace = get_nuke_imageio_settings() @@ -2233,17 +2386,16 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. log.info("Setting colorspace to workfile...") try: self.set_root_colorspace(nuke_colorspace) - except AttributeError: - msg = "set_colorspace(): missing `workfile` settings in template" + except AttributeError as _error: + msg = "Set Colorspace to workfile error: {}".format(_error) nuke.message(msg) log.info("Setting colorspace to viewers...") try: self.set_viewers_colorspace(nuke_colorspace["viewer"]) - except AttributeError: - msg = "set_colorspace(): missing `viewer` settings in template" + except AttributeError as _error: + msg = "Set Colorspace to viewer error: {}".format(_error) nuke.message(msg) - log.error(msg) log.info("Setting colorspace to write nodes...") try: @@ -2330,7 +2482,7 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. def reset_resolution(self): """Set resolution to project resolution.""" log.info("Resetting resolution") - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_data = self._asset_entity["data"] format_data = { @@ -2409,7 +2561,7 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. from .utils import set_context_favorites work_dir = os.getenv("AVALON_WORKDIR") - asset = os.getenv("AVALON_ASSET") + asset = get_current_asset_name() favorite_items = OrderedDict() # project @@ -2677,7 +2829,15 @@ def _launch_workfile_app(): host_tools.show_workfiles(parent=None, on_top=True) +@deprecated("openpype.hosts.nuke.api.lib.start_workfile_template_builder") def process_workfile_builder(): + """ [DEPRECATED] Process workfile builder on nuke start + + This function is deprecated and will be removed in future versions. + Use settings for `project_settings/nuke/templated_workfile_build` which are + supported by api `start_workfile_template_builder()`. 
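# ---------------------------------------------------------------------------
# Editorial note (illustrative sketch, not part of the diff): both the old
# and the new builder entry points are registered as Root onCreate callbacks
# that must unregister themselves first, otherwise building a template (which
# creates a Root node) would re-trigger the callback. The self-removing shape:
import nuke


def _on_root_created():
    # remove first so template building cannot re-enter this callback
    nuke.removeOnCreate(_on_root_created, nodeClass="Root")
    # ... build the workfile template here ...


nuke.addOnCreate(_on_root_created, nodeClass="Root")
# ---------------------------------------------------------------------------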
+ """ + # to avoid looping of the callback, remove it! nuke.removeOnCreate(process_workfile_builder, nodeClass="Root") @@ -2686,11 +2846,6 @@ def process_workfile_builder(): workfile_builder = project_settings["nuke"].get( "workfile_builder", {}) - # get all imortant settings - openlv_on = env_value_to_bool( - env_key="AVALON_OPEN_LAST_WORKFILE", - default=None) - # get settings createfv_on = workfile_builder.get("create_first_version") or None builder_on = workfile_builder.get("builder_on_start") or None @@ -2731,20 +2886,15 @@ def process_workfile_builder(): save_file(last_workfile_path) return - # skip opening of last version if it is not enabled - if not openlv_on or not os.path.exists(last_workfile_path): - return - - log.info("Opening last workfile...") - # open workfile - open_file(last_workfile_path) - def start_workfile_template_builder(): from .workfile_template_builder import ( build_workfile_template ) + # remove callback since it would be duplicating the workfile + nuke.removeOnCreate(start_workfile_template_builder, nodeClass="Root") + # to avoid looping of the callback, remove it! log.info("Starting workfile template builder...") try: @@ -2752,8 +2902,6 @@ def start_workfile_template_builder(): except TemplateProfileNotFound: log.warning("Template profile not found. Skipping...") - # remove callback since it would be duplicating the workfile - nuke.removeOnCreate(start_workfile_template_builder, nodeClass="Root") @deprecated def recreate_instance(origin_node, avalon_data=None): @@ -2832,7 +2980,8 @@ def add_scripts_menu(): return # load configuration of custom menu - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_name = get_current_project_name() + project_settings = get_project_settings(project_name) config = project_settings["nuke"]["scriptsmenu"]["definition"] _menu = project_settings["nuke"]["scriptsmenu"]["name"] @@ -2850,7 +2999,8 @@ def add_scripts_menu(): def add_scripts_gizmo(): # load configuration of custom menu - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_name = get_current_project_name() + project_settings = get_project_settings(project_name) platform_name = platform.system().lower() for gizmo_settings in project_settings["nuke"]["gizmo"]: @@ -2943,6 +3093,7 @@ class DirmapCache: """Caching class to get settings and sync_module easily and only once.""" _project_name = None _project_settings = None + _sync_module_discovered = False _sync_module = None _mapping = None @@ -2960,8 +3111,10 @@ class DirmapCache: @classmethod def sync_module(cls): - if cls._sync_module is None: - cls._sync_module = ModulesManager().modules_by_name["sync_server"] + if not cls._sync_module_discovered: + cls._sync_module_discovered = True + cls._sync_module = ModulesManager().modules_by_name.get( + "sync_server") return cls._sync_module @classmethod diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py index 8406a251e9..65b4b91323 100644 --- a/openpype/hosts/nuke/api/pipeline.py +++ b/openpype/hosts/nuke/api/pipeline.py @@ -2,7 +2,7 @@ import nuke import os import importlib -from collections import OrderedDict +from collections import OrderedDict, defaultdict import pyblish.api @@ -20,6 +20,8 @@ from openpype.pipeline import ( register_creator_plugin_path, register_inventory_action_path, AVALON_CONTAINER_ID, + get_current_asset_name, + get_current_task_name, ) from openpype.pipeline.workfile import BuildWorkfile from openpype.tools.utils import host_tools @@ -32,6 +34,7 @@ from .lib 
import ( get_main_window, add_publish_knob, WorkfileSettings, + # TODO: remove this once workfile builder will be removed process_workfile_builder, start_workfile_template_builder, launch_workfiles_app, @@ -153,11 +156,18 @@ def add_nuke_callbacks(): """ nuke_settings = get_current_project_settings()["nuke"] workfile_settings = WorkfileSettings() + # Set context settings. nuke.addOnCreate( workfile_settings.set_context_settings, nodeClass="Root") + + # adding favorites to file browser nuke.addOnCreate(workfile_settings.set_favorites, nodeClass="Root") + + # template builder callbacks nuke.addOnCreate(start_workfile_template_builder, nodeClass="Root") + + # TODO: remove this callback once workfile builder will be removed nuke.addOnCreate(process_workfile_builder, nodeClass="Root") # fix ffmpeg settings on script @@ -167,11 +177,12 @@ def add_nuke_callbacks(): nuke.addOnScriptLoad(check_inventory_versions) nuke.addOnScriptSave(check_inventory_versions) - # # set apply all workfile settings on script load and save + # set apply all workfile settings on script load and save nuke.addOnScriptLoad(WorkfileSettings().set_context_settings) + if nuke_settings["nuke-dirmap"]["enabled"]: - log.info("Added Nuke's dirmaping callback ...") + log.info("Added Nuke's dir-mapping callback ...") # Add dirmap for file paths. nuke.addFilenameFilter(dirmap_file_name_filter) @@ -211,6 +222,13 @@ def _show_workfiles(): host_tools.show_workfiles(parent=None, on_top=False) +def get_context_label(): + return "{0}, {1}".format( + get_current_asset_name(), + get_current_task_name() + ) + + def _install_menu(): """Install Avalon menu into Nuke's main menu bar.""" @@ -220,9 +238,7 @@ def _install_menu(): menu = menubar.addMenu(MENU_LABEL) if not ASSIST: - label = "{0}, {1}".format( - os.environ["AVALON_ASSET"], os.environ["AVALON_TASK"] - ) + label = get_context_label() Context.context_label = label context_action = menu.addCommand(label) context_action.setEnabled(False) @@ -338,9 +354,7 @@ def change_context_label(): menubar = nuke.menu("Nuke") menu = menubar.findItem(MENU_LABEL) - label = "{0}, {1}".format( - os.environ["AVALON_ASSET"], os.environ["AVALON_TASK"] - ) + label = get_context_label() rm_item = [ (i, item) for i, item in enumerate(menu.items()) @@ -532,7 +546,10 @@ def list_instances(creator_id=None): Returns: (list) of dictionaries matching instances format """ - listed_instances = [] + instances_by_order = defaultdict(list) + subset_instances = [] + instance_ids = set() + for node in nuke.allNodes(recurseGroups=True): if node.Class() in ["Viewer", "Dot"]: @@ -558,9 +575,57 @@ def list_instances(creator_id=None): if creator_id and instance_data["creator_identifier"] != creator_id: continue - listed_instances.append((node, instance_data)) + if instance_data["instance_id"] in instance_ids: + instance_data.pop("instance_id") + else: + instance_ids.add(instance_data["instance_id"]) - return listed_instances + # node name could change, so update subset name data + _update_subset_name_data(instance_data, node) + + if "render_order" not in node.knobs(): + subset_instances.append((node, instance_data)) + continue + + order = int(node["render_order"].value()) + instances_by_order[order].append((node, instance_data)) + + # Sort instances based on order attribute or subset name. 
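# ---------------------------------------------------------------------------
# Editorial note (illustrative sketch, not part of the diff): the sorting
# below is "render_order first, subset name second". The inner step in
# isolation, for (node, data) pairs already grouped by integer order:
from collections import defaultdict


def sort_by_subset(pairs):
    by_subset = defaultdict(list)
    for node, data in pairs:
        by_subset[data["subset"]].append((node, data))
    ordered = []
    for subset in sorted(by_subset):
        ordered.extend(by_subset[subset])
    return ordered
# ---------------------------------------------------------------------------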
+ # TODO: remove in future Publisher enhanced with sorting + ordered_instances = [] + for key in sorted(instances_by_order.keys()): + instances_by_subset = defaultdict(list) + for node, data_ in instances_by_order[key]: + instances_by_subset[data_["subset"]].append((node, data_)) + for subkey in sorted(instances_by_subset.keys()): + ordered_instances.extend(instances_by_subset[subkey]) + + instances_by_subset = defaultdict(list) + for node, data_ in subset_instances: + instances_by_subset[data_["subset"]].append((node, data_)) + for key in sorted(instances_by_subset.keys()): + ordered_instances.extend(instances_by_subset[key]) + + return ordered_instances + + +def _update_subset_name_data(instance_data, node): + """Update subset name data in instance data. + + Args: + instance_data (dict): instance creator data + node (nuke.Node): nuke node + """ + # make sure node name is subset name + old_subset_name = instance_data["subset"] + old_variant = instance_data["variant"] + subset_name_root = old_subset_name.replace(old_variant, "") + + new_subset_name = node.name() + new_variant = new_subset_name.replace(subset_name_root, "") + + instance_data["subset"] = new_subset_name + instance_data["variant"] = new_variant def remove_instance(instance): diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py index 7035da2bb5..6d48c09d60 100644 --- a/openpype/hosts/nuke/api/plugin.py +++ b/openpype/hosts/nuke/api/plugin.py @@ -19,7 +19,7 @@ from openpype.pipeline import ( CreatorError, Creator as NewCreator, CreatedInstance, - legacy_io + get_current_task_name ) from .lib import ( INSTANCE_DATA_KNOB, @@ -212,9 +212,15 @@ class NukeCreator(NewCreator): created_instance["creator_attributes"].pop(key) def update_instances(self, update_list): - for created_inst, _changes in update_list: + for created_inst, changes in update_list: instance_node = created_inst.transient_data["node"] + # update instance node name if subset name changed + if "subset" in changes.changed_keys: + instance_node["name"].setValue( + changes["subset"].new_value + ) + # in case node is not existing anymore (user erased it manually) try: instance_node.fullName() @@ -256,6 +262,17 @@ class NukeWriteCreator(NukeCreator): family = "write" icon = "sign-out" + def get_linked_knobs(self): + linked_knobs = [] + if "channels" in self.instance_attributes: + linked_knobs.append("channels") + if "ordered" in self.instance_attributes: + linked_knobs.append("render_order") + if "use_range_limit" in self.instance_attributes: + linked_knobs.extend(["___", "first", "last", "use_limit"]) + + return linked_knobs + def integrate_links(self, node, outputs=True): # skip if no selection if not self.selected_node: @@ -310,6 +327,7 @@ class NukeWriteCreator(NukeCreator): "frames": "Use existing frames" } if ("farm_rendering" in self.instance_attributes): + rendering_targets["frames_farm"] = "Use existing frames - farm" rendering_targets["farm"] = "Farm rendering" return EnumDef( @@ -824,41 +842,6 @@ class ExporterReviewMov(ExporterReview): add_tags = [] self.publish_on_farm = farm read_raw = kwargs["read_raw"] - - # TODO: remove this when `reformat_nodes_config` - # is changed in settings - reformat_node_add = kwargs["reformat_node_add"] - reformat_node_config = kwargs["reformat_node_config"] - - # TODO: make this required in future - reformat_nodes_config = kwargs.get("reformat_nodes_config", {}) - - # TODO: remove this once deprecated is removed - # make sure only reformat_nodes_config is used in future - if reformat_node_add and 
reformat_nodes_config.get("enabled"): - self.log.warning( - "`reformat_node_add` is deprecated. " - "Please use only `reformat_nodes_config` instead.") - reformat_nodes_config = None - - # TODO: reformat code when backward compatibility is not needed - # warning if reformat_nodes_config is not set - if not reformat_nodes_config: - self.log.warning( - "Please set `reformat_nodes_config` in settings. " - "Using `reformat_node_config` instead." - ) - reformat_nodes_config = { - "enabled": reformat_node_add, - "reposition_nodes": [ - { - "node_class": "Reformat", - "knobs": reformat_node_config - } - ] - } - - bake_viewer_process = kwargs["bake_viewer_process"] bake_viewer_input_process_node = kwargs[ "bake_viewer_input_process"] @@ -897,6 +880,7 @@ class ExporterReviewMov(ExporterReview): self._shift_to_previous_node_and_temp(subset, r_node, "Read... `{}`") # add reformat node + reformat_nodes_config = kwargs["reformat_nodes_config"] if reformat_nodes_config["enabled"]: reposition_nodes = reformat_nodes_config["reposition_nodes"] for reposition_node in reposition_nodes: @@ -955,7 +939,11 @@ class ExporterReviewMov(ExporterReview): except Exception: self.log.info("`mov64_codec` knob was not found") - write_node["mov64_write_timecode"].setValue(1) + try: + write_node["mov64_write_timecode"].setValue(1) + except Exception: + self.log.info("`mov64_write_timecode` knob was not found") + write_node["raw"].setValue(1) # connect write_node.setInput(0, self.previous_node) @@ -1173,7 +1161,7 @@ def convert_to_valid_instaces(): from openpype.hosts.nuke.api import workio - task_name = legacy_io.Session["AVALON_TASK"] + task_name = get_current_task_name() # save into new workfile current_file = workio.current_file() diff --git a/openpype/hosts/nuke/api/workfile_template_builder.py b/openpype/hosts/nuke/api/workfile_template_builder.py index 2384e8eca1..9d7604c58d 100644 --- a/openpype/hosts/nuke/api/workfile_template_builder.py +++ b/openpype/hosts/nuke/api/workfile_template_builder.py @@ -25,7 +25,8 @@ from .lib import ( select_nodes, duplicate_node, node_tempfile, - get_main_window + get_main_window, + WorkfileSettings, ) PLACEHOLDER_SET = "PLACEHOLDERS_SET" @@ -113,6 +114,11 @@ class NukePlaceholderPlugin(PlaceholderPlugin): placeholder_data[key] = value return placeholder_data + def delete_placeholder(self, placeholder): + """Remove placeholder if building was successful""" + placeholder_node = nuke.toNode(placeholder.scene_identifier) + nuke.delete(placeholder_node) + class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin): identifier = "nuke.load" @@ -275,14 +281,6 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin): placeholder.data["nb_children"] += 1 reset_selection() - # remove placeholders marked as delete - if ( - placeholder.data.get("delete") - and not placeholder.data.get("keep_placeholder") - ): - self.log.debug("Deleting node: {}".format(placeholder_node.name())) - nuke.delete(placeholder_node) - # go back to root group nuke.root().begin() @@ -689,14 +687,6 @@ class NukePlaceholderCreatePlugin( placeholder.data["nb_children"] += 1 reset_selection() - # remove placeholders marked as delete - if ( - placeholder.data.get("delete") - and not placeholder.data.get("keep_placeholder") - ): - self.log.debug("Deleting node: {}".format(placeholder_node.name())) - nuke.delete(placeholder_node) - # go back to root group nuke.root().begin() @@ -955,6 +945,9 @@ def build_workfile_template(*args, **kwargs): builder = NukeTemplateBuilder(registered_host()) 
builder.build_template(*args, **kwargs) + # set all settings to shot context default + WorkfileSettings().set_context_settings() + def update_workfile_template(*args): builder = NukeTemplateBuilder(registered_host()) diff --git a/openpype/hosts/nuke/api/workio.py b/openpype/hosts/nuke/api/workio.py index 8d29e0441f..98e59eff71 100644 --- a/openpype/hosts/nuke/api/workio.py +++ b/openpype/hosts/nuke/api/workio.py @@ -1,6 +1,7 @@ """Host API required Work Files tool""" import os import nuke +import shutil from .utils import is_headless @@ -21,21 +22,37 @@ def save_file(filepath): def open_file(filepath): + + def read_script(nuke_script): + nuke.scriptClear() + nuke.scriptReadFile(nuke_script) + nuke.Root()["name"].setValue(nuke_script) + nuke.Root()["project_directory"].setValue(os.path.dirname(nuke_script)) + nuke.Root().setModified(False) + filepath = filepath.replace("\\", "/") # To remain in the same window, we have to clear the script and read # in the contents of the workfile. - nuke.scriptClear() + # Nuke Preferences can be read after the script is read. + read_script(filepath) + if not is_headless(): autosave = nuke.toNode("preferences")["AutoSaveName"].evaluate() - autosave_prmpt = "Autosave detected.\nWould you like to load the autosave file?" # noqa + autosave_prmpt = "Autosave detected.\n" \ + "Would you like to load the autosave file?" # noqa if os.path.isfile(autosave) and nuke.ask(autosave_prmpt): - filepath = autosave + try: + # Overwrite the filepath with autosave + shutil.copy(autosave, filepath) + # Now read the (auto-saved) script again + read_script(filepath) + except shutil.Error as err: + nuke.message( + "Detected autosave file could not be used.\n{}" + + .format(err)) - nuke.scriptReadFile(filepath) - nuke.Root()["name"].setValue(filepath) - nuke.Root()["project_directory"].setValue(os.path.dirname(filepath)) - nuke.Root().setModified(False) return True diff --git a/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py b/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py index 3948a665c6..657291ec51 100644 --- a/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py +++ b/openpype/hosts/nuke/hooks/pre_nukeassist_setup.py @@ -1,11 +1,12 @@ -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook class PrelaunchNukeAssistHook(PreLaunchHook): """ Adding flag when nukeassist """ - app_groups = ["nukeassist"] + app_groups = {"nukeassist"} + launch_types = set() def execute(self): self.launch_context.env["NUKEASSIST"] = "1" diff --git a/openpype/hosts/nuke/plugins/create/create_write_image.py b/openpype/hosts/nuke/plugins/create/create_write_image.py index 0c8adfb75c..8c18739587 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_image.py +++ b/openpype/hosts/nuke/plugins/create/create_write_image.py @@ -64,9 +64,6 @@ class CreateWriteImage(napi.NukeWriteCreator): ) def create_instance_node(self, subset_name, instance_data): - linked_knobs_ = [] - if "use_range_limit" in self.instance_attributes: - linked_knobs_ = ["channels", "___", "first", "last", "use_limit"] # add fpath_template write_data = { @@ -81,7 +78,7 @@ class CreateWriteImage(napi.NukeWriteCreator): write_data, input=self.selected_node, prenodes=self.prenodes, - linked_knobs=linked_knobs_, + linked_knobs=self.get_linked_knobs(), **{ "frame": nuke.frame() } diff --git a/openpype/hosts/nuke/plugins/create/create_write_prerender.py b/openpype/hosts/nuke/plugins/create/create_write_prerender.py index f46dd2d6d5..395c3b002f 100644 --- 
a/openpype/hosts/nuke/plugins/create/create_write_prerender.py +++ b/openpype/hosts/nuke/plugins/create/create_write_prerender.py @@ -30,6 +30,9 @@ class CreateWritePrerender(napi.NukeWriteCreator): temp_rendering_path_template = ( "{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}") + # Before write node render. + order = 90 + def get_pre_create_attr_defs(self): attr_defs = [ BoolDef( @@ -42,10 +45,6 @@ class CreateWritePrerender(napi.NukeWriteCreator): return attr_defs def create_instance_node(self, subset_name, instance_data): - linked_knobs_ = [] - if "use_range_limit" in self.instance_attributes: - linked_knobs_ = ["channels", "___", "first", "last", "use_limit"] - # add fpath_template write_data = { "creator": self.__class__.__name__, @@ -68,7 +67,7 @@ class CreateWritePrerender(napi.NukeWriteCreator): write_data, input=self.selected_node, prenodes=self.prenodes, - linked_knobs=linked_knobs_, + linked_knobs=self.get_linked_knobs(), **{ "width": width, "height": height diff --git a/openpype/hosts/nuke/plugins/create/create_write_render.py b/openpype/hosts/nuke/plugins/create/create_write_render.py index c24405873a..91acf4eabc 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_render.py +++ b/openpype/hosts/nuke/plugins/create/create_write_render.py @@ -56,11 +56,15 @@ class CreateWriteRender(napi.NukeWriteCreator): actual_format = nuke.root().knob('format').value() width, height = (actual_format.width(), actual_format.height()) + self.log.debug(">>>>>>> : {}".format(self.instance_attributes)) + self.log.debug(">>>>>>> : {}".format(self.get_linked_knobs())) + created_node = napi.create_write_node( subset_name, write_data, input=self.selected_node, prenodes=self.prenodes, + linked_knobs=self.get_linked_knobs(), **{ "width": width, "height": height diff --git a/openpype/hosts/nuke/plugins/create/workfile_creator.py b/openpype/hosts/nuke/plugins/create/workfile_creator.py index 72ef61e63f..c4e0753abc 100644 --- a/openpype/hosts/nuke/plugins/create/workfile_creator.py +++ b/openpype/hosts/nuke/plugins/create/workfile_creator.py @@ -3,7 +3,6 @@ from openpype.client import get_asset_by_name from openpype.pipeline import ( AutoCreator, CreatedInstance, - legacy_io, ) from openpype.hosts.nuke.api import ( INSTANCE_DATA_KNOB, @@ -27,10 +26,10 @@ class WorkfileCreator(AutoCreator): root_node, api.INSTANCE_DATA_KNOB ) - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] - host_name = legacy_io.Session["AVALON_APP"] + project_name = self.create_context.get_current_project_name() + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.create_context.host_name asset_doc = get_asset_by_name(project_name, asset_name) subset_name = self.get_subset_name( diff --git a/openpype/hosts/nuke/plugins/load/load_backdrop.py b/openpype/hosts/nuke/plugins/load/load_backdrop.py index 67c7877e60..fe82d70b5e 100644 --- a/openpype/hosts/nuke/plugins/load/load_backdrop.py +++ b/openpype/hosts/nuke/plugins/load/load_backdrop.py @@ -6,8 +6,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -72,7 +72,7 @@ class LoadBackdropNodes(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = 
self.filepath_from_context(context).replace("\\", "/") # adding nodes to node graph # just in case we are in group lets jump out of it @@ -190,7 +190,7 @@ class LoadBackdropNodes(load.LoaderPlugin): # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_camera_abc.py b/openpype/hosts/nuke/plugins/load/load_camera_abc.py index 40822c9eb7..fec4ee556e 100644 --- a/openpype/hosts/nuke/plugins/load/load_camera_abc.py +++ b/openpype/hosts/nuke/plugins/load/load_camera_abc.py @@ -5,8 +5,8 @@ from openpype.client import ( get_last_version_by_subset_id ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api import ( @@ -57,7 +57,7 @@ class AlembicCameraLoader(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") with maintained_selection(): camera_node = nuke.createNode( @@ -108,7 +108,7 @@ class AlembicCameraLoader(load.LoaderPlugin): None """ # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) object_name = container['objectName'] @@ -180,7 +180,7 @@ class AlembicCameraLoader(load.LoaderPlugin): """ Coloring a node by correct color by actual version """ # get all versions in list - project_name = legacy_io.active_project() + project_name = get_current_project_name() last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] ) diff --git a/openpype/hosts/nuke/plugins/load/load_clip.py b/openpype/hosts/nuke/plugins/load/load_clip.py index ee74582544..19038b168d 100644 --- a/openpype/hosts/nuke/plugins/load/load_clip.py +++ b/openpype/hosts/nuke/plugins/load/load_clip.py @@ -8,7 +8,7 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -91,15 +91,16 @@ class LoadClip(plugin.NukeLoader): # reset container id so it is always unique for each instance self.reset_container_id() - self.log.warning(self.extensions) - is_sequence = len(representation["files"]) > 1 if is_sequence: - representation = self._representation_with_hash_in_frame( - representation + context["representation"] = \ + self._representation_with_hash_in_frame( + representation ) - filepath = get_representation_path(representation).replace("\\", "/") + + filepath = self.filepath_from_context(context) + filepath = filepath.replace("\\", "/") self.log.debug("_ filepath: {}".format(filepath)) start_at_workfile = options.get( @@ -154,7 +155,7 @@ class LoadClip(plugin.NukeLoader): read_node["file"].setValue(filepath) used_colorspace = self._set_colorspace( - read_node, version_data, representation["data"]) + read_node, version_data, representation["data"], filepath) self._set_range_to_node(read_node, first, last, start_at_workfile) @@ -164,8 +165,8 @@ class LoadClip(plugin.NukeLoader): "handleStart", "handleEnd"] data_imprint = {} - for k in add_keys: - if k == 'version': + for key in add_keys: + if key == 'version': version_doc = context["version"] if version_doc["type"] == "hero_version": 
version = "hero" @@ -173,17 +174,20 @@ class LoadClip(plugin.NukeLoader): version = version_doc.get("name") if version: - data_imprint[k] = version + data_imprint[key] = version - elif k == 'colorspace': - colorspace = representation["data"].get(k) - colorspace = colorspace or version_data.get(k) + elif key == 'colorspace': + colorspace = representation["data"].get(key) + colorspace = colorspace or version_data.get(key) data_imprint["db_colorspace"] = colorspace if used_colorspace: data_imprint["used_colorspace"] = used_colorspace else: - data_imprint[k] = context["version"]['data'].get( - k, str(None)) + value_ = context["version"]['data'].get( + key, str(None)) + if isinstance(value_, (str)): + value_ = value_.replace("\\", "/") + data_imprint[key] = value_ data_imprint["objectName"] = read_name @@ -256,6 +260,7 @@ class LoadClip(plugin.NukeLoader): representation = self._representation_with_hash_in_frame( representation ) + filepath = get_representation_path(representation).replace("\\", "/") self.log.debug("_ filepath: {}".format(filepath)) @@ -266,7 +271,7 @@ class LoadClip(plugin.NukeLoader): if "addRetime" in key ] - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) version_data = version_doc.get("data", {}) @@ -303,8 +308,7 @@ class LoadClip(plugin.NukeLoader): # we will switch off undo-ing with viewer_update_and_undo_stop(): used_colorspace = self._set_colorspace( - read_node, version_data, representation["data"], - path=filepath) + read_node, version_data, representation["data"], filepath) self._set_range_to_node(read_node, first, last, start_at_workfile) @@ -451,9 +455,9 @@ class LoadClip(plugin.NukeLoader): return self.node_name_template.format(**name_data) - def _set_colorspace(self, node, version_data, repre_data, path=None): + def _set_colorspace(self, node, version_data, repre_data, path): output_color = None - path = path or self.fname.replace("\\", "/") + path = path.replace("\\", "/") # get colorspace colorspace = repre_data.get("colorspace") colorspace = colorspace or version_data.get("colorspace") diff --git a/openpype/hosts/nuke/plugins/load/load_effects.py b/openpype/hosts/nuke/plugins/load/load_effects.py index eb1c905c4d..89597e76cc 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects.py +++ b/openpype/hosts/nuke/plugins/load/load_effects.py @@ -8,8 +8,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api import ( @@ -72,7 +72,7 @@ class LoadEffects(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # getting data from json file with unicode conversion with open(file, "r") as f: @@ -155,7 +155,7 @@ class LoadEffects(load.LoaderPlugin): """ # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_effects_ip.py b/openpype/hosts/nuke/plugins/load/load_effects_ip.py index 03be8654ed..efe67be4aa 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_effects_ip.py @@ -8,8 +8,8 @@ from openpype.client import ( 
get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api import lib @@ -73,7 +73,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # getting data from json file with unicode conversion with open(file, "r") as f: @@ -160,7 +160,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_gizmo.py b/openpype/hosts/nuke/plugins/load/load_gizmo.py index 2aa7c49723..6b848ee276 100644 --- a/openpype/hosts/nuke/plugins/load/load_gizmo.py +++ b/openpype/hosts/nuke/plugins/load/load_gizmo.py @@ -5,8 +5,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -73,7 +73,7 @@ class LoadGizmo(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # adding nodes to node graph # just in case we are in group lets jump out of it @@ -106,7 +106,7 @@ class LoadGizmo(load.LoaderPlugin): # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py index 2514a28299..a8e1218cbe 100644 --- a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py @@ -6,8 +6,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -75,7 +75,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") # adding nodes to node graph # just in case we are in group lets jump out of it @@ -113,7 +113,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): # get main variables # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node diff --git a/openpype/hosts/nuke/plugins/load/load_image.py b/openpype/hosts/nuke/plugins/load/load_image.py index 0a79ddada7..0dd3a940db 100644 --- a/openpype/hosts/nuke/plugins/load/load_image.py +++ b/openpype/hosts/nuke/plugins/load/load_image.py @@ -7,8 +7,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import ( @@ -86,7 +86,7 @@ class LoadImage(load.LoaderPlugin): if namespace is None: namespace = context['asset']['name'] - file = 
self.fname + file = self.filepath_from_context(context) if not file: repr_id = context["representation"]["_id"] @@ -96,7 +96,8 @@ class LoadImage(load.LoaderPlugin): file = file.replace("\\", "/") - repr_cont = context["representation"]["context"] + representation = context["representation"] + repr_cont = representation["context"] frame = repr_cont.get("frame") if frame: padding = len(frame) @@ -104,16 +105,7 @@ class LoadImage(load.LoaderPlugin): frame, format(frame_number, "0{}".format(padding))) - name_data = { - "asset": repr_cont["asset"], - "subset": repr_cont["subset"], - "representation": context["representation"]["name"], - "ext": repr_cont["representation"], - "id": context["representation"]["_id"], - "class_name": self.__class__.__name__ - } - - read_name = self.node_name_template.format(**name_data) + read_name = self._get_node_name(representation) # Create the Loader with the filename path set with viewer_update_and_undo_stop(): @@ -201,7 +193,7 @@ class LoadImage(load.LoaderPlugin): format(frame_number, "0{}".format(padding))) # Get start frame from version data - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] @@ -212,6 +204,8 @@ class LoadImage(load.LoaderPlugin): last = first = int(frame_number) # Set the global in to the start frame of the sequence + read_name = self._get_node_name(representation) + node["name"].setValue(read_name) node["file"].setValue(file) node["origfirst"].setValue(first) node["first"].setValue(first) @@ -250,3 +244,17 @@ class LoadImage(load.LoaderPlugin): with viewer_update_and_undo_stop(): nuke.delete(node) + + def _get_node_name(self, representation): + + repre_cont = representation["context"] + name_data = { + "asset": repre_cont["asset"], + "subset": repre_cont["subset"], + "representation": representation["name"], + "ext": repre_cont["representation"], + "id": representation["_id"], + "class_name": self.__class__.__name__ + } + + return self.node_name_template.format(**name_data) diff --git a/openpype/hosts/nuke/plugins/load/load_matchmove.py b/openpype/hosts/nuke/plugins/load/load_matchmove.py index a7d124d472..f942422c00 100644 --- a/openpype/hosts/nuke/plugins/load/load_matchmove.py +++ b/openpype/hosts/nuke/plugins/load/load_matchmove.py @@ -18,8 +18,9 @@ class MatchmoveLoader(load.LoaderPlugin): color = "orange" def load(self, context, name, namespace, data): - if self.fname.lower().endswith(".py"): - exec(open(self.fname).read()) + path = self.filepath_from_context(context) + if path.lower().endswith(".py"): + exec(open(path).read()) else: msg = "Unsupported script type" diff --git a/openpype/hosts/nuke/plugins/load/load_model.py b/openpype/hosts/nuke/plugins/load/load_model.py index 36781993ea..0bdcd93dff 100644 --- a/openpype/hosts/nuke/plugins/load/load_model.py +++ b/openpype/hosts/nuke/plugins/load/load_model.py @@ -5,8 +5,8 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, load, + get_current_project_name, get_representation_path, ) from openpype.hosts.nuke.api.lib import maintained_selection @@ -55,7 +55,7 @@ class AlembicModelLoader(load.LoaderPlugin): data_imprint.update({k: version_data[k]}) # getting file path - file = self.fname.replace("\\", "/") + file = self.filepath_from_context(context).replace("\\", "/") with maintained_selection(): model_node = 
nuke.createNode( @@ -112,7 +112,7 @@ class AlembicModelLoader(load.LoaderPlugin): None """ # Get version from io - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) object_name = container['objectName'] # get corresponding node @@ -187,7 +187,7 @@ class AlembicModelLoader(load.LoaderPlugin): def node_version_color(self, version, node): """ Coloring a node by correct color by actual version""" - project_name = legacy_io.active_project() + project_name = get_current_project_name() last_version_doc = get_last_version_by_subset_id( project_name, version["parent"], fields=["_id"] ) diff --git a/openpype/hosts/nuke/plugins/load/load_script_precomp.py b/openpype/hosts/nuke/plugins/load/load_script_precomp.py index b74fdf481a..48d4a0900a 100644 --- a/openpype/hosts/nuke/plugins/load/load_script_precomp.py +++ b/openpype/hosts/nuke/plugins/load/load_script_precomp.py @@ -5,7 +5,7 @@ from openpype.client import ( get_last_version_by_subset_id, ) from openpype.pipeline import ( - legacy_io, + get_current_project_name, load, get_representation_path, ) @@ -43,8 +43,8 @@ class LinkAsGroup(load.LoaderPlugin): if namespace is None: namespace = context['asset']['name'] - file = self.fname.replace("\\", "/") - self.log.info("file: {}\n".format(self.fname)) + file = self.filepath_from_context(context).replace("\\", "/") + self.log.info("file: {}\n".format(file)) precomp_name = context["representation"]["context"]["subset"] @@ -123,7 +123,7 @@ class LinkAsGroup(load.LoaderPlugin): root = get_representation_path(representation).replace("\\", "/") # Get start frame from version data - project_name = legacy_io.active_project() + project_name = get_current_project_name() version_doc = get_version_by_id(project_name, representation["parent"]) last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] diff --git a/openpype/hosts/nuke/plugins/publish/collect_instance_data.py b/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py similarity index 71% rename from openpype/hosts/nuke/plugins/publish/collect_instance_data.py rename to openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py index 3908aef4bc..b0f69e8ab8 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_instance_data.py +++ b/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py @@ -2,11 +2,13 @@ import nuke import pyblish.api -class CollectInstanceData(pyblish.api.InstancePlugin): - """Collect all nodes with Avalon knob.""" +class CollectNukeInstanceData(pyblish.api.InstancePlugin): + """Collect Nuke instance data + + """ order = pyblish.api.CollectorOrder - 0.49 - label = "Collect Instance Data" + label = "Collect Nuke Instance Data" hosts = ["nuke", "nukeassist"] # presets @@ -40,5 +42,14 @@ class CollectInstanceData(pyblish.api.InstancePlugin): "pixelAspect": pixel_aspect }) + + # add creator attributes to instance + creator_attributes = instance.data["creator_attributes"] + instance.data.update(creator_attributes) + + # add review family if review activated on instance + if instance.data.get("review"): + instance.data["families"].append("review") + self.log.debug("Collected instance: {}".format( instance.data)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_reads.py b/openpype/hosts/nuke/plugins/publish/collect_reads.py index 831ae29a27..38938a3dda 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_reads.py +++ 
b/openpype/hosts/nuke/plugins/publish/collect_reads.py @@ -2,8 +2,6 @@ import os import re import nuke import pyblish.api -from openpype.client import get_asset_by_name -from openpype.pipeline import legacy_io class CollectNukeReads(pyblish.api.InstancePlugin): @@ -15,16 +13,9 @@ class CollectNukeReads(pyblish.api.InstancePlugin): families = ["source"] def process(self, instance): - node = instance.data["transientData"]["node"] - - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] - asset_doc = get_asset_by_name(project_name, asset_name) - - self.log.debug("asset_doc: {}".format(asset_doc["data"])) - self.log.debug("checking instance: {}".format(instance)) + node = instance.data["transientData"]["node"] if node.Class() != "Read": return diff --git a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py index 5701087697..c7d65ffd24 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py +++ b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py @@ -5,7 +5,7 @@ import nuke class CollectSlate(pyblish.api.InstancePlugin): """Check if SLATE node is in scene and connected to rendering tree""" - order = pyblish.api.CollectorOrder + 0.09 + order = pyblish.api.CollectorOrder + 0.002 label = "Collect Slate Node" hosts = ["nuke"] families = ["render"] @@ -13,10 +13,14 @@ class CollectSlate(pyblish.api.InstancePlugin): def process(self, instance): node = instance.data["transientData"]["node"] - slate = next((n for n in nuke.allNodes() - if "slate" in n.name().lower() - if not n["disable"].getValue()), - None) + slate = next( + ( + n_ for n_ in nuke.allNodes() + if "slate" in n_.name().lower() + if not n_["disable"].getValue() + ), + None + ) if slate: # check if slate node is connected to write node tree diff --git a/openpype/hosts/nuke/plugins/publish/collect_writes.py b/openpype/hosts/nuke/plugins/publish/collect_writes.py index 2d1caacdc3..6f9245f5b9 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_writes.py +++ b/openpype/hosts/nuke/plugins/publish/collect_writes.py @@ -1,5 +1,4 @@ import os -from pprint import pformat import nuke import pyblish.api from openpype.hosts.nuke import api as napi @@ -15,30 +14,16 @@ class CollectNukeWrites(pyblish.api.InstancePlugin, hosts = ["nuke", "nukeassist"] families = ["render", "prerender", "image"] + # cache + _write_nodes = {} + _frame_ranges = {} + def process(self, instance): - self.log.debug(pformat(instance.data)) - creator_attributes = instance.data["creator_attributes"] - instance.data.update(creator_attributes) group_node = instance.data["transientData"]["node"] render_target = instance.data["render_target"] - family = instance.data["family"] - families = instance.data["families"] - # add targeted family to families - instance.data["families"].append( - "{}.{}".format(family, render_target) - ) - if instance.data.get("review"): - instance.data["families"].append("review") - - child_nodes = napi.get_instance_group_node_childs(instance) - instance.data["transientData"]["childNodes"] = child_nodes - - write_node = None - for x in child_nodes: - if x.Class() == "Write": - write_node = x + write_node = self._write_node_helper(instance) if write_node is None: self.log.warning( @@ -48,113 +33,134 @@ class CollectNukeWrites(pyblish.api.InstancePlugin, ) return - instance.data["writeNode"] = write_node - self.log.debug("checking instance: {}".format(instance)) + # get colorspace and add to version data + colorspace = 
napi.get_colorspace_from_node(write_node) - # Determine defined file type - ext = write_node["file_type"].value() + if render_target == "frames": + self._set_existing_files_data(instance, colorspace) - # Get frame range - handle_start = instance.context.data["handleStart"] - handle_end = instance.context.data["handleEnd"] - first_frame = int(nuke.root()["first_frame"].getValue()) - last_frame = int(nuke.root()["last_frame"].getValue()) - frame_length = int(last_frame - first_frame + 1) + elif render_target == "frames_farm": + collected_frames = self._set_existing_files_data( + instance, colorspace) - if write_node["use_limit"].getValue(): - first_frame = int(write_node["first"].getValue()) - last_frame = int(write_node["last"].getValue()) + self._set_expected_files(instance, collected_frames) + + self._add_farm_instance_data(instance) + + elif render_target == "farm": + self._add_farm_instance_data(instance) + + # set additional instance data + self._set_additional_instance_data(instance, render_target, colorspace) + + def _set_existing_files_data(self, instance, colorspace): + """Set existing files data to instance data. + + Args: + instance (pyblish.api.Instance): pyblish instance + colorspace (str): colorspace + + Returns: + list: collected frames + """ + collected_frames = self._get_collected_frames(instance) + + representation = self._get_existing_frames_representation( + instance, collected_frames + ) + + # inject colorspace data + self.set_representation_colorspace( + representation, instance.context, + colorspace=colorspace + ) + + instance.data["representations"].append(representation) + + return collected_frames + + def _set_expected_files(self, instance, collected_frames): + """Set expected files to instance data. + + Args: + instance (pyblish.api.Instance): pyblish instance + collected_frames (list): collected frames + """ + write_node = self._write_node_helper(instance) write_file_path = nuke.filename(write_node) output_dir = os.path.dirname(write_file_path) - # get colorspace and add to version data - colorspace = napi.get_colorspace_from_node(write_node) + instance.data["expectedFiles"] = [ + os.path.join(output_dir, source_file) + for source_file in collected_frames + ] - self.log.debug('output dir: {}'.format(output_dir)) + def _get_frame_range_data(self, instance): + """Get frame range data from instance. 
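
For readers following the refactor, here is a minimal standalone sketch of what `_set_expected_files` assembles for the farm; the directory and frame names below are made up for illustration:

```python
import os


def build_expected_files(output_dir, collected_frames):
    """Absolute paths the farm job must produce, as in _set_expected_files."""
    return [
        os.path.join(output_dir, source_file)
        for source_file in collected_frames
    ]


# hypothetical render output
frames = ["shot010_beauty.{:04d}.exr".format(f) for f in range(1001, 1004)]
print(build_expected_files("/proj/renders/shot010", frames))
# ['/proj/renders/shot010/shot010_beauty.1001.exr', ...]
```

This is why "frames_farm" first collects the frames already on disk and then republishes the same file list as `expectedFiles` for the farm job.
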
- if render_target == "frames": - representation = { - 'name': ext, - 'ext': ext, - "stagingDir": output_dir, - "tags": [] - } + Args: + instance (pyblish.api.Instance): pyblish instance - # get file path knob - node_file_knob = write_node["file"] - # list file paths based on input frames - expected_paths = list(sorted({ - node_file_knob.evaluate(frame) - for frame in range(first_frame, last_frame + 1) - })) + Returns: + tuple: first_frame, last_frame + """ - # convert only to base names - expected_filenames = [ - os.path.basename(filepath) - for filepath in expected_paths - ] + instance_name = instance.data["name"] - # make sure files are existing at folder - collected_frames = [ - filename - for filename in os.listdir(output_dir) - if filename in expected_filenames - ] + if self._frame_ranges.get(instance_name): + # return cached frame range + return self._frame_ranges[instance_name] - if collected_frames: - collected_frames_len = len(collected_frames) - frame_start_str = "%0{}d".format( - len(str(last_frame))) % first_frame - representation['frameStart'] = frame_start_str + write_node = self._write_node_helper(instance) - # in case slate is expected and not yet rendered - self.log.debug("_ frame_length: {}".format(frame_length)) - self.log.debug("_ collected_frames_len: {}".format( - collected_frames_len)) + # Get frame range from workfile + first_frame = int(nuke.root()["first_frame"].getValue()) + last_frame = int(nuke.root()["last_frame"].getValue()) - # this will only run if slate frame is not already - # rendered from previews publishes - if ( - "slate" in families - and frame_length == collected_frames_len - and family == "render" - ): - frame_slate_str = ( - "{{:0{}d}}".format(len(str(last_frame))) - ).format(first_frame - 1) + # Get frame range from write node if activated + if write_node["use_limit"].getValue(): + first_frame = int(write_node["first"].getValue()) + last_frame = int(write_node["last"].getValue()) - slate_frame = collected_frames[0].replace( - frame_start_str, frame_slate_str) - collected_frames.insert(0, slate_frame) + # add to cache + self._frame_ranges[instance_name] = (first_frame, last_frame) - if collected_frames_len == 1: - representation['files'] = collected_frames.pop() - else: - representation['files'] = collected_frames + return first_frame, last_frame - # inject colorspace data - self.set_representation_colorspace( - representation, instance.context, - colorspace=colorspace - ) + def _set_additional_instance_data( + self, instance, render_target, colorspace + ): + """Set additional instance data.
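
The precedence encoded in `_get_frame_range_data` (a write node with "use limit" enabled overrides the script's global range, and the result is cached per instance) can be sketched outside Nuke with plain dictionaries standing in for the root and write node knobs:

```python
_frame_ranges = {}


def get_frame_range(instance_name, root_knobs, write_knobs):
    # per-instance cache, mirroring the plugin's class-level dict
    if instance_name in _frame_ranges:
        return _frame_ranges[instance_name]

    # script-wide range is the default
    first = int(root_knobs["first_frame"])
    last = int(root_knobs["last_frame"])

    # a write node with "use limit" enabled overrides it
    if write_knobs.get("use_limit"):
        first = int(write_knobs["first"])
        last = int(write_knobs["last"])

    _frame_ranges[instance_name] = (first, last)
    return first, last


print(get_frame_range(
    "renderMain",
    {"first_frame": 1001, "last_frame": 1100},
    {"use_limit": True, "first": 1010, "last": 1020},
))  # (1010, 1020)
```
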
- instance.data["representations"].append(representation) - self.log.info("Publishing rendered frames ...") + Args: + instance (pyblish.api.Instance): pyblish instance + render_target (str): render target + colorspace (str): colorspace + """ + family = instance.data["family"] - elif render_target == "farm": - farm_keys = ["farm_chunk", "farm_priority", "farm_concurrency"] - for key in farm_keys: - # Skip if key is not in creator attributes - if key not in creator_attributes: - continue - # Add farm attributes to instance - instance.data[key] = creator_attributes[key] + # add targeted family to families + instance.data["families"].append( + "{}.{}".format(family, render_target) + ) + self.log.debug("Appending render target to families: {}.{}".format( + family, render_target) + ) - # Farm rendering - instance.data["transfer"] = False - instance.data["farm"] = True - self.log.info("Farm rendering ON ...") + write_node = self._write_node_helper(instance) + + # Determine defined file type + ext = write_node["file_type"].value() + + # get frame range data + handle_start = instance.context.data["handleStart"] + handle_end = instance.context.data["handleEnd"] + first_frame, last_frame = self._get_frame_range_data(instance) + + # get output paths + write_file_path = nuke.filename(write_node) + output_dir = os.path.dirname(write_file_path) # TODO: remove this when we have proper colorspace support version_data = { @@ -188,9 +194,208 @@ class CollectNukeWrites(pyblish.api.InstancePlugin, "frameEndHandle": last_frame, }) + + # TODO temporarily set stagingDir as persistent for backward + # compatibility. This is mainly focused on `renders` folders which + # were previously not cleaned up (and could be used in read nodes) + # this logic should be removed and replaced with custom staging dir + instance.data["stagingDir_persistent"] = True + + def _write_node_helper(self, instance): + """Helper function to get write node from instance. + + Also sets instance transient data with child nodes. + + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + nuke.Node: write node + """ + instance_name = instance.data["name"] + + if self._write_nodes.get(instance_name): + # return cached write node + return self._write_nodes[instance_name] + + # get all child nodes from group node + child_nodes = napi.get_instance_group_node_childs(instance) + + # set child nodes to instance transient data + instance.data["transientData"]["childNodes"] = child_nodes + + write_node = None + for node_ in child_nodes: + if node_.Class() == "Write": + write_node = node_ + + if write_node: + # for slate frame extraction + instance.data["transientData"]["writeNode"] = write_node + # add to cache + self._write_nodes[instance_name] = write_node + + return self._write_nodes[instance_name] + + def _get_existing_frames_representation( + self, + instance, + collected_frames + ): + """Get existing frames representation.
+ + Args: + instance (pyblish.api.Instance): pyblish instance + collected_frames (list): collected frames + + Returns: + dict: representation + """ + + first_frame, last_frame = self._get_frame_range_data(instance) + + write_node = self._write_node_helper(instance) + + write_file_path = nuke.filename(write_node) + output_dir = os.path.dirname(write_file_path) + + # Determine defined file type + ext = write_node["file_type"].value() + + representation = { + "name": ext, + "ext": ext, + "stagingDir": output_dir, + "tags": [] + } + + frame_start_str = self._get_frame_start_str(first_frame, last_frame) + + representation['frameStart'] = frame_start_str + + # set slate frame + collected_frames = self._add_slate_frame_to_collected_frames( + instance, + collected_frames, + first_frame, + last_frame + ) + + if len(collected_frames) == 1: + representation['files'] = collected_frames.pop() + else: + representation['files'] = collected_frames + + return representation + + def _get_frame_start_str(self, first_frame, last_frame): + """Get frame start string. + + Args: + first_frame (int): first frame + last_frame (int): last frame + + Returns: + str: frame start string + """ + # convert first frame to string with padding + return ( + "{{:0{}d}}".format(len(str(last_frame))) + ).format(first_frame) + + def _add_slate_frame_to_collected_frames( + self, + instance, + collected_frames, + first_frame, + last_frame + ): + """Add slate frame to collected frames. + + Args: + instance (pyblish.api.Instance): pyblish instance + collected_frames (list): collected frames + first_frame (int): first frame + last_frame (int): last frame + + Returns: + list: collected frames + """ + frame_start_str = self._get_frame_start_str(first_frame, last_frame) + frame_length = int(last_frame - first_frame + 1) + + # this will only run if slate frame is not already + # rendered from previous publishes + if ( + "slate" in instance.data["families"] + and frame_length == len(collected_frames) + ): + frame_slate_str = self._get_frame_start_str( + first_frame - 1, + last_frame + ) + + slate_frame = collected_frames[0].replace( + frame_start_str, frame_slate_str) + collected_frames.insert(0, slate_frame) + + return collected_frames + + def _add_farm_instance_data(self, instance): + """Add farm publishing related instance data. + + Args: + instance (pyblish.api.Instance): pyblish instance + """ + # make sure rendered sequence on farm will # be used for extract review if not instance.data.get("review"): instance.data["useSequenceForReview"] = False - self.log.debug("instance.data: {}".format(pformat(instance.data))) + # Farm rendering + instance.data.update({ + "transfer": False, + "farm": True # to skip integrate + }) + self.log.info("Farm rendering ON ...") + + def _get_collected_frames(self, instance): + """Get collected frames.
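
The slate handling above is easiest to follow with concrete numbers; this sketch reproduces the padding logic of `_get_frame_start_str` with made-up file names:

```python
def frame_start_str(frame, last_frame):
    # pad to the width of the last frame number, e.g. frame 1001 with
    # last frame 1003 becomes "1001"
    return ("{{:0{}d}}".format(len(str(last_frame)))).format(frame)


collected = ["beauty.1001.exr", "beauty.1002.exr", "beauty.1003.exr"]
first, last = 1001, 1003

start = frame_start_str(first, last)      # "1001"
slate = frame_start_str(first - 1, last)  # "1000"

# the slate sits one frame before the sequence start, so its name is
# derived from the first collected frame
collected.insert(0, collected[0].replace(start, slate))
print(collected[0])  # beauty.1000.exr
```
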
+ + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + list: collected frames + """ + + first_frame, last_frame = self._get_frame_range_data(instance) + + write_node = self._write_node_helper(instance) + + write_file_path = nuke.filename(write_node) + output_dir = os.path.dirname(write_file_path) + + # get file path knob + node_file_knob = write_node["file"] + # list file paths based on input frames + expected_paths = list(sorted({ + node_file_knob.evaluate(frame) + for frame in range(first_frame, last_frame + 1) + })) + + # convert only to base names + expected_filenames = { + os.path.basename(filepath) + for filepath in expected_paths + } + + # make sure files are existing at folder + collected_frames = [ + filename + for filename in os.listdir(output_dir) + if filename in expected_filenames + ] + + return collected_frames diff --git a/openpype/hosts/nuke/plugins/publish/extract_camera.py b/openpype/hosts/nuke/plugins/publish/extract_camera.py index 4286f71e83..33df6258ae 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_camera.py +++ b/openpype/hosts/nuke/plugins/publish/extract_camera.py @@ -11,9 +11,9 @@ from openpype.hosts.nuke.api.lib import maintained_selection class ExtractCamera(publish.Extractor): - """ 3D camera exctractor + """ 3D camera extractor """ - label = 'Exctract Camera' + label = 'Extract Camera' order = pyblish.api.ExtractorOrder families = ["camera"] hosts = ["nuke"] diff --git a/openpype/hosts/nuke/plugins/publish/extract_model.py b/openpype/hosts/nuke/plugins/publish/extract_model.py index 814d404137..00462f8035 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_model.py +++ b/openpype/hosts/nuke/plugins/publish/extract_model.py @@ -11,9 +11,9 @@ from openpype.hosts.nuke.api.lib import ( class ExtractModel(publish.Extractor): - """ 3D model exctractor + """ 3D model extractor """ - label = 'Exctract Model' + label = 'Extract Model' order = pyblish.api.ExtractorOrder families = ["model"] hosts = ["nuke"] diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py index 06c086b10d..25262a7418 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py +++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py @@ -249,7 +249,7 @@ class ExtractSlateFrame(publish.Extractor): # Add file to representation files # - get write node - write_node = instance.data["writeNode"] + write_node = instance.data["transientData"]["writeNode"] # - evaluate filepaths for first frame and slate frame first_filename = os.path.basename( write_node["file"].evaluate(first_frame)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py index 21eefda249..d57d55f85d 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py @@ -54,6 +54,7 @@ class ExtractThumbnail(publish.Extractor): def render_thumbnail(self, instance, output_name=None, **kwargs): first_frame = instance.data["frameStartHandle"] last_frame = instance.data["frameEndHandle"] + colorspace = instance.data["colorspace"] # find frame range and define middle thumb frame mid_frame = int((last_frame - first_frame) / 2) @@ -112,8 +113,8 @@ class ExtractThumbnail(publish.Extractor): if self.use_rendered and os.path.isfile(path_render): # check if file exist otherwise connect to write node rnode = nuke.createNode("Read") - rnode["file"].setValue(path_render) + 
rnode["colorspace"].setValue(colorspace) # turn it raw if none of baking is ON if all([ diff --git a/openpype/hosts/photoshop/api/README.md b/openpype/hosts/photoshop/api/README.md index 4a36746cb2..7bd2bcb1bf 100644 --- a/openpype/hosts/photoshop/api/README.md +++ b/openpype/hosts/photoshop/api/README.md @@ -210,8 +210,9 @@ class ImageLoader(load.LoaderPlugin): representations = ["*"] def load(self, context, name=None, namespace=None, data=None): + path = self.filepath_from_context(context) with photoshop.maintained_selection(): - layer = stub.import_smart_object(self.fname) + layer = stub.import_smart_object(path) self[:] = [layer] diff --git a/openpype/hosts/photoshop/api/pipeline.py b/openpype/hosts/photoshop/api/pipeline.py index 73dc80260c..56ae2a4c25 100644 --- a/openpype/hosts/photoshop/api/pipeline.py +++ b/openpype/hosts/photoshop/api/pipeline.py @@ -6,11 +6,8 @@ import pyblish.api from openpype.lib import register_event_callback, Logger from openpype.pipeline import ( - legacy_io, register_loader_plugin_path, register_creator_plugin_path, - deregister_loader_plugin_path, - deregister_creator_plugin_path, AVALON_CONTAINER_ID, ) @@ -23,6 +20,7 @@ from openpype.host import ( from openpype.pipeline.load import any_outdated_containers from openpype.hosts.photoshop import PHOTOSHOP_HOST_DIR +from openpype.tools.utils import get_openpype_qt_app from . import lib @@ -111,14 +109,6 @@ class PhotoshopHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): item["id"] = "publish_context" _get_stub().imprint(item["id"], item) - def get_context_title(self): - """Returns title for Creator window""" - - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] - return "{}/{}/{}".format(project_name, asset_name, task_name) - def list_instances(self): """List all created instances to publish from current workfile. @@ -174,10 +164,7 @@ def check_inventory(): return # Warn about outdated containers. 
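
The change recurring through these loader plugins replaces the stateful `self.fname` attribute with a path resolved from the load context. A minimal sketch of the new pattern; the class and family are illustrative, not from the patch:

```python
from openpype.pipeline import load


class ExampleLoader(load.LoaderPlugin):
    """Illustrative loader showing context-based path resolution."""

    families = ["image"]
    representations = ["*"]

    def load(self, context, name=None, namespace=None, data=None):
        # resolved from the representation carried by `context`, not from
        # mutable loader state as the old `self.fname` attribute was
        path = self.filepath_from_context(context).replace("\\", "/")
        self.log.info("Loading: {}".format(path))
```
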
- _app = QtWidgets.QApplication.instance() - if not _app: - print("Starting new QApplication..") - _app = QtWidgets.QApplication([]) + _app = get_openpype_qt_app() message_box = QtWidgets.QMessageBox() message_box.setIcon(QtWidgets.QMessageBox.Warning) diff --git a/openpype/hosts/photoshop/plugins/load/load_image.py b/openpype/hosts/photoshop/plugins/load/load_image.py index 91a9787781..eb770bbd20 100644 --- a/openpype/hosts/photoshop/plugins/load/load_image.py +++ b/openpype/hosts/photoshop/plugins/load/load_image.py @@ -22,7 +22,8 @@ class ImageLoader(photoshop.PhotoshopLoader): name ) with photoshop.maintained_selection(): - layer = self.import_layer(self.fname, layer_name, stub) + path = self.filepath_from_context(context) + layer = self.import_layer(path, layer_name, stub) self[:] = [layer] namespace = namespace or layer_name diff --git a/openpype/hosts/photoshop/plugins/load/load_image_from_sequence.py b/openpype/hosts/photoshop/plugins/load/load_image_from_sequence.py index c25c5a8f2c..f9fceb80bb 100644 --- a/openpype/hosts/photoshop/plugins/load/load_image_from_sequence.py +++ b/openpype/hosts/photoshop/plugins/load/load_image_from_sequence.py @@ -29,11 +29,13 @@ class ImageFromSequenceLoader(photoshop.PhotoshopLoader): options = [] def load(self, context, name=None, namespace=None, data=None): + + path = self.filepath_from_context(context) if data.get("frame"): - self.fname = os.path.join( - os.path.dirname(self.fname), data["frame"] + path = os.path.join( + os.path.dirname(path), data["frame"] ) - if not os.path.exists(self.fname): + if not os.path.exists(path): return stub = self.get_stub() @@ -42,7 +44,7 @@ class ImageFromSequenceLoader(photoshop.PhotoshopLoader): ) with photoshop.maintained_selection(): - layer = stub.import_smart_object(self.fname, layer_name) + layer = stub.import_smart_object(path, layer_name) self[:] = [layer] namespace = namespace or layer_name diff --git a/openpype/hosts/photoshop/plugins/load/load_reference.py b/openpype/hosts/photoshop/plugins/load/load_reference.py index 1f32a5d23c..5772e243d5 100644 --- a/openpype/hosts/photoshop/plugins/load/load_reference.py +++ b/openpype/hosts/photoshop/plugins/load/load_reference.py @@ -23,7 +23,8 @@ class ReferenceLoader(photoshop.PhotoshopLoader): stub.get_layers(), context["asset"]["name"], name ) with photoshop.maintained_selection(): - layer = self.import_layer(self.fname, layer_name, stub) + path = self.filepath_from_context(context) + layer = self.import_layer(path, layer_name, stub) self[:] = [layer] namespace = namespace or layer_name diff --git a/openpype/hosts/photoshop/plugins/publish/closePS.py b/openpype/hosts/photoshop/plugins/publish/closePS.py index b4ded96001..b4c3a4c966 100644 --- a/openpype/hosts/photoshop/plugins/publish/closePS.py +++ b/openpype/hosts/photoshop/plugins/publish/closePS.py @@ -17,7 +17,7 @@ class ClosePS(pyblish.api.ContextPlugin): active = True hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): self.log.info("ClosePS") diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py index ce408f8d01..f1d8419608 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py @@ -6,8 +6,6 @@ from openpype.pipeline.create import get_subset_name class CollectAutoImage(pyblish.api.ContextPlugin): """Creates auto image in non artist based publishes (Webpublisher). 
- - 'remotepublish' should be renamed to 'autopublish' or similar in the future """ label = "Collect Auto Image" @@ -15,7 +13,7 @@ class CollectAutoImage(pyblish.api.ContextPlugin): hosts = ["photoshop"] order = pyblish.api.CollectorOrder + 0.2 - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): family = "image" diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py index 7de4adcaf4..82ba0ac09c 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py @@ -20,7 +20,7 @@ class CollectAutoReview(pyblish.api.ContextPlugin): label = "Collect Auto Review" hosts = ["photoshop"] order = pyblish.api.CollectorOrder + 0.2 - targets = ["remotepublish"] + targets = ["automated"] publish = True diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py index d10cf62c67..01dc50af40 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py @@ -12,7 +12,7 @@ class CollectAutoWorkfile(pyblish.api.ContextPlugin): label = "Collect Workfile" hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): family = "workfile" diff --git a/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py b/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py index a5fea7ac7d..b13ff5e476 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_batch_data.py @@ -35,7 +35,7 @@ class CollectBatchData(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder - 0.495 label = "Collect batch data" hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["webpublish"] def process(self, context): self.log.info("CollectBatchData") diff --git a/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py b/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py index 90fca8398f..c16616bcb2 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_color_coded_instances.py @@ -34,7 +34,7 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin): label = "Instances" order = pyblish.api.CollectorOrder hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["automated"] # configurable by Settings color_code_mapping = [] diff --git a/openpype/hosts/photoshop/plugins/publish/collect_published_version.py b/openpype/hosts/photoshop/plugins/publish/collect_published_version.py index 2502689e4b..eec6f1fae4 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_published_version.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_published_version.py @@ -18,6 +18,7 @@ Provides: import pyblish.api from openpype.client import get_last_version_by_subset_name +from openpype.pipeline.version_start import get_versioning_start class CollectPublishedVersion(pyblish.api.ContextPlugin): @@ -26,7 +27,7 @@ class CollectPublishedVersion(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder + 0.190 label = "Collect published version" hosts = ["photoshop"] - targets = ["remotepublish"] + targets = ["automated"] def process(self, context): workfile_subset_name = None @@ -47,9 +48,17 @@ 
class CollectPublishedVersion(pyblish.api.ContextPlugin): version_doc = get_last_version_by_subset_name(project_name, workfile_subset_name, asset_id) - version_int = 1 + if version_doc: - version_int += int(version_doc["name"]) + version_int = int(version_doc["name"]) + 1 + else: + version_int = get_versioning_start( + project_name, + "photoshop", + task_name=context.data["task"], + task_type=context.data["taskType"], + project_settings=context.data["project_settings"] + ) self.log.debug(f"Setting {version_int} to context.") context.data["version"] = version_int diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py index d5416a389d..4aa7a05bd1 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_review.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py @@ -1,10 +1,9 @@ import os -import shutil from PIL import Image from openpype.lib import ( run_subprocess, - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, ) from openpype.pipeline import publish from openpype.hosts.photoshop import api as photoshop @@ -85,7 +84,7 @@ class ExtractReview(publish.Extractor): instance.data["representations"].append(repre_skeleton) processed_img_names = [img_list] - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_args = get_ffmpeg_tool_args("ffmpeg") instance.data["stagingDir"] = staging_dir @@ -94,13 +93,21 @@ class ExtractReview(publish.Extractor): source_files_pattern = self._check_and_resize(processed_img_names, source_files_pattern, staging_dir) - self._generate_thumbnail(ffmpeg_path, instance, source_files_pattern, - staging_dir) + self._generate_thumbnail( + list(ffmpeg_args), + instance, + source_files_pattern, + staging_dir) no_of_frames = len(processed_img_names) if no_of_frames > 1: - self._generate_mov(ffmpeg_path, instance, fps, no_of_frames, - source_files_pattern, staging_dir) + self._generate_mov( + list(ffmpeg_args), + instance, + fps, + no_of_frames, + source_files_pattern, + staging_dir) self.log.info(f"Extracted {instance} to {staging_dir}") @@ -142,8 +149,9 @@ class ExtractReview(publish.Extractor): "tags": self.mov_options['tags'] }) - def _generate_thumbnail(self, ffmpeg_path, instance, source_files_pattern, - staging_dir): + def _generate_thumbnail( + self, ffmpeg_args, instance, source_files_pattern, staging_dir + ): """Generates scaled down thumbnail and adds it as representation. 
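
Because `get_ffmpeg_tool_args` can return an executable plus leading arguments (for example when ffmpeg is wrapped), callers now concatenate lists instead of prepending a single path string. A rough sketch of the pattern, with a hypothetical thumbnail invocation:

```python
from openpype.lib import get_ffmpeg_tool_args, run_subprocess

# may be a single executable path or a wrapper plus arguments
ffmpeg_args = get_ffmpeg_tool_args("ffmpeg")

args = list(ffmpeg_args) + [
    "-y",
    "-i", "input.%04d.jpg",
    "-vf", "scale=300:-1",
    "thumbnail.jpg",
]
run_subprocess(args)
```
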
Args: @@ -157,8 +165,7 @@ class ExtractReview(publish.Extractor): # Generate thumbnail thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg") self.log.info(f"Generate thumbnail {thumbnail_path}") - args = [ - ffmpeg_path, + args = ffmpeg_args + [ "-y", "-i", source_files_pattern, "-vf", "scale=300:-1", diff --git a/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py b/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py index b9d721dbdb..1a4932fe99 100644 --- a/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py +++ b/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py @@ -1,6 +1,6 @@ import pyblish.api -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_asset_name from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, @@ -28,10 +28,10 @@ class ValidateInstanceAssetRepair(pyblish.api.Action): # Apply pyblish.logic to get the instances for the plug-in instances = pyblish.api.instances_by_plugin(failed, plugin) stub = photoshop.stub() + current_asset_name = get_current_asset_name() for instance in instances: data = stub.read(instance[0]) - - data["asset"] = legacy_io.Session["AVALON_ASSET"] + data["asset"] = current_asset_name stub.imprint(instance[0], data) @@ -55,7 +55,7 @@ class ValidateInstanceAsset(OptionalPyblishPluginMixin, def process(self, instance): instance_asset = instance.data["asset"] - current_asset = legacy_io.Session["AVALON_ASSET"] + current_asset = get_current_asset_name() if instance_asset != current_asset: msg = ( diff --git a/openpype/hosts/resolve/api/plugin.py b/openpype/hosts/resolve/api/plugin.py index e5846c2fc2..59c27f29da 100644 --- a/openpype/hosts/resolve/api/plugin.py +++ b/openpype/hosts/resolve/api/plugin.py @@ -291,7 +291,7 @@ class ClipLoader: active_bin = None data = dict() - def __init__(self, cls, context, **options): + def __init__(self, cls, context, path, **options): """ Initialize object Arguments: @@ -304,6 +304,7 @@ class ClipLoader: self.__dict__.update(cls.__dict__) self.context = context self.active_project = lib.get_current_project() + self.fname = path # try to get value from options or evaluate key value for `handles` self.with_handles = options.get("handles") or bool( diff --git a/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py b/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py index bc03baad8d..73f5ac75b1 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_last_workfile.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes class PreLaunchResolveLastWorkfile(PreLaunchHook): @@ -9,7 +9,8 @@ class PreLaunchResolveLastWorkfile(PreLaunchHook): workfile. This property is set explicitly in Launcher. 
""" order = 10 - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): if not self.data.get("start_last_workfile"): diff --git a/openpype/hosts/resolve/hooks/pre_resolve_setup.py b/openpype/hosts/resolve/hooks/pre_resolve_setup.py index 3fd39d665c..326f37dffc 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_setup.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_setup.py @@ -1,7 +1,7 @@ import os from pathlib import Path import platform -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.hosts.resolve.utils import setup @@ -30,7 +30,8 @@ class PreLaunchResolveSetup(PreLaunchHook): """ - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): current_platform = platform.system().lower() diff --git a/openpype/hosts/resolve/hooks/pre_resolve_startup.py b/openpype/hosts/resolve/hooks/pre_resolve_startup.py index 599e0c0008..6dbfd09a37 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_startup.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_startup.py @@ -1,6 +1,6 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook, LaunchTypes import openpype.hosts.resolve @@ -9,7 +9,8 @@ class PreLaunchResolveStartup(PreLaunchHook): """ order = 11 - app_groups = ["resolve"] + app_groups = {"resolve"} + launch_types = {LaunchTypes.local} def execute(self): # Set the openpype prelaunch startup script path for easy access diff --git a/openpype/hosts/resolve/plugins/load/load_clip.py b/openpype/hosts/resolve/plugins/load/load_clip.py index 05bfb003d6..3a59ecea80 100644 --- a/openpype/hosts/resolve/plugins/load/load_clip.py +++ b/openpype/hosts/resolve/plugins/load/load_clip.py @@ -7,7 +7,7 @@ from openpype.client import ( # from openpype.hosts import resolve from openpype.pipeline import ( get_representation_path, - legacy_io, + get_current_project_name, ) from openpype.hosts.resolve.api import lib, plugin from openpype.hosts.resolve.api.pipeline import ( @@ -55,8 +55,9 @@ class LoadClip(plugin.TimelineItemLoader): }) # load clip to timeline and get main variables + path = self.filepath_from_context(context) timeline_item = plugin.ClipLoader( - self, context, **options).load() + self, context, path, **options).load() namespace = namespace or timeline_item.GetName() version = context['version'] version_data = version.get("data", {}) @@ -109,16 +110,16 @@ class LoadClip(plugin.TimelineItemLoader): namespace = container['namespace'] timeline_item_data = lib.get_pype_timeline_item_by_name(namespace) timeline_item = timeline_item_data["clip"]["item"] - project_name = legacy_io.active_project() + project_name = get_current_project_name() version = get_version_by_id(project_name, representation["parent"]) version_data = version.get("data", {}) version_name = version.get("name", None) colorspace = version_data.get("colorspace", None) object_name = "{}_{}".format(name, namespace) - self.fname = get_representation_path(representation) + path = get_representation_path(representation) context["version"] = {"data": version_data} - loader = plugin.ClipLoader(self, context) + loader = plugin.ClipLoader(self, context, path) timeline_item = loader.update(timeline_item) # add additional metadata from the version to imprint Avalon knob @@ -152,7 +153,7 @@ class LoadClip(plugin.TimelineItemLoader): # define version name version_name = version.get("name", None) # get all versions in list - 
project_name = legacy_io.active_project() + project_name = get_current_project_name() last_version_doc = get_last_version_by_subset_id( project_name, version["parent"], diff --git a/openpype/hosts/resolve/plugins/publish/precollect_workfile.py b/openpype/hosts/resolve/plugins/publish/precollect_workfile.py index 0f94216556..a2f3eaed7a 100644 --- a/openpype/hosts/resolve/plugins/publish/precollect_workfile.py +++ b/openpype/hosts/resolve/plugins/publish/precollect_workfile.py @@ -1,8 +1,8 @@ import pyblish.api from pprint import pformat +from openpype.pipeline import get_current_asset_name from openpype.hosts.resolve import api as rapi -from openpype.pipeline import legacy_io from openpype.hosts.resolve.otio import davinci_export @@ -14,7 +14,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin): def process(self, context): - asset = legacy_io.Session["AVALON_ASSET"] + asset = get_current_asset_name() subset = "workfile" project = rapi.get_current_project() fps = project.GetSetting("timelineFrameRate") diff --git a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py index 9f02d65d00..b99503b3c8 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py @@ -1,8 +1,9 @@ import os +import subprocess import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffprobe_streams, path_to_subprocess_arg, run_subprocess, @@ -62,12 +63,12 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin): instance.context.data["cleanupFullPaths"].append(full_thumbnail_path) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_executable_args = get_ffmpeg_tool_args("ffmpeg") ffmpeg_args = self.ffmpeg_args or {} jpeg_items = [ - path_to_subprocess_arg(ffmpeg_path), + subprocess.list2cmdline(ffmpeg_executable_args), # override file if already exists "-y" ] diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py index a7ae02a2eb..fd2d4a9f36 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py @@ -1,7 +1,5 @@ -import os import pyblish.api -from openpype.settings import get_project_settings from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, @@ -21,27 +19,30 @@ class ValidateTextureBatchWorkfiles(pyblish.api.InstancePlugin): optional = True def process(self, instance): - if instance.data["family"] == "workfile": - ext = instance.data["representations"][0]["ext"] - main_workfile_extensions = self.get_main_workfile_extensions() - if ext not in main_workfile_extensions: - self.log.warning("Only secondary workfile present!") - return + if instance.data["family"] != "workfile": + return - if not instance.data.get("resources"): - msg = "No secondary workfile present for workfile '{}'". 
\ - format(instance.data["name"]) - ext = main_workfile_extensions[0] - formatting_data = {"file_name": instance.data["name"], - "extension": ext} + ext = instance.data["representations"][0]["ext"] + main_workfile_extensions = self.get_main_workfile_extensions( + instance + ) + if ext not in main_workfile_extensions: + self.log.warning("Only secondary workfile present!") + return - raise PublishXmlValidationError(self, msg, - formatting_data=formatting_data - ) + if not instance.data.get("resources"): + msg = "No secondary workfile present for workfile '{}'". \ + format(instance.data["name"]) + ext = main_workfile_extensions[0] + formatting_data = {"file_name": instance.data["name"], + "extension": ext} + + raise PublishXmlValidationError( + self, msg, formatting_data=formatting_data) @staticmethod - def get_main_workfile_extensions(): - project_settings = get_project_settings(os.environ["AVALON_PROJECT"]) + def get_main_workfile_extensions(instance): + project_settings = instance.context.data["project_settings"] try: extensions = (project_settings["standalonepublisher"] diff --git a/openpype/hosts/substancepainter/plugins/load/load_mesh.py b/openpype/hosts/substancepainter/plugins/load/load_mesh.py index 822095641d..57db869a11 100644 --- a/openpype/hosts/substancepainter/plugins/load/load_mesh.py +++ b/openpype/hosts/substancepainter/plugins/load/load_mesh.py @@ -47,7 +47,8 @@ class SubstanceLoadProjectMesh(load.LoaderPlugin): if not substance_painter.project.is_open(): # Allow to 'initialize' a new project - result = prompt_new_file_with_mesh(mesh_filepath=self.fname) + path = self.filepath_from_context(context) + result = prompt_new_file_with_mesh(mesh_filepath=path) if not result: self.log.info("User cancelled new project prompt.") return @@ -65,7 +66,7 @@ class SubstanceLoadProjectMesh(load.LoaderPlugin): else: raise LoadError("Reload of mesh failed") - path = self.fname + path = self.filepath_from_context(context) substance_painter.project.reload_mesh(path, settings, on_mesh_reload) diff --git a/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py b/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py index d11abd1019..316f72509e 100644 --- a/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py +++ b/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py @@ -114,7 +114,7 @@ class CollectTextureSet(pyblish.api.InstancePlugin): # Clone the instance image_instance = context.create_instance(image_subset) image_instance[:] = instance[:] - image_instance.data.update(copy.deepcopy(instance.data)) + image_instance.data.update(copy.deepcopy(dict(instance.data))) image_instance.data["name"] = image_subset image_instance.data["label"] = image_subset image_instance.data["subset"] = image_subset diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_frame_range_asset_entity.py b/openpype/hosts/traypublisher/plugins/publish/collect_frame_range_asset_entity.py new file mode 100644 index 0000000000..c18e10e438 --- /dev/null +++ b/openpype/hosts/traypublisher/plugins/publish/collect_frame_range_asset_entity.py @@ -0,0 +1,42 @@ +import pyblish.api +from openpype.pipeline import OptionalPyblishPluginMixin + + +class CollectFrameDataFromAssetEntity(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Collect Frame Range data From Asset Entity + + Frame range data will only be collected if the keys + are not yet collected for the instance. 
+ """ + + order = pyblish.api.CollectorOrder + 0.491 + label = "Collect Frame Data From Asset Entity" + families = ["plate", "pointcache", + "vdbcache", "online", + "render"] + hosts = ["traypublisher"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + missing_keys = [] + for key in ( + "fps", + "frameStart", + "frameEnd", + "handleStart", + "handleEnd" + ): + if key not in instance.data: + missing_keys.append(key) + keys_set = [] + for key in missing_keys: + asset_data = instance.data["assetEntity"]["data"] + if key in asset_data: + instance.data[key] = asset_data[key] + keys_set.append(key) + if keys_set: + self.log.debug(f"Frame range data {keys_set} " + "has been collected from asset entity.") diff --git a/openpype/hosts/tvpaint/api/lib.py b/openpype/hosts/tvpaint/api/lib.py index 49846d7f29..f8b8c29cdb 100644 --- a/openpype/hosts/tvpaint/api/lib.py +++ b/openpype/hosts/tvpaint/api/lib.py @@ -233,7 +233,7 @@ def get_layers_pre_post_behavior(layer_ids, communicator=None): Pre and Post behaviors is enumerator of possible values: - "none" - - "repeat" / "loop" + - "repeat" - "pingpong" - "hold" @@ -242,7 +242,7 @@ def get_layers_pre_post_behavior(layer_ids, communicator=None): { 0: { "pre": "none", - "post": "loop" + "post": "repeat" } } ``` diff --git a/openpype/hosts/tvpaint/hooks/pre_launch_args.py b/openpype/hosts/tvpaint/hooks/pre_launch_args.py index c31403437a..a1c946b60b 100644 --- a/openpype/hosts/tvpaint/hooks/pre_launch_args.py +++ b/openpype/hosts/tvpaint/hooks/pre_launch_args.py @@ -1,7 +1,5 @@ -from openpype.lib import ( - PreLaunchHook, - get_openpype_execute_args -) +from openpype.lib import get_openpype_execute_args +from openpype.lib.applications import PreLaunchHook, LaunchTypes class TvpaintPrelaunchHook(PreLaunchHook): @@ -13,7 +11,8 @@ class TvpaintPrelaunchHook(PreLaunchHook): Existence of last workfile is checked. If workfile does not exists tries to copy templated workfile from predefined path. 
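
The new traypublisher collector reduces to a dictionary fill where values already present on the instance win over the asset entity. A standalone sketch:

```python
FRAME_KEYS = ("fps", "frameStart", "frameEnd", "handleStart", "handleEnd")


def fill_frame_data(instance_data, asset_data):
    """Copy only the frame keys the instance does not carry yet."""
    filled = []
    for key in FRAME_KEYS:
        if key not in instance_data and key in asset_data:
            instance_data[key] = asset_data[key]
            filled.append(key)
    return filled


instance_data = {"frameStart": 1001}
asset_data = {"fps": 25, "frameStart": 990, "frameEnd": 1100}
print(fill_frame_data(instance_data, asset_data))  # ['fps', 'frameEnd']
print(instance_data["frameStart"])  # 1001, the existing value wins
```
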
""" - app_groups = ["tvpaint"] + app_groups = {"tvpaint"} + launch_types = {LaunchTypes.local} def execute(self): # Pop tvpaint executable diff --git a/openpype/hosts/tvpaint/lib.py b/openpype/hosts/tvpaint/lib.py index 95653b6ecb..97cf8d3633 100644 --- a/openpype/hosts/tvpaint/lib.py +++ b/openpype/hosts/tvpaint/lib.py @@ -77,13 +77,15 @@ def _calculate_pre_behavior_copy( for frame_idx in range(range_start, layer_frame_start): output_idx_by_frame_idx[frame_idx] = first_exposure_frame - elif pre_beh in ("loop", "repeat"): + elif pre_beh == "repeat": # Loop backwards from last frame of layer for frame_idx in reversed(range(range_start, layer_frame_start)): eq_frame_idx_offset = ( (layer_frame_end - frame_idx) % frame_count ) - eq_frame_idx = layer_frame_end - eq_frame_idx_offset + eq_frame_idx = layer_frame_start + ( + layer_frame_end - eq_frame_idx_offset + ) output_idx_by_frame_idx[frame_idx] = eq_frame_idx elif pre_beh == "pingpong": @@ -139,10 +141,10 @@ def _calculate_post_behavior_copy( for frame_idx in range(layer_frame_end + 1, range_end + 1): output_idx_by_frame_idx[frame_idx] = last_exposure_frame - elif post_beh in ("loop", "repeat"): + elif post_beh == "repeat": # Loop backwards from last frame of layer for frame_idx in range(layer_frame_end + 1, range_end + 1): - eq_frame_idx = frame_idx % frame_count + eq_frame_idx = layer_frame_start + (frame_idx % frame_count) output_idx_by_frame_idx[frame_idx] = eq_frame_idx elif post_beh == "pingpong": diff --git a/openpype/hosts/tvpaint/plugins/load/load_image.py b/openpype/hosts/tvpaint/plugins/load/load_image.py index 5283d04355..a400738019 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_image.py +++ b/openpype/hosts/tvpaint/plugins/load/load_image.py @@ -77,8 +77,9 @@ class ImportImage(plugin.Loader): ) # Fill import script with filename and layer name # - filename mus not contain backwards slashes + path = self.filepath_from_context(context).replace("\\", "/") george_script = self.import_script.format( - self.fname.replace("\\", "/"), + path, layer_name, load_options_str ) diff --git a/openpype/hosts/tvpaint/plugins/load/load_reference_image.py b/openpype/hosts/tvpaint/plugins/load/load_reference_image.py index 7f7a68cc41..edc116a8e4 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_reference_image.py +++ b/openpype/hosts/tvpaint/plugins/load/load_reference_image.py @@ -86,10 +86,12 @@ class LoadImage(plugin.Loader): subset_name = context["subset"]["name"] layer_name = self.get_unique_layer_name(asset_name, subset_name) + path = self.filepath_from_context(context) + # Fill import script with filename and layer name # - filename mus not contain backwards slashes george_script = self.import_script.format( - self.fname.replace("\\", "/"), + path.replace("\\", "/"), layer_name, load_options_str ) @@ -271,9 +273,6 @@ class LoadImage(plugin.Loader): # Remove old layers self._remove_layers(layer_ids=layer_ids_to_remove) - # Change `fname` to new representation - self.fname = self.filepath_from_context(context) - name = container["name"] namespace = container["namespace"] new_container = self.load(context, name, namespace, {}) diff --git a/openpype/hosts/tvpaint/plugins/load/load_sound.py b/openpype/hosts/tvpaint/plugins/load/load_sound.py index f312db262a..3003280eef 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_sound.py +++ b/openpype/hosts/tvpaint/plugins/load/load_sound.py @@ -60,9 +60,10 @@ class ImportSound(plugin.Loader): output_filepath = output_file.name.replace("\\", "/") # Prepare george script + path = 
self.filepath_from_context(context).replace("\\", "/") import_script = "\n".join(self.import_script_lines) george_script = import_script.format( - self.fname.replace("\\", "/"), + path, output_filepath ) self.log.info("*** George script:\n{}\n***".format(george_script)) diff --git a/openpype/hosts/tvpaint/plugins/load/load_workfile.py b/openpype/hosts/tvpaint/plugins/load/load_workfile.py index fc7588f56e..169bfdcdd8 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_workfile.py +++ b/openpype/hosts/tvpaint/plugins/load/load_workfile.py @@ -3,7 +3,7 @@ import os from openpype.lib import StringTemplate from openpype.pipeline import ( registered_host, - legacy_io, + get_current_context, Anatomy, ) from openpype.pipeline.workfile import ( @@ -18,6 +18,7 @@ from openpype.hosts.tvpaint.api.lib import ( from openpype.hosts.tvpaint.api.pipeline import ( get_current_workfile_context, ) +from openpype.pipeline.version_start import get_versioning_start class LoadWorkfile(plugin.Loader): @@ -31,18 +32,18 @@ class LoadWorkfile(plugin.Loader): def load(self, context, name, namespace, options): # Load context of current workfile as first thing # - which context and extension has - host = registered_host() - current_file = host.get_current_workfile() - - context = get_current_workfile_context() - - filepath = self.fname.replace("\\", "/") + filepath = self.filepath_from_context(context) + filepath = filepath.replace("\\", "/") if not os.path.exists(filepath): raise FileExistsError( "The loaded file does not exist. Try downloading it first." ) + host = registered_host() + current_file = host.get_current_workfile() + work_context = get_current_workfile_context() + george_script = "tv_LoadProject '\"'\"{}\"'\"'".format( filepath ) @@ -50,14 +51,15 @@ class LoadWorkfile(plugin.Loader): # Save workfile. 
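
A quick numeric check of the corrected "repeat" mapping in the tvpaint lib change above, assuming `frame_count = layer_frame_end - layer_frame_start + 1` as in the surrounding code (all values are sample data only):

```python
layer_frame_start, layer_frame_end = 10, 14
frame_count = layer_frame_end - layer_frame_start + 1  # 5 exposed frames

# post behavior: frames after the layer end loop back into the layer
for frame_idx in range(layer_frame_end + 1, layer_frame_end + 4):
    eq_frame_idx = layer_frame_start + (frame_idx % frame_count)
    print(frame_idx, "->", eq_frame_idx)
# 15 -> 10, 16 -> 11, 17 -> 12
```

With these sample values the mapping lands back at the layer start, which the previous bare `frame_idx % frame_count` form did not, since it ignored the layer offset.
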
host_name = "tvpaint" - project_name = context.get("project") - asset_name = context.get("asset") - task_name = context.get("task") - # Far cases when there is workfile without context + project_name = work_context.get("project") + asset_name = work_context.get("asset") + task_name = work_context.get("task") + # For cases when there is a workfile without work_context if not asset_name: - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] + context = get_current_context() + project_name = context["project_name"] + asset_name = context["asset_name"] + task_name = context["task_name"] template_key = get_workfile_template_key_from_context( asset_name, @@ -94,7 +96,13 @@ class LoadWorkfile(plugin.Loader): )[1] if version is None: - version = 1 + version = get_versioning_start( + project_name, + "tvpaint", + task_name=task_name, + task_type=data["task"]["type"], + family="workfile" + ) else: version += 1 diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py index ab5bbc5e2c..c10fc4de97 100644 --- a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py +++ b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py @@ -9,7 +9,8 @@ import json import pyblish.api from openpype.lib import ( - get_oiio_tools_path, + get_oiio_tool_args, + ToolNotFoundError, run_subprocess, ) from openpype.pipeline import KnownPublishError @@ -34,11 +35,12 @@ class ExtractConvertToEXR(pyblish.api.InstancePlugin): if not repres: return - oiio_path = get_oiio_tools_path() - # Raise an exception when oiiotool is not available - # - this can currently happen on MacOS machines - if not os.path.exists(oiio_path): - KnownPublishError( + try: + oiio_args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + # Raise an exception when oiiotool is not available + # - this can currently happen on MacOS machines + raise KnownPublishError( "OpenImageIO tool is not available on this machine." ) @@ -64,8 +66,8 @@ class ExtractConvertToEXR(pyblish.api.InstancePlugin): src_filepaths.add(src_filepath) - args = [ - oiio_path, src_filepath, + args = oiio_args + [ + src_filepath, "--compression", self.exr_compression, # TODO how to define color conversion?
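
Both the TVPaint workfile loader above and the Photoshop collector earlier in this patch replace a hard-coded first version with `get_versioning_start`, keeping the bump-by-one path when a version already exists. Condensed into a hypothetical helper:

```python
from openpype.pipeline.version_start import get_versioning_start


def next_workfile_version(project_name, task_name, task_type,
                          last_version=None):
    # no workfile yet: ask project settings where versioning starts
    if last_version is None:
        return get_versioning_start(
            project_name,
            "tvpaint",
            task_name=task_name,
            task_type=task_type,
            family="workfile",
        )
    return last_version + 1
```
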
"--colorconvert", "sRGB", "linear", diff --git a/openpype/hosts/unreal/addon.py b/openpype/hosts/unreal/addon.py index ed23950b35..fcc5d98ab6 100644 --- a/openpype/hosts/unreal/addon.py +++ b/openpype/hosts/unreal/addon.py @@ -1,7 +1,6 @@ import os import re from openpype.modules import IHostAddon, OpenPypeModule -from openpype.widgets.message_window import Window UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) @@ -13,6 +12,11 @@ class UnrealAddon(OpenPypeModule, IHostAddon): def initialize(self, module_settings): self.enabled = True + def get_global_environments(self): + return { + "AYON_UNREAL_ROOT": UNREAL_ROOT_DIR, + } + def add_implementation_envs(self, env, app): """Modify environments to contain all required for implementation.""" # Set AYON_UNREAL_PLUGIN required for Unreal implementation @@ -21,6 +25,8 @@ class UnrealAddon(OpenPypeModule, IHostAddon): from .lib import get_compatible_integration + from openpype.widgets.message_window import Window + pattern = re.compile(r'^\d+-\d+$') if not pattern.match(app.name): @@ -53,7 +59,8 @@ class UnrealAddon(OpenPypeModule, IHostAddon): # Set default environments if are not set via settings defaults = { - "OPENPYPE_LOG_NO_COLORS": "True" + "OPENPYPE_LOG_NO_COLORS": "True", + "UE_PYTHONPATH": os.environ.get("PYTHONPATH", ""), } for key, value in defaults.items(): if not env.get(key): diff --git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py index f01609d314..202d7854f6 100644 --- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py +++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py @@ -3,21 +3,22 @@ import os import copy from pathlib import Path -from openpype.widgets.splash_screen import SplashScreen + from qtpy import QtCore + +from openpype import resources +from openpype.lib.applications import ( + PreLaunchHook, + ApplicationLaunchFailed, + LaunchTypes, +) +from openpype.pipeline.workfile import get_workfile_template_key +import openpype.hosts.unreal.lib as unreal_lib from openpype.hosts.unreal.ue_workers import ( UEProjectGenerationWorker, UEPluginInstallWorker ) - -from openpype import resources -from openpype.lib import ( - PreLaunchHook, - ApplicationLaunchFailed, - ApplicationNotFound, -) -from openpype.pipeline.workfile import get_workfile_template_key -import openpype.hosts.unreal.lib as unreal_lib +from openpype.hosts.unreal.ui import SplashScreen class UnrealPrelaunchHook(PreLaunchHook): @@ -29,6 +30,8 @@ class UnrealPrelaunchHook(PreLaunchHook): shell script. 
""" + app_groups = {"unreal"} + launch_types = {LaunchTypes.local} def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -111,6 +114,7 @@ class UnrealPrelaunchHook(PreLaunchHook): ue_project_worker = UEProjectGenerationWorker() ue_project_worker.setup( engine_version, + self.data["project_name"], unreal_project_name, engine_path, project_dir @@ -186,24 +190,36 @@ class UnrealPrelaunchHook(PreLaunchHook): project_path.mkdir(parents=True, exist_ok=True) - # Set "AYON_UNREAL_PLUGIN" to current process environment for - # execution of `create_unreal_project` - - if self.launch_context.env.get("AYON_UNREAL_PLUGIN"): - self.log.info(( - f"{self.signature} using Ayon plugin from " - f"{self.launch_context.env.get('AYON_UNREAL_PLUGIN')}" - )) - env_key = "AYON_UNREAL_PLUGIN" - if self.launch_context.env.get(env_key): - os.environ[env_key] = self.launch_context.env[env_key] - # engine_path points to the specific Unreal Engine root # so, we are going up from the executable itself 3 levels. engine_path: Path = Path(executable).parents[3] - if not unreal_lib.check_plugin_existence(engine_path): - self.exec_plugin_install(engine_path) + # Check if new env variable exists, and if it does, if the path + # actually contains the plugin. If not, install it. + + built_plugin_path = self.launch_context.env.get( + "AYON_BUILT_UNREAL_PLUGIN", None) + + if unreal_lib.check_built_plugin_existance(built_plugin_path): + self.log.info(( + f"{self.signature} using existing built Ayon plugin from " + f"{built_plugin_path}" + )) + unreal_lib.copy_built_plugin(engine_path, Path(built_plugin_path)) + else: + # Set "AYON_UNREAL_PLUGIN" to current process environment for + # execution of `create_unreal_project` + env_key = "AYON_UNREAL_PLUGIN" + if self.launch_context.env.get(env_key): + self.log.info(( + f"{self.signature} using Ayon plugin from " + f"{self.launch_context.env.get(env_key)}" + )) + if self.launch_context.env.get(env_key): + os.environ[env_key] = self.launch_context.env[env_key] + + if not unreal_lib.check_plugin_existence(engine_path): + self.exec_plugin_install(engine_path) project_file = project_path / unreal_project_filename diff --git a/openpype/hosts/unreal/lib.py b/openpype/hosts/unreal/lib.py index 97771472cf..6d544f65b2 100644 --- a/openpype/hosts/unreal/lib.py +++ b/openpype/hosts/unreal/lib.py @@ -1,17 +1,16 @@ # -*- coding: utf-8 -*- """Unreal launching and project tools.""" +import json import os import platform -import json - +import re +import subprocess +from collections import OrderedDict +from distutils import dir_util +from pathlib import Path from typing import List -from distutils import dir_util -import subprocess -import re -from pathlib import Path -from collections import OrderedDict from openpype.settings import get_project_settings @@ -179,6 +178,7 @@ def _parse_launcher_locations(install_json_path: str) -> dict: def create_unreal_project(project_name: str, + unreal_project_name: str, ue_version: str, pr_dir: Path, engine_path: Path, @@ -193,7 +193,8 @@ def create_unreal_project(project_name: str, folder and enable this plugin. Args: - project_name (str): Name of the project. + project_name (str): Name of the project in AYON. + unreal_project_name (str): Name of the project in Unreal. ue_version (str): Unreal engine version (like 4.23). pr_dir (Path): Path to directory where project will be created. engine_path (Path): Path to Unreal Engine installation. 
@@ -211,8 +212,12 @@ def create_unreal_project(project_name: str, Returns: None + Deprecated: + since 3.16.0 + """ env = env or os.environ + preset = get_project_settings(project_name)["unreal"]["project_setup"] ue_id = ".".join(ue_version.split(".")[:2]) # get unreal engine identifier @@ -230,7 +235,7 @@ def create_unreal_project(project_name: str, ue_editor_exe: Path = get_editor_exe_path(engine_path, ue_version) cmdlet_project: Path = get_path_to_cmdlet_project(ue_version) - project_file = pr_dir / f"{project_name}.uproject" + project_file = pr_dir / f"{unreal_project_name}.uproject" print("--- Generating a new project ...") commandlet_cmd = [f'{ue_editor_exe.as_posix()}', @@ -251,8 +256,9 @@ def create_unreal_project(project_name: str, return_code = gen_process.wait() if return_code and return_code != 0: - raise RuntimeError(f'Failed to generate \'{project_name}\' project! ' - f'Exited with return code {return_code}') + raise RuntimeError( + (f"Failed to generate '{unreal_project_name}' project! " + f"Exited with return code {return_code}")) print("--- Project has been generated successfully.") @@ -282,7 +288,7 @@ def create_unreal_project(project_name: str, subprocess.run(command1) command2 = [u_build_tool.as_posix(), - f"-ModuleWithSuffix={project_name},3555", arch, + f"-ModuleWithSuffix={unreal_project_name},3555", arch, "Development", "-TargetType=Editor", f'-Project={project_file}', f'{project_file}', @@ -363,11 +369,11 @@ def get_compatible_integration( def get_path_to_cmdlet_project(ue_version: str) -> Path: cmd_project = Path( - os.path.abspath(os.getenv("OPENPYPE_ROOT"))) + os.path.dirname(os.path.abspath(__file__))) # For now, only tested on Windows (For Linux and Mac # it has to be implemented) - cmd_project /= f"openpype/hosts/unreal/integration/UE_{ue_version}" + cmd_project /= f"integration/UE_{ue_version}" # if the integration doesn't exist for current engine version # try to find the closest to it. 
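The comment in the hunk above mentions falling back to the closest available integration when the exact `UE_{version}` folder is missing. A minimal standalone sketch of that lookup idea, assuming sibling directories named like `UE_5.1`; the helper name is hypothetical and this is not the actual OpenPype implementation:

```python
from pathlib import Path


def closest_integration_dir(base: Path, ue_version: str) -> Path:
    """Hypothetical helper: pick the integration folder nearest to
    'ue_version' when an exact 'UE_{major}.{minor}' match is missing."""
    major, minor = (int(part) for part in ue_version.split(".")[:2])
    candidates = []
    for child in base.iterdir():
        if not child.is_dir() or not child.name.startswith("UE_"):
            continue
        try:
            c_major, c_minor = (int(p) for p in child.name[3:].split("."))
        except ValueError:
            continue
        # Stay within the same major version, prefer the nearest minor.
        if c_major == major:
            candidates.append((abs(c_minor - minor), child))
    if not candidates:
        raise RuntimeError(f"No UE integration found for {ue_version}")
    return min(candidates, key=lambda item: item[0])[1]
```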
@@ -423,6 +429,36 @@ def get_build_id(engine_path: Path, ue_version: str) -> str: return "{" + loaded_modules.get("BuildId") + "}" +def check_built_plugin_existance(plugin_path) -> bool: + if not plugin_path: + return False + + integration_plugin_path = Path(plugin_path) + + if not integration_plugin_path.is_dir(): + raise RuntimeError("Path to the integration plugin does not exist!") + + if not (integration_plugin_path / "Binaries").is_dir() \ + or not (integration_plugin_path / "Intermediate").is_dir(): + return False + + return True + + +def copy_built_plugin(engine_path: Path, plugin_path: Path) -> None: + ayon_plugin_path: Path = engine_path / "Engine/Plugins/Marketplace/Ayon" + + if not ayon_plugin_path.is_dir(): + ayon_plugin_path.mkdir(parents=True, exist_ok=True) + + engine_plugin_config_path: Path = ayon_plugin_path / "Config" + engine_plugin_config_path.mkdir(exist_ok=True) + + dir_util._path_created = {} + + dir_util.copy_tree(plugin_path.as_posix(), ayon_plugin_path.as_posix()) + + def check_plugin_existence(engine_path: Path, env: dict = None) -> bool: env = env or os.environ integration_plugin_path: Path = Path(env.get("AYON_UNREAL_PLUGIN", "")) diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py index 52eea4122a..1d60b63f9a 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py @@ -76,18 +76,24 @@ class AnimationAlembicLoader(plugin.Loader): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): unreal.EditorAssetLibrary.make_directory(asset_dir) - task = self.get_task(self.fname, asset_dir, asset_name, False) + path = self.filepath_from_context(context) + task = self.get_task(path, asset_dir, asset_name, False) asset_tools = unreal.AssetToolsHelpers.get_asset_tools() asset_tools.import_asset_tasks([task]) diff --git a/openpype/hosts/unreal/plugins/load/load_animation.py b/openpype/hosts/unreal/plugins/load/load_animation.py index a5ecb677e8..7ed85ee411 100644 --- a/openpype/hosts/unreal/plugins/load/load_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_animation.py @@ -26,7 +26,7 @@ class AnimationFBXLoader(plugin.Loader): icon = "cube" color = "orange" - def _process(self, asset_dir, asset_name, instance_name): + def _process(self, path, asset_dir, asset_name, instance_name): automated = False actor = None @@ -55,7 +55,7 @@ class AnimationFBXLoader(plugin.Loader): asset_doc = get_current_project_asset(fields=["data.fps"]) - task.set_editor_property('filename', self.fname) + task.set_editor_property('filename', path) task.set_editor_property('destination_path', asset_dir) task.set_editor_property('destination_name', asset_name) task.set_editor_property('replace_existing', False) @@ -177,14 +177,15 @@
EditorAssetLibrary.make_directory(asset_dir) - libpath = self.fname.replace("fbx", "json") + path = self.filepath_from_context(context) + libpath = path.replace(".fbx", ".json") with open(libpath, "r") as fp: data = json.load(fp) instance_name = data.get("instance_name") - animation = self._process(asset_dir, asset_name, instance_name) + animation = self._process(path, asset_dir, asset_name, instance_name) asset_content = EditorAssetLibrary.list_assets( hierarchy_dir, recursive=True, include_folder=False) diff --git a/openpype/hosts/unreal/plugins/load/load_camera.py b/openpype/hosts/unreal/plugins/load/load_camera.py index 59ea14697d..d663ce20ea 100644 --- a/openpype/hosts/unreal/plugins/load/load_camera.py +++ b/openpype/hosts/unreal/plugins/load/load_camera.py @@ -12,7 +12,7 @@ from unreal import ( from openpype.client import get_asset_by_name from openpype.pipeline import ( AYON_CONTAINER_ID, - legacy_io, + get_current_project_name, ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api.pipeline import ( @@ -184,7 +184,7 @@ class CameraLoader(plugin.Loader): frame_ranges[i + 1][0], frame_ranges[i + 1][1], [level]) - project_name = legacy_io.active_project() + project_name = get_current_project_name() data = get_asset_by_name(project_name, asset)["data"] cam_seq.set_display_rate( unreal.FrameRate(data.get("fps"), 1.0)) @@ -200,12 +200,13 @@ class CameraLoader(plugin.Loader): settings.set_editor_property('reduce_keys', False) if cam_seq: + path = self.filepath_from_context(context) self._import_camera( EditorLevelLibrary.get_editor_world(), cam_seq, cam_seq.get_bindings(), settings, - self.fname + path ) # Set range of all sections @@ -389,7 +390,7 @@ class CameraLoader(plugin.Loader): # Set range of all sections # Changing the range of the section is not enough. We need to change # the frame of all the keys in the section. 
- project_name = legacy_io.active_project() + project_name = get_current_project_name() asset = container.get('asset') data = get_asset_by_name(project_name, asset)["data"] diff --git a/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py index 3a292fdbd1..13ba236a7d 100644 --- a/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py @@ -111,8 +111,9 @@ class PointCacheAlembicLoader(plugin.Loader): if frame_start == frame_end: frame_end += 1 + path = self.filepath_from_context(context) task = self.get_task( - self.fname, asset_dir, asset_name, False, frame_start, frame_end) + path, asset_dir, asset_name, False, frame_start, frame_end) unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py index 86b2e1456c..3b82da5068 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout.py +++ b/openpype/hosts/unreal/plugins/load/load_layout.py @@ -23,7 +23,7 @@ from openpype.pipeline import ( load_container, get_representation_path, AYON_CONTAINER_ID, - legacy_io, + get_current_project_name, ) from openpype.pipeline.context_tools import get_current_project_asset from openpype.settings import get_current_project_settings @@ -302,7 +302,7 @@ class LayoutLoader(plugin.Loader): if not version_ids: return output - project_name = legacy_io.active_project() + project_name = get_current_project_name() repre_docs = get_representations( project_name, representation_names=["fbx", "abc"], @@ -603,7 +603,7 @@ class LayoutLoader(plugin.Loader): frame_ranges[i + 1][0], frame_ranges[i + 1][1], [level]) - project_name = legacy_io.active_project() + project_name = get_current_project_name() data = get_asset_by_name(project_name, asset)["data"] shot.set_display_rate( unreal.FrameRate(data.get("fps"), 1.0)) @@ -618,7 +618,8 @@ class LayoutLoader(plugin.Loader): EditorLevelLibrary.load_level(level) - loaded_assets = self._process(self.fname, asset_dir, shot) + path = self.filepath_from_context(context) + loaded_assets = self._process(path, asset_dir, shot) for s in sequences: EditorAssetLibrary.save_asset(s.get_path_name()) diff --git a/openpype/hosts/unreal/plugins/load/load_layout_existing.py b/openpype/hosts/unreal/plugins/load/load_layout_existing.py index 929a9a1399..c53e92930a 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout_existing.py +++ b/openpype/hosts/unreal/plugins/load/load_layout_existing.py @@ -11,7 +11,7 @@ from openpype.pipeline import ( load_container, get_representation_path, AYON_CONTAINER_ID, - legacy_io, + get_current_project_name, ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as upipeline @@ -380,7 +380,8 @@ class ExistingLayoutLoader(plugin.Loader): raise AssertionError("Current level not saved") project_name = context["project"]["name"] - containers = self._process(self.fname, project_name) + path = self.filepath_from_context(context) + containers = self._process(path, project_name) curr_level_path = Path( curr_level.get_outer().get_path_name()).parent.as_posix() @@ -410,7 +411,7 @@ class ExistingLayoutLoader(plugin.Loader): asset_dir = container.get('namespace') source_path = get_representation_path(representation) - project_name = legacy_io.active_project() + project_name = get_current_project_name() containers = self._process(source_path, project_name) 
data = { diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py index 7591d5582f..9285602b64 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py @@ -78,18 +78,24 @@ class SkeletalMeshAlembicLoader(plugin.Loader): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): unreal.EditorAssetLibrary.make_directory(asset_dir) - task = self.get_task(self.fname, asset_dir, asset_name, False) + path = self.filepath_from_context(context) + task = self.get_task(path, asset_dir, asset_name, False) unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py index e9676cde3a..9aa0e4d1a8 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py @@ -23,7 +23,7 @@ class SkeletalMeshFBXLoader(plugin.Loader): def load(self, context, name, namespace, options): """Load and containerise representation into Content Browser. - This is two step process. First, import FBX to temporary path and + This is a two step process. First, import FBX to temporary path and then call `containerise()` on it - this moves all content to new directory and then it will create AssetContainer there and imprint it with metadata. This will mark this path as container. 
@@ -52,11 +52,16 @@ class SkeletalMeshFBXLoader(plugin.Loader): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) - version = context.get('version').get('name') + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix @@ -65,7 +70,8 @@ class SkeletalMeshFBXLoader(plugin.Loader): task = unreal.AssetImportTask() - task.set_editor_property('filename', self.fname) + path = self.filepath_from_context(context) + task.set_editor_property('filename', path) task.set_editor_property('destination_path', asset_dir) task.set_editor_property('destination_name', asset_name) task.set_editor_property('replace_existing', False) diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py index befc7b0ac9..bb13692f9e 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py @@ -79,11 +79,13 @@ class StaticMeshAlembicLoader(plugin.Loader): root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" else: - asset_name = "{}".format(name) - version = context.get('version').get('name') + name_version = f"{name}_v{version.get('name'):03d}" default_conversion = False if options.get("default_conversion"): @@ -91,15 +93,16 @@ class StaticMeshAlembicLoader(plugin.Loader): tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name}_v{version:03d}", suffix="") + f"{root}/{asset}/{name_version}", suffix="") container_name += suffix if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): unreal.EditorAssetLibrary.make_directory(asset_dir) + path = self.filepath_from_context(context) task = self.get_task( - self.fname, asset_dir, asset_name, False, default_conversion) + path, asset_dir, asset_name, False, default_conversion) unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py index e416256486..ffc68d8375 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py @@ -78,17 +78,24 @@ class StaticMeshFBXLoader(plugin.Loader): asset_name = "{}_{}".format(asset, name) else: asset_name = "{}".format(name) + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( 
- f"{root}/{asset}/{name}", suffix="" + f"{root}/{asset}/{name_version}", suffix="" ) container_name += suffix unreal.EditorAssetLibrary.make_directory(asset_dir) - task = self.get_task(self.fname, asset_dir, asset_name, False) + path = self.filepath_from_context(context) + task = self.get_task(path, asset_dir, asset_name, False) unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 diff --git a/openpype/hosts/unreal/plugins/load/load_uasset.py b/openpype/hosts/unreal/plugins/load/load_uasset.py index 30f63abe39..88aaac41e8 100644 --- a/openpype/hosts/unreal/plugins/load/load_uasset.py +++ b/openpype/hosts/unreal/plugins/load/load_uasset.py @@ -64,8 +64,9 @@ class UAssetLoader(plugin.Loader): destination_path = asset_dir.replace( "/Game", Path(unreal.Paths.project_content_dir()).as_posix(), 1) + path = self.filepath_from_context(context) shutil.copy( - self.fname, + path, f"{destination_path}/{name}_{unique_number:02}.{self.extension}") # Create Asset Container diff --git a/openpype/hosts/unreal/plugins/publish/extract_layout.py b/openpype/hosts/unreal/plugins/publish/extract_layout.py index 57e7957575..d30d04551d 100644 --- a/openpype/hosts/unreal/plugins/publish/extract_layout.py +++ b/openpype/hosts/unreal/plugins/publish/extract_layout.py @@ -8,7 +8,7 @@ from unreal import EditorLevelLibrary as ell from unreal import EditorAssetLibrary as eal from openpype.client import get_representation_by_name -from openpype.pipeline import legacy_io, publish +from openpype.pipeline import publish class ExtractLayout(publish.Extractor): @@ -32,7 +32,7 @@ class ExtractLayout(publish.Extractor): "Wrong level loaded" json_data = [] - project_name = legacy_io.active_project() + project_name = instance.context.data["projectName"] for member in instance[:]: actor = ell.get_actor_reference(member) diff --git a/openpype/hosts/unreal/ue_workers.py b/openpype/hosts/unreal/ue_workers.py index 2b7e1375e6..386ad877d7 100644 --- a/openpype/hosts/unreal/ue_workers.py +++ b/openpype/hosts/unreal/ue_workers.py @@ -3,16 +3,17 @@ import os import platform import re import subprocess +import tempfile from distutils import dir_util +from distutils.dir_util import copy_tree from pathlib import Path from typing import List, Union -import tempfile -from distutils.dir_util import copy_tree - -import openpype.hosts.unreal.lib as ue_lib from qtpy import QtCore +import openpype.hosts.unreal.lib as ue_lib +from openpype.settings import get_project_settings + def parse_comp_progress(line: str, progress_signal: QtCore.Signal(int)): match = re.search(r"\[[1-9]+/[0-9]+]", line) @@ -39,42 +40,71 @@ def retrieve_exit_code(line: str): return None -class UEProjectGenerationWorker(QtCore.QObject): +class UEWorker(QtCore.QObject): finished = QtCore.Signal(str) - failed = QtCore.Signal(str) + failed = QtCore.Signal(str, int) progress = QtCore.Signal(int) log = QtCore.Signal(str) + + engine_path: Path = None + env = None + + def execute(self): + raise NotImplementedError("Please implement this method!") + + def run(self): + try: + self.execute() + except Exception as e: + import traceback + self.log.emit(str(e)) + self.log.emit(traceback.format_exc()) + self.failed.emit(str(e), 1) + raise e + + +class UEProjectGenerationWorker(UEWorker): stage_begin = QtCore.Signal(str) ue_version: str = None project_name: str = None - env = None - engine_path: Path = None project_dir: Path = None dev_mode = False def setup(self, ue_version: str, - project_name, + project_name: str, + unreal_project_name, engine_path: Path, 
project_dir: Path, dev_mode: bool = False, env: dict = None): + """Set up the worker with the necessary parameters. + + Args: + ue_version (str): Unreal Engine version. + project_name (str): Name of the project in AYON. + unreal_project_name (str): Name of the project in Unreal. + engine_path (Path): Path to the Unreal Engine. + project_dir (Path): Path to the project directory. + dev_mode (bool, optional): Whether to run the project in dev mode. + Defaults to False. + env (dict, optional): Environment variables. Defaults to None. + + """ self.ue_version = ue_version self.project_dir = project_dir self.env = env or os.environ - preset = ue_lib.get_project_settings( - project_name - )["unreal"]["project_setup"] + preset = get_project_settings(project_name)["unreal"]["project_setup"] if dev_mode or preset["dev_mode"]: self.dev_mode = True - self.project_name = project_name + self.project_name = unreal_project_name self.engine_path = engine_path - def run(self): + def execute(self): # engine_path should be the location of UE_X.X folder ue_editor_exe = ue_lib.get_editor_exe_path(self.engine_path, @@ -285,15 +315,8 @@ class UEProjectGenerationWorker(QtCore.QObject): self.finished.emit("Project successfully built!") -class UEPluginInstallWorker(QtCore.QObject): - finished = QtCore.Signal(str) +class UEPluginInstallWorker(UEWorker): installing = QtCore.Signal(str) - failed = QtCore.Signal(str, int) - progress = QtCore.Signal(int) - log = QtCore.Signal(str) - - engine_path: Path = None - env = None def setup(self, engine_path: Path, env: dict = None, ): self.engine_path = engine_path @@ -361,7 +384,7 @@ class UEPluginInstallWorker(QtCore.QObject): dir_util.remove_tree(temp_dir.as_posix()) - def run(self): + def execute(self): src_plugin_dir = Path(self.env.get("AYON_UNREAL_PLUGIN", "")) if not os.path.isdir(src_plugin_dir): diff --git a/openpype/hosts/unreal/ui/__init__.py b/openpype/hosts/unreal/ui/__init__.py new file mode 100644 index 0000000000..606b21ef19 --- /dev/null +++ b/openpype/hosts/unreal/ui/__init__.py @@ -0,0 +1,5 @@ +from .splash_screen import SplashScreen + +__all__ = ( + "SplashScreen", +) diff --git a/openpype/widgets/splash_screen.py b/openpype/hosts/unreal/ui/splash_screen.py similarity index 98% rename from openpype/widgets/splash_screen.py rename to openpype/hosts/unreal/ui/splash_screen.py index 7c1ff72ecd..7ac77821d9 100644 --- a/openpype/widgets/splash_screen.py +++ b/openpype/hosts/unreal/ui/splash_screen.py @@ -1,6 +1,5 @@ from qtpy import QtWidgets, QtCore, QtGui from openpype import style, resources -from igniter.nice_progress_bar import NiceProgressBar class SplashScreen(QtWidgets.QDialog): @@ -143,7 +142,7 @@ class SplashScreen(QtWidgets.QDialog): button_layout.addWidget(self.close_btn) # Progress Bar - self.progress_bar = NiceProgressBar() + self.progress_bar = QtWidgets.QProgressBar() self.progress_bar.setValue(0) self.progress_bar.setAlignment(QtCore.Qt.AlignTop) diff --git a/openpype/hosts/webpublisher/README.md b/openpype/hosts/webpublisher/README.md index 0826e44490..07a957fa7f 100644 --- a/openpype/hosts/webpublisher/README.md +++ b/openpype/hosts/webpublisher/README.md @@ -3,4 +3,4 @@ Webpublisher Plugins meant for processing of Webpublisher. -Gets triggered by calling openpype.cli.remotepublish with appropriate arguments. \ No newline at end of file +Gets triggered by calling `openpype_console modules webpublisher publish` with appropriate arguments.
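The webpublisher changes that follow replace `remote_publish` with a `publish_in_test` helper built on `pyblish.util.publish_iter`, which yields one result per processed plugin so the caller can forward log records and stop on the first error. A rough standalone sketch of that iteration pattern (the dummy collector is illustrative only, not OpenPype code):

```python
import pyblish.api
import pyblish.util


class ExampleCollector(pyblish.api.ContextPlugin):
    """Dummy plugin so the iteration below has something to process."""
    order = pyblish.api.CollectorOrder
    label = "Example Collector"

    def process(self, context):
        self.log.info("collected")


pyblish.api.register_plugin(ExampleCollector)

# Each yielded result is a dict with the plugin, its log records and
# an error (None on success), which allows custom logging per plugin.
for result in pyblish.util.publish_iter():
    for record in result["records"]:
        print("{}: {}".format(result["plugin"].label, record.msg))
    if result["error"]:
        print("Failed: {}".format(result["error"]))
        break
```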
diff --git a/openpype/hosts/webpublisher/addon.py b/openpype/hosts/webpublisher/addon.py index eb7fced2e6..4438775b03 100644 --- a/openpype/hosts/webpublisher/addon.py +++ b/openpype/hosts/webpublisher/addon.py @@ -20,11 +20,10 @@ class WebpublisherAddon(OpenPypeModule, IHostAddon): Close Python process at the end. """ - from openpype.pipeline.publish.lib import remote_publish - from .lib import get_webpublish_conn, publish_and_log + from .lib import get_webpublish_conn, publish_and_log, publish_in_test if is_test: - remote_publish(log, close_plugin_name) + publish_in_test(log, close_plugin_name) return dbcon = get_webpublish_conn() diff --git a/openpype/hosts/webpublisher/lib.py b/openpype/hosts/webpublisher/lib.py index b207f85b46..ecd28d2432 100644 --- a/openpype/hosts/webpublisher/lib.py +++ b/openpype/hosts/webpublisher/lib.py @@ -12,7 +12,6 @@ from openpype.client.mongo import OpenPypeMongoConnection from openpype.settings import get_project_settings from openpype.lib import Logger from openpype.lib.profiles_filtering import filter_profiles -from openpype.pipeline.publish.lib import find_close_plugin ERROR_STATUS = "error" IN_PROGRESS_STATUS = "in_progress" @@ -68,6 +67,46 @@ def get_batch_asset_task_info(ctx): return asset, task_name, task_type +def find_close_plugin(close_plugin_name, log): + if close_plugin_name: + plugins = pyblish.api.discover() + for plugin in plugins: + if plugin.__name__ == close_plugin_name: + return plugin + + log.debug("Close plugin not found, app might not close.") + + +def publish_in_test(log, close_plugin_name=None): + """Loops through all plugins, logs to console. Used for tests. + + Args: + log (Logger) + close_plugin_name (Optional[str]): Name of plugin with responsibility + to close application. + """ + + # Error exit as soon as any error occurs. + error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" + + close_plugin = find_close_plugin(close_plugin_name, log) + + for result in pyblish.util.publish_iter(): + for record in result["records"]: + # Why do we log again? pyblish logger is logging to stdout... + log.info("{}: {}".format(result["plugin"].label, record.msg)) + + if not result["error"]: + continue + + # QUESTION We don't break on error? + error_message = error_format.format(**result) + log.error(error_message) + if close_plugin: # close host app explicitly after error + context = pyblish.api.Context() + close_plugin().process(context) + + def get_webpublish_conn(): """Get connection to OP 'webpublishes' collection.""" mongo_client = OpenPypeMongoConnection.get_mongo_client() @@ -231,7 +270,7 @@ def find_variant_key(application_manager, host): def get_task_data(batch_dir): """Return parsed data from first task manifest.json - Used for `remotepublishfromapp` command where batch contains only + Used for `publishfromapp` command where batch contains only single task with publishable workfile. 
Returns: diff --git a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py index 79ed499a20..1416255083 100644 --- a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py +++ b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py @@ -25,6 +25,7 @@ from openpype.lib import ( ) from openpype.pipeline.create import get_subset_name from openpype_modules.webpublisher.lib import parse_json +from openpype.pipeline.version_start import get_versioning_start class CollectPublishedFiles(pyblish.api.ContextPlugin): @@ -103,7 +104,13 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): project_settings=context.data["project_settings"] ) version = self._get_next_version( - project_name, asset_doc, subset_name + project_name, + asset_doc, + task_name, + task_type, + family, + subset_name, + context ) next_versions.append(version) @@ -141,8 +148,9 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): try: no_of_frames = self._get_number_of_frames(file_url) if no_of_frames: - frame_end = int(frame_start) + \ - math.ceil(no_of_frames) + frame_end = ( + int(frame_start) + math.ceil(no_of_frames) + ) frame_end = math.ceil(frame_end) - 1 instance.data["frameEnd"] = frame_end self.log.debug("frameEnd:: {}".format( @@ -270,7 +278,16 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): config["families"], config["tags"]) - def _get_next_version(self, project_name, asset_doc, subset_name): + def _get_next_version( + self, + project_name, + asset_doc, + task_name, + task_type, + family, + subset_name, + context + ): """Returns version number or 1 for 'asset' and 'subset'""" version_doc = get_last_version_by_subset_name( @@ -279,9 +296,19 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): asset_doc["_id"], fields=["name"] ) - version = 1 if version_doc: - version += int(version_doc["name"]) + version = int(version_doc["name"]) + 1 + else: + version = get_versioning_start( + project_name, + "webpublisher", + task_name=task_name, + task_type=task_type, + family=family, + subset=subset_name, + project_settings=context.data["project_settings"] + ) + return version def _get_number_of_frames(self, file_url): diff --git a/openpype/hosts/webpublisher/publish_functions.py b/openpype/hosts/webpublisher/publish_functions.py index 83f53ced68..f5dc88f54d 100644 --- a/openpype/hosts/webpublisher/publish_functions.py +++ b/openpype/hosts/webpublisher/publish_functions.py @@ -6,7 +6,7 @@ import pyblish.util from openpype.lib import Logger from openpype.lib.applications import ( ApplicationManager, - get_app_environments_for_context, + LaunchTypes, ) from openpype.pipeline import install_host from openpype.hosts.webpublisher.api import WebpublisherHost @@ -34,7 +34,7 @@ def cli_publish(project_name, batch_path, user_email, targets): Args: project_name (str): project to publish (only single context is - expected per call of remotepublish + expected per call of 'publish') batch_path (str): Path batch folder. Contains subfolders with resources (workfile, another subfolder 'renders' etc.) 
user_email (string): email address for webpublisher - used to @@ -49,8 +49,8 @@ def cli_publish(project_name, batch_path, user_email, targets): if not batch_path: raise RuntimeError("No publish paths specified") - log = Logger.get_logger("remotepublish") - log.info("remotepublish command") + log = Logger.get_logger("Webpublish") + log.info("Webpublish command") # Register target and host webpublisher_host = WebpublisherHost() @@ -107,7 +107,7 @@ def cli_publish_from_app( Args: project_name (str): project to publish (only single context is - expected per call of remotepublish + expected per call of 'publish') batch_path (str): Path batch folder. Contains subfolders with resources (workfile, another subfolder 'renders' etc.) host_name (str): 'photoshop' @@ -117,9 +117,9 @@ (to choose validator for example) """ - log = Logger.get_logger("RemotePublishFromApp") + log = Logger.get_logger("PublishFromApp") - log.info("remotepublishphotoshop command") + log.info("Webpublish photoshop command") task_data = get_task_data(batch_path) @@ -156,22 +156,31 @@ def cli_publish_from_app( found_variant_key = find_variant_key(application_manager, host_name) app_name = "{}/{}".format(host_name, found_variant_key) + data = { + "last_workfile_path": workfile_path, + "start_last_workfile": True, + "project_name": project_name, + "asset_name": asset_name, + "task_name": task_name, + "launch_type": LaunchTypes.automated, + } + launch_context = application_manager.create_launch_context( + app_name, **data) + launch_context.run_prelaunch_hooks() + # must have for proper launch of app - env = get_app_environments_for_context( - project_name, - asset_name, - task_name, - app_name - ) + env = launch_context.env print("env:: {}".format(env)) + env["OPENPYPE_PUBLISH_DATA"] = batch_path + # must pass identifier to update log lines for a batch + env["BATCH_LOG_ID"] = str(_id) + env["HEADLESS_PUBLISH"] = 'true' # to use in app lib + env["USER_EMAIL"] = user_email + os.environ.update(env) - os.environ["OPENPYPE_PUBLISH_DATA"] = batch_path - # must pass identifier to update log lines for a batch - os.environ["BATCH_LOG_ID"] = str(_id) - os.environ["HEADLESS_PUBLISH"] = 'true' # to use in app lib - os.environ["USER_EMAIL"] = user_email - + # Why is this here? Registered host in this process does not affect + # registered host in launched process.
pyblish.api.register_host(host_name) if targets: if isinstance(targets, str): @@ -184,15 +193,7 @@ def cli_publish_from_app( os.environ["PYBLISH_TARGETS"] = os.pathsep.join( set(current_targets)) - data = { - "last_workfile_path": workfile_path, - "start_last_workfile": True, - "project_name": project_name, - "asset_name": asset_name, - "task_name": task_name - } - - launched_app = application_manager.launch(app_name, **data) + launched_app = application_manager.launch_with_context(launch_context) timeout = get_timeout(project_name, host_name, task_type) diff --git a/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py b/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py index 9fe4b4d3c1..e56f245d27 100644 --- a/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py +++ b/openpype/hosts/webpublisher/webserver_service/webpublish_routes.py @@ -216,7 +216,7 @@ class BatchPublishEndpoint(WebpublishApiEndpoint): "extensions": [".tvpp"], "command": "publish", "arguments": { - "targets": ["tvpaint_worker"] + "targets": ["tvpaint_worker", "webpublish"] }, "add_to_queue": False }, @@ -230,7 +230,7 @@ class BatchPublishEndpoint(WebpublishApiEndpoint): # Make sure targets are set to None for cases that default # would change # - targets argument is not used in 'publishfromapp' - "targets": ["remotepublish"] + "targets": ["automated", "webpublish"] }, # does publish need to be handled by a queue, eg. only # single process running concurrently? @@ -247,7 +247,7 @@ class BatchPublishEndpoint(WebpublishApiEndpoint): "project": content["project_name"], "user": content["user"], - "targets": ["filespublish"] + "targets": ["filespublish", "webpublish"] } add_to_queue = False diff --git a/openpype/hosts/webpublisher/webserver_service/webserver.py b/openpype/hosts/webpublisher/webserver_service/webserver.py index 093b53d9d3..d7c2ea01b9 100644 --- a/openpype/hosts/webpublisher/webserver_service/webserver.py +++ b/openpype/hosts/webpublisher/webserver_service/webserver.py @@ -45,7 +45,7 @@ def run_webserver(executable, upload_dir, host=None, port=None): server_manager = webserver_module.create_new_server_manager(port, host) webserver_url = server_manager.url - # queue for remotepublishfromapp tasks + # queue for publishfromapp tasks studio_task_queue = collections.deque() resource = RestApiResource(server_manager, diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py index 06de486f2e..f1eb564e5e 100644 --- a/openpype/lib/__init__.py +++ b/openpype/lib/__init__.py @@ -5,11 +5,11 @@ import sys import os import site +from openpype import PACKAGE_DIR # Add Python version specific vendor folder python_version_dir = os.path.join( - os.getenv("OPENPYPE_REPOS_ROOT", ""), - "openpype", "vendor", "python", "python_{}".format(sys.version[0]) + PACKAGE_DIR, "vendor", "python", "python_{}".format(sys.version[0]) ) # Prepend path in sys paths sys.path.insert(0, python_version_dir) @@ -22,11 +22,14 @@ from .events import ( ) from .vendor_bin_utils import ( + ToolNotFoundError, find_executable, get_vendor_bin_path, get_oiio_tools_path, + get_oiio_tool_args, get_ffmpeg_tool_path, - is_oiio_supported + get_ffmpeg_tool_args, + is_oiio_supported, ) from .attribute_definitions import ( @@ -52,12 +55,13 @@ from .env_tools import ( from .terminal import Terminal from .execute import ( + get_ayon_launcher_args, get_openpype_execute_args, - get_pype_execute_args, get_linux_launcher_args, execute, run_subprocess, run_detached_process, + run_ayon_launcher_process, run_openpype_process, 
clean_envs_for_openpype_process, path_to_subprocess_arg, @@ -65,7 +69,6 @@ from .execute import ( ) from .log import ( Logger, - PypeLogger, ) from .path_templates import ( @@ -77,12 +80,6 @@ from .path_templates import ( FormatObject, ) -from .mongo import ( - get_default_components, - validate_mongo_connection, - OpenPypeMongoConnection -) - from .dateutils import ( get_datetime_data, get_timestamp, @@ -115,25 +112,6 @@ from .transcoding import ( convert_ffprobe_fps_value, convert_ffprobe_fps_to_float, ) -from .avalon_context import ( - CURRENT_DOC_SCHEMAS, - create_project, - - get_workfile_template_key, - get_workfile_template_key_from_context, - get_last_workfile_with_version, - get_last_workfile, - - BuildWorkfile, - - get_creator_by_name, - - get_custom_workfile_template, - - get_custom_workfile_template_by_context, - get_custom_workfile_template_by_string_context, - get_custom_workfile_template -) from .local_settings import ( IniSettingRegistry, @@ -163,9 +141,6 @@ from .applications import ( ) from .plugin_tools import ( - TaskNotSetError, - get_subset_name, - get_subset_name_with_asset_doc, prepare_template_data, source_hash, ) @@ -177,9 +152,6 @@ from .path_tools import ( version_up, get_version_from_path, get_last_version_from_path, - create_project_folders, - create_workdir_extra_folders, - get_project_basic_paths, ) from .openpype_version import ( @@ -205,13 +177,13 @@ __all__ = [ "emit_event", "register_event_callback", - "find_executable", + "get_ayon_launcher_args", "get_openpype_execute_args", - "get_pype_execute_args", "get_linux_launcher_args", "execute", "run_subprocess", "run_detached_process", + "run_ayon_launcher_process", "run_openpype_process", "clean_envs_for_openpype_process", "path_to_subprocess_arg", @@ -220,9 +192,13 @@ __all__ = [ "env_value_to_bool", "get_paths_from_environ", + "ToolNotFoundError", + "find_executable", "get_vendor_bin_path", "get_oiio_tools_path", + "get_oiio_tool_args", "get_ffmpeg_tool_path", + "get_ffmpeg_tool_args", "is_oiio_supported", "AbstractAttrDef", @@ -257,22 +233,6 @@ __all__ = [ "convert_ffprobe_fps_value", "convert_ffprobe_fps_to_float", - "CURRENT_DOC_SCHEMAS", - "create_project", - - "get_workfile_template_key", - "get_workfile_template_key_from_context", - "get_last_workfile_with_version", - "get_last_workfile", - - "BuildWorkfile", - - "get_creator_by_name", - - "get_custom_workfile_template_by_context", - "get_custom_workfile_template_by_string_context", - "get_custom_workfile_template", - "IniSettingRegistry", "JSONSettingRegistry", "OpenPypeSecureRegistry", @@ -298,9 +258,7 @@ __all__ = [ "filter_profiles", - "TaskNotSetError", - "get_subset_name", - "get_subset_name_with_asset_doc", + "prepare_template_data", "source_hash", "format_file_size", @@ -323,15 +281,6 @@ __all__ = [ "get_formatted_current_time", "Logger", - "PypeLogger", - - "get_default_components", - "validate_mongo_connection", - "OpenPypeMongoConnection", - - "create_project_folders", - "create_workdir_extra_folders", - "get_project_basic_paths", "op_version_control_available", "get_openpype_version", diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py index 8adae34827..ff5e27c122 100644 --- a/openpype/lib/applications.py +++ b/openpype/lib/applications.py @@ -11,10 +11,7 @@ from abc import ABCMeta, abstractmethod import six -from openpype.client import ( - get_project, - get_asset_by_name, -) +from openpype import AYON_SERVER_ENABLED, PACKAGE_DIR from openpype.settings import ( get_system_settings, get_project_settings, @@ -46,6 
+43,25 @@ CUSTOM_LAUNCH_APP_GROUPS = { } +class LaunchTypes: + """Launch types are filters for pre/post-launch hooks. + + Please use these variables in case their values change. + """ + + # Local launch - application is launched on local machine + local = "local" + # Farm render job - application is on farm + farm_render = "farm-render" + # Farm publish job - integration post-render job + farm_publish = "farm-publish" + # Remote launch - application is launched on remote machine from which + # publishing can be started + remote = "remote" + # Automated launch - application is launched with automated publishing + automated = "automated" + + def parse_environments(env_data, env_group=None, platform_name=None): """Parse environment values from settings byt group and platform. @@ -482,6 +498,42 @@ class ApplicationManager: break return output + def create_launch_context(self, app_name, **data): + """Prepare launch context for application. + + Args: + app_name (str): Name of application that should be launched. + **data (Any): Any additional data. Data may be used during preparation. + + Returns: + ApplicationLaunchContext: Launch context for application. + + Raises: + ApplicationNotFound: Application was not found by the entered name. + """ + + app = self.applications.get(app_name) + if not app: + raise ApplicationNotFound(app_name) + + executable = app.find_executable() + + return ApplicationLaunchContext( + app, executable, **data + ) + + def launch_with_context(self, launch_context): + """Launch application using existing launch context. + + Args: + launch_context (ApplicationLaunchContext): Prepared launch + context. + """ + + if not launch_context.executable: + raise ApplictionExecutableNotFound(launch_context.application) + return launch_context.launch() + def launch(self, app_name, **data): """Launch procedure. @@ -502,18 +554,10 @@ class ApplicationManager: failed. Exception should contain explanation message, traceback should not be needed. """ - app = self.applications.get(app_name) - if not app: - raise ApplicationNotFound(app_name) - executable = app.find_executable() - if not executable: - raise ApplictionExecutableNotFound(app) + context = self.create_launch_context(app_name, **data) + return self.launch_with_context(context) - context = ApplicationLaunchContext( - app, executable, **data - ) - return context.launch() class EnvironmentToolGroup: @@ -735,13 +779,17 @@ class LaunchHook: # Order of prelaunch hook, will be executed as last if set to None. order = None # List of host implementations, skipped if empty. - hosts = [] - # List of application groups - app_groups = [] - # List of specific application names - app_names = [] - # List of platform availability, skipped if empty. - platforms = [] + hosts = set() + # Set of application groups + app_groups = set() + # Set of specific application names + app_names = set() + # Set of platform availability + platforms = set() + # Set of launch types for which the hook is available + # - if empty then it is available for all launch types + # - by default has 'local' which is the most common reason for launch hooks + launch_types = {LaunchTypes.local} def __init__(self, launch_context): """Constructor of launch hook. @@ -789,6 +837,10 @@ class LaunchHook: if launch_context.app_name not in cls.app_names: return False + if cls.launch_types: + if launch_context.launch_type not in cls.launch_types: + return False + return True @property @@ -858,9 +910,9 @@ class PostLaunchHook(LaunchHook): class ApplicationLaunchContext: """Context of launching application.
- Main purpose of context is to prepare launch arguments and keyword arguments - for new process. Most important part of keyword arguments preparations - are environment variables. + Main purpose of context is to prepare launch arguments and keyword + arguments for new process. Most important part of keyword arguments + preparations are environment variables. During the whole process is possible to use `data` attribute to store object usable in multiple places. @@ -873,14 +925,30 @@ class ApplicationLaunchContext: insert argument between `nuke.exe` and `--NukeX`. To keep them together it is better to wrap them in another list: `[["nuke.exe", "--NukeX"]]`. + Notes: + It is possible to use launch context only to prepare environment + variables. In that case `executable` may be None and the + 'run_prelaunch_hooks' method can be used to run the hooks that + prepare them. + Args: application (Application): Application definition. executable (ApplicationExecutable): Object with path to executable. + env_group (Optional[str]): Environment variable group. If not set + 'DEFAULT_ENV_SUBGROUP' is used. + launch_type (Optional[str]): Launch type. If not set 'local' is used. **data (dict): Any additional data. Data may be used during preparation to store objects usable in multiple places. """ - def __init__(self, application, executable, env_group=None, **data): + def __init__( + self, + application, + executable, + env_group=None, + launch_type=None, + **data + ): from openpype.modules import ModulesManager # Application object @@ -895,6 +963,10 @@ class ApplicationLaunchContext: self.executable = executable + if launch_type is None: + launch_type = LaunchTypes.local + self.launch_type = launch_type + if env_group is None: env_group = DEFAULT_ENV_SUBGROUP @@ -902,8 +974,11 @@ class ApplicationLaunchContext: self.data = dict(data) + launch_args = [] + if executable is not None: + launch_args = executable.as_args() # subprocess.Popen launch arguments (first argument in constructor) - self.launch_args = executable.as_args() + self.launch_args = launch_args self.launch_args.extend(application.arguments) if self.data.get("app_args"): self.launch_args.extend(self.data.pop("app_args")) @@ -945,6 +1020,7 @@ class ApplicationLaunchContext: self.postlaunch_hooks = None self.process = None + self._prelaunch_hooks_executed = False @property def env(self): @@ -1214,6 +1290,27 @@ class ApplicationLaunchContext: # Return process which is already terminated return process + def run_prelaunch_hooks(self): + """Run prelaunch hooks. + + This method will be executed only once; any future calls will skip + the processing. + """ + + if self._prelaunch_hooks_executed: + self.log.warning("Prelaunch hooks were already executed.") + return + # Discover launch hooks + self.discover_launch_hooks() + + # Execute prelaunch hooks + for prelaunch_hook in self.prelaunch_hooks: + self.log.debug("Executing prelaunch hook: {}".format( + str(prelaunch_hook.__class__.__name__) + )) + prelaunch_hook.execute() + self._prelaunch_hooks_executed = True + def launch(self): """Collect data for new process and then create it.
@@ -1226,15 +1323,8 @@ class ApplicationLaunchContext: self.log.warning("Application was already launched.") return - # Discover launch hooks - self.discover_launch_hooks() - - # Execute prelaunch hooks - for prelaunch_hook in self.prelaunch_hooks: - self.log.debug("Executing prelaunch hook: {}".format( - str(prelaunch_hook.__class__.__name__) - )) - prelaunch_hook.execute() + if not self._prelaunch_hooks_executed: + self.run_prelaunch_hooks() self.log.debug("All prelaunch hook executed. Starting new process.") @@ -1352,6 +1442,7 @@ def get_app_environments_for_context( task_name, app_name, env_group=None, + launch_type=None, env=None, modules_manager=None ): @@ -1362,63 +1453,33 @@ def get_app_environments_for_context( task_name (str): Name of task. app_name (str): Name of application that is launched and can be found by ApplicationManager. - env (dict): Initial environment variables. `os.environ` is used when - not passed. - modules_manager (ModulesManager): Initialized modules manager. + env_group (Optional[str]): Name of environment group. If not passed + default group is used. + launch_type (Optional[str]): Type for which prelaunch hooks are + executed. + env (Optional[dict[str, str]]): Initial environment variables. + `os.environ` is used when not passed. + modules_manager (Optional[ModulesManager]): Initialized modules + manager. Returns: dict: Environments for passed context and application. """ - from openpype.modules import ModulesManager - from openpype.pipeline import AvalonMongoDB, Anatomy - from openpype.lib.openpype_version import is_running_staging - - # Avalon database connection - dbcon = AvalonMongoDB() - dbcon.Session["AVALON_PROJECT"] = project_name - dbcon.install() - - # Project document - project_doc = get_project(project_name) - asset_doc = get_asset_by_name(project_name, asset_name) - - if modules_manager is None: - modules_manager = ModulesManager() - - # Prepare app object which can be obtained only from ApplciationManager + # Prepare app object which can be obtained only from ApplicationManager app_manager = ApplicationManager() - app = app_manager.applications[app_name] - - # Project's anatomy - anatomy = Anatomy(project_name) - - data = EnvironmentPrepData({ - "project_name": project_name, - "asset_name": asset_name, - "task_name": task_name, - - "app": app, - - "dbcon": dbcon, - "project_doc": project_doc, - "asset_doc": asset_doc, - - "anatomy": anatomy, - - "env": env - }) - data["env"].update(anatomy.root_environments()) - if is_running_staging(): - data["env"]["OPENPYPE_IS_STAGING"] = "1" - - prepare_app_environments(data, env_group, modules_manager) - prepare_context_environments(data, env_group, modules_manager) - - # Discard avalon connection - dbcon.uninstall() - - return data["env"] + context = app_manager.create_launch_context( + app_name, + project_name=project_name, + asset_name=asset_name, + task_name=task_name, + env_group=env_group, + launch_type=launch_type, + env=env, + modules_manager=modules_manager, + ) + context.run_prelaunch_hooks() + return context.env def _merge_env(env, current_env): @@ -1444,10 +1505,8 @@ def _add_python_version_paths(app, env, logger, modules_manager): return # Add Python 2/3 modules - openpype_root = os.getenv("OPENPYPE_REPOS_ROOT") python_vendor_dir = os.path.join( - openpype_root, - "openpype", + PACKAGE_DIR, "vendor", "python" ) @@ -1649,11 +1708,7 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): project_doc = data["project_doc"] asset_doc = data["asset_doc"] task_name = 
data["task_name"] - if ( - not project_doc - or not asset_doc - or not task_name - ): + if not project_doc: log.info( "Skipping context environments preparation." " Launch context does not contain required data." @@ -1666,18 +1721,16 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): system_settings = get_system_settings() data["project_settings"] = project_settings data["system_settings"] = system_settings - # Apply project specific environments on current env value - apply_project_environments_value( - project_name, data["env"], project_settings, env_group - ) app = data["app"] context_env = { "AVALON_PROJECT": project_doc["name"], - "AVALON_ASSET": asset_doc["name"], - "AVALON_TASK": task_name, "AVALON_APP_NAME": app.full_name } + if asset_doc: + context_env["AVALON_ASSET"] = asset_doc["name"] + if task_name: + context_env["AVALON_TASK"] = task_name log.debug( "Context environments set:\n{}".format( @@ -1685,9 +1738,25 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): ) ) data["env"].update(context_env) + + # Apply project specific environments on current env value + # - apply them once the context environments are set + apply_project_environments_value( + project_name, data["env"], project_settings, env_group + ) + if not app.is_host: return + data["env"]["AVALON_APP"] = app.host_name + + if not asset_doc or not task_name: + # QUESTION replace with log.info and skip workfile discovery? + # - technically it should be possible to launch host without context + raise ApplicationLaunchFailed( + "Host launch require asset and task context." + ) + workdir_data = get_template_data( project_doc, asset_doc, task_name, app.host_name, system_settings ) @@ -1725,7 +1794,6 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): "Couldn't create workdir because: {}".format(str(exc)) ) - data["env"]["AVALON_APP"] = app.host_name data["env"]["AVALON_WORKDIR"] = workdir _prepare_last_workfile(data, workdir, modules_manager) @@ -1959,17 +2027,28 @@ def get_non_python_host_kwargs(kwargs, allow_console=True): allow_console (bool): use False for inner Popen opening app itself or it will open additional console (at least for Harmony) """ + if kwargs is None: kwargs = {} if platform.system().lower() != "windows": return kwargs - executable_path = os.environ.get("OPENPYPE_EXECUTABLE") + if AYON_SERVER_ENABLED: + executable_path = os.environ.get("AYON_EXECUTABLE") + else: + executable_path = os.environ.get("OPENPYPE_EXECUTABLE") + executable_filename = "" if executable_path: executable_filename = os.path.basename(executable_path) - if "openpype_gui" in executable_filename: + + if AYON_SERVER_ENABLED: + is_gui_executable = "ayon_console" not in executable_filename + else: + is_gui_executable = "openpype_gui" in executable_filename + + if is_gui_executable: kwargs.update({ "creationflags": subprocess.CREATE_NO_WINDOW, "stdout": subprocess.DEVNULL, diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py deleted file mode 100644 index a9ae27cb79..0000000000 --- a/openpype/lib/avalon_context.py +++ /dev/null @@ -1,654 +0,0 @@ -"""Should be used only inside of hosts.""" - -import platform -import logging -import functools -import warnings - -import six - -from openpype.client import ( - get_project, - get_asset_by_name, -) -from openpype.client.operations import ( - CURRENT_ASSET_DOC_SCHEMA, - CURRENT_PROJECT_SCHEMA, - CURRENT_PROJECT_CONFIG_SCHEMA, -) -from .profiles_filtering import filter_profiles -from 
.path_templates import StringTemplate - -legacy_io = None - -log = logging.getLogger("AvalonContext") - - -# Backwards compatibility - should not be used anymore -# - Will be removed in OP 3.16.* -CURRENT_DOC_SCHEMAS = { - "project": CURRENT_PROJECT_SCHEMA, - "asset": CURRENT_ASSET_DOC_SCHEMA, - "config": CURRENT_PROJECT_CONFIG_SCHEMA -} - - -class AvalonContextDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", AvalonContextDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=AvalonContextDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.client.operations.create_project") -def create_project( - project_name, project_code, library_project=False, dbcon=None -): - """Create project using OpenPype settings. - - This project creation function is not validating project document on - creation. It is because project document is created blindly with only - minimum required information about project which is it's name, code, type - and schema. - - Entered project name must be unique and project must not exist yet. - - Args: - project_name(str): New project name. Should be unique. - project_code(str): Project's code should be unique too. - library_project(bool): Project is library project. - dbcon(AvalonMongoDB): Object of connection to MongoDB. - - Raises: - ValueError: When project name already exists in MongoDB. - - Returns: - dict: Created project document. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client.operations import create_project - - return create_project(project_name, project_code, library_project) - - -def with_pipeline_io(func): - @functools.wraps(func) - def wrapped(*args, **kwargs): - global legacy_io - if legacy_io is None: - from openpype.pipeline import legacy_io - return func(*args, **kwargs) - return wrapped - - -@deprecated("openpype.client.get_linked_asset_ids") -def get_linked_asset_ids(asset_doc): - """Return linked asset ids for `asset_doc` from DB - - Args: - asset_doc (dict): Asset document from DB. - - Returns: - (list): MongoDB ids of input links. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client import get_linked_asset_ids - from openpype.pipeline import legacy_io - - project_name = legacy_io.active_project() - - return get_linked_asset_ids(project_name, asset_doc=asset_doc) - - -@deprecated( - "openpype.pipeline.workfile.get_workfile_template_key_from_context") -def get_workfile_template_key_from_context( - asset_name, task_name, host_name, project_name=None, - dbcon=None, project_settings=None -): - """Helper function to get template key for workfile template. 
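# Illustrative sketch: the 'deprecated' decorator above supports both a bare
# function and a replacement path; the function names here are hypothetical.
@deprecated
def _old_helper():
    pass

@deprecated("openpype.new_location.new_helper")
def _old_helper_with_replacement():
    pass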
- - Do the same as `get_workfile_template_key` but returns value for "session - context". - - It is required to pass one of 'dbcon' with already set project name or - 'project_name' arguments. - - Args: - asset_name(str): Name of asset document. - task_name(str): Task name for which is template key retrieved. - Must be available on asset document under `data.tasks`. - host_name(str): Name of host implementation for which is workfile - used. - project_name(str): Project name where asset and task is. Not required - when 'dbcon' is passed. - dbcon(AvalonMongoDB): Connection to mongo with already set project - under `AVALON_PROJECT`. Not required when 'project_name' is passed. - project_settings(dict): Project settings for passed 'project_name'. - Not required at all but makes function faster. - Raises: - ValueError: When both 'dbcon' and 'project_name' were not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import ( - get_workfile_template_key_from_context - ) - - if not project_name: - if not dbcon: - raise ValueError(( - "`get_workfile_template_key_from_context` requires to pass" - " one of 'dbcon' or 'project_name' arguments." - )) - project_name = dbcon.active_project() - - return get_workfile_template_key_from_context( - asset_name, task_name, host_name, project_name, project_settings - ) - - -@deprecated( - "openpype.pipeline.workfile.get_workfile_template_key") -def get_workfile_template_key( - task_type, host_name, project_name=None, project_settings=None -): - """Workfile template key which should be used to get workfile template. - - Function is using profiles from project settings to return right template - for passet task type and host name. - - One of 'project_name' or 'project_settings' must be passed it is preferred - to pass settings if are already available. - - Args: - task_type(str): Name of task type. - host_name(str): Name of host implementation (e.g. "maya", "nuke", ...) - project_name(str): Name of project in which context should look for - settings. Not required if `project_settings` are passed. - project_settings(dict): Prepare project settings for project name. - Not needed if `project_name` is passed. - - Raises: - ValueError: When both 'project_name' and 'project_settings' were not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_workfile_template_key - - return get_workfile_template_key( - task_type, host_name, project_name, project_settings - ) - - -@deprecated("openpype.pipeline.context_tools.compute_session_changes") -def compute_session_changes( - session, task=None, asset=None, app=None, template_key=None -): - """Compute the changes for a Session object on asset, task or app switch - - This does *NOT* update the Session object, but returns the changes - required for a valid update of the Session. - - Args: - session (dict): The initial session to compute changes to. - This is required for computing the full Work Directory, as that - also depends on the values that haven't changed. - task (str, Optional): Name of task to switch to. - asset (str or dict, Optional): Name of asset to switch to. - You can also directly provide the Asset dictionary as returned - from the database to avoid an additional query. (optimization) - app (str, Optional): Name of app to switch to. - - Returns: - dict: The required changes in the Session dictionary. 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - from openpype.pipeline.context_tools import compute_session_changes - - if isinstance(asset, six.string_types): - project_name = legacy_io.active_project() - asset = get_asset_by_name(project_name, asset) - - return compute_session_changes( - session, - asset, - task, - template_key - ) - - -@deprecated("openpype.pipeline.context_tools.get_workdir_from_session") -def get_workdir_from_session(session=None, template_key=None): - """Calculate workdir path based on session data. - - Args: - session (Union[None, Dict[str, str]]): Session to use. If not passed - current context session is used (from legacy_io). - template_key (Union[str, None]): Precalculate template key to define - workfile template name in Anatomy. - - Returns: - str: Workdir path. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.context_tools import get_workdir_from_session - - return get_workdir_from_session(session, template_key) - - -@deprecated("openpype.pipeline.context_tools.change_current_context") -def update_current_task(task=None, asset=None, app=None, template_key=None): - """Update active Session to a new task work area. - - This updates the live Session to a different `asset`, `task` or `app`. - - Args: - task (str): The task to set. - asset (str): The asset to set. - app (str): The app to set. - - Returns: - dict: The changed key, values in the current Session. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - from openpype.pipeline.context_tools import change_current_context - - project_name = legacy_io.active_project() - if isinstance(asset, six.string_types): - asset = get_asset_by_name(project_name, asset) - - return change_current_context(asset, task, template_key) - - -@deprecated("openpype.pipeline.workfile.BuildWorkfile") -def BuildWorkfile(): - """Build workfile class was moved to workfile pipeline. - - Deprecated: - Function will be removed after release version 3.16.* - """ - from openpype.pipeline.workfile import BuildWorkfile - - return BuildWorkfile() - - -@deprecated("openpype.pipeline.create.get_legacy_creator_by_name") -def get_creator_by_name(creator_name, case_sensitive=False): - """Find creator plugin by name. - - Args: - creator_name (str): Name of creator class that should be returned. - case_sensitive (bool): Match of creator plugin name is case sensitive. - Set to `False` by default. - - Returns: - Creator: Return first matching plugin or `None`. - - Deprecated: - Function will be removed after release version 3.16.* - """ - from openpype.pipeline.create import get_legacy_creator_by_name - - return get_legacy_creator_by_name(creator_name, case_sensitive) - - -def _get_task_context_data_for_anatomy( - project_doc, asset_doc, task_name, anatomy=None -): - """Prepare Task context for anatomy data. - - WARNING: this data structure is currently used only in workfile templates. - Key "task" is currently in rest of pipeline used as string with task - name. - - Args: - project_doc (dict): Project document with available "name" and - "data.code" keys. - asset_doc (dict): Asset document from MongoDB. - task_name (str): Name of context task. - anatomy (Anatomy): Optionally Anatomy for passed project name can be - passed as Anatomy creation may be slow. - - Returns: - dict: With Anatomy context data. 
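# Illustrative sketch of the task context data built below (general template
# data is merged in as well); all values are hypothetical.
# {
#     "project": {"name": "demo_project", "code": "demo"},
#     "asset": "sh010",
#     "task": {"name": "animation", "type": "Animation", "short": "anim"},
# }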
- """ - - from openpype.pipeline.template_data import get_general_template_data - - if anatomy is None: - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_doc["name"]) - - asset_name = asset_doc["name"] - project_task_types = anatomy["tasks"] - - # get relevant task type from asset doc - assert task_name in asset_doc["data"]["tasks"], ( - "Task name \"{}\" not found on asset \"{}\"".format( - task_name, asset_name - ) - ) - - task_type = asset_doc["data"]["tasks"][task_name].get("type") - - assert task_type, ( - "Task name \"{}\" on asset \"{}\" does not have specified task type." - ).format(asset_name, task_name) - - # get short name for task type defined in default anatomy settings - project_task_type_data = project_task_types.get(task_type) - assert project_task_type_data, ( - "Something went wrong. Default anatomy tasks are not holding" - "requested task type: `{}`".format(task_type) - ) - - data = { - "project": { - "name": project_doc["name"], - "code": project_doc["data"].get("code") - }, - "asset": asset_name, - "task": { - "name": task_name, - "type": task_type, - "short": project_task_type_data["short_name"] - } - } - - system_general_data = get_general_template_data() - data.update(system_general_data) - - return data - - -@deprecated( - "openpype.pipeline.workfile.get_custom_workfile_template_by_context") -def get_custom_workfile_template_by_context( - template_profiles, project_doc, asset_doc, task_name, anatomy=None -): - """Filter and fill workfile template profiles by passed context. - - It is expected that passed argument are already queried documents of - project and asset as parents of processing task name. - - Existence of formatted path is not validated. - - Args: - template_profiles(list): Template profiles from settings. - project_doc(dict): Project document from MongoDB. - asset_doc(dict): Asset document from MongoDB. - task_name(str): Name of task for which templates are filtered. - anatomy(Anatomy): Optionally passed anatomy object for passed project - name. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) - - Deprecated: - Function will be removed after release version 3.16.* - """ - - if anatomy is None: - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_doc["name"]) - - # get project, asset, task anatomy context data - anatomy_context_data = _get_task_context_data_for_anatomy( - project_doc, asset_doc, task_name, anatomy - ) - # add root dict - anatomy_context_data["root"] = anatomy.roots - - # get task type for the task in context - current_task_type = anatomy_context_data["task"]["type"] - - # get path from matching profile - matching_item = filter_profiles( - template_profiles, - {"task_types": current_task_type} - ) - # when path is available try to format it in case - # there are some anatomy template strings - if matching_item: - template = matching_item["path"][platform.system().lower()] - return StringTemplate.format_strict_template( - template, anatomy_context_data - ) - - return None - - -@deprecated( - "openpype.pipeline.workfile.get_custom_workfile_template_by_string_context" -) -def get_custom_workfile_template_by_string_context( - template_profiles, project_name, asset_name, task_name, - dbcon=None, anatomy=None -): - """Filter and fill workfile template profiles by passed context. - - Passed context are string representations of project, asset and task. 
- Function will query documents of project and asset to be able use - `get_custom_workfile_template_by_context` for rest of logic. - - Args: - template_profiles(list): Loaded workfile template profiles. - project_name(str): Project name. - asset_name(str): Asset name. - task_name(str): Task name. - dbcon(AvalonMongoDB): Optional avalon implementation of mongo - connection with context Session. - anatomy(Anatomy): Optionally prepared anatomy object for passed - project. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) - - Deprecated: - Function will be removed after release version 3.16.* - """ - - project_name = None - if anatomy is not None: - project_name = anatomy.project_name - - if not project_name and dbcon is not None: - project_name = dbcon.active_project() - - if not project_name: - raise ValueError("Can't determina project") - - project_doc = get_project(project_name, fields=["name", "data.code"]) - asset_doc = get_asset_by_name( - project_name, asset_name, fields=["name", "data.tasks"]) - - return get_custom_workfile_template_by_context( - template_profiles, project_doc, asset_doc, task_name, anatomy - ) - - -@deprecated("openpype.pipeline.context_tools.get_custom_workfile_template") -def get_custom_workfile_template(template_profiles): - """Filter and fill workfile template profiles by current context. - - Current context is defined by `legacy_io.Session`. That's why this - function should be used only inside host where context is set and stable. - - Args: - template_profiles(list): Template profiles from settings. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - - return get_custom_workfile_template_by_string_context( - template_profiles, - legacy_io.Session["AVALON_PROJECT"], - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"], - legacy_io - ) - - -@deprecated("openpype.pipeline.workfile.get_last_workfile_with_version") -def get_last_workfile_with_version( - workdir, file_template, fill_data, extensions -): - """Return last workfile version. - - Args: - workdir(str): Path to dir where workfiles are stored. - file_template(str): Template of file name. - fill_data(dict): Data for filling template. - extensions(list, tuple): All allowed file extensions of workfile. - - Returns: - tuple: Last workfile with version if there is any otherwise - returns (None, None). - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_last_workfile_with_version - - return get_last_workfile_with_version( - workdir, file_template, fill_data, extensions - ) - - -@deprecated("openpype.pipeline.workfile.get_last_workfile") -def get_last_workfile( - workdir, file_template, fill_data, extensions, full_path=False -): - """Return last workfile filename. - - Returns file with version 1 if there is not workfile yet. - - Args: - workdir(str): Path to dir where workfiles are stored. - file_template(str): Template of file name. - fill_data(dict): Data for filling template. - extensions(list, tuple): All allowed file extensions of workfile. - full_path(bool): Full path to file is returned if set to True. - - Returns: - str: Last or first workfile as filename of full path to filename. 
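# Illustrative sketch: querying the last workfile with the deprecated wrapper
# above; the workdir, file template and fill data are hypothetical.
last_workfile = get_last_workfile(
    "/projects/demo/sh010/work",
    "{task[name]}_v{version:0>3}",
    {"task": {"name": "animation"}},
    [".ma", ".mb"],
    full_path=True,
)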
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_last_workfile - - return get_last_workfile( - workdir, file_template, fill_data, extensions, full_path - ) - - -@deprecated("openpype.client.get_linked_representation_id") -def get_linked_ids_for_representations( - project_name, repre_ids, dbcon=None, link_type=None, max_depth=0 -): - """Returns list of linked ids of particular type (if provided). - - Goes from representations to version, back to representations - Args: - project_name (str) - repre_ids (list) or (ObjectId) - dbcon (avalon.mongodb.AvalonMongoDB, optional): Avalon Mongo connection - with Session. - link_type (str): ['reference', '..] - max_depth (int): limit how many levels of recursion - - Returns: - (list) of ObjectId - linked representations - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client import get_linked_representation_id - - if not isinstance(repre_ids, list): - repre_ids = [repre_ids] - - output = [] - for repre_id in repre_ids: - output.extend(get_linked_representation_id( - project_name, - repre_id=repre_id, - link_type=link_type, - max_depth=max_depth - )) - return output diff --git a/openpype/lib/delivery.py b/openpype/lib/delivery.py deleted file mode 100644 index efb542de75..0000000000 --- a/openpype/lib/delivery.py +++ /dev/null @@ -1,252 +0,0 @@ -"""Functions useful for delivery action or loader""" -import os -import shutil -import functools -import warnings - - -class DeliveryDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", DeliveryDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=DeliveryDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.lib.path_tools.collect_frames") -def collect_frames(files): - """Returns dict of source path and its frame, if from sequence - - Uses clique as most precise solution, used when anatomy template that - created files is not known. - - Assumption is that frames are separated by '.', negative frames are not - allowed. - - Args: - files(list) or (set with single value): list of source paths - - Returns: - (dict): {'/asset/subset_v001.0001.png': '0001', ....} - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from .path_tools import collect_frames - - return collect_frames(files) - - -@deprecated("openpype.lib.path_tools.format_file_size") -def sizeof_fmt(num, suffix=None): - """Returns formatted string with size in appropriate unit - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. 
- """ - - from .path_tools import format_file_size - return format_file_size(num, suffix) - - -@deprecated("openpype.pipeline.load.get_representation_path_with_anatomy") -def path_from_representation(representation, anatomy): - """Get representation path using representation document and anatomy. - - Args: - representation (Dict[str, Any]): Representation document. - anatomy (Anatomy): Project anatomy. - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.load import get_representation_path_with_anatomy - - return get_representation_path_with_anatomy(representation, anatomy) - - -@deprecated -def copy_file(src_path, dst_path): - """Hardlink file if possible(to save space), copy if not""" - from openpype.lib import create_hard_link # safer importing - - if os.path.exists(dst_path): - return - try: - create_hard_link( - src_path, - dst_path - ) - except OSError: - shutil.copyfile(src_path, dst_path) - - -@deprecated("openpype.pipeline.delivery.get_format_dict") -def get_format_dict(anatomy, location_path): - """Returns replaced root values from user provider value. - - Args: - anatomy (Anatomy) - location_path (str): user provided value - - Returns: - (dict): prepared for formatting of a template - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import get_format_dict - - return get_format_dict(anatomy, location_path) - - -@deprecated("openpype.pipeline.delivery.check_destination_path") -def check_destination_path(repre_id, - anatomy, anatomy_data, - datetime_data, template_name): - """ Try to create destination path based on 'template_name'. - - In the case that path cannot be filled, template contains unmatched - keys, provide error message to filter out repre later. - - Args: - anatomy (Anatomy) - anatomy_data (dict): context to fill anatomy - datetime_data (dict): values with actual date - template_name (str): to pick correct delivery template - - Returns: - (collections.defauldict): {"TYPE_OF_ERROR":"ERROR_DETAIL"} - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import check_destination_path - - return check_destination_path( - repre_id, - anatomy, - anatomy_data, - datetime_data, - template_name - ) - - -@deprecated("openpype.pipeline.delivery.deliver_single_file") -def process_single_file( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log -): - """Copy single file to calculated path based on template - - Args: - src_path(str): path of source representation file - _repre (dict): full repre, used only in process_sequence, here only - as to share same signature - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - - Returns: - (collections.defaultdict , int) - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. 
- """ - - from openpype.pipeline.delivery import deliver_single_file - - return deliver_single_file( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log - ) - - -@deprecated("openpype.pipeline.delivery.deliver_sequence") -def process_sequence( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log -): - """ For Pype2(mainly - works in 3 too) where representation might not - contain files. - - Uses listing physical files (not 'files' on repre as a)might not be - present, b)might not be reliable for representation and copying them. - - TODO Should be refactored when files are sufficient to drive all - representations. - - Args: - src_path(str): path of source representation file - repre (dict): full representation - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - - Returns: - (collections.defaultdict , int) - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import deliver_sequence - - return deliver_sequence( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log - ) diff --git a/openpype/lib/execute.py b/openpype/lib/execute.py index 6f52efdfcc..c54541a116 100644 --- a/openpype/lib/execute.py +++ b/openpype/lib/execute.py @@ -5,6 +5,8 @@ import platform import json import tempfile +from openpype import AYON_SERVER_ENABLED + from .log import Logger from .vendor_bin_utils import find_executable @@ -162,12 +164,19 @@ def run_subprocess(*args, **kwargs): return full_output -def clean_envs_for_openpype_process(env=None): - """Modify environments that may affect OpenPype process. +def clean_envs_for_ayon_process(env=None): + """Modify environments that may affect ayon-launcher process. Main reason to implement this function is to pop PYTHONPATH which may be affected by in-host environments. + + Args: + env (Optional[dict[str, str]]): Environment variables to modify. + + Returns: + dict[str, str]: Environment variables for ayon process. """ + if env is None: env = os.environ @@ -179,6 +188,64 @@ def clean_envs_for_openpype_process(env=None): return env +def clean_envs_for_openpype_process(env=None): + """Modify environments that may affect OpenPype process. + + Main reason to implement this function is to pop PYTHONPATH which may be + affected by in-host environments. + """ + + if AYON_SERVER_ENABLED: + return clean_envs_for_ayon_process(env=env) + + if env is None: + env = os.environ + + # Exclude some environment variables from a copy of the environment + env = env.copy() + for key in ["PYTHONPATH", "PYTHONHOME"]: + env.pop(key, None) + + return env + + +def run_ayon_launcher_process(*args, **kwargs): + """Execute OpenPype process with passed arguments and wait. + + Wrapper for 'run_process' which prepends OpenPype executable arguments + before passed arguments and define environments if are not passed. + + Values from 'os.environ' are used for environments if are not passed. + They are cleaned using 'clean_envs_for_openpype_process' function. + + Example: + ``` + run_ayon_process("run", "") + ``` + + Args: + *args (str): ayon-launcher cli arguments. + **kwargs (Any): Keyword arguments for subprocess.Popen. 
+ + Returns: + str: Full output of subprocess concatenated stdout and stderr. + """ + + args = get_ayon_launcher_args(*args) + env = kwargs.pop("env", None) + # Keep env untouched if are passed and not empty + if not env: + # Skip envs that can affect OpenPype process + # - fill more if you find more + env = clean_envs_for_openpype_process(os.environ) + + # Only keep OpenPype version if we are running from build. + if not is_running_from_build(): + env.pop("OPENPYPE_VERSION", None) + + return run_subprocess(args, env=env, **kwargs) + + def run_openpype_process(*args, **kwargs): """Execute OpenPype process with passed arguments and wait. @@ -189,14 +256,16 @@ def run_openpype_process(*args, **kwargs): They are cleaned using 'clean_envs_for_openpype_process' function. Example: - ``` - run_detached_process("run", "") - ``` + >>> run_openpype_process("version") Args: *args (tuple): OpenPype cli arguments. - **kwargs (dict): Keyword arguments for for subprocess.Popen. + **kwargs (dict): Keyword arguments for subprocess.Popen. """ + + if AYON_SERVER_ENABLED: + return run_ayon_launcher_process(*args, **kwargs) + args = get_openpype_execute_args(*args) env = kwargs.pop("env", None) # Keep env untouched if are passed and not empty @@ -219,18 +288,18 @@ def run_detached_process(args, **kwargs): They are cleaned using 'clean_envs_for_openpype_process' function. Example: - ``` - run_detached_openpype_process("run", "") - ``` + >>> run_detached_process("run", "./path_to.py") + Args: *args (tuple): OpenPype cli arguments. - **kwargs (dict): Keyword arguments for for subprocess.Popen. + **kwargs (dict): Keyword arguments for subprocess.Popen. Returns: subprocess.Popen: Pointer to launched process but it is possible that launched process is already killed (on linux). """ + env = kwargs.pop("env", None) # Keep env untouched if are passed and not empty if not env: @@ -294,16 +363,37 @@ def path_to_subprocess_arg(path): return subprocess.list2cmdline([path]) -def get_pype_execute_args(*args): - """Backwards compatible function for 'get_openpype_execute_args'.""" - import traceback +def get_ayon_launcher_args(*args): + """Arguments to run ayon-launcher process. - log = Logger.get_logger("get_pype_execute_args") - stack = "\n".join(traceback.format_stack()) - log.warning(( - "Using deprecated function 'get_pype_execute_args'. Called from:\n{}" - ).format(stack)) - return get_openpype_execute_args(*args) + Arguments for subprocess when need to spawn new pype process. Which may be + needed when new python process for pype scripts must be executed in build + pype. + + Reasons: + Ayon-launcher started from code has different executable set to + virtual env python and must have path to script as first argument + which is not needed for built application. + + Args: + *args (str): Any arguments that will be added after executables. + + Returns: + list[str]: List of arguments to run ayon-launcher process. + """ + + executable = os.environ["AYON_EXECUTABLE"] + launch_args = [executable] + + executable_filename = os.path.basename(executable) + if "python" in executable_filename.lower(): + filepath = os.path.join(os.environ["AYON_ROOT"], "start.py") + launch_args.append(filepath) + + if args: + launch_args.extend(args) + + return launch_args def get_openpype_execute_args(*args): @@ -321,19 +411,22 @@ def get_openpype_execute_args(*args): It is possible to pass any arguments that will be added after pype executables. 
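# Illustrative sketch of the argument shapes produced above; all paths are
# hypothetical.
#   built launcher:  ["/opt/ayon/ayon", "run", "script.py"]
#   from code:       ["/usr/bin/python3", "/dev/ayon-launcher/start.py",
#                     "run", "script.py"]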
""" - pype_executable = os.environ["OPENPYPE_EXECUTABLE"] - pype_args = [pype_executable] - executable_filename = os.path.basename(pype_executable) + if AYON_SERVER_ENABLED: + return get_ayon_launcher_args(*args) + + executable = os.environ["OPENPYPE_EXECUTABLE"] + launch_args = [executable] + + executable_filename = os.path.basename(executable) if "python" in executable_filename.lower(): - pype_args.append( - os.path.join(os.environ["OPENPYPE_ROOT"], "start.py") - ) + filepath = os.path.join(os.environ["OPENPYPE_ROOT"], "start.py") + launch_args.append(filepath) if args: - pype_args.extend(args) + launch_args.extend(args) - return pype_args + return launch_args def get_linux_launcher_args(*args): @@ -345,6 +438,9 @@ def get_linux_launcher_args(*args): It is possible that this function is used in OpenPype build which does not have yet the new executable. In that case 'None' is returned. + Todos: + Replace by script in scripts for ayon-launcher. + Args: args (iterable): List of additional arguments added after executable argument. @@ -353,19 +449,24 @@ def get_linux_launcher_args(*args): list: Executables with possible positional argument to script when called from code. """ - filename = "app_launcher" - openpype_executable = os.environ["OPENPYPE_EXECUTABLE"] - executable_filename = os.path.basename(openpype_executable) + filename = "app_launcher" + if AYON_SERVER_ENABLED: + executable = os.environ["AYON_EXECUTABLE"] + else: + executable = os.environ["OPENPYPE_EXECUTABLE"] + + executable_filename = os.path.basename(executable) if "python" in executable_filename.lower(): - script_path = os.path.join( - os.environ["OPENPYPE_ROOT"], - "{}.py".format(filename) - ) - launch_args = [openpype_executable, script_path] + if AYON_SERVER_ENABLED: + root = os.environ["AYON_ROOT"] + else: + root = os.environ["OPENPYPE_ROOT"] + script_path = os.path.join(root, "{}.py".format(filename)) + launch_args = [executable, script_path] else: new_executable = os.path.join( - os.path.dirname(openpype_executable), + os.path.dirname(executable), filename ) executable_path = find_executable(new_executable) diff --git a/openpype/lib/local_settings.py b/openpype/lib/local_settings.py index c6c9699240..3fb35a7e7b 100644 --- a/openpype/lib/local_settings.py +++ b/openpype/lib/local_settings.py @@ -29,6 +29,7 @@ except ImportError: import six import appdirs +from openpype import AYON_SERVER_ENABLED from openpype.settings import ( get_local_settings, get_system_settings @@ -517,11 +518,54 @@ def _create_local_site_id(registry=None): return new_id +def get_ayon_appdirs(*args): + """Local app data directory of AYON client. + + Args: + *args (Iterable[str]): Subdirectories/files in local app data dir. + + Returns: + str: Path to directory/file in local app data dir. + """ + + return os.path.join( + appdirs.user_data_dir("AYON", "Ynput"), + *args + ) + + +def _get_ayon_local_site_id(): + # used for background syncing + site_id = os.environ.get("AYON_SITE_ID") + if site_id: + return site_id + + site_id_path = get_ayon_appdirs("site_id") + if os.path.exists(site_id_path): + with open(site_id_path, "r") as stream: + site_id = stream.read() + + if site_id: + return site_id + + try: + from ayon_common.utils import get_local_site_id as _get_local_site_id + site_id = _get_local_site_id() + except ImportError: + raise ValueError("Couldn't access local site id") + + return site_id + + def get_local_site_id(): """Get local site identifier. Identifier is created if does not exists yet. 
""" + + if AYON_SERVER_ENABLED: + return _get_ayon_local_site_id() + # override local id from environment # used for background syncing if os.environ.get("OPENPYPE_LOCAL_ID"): diff --git a/openpype/lib/log.py b/openpype/lib/log.py index 26dcd86eec..72071063ec 100644 --- a/openpype/lib/log.py +++ b/openpype/lib/log.py @@ -24,6 +24,7 @@ import traceback import threading import copy +from openpype import AYON_SERVER_ENABLED from openpype.client.mongo import ( MongoEnvNotSet, get_default_components, @@ -212,7 +213,7 @@ class Logger: log_mongo_url_components = None # Database name in Mongo - log_database_name = os.environ["OPENPYPE_DATABASE_NAME"] + log_database_name = os.environ.get("OPENPYPE_DATABASE_NAME") # Collection name under database in Mongo log_collection_name = "logs" @@ -326,12 +327,17 @@ class Logger: # Change initialization state to prevent runtime changes # if is executed during runtime cls.initialized = False - cls.log_mongo_url_components = get_default_components() + if not AYON_SERVER_ENABLED: + cls.log_mongo_url_components = get_default_components() # Define if should logging to mongo be used - use_mongo_logging = bool(log4mongo is not None) - if use_mongo_logging: - use_mongo_logging = os.environ.get("OPENPYPE_LOG_TO_SERVER") == "1" + if AYON_SERVER_ENABLED: + use_mongo_logging = False + else: + use_mongo_logging = ( + log4mongo is not None + and os.environ.get("OPENPYPE_LOG_TO_SERVER") == "1" + ) # Set mongo id for process (ONLY ONCE) if use_mongo_logging and cls.mongo_process_id is None: @@ -453,6 +459,9 @@ class Logger: if not cls.use_mongo_logging: return + if not cls.log_database_name: + raise ValueError("Database name for logs is not set") + client = log4mongo.handlers._connection if not client: client = cls.get_log_mongo_connection() @@ -483,21 +492,3 @@ class Logger: cls.initialize() return OpenPypeMongoConnection.get_mongo_client() - - -class PypeLogger(Logger): - """Duplicate of 'Logger'. - - Deprecated: - Class will be removed after release version 3.16.* - """ - - @classmethod - def get_logger(cls, *args, **kwargs): - logger = Logger.get_logger(*args, **kwargs) - # TODO uncomment when replaced most of places - logger.warning(( - "'openpype.lib.PypeLogger' is deprecated class." - " Please use 'openpype.lib.Logger' instead." - )) - return logger diff --git a/openpype/lib/mongo.py b/openpype/lib/mongo.py deleted file mode 100644 index bb2ee6016a..0000000000 --- a/openpype/lib/mongo.py +++ /dev/null @@ -1,61 +0,0 @@ -import warnings -import functools -from openpype.client.mongo import ( - MongoEnvNotSet, - OpenPypeMongoConnection, -) - - -class MongoDeprecatedWarning(DeprecationWarning): - pass - - -def mongo_deprecated(func): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - @functools.wraps(func) - def new_func(*args, **kwargs): - warnings.simplefilter("always", MongoDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'." - " Function was moved to 'openpype.client.mongo'." 
- ).format(func.__name__), - category=MongoDeprecatedWarning, - stacklevel=2 - ) - return func(*args, **kwargs) - return new_func - - -@mongo_deprecated -def get_default_components(): - from openpype.client.mongo import get_default_components - - return get_default_components() - - -@mongo_deprecated -def should_add_certificate_path_to_mongo_url(mongo_url): - from openpype.client.mongo import should_add_certificate_path_to_mongo_url - - return should_add_certificate_path_to_mongo_url(mongo_url) - - -@mongo_deprecated -def validate_mongo_connection(mongo_uri): - from openpype.client.mongo import validate_mongo_connection - - return validate_mongo_connection(mongo_uri) - - -__all__ = ( - "MongoEnvNotSet", - "OpenPypeMongoConnection", - "get_default_components", - "should_add_certificate_path_to_mongo_url", - "validate_mongo_connection", -) diff --git a/openpype/lib/openpype_version.py b/openpype/lib/openpype_version.py index e052002468..1c8356d5fe 100644 --- a/openpype/lib/openpype_version.py +++ b/openpype/lib/openpype_version.py @@ -13,6 +13,7 @@ import os import sys import openpype.version +from openpype import AYON_SERVER_ENABLED from .python_module_tools import import_filepath @@ -25,8 +26,25 @@ def get_openpype_version(): return openpype.version.__version__ +def get_ayon_launcher_version(): + version_filepath = os.path.join( + os.environ["AYON_ROOT"], + "version.py" + ) + if not os.path.exists(version_filepath): + return None + content = {} + with open(version_filepath, "r") as stream: + exec(stream.read(), content) + return content["__version__"] + + def get_build_version(): """OpenPype version of build.""" + + if AYON_SERVER_ENABLED: + return get_ayon_launcher_version() + # Return OpenPype version if is running from code if not is_running_from_build(): return get_openpype_version() @@ -50,7 +68,11 @@ def is_running_from_build(): Returns: bool: True if running from build. """ - executable_path = os.environ["OPENPYPE_EXECUTABLE"] + + if AYON_SERVER_ENABLED: + executable_path = os.environ["AYON_EXECUTABLE"] + else: + executable_path = os.environ["OPENPYPE_EXECUTABLE"] executable_filename = os.path.basename(executable_path) if "python" in executable_filename.lower(): return False @@ -58,6 +80,8 @@ def is_running_from_build(): def is_staging_enabled(): + if AYON_SERVER_ENABLED: + return os.getenv("AYON_USE_STAGING") == "1" return os.environ.get("OPENPYPE_USE_STAGING") == "1" @@ -88,6 +112,9 @@ def is_running_staging(): bool: Using staging version or not. """ + if AYON_SERVER_ENABLED: + return is_staging_enabled() + if os.environ.get("OPENPYPE_IS_STAGING") == "1": return True diff --git a/openpype/lib/path_tools.py b/openpype/lib/path_tools.py index 0b6d0a3391..fec6a0c47d 100644 --- a/openpype/lib/path_tools.py +++ b/openpype/lib/path_tools.py @@ -2,59 +2,12 @@ import os import re import logging import platform -import functools -import warnings import clique log = logging.getLogger(__name__) -class PathToolsDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." 
- ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", PathToolsDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=PathToolsDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - def format_file_size(file_size, suffix=None): """Returns formatted string with size in appropriate unit. @@ -269,99 +222,3 @@ def get_last_version_from_path(path_dir, filter): return filtred_files[-1] return None - - -@deprecated("openpype.pipeline.project_folders.concatenate_splitted_paths") -def concatenate_splitted_paths(split_paths, anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import concatenate_splitted_paths - - return concatenate_splitted_paths(split_paths, anatomy) - - -@deprecated -def get_format_data(anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.template_data import get_project_template_data - - data = get_project_template_data(project_name=anatomy.project_name) - data["root"] = anatomy.roots - return data - - -@deprecated("openpype.pipeline.project_folders.fill_paths") -def fill_paths(path_list, anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import fill_paths - - return fill_paths(path_list, anatomy) - - -@deprecated("openpype.pipeline.project_folders.create_project_folders") -def create_project_folders(basic_paths, project_name): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import create_project_folders - - return create_project_folders(project_name, basic_paths) - - -@deprecated("openpype.pipeline.project_folders.get_project_basic_paths") -def get_project_basic_paths(project_name): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import get_project_basic_paths - - return get_project_basic_paths(project_name) - - -@deprecated("openpype.pipeline.workfile.create_workdir_extra_folders") -def create_workdir_extra_folders( - workdir, host_name, task_type, task_name, project_name, - project_settings=None -): - """Create extra folders in work directory based on context. - - Args: - workdir (str): Path to workdir where workfiles is stored. - host_name (str): Name of host implementation. - task_type (str): Type of task for which extra folders should be - created. - task_name (str): Name of task for which extra folders should be - created. - project_name (str): Name of project on which task is. - project_settings (dict): Prepared project settings. Are loaded if not - passed. 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import create_workdir_extra_folders - - return create_workdir_extra_folders( - workdir, - host_name, - task_type, - task_name, - project_name, - project_settings - ) diff --git a/openpype/lib/plugin_tools.py b/openpype/lib/plugin_tools.py index 10fd3940b8..d204fc2c8f 100644 --- a/openpype/lib/plugin_tools.py +++ b/openpype/lib/plugin_tools.py @@ -4,157 +4,9 @@ import os import logging import re -import warnings -import functools - -from openpype.client import get_asset_by_id - log = logging.getLogger(__name__) -class PluginToolsDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", PluginToolsDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=PluginToolsDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.pipeline.create.TaskNotSetError") -def TaskNotSetError(*args, **kwargs): - from openpype.pipeline.create import TaskNotSetError - - return TaskNotSetError(*args, **kwargs) - - -@deprecated("openpype.pipeline.create.get_subset_name") -def get_subset_name_with_asset_doc( - family, - variant, - task_name, - asset_doc, - project_name=None, - host_name=None, - default_template=None, - dynamic_data=None -): - """Calculate subset name based on passed context and OpenPype settings. - - Subst name templates are defined in `project_settings/global/tools/creator - /subset_name_profiles` where are profiles with host name, family, task name - and task type filters. If context does not match any profile then - `DEFAULT_SUBSET_TEMPLATE` is used as default template. - - That's main reason why so many arguments are required to calculate subset - name. - - Args: - family (str): Instance family. - variant (str): In most of cases it is user input during creation. - task_name (str): Task name on which context is instance created. - asset_doc (dict): Queried asset document with it's tasks in data. - Used to get task type. - project_name (str): Name of project on which is instance created. - Important for project settings that are loaded. - host_name (str): One of filtering criteria for template profile - filters. - default_template (str): Default template if any profile does not match - passed context. Constant 'DEFAULT_SUBSET_TEMPLATE' is used if - is not passed. - dynamic_data (dict): Dynamic data specific for a creator which creates - instance. 
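# Illustrative sketch of the deprecated helper above; it forwards to
# 'openpype.pipeline.create.get_subset_name'. Context values are
# hypothetical and 'asset_doc' is assumed to be queried already.
subset_name = get_subset_name_with_asset_doc(
    "model",
    "Main",
    "modeling",
    asset_doc,
    project_name="demo_project",
    host_name="maya",
)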
- """ - - from openpype.pipeline.create import get_subset_name - - return get_subset_name( - family, - variant, - task_name, - asset_doc, - project_name, - host_name, - default_template, - dynamic_data - ) - - -@deprecated -def get_subset_name( - family, - variant, - task_name, - asset_id, - project_name=None, - host_name=None, - default_template=None, - dynamic_data=None, - dbcon=None -): - """Calculate subset name using OpenPype settings. - - This variant of function expects asset id as argument. - - This is legacy function should be replaced with - `get_subset_name_with_asset_doc` where asset document is expected. - """ - - from openpype.pipeline.create import get_subset_name - - if project_name is None: - project_name = dbcon.project_name - - asset_doc = get_asset_by_id(project_name, asset_id, fields=["data.tasks"]) - - return get_subset_name( - family, - variant, - task_name, - asset_doc, - project_name, - host_name, - default_template, - dynamic_data - ) - - def prepare_template_data(fill_pairs): """ Prepares formatted data for filling template. diff --git a/openpype/lib/pype_info.py b/openpype/lib/pype_info.py index 8370ecc88f..2f57d76850 100644 --- a/openpype/lib/pype_info.py +++ b/openpype/lib/pype_info.py @@ -5,6 +5,7 @@ import platform import getpass import socket +from openpype import AYON_SERVER_ENABLED from openpype.settings.lib import get_local_settings from .execute import get_openpype_execute_args from .local_settings import get_local_site_id @@ -33,6 +34,21 @@ def get_openpype_info(): } +def get_ayon_info(): + executable_args = get_openpype_execute_args() + if is_running_from_build(): + version_type = "build" + else: + version_type = "code" + return { + "build_verison": get_build_version(), + "version_type": version_type, + "executable": executable_args[-1], + "ayon_root": os.environ["AYON_ROOT"], + "server_url": os.environ["AYON_SERVER_URL"] + } + + def get_workstation_info(): """Basic information about workstation.""" host_name = socket.gethostname() @@ -52,12 +68,17 @@ def get_workstation_info(): def get_all_current_info(): """All information about current process in one dictionary.""" - return { - "pype": get_openpype_info(), + + output = { "workstation": get_workstation_info(), "env": os.environ.copy(), "local_settings": get_local_settings() } + if AYON_SERVER_ENABLED: + output["ayon"] = get_ayon_info() + else: + output["openpype"] = get_openpype_info() + return output def extract_pype_info_to_file(dirpath): diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py index de6495900e..2bae28786e 100644 --- a/openpype/lib/transcoding.py +++ b/openpype/lib/transcoding.py @@ -11,8 +11,8 @@ import xml.etree.ElementTree from .execute import run_subprocess from .vendor_bin_utils import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, ) @@ -83,11 +83,11 @@ def get_oiio_info_for_input(filepath, logger=None, subimages=False): Stdout should contain xml format string. 
""" - args = [ - get_oiio_tools_path(), + args = get_oiio_tool_args( + "oiiotool", "--info", "-v" - ] + ) if subimages: args.append("-a") @@ -486,12 +486,11 @@ def convert_for_ffmpeg( compression = "none" # Prepare subprocess arguments - oiio_cmd = [ - get_oiio_tools_path(), - + oiio_cmd = get_oiio_tool_args( + "oiiotool", # Don't add any additional attributes "--nosoftwareattrib", - ] + ) # Add input compression if available if compression: oiio_cmd.extend(["--compression", compression]) @@ -656,12 +655,11 @@ def convert_input_paths_for_ffmpeg( for input_path in input_paths: # Prepare subprocess arguments - oiio_cmd = [ - get_oiio_tools_path(), - + oiio_cmd = get_oiio_tool_args( + "oiiotool", # Don't add any additional attributes "--nosoftwareattrib", - ] + ) # Add input compression if available if compression: oiio_cmd.extend(["--compression", compression]) @@ -729,8 +727,8 @@ def get_ffprobe_data(path_to_file, logger=None): logger.info( "Getting information about input \"{}\".".format(path_to_file) ) - args = [ - get_ffmpeg_tool_path("ffprobe"), + ffprobe_args = get_ffmpeg_tool_args("ffprobe") + args = ffprobe_args + [ "-hide_banner", "-loglevel", "fatal", "-show_error", @@ -1084,13 +1082,13 @@ def convert_colorspace( if logger is None: logger = logging.getLogger(__name__) - oiio_cmd = [ - get_oiio_tools_path(), + oiio_cmd = get_oiio_tool_args( + "oiiotool", input_path, # Don't add any additional attributes "--nosoftwareattrib", "--colorconfig", config_path - ] + ) if all([target_colorspace, view, display]): raise ValueError("Colorspace and both screen and display" diff --git a/openpype/lib/usdlib.py b/openpype/lib/usdlib.py index 5ef1d38f87..c166feb3a6 100644 --- a/openpype/lib/usdlib.py +++ b/openpype/lib/usdlib.py @@ -9,7 +9,7 @@ except ImportError: from mvpxr import Usd, UsdGeom, Sdf, Kind from openpype.client import get_project, get_asset_by_name -from openpype.pipeline import legacy_io, Anatomy +from openpype.pipeline import Anatomy, get_current_project_name log = logging.getLogger(__name__) @@ -126,7 +126,7 @@ def create_model(filename, asset, variant_subsets): """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_doc = get_asset_by_name(project_name, asset) assert asset_doc, "Asset not found: %s" % asset @@ -177,7 +177,7 @@ def create_shade(filename, asset, variant_subsets): """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_doc = get_asset_by_name(project_name, asset) assert asset_doc, "Asset not found: %s" % asset @@ -213,7 +213,7 @@ def create_shade_variation(filename, asset, model_variant, shade_variants): """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_doc = get_asset_by_name(project_name, asset) assert asset_doc, "Asset not found: %s" % asset @@ -314,7 +314,7 @@ def get_usd_master_path(asset, subset, representation): """ - project_name = legacy_io.active_project() + project_name = get_current_project_name() anatomy = Anatomy(project_name) project_doc = get_project( project_name, @@ -334,6 +334,9 @@ def get_usd_master_path(asset, subset, representation): "name": project_name, "code": project_doc.get("data", {}).get("code") }, + "folder": { + "name": asset_doc["name"], + }, "asset": asset_doc["name"], "subset": subset, "representation": representation, diff --git a/openpype/lib/vendor_bin_utils.py b/openpype/lib/vendor_bin_utils.py index f27c78d486..dc8bb7435e 100644 --- a/openpype/lib/vendor_bin_utils.py +++ 
b/openpype/lib/vendor_bin_utils.py @@ -3,9 +3,15 @@ import logging import platform import subprocess +from openpype import AYON_SERVER_ENABLED + log = logging.getLogger("Vendor utils") +class ToolNotFoundError(Exception): + """Raised when tool arguments are not found.""" + + class CachedToolPaths: """Cache already used and discovered tools and their executables. @@ -252,7 +258,7 @@ def _check_args_returncode(args): return proc.returncode == 0 -def _oiio_executable_validation(filepath): +def _oiio_executable_validation(args): """Validate oiio tool executable if can be executed. Validation has 2 steps. First is using 'find_executable' to fill possible @@ -270,32 +276,63 @@ def _oiio_executable_validation(filepath): should be used. Args: - filepath (str): Path to executable. + args (Union[str, list[str]]): Arguments to launch tool or + path to tool executable. Returns: bool: Filepath is valid executable. """ - filepath = find_executable(filepath) - if not filepath: + if not args: return False - return _check_args_returncode([filepath, "--help"]) + if not isinstance(args, list): + filepath = find_executable(args) + if not filepath: + return False + args = [filepath] + return _check_args_returncode(args + ["--help"]) + + +def _get_ayon_oiio_tool_args(tool_name): + try: + # Use 'ayon-third-party' addon to get oiio arguments + from ayon_third_party import get_oiio_arguments + except Exception: + print("!!! Failed to import 'ayon_third_party' addon.") + return None + + try: + return get_oiio_arguments(tool_name) + except Exception as exc: + print("!!! Failed to get OpenImageIO args. Reason: {}".format(exc)) + return None def get_oiio_tools_path(tool="oiiotool"): - """Path to vendorized OpenImageIO tool executables. + """Path to OpenImageIO tool executables. - On Window it adds .exe extension if missing from tool argument. + On Windows it adds .exe extension if missing from tool argument. Args: - tool (string): Tool name (oiiotool, maketx, ...). + tool (string): Tool name 'oiiotool', 'maketx', etc. Default is "oiiotool". """ if CachedToolPaths.is_tool_cached(tool): return CachedToolPaths.get_executable_path(tool) + if AYON_SERVER_ENABLED: + args = _get_ayon_oiio_tool_args(tool) + if args: + if len(args) > 1: + raise ValueError( + "AYON oiio arguments consist of multiple arguments." + ) + tool_executable_path = args[0] + CachedToolPaths.cache_executable_path(tool, tool_executable_path) + return tool_executable_path + custom_paths_str = os.environ.get("OPENPYPE_OIIO_PATHS") or "" tool_executable_path = find_tool_in_custom_paths( custom_paths_str.split(os.pathsep), @@ -321,7 +358,33 @@ def get_oiio_tools_path(tool="oiiotool"): return tool_executable_path -def _ffmpeg_executable_validation(filepath): +def get_oiio_tool_args(tool_name, *extra_args): + """Arguments to launch OpenImageIO tool. + + Args: + tool_name (str): Tool name 'oiiotool', 'maketx', etc. + *extra_args (str): Extra arguments to add to after tool arguments. + + Returns: + list[str]: List of arguments. + """ + + extra_args = list(extra_args) + + if AYON_SERVER_ENABLED: + args = _get_ayon_oiio_tool_args(tool_name) + if args: + return args + extra_args + + path = get_oiio_tools_path(tool_name) + if path: + return [path] + extra_args + raise ToolNotFoundError( + "OIIO '{}' tool not found.".format(tool_name) + ) + + +def _ffmpeg_executable_validation(args): """Validate ffmpeg tool executable if can be executed. Validation has 2 steps. 
First is using 'find_executable' to fill possible @@ -338,24 +401,45 @@ It does not validate if the executable is really an ffmpeg tool. Args: - filepath (str): Path to executable. + args (Union[str, list[str]]): Arguments to launch tool or + path to tool executable. Returns: bool: True if the tool can be executed. """ - filepath = find_executable(filepath) - if not filepath: + if not args: return False - return _check_args_returncode([filepath, "-version"]) + if not isinstance(args, list): + filepath = find_executable(args) + if not filepath: + return False + args = [filepath] + return _check_args_returncode(args + ["--help"]) + + +def _get_ayon_ffmpeg_tool_args(tool_name): + try: + # Use 'ayon-third-party' addon to get ffmpeg arguments + from ayon_third_party import get_ffmpeg_arguments + + except Exception: + print("!!! Failed to import 'ayon_third_party' addon.") + return None + + try: + return get_ffmpeg_arguments(tool_name) + except Exception as exc: + print("!!! Failed to get FFmpeg args. Reason: {}".format(exc)) + return None def get_ffmpeg_tool_path(tool="ffmpeg"): """Path to vendorized FFmpeg executable. Args: - tool (string): Tool name (ffmpeg, ffprobe, ...). + tool (str): Tool name 'ffmpeg', 'ffprobe', etc. Default is "ffmpeg". Returns: @@ -365,6 +449,17 @@ if CachedToolPaths.is_tool_cached(tool): return CachedToolPaths.get_executable_path(tool) + if AYON_SERVER_ENABLED: + args = _get_ayon_ffmpeg_tool_args(tool) + if args is not None: + if len(args) > 1: + raise ValueError( + "AYON ffmpeg arguments consist of multiple arguments." + ) + tool_executable_path = args[0] + CachedToolPaths.cache_executable_path(tool, tool_executable_path) + return tool_executable_path + custom_paths_str = os.environ.get("OPENPYPE_FFMPEG_PATHS") or "" tool_executable_path = find_tool_in_custom_paths( custom_paths_str.split(os.pathsep), @@ -390,19 +485,44 @@ return tool_executable_path +def get_ffmpeg_tool_args(tool_name, *extra_args): + """Arguments to launch FFmpeg tool. + + Args: + tool_name (str): Tool name 'ffmpeg', 'ffprobe', etc. + *extra_args (str): Extra arguments to add after the tool arguments. + + Returns: + list[str]: List of arguments. + """ + + extra_args = list(extra_args) + + if AYON_SERVER_ENABLED: + args = _get_ayon_ffmpeg_tool_args(tool_name) + if args: + return args + extra_args + + executable_path = get_ffmpeg_tool_path(tool_name) + if executable_path: + return [executable_path] + extra_args + raise ToolNotFoundError( + "FFmpeg '{}' tool not found.".format(tool_name) + ) + + def is_oiio_supported(): """Checks if oiiotool is configured for this platform. Returns: bool: OIIO tool executable is available.
""" - loaded_path = oiio_path = get_oiio_tools_path() - if oiio_path: - oiio_path = find_executable(oiio_path) - if not oiio_path: - log.debug("OIIOTool is not configured or not present at {}".format( - loaded_path - )) + try: + args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + args = None + if not args: + log.debug("OIIOTool is not configured or not present.") return False - return True + return _oiio_executable_validation(args) diff --git a/openpype/modules/base.py b/openpype/modules/base.py index fb9b4e1096..9b3637c48a 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- """Base class for Pype Modules.""" +import copy import os import sys import json @@ -12,8 +13,12 @@ import collections import traceback from uuid import uuid4 from abc import ABCMeta, abstractmethod -import six +import six +import appdirs +import ayon_api + +from openpype import AYON_SERVER_ENABLED from openpype.settings import ( get_system_settings, SYSTEM_SETTINGS_KEY, @@ -30,8 +35,9 @@ from openpype.settings.lib import ( from openpype.lib import ( Logger, import_filepath, - import_module_from_dirpath + import_module_from_dirpath, ) +from openpype.lib.openpype_version import is_staging_enabled from .interfaces import ( OpenPypeInterface, @@ -186,7 +192,11 @@ def get_dynamic_modules_dirs(): Returns: list: Paths loaded from studio overrides. """ + output = [] + if AYON_SERVER_ENABLED: + return output + value = get_studio_system_settings_overrides() for key in ("modules", "addon_paths", platform.system().lower()): if key not in value: @@ -299,6 +309,134 @@ def load_modules(force=False): time.sleep(0.1) +def _get_ayon_addons_information(): + """Receive information about addons to use from server. + + Todos: + Actually ask server for the information. + Allow project name as optional argument to be able to query information + about used addons for specific project. + Returns: + List[Dict[str, Any]]: List of addon information to use. + """ + + output = [] + bundle_name = os.getenv("AYON_BUNDLE_NAME") + bundles = ayon_api.get_bundles()["bundles"] + final_bundle = next( + ( + bundle + for bundle in bundles + if bundle["name"] == bundle_name + ), + None + ) + if final_bundle is None: + return output + + bundle_addons = final_bundle["addons"] + addons = ayon_api.get_addons_info()["addons"] + for addon in addons: + name = addon["name"] + versions = addon.get("versions") + addon_version = bundle_addons.get(name) + if addon_version is None or not versions: + continue + version = versions.get(addon_version) + if version: + version = copy.deepcopy(version) + version["name"] = name + version["version"] = addon_version + output.append(version) + return output + + +def _load_ayon_addons(openpype_modules, modules_key, log): + """Load AYON addons based on information from server. + + This function should not trigger downloading of any addons but only use + what is already available on the machine (at least in first stages of + development). + + Args: + openpype_modules (_ModuleClass): Module object where modules are + stored. + log (logging.Logger): Logger object. + + Returns: + List[str]: List of v3 addons to skip to load because v4 alternative is + imported. + """ + + v3_addons_to_skip = [] + + addons_info = _get_ayon_addons_information() + if not addons_info: + return v3_addons_to_skip + addons_dir = os.path.join( + appdirs.user_data_dir("AYON", "Ynput"), + "addons" + ) + if not os.path.exists(addons_dir): + log.warning("Addons directory does not exists. 
Path \"{}\"".format( + addons_dir + )) + return v3_addons_to_skip + + for addon_info in addons_info: + addon_name = addon_info["name"] + addon_version = addon_info["version"] + + folder_name = "{}_{}".format(addon_name, addon_version) + addon_dir = os.path.join(addons_dir, folder_name) + if not os.path.exists(addon_dir): + log.warning(( + "Directory for addon {} {} does not exists. Path \"{}\"" + ).format(addon_name, addon_version, addon_dir)) + continue + + sys.path.insert(0, addon_dir) + imported_modules = [] + for name in os.listdir(addon_dir): + path = os.path.join(addon_dir, name) + basename, ext = os.path.splitext(name) + is_dir = os.path.isdir(path) + is_py_file = ext.lower() == ".py" + if not is_py_file and not is_dir: + continue + + try: + mod = __import__(basename, fromlist=("",)) + imported_modules.append(mod) + except BaseException: + log.warning( + "Failed to import \"{}\"".format(basename), + exc_info=True + ) + + if not imported_modules: + log.warning("Addon {} {} has no content to import".format( + addon_name, addon_version + )) + continue + + if len(imported_modules) == 1: + mod = imported_modules[0] + addon_alias = getattr(mod, "V3_ALIAS", None) + if not addon_alias: + addon_alias = addon_name + v3_addons_to_skip.append(addon_alias) + new_import_str = "{}.{}".format(modules_key, addon_alias) + + sys.modules[new_import_str] = mod + setattr(openpype_modules, addon_alias, mod) + + else: + log.info("More then one module was imported") + + return v3_addons_to_skip + + def _load_modules(): # Key under which will be modules imported in `sys.modules` modules_key = "openpype_modules" @@ -308,6 +446,12 @@ def _load_modules(): log = Logger.get_logger("ModulesLoader") + ignore_addon_names = [] + if AYON_SERVER_ENABLED: + ignore_addon_names = _load_ayon_addons( + openpype_modules, modules_key, log + ) + # Look for OpenPype modules in paths defined with `get_module_dirs` # - dynamically imported OpenPype modules and addons module_dirs = get_module_dirs() @@ -351,6 +495,9 @@ def _load_modules(): fullpath = os.path.join(dirpath, filename) basename, ext = os.path.splitext(filename) + if basename in ignore_addon_names: + continue + # Validations if os.path.isdir(fullpath): # Check existence of init file diff --git a/openpype/modules/deadline/abstract_submit_deadline.py b/openpype/modules/deadline/abstract_submit_deadline.py index e3e94d50cd..23e959d84c 100644 --- a/openpype/modules/deadline/abstract_submit_deadline.py +++ b/openpype/modules/deadline/abstract_submit_deadline.py @@ -19,8 +19,13 @@ import requests import pyblish.api from openpype.pipeline.publish import ( AbstractMetaInstancePlugin, - KnownPublishError + KnownPublishError, + OpenPypePyblishPluginMixin ) +from openpype.pipeline.publish.lib import ( + replace_with_published_scene_path +) +from openpype import AYON_SERVER_ENABLED JSONDecodeError = getattr(json.decoder, "JSONDecodeError", ValueError) @@ -35,7 +40,7 @@ def requests_post(*args, **kwargs): Warning: Disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. + of defense SSL is providing, and it is not recommended. """ if 'verify' not in kwargs: @@ -56,7 +61,7 @@ def requests_get(*args, **kwargs): Warning: Disabling SSL certificate validation is defeating one line - of defense SSL is providing and it is not recommended. + of defense SSL is providing, and it is not recommended. 
""" if 'verify' not in kwargs: @@ -393,16 +398,28 @@ class DeadlineJobInfo(object): for key, value in data.items(): setattr(self, key, value) + def add_render_job_env_var(self): + """Check if in OP or AYON mode and use appropriate env var.""" + if AYON_SERVER_ENABLED: + self.EnvironmentKeyValue["AYON_RENDER_JOB"] = "1" + self.EnvironmentKeyValue["AYON_BUNDLE_NAME"] = ( + os.environ["AYON_BUNDLE_NAME"]) + else: + self.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + @six.add_metaclass(AbstractMetaInstancePlugin) -class AbstractSubmitDeadline(pyblish.api.InstancePlugin): +class AbstractSubmitDeadline(pyblish.api.InstancePlugin, + OpenPypePyblishPluginMixin): """Class abstracting access to Deadline.""" label = "Submit to Deadline" order = pyblish.api.IntegratorOrder + 0.1 + import_reference = False use_published = True asset_dependencies = False + default_priority = 50 def __init__(self, *args, **kwargs): super(AbstractSubmitDeadline, self).__init__(*args, **kwargs) @@ -521,72 +538,8 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin): published. """ - instance = self._instance - workfile_instance = self._get_workfile_instance(instance.context) - if workfile_instance is None: - return - - # determine published path from Anatomy. - template_data = workfile_instance.data.get("anatomyData") - rep = workfile_instance.data["representations"][0] - template_data["representation"] = rep.get("name") - template_data["ext"] = rep.get("ext") - template_data["comment"] = None - - anatomy = instance.context.data['anatomy'] - template_obj = anatomy.templates_obj["publish"]["path"] - template_filled = template_obj.format_strict(template_data) - file_path = os.path.normpath(template_filled) - - self.log.info("Using published scene for render {}".format(file_path)) - - if not os.path.exists(file_path): - self.log.error("published scene does not exist!") - raise - - if not replace_in_path: - return file_path - - # now we need to switch scene in expected files - # because token will now point to published - # scene file and that might differ from current one - def _clean_name(path): - return os.path.splitext(os.path.basename(path))[0] - - new_scene = _clean_name(file_path) - orig_scene = _clean_name(instance.context.data["currentFile"]) - expected_files = instance.data.get("expectedFiles") - - if isinstance(expected_files[0], dict): - # we have aovs and we need to iterate over them - new_exp = {} - for aov, files in expected_files[0].items(): - replaced_files = [] - for f in files: - replaced_files.append( - str(f).replace(orig_scene, new_scene) - ) - new_exp[aov] = replaced_files - # [] might be too much here, TODO - instance.data["expectedFiles"] = [new_exp] - else: - new_exp = [] - for f in expected_files: - new_exp.append( - str(f).replace(orig_scene, new_scene) - ) - instance.data["expectedFiles"] = new_exp - - metadata_folder = instance.data.get("publishRenderMetadataFolder") - if metadata_folder: - metadata_folder = metadata_folder.replace(orig_scene, - new_scene) - instance.data["publishRenderMetadataFolder"] = metadata_folder - self.log.info("Scene name was switched {} -> {}".format( - orig_scene, new_scene - )) - - return file_path + return replace_with_published_scene_path( + self._instance, replace_in_path=replace_in_path) def assemble_payload( self, job_info=None, plugin_info=None, aux_files=None): @@ -647,22 +600,3 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin): self._instance.data["deadlineSubmissionJob"] = result return result["_id"] - - @staticmethod - def 
_get_workfile_instance(context): - """Find workfile instance in context""" - for i in context: - - is_workfile = ( - "workfile" in i.data.get("families", []) or - i.data["family"] == "workfile" - ) - if not is_workfile: - continue - - # test if there is instance of workfile waiting - # to be published. - assert i.data.get("publish", True) is True, ( - "Workfile (scene) must be published along") - - return i diff --git a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py index 2de6073e29..eadfc3c83e 100644 --- a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py +++ b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py @@ -21,6 +21,8 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): def process(self, instance): instance.data["deadlineUrl"] = self._collect_deadline_url(instance) + instance.data["deadlineUrl"] = \ + instance.data["deadlineUrl"].strip().rstrip("/") self.log.info( "Using {} for submission.".format(instance.data["deadlineUrl"])) diff --git a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py index 1a0d615dc3..58721efad3 100644 --- a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py +++ b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py @@ -48,3 +48,6 @@ class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin): context.data["defaultDeadline"] = deadline_webservice self.log.debug("Overriding from project settings with {}".format( # noqa: E501 deadline_webservice)) + + context.data["defaultDeadline"] = \ + context.data["defaultDeadline"].strip().rstrip("/") diff --git a/openpype/modules/deadline/plugins/publish/collect_pools.py b/openpype/modules/deadline/plugins/publish/collect_pools.py index e221eb00ea..a25b149f11 100644 --- a/openpype/modules/deadline/plugins/publish/collect_pools.py +++ b/openpype/modules/deadline/plugins/publish/collect_pools.py @@ -36,12 +36,17 @@ class CollectDeadlinePools(pyblish.api.InstancePlugin, instance.data["primaryPool"] = ( attr_values.get("primaryPool") or self.primary_pool or "none" ) + if instance.data["primaryPool"] == "-": + instance.data["primaryPool"] = None if not instance.data.get("secondaryPool"): instance.data["secondaryPool"] = ( attr_values.get("secondaryPool") or self.secondary_pool or "none" # noqa ) + if instance.data["secondaryPool"] == "-": + instance.data["secondaryPool"] = None + @classmethod def get_attribute_defs(cls): # TODO: Preferably this would be an enum for the user diff --git a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py index 83dd5b49e2..009375e87e 100644 --- a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py @@ -106,8 +106,8 @@ class AfterEffectsSubmitDeadline( if value: dln_job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - dln_job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + dln_job_info.add_render_job_env_var() return dln_job_info diff --git a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py 
b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py index 84fca11d9d..2c37268f04 100644 --- a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py @@ -299,8 +299,8 @@ class HarmonySubmitDeadline( if value: job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() return job_info diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py index 254914a850..108c377078 100644 --- a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py @@ -88,7 +88,6 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): "AVALON_APP_NAME", "OPENPYPE_DEV", "OPENPYPE_LOG_NO_COLORS", - "OPENPYPE_VERSION" ] # Add OpenPype version if we are running from build. @@ -106,8 +105,8 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): if value: job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() for i, filepath in enumerate(instance.data["files"]): dirname = os.path.dirname(filepath) diff --git a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py index b6a30e36b7..8e05582962 100644 --- a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py @@ -20,6 +20,7 @@ from openpype.hosts.max.api.lib import ( from openpype.hosts.max.api.lib_rendersettings import RenderSettings from openpype_modules.deadline import abstract_submit_deadline from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo +from openpype.lib import is_running_from_build @attr.s @@ -110,9 +111,13 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, "AVALON_TASK", "AVALON_APP_NAME", "OPENPYPE_DEV", - "OPENPYPE_VERSION", "IS_TEST" ] + + # Add OpenPype version if we are running from build. 
+ if is_running_from_build(): + keys.append("OPENPYPE_VERSION") + # Add mongo url if it's enabled if self._instance.context.data.get("deadlinePassMongoUrl"): keys.append("OPENPYPE_MONGO") @@ -126,8 +131,8 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, continue job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1" # Add list of expected files to job @@ -179,20 +184,18 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, } self.log.debug("Submitting 3dsMax render..") - payload = self._use_published_name(payload_data) + project_settings = instance.context.data["project_settings"] + payload = self._use_published_name(payload_data, project_settings) job_info, plugin_info = payload self.submit(self.assemble_payload(job_info, plugin_info)) - def _use_published_name(self, data): + def _use_published_name(self, data, project_settings): instance = self._instance job_info = copy.deepcopy(self.job_info) plugin_info = copy.deepcopy(self.plugin_info) plugin_data = {} - project_setting = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] - ) - multipass = get_multipass_setting(project_setting) + multipass = get_multipass_setting(project_settings) if multipass: plugin_data["DisableMultipass"] = 0 else: diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py index a6cdcb7e71..75d24b28f0 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py @@ -30,8 +30,16 @@ import attr from maya import cmds -from openpype.pipeline import legacy_io - +from openpype.pipeline import ( + legacy_io, + OpenPypePyblishPluginMixin +) +from openpype.lib import ( + BoolDef, + NumberDef, + TextDef, + EnumDef +) from openpype.hosts.maya.api.lib_rendersettings import RenderSettings from openpype.hosts.maya.api.lib import get_attr_in_layer @@ -39,6 +47,7 @@ from openpype_modules.deadline import abstract_submit_deadline from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo from openpype.tests.lib import is_in_tests from openpype.lib import is_running_from_build +from openpype.pipeline.farm.tools import iter_expected_files def _validate_deadline_bool_value(instance, attribute, value): @@ -92,7 +101,8 @@ class ArnoldPluginInfo(object): ArnoldFile = attr.ib(default=None) -class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): +class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, + OpenPypePyblishPluginMixin): label = "Submit Render to Deadline" hosts = ["maya"] @@ -106,6 +116,24 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): jobInfo = {} pluginInfo = {} group = "none" + strict_error_checking = True + + @classmethod + def apply_settings(cls, project_settings, system_settings): + settings = project_settings["deadline"]["publish"]["MayaSubmitDeadline"] # noqa + + # Take some defaults from settings + cls.asset_dependencies = settings.get("asset_dependencies", + cls.asset_dependencies) + cls.import_reference = settings.get("import_reference", + cls.import_reference) + cls.use_published = settings.get("use_published", cls.use_published) + cls.priority = settings.get("priority", 
cls.priority) + cls.tile_priority = settings.get("tile_priority", cls.tile_priority) + cls.limit = settings.get("limit", cls.limit) + cls.group = settings.get("group", cls.group) + cls.strict_error_checking = settings.get("strict_error_checking", + cls.strict_error_checking) def get_job_info(self): job_info = DeadlineJobInfo(Plugin="MayaBatch") @@ -151,6 +179,19 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): if self.limit: job_info.LimitGroups = ",".join(self.limit) + attr_values = self.get_attr_values_from_data(instance.data) + render_globals = instance.data.setdefault("renderGlobals", dict()) + machine_list = attr_values.get("machineList", "") + if machine_list: + if attr_values.get("whitelist", True): + machine_list_key = "Whitelist" + else: + machine_list_key = "Blacklist" + render_globals[machine_list_key] = machine_list + + job_info.Priority = attr_values.get("priority") + job_info.ChunkSize = attr_values.get("chunkSize") + # Add options from RenderGlobals render_globals = instance.data.get("renderGlobals", {}) job_info.update(render_globals) @@ -185,8 +226,8 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): continue job_info.EnvironmentKeyValue[key] = value - # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + job_info.add_render_job_env_var() job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1" # Adding file dependencies. @@ -198,7 +239,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): # Add list of expected files to job # --------------------------------- exp = instance.data.get("expectedFiles") - for filepath in self._iter_expected_files(exp): + for filepath in iter_expected_files(exp): job_info.OutputDirectory += os.path.dirname(filepath) job_info.OutputFilename += os.path.basename(filepath) @@ -223,8 +264,10 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): "renderSetupIncludeLights", default_rs_include_lights) if rs_include_lights not in {"1", "0", True, False}: rs_include_lights = default_rs_include_lights - strict_error_checking = instance.data.get("strict_error_checking", - True) + + attr_values = self.get_attr_values_from_data(instance.data) + strict_error_checking = attr_values.get("strict_error_checking", + self.strict_error_checking) plugin_info = MayaPluginInfo( SceneFile=self.scene_path, Version=cmds.about(version=True), @@ -254,7 +297,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): # TODO: Avoid the need for this logic here, needed for submit publish # Store output dir for unified publisher (filesequence) expected_files = instance.data["expectedFiles"] - first_file = next(self._iter_expected_files(expected_files)) + first_file = next(iter_expected_files(expected_files)) output_dir = os.path.dirname(first_file) instance.data["outputDir"] = output_dir instance.data["toBeRenderedOn"] = "deadline" @@ -422,11 +465,13 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): assembly_job_info.Name += " - Tile Assembly Job" assembly_job_info.Frames = 1 assembly_job_info.MachineLimit = 1 - assembly_job_info.Priority = instance.data.get( - "tile_priority", self.tile_priority - ) + + attr_values = self.get_attr_values_from_data(instance.data) + assembly_job_info.Priority = attr_values.get("tile_priority", + self.tile_priority) assembly_job_info.TileJob = False + # TODO: This should be a new publisher attribute 
definition pool = instance.context.data["project_settings"]["deadline"] pool = pool["publish"]["ProcessSubmittedJobOnFarm"]["deadline_pool"] assembly_job_info.Pool = pool or instance.data.get("primaryPool", "") @@ -519,7 +564,6 @@ "submitting assembly job {} of {}".format(i + 1, num_assemblies) ) - self.log.info(payload) assembly_job_id = self.submit(payload) assembly_job_ids.append(assembly_job_id) @@ -772,16 +816,43 @@ end=int(self._instance.data["frameEndHandle"]), ) - @staticmethod - def _iter_expected_files(exp): - if isinstance(exp[0], dict): - for _aov, files in exp[0].items(): - for file in files: - yield file - else: - for file in exp: - yield file + @classmethod + def get_attribute_defs(cls): + defs = super(MayaSubmitDeadline, cls).get_attribute_defs() + defs.extend([ + NumberDef("priority", + label="Priority", + default=cls.default_priority, + decimals=0), + NumberDef("chunkSize", + label="Frames Per Task", + default=1, + decimals=0, + minimum=1, + maximum=1000), + TextDef("machineList", + label="Machine List", + default="", + placeholder="machine1,machine2"), + EnumDef("whitelist", + label="Machine List (Allow/Deny)", + items={ + True: "Allow List", + False: "Deny List", + }, + default=False), + NumberDef("tile_priority", + label="Tile Assembler Priority", + decimals=0, + default=cls.tile_priority), + BoolDef("strict_error_checking", + label="Strict Error Checking", + default=cls.strict_error_checking), + + ]) + + return defs def _format_tiles( filename, diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py index 25f859554f..0d23f44333 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_remote_publish_deadline.py @@ -1,18 +1,34 @@ import os -import requests +import attr from datetime import datetime from maya import cmds +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import legacy_io, PublishXmlValidationError -from openpype.settings import get_project_settings from openpype.tests.lib import is_in_tests from openpype.lib import is_running_from_build +from openpype_modules.deadline import abstract_submit_deadline +from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo import pyblish.api -class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin): +@attr.s +class MayaPluginInfo(object): + Build = attr.ib(default=None) # Don't force build + StrictErrorChecking = attr.ib(default=True) + + SceneFile = attr.ib(default=None) # Input scene + Version = attr.ib(default=None) # Mandatory for Deadline + ProjectPath = attr.ib(default=None) + + ScriptJob = attr.ib(default=True) + ScriptFilename = attr.ib(default=None) + + +class MayaSubmitRemotePublishDeadline( + abstract_submit_deadline.AbstractSubmitDeadline): """Submit Maya scene to perform a local publish in Deadline. Publishing in Deadline can be helpful for scenes that publish very slowly.
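Note: the hunks above and below move this plugin from a hand-built JSON payload onto `AbstractSubmitDeadline` with an attrs-based plugin-info class that `get_plugin_info()` serializes via `attr.asdict()`. A minimal sketch of that pattern, assuming the `attrs` package the patch already imports (the class and values below are illustrative, not part of the patch):

```python
import attr


@attr.s
class ExamplePluginInfo(object):
    # Field names mirror the keys Deadline expects in "PluginInfo";
    # definition order is preserved by attrs.
    SceneFile = attr.ib(default=None)   # input scene
    Version = attr.ib(default=None)     # host version, mandatory for Deadline
    ScriptJob = attr.ib(default=True)   # run as a script job


plugin_info = ExamplePluginInfo()
plugin_info.SceneFile = "scene.ma"
plugin_info.Version = "2023"

# attr.asdict() yields the plain dict embedded into the submission
# payload as "PluginInfo".
print(attr.asdict(plugin_info))
# {'SceneFile': 'scene.ma', 'Version': '2023', 'ScriptJob': True}
```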
@@ -36,13 +52,6 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin): targets = ["local"] def process(self, instance): - project_name = instance.context.data["projectName"] - # TODO settings can be received from 'context.data["project_settings"]' - settings = get_project_settings(project_name) - # use setting for publish job on farm, no reason to have it separately - deadline_publish_job_sett = (settings["deadline"] - ["publish"] - ["ProcessSubmittedJobOnFarm"]) # Ensure no errors so far if not (all(result["success"] @@ -54,54 +63,39 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin): "Skipping submission..") return + super(MayaSubmitRemotePublishDeadline, self).process(instance) + + def get_job_info(self): + instance = self._instance + context = instance.context + + project_name = instance.context.data["projectName"] scene = instance.context.data["currentFile"] scenename = os.path.basename(scene) job_name = "{scene} [PUBLISH]".format(scene=scenename) batch_name = "{code} - {scene}".format(code=project_name, scene=scenename) + if is_in_tests(): batch_name += datetime.now().strftime("%d%m%Y%H%M%S") - # Generate the payload for Deadline submission - payload = { - "JobInfo": { - "Plugin": "MayaBatch", - "BatchName": batch_name, - "Name": job_name, - "UserName": instance.context.data["user"], - "Comment": instance.context.data.get("comment", ""), - # "InitialStatus": state - "Department": deadline_publish_job_sett["deadline_department"], - "ChunkSize": deadline_publish_job_sett["deadline_chunk_size"], - "Priority": deadline_publish_job_sett["deadline_priority"], - "Group": deadline_publish_job_sett["deadline_group"], - "Pool": deadline_publish_job_sett["deadline_pool"], - }, - "PluginInfo": { + job_info = DeadlineJobInfo(Plugin="MayaBatch") + job_info.BatchName = batch_name + job_info.Name = job_name + job_info.UserName = context.data.get("user") + job_info.Comment = context.data.get("comment", "") - "Build": None, # Don't force build - "StrictErrorChecking": True, - "ScriptJob": True, + # use setting for publish job on farm, no reason to have it separately + project_settings = context.data["project_settings"] + deadline_publish_job_sett = project_settings["deadline"]["publish"]["ProcessSubmittedJobOnFarm"] # noqa + job_info.Department = deadline_publish_job_sett["deadline_department"] + job_info.ChunkSize = deadline_publish_job_sett["deadline_chunk_size"] + job_info.Priority = deadline_publish_job_sett["deadline_priority"] + job_info.Group = deadline_publish_job_sett["deadline_group"] + job_info.Pool = deadline_publish_job_sett["deadline_pool"] - # Inputs - "SceneFile": scene, - "ScriptFilename": "{OPENPYPE_REPOS_ROOT}/openpype/scripts/remote_publish.py", # noqa - - # Mandatory for Deadline - "Version": cmds.about(version=True), - - # Resolve relative references - "ProjectPath": cmds.workspace(query=True, - rootDirectory=True), - - }, - - # Mandatory for Deadline, may be empty - "AuxFiles": [] - } - - # Include critical environment variables with submission + api.Session + # Include critical environment variables with submission + Session keys = [ "FTRACK_API_USER", "FTRACK_API_KEY", @@ -117,29 +111,30 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin): # TODO replace legacy_io with context.data environment["AVALON_PROJECT"] = project_name - environment["AVALON_ASSET"] = legacy_io.Session["AVALON_ASSET"] - environment["AVALON_TASK"] = legacy_io.Session["AVALON_TASK"] + environment["AVALON_ASSET"] = instance.context.data["asset"] + 
environment["AVALON_TASK"] = instance.context.data["task"] environment["AVALON_APP_NAME"] = os.environ.get("AVALON_APP_NAME") environment["OPENPYPE_LOG_NO_COLORS"] = "1" - environment["OPENPYPE_REMOTE_JOB"] = "1" environment["OPENPYPE_USERNAME"] = instance.context.data["user"] environment["OPENPYPE_PUBLISH_SUBSET"] = instance.data["subset"] environment["OPENPYPE_REMOTE_PUBLISH"] = "1" - payload["JobInfo"].update({ - "EnvironmentKeyValue%d" % index: "{key}={value}".format( - key=key, - value=environment[key] - ) for index, key in enumerate(environment) - }) + if AYON_SERVER_ENABLED: + environment["AYON_REMOTE_PUBLISH"] = "1" + else: + environment["OPENPYPE_REMOTE_PUBLISH"] = "1" + for key, value in environment.items(): + job_info.EnvironmentKeyValue[key] = value - self.log.info("Submitting Deadline job ...") - deadline_url = instance.context.data["defaultDeadline"] - # if custom one is set in instance, use that - if instance.data.get("deadlineUrl"): - deadline_url = instance.data.get("deadlineUrl") - assert deadline_url, "Requires Deadline Webservice URL" - url = "{}/api/jobs".format(deadline_url) - response = requests.post(url, json=payload, timeout=10) - if not response.ok: - raise Exception(response.text) + def get_plugin_info(self): + + scene = self._instance.context.data["currentFile"] + + plugin_info = MayaPluginInfo() + plugin_info.SceneFile = scene + plugin_info.ScriptFilename = "{OPENPYPE_REPOS_ROOT}/openpype/scripts/remote_publish.py" # noqa + plugin_info.Version = cmds.about(version=True) + plugin_info.ProjectPath = cmds.workspace(query=True, + rootDirectory=True) + + return attr.asdict(plugin_info) diff --git a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py index 4900231783..cfdeb4968b 100644 --- a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py @@ -8,6 +8,8 @@ import requests import pyblish.api import nuke + +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import legacy_io from openpype.pipeline.publish import ( OpenPypePyblishPluginMixin @@ -88,7 +90,6 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, if not instance.data.get("farm"): self.log.debug("Skipping local instance.") return - instance.data["attributeValues"] = self.get_attr_values_from_data( instance.data) @@ -121,13 +122,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, render_path = instance.data['path'] script_path = context.data["currentFile"] - for item in context: - if "workfile" in item.data["families"]: - msg = "Workfile (scene) must be published along" - assert item.data["publish"] is True, msg - - template_data = item.data.get("anatomyData") - rep = item.data.get("representations")[0].get("name") + for item_ in context: + if "workfile" in item_.data["family"]: + template_data = item_.data.get("anatomyData") + rep = item_.data.get("representations")[0].get("name") template_data["representation"] = rep template_data["ext"] = rep template_data["comment"] = None @@ -139,19 +137,24 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, "Using published scene for render {}".format(script_path) ) - response = self.payload_submit( - instance, - script_path, - render_path, - node.name(), - submit_frame_start, - submit_frame_end - ) - # Store output dir for unified publisher (filesequence) - instance.data["deadlineSubmissionJob"] = response.json() - instance.data["outputDir"] = os.path.dirname( - 
render_path).replace("\\", "/") - instance.data["publishJobState"] = "Suspended" + # only add main rendering job if target is not frames_farm + r_job_response_json = None + if instance.data["render_target"] != "frames_farm": + r_job_response = self.payload_submit( + instance, + script_path, + render_path, + node.name(), + submit_frame_start, + submit_frame_end + ) + r_job_response_json = r_job_response.json() + instance.data["deadlineSubmissionJob"] = r_job_response_json + + # Store output dir for unified publisher (filesequence) + instance.data["outputDir"] = os.path.dirname( + render_path).replace("\\", "/") + instance.data["publishJobState"] = "Suspended" if instance.data.get("bakingNukeScripts"): for baking_script in instance.data["bakingNukeScripts"]: @@ -159,18 +162,20 @@ script_path = baking_script["bakeScriptPath"] exe_node_name = baking_script["bakeWriteNodeName"] - resp = self.payload_submit( + b_job_response = self.payload_submit( instance, script_path, render_path, exe_node_name, submit_frame_start, submit_frame_end, - response.json() + r_job_response_json, + baking_submission=True ) # Store output dir for unified publisher (filesequence) - instance.data["deadlineSubmissionJob"] = resp.json() + instance.data["deadlineSubmissionJob"] = b_job_response.json() + instance.data["publishJobState"] = "Suspended" # add to list of job Id @@ -178,7 +183,7 @@ instance.data["bakingSubmissionJobs"] = [] instance.data["bakingSubmissionJobs"].append( - resp.json()["_id"]) + b_job_response.json()["_id"]) # redefinition of families if "render" in instance.data["family"]: @@ -197,15 +202,35 @@ exe_node_name, start_frame, end_frame, - response_data=None + response_data=None, + baking_submission=False, ): + """Submit payload to Deadline. + + Args: + instance (pyblish.api.Instance): pyblish instance + script_path (str): path to nuke script + render_path (str): path to rendered images + exe_node_name (str): name of the node to render + start_frame (int): start frame + end_frame (int): end frame + response_data (Optional[dict]): response data from + previous submission + baking_submission (Optional[bool]): whether this is a baking + submission + + Returns: + requests.Response + """ render_dir = os.path.normpath(os.path.dirname(render_path)) - batch_name = os.path.basename(script_path) - jobname = "%s - %s" % (batch_name, instance.name) + + # batch name + src_filepath = instance.context.data["currentFile"] + batch_name = os.path.basename(src_filepath) + job_name = os.path.basename(render_path) + if is_in_tests(): batch_name += datetime.now().strftime("%d%m%Y%H%M%S") - output_filename_0 = self.preview_fname(render_path) if not response_data: @@ -226,11 +251,8 @@ # Top-level group name "BatchName": batch_name, - # Asset dependency to wait for at least the scene file to sync.
- # "AssetDependency0": script_path, - # Job name, as seen in Monitor - "Name": jobname, + "Name": job_name, # Arbitrary username, for visualisation in Monitor "UserName": self._deadline_user, @@ -292,12 +314,17 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, "AuxFiles": [] } - if response_data.get("_id"): + # TODO: rewrite for baking with sequences + if baking_submission: payload["JobInfo"].update({ "JobType": "Normal", + "ChunkSize": 99999999 + }) + + if response_data.get("_id"): + payload["JobInfo"].update({ "BatchName": response_data["Props"]["Batch"], "JobDependency0": response_data["_id"], - "ChunkSize": 99999999 }) # Include critical environment variables with submission @@ -337,8 +364,14 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, if _path.lower().startswith('openpype_'): environment[_path] = os.environ[_path] - # to recognize job from PYPE for turning Event On/Off - environment["OPENPYPE_RENDER_JOB"] = "1" + # to recognize render jobs + if AYON_SERVER_ENABLED: + environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"] + render_job_label = "AYON_RENDER_JOB" + else: + render_job_label = "OPENPYPE_RENDER_JOB" + + environment[render_job_label] = "1" # finally search replace in values of any key if self.env_search_replace_values: diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index 292fe58cca..bf4411ef43 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -1,56 +1,30 @@ # -*- coding: utf-8 -*- """Submit publishing job to farm.""" - import os import json import re -from copy import copy, deepcopy +from copy import deepcopy import requests import clique import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_last_version_by_subset_name, - get_representations, -) -from openpype.pipeline import ( - get_representation_path, - legacy_io, ) +from openpype.pipeline import publish, legacy_io +from openpype.lib import EnumDef, is_running_from_build from openpype.tests.lib import is_in_tests -from openpype.pipeline.farm.patterning import match_aov_pattern -from openpype.lib import is_running_from_build -from openpype.pipeline import publish +from openpype.pipeline.version_start import get_versioning_start - -def get_resources(project_name, version, extension=None): - """Get the files from the specific version.""" - - # TODO this functions seems to be weird - # - it's looking for representation with one extension or first (any) - # representation from a version? 
- # - not sure how this should work, maybe it does for specific use cases - # but probably can't be used for all resources from 2D workflows - extensions = None - if extension: - extensions = [extension] - repre_docs = list(get_representations( - project_name, version_ids=[version["_id"]], extensions=extensions - )) - assert repre_docs, "This is a bug" - - representation = repre_docs[0] - directory = get_representation_path(representation) - print("Source: ", directory) - resources = sorted( - [ - os.path.normpath(os.path.join(directory, fname)) - for fname in os.listdir(directory) - ] - ) - - return resources +from openpype.pipeline.farm.pyblish_functions import ( + create_skeleton_instance, + create_instances_for_aov, + attach_instances_to_subset, + prepare_representations, + create_metadata_path +) def get_resource_files(resources, frame_range=None): @@ -81,6 +55,7 @@ def get_resource_files(resources, frame_range=None): class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, + publish.OpenPypePyblishPluginMixin, publish.ColormanagedPyblishPluginMixin): """Process Job submitted on farm. @@ -117,13 +92,14 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, label = "Submit image sequence jobs to Deadline or Muster" order = pyblish.api.IntegratorOrder + 0.2 icon = "tractor" - deadline_plugin = "OpenPype" + targets = ["local"] hosts = ["fusion", "max", "maya", "nuke", "houdini", "celaction", "aftereffects", "harmony"] - families = ["render.farm", "prerender.farm", + families = ["render.farm", "render.frames_farm", + "prerender.farm", "prerender.frames_farm", "renderlayer", "imagesequence", "vrayscene", "maxrender", "arnold_rop", "mantra_rop", @@ -146,13 +122,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, "FTRACK_SERVER", "AVALON_APP_NAME", "OPENPYPE_USERNAME", - "OPENPYPE_SG_USER" + "OPENPYPE_SG_USER", + "KITSU_LOGIN", + "KITSU_PWD" ] - # Add OpenPype version if we are running from build. - if is_running_from_build(): - environ_keys.append("OPENPYPE_VERSION") - # custom deadline attributes deadline_department = "" deadline_pool = "" @@ -183,36 +157,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, # poor man exclusion skip_integration_repre_list = [] - def _create_metadata_path(self, instance): - ins_data = instance.data - # Ensure output dir exists - output_dir = ins_data.get( - "publishRenderMetadataFolder", ins_data["outputDir"]) - - try: - if not os.path.isdir(output_dir): - os.makedirs(output_dir) - except OSError: - # directory is not available - self.log.warning("Path is unreachable: `{}`".format(output_dir)) - - metadata_filename = "{}_metadata.json".format(ins_data["subset"]) - - metadata_path = os.path.join(output_dir, metadata_filename) - - # Convert output dir to `{root}/rest/of/path/...` with Anatomy - success, rootless_mtdt_p = self.anatomy.find_root_template_from_path( - metadata_path) - if not success: - # `rootless_path` is not set to `output_dir` if none of roots match - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues on farm." - ).format(output_dir)) - rootless_mtdt_p = metadata_path - - return metadata_path, rootless_mtdt_p - def _submit_deadline_post_job(self, instance, job, instances): """Submit publish job to Deadline. 
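Note on the hunks that follow: `_submit_deadline_post_job` builds the dependent publish job's `JobInfo`, and Deadline's REST payload carries environment variables as indexed `EnvironmentKeyValueN` entries rather than a nested mapping, which is what the enumerate loop further below produces. A minimal sketch of that flattening (the variable values here are made up):

```python
# Flatten an environment mapping into Deadline's "EnvironmentKeyValueN"
# payload entries; keys/values below are illustrative only.
environment = {
    "AVALON_PROJECT": "demo_project",
    "OPENPYPE_LOG_NO_COLORS": "1",
}

job_info = {}
for index, (key, value) in enumerate(environment.items()):
    job_info["EnvironmentKeyValue%d" % index] = "{}={}".format(key, value)

print(job_info)
# {'EnvironmentKeyValue0': 'AVALON_PROJECT=demo_project',
#  'EnvironmentKeyValue1': 'OPENPYPE_LOG_NO_COLORS=1'}
```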
@@ -227,38 +171,54 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, subset = data["subset"] job_name = "Publish - {subset}".format(subset=subset) + anatomy = instance.context.data['anatomy'] + # instance.data.get("subset") != instances[0]["subset"] # 'Main' vs 'renderMain' override_version = None instance_version = instance.data.get("version") # take this if exists if instance_version != 1: override_version = instance_version + output_dir = self._get_publish_folder( - instance.context.data['anatomy'], + anatomy, deepcopy(instance.data["anatomyData"]), instance.data.get("asset"), instances[0]["subset"], - 'render', + instance.context, + instances[0]["family"], override_version ) # Transfer the environment from the original job to this dependent # job so they use the same environment metadata_path, rootless_metadata_path = \ - self._create_metadata_path(instance) + create_metadata_path(instance, anatomy) environment = { - "AVALON_PROJECT": legacy_io.Session["AVALON_PROJECT"], - "AVALON_ASSET": legacy_io.Session["AVALON_ASSET"], - "AVALON_TASK": legacy_io.Session["AVALON_TASK"], + "AVALON_PROJECT": instance.context.data["projectName"], + "AVALON_ASSET": instance.context.data["asset"], + "AVALON_TASK": instance.context.data["task"], "OPENPYPE_USERNAME": instance.context.data["user"], - "OPENPYPE_PUBLISH_JOB": "1", - "OPENPYPE_RENDER_JOB": "0", - "OPENPYPE_REMOTE_JOB": "0", "OPENPYPE_LOG_NO_COLORS": "1", "IS_TEST": str(int(is_in_tests())) } + if AYON_SERVER_ENABLED: + environment["AYON_PUBLISH_JOB"] = "1" + environment["AYON_RENDER_JOB"] = "0" + environment["AYON_REMOTE_PUBLISH"] = "0" + environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"] + deadline_plugin = "Ayon" + else: + environment["OPENPYPE_PUBLISH_JOB"] = "1" + environment["OPENPYPE_RENDER_JOB"] = "0" + environment["OPENPYPE_REMOTE_PUBLISH"] = "0" + deadline_plugin = "OpenPype" + # Add OpenPype version if we are running from build. 
+ if is_running_from_build(): + self.environ_keys.append("OPENPYPE_VERSION") + # add environments from self.environ_keys for env_key in self.environ_keys: if os.getenv(env_key): @@ -278,6 +238,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, priority = self.deadline_priority or instance.data.get("priority", 50) + instance_settings = self.get_attr_values_from_data(instance.data) + initial_status = instance_settings.get("publishJobState", "Active") + # TODO: Remove this backwards compatibility of `suspend_publish` + if instance.data.get("suspend_publish"): + initial_status = "Suspended" + args = [ "--headless", 'publish', @@ -295,7 +261,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, ) payload = { "JobInfo": { - "Plugin": self.deadline_plugin, + "Plugin": deadline_plugin, "BatchName": job["Props"]["Batch"], "Name": job_name, "UserName": job["Props"]["User"], @@ -304,6 +270,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, "Department": self.deadline_department, "ChunkSize": self.deadline_chunk_size, "Priority": priority, + "InitialStatus": initial_status, "Group": self.deadline_group, "Pool": self.deadline_pool or instance.data.get("primaryPool"), @@ -325,20 +292,19 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, self.log.info("Adding tile assembly jobs as dependencies...") job_index = 0 for assembly_id in instance.data.get("assemblySubmissionJobs"): - payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id # noqa: E501 + payload["JobInfo"]["JobDependency{}".format( + job_index)] = assembly_id # noqa: E501 job_index += 1 elif instance.data.get("bakingSubmissionJobs"): self.log.info("Adding baking submission jobs as dependencies...") job_index = 0 for assembly_id in instance.data["bakingSubmissionJobs"]: - payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id # noqa: E501 + payload["JobInfo"]["JobDependency{}".format( + job_index)] = assembly_id # noqa: E501 job_index += 1 - else: + elif job.get("_id"): payload["JobInfo"]["JobDependency0"] = job["_id"] - if instance.data.get("suspend_publish"): - payload["JobInfo"]["InitialStatus"] = "Suspended" - for index, (key_, value_) in enumerate(environment.items()): payload["JobInfo"].update( { @@ -362,413 +328,13 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, return deadline_publish_job_id - def _copy_extend_frames(self, instance, representation): - """Copy existing frames from latest version. - - This will copy all existing frames from subset's latest version back - to render directory and rename them to what renderer is expecting. 
- - Arguments: - instance (pyblish.plugin.Instance): instance to get required - data from - representation (dict): presentation to operate on - - """ - import speedcopy - - self.log.info("Preparing to copy ...") - start = instance.data.get("frameStart") - end = instance.data.get("frameEnd") - project_name = legacy_io.active_project() - - # get latest version of subset - # this will stop if subset wasn't published yet - project_name = legacy_io.active_project() - version = get_last_version_by_subset_name( - project_name, - instance.data.get("subset"), - asset_name=instance.data.get("asset") - ) - - # get its files based on extension - subset_resources = get_resources( - project_name, version, representation.get("ext") - ) - r_col, _ = clique.assemble(subset_resources) - - # if override remove all frames we are expecting to be rendered - # so we'll copy only those missing from current render - if instance.data.get("overrideExistingFrame"): - for frame in range(start, end + 1): - if frame not in r_col.indexes: - continue - r_col.indexes.remove(frame) - - # now we need to translate published names from representation - # back. This is tricky, right now we'll just use same naming - # and only switch frame numbers - resource_files = [] - r_filename = os.path.basename( - representation.get("files")[0]) # first file - op = re.search(self.R_FRAME_NUMBER, r_filename) - pre = r_filename[:op.start("frame")] - post = r_filename[op.end("frame"):] - assert op is not None, "padding string wasn't found" - for frame in list(r_col): - fn = re.search(self.R_FRAME_NUMBER, frame) - # silencing linter as we need to compare to True, not to - # type - assert fn is not None, "padding string wasn't found" - # list of tuples (source, destination) - staging = representation.get("stagingDir") - staging = self.anatomy.fill_root(staging) - resource_files.append( - (frame, - os.path.join(staging, - "{}{}{}".format(pre, - fn.group("frame"), - post))) - ) - - # test if destination dir exists and create it if not - output_dir = os.path.dirname(representation.get("files")[0]) - if not os.path.isdir(output_dir): - os.makedirs(output_dir) - - # copy files - for source in resource_files: - speedcopy.copy(source[0], source[1]) - self.log.info(" > {}".format(source[1])) - - self.log.info( - "Finished copying %i files" % len(resource_files)) - - def _create_instances_for_aov( - self, instance_data, exp_files, additional_data, do_not_add_review - ): - """Create instance for each AOV found. - - This will create new instance for every aov it can detect in expected - files list. - - Arguments: - instance_data (pyblish.plugin.Instance): skeleton data for instance - (those needed) later by collector - exp_files (list): list of expected files divided by aovs - additional_data (dict): - do_not_add_review (bool): explicitly skip review - - Returns: - list of instances - - """ - task = os.environ["AVALON_TASK"] - subset = instance_data["subset"] - cameras = instance_data.get("cameras", []) - instances = [] - # go through aovs in expected files - for aov, files in exp_files[0].items(): - cols, rem = clique.assemble(files) - # we shouldn't have any reminders. And if we do, it should - # be just one item for single frame renders. - if not cols and rem: - assert len(rem) == 1, ("Found multiple non related files " - "to render, don't know what to do " - "with them.") - col = rem[0] - ext = os.path.splitext(col)[1].lstrip(".") - else: - # but we really expect only one collection. - # Nothing else make sense. 
- assert len(cols) == 1, "only one image sequence type is expected" # noqa: E501 - ext = cols[0].tail.lstrip(".") - col = list(cols[0]) - - self.log.debug(col) - # create subset name `familyTaskSubset_AOV` - group_name = 'render{}{}{}{}'.format( - task[0].upper(), task[1:], - subset[0].upper(), subset[1:]) - - cam = [c for c in cameras if c in col.head] - if cam: - if aov: - subset_name = '{}_{}_{}'.format(group_name, cam, aov) - else: - subset_name = '{}_{}'.format(group_name, cam) - else: - if aov: - subset_name = '{}_{}'.format(group_name, aov) - else: - subset_name = '{}'.format(group_name) - - if isinstance(col, (list, tuple)): - staging = os.path.dirname(col[0]) - else: - staging = os.path.dirname(col) - - success, rootless_staging_dir = ( - self.anatomy.find_root_template_from_path(staging) - ) - if success: - staging = rootless_staging_dir - else: - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues on farm." - ).format(staging)) - - self.log.info("Creating data for: {}".format(subset_name)) - - app = os.environ.get("AVALON_APP", "") - - if isinstance(col, list): - render_file_name = os.path.basename(col[0]) - else: - render_file_name = os.path.basename(col) - aov_patterns = self.aov_filter - - preview = match_aov_pattern(app, aov_patterns, render_file_name) - # toggle preview on if multipart is on - - if instance_data.get("multipartExr"): - self.log.debug("Adding preview tag because its multipartExr") - preview = True - self.log.debug("preview:{}".format(preview)) - new_instance = deepcopy(instance_data) - new_instance["subset"] = subset_name - new_instance["subsetGroup"] = group_name - - preview = preview and not do_not_add_review - if preview: - new_instance["review"] = True - - # create representation - if isinstance(col, (list, tuple)): - files = [os.path.basename(f) for f in col] - else: - files = os.path.basename(col) - - # Copy render product "colorspace" data to representation. - colorspace = "" - products = additional_data["renderProducts"].layer_data.products - for product in products: - if product.productName == aov: - colorspace = product.colorspace - break - - rep = { - "name": ext, - "ext": ext, - "files": files, - "frameStart": int(instance_data.get("frameStartHandle")), - "frameEnd": int(instance_data.get("frameEndHandle")), - # If expectedFile are absolute, we need only filenames - "stagingDir": staging, - "fps": new_instance.get("fps"), - "tags": ["review"] if preview else [], - "colorspaceData": { - "colorspace": colorspace, - "config": { - "path": additional_data["colorspaceConfig"], - "template": additional_data["colorspaceTemplate"] - }, - "display": additional_data["display"], - "view": additional_data["view"] - } - } - - # support conversion from tiled to scanline - if instance_data.get("convertToScanline"): - self.log.info("Adding scanline conversion.") - rep["tags"].append("toScanline") - - # poor man exclusion - if ext in self.skip_integration_repre_list: - rep["tags"].append("delete") - - self._solve_families(new_instance, preview) - - new_instance["representations"] = [rep] - - # if extending frames from existing version, copy files from there - # into our destination directory - if new_instance.get("extendFrames", False): - self._copy_extend_frames(new_instance, rep) - instances.append(new_instance) - self.log.debug("instances:{}".format(instances)) - return instances - - def _get_representations(self, instance_data, exp_files, - do_not_add_review): - """Create representations for file sequences. 
- - This will return representations of expected files if they are not - in hierarchy of aovs. There should be only one sequence of files for - most cases, but if not - we create representation from each of them. - - Arguments: - instance_data (dict): instance.data for which we are - setting representations - exp_files (list): list of expected files - do_not_add_review (bool): explicitly skip review - - Returns: - list of representations - - """ - representations = [] - host_name = os.environ.get("AVALON_APP", "") - collections, remainders = clique.assemble(exp_files) - - # create representation for every collected sequence - for collection in collections: - ext = collection.tail.lstrip(".") - preview = False - # TODO 'useSequenceForReview' is temporary solution which does - # not work for 100% of cases. We must be able to tell what - # expected files contains more explicitly and from what - # should be review made. - # - "review" tag is never added when is set to 'False' - if instance_data["useSequenceForReview"]: - # toggle preview on if multipart is on - if instance_data.get("multipartExr", False): - self.log.debug( - "Adding preview tag because its multipartExr" - ) - preview = True - else: - render_file_name = list(collection)[0] - # if filtered aov name is found in filename, toggle it for - # preview video rendering - preview = match_aov_pattern( - host_name, self.aov_filter, render_file_name - ) - - staging = os.path.dirname(list(collection)[0]) - success, rootless_staging_dir = ( - self.anatomy.find_root_template_from_path(staging) - ) - if success: - staging = rootless_staging_dir - else: - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues on farm." - ).format(staging)) - - frame_start = int(instance_data.get("frameStartHandle")) - if instance_data.get("slate"): - frame_start -= 1 - - preview = preview and not do_not_add_review - rep = { - "name": ext, - "ext": ext, - "files": [os.path.basename(f) for f in list(collection)], - "frameStart": frame_start, - "frameEnd": int(instance_data.get("frameEndHandle")), - # If expectedFile are absolute, we need only filenames - "stagingDir": staging, - "fps": instance_data.get("fps"), - "tags": ["review"] if preview else [], - } - - # poor man exclusion - if ext in self.skip_integration_repre_list: - rep["tags"].append("delete") - - if instance_data.get("multipartExr", False): - rep["tags"].append("multipartExr") - - # support conversion from tiled to scanline - if instance_data.get("convertToScanline"): - self.log.info("Adding scanline conversion.") - rep["tags"].append("toScanline") - - representations.append(rep) - - self._solve_families(instance_data, preview) - - # add remainders as representations - for remainder in remainders: - ext = remainder.split(".")[-1] - - staging = os.path.dirname(remainder) - success, rootless_staging_dir = ( - self.anatomy.find_root_template_from_path(staging) - ) - if success: - staging = rootless_staging_dir - else: - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues on farm." 
- ).format(staging)) - - rep = { - "name": ext, - "ext": ext, - "files": os.path.basename(remainder), - "stagingDir": staging, - } - - preview = match_aov_pattern( - host_name, self.aov_filter, remainder - ) - preview = preview and not do_not_add_review - if preview: - rep.update({ - "fps": instance_data.get("fps"), - "tags": ["review"] - }) - self._solve_families(instance_data, preview) - - already_there = False - for repre in instance_data.get("representations", []): - # might be added explicitly before by publish_on_farm - already_there = repre.get("files") == rep["files"] - if already_there: - self.log.debug("repre {} already_there".format(repre)) - break - - if not already_there: - representations.append(rep) - - for rep in representations: - # inject colorspace data - self.set_representation_colorspace( - rep, self.context, - colorspace=instance_data["colorspace"] - ) - - return representations - - def _solve_families(self, instance, preview=False): - families = instance.get("families") - - # if we have one representation with preview tag - # flag whole instance for review and for ftrack - if preview: - if "ftrack" not in families: - if os.environ.get("FTRACK_SERVER"): - self.log.debug( - "Adding \"ftrack\" to families because of preview tag." - ) - families.append("ftrack") - if "review" not in families: - self.log.debug( - "Adding \"review\" to families because of preview tag." - ) - families.append("review") - instance["families"] = families def process(self, instance): # type: (pyblish.api.Instance) -> None """Process plugin. - Detect type of renderfarm submission and create and post dependent job - in case of Deadline. It creates json file with metadata needed for + Detect type of render farm submission and create and post dependent + job in case of Deadline. It creates json file with metadata needed for publishing in directory of render. Args: @@ -779,151 +345,14 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, self.log.debug("Skipping local instance.") return - data = instance.data.copy() - context = instance.context - self.context = context - self.anatomy = instance.context.data["anatomy"] - - asset = data.get("asset") or legacy_io.Session["AVALON_ASSET"] - subset = data.get("subset") - - start = instance.data.get("frameStart") - if start is None: - start = context.data["frameStart"] - - end = instance.data.get("frameEnd") - if end is None: - end = context.data["frameEnd"] - - handle_start = instance.data.get("handleStart") - if handle_start is None: - handle_start = context.data["handleStart"] - - handle_end = instance.data.get("handleEnd") - if handle_end is None: - handle_end = context.data["handleEnd"] - - fps = instance.data.get("fps") - if fps is None: - fps = context.data["fps"] - - if data.get("extendFrames", False): - start, end = self._extend_frames( - asset, - subset, - start, - end, - data["overrideExistingFrame"]) - - try: - source = data["source"] - except KeyError: - source = context.data["currentFile"] - - success, rootless_path = ( - self.anatomy.find_root_template_from_path(source) - ) - if success: - source = rootless_path - - else: - # `rootless_path` is not set to `source` if none of roots match - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues." 
- ).format(source)) - - family = "render" - if ("prerender" in instance.data["families"] or - "prerender.farm" in instance.data["families"]): - family = "prerender" - families = [family] - - # pass review to families if marked as review - do_not_add_review = False - if data.get("review"): - families.append("review") - elif data.get("review") == False: - self.log.debug("Instance has review explicitly disabled.") - do_not_add_review = True - - instance_skeleton_data = { - "family": family, - "subset": subset, - "families": families, - "asset": asset, - "frameStart": start, - "frameEnd": end, - "handleStart": handle_start, - "handleEnd": handle_end, - "frameStartHandle": start - handle_start, - "frameEndHandle": end + handle_end, - "comment": instance.data["comment"], - "fps": fps, - "source": source, - "extendFrames": data.get("extendFrames"), - "overrideExistingFrame": data.get("overrideExistingFrame"), - "pixelAspect": data.get("pixelAspect", 1), - "resolutionWidth": data.get("resolutionWidth", 1920), - "resolutionHeight": data.get("resolutionHeight", 1080), - "multipartExr": data.get("multipartExr", False), - "jobBatchName": data.get("jobBatchName", ""), - "useSequenceForReview": data.get("useSequenceForReview", True), - # map inputVersions `ObjectId` -> `str` so json supports it - "inputVersions": list(map(str, data.get("inputVersions", []))), - "colorspace": instance.data.get("colorspace") - } - - # skip locking version if we are creating v01 - instance_version = instance.data.get("version") # take this if exists - if instance_version != 1: - instance_skeleton_data["version"] = instance_version - - # transfer specific families from original instance to new render - for item in self.families_transfer: - if item in instance.data.get("families", []): - instance_skeleton_data["families"] += [item] - - # transfer specific properties from original instance based on - # mapping dictionary `instance_transfer` - for key, values in self.instance_transfer.items(): - if key in instance.data.get("families", []): - for v in values: - instance_skeleton_data[v] = instance.data.get(v) - - # look into instance data if representations are not having any - # which are having tag `publish_on_farm` and include them - for repre in instance.data.get("representations", []): - staging_dir = repre.get("stagingDir") - if staging_dir: - success, rootless_staging_dir = ( - self.anatomy.find_root_template_from_path( - staging_dir - ) - ) - if success: - repre["stagingDir"] = rootless_staging_dir - else: - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues on farm." - ).format(staging_dir)) - repre["stagingDir"] = staging_dir - - if "publish_on_farm" in repre.get("tags"): - # create representations attribute of not there - if "representations" not in instance_skeleton_data.keys(): - instance_skeleton_data["representations"] = [] - - instance_skeleton_data["representations"].append(repre) - - instances = None - assert data.get("expectedFiles"), ("Submission from old Pype version" - " - missing expectedFiles") + anatomy = instance.context.data["anatomy"] + instance_skeleton_data = create_skeleton_instance( + instance, families_transfer=self.families_transfer, + instance_transfer=self.instance_transfer) """ - if content of `expectedFiles` are dictionaries, we will handle - it as list of AOVs, creating instance from every one of them. + if content of `expectedFiles` list are dictionaries, we will handle + it as list of AOVs, creating instance for every one of them. 
Example: -------- @@ -945,7 +374,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, This will create instances for `beauty` and `Z` subset adding those files to their respective representations. - If we've got only list of files, we collect all filesequences. + If we have only list of files, we collect all file sequences. More then one doesn't probably make sense, but we'll handle it like creating one instance with multiple representations. @@ -962,58 +391,26 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, This will result in one instance with two representations: `foo` and `xxx` """ + do_not_add_review = False + if instance.data.get("review") is False: + self.log.debug("Instance has review explicitly disabled.") + do_not_add_review = True - self.log.info(data.get("expectedFiles")) - - if isinstance(data.get("expectedFiles")[0], dict): - # we cannot attach AOVs to other subsets as we consider every - # AOV subset of its own. - - additional_data = { - "renderProducts": instance.data["renderProducts"], - "colorspaceConfig": instance.data["colorspaceConfig"], - "display": instance.data["colorspaceDisplay"], - "view": instance.data["colorspaceView"] - } - - # Get templated path from absolute config path. - anatomy = instance.context.data["anatomy"] - colorspaceTemplate = instance.data["colorspaceConfig"] - success, rootless_staging_dir = ( - anatomy.find_root_template_from_path(colorspaceTemplate) - ) - if success: - colorspaceTemplate = rootless_staging_dir - else: - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues on farm." - ).format(colorspaceTemplate)) - additional_data["colorspaceTemplate"] = colorspaceTemplate - - if len(data.get("attachTo")) > 0: - assert len(data.get("expectedFiles")[0].keys()) == 1, ( - "attaching multiple AOVs or renderable cameras to " - "subset is not supported") - - # create instances for every AOV we found in expected files. 
- # note: this is done for every AOV and every render camere (if - # there are multiple renderable cameras in scene) - instances = self._create_instances_for_aov( - instance_skeleton_data, - data.get("expectedFiles"), - additional_data, - do_not_add_review - ) - self.log.info("got {} instance{}".format( - len(instances), - "s" if len(instances) > 1 else "")) - + if isinstance(instance.data.get("expectedFiles")[0], dict): + instances = create_instances_for_aov( + instance, instance_skeleton_data, + self.aov_filter, self.skip_integration_repre_list, + do_not_add_review) else: - representations = self._get_representations( + representations = prepare_representations( instance_skeleton_data, - data.get("expectedFiles"), - do_not_add_review + instance.data.get("expectedFiles"), + anatomy, + self.aov_filter, + self.skip_integration_repre_list, + do_not_add_review, + instance.context, + self ) if "representations" not in instance_skeleton_data.keys(): @@ -1023,25 +420,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, instance_skeleton_data["representations"] += representations instances = [instance_skeleton_data] - # if we are attaching to other subsets, create copy of existing - # instances, change data to match its subset and replace - # existing instances with modified data + # attach instances to subset if instance.data.get("attachTo"): - self.log.info("Attaching render to subset:") - new_instances = [] - for at in instance.data.get("attachTo"): - for i in instances: - new_i = copy(i) - new_i["version"] = at.get("version") - new_i["subset"] = at.get("subset") - new_i["family"] = at.get("family") - new_i["append"] = True - # don't set subsetGroup if we are attaching - new_i.pop("subsetGroup") - new_instances.append(new_i) - self.log.info(" - {} / v{}".format( - at.get("subset"), at.get("version"))) - instances = new_instances + instances = attach_instances_to_subset( + instance.data.get("attachTo"), instances + ) r''' SUBMiT PUBLiSH JOB 2 D34DLiN3 ____ @@ -1056,11 +439,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, render_job = None submission_type = "" if instance.data.get("toBeRenderedOn") == "deadline": - render_job = data.pop("deadlineSubmissionJob", None) + render_job = instance.data.pop("deadlineSubmissionJob", None) submission_type = "deadline" if instance.data.get("toBeRenderedOn") == "muster": - render_job = data.pop("musterSubmissionJob", None) + render_job = instance.data.pop("musterSubmissionJob", None) submission_type = "muster" if not render_job and instance.data.get("tileRendering") is False: @@ -1082,10 +465,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, render_job["Props"]["Batch"] = instance.data.get( "jobBatchName") else: - render_job["Props"]["Batch"] = os.path.splitext( - os.path.basename(context.data.get("currentFile")))[0] + batch = os.path.splitext(os.path.basename( + instance.context.data.get("currentFile")))[0] + render_job["Props"]["Batch"] = batch # User is deadline user - render_job["Props"]["User"] = context.data.get( + render_job["Props"]["User"] = instance.context.data.get( "deadlineUser", getpass.getuser()) render_job["Props"]["Env"] = { @@ -1094,6 +478,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, "FTRACK_SERVER": os.environ.get("FTRACK_SERVER"), } + deadline_publish_job_id = None if submission_type == "deadline": # get default deadline webservice url from deadline module self.deadline_url = instance.context.data["defaultDeadline"] @@ -1111,15 +496,15 @@ class 
ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, # publish job file publish_job = { - "asset": asset, - "frameStart": start, - "frameEnd": end, - "fps": context.data.get("fps", None), - "source": source, - "user": context.data["user"], - "version": context.data["version"], # this is workfile version - "intent": context.data.get("intent"), - "comment": context.data.get("comment"), + "asset": instance_skeleton_data["asset"], + "frameStart": instance_skeleton_data["frameStart"], + "frameEnd": instance_skeleton_data["frameEnd"], + "fps": instance_skeleton_data["fps"], + "source": instance_skeleton_data["source"], + "user": instance.context.data["user"], + "version": instance.context.data["version"], # workfile version + "intent": instance.context.data.get("intent"), + "comment": instance.context.data.get("comment"), "job": render_job or None, "session": legacy_io.Session.copy(), "instances": instances @@ -1129,7 +514,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, publish_job["deadline_publish_job_id"] = deadline_publish_job_id # add audio to metadata file if available - audio_file = context.data.get("audioFile") + audio_file = instance.context.data.get("audioFile") if audio_file and os.path.isfile(audio_file): publish_job.update({"audio": audio_file}) @@ -1142,57 +527,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, } publish_job.update({"ftrack": ftrack}) - metadata_path, rootless_metadata_path = self._create_metadata_path( - instance) + metadata_path, rootless_metadata_path = \ + create_metadata_path(instance, anatomy) - self.log.info("Writing json file: {}".format(metadata_path)) with open(metadata_path, "w") as f: json.dump(publish_job, f, indent=4, sort_keys=True) - def _extend_frames(self, asset, subset, start, end): - """Get latest version of asset nad update frame range. - - Based on minimum and maximuma values. - - Arguments: - asset (str): asset name - subset (str): subset name - start (int): start frame - end (int): end frame - - Returns: - (int, int): upddate frame start/end - - """ - # Frame comparison - prev_start = None - prev_end = None - - project_name = legacy_io.active_project() - version = get_last_version_by_subset_name( - project_name, - subset, - asset_name=asset - ) - - # Set prev start / end frames for comparison - if not prev_start and not prev_end: - prev_start = version["data"]["frameStart"] - prev_end = version["data"]["frameEnd"] - - updated_start = min(start, prev_start) - updated_end = max(end, prev_end) - - self.log.info( - "Updating start / end frame : " - "{} - {}".format(updated_start, updated_end) - ) - - return updated_start, updated_end - def _get_publish_folder(self, anatomy, template_data, - asset, subset, - family='render', version=None): + asset, subset, context, + family, version=None): """ Extracted logic to pre-calculate real publish folder, which is calculated in IntegrateNew inside of Deadline process. 
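For orientation between the hunks above and below: `create_metadata_path` only resolves where the file lands; the payload itself is the `publish_job` dict assembled in the hunk above, dumped as JSON with `json.dump(..., indent=4, sort_keys=True)`. A rough sketch of its shape, assuming the keys visible in the code above — every value here is an invented placeholder, and optional keys such as `audio`, `ftrack` and `deadline_publish_job_id` appear only when available:

```python
# Illustrative sketch of the metadata JSON written for the farm publish job.
# Key set mirrors the publish_job dict above; all values are made-up
# placeholders, not real project data.
publish_job = {
    "asset": "sh010",
    "frameStart": 1001,
    "frameEnd": 1100,
    "fps": 25.0,
    "source": "{root[work]}/prj/shots/sh010/work/sh010_v001.ma",
    "user": "artist",
    "version": 3,             # workfile version
    "intent": None,
    "comment": "",
    "job": None,              # raw render job info, when one was submitted
    "session": {},            # copy of legacy_io.Session
    "instances": [],          # skeleton instances created earlier
    "audio": "/path/to/audio.wav",          # only when available
    "deadline_publish_job_id": "0123abcd",  # only for Deadline submissions
}
```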
@@ -1217,8 +560,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, be stored based on 'publish' template """ + + project_name = context.data["projectName"] if not version: - project_name = legacy_io.active_project() version = get_last_version_by_subset_name( project_name, subset, @@ -1227,13 +571,32 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, if version: version = int(version["name"]) + 1 else: - version = 1 + version = get_versioning_start( + project_name, + template_data["app"], + task_name=template_data["task"]["name"], + task_type=template_data["task"]["type"], + family="render", + subset=subset, + project_settings=context.data["project_settings"] + ) + + host_name = context.data["hostName"] + task_info = template_data.get("task") or {} + + template_name = publish.get_publish_template_name( + project_name, + host_name, + family, + task_info.get("name"), + task_info.get("type"), + ) template_data["subset"] = subset - template_data["family"] = "render" + template_data["family"] = family template_data["version"] = version - render_templates = anatomy.templates_obj["render"] + render_templates = anatomy.templates_obj[template_name] if "folder" in render_templates: publish_folder = render_templates["folder"].format_strict( template_data @@ -1241,7 +604,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, else: # solve deprecated situation when `folder` key is not underneath # `publish` anatomy - project_name = legacy_io.Session["AVALON_PROJECT"] self.log.warning(( "Deprecation warning: Anatomy does not have set `folder`" " key underneath `publish` (in global of for project `{}`)." @@ -1251,3 +613,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, publish_folder = os.path.dirname(file_path) return publish_folder + + @classmethod + def get_attribute_defs(cls): + return [ + EnumDef("publishJobState", + label="Publish Job State", + items=["Active", "Suspended"], + default="Active") + ] diff --git a/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py b/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py index d5016a4d82..a30401e7dc 100644 --- a/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py +++ b/openpype/modules/deadline/plugins/publish/validate_deadline_connection.py @@ -10,7 +10,7 @@ class ValidateDeadlineConnection(pyblish.api.InstancePlugin): label = "Validate Deadline Web Service" order = pyblish.api.ValidatorOrder hosts = ["maya", "nuke"] - families = ["renderlayer"] + families = ["renderlayer", "render"] def process(self, instance): # get default deadline webservice url from deadline module diff --git a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py index e1c0595830..594f0ef866 100644 --- a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py +++ b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py @@ -19,6 +19,7 @@ class ValidateDeadlinePools(OptionalPyblishPluginMixin, order = pyblish.api.ValidatorOrder families = ["rendering", "render.farm", + "render.frames_farm", "renderFarm", "renderlayer", "maxrender"] diff --git a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py index ff4be677e7..9f1f7bc518 100644 --- a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py +++ 
b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py @@ -20,8 +20,19 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin): allow_user_override = True def process(self, instance): - self.instance = instance - frame_list = self._get_frame_list(instance.data["render_job_id"]) + """Process all the nodes in the instance""" + + # get dependency jobs ids for retrieving frame list + dependent_job_ids = self._get_dependent_job_ids(instance) + + if not dependent_job_ids: + self.log.warning("No dependent jobs found for instance: {}" + "".format(instance)) + return + + # get list of frames from dependent jobs + frame_list = self._get_dependent_jobs_frames( + instance, dependent_job_ids) for repre in instance.data["representations"]: expected_files = self._get_expected_files(repre) @@ -78,26 +89,45 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin): ) ) - def _get_frame_list(self, original_job_id): + def _get_dependent_job_ids(self, instance): + """Returns list of dependent job ids from instance metadata.json + + Args: + instance (pyblish.api.Instance): pyblish instance + + Returns: + (list): list of dependent job ids + + """ + dependent_job_ids = [] + + # job_id collected from metadata.json + original_job_id = instance.data["render_job_id"] + + dependent_job_ids_env = os.environ.get("RENDER_JOB_IDS") + if dependent_job_ids_env: + dependent_job_ids = dependent_job_ids_env.split(',') + elif original_job_id: + dependent_job_ids = [original_job_id] + + return dependent_job_ids + + def _get_dependent_jobs_frames(self, instance, dependent_job_ids): """Returns list of frame ranges from all render job. Render job might be re-submitted so job_id in metadata.json could be invalid. GlobalJobPreload injects current job id to RENDER_JOB_IDS. Args: - original_job_id (str) + instance (pyblish.api.Instance): pyblish instance + dependent_job_ids (list): list of dependent job ids Returns: (list) """ all_frame_lists = [] - render_job_ids = os.environ.get("RENDER_JOB_IDS") - if render_job_ids: - render_job_ids = render_job_ids.split(',') - else: # fallback - render_job_ids = [original_job_id] - for job_id in render_job_ids: - job_info = self._get_job_info(job_id) + for job_id in dependent_job_ids: + job_info = self._get_job_info(instance, job_id) frame_list = job_info["Props"].get("Frames") if frame_list: all_frame_lists.extend(frame_list.split(',')) @@ -152,18 +182,25 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin): return file_name_template, frame_placeholder - def _get_job_info(self, job_id): + def _get_job_info(self, instance, job_id): """Calls DL for actual job info for 'job_id' Might be different than job info saved in metadata.json if user manually changes job pre/during rendering. 
+ Args: + instance (pyblish.api.Instance): pyblish instance + job_id (str): Deadline job id + + Returns: + (dict): Job info from Deadline + """ # get default deadline webservice url from deadline module - deadline_url = self.instance.context.data["defaultDeadline"] + deadline_url = instance.context.data["defaultDeadline"] # if custom one is set in instance, use that - if self.instance.data.get("deadlineUrl"): - deadline_url = self.instance.data.get("deadlineUrl") + if instance.data.get("deadlineUrl"): + deadline_url = instance.data.get("deadlineUrl") assert deadline_url, "Requires Deadline Webservice URL" url = "{}/api/jobs?JobID={}".format(deadline_url, job_id) diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico new file mode 100644 index 0000000000..aea977a125 Binary files /dev/null and b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.ico differ diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options new file mode 100644 index 0000000000..1fbe1ef299 --- /dev/null +++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.options @@ -0,0 +1,9 @@ +[Arguments] +Type=string +Label=Arguments +Category=Python Options +CategoryOrder=0 +Index=1 +Description=The arguments to pass to the script. If no arguments are required, leave this blank. +Required=false +DisableIfBlank=true diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param new file mode 100644 index 0000000000..8ba044ff81 --- /dev/null +++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.param @@ -0,0 +1,35 @@ +[About] +Type=label +Label=About +Category=About Plugin +CategoryOrder=-1 +Index=0 +Default=Ayon Plugin for Deadline +Description=Not configurable + +[AyonExecutable] +Type=multilinemultifilename +Label=Ayon Executable +Category=Ayon Executables +CategoryOrder=1 +Index=0 +Default= +Description=The path to the Ayon executable. Enter alternative paths on separate lines. + +[AyonServerUrl] +Type=string +Label=Ayon Server Url +Category=Ayon Credentials +CategoryOrder=2 +Index=0 +Default= +Description=Url to Ayon server + +[AyonApiKey] +Type=password +Label=Ayon API key +Category=Ayon Credentials +CategoryOrder=2 +Index=0 +Default= +Description=API key for service account on Ayon Server diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py new file mode 100644 index 0000000000..1544acc2a4 --- /dev/null +++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py @@ -0,0 +1,150 @@ +#!/usr/bin/env python3 + +from System.IO import Path +from System.Text.RegularExpressions import Regex + +from Deadline.Plugins import PluginType, DeadlinePlugin +from Deadline.Scripting import ( + StringUtils, + FileUtils, + DirectoryUtils, + RepositoryUtils +) + +import re +import os +import platform + + +###################################################################### +# This is the function that Deadline calls to get an instance of the +# main DeadlinePlugin class. 
+######################################################################
+def GetDeadlinePlugin():
+    return AyonDeadlinePlugin()
+
+
+def CleanupDeadlinePlugin(deadlinePlugin):
+    deadlinePlugin.Cleanup()
+
+
+class AyonDeadlinePlugin(DeadlinePlugin):
+    """
+    Standalone plugin for publishing from Ayon
+
+    Calls the Ayon executable 'ayon_console' from the first correctly
+    found file based on plugin configuration. Uses the 'publish' command
+    and passes the path to a metadata json file, which contains all
+    information needed for the publish process.
+    """
+    def __init__(self):
+        super().__init__()
+        self.InitializeProcessCallback += self.InitializeProcess
+        self.RenderExecutableCallback += self.RenderExecutable
+        self.RenderArgumentCallback += self.RenderArgument
+
+    def Cleanup(self):
+        for stdoutHandler in self.StdoutHandlers:
+            del stdoutHandler.HandleCallback
+
+        del self.InitializeProcessCallback
+        del self.RenderExecutableCallback
+        del self.RenderArgumentCallback
+
+    def InitializeProcess(self):
+        self.PluginType = PluginType.Simple
+        self.StdoutHandling = True
+
+        self.SingleFramesOnly = self.GetBooleanPluginInfoEntryWithDefault(
+            "SingleFramesOnly", False)
+        self.LogInfo("Single Frames Only: %s" % self.SingleFramesOnly)
+
+        self.AddStdoutHandlerCallback(
+            r".*Progress: (\d+)%.*").HandleCallback += self.HandleProgress
+
+    def RenderExecutable(self):
+        job = self.GetJob()
+
+        # set required env vars for Ayon
+        # cannot be in InitializeProcess as it is too soon
+        config = RepositoryUtils.GetPluginConfig("Ayon")
+        ayon_server_url = (
+            job.GetJobEnvironmentKeyValue("AYON_SERVER_URL") or
+            config.GetConfigEntryWithDefault("AyonServerUrl", "")
+        )
+        ayon_api_key = (
+            job.GetJobEnvironmentKeyValue("AYON_API_KEY") or
+            config.GetConfigEntryWithDefault("AyonApiKey", "")
+        )
+        ayon_bundle_name = job.GetJobEnvironmentKeyValue("AYON_BUNDLE_NAME")
+
+        environment = {
+            "AYON_SERVER_URL": ayon_server_url,
+            "AYON_API_KEY": ayon_api_key,
+            "AYON_BUNDLE_NAME": ayon_bundle_name,
+        }
+
+        for env, val in environment.items():
+            self.SetProcessEnvironmentVariable(env, val)
+
+        exe_list = self.GetConfigEntry("AyonExecutable")
+        # clean '\ ' for MacOS pasting
+        if platform.system().lower() == "darwin":
+            exe_list = exe_list.replace("\\ ", " ")
+        exe = FileUtils.SearchFileList(exe_list)
+
+        if exe == "":
+            # 'exe_list' is already the configured separated list; joining
+            # it with ";" would interleave separators between characters.
+            self.FailRender(
+                "Ayon executable was not found " +
+                "in the semicolon separated list " +
+                "\"" + exe_list + "\". " +
+                "The path to the render executable can be configured " +
+                "from the Plugin Configuration in the Deadline Monitor.")
+        return exe
" + + "The path to the render executable can be configured " + + "from the Plugin Configuration in the Deadline Monitor.") + return exe + + def RenderArgument(self): + arguments = str(self.GetPluginInfoEntryWithDefault("Arguments", "")) + arguments = RepositoryUtils.CheckPathMapping(arguments) + + arguments = re.sub(r"<(?i)STARTFRAME>", str(self.GetStartFrame()), + arguments) + arguments = re.sub(r"<(?i)ENDFRAME>", str(self.GetEndFrame()), + arguments) + arguments = re.sub(r"<(?i)QUOTE>", "\"", arguments) + + arguments = self.ReplacePaddedFrame(arguments, + "<(?i)STARTFRAME%([0-9]+)>", + self.GetStartFrame()) + arguments = self.ReplacePaddedFrame(arguments, + "<(?i)ENDFRAME%([0-9]+)>", + self.GetEndFrame()) + + count = 0 + for filename in self.GetAuxiliaryFilenames(): + localAuxFile = Path.Combine(self.GetJobsDataDirectory(), filename) + arguments = re.sub(r"<(?i)AUXFILE" + str(count) + r">", + localAuxFile.replace("\\", "/"), arguments) + count += 1 + + return arguments + + def ReplacePaddedFrame(self, arguments, pattern, frame): + frameRegex = Regex(pattern) + while True: + frameMatch = frameRegex.Match(arguments) + if not frameMatch.Success: + break + paddingSize = int(frameMatch.Groups[1].Value) + if paddingSize > 0: + padding = StringUtils.ToZeroPaddedString( + frame, paddingSize, False) + else: + padding = str(frame) + arguments = arguments.replace( + frameMatch.Groups[0].Value, padding) + + return arguments + + def HandleProgress(self): + progress = float(self.GetRegexMatch(1)) + self.SetProgress(progress) diff --git a/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py b/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py index 15226bb773..5f7e1f1032 100644 --- a/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py +++ b/openpype/modules/deadline/repository/custom/plugins/GlobalJobPreLoad.py @@ -355,6 +355,13 @@ def inject_openpype_environment(deadlinePlugin): " AVALON_TASK, AVALON_APP_NAME" )) + openpype_mongo = job.GetJobEnvironmentKeyValue("OPENPYPE_MONGO") + if openpype_mongo: + # inject env var for OP extractenvironments + # SetEnvironmentVariable is important, not SetProcessEnv... + deadlinePlugin.SetEnvironmentVariable("OPENPYPE_MONGO", + openpype_mongo) + if not os.environ.get("OPENPYPE_MONGO"): print(">>> Missing OPENPYPE_MONGO env var, process won't work") @@ -398,6 +405,151 @@ def inject_openpype_environment(deadlinePlugin): raise +def inject_ayon_environment(deadlinePlugin): + """ Pull env vars from Ayon and push them to rendering process. + + Used for correct paths, configuration from OpenPype etc. + """ + job = deadlinePlugin.GetJob() + + print(">>> Injecting Ayon environments ...") + try: + exe_list = get_ayon_executable() + exe = FileUtils.SearchFileList(exe_list) + + if not exe: + raise RuntimeError(( + "Ayon executable was not found in the semicolon " + "separated list \"{}\"." + "The path to the render executable can be configured" + " from the Plugin Configuration in the Deadline Monitor." 
+ ).format(";".join(exe_list))) + + print("--- Ayon executable: {}".format(exe)) + + ayon_bundle_name = job.GetJobEnvironmentKeyValue("AYON_BUNDLE_NAME") + if not ayon_bundle_name: + raise RuntimeError("Missing env var in job properties " + "AYON_BUNDLE_NAME") + + config = RepositoryUtils.GetPluginConfig("Ayon") + ayon_server_url = ( + job.GetJobEnvironmentKeyValue("AYON_SERVER_URL") or + config.GetConfigEntryWithDefault("AyonServerUrl", "") + ) + ayon_api_key = ( + job.GetJobEnvironmentKeyValue("AYON_API_KEY") or + config.GetConfigEntryWithDefault("AyonApiKey", "") + ) + + if not all([ayon_server_url, ayon_api_key]): + raise RuntimeError(( + "Missing required values for server url and api key. " + "Please fill in Ayon Deadline plugin or provide by " + "AYON_SERVER_URL and AYON_API_KEY" + )) + + # tempfile.TemporaryFile cannot be used because of locking + temp_file_name = "{}_{}.json".format( + datetime.utcnow().strftime('%Y%m%d%H%M%S%f'), + str(uuid.uuid1()) + ) + export_url = os.path.join(tempfile.gettempdir(), temp_file_name) + print(">>> Temporary path: {}".format(export_url)) + + args = [ + "--headless", + "extractenvironments", + export_url + ] + + add_kwargs = { + "project": job.GetJobEnvironmentKeyValue("AVALON_PROJECT"), + "asset": job.GetJobEnvironmentKeyValue("AVALON_ASSET"), + "task": job.GetJobEnvironmentKeyValue("AVALON_TASK"), + "app": job.GetJobEnvironmentKeyValue("AVALON_APP_NAME"), + "envgroup": "farm", + } + + if job.GetJobEnvironmentKeyValue('IS_TEST'): + args.append("--automatic-tests") + + if all(add_kwargs.values()): + for key, value in add_kwargs.items(): + args.extend(["--{}".format(key), value]) + else: + raise RuntimeError(( + "Missing required env vars: AVALON_PROJECT, AVALON_ASSET," + " AVALON_TASK, AVALON_APP_NAME" + )) + + environment = { + "AYON_SERVER_URL": ayon_server_url, + "AYON_API_KEY": ayon_api_key, + "AYON_BUNDLE_NAME": ayon_bundle_name, + } + for env, val in environment.items(): + deadlinePlugin.SetEnvironmentVariable(env, val) + + args_str = subprocess.list2cmdline(args) + print(">>> Executing: {} {}".format(exe, args_str)) + process_exitcode = deadlinePlugin.RunProcess( + exe, args_str, os.path.dirname(exe), -1 + ) + + if process_exitcode != 0: + raise RuntimeError( + "Failed to run Ayon process to extract environments." + ) + + print(">>> Loading file ...") + with open(export_url) as fp: + contents = json.load(fp) + + for key, value in contents.items(): + deadlinePlugin.SetProcessEnvironmentVariable(key, value) + + script_url = job.GetJobPluginInfoKeyValue("ScriptFilename") + if script_url: + script_url = script_url.format(**contents).replace("\\", "/") + print(">>> Setting script path {}".format(script_url)) + job.SetJobPluginInfoKeyValue("ScriptFilename", script_url) + + print(">>> Removing temporary file") + os.remove(export_url) + + print(">> Injection end.") + except Exception as e: + if hasattr(e, "output"): + print(">>> Exception {}".format(e.output)) + import traceback + print(traceback.format_exc()) + print("!!! Injection failed.") + RepositoryUtils.FailJob(job) + raise + + +def get_ayon_executable(): + """Return OpenPype Executable from Event Plug-in Settings + + Returns: + (list) of paths + Raises: + (RuntimeError) if no path configured at all + """ + config = RepositoryUtils.GetPluginConfig("Ayon") + exe_list = config.GetConfigEntryWithDefault("AyonExecutable", "") + + if not exe_list: + raise RuntimeError("Path to Ayon executable not configured." 
+ "Please set it in Ayon Deadline Plugin.") + + # clean '\ ' for MacOS pasting + if platform.system().lower() == "darwin": + exe_list = exe_list.replace("\\ ", " ") + return exe_list + + def inject_render_job_id(deadlinePlugin): """Inject dependency ids to publish process as env var for validation.""" print(">>> Injecting render job id ...") @@ -422,16 +574,29 @@ def __main__(deadlinePlugin): openpype_publish_job = \ job.GetJobEnvironmentKeyValue('OPENPYPE_PUBLISH_JOB') or '0' openpype_remote_job = \ - job.GetJobEnvironmentKeyValue('OPENPYPE_REMOTE_JOB') or '0' + job.GetJobEnvironmentKeyValue('OPENPYPE_REMOTE_PUBLISH') or '0' - print("--- Job type - render {}".format(openpype_render_job)) - print("--- Job type - publish {}".format(openpype_publish_job)) - print("--- Job type - remote {}".format(openpype_remote_job)) if openpype_publish_job == '1' and openpype_render_job == '1': raise RuntimeError("Misconfiguration. Job couldn't be both " + "render and publish.") if openpype_publish_job == '1': inject_render_job_id(deadlinePlugin) - elif openpype_render_job == '1' or openpype_remote_job == '1': + if openpype_render_job == '1' or openpype_remote_job == '1': inject_openpype_environment(deadlinePlugin) + + ayon_render_job = \ + job.GetJobEnvironmentKeyValue('AYON_RENDER_JOB') or '0' + ayon_publish_job = \ + job.GetJobEnvironmentKeyValue('AYON_PUBLISH_JOB') or '0' + ayon_remote_job = \ + job.GetJobEnvironmentKeyValue('AYON_REMOTE_PUBLISH') or '0' + + if ayon_publish_job == '1' and ayon_render_job == '1': + raise RuntimeError("Misconfiguration. Job couldn't be both " + + "render and publish.") + + if ayon_publish_job == '1': + inject_render_job_id(deadlinePlugin) + if ayon_render_job == '1' or ayon_remote_job == '1': + inject_ayon_environment(deadlinePlugin) diff --git a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py index 0615af95dd..2f6e9cf379 100644 --- a/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py +++ b/openpype/modules/deadline/repository/custom/plugins/HarmonyOpenPype/HarmonyOpenPype.py @@ -8,13 +8,14 @@ from Deadline.Scripting import * def GetDeadlinePlugin(): return HarmonyOpenPypePlugin() - + def CleanupDeadlinePlugin( deadlinePlugin ): deadlinePlugin.Cleanup() - + class HarmonyOpenPypePlugin( DeadlinePlugin ): def __init__( self ): + super().__init__() self.InitializeProcessCallback += self.InitializeProcess self.RenderExecutableCallback += self.RenderExecutable self.RenderArgumentCallback += self.RenderArgument @@ -24,11 +25,11 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): print("Cleanup") for stdoutHandler in self.StdoutHandlers: del stdoutHandler.HandleCallback - + del self.InitializeProcessCallback del self.RenderExecutableCallback del self.RenderArgumentCallback - + def CheckExitCode( self, exitCode ): print("check code") if exitCode != 0: @@ -36,20 +37,20 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): self.LogInfo( "Renderer reported an error with error code 100. This will be ignored, since the option to ignore it is specified in the Job Properties." ) else: self.FailRender( "Renderer returned non-zero error code %d. Check the renderer's output." 
% exitCode ) - + def InitializeProcess( self ): self.PluginType = PluginType.Simple self.StdoutHandling = True self.PopupHandling = True - + self.AddStdoutHandlerCallback( "Rendered frame ([0-9]+)" ).HandleCallback += self.HandleStdoutProgress - + def HandleStdoutProgress( self ): startFrame = self.GetStartFrame() endFrame = self.GetEndFrame() if( endFrame - startFrame + 1 != 0 ): self.SetProgress( 100 * ( int(self.GetRegexMatch(1)) - startFrame + 1 ) / ( endFrame - startFrame + 1 ) ) - + def RenderExecutable( self ): version = int( self.GetPluginInfoEntry( "Version" ) ) exe = "" @@ -58,7 +59,7 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): if( exe == "" ): self.FailRender( "Harmony render executable was not found in the configured separated list \"" + exeList + "\". The path to the render executable can be configured from the Plugin Configuration in the Deadline Monitor." ) return exe - + def RenderArgument( self ): renderArguments = "-batch" @@ -72,20 +73,20 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): resolutionX = self.GetIntegerPluginInfoEntryWithDefault( "ResolutionX", -1 ) resolutionY = self.GetIntegerPluginInfoEntryWithDefault( "ResolutionY", -1 ) fov = self.GetFloatPluginInfoEntryWithDefault( "FieldOfView", -1 ) - + if resolutionX > 0 and resolutionY > 0 and fov > 0: renderArguments += " -res " + str( resolutionX ) + " " + str( resolutionY ) + " " + str( fov ) - + camera = self.GetPluginInfoEntryWithDefault( "Camera", "" ) - + if not camera == "": renderArguments += " -camera " + camera - + startFrame = str( self.GetStartFrame() ) endFrame = str( self.GetEndFrame() ) - + renderArguments += " -frames " + startFrame + " " + endFrame - + if not self.GetBooleanPluginInfoEntryWithDefault( "IsDatabase", False ): sceneFilename = self.GetPluginInfoEntryWithDefault( "SceneFile", self.GetDataFilename() ) sceneFilename = RepositoryUtils.CheckPathMapping( sceneFilename ) @@ -99,12 +100,12 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): renderArguments += " -scene " + scene version = self.GetPluginInfoEntryWithDefault( "SceneVersion", "" ) renderArguments += " -version " + version - + #tempSceneDirectory = self.CreateTempDirectory( "thread" + str(self.GetThreadNumber()) ) - #preRenderScript = + #preRenderScript = rendernodeNum = 0 scriptBuilder = StringBuilder() - + while True: nodeName = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Node", "" ) if nodeName == "": @@ -115,35 +116,35 @@ class HarmonyOpenPypePlugin( DeadlinePlugin ): nodeLeadingZero = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "LeadingZero", "" ) nodeFormat = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Format", "" ) nodeStartFrame = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "StartFrame", "" ) - + if not nodePath == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"drawingName\", 1, \"" + nodePath + "\" );") - + if not nodeLeadingZero == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"leadingZeros\", 1, \"" + nodeLeadingZero + "\" );") - + if not nodeFormat == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"drawingType\", 1, \"" + nodeFormat + "\" );") - + if not nodeStartFrame == "": scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"start\", 1, \"" + nodeStartFrame + "\" );") - + if nodeType == "Movie": nodePath = self.GetPluginInfoEntryWithDefault( "Output" + str( rendernodeNum ) + "Path", "" ) if not nodePath == "": 
scriptBuilder.AppendLine("node.setTextAttr( \"" + nodeName + "\", \"moviePath\", 1, \"" + nodePath + "\" );") - + rendernodeNum += 1 - + tempDirectory = self.CreateTempDirectory( "thread" + str(self.GetThreadNumber()) ) preRenderScriptName = Path.Combine( tempDirectory, "preRenderScript.txt" ) - + File.WriteAllText( preRenderScriptName, scriptBuilder.ToString() ) - + preRenderInlineScript = self.GetPluginInfoEntryWithDefault( "PreRenderInlineScript", "" ) if preRenderInlineScript: renderArguments += " -preRenderInlineScript \"" + preRenderInlineScript +"\"" - + renderArguments += " -preRenderScript \"" + preRenderScriptName +"\"" - + return renderArguments diff --git a/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py b/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py index 6e1b973fb9..004c58d346 100644 --- a/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py +++ b/openpype/modules/deadline/repository/custom/plugins/OpenPype/OpenPype.py @@ -38,6 +38,7 @@ class OpenPypeDeadlinePlugin(DeadlinePlugin): for publish process. """ def __init__(self): + super().__init__() self.InitializeProcessCallback += self.InitializeProcess self.RenderExecutableCallback += self.RenderExecutable self.RenderArgumentCallback += self.RenderArgument @@ -107,7 +108,7 @@ class OpenPypeDeadlinePlugin(DeadlinePlugin): "Scanning for compatible requested " f"version {requested_version}")) dir_list = self.GetConfigEntry("OpenPypeInstallationDirs") - + # clean '\ ' for MacOS pasting if platform.system().lower() == "darwin": dir_list = dir_list.replace("\\ ", " ") diff --git a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py index b51daffbc8..9641c16d20 100644 --- a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py +++ b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py @@ -249,6 +249,7 @@ class OpenPypeTileAssembler(DeadlinePlugin): def __init__(self): """Init.""" + super().__init__() self.InitializeProcessCallback += self.initialize_process self.RenderExecutableCallback += self.render_executable self.RenderArgumentCallback += self.render_argument diff --git a/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py b/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py index df9147bdf7..442206feba 100644 --- a/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py +++ b/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py @@ -40,6 +40,7 @@ class SyncToAvalonServer(ServerAction): #: Action description. 
description = "Send data from Ftrack to Avalon" role_list = {"Pypeclub", "Administrator", "Project Manager"} + settings_key = "sync_to_avalon" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -48,11 +49,16 @@ class SyncToAvalonServer(ServerAction): def discover(self, session, entities, event): """ Validation """ # Check if selection is valid + is_valid = False for ent in event["data"]["selection"]: # Ignore entities that are not tasks or projects if ent["entityType"].lower() in ["show", "task"]: - return True - return False + is_valid = True + break + + if is_valid: + is_valid = self.valid_roles(session, entities, event) + return is_valid def launch(self, session, in_entities, event): self.log.debug("{}: Creating job".format(self.label)) diff --git a/openpype/modules/ftrack/ftrack_module.py b/openpype/modules/ftrack/ftrack_module.py index d61b5f0b26..b5152ff9c4 100644 --- a/openpype/modules/ftrack/ftrack_module.py +++ b/openpype/modules/ftrack/ftrack_module.py @@ -123,18 +123,7 @@ class FtrackModule( # Add Python 2 modules python_paths = [ # `python-ftrack-api` - os.path.join(python_2_vendor, "ftrack-python-api", "source"), - # `arrow` - os.path.join(python_2_vendor, "arrow"), - # `builtins` from `python-future` - # - `python-future` is strict Python 2 module that cause crashes - # of Python 3 scripts executed through OpenPype - # (burnin script etc.) - os.path.join(python_2_vendor, "builtins"), - # `backports.functools_lru_cache` - os.path.join( - python_2_vendor, "backports.functools_lru_cache" - ) + os.path.join(python_2_vendor, "ftrack-python-api", "source") ] # Load PYTHONPATH from current launch context diff --git a/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py b/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py index 86ecffd5b8..ac4e499e41 100644 --- a/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py +++ b/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py @@ -2,11 +2,12 @@ import os import ftrack_api from openpype.settings import get_project_settings -from openpype.lib import PostLaunchHook +from openpype.lib.applications import PostLaunchHook, LaunchTypes class PostFtrackHook(PostLaunchHook): order = None + launch_types = {LaunchTypes.local} def execute(self): project_name = self.data.get("project_name") diff --git a/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py index e13b7e65cd..fe3275ce2c 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py +++ b/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py @@ -1,8 +1,6 @@ import logging import pyblish.api -from openpype.pipeline import legacy_io - class CollectFtrackApi(pyblish.api.ContextPlugin): """ Collects an ftrack session and the current task id. 
""" @@ -24,9 +22,9 @@ class CollectFtrackApi(pyblish.api.ContextPlugin): self.log.debug("Ftrack user: \"{0}\"".format(session.api_user)) # Collect task - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] + project_name = context.data["projectName"] + asset_name = context.data["asset"] + task_name = context.data["task"] # Find project entity project_query = 'Project where full_name is "{0}"'.format(project_name) diff --git a/openpype/modules/ftrack/plugins/publish/collect_username.py b/openpype/modules/ftrack/plugins/publish/collect_username.py index 798f3960a8..0c7c0a57be 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_username.py +++ b/openpype/modules/ftrack/plugins/publish/collect_username.py @@ -33,7 +33,7 @@ class CollectUsernameForWebpublish(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder + 0.0015 label = "Collect ftrack username" hosts = ["webpublisher", "photoshop"] - targets = ["remotepublish", "filespublish", "tvpaint_worker"] + targets = ["webpublish"] def process(self, context): self.log.info("{}".format(self.__class__.__name__)) diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py index deb8b414f0..4d474fab10 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py @@ -11,10 +11,8 @@ Provides: """ import os -import sys import collections -import six import pyblish.api import clique @@ -355,7 +353,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin): status_name = asset_version_data.pop("status_name", None) # Try query asset version by criteria (asset id and version) - version = asset_version_data.get("version") or 0 + version = asset_version_data.get("version") or "0" asset_version_entity = self._query_asset_version( session, version, asset_id ) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/.gitignore b/openpype/modules/ftrack/python2_vendor/arrow/.gitignore deleted file mode 100644 index 0448d0cf0c..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/.gitignore +++ /dev/null @@ -1,211 +0,0 @@ -README.rst.new - -# Small entry point file for debugging tasks -test.py - -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class - -# C extensions -*.so - -# Distribution / packaging -.Python -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -pip-wheel-metadata/ -share/python-wheels/ -*.egg-info/ -.installed.cfg -*.egg - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. 
-*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.nox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -.hypothesis/ -.pytest_cache/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py -db.sqlite3 -db.sqlite3-journal - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/_build/ - -# PyBuilder -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# IPython -profile_default/ -ipython_config.py - -# pyenv -.python-version - -# celery beat schedule file -celerybeat-schedule - -# SageMath parsed files -*.sage.py - -# Environments -.env -.venv -env/ -venv/ -ENV/ -local/ -env.bak/ -venv.bak/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ -.dmypy.json -dmypy.json - -# Pyre type checker -.pyre/ - -# Swap -[._]*.s[a-v][a-z] -[._]*.sw[a-p] -[._]s[a-rt-v][a-z] -[._]ss[a-gi-z] -[._]sw[a-p] - -# Session -Session.vim -Sessionx.vim - -# Temporary -.netrwhist -*~ -# Auto-generated tag files -tags -# Persistent undo -[._]*.un~ - -.idea/ -.vscode/ - -# General -.DS_Store -.AppleDouble -.LSOverride - -# Icon must end with two \r -Icon - - -# Thumbnails -._* - -# Files that might appear in the root of a volume -.DocumentRevisions-V100 -.fseventsd -.Spotlight-V100 -.TemporaryItems -.Trashes -.VolumeIcon.icns -.com.apple.timemachine.donotpresent - -# Directories potentially created on remote AFP share -.AppleDB -.AppleDesktop -Network Trash Folder -Temporary Items -.apdisk - -*~ - -# temporary files which can be created if a process still has a handle open of a deleted file -.fuse_hidden* - -# KDE directory preferences -.directory - -# Linux trash folder which might appear on any partition or disk -.Trash-* - -# .nfs files are created when an open file is removed but is still being accessed -.nfs* - -# Windows thumbnail cache files -Thumbs.db -Thumbs.db:encryptable -ehthumbs.db -ehthumbs_vista.db - -# Dump file -*.stackdump - -# Folder config file -[Dd]esktop.ini - -# Recycle Bin used on file shares -$RECYCLE.BIN/ - -# Windows Installer files -*.cab -*.msi -*.msix -*.msm -*.msp - -# Windows shortcuts -*.lnk diff --git a/openpype/modules/ftrack/python2_vendor/arrow/.pre-commit-config.yaml b/openpype/modules/ftrack/python2_vendor/arrow/.pre-commit-config.yaml deleted file mode 100644 index 1f5128595b..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/.pre-commit-config.yaml +++ /dev/null @@ -1,41 +0,0 @@ -default_language_version: - python: python3 -repos: - - repo: https://github.com/pre-commit/pre-commit-hooks - rev: v3.2.0 - hooks: - - id: trailing-whitespace - - id: end-of-file-fixer - - id: fix-encoding-pragma - exclude: ^arrow/_version.py - - id: requirements-txt-fixer - - id: check-ast - - id: check-yaml - - id: check-case-conflict - - id: check-docstring-first - - id: check-merge-conflict - - id: debug-statements - - repo: https://github.com/timothycrosley/isort - rev: 5.4.2 - hooks: - - id: isort - - repo: https://github.com/asottile/pyupgrade - rev: v2.7.2 - hooks: - - id: pyupgrade - - repo: https://github.com/pre-commit/pygrep-hooks - rev: v1.6.0 - hooks: - - id: python-no-eval - - id: python-check-blanket-noqa - - id: rst-backticks - - repo: https://github.com/psf/black - rev: 20.8b1 - hooks: - - id: black - args: [--safe, --quiet] - - repo: https://gitlab.com/pycqa/flake8 - rev: 3.8.3 - hooks: - - 
id: flake8 - additional_dependencies: [flake8-bugbear] diff --git a/openpype/modules/ftrack/python2_vendor/arrow/CHANGELOG.rst b/openpype/modules/ftrack/python2_vendor/arrow/CHANGELOG.rst deleted file mode 100644 index 0b55a4522c..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/CHANGELOG.rst +++ /dev/null @@ -1,598 +0,0 @@ -Changelog -========= - -0.17.0 (2020-10-2) -------------------- - -- [WARN] Arrow will **drop support** for Python 2.7 and 3.5 in the upcoming 1.0.0 release. This is the last major release to support Python 2.7 and Python 3.5. -- [NEW] Arrow now properly handles imaginary datetimes during DST shifts. For example: - -..code-block:: python - >>> just_before = arrow.get(2013, 3, 31, 1, 55, tzinfo="Europe/Paris") - >>> just_before.shift(minutes=+10) - - -..code-block:: python - >>> before = arrow.get("2018-03-10 23:00:00", "YYYY-MM-DD HH:mm:ss", tzinfo="US/Pacific") - >>> after = arrow.get("2018-03-11 04:00:00", "YYYY-MM-DD HH:mm:ss", tzinfo="US/Pacific") - >>> result=[(t, t.to("utc")) for t in arrow.Arrow.range("hour", before, after)] - >>> for r in result: - ... print(r) - ... - (, ) - (, ) - (, ) - (, ) - (, ) - -- [NEW] Added ``humanize`` week granularity translation for Tagalog. -- [CHANGE] Calls to the ``timestamp`` property now emit a ``DeprecationWarning``. In a future release, ``timestamp`` will be changed to a method to align with Python's datetime module. If you would like to continue using the property, please change your code to use the ``int_timestamp`` or ``float_timestamp`` properties instead. -- [CHANGE] Expanded and improved Catalan locale. -- [FIX] Fixed a bug that caused ``Arrow.range()`` to incorrectly cut off ranges in certain scenarios when using month, quarter, or year endings. -- [FIX] Fixed a bug that caused day of week token parsing to be case sensitive. -- [INTERNAL] A number of functions were reordered in arrow.py for better organization and grouping of related methods. This change will have no impact on usage. -- [INTERNAL] A minimum tox version is now enforced for compatibility reasons. Contributors must use tox >3.18.0 going forward. - -0.16.0 (2020-08-23) -------------------- - -- [WARN] Arrow will **drop support** for Python 2.7 and 3.5 in the upcoming 1.0.0 release. The 0.16.x and 0.17.x releases are the last to support Python 2.7 and 3.5. -- [NEW] Implemented `PEP 495 `_ to handle ambiguous datetimes. This is achieved by the addition of the ``fold`` attribute for Arrow objects. For example: - -.. code-block:: python - - >>> before = Arrow(2017, 10, 29, 2, 0, tzinfo='Europe/Stockholm') - - >>> before.fold - 0 - >>> before.ambiguous - True - >>> after = Arrow(2017, 10, 29, 2, 0, tzinfo='Europe/Stockholm', fold=1) - - >>> after = before.replace(fold=1) - - -- [NEW] Added ``normalize_whitespace`` flag to ``arrow.get``. This is useful for parsing log files and/or any files that may contain inconsistent spacing. For example: - -.. code-block:: python - - >>> arrow.get("Jun 1 2005 1:33PM", "MMM D YYYY H:mmA", normalize_whitespace=True) - - >>> arrow.get("2013-036 \t 04:05:06Z", normalize_whitespace=True) - - -0.15.8 (2020-07-23) -------------------- - -- [WARN] Arrow will **drop support** for Python 2.7 and 3.5 in the upcoming 1.0.0 release. The 0.15.x, 0.16.x, and 0.17.x releases are the last to support Python 2.7 and 3.5. -- [NEW] Added ``humanize`` week granularity translation for Czech. -- [FIX] ``arrow.get`` will now pick sane defaults when weekdays are passed with particular token combinations, see `#446 `_. 
-- [INTERNAL] Moved arrow to an organization. The repo can now be found `here <https://github.com/arrow-py/arrow>`_.
-- [INTERNAL] Started issuing deprecation warnings for Python 2.7 and 3.5.
-- [INTERNAL] Added Python 3.9 to CI pipeline.
-
-0.15.7 (2020-06-19)
--------------------
-
-- [NEW] Added a number of built-in format strings. See the `docs <https://arrow.readthedocs.io>`_ for a complete list of supported formats. For example:
-
-.. code-block:: python
-
-    >>> arw = arrow.utcnow()
-    >>> arw.format(arrow.FORMAT_COOKIE)
-    'Wednesday, 27-May-2020 10:30:35 UTC'
-
-- [NEW] Arrow is now fully compatible with Python 3.9 and PyPy3.
-- [NEW] Added Makefile, tox.ini, and requirements.txt files to the distribution bundle.
-- [NEW] Added French Canadian and Swahili locales.
-- [NEW] Added ``humanize`` week granularity translation for Hebrew, Greek, Macedonian, Swedish, Slovak.
-- [FIX] ms and μs timestamps are now normalized in ``arrow.get()``, ``arrow.fromtimestamp()``, and ``arrow.utcfromtimestamp()``. For example:
-
-.. code-block:: python
-
-    >>> ts = 1591161115194556
-    >>> arw = arrow.get(ts)
-    <Arrow [2020-06-03T05:11:55.194556+00:00]>
-    >>> arw.timestamp
-    1591161115
-
-- [FIX] Refactored and updated Macedonian, Hebrew, Korean, and Portuguese locales.
-
-0.15.6 (2020-04-29)
--------------------
-
-- [NEW] Added support for parsing and formatting `ISO 8601 week dates <https://en.wikipedia.org/wiki/ISO_week_date>`_ via a new token ``W``, for example:
-
-.. code-block:: python
-
-    >>> arrow.get("2013-W29-6", "W")
-    <Arrow [2013-07-20T00:00:00+00:00]>
-    >>> utc = arrow.utcnow()
-    >>> utc
-    <Arrow [2020-01-23T18:37:55.417624+00:00]>
-    >>> utc.format("W")
-    '2020-W04-4'
-
-- [NEW] Formatting with ``x`` token (microseconds) is now possible, for example:
-
-.. code-block:: python
-
-    >>> dt = arrow.utcnow()
-    >>> dt.format("x")
-    '1585669870688329'
-    >>> dt.format("X")
-    '1585669870'
-
-- [NEW] Added ``humanize`` week granularity translation for German, Italian, Polish & Taiwanese locales.
-- [FIX] Consolidated and simplified German locales.
-- [INTERNAL] Moved testing suite from nosetest/Chai to pytest/pytest-mock.
-- [INTERNAL] Converted xunit-style setup and teardown functions in tests to pytest fixtures.
-- [INTERNAL] Setup Github Actions for CI alongside Travis.
-- [INTERNAL] Help support Arrow's future development by donating to the project on `Open Collective <https://opencollective.com/arrow>`_.
-
-0.15.5 (2020-01-03)
--------------------
-
-- [WARN] Python 2 reached EOL on 2020-01-01. arrow will **drop support** for Python 2 in a future release to be decided (see `#739 <https://github.com/arrow-py/arrow/issues/739>`_).
-- [NEW] Added bounds parameter to ``span_range``, ``interval`` and ``span`` methods. This allows you to include or exclude the start and end values.
-- [NEW] ``arrow.get()`` can now create arrow objects from a timestamp with a timezone, for example:
-
-.. code-block:: python
-
-    >>> arrow.get(1367900664, tzinfo=tz.gettz('US/Pacific'))
-    <Arrow [2013-05-06T21:24:24-07:00]>
-
-- [NEW] ``humanize`` can now combine multiple levels of granularity, for example:
-
-.. code-block:: python
-
-    >>> later140 = arrow.utcnow().shift(seconds=+8400)
-    >>> later140.humanize(granularity="minute")
-    'in 139 minutes'
-    >>> later140.humanize(granularity=["hour", "minute"])
-    'in 2 hours and 19 minutes'
-
-- [NEW] Added Hong Kong locale (``zh_hk``).
-- [NEW] Added ``humanize`` week granularity translation for Dutch.
-- [NEW] Numbers are now displayed when using the seconds granularity in ``humanize``.
-- [CHANGE] ``range`` now supports both the singular and plural forms of the ``frames`` argument (e.g. day and days).
-- [FIX] Improved parsing of strings that contain punctuation.
-- [FIX] Improved behaviour of ``humanize`` when singular seconds are involved.
-
-0.15.4 (2019-11-02)
--------------------
-
-- [FIX] Fixed an issue that caused package installs to fail on Conda Forge.
-
-0.15.3 (2019-11-02)
--------------------
-
-- [NEW] ``factory.get()`` can now create arrow objects from an ISO calendar tuple, for example:
-
-.. code-block:: python
-
-    >>> arrow.get((2013, 18, 7))
-    <Arrow [2013-05-05T00:00:00+00:00]>
-
-- [NEW] Added a new token ``x`` to allow parsing of integer timestamps with milliseconds and microseconds.
-- [NEW] Formatting now supports escaping of characters using the same syntax as parsing, for example:
-
-.. code-block:: python
-
-    >>> arw = arrow.now()
-    >>> fmt = "YYYY-MM-DD h [h] m"
-    >>> arw.format(fmt)
-    '2019-11-02 3 h 32'
-
-- [NEW] Added ``humanize`` week granularity translations for Chinese, Spanish and Vietnamese.
-- [CHANGE] Added ``ParserError`` to module exports.
-- [FIX] Added support for midnight at end of day. See `#703 <https://github.com/arrow-py/arrow/issues/703>`_ for details.
-- [INTERNAL] Created Travis build for macOS.
-- [INTERNAL] Test parsing and formatting against full timezone database.
-
-0.15.2 (2019-09-14)
--------------------
-
-- [NEW] Added ``humanize`` week granularity translations for Portuguese and Brazilian Portuguese.
-- [NEW] Embedded changelog within docs and added release dates to versions.
-- [FIX] Fixed a bug that caused test failures on Windows only, see `#668 <https://github.com/arrow-py/arrow/issues/668>`_ for details.
-
-0.15.1 (2019-09-10)
--------------------
-
-- [NEW] Added ``humanize`` week granularity translations for Japanese.
-- [FIX] Fixed a bug that caused Arrow to fail when passed a negative timestamp string.
-- [FIX] Fixed a bug that caused Arrow to fail when passed a datetime object with ``tzinfo`` of type ``StaticTzInfo``.
-
-0.15.0 (2019-09-08)
--------------------
-
-- [NEW] Added support for DDD and DDDD ordinal date tokens. The following functionality is now possible: ``arrow.get("1998-045")``, ``arrow.get("1998-45", "YYYY-DDD")``, ``arrow.get("1998-045", "YYYY-DDDD")``.
-- [NEW] ISO 8601 basic format for dates and times is now supported (e.g. ``YYYYMMDDTHHmmssZ``).
-- [NEW] Added ``humanize`` week granularity translations for French, Russian and Swiss German locales.
-- [CHANGE] Timestamps of type ``str`` are no longer supported **without a format string** in the ``arrow.get()`` method. This change was made to support the ISO 8601 basic format and to address bugs such as `#447 <https://github.com/arrow-py/arrow/issues/447>`_.
-
-The following will NOT work in v0.15.0:
-
-.. code-block:: python
-
-    >>> arrow.get("1565358758")
-    >>> arrow.get("1565358758.123413")
-
-The following will work in v0.15.0:
-
-.. code-block:: python
-
-    >>> arrow.get("1565358758", "X")
-    >>> arrow.get("1565358758.123413", "X")
-    >>> arrow.get(1565358758)
-    >>> arrow.get(1565358758.123413)
-
-- [CHANGE] When a meridian token (a|A) is passed and no meridians are available for the specified locale (e.g. unsupported or untranslated) a ``ParserError`` is raised.
-- [CHANGE] The timestamp token (``X``) will now match float timestamps of type ``str``: ``arrow.get("1565358758.123415", "X")``.
-- [CHANGE] Strings with leading and/or trailing whitespace will no longer be parsed without a format string. Please see `the docs <https://arrow.readthedocs.io>`_ for ways to handle this.
-- [FIX] The timestamp token (``X``) will now only match on strings that **strictly contain integers and floats**, preventing incorrect matches.
-- [FIX] Most instances of ``arrow.get()`` returning an incorrect ``Arrow`` object from a partial parsing match have been eliminated. The following issues have been addressed: `#91 <https://github.com/arrow-py/arrow/issues/91>`_, `#196 <https://github.com/arrow-py/arrow/issues/196>`_, `#396 <https://github.com/arrow-py/arrow/issues/396>`_, `#434 <https://github.com/arrow-py/arrow/issues/434>`_, `#447 <https://github.com/arrow-py/arrow/issues/447>`_, `#456 <https://github.com/arrow-py/arrow/issues/456>`_, `#519 <https://github.com/arrow-py/arrow/issues/519>`_, `#538 <https://github.com/arrow-py/arrow/issues/538>`_, `#560 <https://github.com/arrow-py/arrow/issues/560>`_.
-
-0.14.7 (2019-09-04)
--------------------
-
-- [CHANGE] ``ArrowParseWarning`` will no longer be printed on every call to ``arrow.get()`` with a datetime string. The purpose of the warning was to start a conversation about the upcoming 0.15.0 changes and we appreciate all the feedback that the community has given us!
-
-0.14.6 (2019-08-28)
--------------------
-
-- [NEW] Added support for ``week`` granularity in ``Arrow.humanize()``. For example, ``arrow.utcnow().shift(weeks=-1).humanize(granularity="week")`` outputs "a week ago". This change introduced two new untranslated words, ``week`` and ``weeks``, to all locale dictionaries, so locale contributions are welcome!
-- [NEW] Fully translated the Brazilian Portuguese locale.
-- [CHANGE] Updated the Macedonian locale to inherit from a Slavic base.
-- [FIX] Fixed a bug that caused ``arrow.get()`` to ignore tzinfo arguments of type string (e.g. ``arrow.get(tzinfo="Europe/Paris")``).
-- [FIX] Fixed a bug that occurred when ``arrow.Arrow()`` was instantiated with a ``pytz`` tzinfo object.
-- [FIX] Fixed a bug that caused Arrow to fail when passed a sub-second token, that when rounded, had a value greater than 999999 (e.g. ``arrow.get("2015-01-12T01:13:15.9999995")``). Arrow should now accurately propagate the rounding for large sub-second tokens.
-
-0.14.5 (2019-08-09)
--------------------
-
-- [NEW] Added Afrikaans locale.
-- [CHANGE] Removed deprecated ``replace`` shift functionality. Users looking to pass plural properties to the ``replace`` function to shift values should use ``shift`` instead.
-- [FIX] Fixed bug that occurred when ``factory.get()`` was passed a locale kwarg.
-
-0.14.4 (2019-07-30)
--------------------
-
-- [FIX] Fixed a regression in 0.14.3 that prevented a tzinfo argument of type string to be passed to the ``get()`` function. Functionality such as ``arrow.get("2019072807", "YYYYMMDDHH", tzinfo="UTC")`` should work as normal again.
-- [CHANGE] Moved ``backports.functools_lru_cache`` dependency from ``extras_require`` to ``install_requires`` for ``Python 2.7`` installs to fix `#495 <https://github.com/arrow-py/arrow/issues/495>`_.
-
-0.14.3 (2019-07-28)
--------------------
-
-- [NEW] Added full support for Python 3.8.
-- [CHANGE] Added warnings for upcoming factory.get() parsing changes in 0.15.0. Please see `#612 <https://github.com/arrow-py/arrow/issues/612>`_ for full details.
-- [FIX] Extensive refactor and update of documentation.
-- [FIX] factory.get() can now construct from kwargs.
-- [FIX] Added meridians to Spanish Locale.
-
-0.14.2 (2019-06-06)
--------------------
-
-- [CHANGE] Travis CI builds now use tox to lint and run tests.
-- [FIX] Fixed UnicodeDecodeError on certain locales (#600).
-
-0.14.1 (2019-06-06)
--------------------
-
-- [FIX] Fixed ``ImportError: No module named 'dateutil'`` (#598).
-
-0.14.0 (2019-06-06)
--------------------
-
-- [NEW] Added provisional support for Python 3.8.
-- [CHANGE] Removed support for EOL Python 3.4.
-- [FIX] Updated setup.py with modern Python standards.
-- [FIX] Upgraded dependencies to latest versions.
-- [FIX] Enabled flake8 and black on travis builds.
-- [FIX] Formatted code using black and isort.
-
-0.13.2 (2019-05-30)
--------------------
-
-- [NEW] Add is_between method.
-- [FIX] Improved humanize behaviour for near zero durations (#416).
-- [FIX] Correct humanize behaviour with future days (#541).
-- [FIX] Documentation updates.
-- [FIX] Improvements to German Locale.
-
-0.13.1 (2019-02-17)
--------------------
-
-- [NEW] Add support for Python 3.7.
-- [CHANGE] Remove deprecation decorators for Arrow.range(), Arrow.span_range() and Arrow.interval(), all now return generators, wrap with list() to get old behavior. -- [FIX] Documentation and docstring updates. - -0.13.0 (2019-01-09) -------------------- - -- [NEW] Added support for Python 3.6. -- [CHANGE] Drop support for Python 2.6/3.3. -- [CHANGE] Return generator instead of list for Arrow.range(), Arrow.span_range() and Arrow.interval(). -- [FIX] Make arrow.get() work with str & tzinfo combo. -- [FIX] Make sure special RegEx characters are escaped in format string. -- [NEW] Added support for ZZZ when formatting. -- [FIX] Stop using datetime.utcnow() in internals, use datetime.now(UTC) instead. -- [FIX] Return NotImplemented instead of TypeError in arrow math internals. -- [NEW] Added Estonian Locale. -- [FIX] Small fixes to Greek locale. -- [FIX] TagalogLocale improvements. -- [FIX] Added test requirements to setup. -- [FIX] Improve docs for get, now and utcnow methods. -- [FIX] Correct typo in depreciation warning. - -0.12.1 ------- - -- [FIX] Allow universal wheels to be generated and reliably installed. -- [FIX] Make humanize respect only_distance when granularity argument is also given. - -0.12.0 ------- - -- [FIX] Compatibility fix for Python 2.x - -0.11.0 ------- - -- [FIX] Fix grammar of ArabicLocale -- [NEW] Add Nepali Locale -- [FIX] Fix month name + rename AustriaLocale -> AustrianLocale -- [FIX] Fix typo in Basque Locale -- [FIX] Fix grammar in PortugueseBrazilian locale -- [FIX] Remove pip --user-mirrors flag -- [NEW] Add Indonesian Locale - -0.10.0 ------- - -- [FIX] Fix getattr off by one for quarter -- [FIX] Fix negative offset for UTC -- [FIX] Update arrow.py - -0.9.0 ------ - -- [NEW] Remove duplicate code -- [NEW] Support gnu date iso 8601 -- [NEW] Add support for universal wheels -- [NEW] Slovenian locale -- [NEW] Slovak locale -- [NEW] Romanian locale -- [FIX] respect limit even if end is defined range -- [FIX] Separate replace & shift functions -- [NEW] Added tox -- [FIX] Fix supported Python versions in documentation -- [NEW] Azerbaijani locale added, locale issue fixed in Turkish. -- [FIX] Format ParserError's raise message - -0.8.0 ------ - -- [] - -0.7.1 ------ - -- [NEW] Esperanto locale (batisteo) - -0.7.0 ------ - -- [FIX] Parse localized strings #228 (swistakm) -- [FIX] Modify tzinfo parameter in ``get`` api #221 (bottleimp) -- [FIX] Fix Czech locale (PrehistoricTeam) -- [FIX] Raise TypeError when adding/subtracting non-dates (itsmeolivia) -- [FIX] Fix pytz conversion error (Kudo) -- [FIX] Fix overzealous time truncation in span_range (kdeldycke) -- [NEW] Humanize for time duration #232 (ybrs) -- [NEW] Add Thai locale (sipp11) -- [NEW] Adding Belarusian (be) locale (oire) -- [NEW] Search date in strings (beenje) -- [NEW] Note that arrow's tokens differ from strptime's. (offby1) - -0.6.0 ------ - -- [FIX] Added support for Python 3 -- [FIX] Avoid truncating oversized epoch timestamps. Fixes #216. -- [FIX] Fixed month abbreviations for Ukrainian -- [FIX] Fix typo timezone -- [FIX] A couple of dialect fixes and two new languages -- [FIX] Spanish locale: ``Miercoles`` should have acute accent -- [Fix] Fix Finnish grammar -- [FIX] Fix typo in 'Arrow.floor' docstring -- [FIX] Use read() utility to open README -- [FIX] span_range for week frame -- [NEW] Add minimal support for fractional seconds longer than six digits. 
-- [NEW] Adding locale support for Marathi (mr) -- [NEW] Add count argument to span method -- [NEW] Improved docs - -0.5.1 - 0.5.4 -------------- - -- [FIX] test the behavior of simplejson instead of calling for_json directly (tonyseek) -- [FIX] Add Hebrew Locale (doodyparizada) -- [FIX] Update documentation location (andrewelkins) -- [FIX] Update setup.py Development Status level (andrewelkins) -- [FIX] Case insensitive month match (cshowe) - -0.5.0 ------ - -- [NEW] struct_time addition. (mhworth) -- [NEW] Version grep (eirnym) -- [NEW] Default to ISO 8601 format (emonty) -- [NEW] Raise TypeError on comparison (sniekamp) -- [NEW] Adding Macedonian(mk) locale (krisfremen) -- [FIX] Fix for ISO seconds and fractional seconds (sdispater) (andrewelkins) -- [FIX] Use correct Dutch wording for "hours" (wbolster) -- [FIX] Complete the list of english locales (indorilftw) -- [FIX] Change README to reStructuredText (nyuszika7h) -- [FIX] Parse lower-cased 'h' (tamentis) -- [FIX] Slight modifications to Dutch locale (nvie) - -0.4.4 ------ - -- [NEW] Include the docs in the released tarball -- [NEW] Czech localization Czech localization for Arrow -- [NEW] Add fa_ir to locales -- [FIX] Fixes parsing of time strings with a final Z -- [FIX] Fixes ISO parsing and formatting for fractional seconds -- [FIX] test_fromtimestamp sp -- [FIX] some typos fixed -- [FIX] removed an unused import statement -- [FIX] docs table fix -- [FIX] Issue with specify 'X' template and no template at all to arrow.get -- [FIX] Fix "import" typo in docs/index.rst -- [FIX] Fix unit tests for zero passed -- [FIX] Update layout.html -- [FIX] In Norwegian and new Norwegian months and weekdays should not be capitalized -- [FIX] Fixed discrepancy between specifying 'X' to arrow.get and specifying no template - -0.4.3 ------ - -- [NEW] Turkish locale (Emre) -- [NEW] Arabic locale (Mosab Ahmad) -- [NEW] Danish locale (Holmars) -- [NEW] Icelandic locale (Holmars) -- [NEW] Hindi locale (Atmb4u) -- [NEW] Malayalam locale (Atmb4u) -- [NEW] Finnish locale (Stormpat) -- [NEW] Portuguese locale (Danielcorreia) -- [NEW] ``h`` and ``hh`` strings are now supported (Averyonghub) -- [FIX] An incorrect inflection in the Polish locale has been fixed (Avalanchy) -- [FIX] ``arrow.get`` now properly handles ``Date`` (Jaapz) -- [FIX] Tests are now declared in ``setup.py`` and the manifest (Pypingou) -- [FIX] ``__version__`` has been added to ``__init__.py`` (Sametmax) -- [FIX] ISO 8601 strings can be parsed without a separator (Ivandiguisto / Root) -- [FIX] Documentation is now more clear regarding some inputs on ``arrow.get`` (Eriktaubeneck) -- [FIX] Some documentation links have been fixed (Vrutsky) -- [FIX] Error messages for parse errors are now more descriptive (Maciej Albin) -- [FIX] The parser now correctly checks for separators in strings (Mschwager) - -0.4.2 ------ - -- [NEW] Factory ``get`` method now accepts a single ``Arrow`` argument. -- [NEW] Tokens SSSS, SSSSS and SSSSSS are supported in parsing. -- [NEW] ``Arrow`` objects have a ``float_timestamp`` property. 
-- [NEW] Vietnamese locale (Iu1nguoi) -- [NEW] Factory ``get`` method now accepts a list of format strings (Dgilland) -- [NEW] A MANIFEST.in file has been added (Pypingou) -- [NEW] Tests can be run directly from ``setup.py`` (Pypingou) -- [FIX] Arrow docs now list 'day of week' format tokens correctly (Rudolphfroger) -- [FIX] Several issues with the Korean locale have been resolved (Yoloseem) -- [FIX] ``humanize`` now correctly returns unicode (Shvechikov) -- [FIX] ``Arrow`` objects now pickle / unpickle correctly (Yoloseem) - -0.4.1 ------ - -- [NEW] Table / explanation of formatting & parsing tokens in docs -- [NEW] Brazilian locale (Augusto2112) -- [NEW] Dutch locale (OrangeTux) -- [NEW] Italian locale (Pertux) -- [NEW] Austrain locale (LeChewbacca) -- [NEW] Tagalog locale (Marksteve) -- [FIX] Corrected spelling and day numbers in German locale (LeChewbacca) -- [FIX] Factory ``get`` method should now handle unicode strings correctly (Bwells) -- [FIX] Midnight and noon should now parse and format correctly (Bwells) - -0.4.0 ------ - -- [NEW] Format-free ISO 8601 parsing in factory ``get`` method -- [NEW] Support for 'week' / 'weeks' in ``span``, ``range``, ``span_range``, ``floor`` and ``ceil`` -- [NEW] Support for 'weeks' in ``replace`` -- [NEW] Norwegian locale (Martinp) -- [NEW] Japanese locale (CortYuming) -- [FIX] Timezones no longer show the wrong sign when formatted (Bean) -- [FIX] Microseconds are parsed correctly from strings (Bsidhom) -- [FIX] Locale day-of-week is no longer off by one (Cynddl) -- [FIX] Corrected plurals of Ukrainian and Russian nouns (Catchagain) -- [CHANGE] Old 0.1 ``arrow`` module method removed -- [CHANGE] Dropped timestamp support in ``range`` and ``span_range`` (never worked correctly) -- [CHANGE] Dropped parsing of single string as tz string in factory ``get`` method (replaced by ISO 8601) - -0.3.5 ------ - -- [NEW] French locale (Cynddl) -- [NEW] Spanish locale (Slapresta) -- [FIX] Ranges handle multiple timezones correctly (Ftobia) - -0.3.4 ------ - -- [FIX] Humanize no longer sometimes returns the wrong month delta -- [FIX] ``__format__`` works correctly with no format string - -0.3.3 ------ - -- [NEW] Python 2.6 support -- [NEW] Initial support for locale-based parsing and formatting -- [NEW] ArrowFactory class, now proxied as the module API -- [NEW] ``factory`` api method to obtain a factory for a custom type -- [FIX] Python 3 support and tests completely ironed out - -0.3.2 ------ - -- [NEW] Python 3+ support - -0.3.1 ------ - -- [FIX] The old ``arrow`` module function handles timestamps correctly as it used to - -0.3.0 ------ - -- [NEW] ``Arrow.replace`` method -- [NEW] Accept timestamps, datetimes and Arrows for datetime inputs, where reasonable -- [FIX] ``range`` and ``span_range`` respect end and limit parameters correctly -- [CHANGE] Arrow objects are no longer mutable -- [CHANGE] Plural attribute name semantics altered: single -> absolute, plural -> relative -- [CHANGE] Plural names no longer supported as properties (e.g. 
``arrow.utcnow().years``) - -0.2.1 ------ - -- [NEW] Support for localized humanization -- [NEW] English, Russian, Greek, Korean, Chinese locales - -0.2.0 ------ - -- **REWRITE** -- [NEW] Date parsing -- [NEW] Date formatting -- [NEW] ``floor``, ``ceil`` and ``span`` methods -- [NEW] ``datetime`` interface implementation -- [NEW] ``clone`` method -- [NEW] ``get``, ``now`` and ``utcnow`` API methods - -0.1.6 ------ - -- [NEW] Humanized time deltas -- [NEW] ``__eq__`` implemented -- [FIX] Issues with conversions related to daylight savings time resolved -- [CHANGE] ``__str__`` uses ISO formatting - -0.1.5 ------ - -- **Started tracking changes** -- [NEW] Parsing of ISO-formatted time zone offsets (e.g. '+02:30', '-05:00') -- [NEW] Resolved some issues with timestamps and delta / Olson time zones diff --git a/openpype/modules/ftrack/python2_vendor/arrow/MANIFEST.in b/openpype/modules/ftrack/python2_vendor/arrow/MANIFEST.in deleted file mode 100644 index d9955ed96a..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/MANIFEST.in +++ /dev/null @@ -1,3 +0,0 @@ -include LICENSE CHANGELOG.rst README.rst Makefile requirements.txt tox.ini -recursive-include tests *.py -recursive-include docs *.py *.rst *.bat Makefile diff --git a/openpype/modules/ftrack/python2_vendor/arrow/Makefile b/openpype/modules/ftrack/python2_vendor/arrow/Makefile deleted file mode 100644 index f294985dc6..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/Makefile +++ /dev/null @@ -1,44 +0,0 @@ -.PHONY: auto test docs clean - -auto: build38 - -build27: PYTHON_VER = python2.7 -build35: PYTHON_VER = python3.5 -build36: PYTHON_VER = python3.6 -build37: PYTHON_VER = python3.7 -build38: PYTHON_VER = python3.8 -build39: PYTHON_VER = python3.9 - -build27 build35 build36 build37 build38 build39: clean - virtualenv venv --python=$(PYTHON_VER) - . venv/bin/activate; \ - pip install -r requirements.txt; \ - pre-commit install - -test: - rm -f .coverage coverage.xml - . venv/bin/activate; pytest - -lint: - . venv/bin/activate; pre-commit run --all-files --show-diff-on-failure - -docs: - rm -rf docs/_build - . venv/bin/activate; cd docs; make html - -clean: clean-dist - rm -rf venv .pytest_cache ./**/__pycache__ - rm -f .coverage coverage.xml ./**/*.pyc - -clean-dist: - rm -rf dist build .egg .eggs arrow.egg-info - -build-dist: - . venv/bin/activate; \ - pip install -U setuptools twine wheel; \ - python setup.py sdist bdist_wheel - -upload-dist: - . venv/bin/activate; twine upload dist/* - -publish: test clean-dist build-dist upload-dist clean-dist diff --git a/openpype/modules/ftrack/python2_vendor/arrow/README.rst b/openpype/modules/ftrack/python2_vendor/arrow/README.rst deleted file mode 100644 index 69f6c50d81..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/README.rst +++ /dev/null @@ -1,133 +0,0 @@ -Arrow: Better dates & times for Python -====================================== - -.. start-inclusion-marker-do-not-remove - -.. image:: https://github.com/arrow-py/arrow/workflows/tests/badge.svg?branch=master - :alt: Build Status - :target: https://github.com/arrow-py/arrow/actions?query=workflow%3Atests+branch%3Amaster - -.. image:: https://codecov.io/gh/arrow-py/arrow/branch/master/graph/badge.svg - :alt: Coverage - :target: https://codecov.io/gh/arrow-py/arrow - -.. image:: https://img.shields.io/pypi/v/arrow.svg - :alt: PyPI Version - :target: https://pypi.python.org/pypi/arrow - -.. 
image:: https://img.shields.io/pypi/pyversions/arrow.svg
-    :alt: Supported Python Versions
-    :target: https://pypi.python.org/pypi/arrow
-
-.. image:: https://img.shields.io/pypi/l/arrow.svg
-    :alt: License
-    :target: https://pypi.python.org/pypi/arrow
-
-.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
-    :alt: Code Style: Black
-    :target: https://github.com/psf/black
-
-
-**Arrow** is a Python library that offers a sensible and human-friendly approach to creating, manipulating, formatting and converting dates, times and timestamps. It implements and updates the datetime type, plugging gaps in functionality and providing an intelligent module API that supports many common creation scenarios. Simply put, it helps you work with dates and times with fewer imports and a lot less code.
-
-Arrow is named after the `arrow of time <https://en.wikipedia.org/wiki/Arrow_of_time>`_ and is heavily inspired by `moment.js <https://github.com/moment/moment>`_ and `requests <https://github.com/psf/requests>`_.
-
-Why use Arrow over built-in modules?
-------------------------------------
-
-Python's standard library and some other low-level modules have near-complete date, time and timezone functionality, but don't work very well from a usability perspective:
-
-- Too many modules: datetime, time, calendar, dateutil, pytz and more
-- Too many types: date, time, datetime, tzinfo, timedelta, relativedelta, etc.
-- Timezones and timestamp conversions are verbose and unpleasant
-- Timezone naivety is the norm
-- Gaps in functionality: ISO 8601 parsing, timespans, humanization
-
-Features
---------
-
-- Fully-implemented, drop-in replacement for datetime
-- Supports Python 2.7, 3.5, 3.6, 3.7, 3.8 and 3.9
-- Timezone-aware and UTC by default
-- Provides super-simple creation options for many common input scenarios
-- :code:`shift` method with support for relative offsets, including weeks
-- Formats and parses strings automatically
-- Wide support for ISO 8601
-- Timezone conversion
-- Timestamp available as a property
-- Generates time spans, ranges, floors and ceilings for time frames ranging from microsecond to year
-- Humanizes and supports a growing list of contributed locales
-- Extensible for your own Arrow-derived types
-
-Quick Start
------------
-
-Installation
-~~~~~~~~~~~~
-
-To install Arrow, use `pip <https://pip.pypa.io>`_ or `pipenv <https://pipenv.pypa.io>`_:
-
-.. code-block:: console
-
-    $ pip install -U arrow
-
-Example Usage
-~~~~~~~~~~~~~
-
-.. code-block:: python
-
-    >>> import arrow
-    >>> arrow.get('2013-05-11T21:23:58.970460+07:00')
-    <Arrow [2013-05-11T21:23:58.970460+07:00]>
-
-    >>> utc = arrow.utcnow()
-    >>> utc
-    <Arrow [2013-05-11T21:23:58.970460+00:00]>
-
-    >>> utc = utc.shift(hours=-1)
-    >>> utc
-    <Arrow [2013-05-11T20:23:58.970460+00:00]>
-
-    >>> local = utc.to('US/Pacific')
-    >>> local
-    <Arrow [2013-05-11T13:23:58.970460-07:00]>
-
-    >>> local.timestamp
-    1368303838
-
-    >>> local.format()
-    '2013-05-11 13:23:58 -07:00'
-
-    >>> local.format('YYYY-MM-DD HH:mm:ss ZZ')
-    '2013-05-11 13:23:58 -07:00'
-
-    >>> local.humanize()
-    'an hour ago'
-
-    >>> local.humanize(locale='ko_kr')
-    '1시간 전'
-
-.. end-inclusion-marker-do-not-remove
-
-Documentation
--------------
-
-For full documentation, please visit `arrow.readthedocs.io <https://arrow.readthedocs.io>`_.
-
-Contributing
-------------
-
-Contributions are welcome for both code and localizations (adding and updating locales). Begin by gaining familiarity with the Arrow library and its features. Then, jump into contributing:
-
-#. Find an issue or feature to tackle on the `issue tracker <https://github.com/arrow-py/arrow/issues>`_. Issues marked with the `"good first issue" label <https://github.com/arrow-py/arrow/labels/good%20first%20issue>`_ may be a great place to start!
-#. Fork `this repository <https://github.com/arrow-py/arrow>`_ on GitHub and begin making changes in a branch.
-#. Add a few tests to ensure that the bug was fixed or the feature works as expected.
-#. 
Run the entire test suite and linting checks by running one of the following commands: :code:`tox` (if you have `tox `_ installed) **OR** :code:`make build38 && make test && make lint` (if you do not have Python 3.8 installed, replace :code:`build38` with the latest Python version on your system). -#. Submit a pull request and await feedback 😃. - -If you have any questions along the way, feel free to ask them `here `_. - -Support Arrow -------------- - -`Open Collective `_ is an online funding platform that provides tools to raise money and share your finances with full transparency. It is the platform of choice for individuals and companies to make one-time or recurring donations directly to the project. If you are interested in making a financial contribution, please visit the `Arrow collective `_. diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/Makefile b/openpype/modules/ftrack/python2_vendor/arrow/docs/Makefile deleted file mode 100644 index d4bb2cbb9e..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line, and also -# from the environment for the first two. -SPHINXOPTS ?= -SPHINXBUILD ?= sphinx-build -SOURCEDIR = . -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/conf.py b/openpype/modules/ftrack/python2_vendor/arrow/docs/conf.py deleted file mode 100644 index aaf3c50822..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/conf.py +++ /dev/null @@ -1,62 +0,0 @@ -# -*- coding: utf-8 -*- - -# -- Path setup -------------------------------------------------------------- - -import io -import os -import sys - -sys.path.insert(0, os.path.abspath("..")) - -about = {} -with io.open("../arrow/_version.py", "r", encoding="utf-8") as f: - exec(f.read(), about) - -# -- Project information ----------------------------------------------------- - -project = u"Arrow 🏹" -copyright = "2020, Chris Smith" -author = "Chris Smith" - -release = about["__version__"] - -# -- General configuration --------------------------------------------------- - -extensions = ["sphinx.ext.autodoc"] - -templates_path = [] - -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] - -master_doc = "index" -source_suffix = ".rst" -pygments_style = "sphinx" - -language = None - -# -- Options for HTML output ------------------------------------------------- - -html_theme = "alabaster" -html_theme_path = [] -html_static_path = [] - -html_show_sourcelink = False -html_show_sphinx = False -html_show_copyright = True - -# https://alabaster.readthedocs.io/en/latest/customization.html -html_theme_options = { - "description": "Arrow is a sensible and human-friendly approach to dates, times and timestamps.", - "github_user": "arrow-py", - "github_repo": "arrow", - "github_banner": True, - "show_related": False, - "show_powered_by": False, - "github_button": True, - "github_type": "star", - "github_count": "true", # must be a string -} - -html_sidebars = { - "**": ["about.html", "localtoc.html", "relations.html", 
"searchbox.html"] -} diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/index.rst b/openpype/modules/ftrack/python2_vendor/arrow/docs/index.rst deleted file mode 100644 index e2830b04f3..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/index.rst +++ /dev/null @@ -1,566 +0,0 @@ -Arrow: Better dates & times for Python -====================================== - -Release v\ |release| (`Installation`_) (`Changelog `_) - -.. include:: ../README.rst - :start-after: start-inclusion-marker-do-not-remove - :end-before: end-inclusion-marker-do-not-remove - -User's Guide ------------- - -Creation -~~~~~~~~ - -Get 'now' easily: - -.. code-block:: python - - >>> arrow.utcnow() - - - >>> arrow.now() - - - >>> arrow.now('US/Pacific') - - -Create from timestamps (:code:`int` or :code:`float`): - -.. code-block:: python - - >>> arrow.get(1367900664) - - - >>> arrow.get(1367900664.152325) - - -Use a naive or timezone-aware datetime, or flexibly specify a timezone: - -.. code-block:: python - - >>> arrow.get(datetime.utcnow()) - - - >>> arrow.get(datetime(2013, 5, 5), 'US/Pacific') - - - >>> from dateutil import tz - >>> arrow.get(datetime(2013, 5, 5), tz.gettz('US/Pacific')) - - - >>> arrow.get(datetime.now(tz.gettz('US/Pacific'))) - - -Parse from a string: - -.. code-block:: python - - >>> arrow.get('2013-05-05 12:30:45', 'YYYY-MM-DD HH:mm:ss') - - -Search a date in a string: - -.. code-block:: python - - >>> arrow.get('June was born in May 1980', 'MMMM YYYY') - - -Some ISO 8601 compliant strings are recognized and parsed without a format string: - - >>> arrow.get('2013-09-30T15:34:00.000-07:00') - - -Arrow objects can be instantiated directly too, with the same arguments as a datetime: - -.. code-block:: python - - >>> arrow.get(2013, 5, 5) - - - >>> arrow.Arrow(2013, 5, 5) - - -Properties -~~~~~~~~~~ - -Get a datetime or timestamp representation: - -.. code-block:: python - - >>> a = arrow.utcnow() - >>> a.datetime - datetime.datetime(2013, 5, 7, 4, 38, 15, 447644, tzinfo=tzutc()) - - >>> a.timestamp - 1367901495 - -Get a naive datetime, and tzinfo: - -.. code-block:: python - - >>> a.naive - datetime.datetime(2013, 5, 7, 4, 38, 15, 447644) - - >>> a.tzinfo - tzutc() - -Get any datetime value: - -.. code-block:: python - - >>> a.year - 2013 - -Call datetime functions that return properties: - -.. code-block:: python - - >>> a.date() - datetime.date(2013, 5, 7) - - >>> a.time() - datetime.time(4, 38, 15, 447644) - -Replace & Shift -~~~~~~~~~~~~~~~ - -Get a new :class:`Arrow ` object, with altered attributes, just as you would with a datetime: - -.. code-block:: python - - >>> arw = arrow.utcnow() - >>> arw - - - >>> arw.replace(hour=4, minute=40) - - -Or, get one with attributes shifted forward or backward: - -.. code-block:: python - - >>> arw.shift(weeks=+3) - - -Even replace the timezone without altering other attributes: - -.. code-block:: python - - >>> arw.replace(tzinfo='US/Pacific') - - -Move between the earlier and later moments of an ambiguous time: - -.. code-block:: python - - >>> paris_transition = arrow.Arrow(2019, 10, 27, 2, tzinfo="Europe/Paris", fold=0) - >>> paris_transition - - >>> paris_transition.ambiguous - True - >>> paris_transition.replace(fold=1) - - -Format -~~~~~~ - -.. code-block:: python - - >>> arrow.utcnow().format('YYYY-MM-DD HH:mm:ss ZZ') - '2013-05-07 05:23:16 -00:00' - -Convert -~~~~~~~ - -Convert from UTC to other timezones by name or tzinfo: - -.. 
code-block:: python - - >>> utc = arrow.utcnow() - >>> utc - - - >>> utc.to('US/Pacific') - - - >>> utc.to(tz.gettz('US/Pacific')) - - -Or using shorthand: - -.. code-block:: python - - >>> utc.to('local') - - - >>> utc.to('local').to('utc') - - - -Humanize -~~~~~~~~ - -Humanize relative to now: - -.. code-block:: python - - >>> past = arrow.utcnow().shift(hours=-1) - >>> past.humanize() - 'an hour ago' - -Or another Arrow, or datetime: - -.. code-block:: python - - >>> present = arrow.utcnow() - >>> future = present.shift(hours=2) - >>> future.humanize(present) - 'in 2 hours' - -Indicate time as relative or include only the distance - -.. code-block:: python - - >>> present = arrow.utcnow() - >>> future = present.shift(hours=2) - >>> future.humanize(present) - 'in 2 hours' - >>> future.humanize(present, only_distance=True) - '2 hours' - - -Indicate a specific time granularity (or multiple): - -.. code-block:: python - - >>> present = arrow.utcnow() - >>> future = present.shift(minutes=66) - >>> future.humanize(present, granularity="minute") - 'in 66 minutes' - >>> future.humanize(present, granularity=["hour", "minute"]) - 'in an hour and 6 minutes' - >>> present.humanize(future, granularity=["hour", "minute"]) - 'an hour and 6 minutes ago' - >>> future.humanize(present, only_distance=True, granularity=["hour", "minute"]) - 'an hour and 6 minutes' - -Support for a growing number of locales (see ``locales.py`` for supported languages): - -.. code-block:: python - - - >>> future = arrow.utcnow().shift(hours=1) - >>> future.humanize(a, locale='ru') - 'через 2 час(а,ов)' - - -Ranges & Spans -~~~~~~~~~~~~~~ - -Get the time span of any unit: - -.. code-block:: python - - >>> arrow.utcnow().span('hour') - (, ) - -Or just get the floor and ceiling: - -.. code-block:: python - - >>> arrow.utcnow().floor('hour') - - - >>> arrow.utcnow().ceil('hour') - - -You can also get a range of time spans: - -.. code-block:: python - - >>> start = datetime(2013, 5, 5, 12, 30) - >>> end = datetime(2013, 5, 5, 17, 15) - >>> for r in arrow.Arrow.span_range('hour', start, end): - ... print r - ... - (, ) - (, ) - (, ) - (, ) - (, ) - -Or just iterate over a range of time: - -.. code-block:: python - - >>> start = datetime(2013, 5, 5, 12, 30) - >>> end = datetime(2013, 5, 5, 17, 15) - >>> for r in arrow.Arrow.range('hour', start, end): - ... print repr(r) - ... - - - - - - -.. toctree:: - :maxdepth: 2 - -Factories -~~~~~~~~~ - -Use factories to harness Arrow's module API for a custom Arrow-derived type. First, derive your type: - -.. code-block:: python - - >>> class CustomArrow(arrow.Arrow): - ... - ... def days_till_xmas(self): - ... - ... xmas = arrow.Arrow(self.year, 12, 25) - ... - ... if self > xmas: - ... xmas = xmas.shift(years=1) - ... - ... return (xmas - self).days - - -Then get and use a factory for it: - -.. code-block:: python - - >>> factory = arrow.ArrowFactory(CustomArrow) - >>> custom = factory.utcnow() - >>> custom - >>> - - >>> custom.days_till_xmas() - >>> 211 - -Supported Tokens -~~~~~~~~~~~~~~~~ - -Use the following tokens for parsing and formatting. Note that they are **not** the same as the tokens for `strptime `_: - -+--------------------------------+--------------+-------------------------------------------+ -| |Token |Output | -+================================+==============+===========================================+ -|**Year** |YYYY |2000, 2001, 2002 ... 2012, 2013 | -+--------------------------------+--------------+-------------------------------------------+ -| |YY |00, 01, 02 ... 
12, 13 | -+--------------------------------+--------------+-------------------------------------------+ -|**Month** |MMMM |January, February, March ... [#t1]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |MMM |Jan, Feb, Mar ... [#t1]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |MM |01, 02, 03 ... 11, 12 | -+--------------------------------+--------------+-------------------------------------------+ -| |M |1, 2, 3 ... 11, 12 | -+--------------------------------+--------------+-------------------------------------------+ -|**Day of Year** |DDDD |001, 002, 003 ... 364, 365 | -+--------------------------------+--------------+-------------------------------------------+ -| |DDD |1, 2, 3 ... 364, 365 | -+--------------------------------+--------------+-------------------------------------------+ -|**Day of Month** |DD |01, 02, 03 ... 30, 31 | -+--------------------------------+--------------+-------------------------------------------+ -| |D |1, 2, 3 ... 30, 31 | -+--------------------------------+--------------+-------------------------------------------+ -| |Do |1st, 2nd, 3rd ... 30th, 31st | -+--------------------------------+--------------+-------------------------------------------+ -|**Day of Week** |dddd |Monday, Tuesday, Wednesday ... [#t2]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |ddd |Mon, Tue, Wed ... [#t2]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |d |1, 2, 3 ... 6, 7 | -+--------------------------------+--------------+-------------------------------------------+ -|**ISO week date** |W |2011-W05-4, 2019-W17 | -+--------------------------------+--------------+-------------------------------------------+ -|**Hour** |HH |00, 01, 02 ... 23, 24 | -+--------------------------------+--------------+-------------------------------------------+ -| |H |0, 1, 2 ... 23, 24 | -+--------------------------------+--------------+-------------------------------------------+ -| |hh |01, 02, 03 ... 11, 12 | -+--------------------------------+--------------+-------------------------------------------+ -| |h |1, 2, 3 ... 11, 12 | -+--------------------------------+--------------+-------------------------------------------+ -|**AM / PM** |A |AM, PM, am, pm [#t1]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |a |am, pm [#t1]_ | -+--------------------------------+--------------+-------------------------------------------+ -|**Minute** |mm |00, 01, 02 ... 58, 59 | -+--------------------------------+--------------+-------------------------------------------+ -| |m |0, 1, 2 ... 58, 59 | -+--------------------------------+--------------+-------------------------------------------+ -|**Second** |ss |00, 01, 02 ... 58, 59 | -+--------------------------------+--------------+-------------------------------------------+ -| |s |0, 1, 2 ... 58, 59 | -+--------------------------------+--------------+-------------------------------------------+ -|**Sub-second** |S... |0, 02, 003, 000006, 123123123123... [#t3]_ | -+--------------------------------+--------------+-------------------------------------------+ -|**Timezone** |ZZZ |Asia/Baku, Europe/Warsaw, GMT ... [#t4]_ | -+--------------------------------+--------------+-------------------------------------------+ -| |ZZ |-07:00, -06:00 ... 
+06:00, +07:00, +08, Z | -+--------------------------------+--------------+-------------------------------------------+ -| |Z |-0700, -0600 ... +0600, +0700, +08, Z | -+--------------------------------+--------------+-------------------------------------------+ -|**Seconds Timestamp** |X |1381685817, 1381685817.915482 ... [#t5]_ | -+--------------------------------+--------------+-------------------------------------------+ -|**ms or µs Timestamp** |x |1569980330813, 1569980330813221 | -+--------------------------------+--------------+-------------------------------------------+ - -.. rubric:: Footnotes - -.. [#t1] localization support for parsing and formatting -.. [#t2] localization support only for formatting -.. [#t3] the result is truncated to microseconds, with `half-to-even rounding `_. -.. [#t4] timezone names from `tz database `_ provided via dateutil package, note that abbreviations such as MST, PDT, BRST are unlikely to parse due to ambiguity. Use the full IANA zone name instead (Asia/Shanghai, Europe/London, America/Chicago etc). -.. [#t5] this token cannot be used for parsing timestamps out of natural language strings due to compatibility reasons - -Built-in Formats -++++++++++++++++ - -There are several formatting standards that are provided as built-in tokens. - -.. code-block:: python - - >>> arw = arrow.utcnow() - >>> arw.format(arrow.FORMAT_ATOM) - '2020-05-27 10:30:35+00:00' - >>> arw.format(arrow.FORMAT_COOKIE) - 'Wednesday, 27-May-2020 10:30:35 UTC' - >>> arw.format(arrow.FORMAT_RSS) - 'Wed, 27 May 2020 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC822) - 'Wed, 27 May 20 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC850) - 'Wednesday, 27-May-20 10:30:35 UTC' - >>> arw.format(arrow.FORMAT_RFC1036) - 'Wed, 27 May 20 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC1123) - 'Wed, 27 May 2020 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC2822) - 'Wed, 27 May 2020 10:30:35 +0000' - >>> arw.format(arrow.FORMAT_RFC3339) - '2020-05-27 10:30:35+00:00' - >>> arw.format(arrow.FORMAT_W3C) - '2020-05-27 10:30:35+00:00' - -Escaping Formats -~~~~~~~~~~~~~~~~ - -Tokens, phrases, and regular expressions in a format string can be escaped when parsing and formatting by enclosing them within square brackets. - -Tokens & Phrases -++++++++++++++++ - -Any `token `_ or phrase can be escaped as follows: - -.. code-block:: python - - >>> fmt = "YYYY-MM-DD h [h] m" - >>> arw = arrow.get("2018-03-09 8 h 40", fmt) - - >>> arw.format(fmt) - '2018-03-09 8 h 40' - - >>> fmt = "YYYY-MM-DD h [hello] m" - >>> arw = arrow.get("2018-03-09 8 hello 40", fmt) - - >>> arw.format(fmt) - '2018-03-09 8 hello 40' - - >>> fmt = "YYYY-MM-DD h [hello world] m" - >>> arw = arrow.get("2018-03-09 8 hello world 40", fmt) - - >>> arw.format(fmt) - '2018-03-09 8 hello world 40' - -This can be useful for parsing dates in different locales such as French, in which it is common to format time strings as "8 h 40" rather than "8:40". - -Regular Expressions -+++++++++++++++++++ - -You can also escape regular expressions by enclosing them within square brackets. In the following example, we are using the regular expression :code:`\s+` to match any number of whitespace characters that separate the tokens. This is useful if you do not know the number of spaces between tokens ahead of time (e.g. in log files). - -.. 
code-block:: python - - >>> fmt = r"ddd[\s+]MMM[\s+]DD[\s+]HH:mm:ss[\s+]YYYY" - >>> arrow.get("Mon Sep 08 16:41:45 2014", fmt) - - - >>> arrow.get("Mon \tSep 08 16:41:45 2014", fmt) - - - >>> arrow.get("Mon Sep 08 16:41:45 2014", fmt) - - -Punctuation -~~~~~~~~~~~ - -Date and time formats may be fenced on either side by one punctuation character from the following list: ``, . ; : ? ! " \` ' [ ] { } ( ) < >`` - -.. code-block:: python - - >>> arrow.get("Cool date: 2019-10-31T09:12:45.123456+04:30.", "YYYY-MM-DDTHH:mm:ss.SZZ") - - - >>> arrow.get("Tomorrow (2019-10-31) is Halloween!", "YYYY-MM-DD") - - - >>> arrow.get("Halloween is on 2019.10.31.", "YYYY.MM.DD") - - - >>> arrow.get("It's Halloween tomorrow (2019-10-31)!", "YYYY-MM-DD") - # Raises exception because there are multiple punctuation marks following the date - -Redundant Whitespace -~~~~~~~~~~~~~~~~~~~~ - -Redundant whitespace characters (spaces, tabs, and newlines) can be normalized automatically by passing in the ``normalize_whitespace`` flag to ``arrow.get``: - -.. code-block:: python - - >>> arrow.get('\t \n 2013-05-05T12:30:45.123456 \t \n', normalize_whitespace=True) - - - >>> arrow.get('2013-05-05 T \n 12:30:45\t123456', 'YYYY-MM-DD T HH:mm:ss S', normalize_whitespace=True) - - -API Guide ---------- - -arrow.arrow -~~~~~~~~~~~ - -.. automodule:: arrow.arrow - :members: - -arrow.factory -~~~~~~~~~~~~~ - -.. automodule:: arrow.factory - :members: - -arrow.api -~~~~~~~~~ - -.. automodule:: arrow.api - :members: - -arrow.locale -~~~~~~~~~~~~ - -.. automodule:: arrow.locales - :members: - :undoc-members: - -Release History ---------------- - -.. toctree:: - :maxdepth: 2 - - releases diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/make.bat b/openpype/modules/ftrack/python2_vendor/arrow/docs/make.bat deleted file mode 100644 index 922152e96a..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/make.bat +++ /dev/null @@ -1,35 +0,0 @@ -@ECHO OFF - -pushd %~dp0 - -REM Command file for Sphinx documentation - -if "%SPHINXBUILD%" == "" ( - set SPHINXBUILD=sphinx-build -) -set SOURCEDIR=. -set BUILDDIR=_build - -if "%1" == "" goto help - -%SPHINXBUILD% >NUL 2>NUL -if errorlevel 9009 ( - echo. - echo.The 'sphinx-build' command was not found. Make sure you have Sphinx - echo.installed, then set the SPHINXBUILD environment variable to point - echo.to the full path of the 'sphinx-build' executable. Alternatively you - echo.may add the Sphinx directory to PATH. - echo. - echo.If you don't have Sphinx installed, grab it from - echo.http://sphinx-doc.org/ - exit /b 1 -) - -%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% -goto end - -:help -%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% - -:end -popd diff --git a/openpype/modules/ftrack/python2_vendor/arrow/docs/releases.rst b/openpype/modules/ftrack/python2_vendor/arrow/docs/releases.rst deleted file mode 100644 index 22e1e59c8c..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/docs/releases.rst +++ /dev/null @@ -1,3 +0,0 @@ -.. _releases: - -.. 
include:: ../CHANGELOG.rst diff --git a/openpype/modules/ftrack/python2_vendor/arrow/requirements.txt b/openpype/modules/ftrack/python2_vendor/arrow/requirements.txt deleted file mode 100644 index df565d8384..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/requirements.txt +++ /dev/null @@ -1,14 +0,0 @@ -backports.functools_lru_cache==1.6.1; python_version == "2.7" -dateparser==0.7.* -pre-commit==1.21.*; python_version <= "3.5" -pre-commit==2.6.*; python_version >= "3.6" -pytest==4.6.*; python_version == "2.7" -pytest==6.0.*; python_version >= "3.5" -pytest-cov==2.10.* -pytest-mock==2.0.*; python_version == "2.7" -pytest-mock==3.2.*; python_version >= "3.5" -python-dateutil==2.8.* -pytz==2019.* -simplejson==3.17.* -sphinx==1.8.*; python_version == "2.7" -sphinx==3.2.*; python_version >= "3.5" diff --git a/openpype/modules/ftrack/python2_vendor/arrow/setup.cfg b/openpype/modules/ftrack/python2_vendor/arrow/setup.cfg deleted file mode 100644 index 2a9acf13da..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/setup.cfg +++ /dev/null @@ -1,2 +0,0 @@ -[bdist_wheel] -universal = 1 diff --git a/openpype/modules/ftrack/python2_vendor/arrow/setup.py b/openpype/modules/ftrack/python2_vendor/arrow/setup.py deleted file mode 100644 index dc4f0e77d5..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/setup.py +++ /dev/null @@ -1,50 +0,0 @@ -# -*- coding: utf-8 -*- -import io - -from setuptools import setup - -with io.open("README.rst", "r", encoding="utf-8") as f: - readme = f.read() - -about = {} -with io.open("arrow/_version.py", "r", encoding="utf-8") as f: - exec(f.read(), about) - -setup( - name="arrow", - version=about["__version__"], - description="Better dates & times for Python", - long_description=readme, - long_description_content_type="text/x-rst", - url="https://arrow.readthedocs.io", - author="Chris Smith", - author_email="crsmithdev@gmail.com", - license="Apache 2.0", - packages=["arrow"], - zip_safe=False, - python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*", - install_requires=[ - "python-dateutil>=2.7.0", - "backports.functools_lru_cache>=1.2.1;python_version=='2.7'", - ], - classifiers=[ - "Development Status :: 4 - Beta", - "Intended Audience :: Developers", - "License :: OSI Approved :: Apache Software License", - "Topic :: Software Development :: Libraries :: Python Modules", - "Programming Language :: Python :: 2", - "Programming Language :: Python :: 2.7", - "Programming Language :: Python :: 3", - "Programming Language :: Python :: 3.5", - "Programming Language :: Python :: 3.6", - "Programming Language :: Python :: 3.7", - "Programming Language :: Python :: 3.8", - "Programming Language :: Python :: 3.9", - ], - keywords="arrow date time datetime timestamp timezone humanize", - project_urls={ - "Repository": "https://github.com/arrow-py/arrow", - "Bug Reports": "https://github.com/arrow-py/arrow/issues", - "Documentation": "https://arrow.readthedocs.io", - }, -) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/conftest.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/conftest.py deleted file mode 100644 index 5bc8a4af2e..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/conftest.py +++ /dev/null @@ -1,76 +0,0 @@ -# -*- coding: utf-8 -*- -from datetime import datetime - -import pytest -from dateutil import tz as dateutil_tz - -from arrow import arrow, factory, formatter, locales, parser - - -@pytest.fixture(scope="class") -def time_utcnow(request): - request.cls.arrow = 
arrow.Arrow.utcnow() - - -@pytest.fixture(scope="class") -def time_2013_01_01(request): - request.cls.now = arrow.Arrow.utcnow() - request.cls.arrow = arrow.Arrow(2013, 1, 1) - request.cls.datetime = datetime(2013, 1, 1) - - -@pytest.fixture(scope="class") -def time_2013_02_03(request): - request.cls.arrow = arrow.Arrow(2013, 2, 3, 12, 30, 45, 1) - - -@pytest.fixture(scope="class") -def time_2013_02_15(request): - request.cls.datetime = datetime(2013, 2, 15, 3, 41, 22, 8923) - request.cls.arrow = arrow.Arrow.fromdatetime(request.cls.datetime) - - -@pytest.fixture(scope="class") -def time_1975_12_25(request): - request.cls.datetime = datetime( - 1975, 12, 25, 14, 15, 16, tzinfo=dateutil_tz.gettz("America/New_York") - ) - request.cls.arrow = arrow.Arrow.fromdatetime(request.cls.datetime) - - -@pytest.fixture(scope="class") -def arrow_formatter(request): - request.cls.formatter = formatter.DateTimeFormatter() - - -@pytest.fixture(scope="class") -def arrow_factory(request): - request.cls.factory = factory.ArrowFactory() - - -@pytest.fixture(scope="class") -def lang_locales(request): - request.cls.locales = locales._locales - - -@pytest.fixture(scope="class") -def lang_locale(request): - # As locale test classes are prefixed with Test, we are dynamically getting the locale by the test class name. - # TestEnglishLocale -> EnglishLocale - name = request.cls.__name__[4:] - request.cls.locale = locales.get_locale_by_class_name(name) - - -@pytest.fixture(scope="class") -def dt_parser(request): - request.cls.parser = parser.DateTimeParser() - - -@pytest.fixture(scope="class") -def dt_parser_regex(request): - request.cls.format_regex = parser.DateTimeParser._FORMAT_RE - - -@pytest.fixture(scope="class") -def tzinfo_parser(request): - request.cls.parser = parser.TzinfoParser() diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_api.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_api.py deleted file mode 100644 index 9b19a27cd9..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_api.py +++ /dev/null @@ -1,28 +0,0 @@ -# -*- coding: utf-8 -*- -import arrow - - -class TestModule: - def test_get(self, mocker): - mocker.patch("arrow.api._factory.get", return_value="result") - - assert arrow.api.get() == "result" - - def test_utcnow(self, mocker): - mocker.patch("arrow.api._factory.utcnow", return_value="utcnow") - - assert arrow.api.utcnow() == "utcnow" - - def test_now(self, mocker): - mocker.patch("arrow.api._factory.now", tz="tz", return_value="now") - - assert arrow.api.now("tz") == "now" - - def test_factory(self): - class MockCustomArrowClass(arrow.Arrow): - pass - - result = arrow.api.factory(MockCustomArrowClass) - - assert isinstance(result, arrow.factory.ArrowFactory) - assert isinstance(result.utcnow(), MockCustomArrowClass) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_arrow.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_arrow.py deleted file mode 100644 index b0bd20a5e3..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_arrow.py +++ /dev/null @@ -1,2150 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import absolute_import, unicode_literals - -import calendar -import pickle -import sys -import time -from datetime import date, datetime, timedelta - -import dateutil -import pytest -import pytz -import simplejson as json -from dateutil import tz -from dateutil.relativedelta import FR, MO, SA, SU, TH, TU, WE - -from arrow import arrow - -from .utils import assert_datetime_equality - - 
-class TestTestArrowInit: - def test_init_bad_input(self): - - with pytest.raises(TypeError): - arrow.Arrow(2013) - - with pytest.raises(TypeError): - arrow.Arrow(2013, 2) - - with pytest.raises(ValueError): - arrow.Arrow(2013, 2, 2, 12, 30, 45, 9999999) - - def test_init(self): - - result = arrow.Arrow(2013, 2, 2) - self.expected = datetime(2013, 2, 2, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow(2013, 2, 2, 12) - self.expected = datetime(2013, 2, 2, 12, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow(2013, 2, 2, 12, 30) - self.expected = datetime(2013, 2, 2, 12, 30, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow(2013, 2, 2, 12, 30, 45) - self.expected = datetime(2013, 2, 2, 12, 30, 45, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow(2013, 2, 2, 12, 30, 45, 999999) - self.expected = datetime(2013, 2, 2, 12, 30, 45, 999999, tzinfo=tz.tzutc()) - assert result._datetime == self.expected - - result = arrow.Arrow( - 2013, 2, 2, 12, 30, 45, 999999, tzinfo=tz.gettz("Europe/Paris") - ) - self.expected = datetime( - 2013, 2, 2, 12, 30, 45, 999999, tzinfo=tz.gettz("Europe/Paris") - ) - assert result._datetime == self.expected - - # regression tests for issue #626 - def test_init_pytz_timezone(self): - - result = arrow.Arrow( - 2013, 2, 2, 12, 30, 45, 999999, tzinfo=pytz.timezone("Europe/Paris") - ) - self.expected = datetime( - 2013, 2, 2, 12, 30, 45, 999999, tzinfo=tz.gettz("Europe/Paris") - ) - assert result._datetime == self.expected - assert_datetime_equality(result._datetime, self.expected, 1) - - def test_init_with_fold(self): - before = arrow.Arrow(2017, 10, 29, 2, 0, tzinfo="Europe/Stockholm") - after = arrow.Arrow(2017, 10, 29, 2, 0, tzinfo="Europe/Stockholm", fold=1) - - assert hasattr(before, "fold") - assert hasattr(after, "fold") - - # PEP-495 requires the comparisons below to be true - assert before == after - assert before.utcoffset() != after.utcoffset() - - -class TestTestArrowFactory: - def test_now(self): - - result = arrow.Arrow.now() - - assert_datetime_equality( - result._datetime, datetime.now().replace(tzinfo=tz.tzlocal()) - ) - - def test_utcnow(self): - - result = arrow.Arrow.utcnow() - - assert_datetime_equality( - result._datetime, datetime.utcnow().replace(tzinfo=tz.tzutc()) - ) - - assert result.fold == 0 - - def test_fromtimestamp(self): - - timestamp = time.time() - - result = arrow.Arrow.fromtimestamp(timestamp) - assert_datetime_equality( - result._datetime, datetime.now().replace(tzinfo=tz.tzlocal()) - ) - - result = arrow.Arrow.fromtimestamp(timestamp, tzinfo=tz.gettz("Europe/Paris")) - assert_datetime_equality( - result._datetime, - datetime.fromtimestamp(timestamp, tz.gettz("Europe/Paris")), - ) - - result = arrow.Arrow.fromtimestamp(timestamp, tzinfo="Europe/Paris") - assert_datetime_equality( - result._datetime, - datetime.fromtimestamp(timestamp, tz.gettz("Europe/Paris")), - ) - - with pytest.raises(ValueError): - arrow.Arrow.fromtimestamp("invalid timestamp") - - def test_utcfromtimestamp(self): - - timestamp = time.time() - - result = arrow.Arrow.utcfromtimestamp(timestamp) - assert_datetime_equality( - result._datetime, datetime.utcnow().replace(tzinfo=tz.tzutc()) - ) - - with pytest.raises(ValueError): - arrow.Arrow.utcfromtimestamp("invalid timestamp") - - def test_fromdatetime(self): - - dt = datetime(2013, 2, 3, 12, 30, 45, 1) - - result = arrow.Arrow.fromdatetime(dt) - - assert 
result._datetime == dt.replace(tzinfo=tz.tzutc()) - - def test_fromdatetime_dt_tzinfo(self): - - dt = datetime(2013, 2, 3, 12, 30, 45, 1, tzinfo=tz.gettz("US/Pacific")) - - result = arrow.Arrow.fromdatetime(dt) - - assert result._datetime == dt.replace(tzinfo=tz.gettz("US/Pacific")) - - def test_fromdatetime_tzinfo_arg(self): - - dt = datetime(2013, 2, 3, 12, 30, 45, 1) - - result = arrow.Arrow.fromdatetime(dt, tz.gettz("US/Pacific")) - - assert result._datetime == dt.replace(tzinfo=tz.gettz("US/Pacific")) - - def test_fromdate(self): - - dt = date(2013, 2, 3) - - result = arrow.Arrow.fromdate(dt, tz.gettz("US/Pacific")) - - assert result._datetime == datetime(2013, 2, 3, tzinfo=tz.gettz("US/Pacific")) - - def test_strptime(self): - - formatted = datetime(2013, 2, 3, 12, 30, 45).strftime("%Y-%m-%d %H:%M:%S") - - result = arrow.Arrow.strptime(formatted, "%Y-%m-%d %H:%M:%S") - assert result._datetime == datetime(2013, 2, 3, 12, 30, 45, tzinfo=tz.tzutc()) - - result = arrow.Arrow.strptime( - formatted, "%Y-%m-%d %H:%M:%S", tzinfo=tz.gettz("Europe/Paris") - ) - assert result._datetime == datetime( - 2013, 2, 3, 12, 30, 45, tzinfo=tz.gettz("Europe/Paris") - ) - - -@pytest.mark.usefixtures("time_2013_02_03") -class TestTestArrowRepresentation: - def test_repr(self): - - result = self.arrow.__repr__() - - assert result == "<Arrow [{}]>".format(self.arrow._datetime.isoformat()) - - def test_str(self): - - result = self.arrow.__str__() - - assert result == self.arrow._datetime.isoformat() - - def test_hash(self): - - result = self.arrow.__hash__() - - assert result == self.arrow._datetime.__hash__() - - def test_format(self): - - result = "{:YYYY-MM-DD}".format(self.arrow) - - assert result == "2013-02-03" - - def test_bare_format(self): - - result = self.arrow.format() - - assert result == "2013-02-03 12:30:45+00:00" - - def test_format_no_format_string(self): - - result = "{}".format(self.arrow) - - assert result == str(self.arrow) - - def test_clone(self): - - result = self.arrow.clone() - - assert result is not self.arrow - assert result._datetime == self.arrow._datetime - - -@pytest.mark.usefixtures("time_2013_01_01") -class TestArrowAttribute: - def test_getattr_base(self): - - with pytest.raises(AttributeError): - self.arrow.prop - - def test_getattr_week(self): - - assert self.arrow.week == 1 - - def test_getattr_quarter(self): - # start dates - q1 = arrow.Arrow(2013, 1, 1) - q2 = arrow.Arrow(2013, 4, 1) - q3 = arrow.Arrow(2013, 8, 1) - q4 = arrow.Arrow(2013, 10, 1) - assert q1.quarter == 1 - assert q2.quarter == 2 - assert q3.quarter == 3 - assert q4.quarter == 4 - - # end dates - q1 = arrow.Arrow(2013, 3, 31) - q2 = arrow.Arrow(2013, 6, 30) - q3 = arrow.Arrow(2013, 9, 30) - q4 = arrow.Arrow(2013, 12, 31) - assert q1.quarter == 1 - assert q2.quarter == 2 - assert q3.quarter == 3 - assert q4.quarter == 4 - - def test_getattr_dt_value(self): - - assert self.arrow.year == 2013 - - def test_tzinfo(self): - - self.arrow.tzinfo = tz.gettz("PST") - assert self.arrow.tzinfo == tz.gettz("PST") - - def test_naive(self): - - assert self.arrow.naive == self.arrow._datetime.replace(tzinfo=None) - - def test_timestamp(self): - - assert self.arrow.timestamp == calendar.timegm( - self.arrow._datetime.utctimetuple() - ) - - with pytest.warns(DeprecationWarning): - self.arrow.timestamp - - def test_int_timestamp(self): - - assert self.arrow.int_timestamp == calendar.timegm( - self.arrow._datetime.utctimetuple() - ) - - def test_float_timestamp(self): - - result = self.arrow.float_timestamp - self.arrow.timestamp - -
assert result == self.arrow.microsecond - - def test_getattr_fold(self): - - # UTC is always unambiguous - assert self.now.fold == 0 - - ambiguous_dt = arrow.Arrow( - 2017, 10, 29, 2, 0, tzinfo="Europe/Stockholm", fold=1 - ) - assert ambiguous_dt.fold == 1 - - with pytest.raises(AttributeError): - ambiguous_dt.fold = 0 - - def test_getattr_ambiguous(self): - - assert not self.now.ambiguous - - ambiguous_dt = arrow.Arrow(2017, 10, 29, 2, 0, tzinfo="Europe/Stockholm") - - assert ambiguous_dt.ambiguous - - def test_getattr_imaginary(self): - - assert not self.now.imaginary - - imaginary_dt = arrow.Arrow(2013, 3, 31, 2, 30, tzinfo="Europe/Paris") - - assert imaginary_dt.imaginary - - -@pytest.mark.usefixtures("time_utcnow") -class TestArrowComparison: - def test_eq(self): - - assert self.arrow == self.arrow - assert self.arrow == self.arrow.datetime - assert not (self.arrow == "abc") - - def test_ne(self): - - assert not (self.arrow != self.arrow) - assert not (self.arrow != self.arrow.datetime) - assert self.arrow != "abc" - - def test_gt(self): - - arrow_cmp = self.arrow.shift(minutes=1) - - assert not (self.arrow > self.arrow) - assert not (self.arrow > self.arrow.datetime) - - with pytest.raises(TypeError): - self.arrow > "abc" - - assert self.arrow < arrow_cmp - assert self.arrow < arrow_cmp.datetime - - def test_ge(self): - - with pytest.raises(TypeError): - self.arrow >= "abc" - - assert self.arrow >= self.arrow - assert self.arrow >= self.arrow.datetime - - def test_lt(self): - - arrow_cmp = self.arrow.shift(minutes=1) - - assert not (self.arrow < self.arrow) - assert not (self.arrow < self.arrow.datetime) - - with pytest.raises(TypeError): - self.arrow < "abc" - - assert self.arrow < arrow_cmp - assert self.arrow < arrow_cmp.datetime - - def test_le(self): - - with pytest.raises(TypeError): - self.arrow <= "abc" - - assert self.arrow <= self.arrow - assert self.arrow <= self.arrow.datetime - - -@pytest.mark.usefixtures("time_2013_01_01") -class TestArrowMath: - def test_add_timedelta(self): - - result = self.arrow.__add__(timedelta(days=1)) - - assert result._datetime == datetime(2013, 1, 2, tzinfo=tz.tzutc()) - - def test_add_other(self): - - with pytest.raises(TypeError): - self.arrow + 1 - - def test_radd(self): - - result = self.arrow.__radd__(timedelta(days=1)) - - assert result._datetime == datetime(2013, 1, 2, tzinfo=tz.tzutc()) - - def test_sub_timedelta(self): - - result = self.arrow.__sub__(timedelta(days=1)) - - assert result._datetime == datetime(2012, 12, 31, tzinfo=tz.tzutc()) - - def test_sub_datetime(self): - - result = self.arrow.__sub__(datetime(2012, 12, 21, tzinfo=tz.tzutc())) - - assert result == timedelta(days=11) - - def test_sub_arrow(self): - - result = self.arrow.__sub__(arrow.Arrow(2012, 12, 21, tzinfo=tz.tzutc())) - - assert result == timedelta(days=11) - - def test_sub_other(self): - - with pytest.raises(TypeError): - self.arrow - object() - - def test_rsub_datetime(self): - - result = self.arrow.__rsub__(datetime(2012, 12, 21, tzinfo=tz.tzutc())) - - assert result == timedelta(days=-11) - - def test_rsub_other(self): - - with pytest.raises(TypeError): - timedelta(days=1) - self.arrow - - -@pytest.mark.usefixtures("time_utcnow") -class TestArrowDatetimeInterface: - def test_date(self): - - result = self.arrow.date() - - assert result == self.arrow._datetime.date() - - def test_time(self): - - result = self.arrow.time() - - assert result == self.arrow._datetime.time() - - def test_timetz(self): - - result = self.arrow.timetz() - - assert result == 
self.arrow._datetime.timetz() - - def test_astimezone(self): - - other_tz = tz.gettz("US/Pacific") - - result = self.arrow.astimezone(other_tz) - - assert result == self.arrow._datetime.astimezone(other_tz) - - def test_utcoffset(self): - - result = self.arrow.utcoffset() - - assert result == self.arrow._datetime.utcoffset() - - def test_dst(self): - - result = self.arrow.dst() - - assert result == self.arrow._datetime.dst() - - def test_timetuple(self): - - result = self.arrow.timetuple() - - assert result == self.arrow._datetime.timetuple() - - def test_utctimetuple(self): - - result = self.arrow.utctimetuple() - - assert result == self.arrow._datetime.utctimetuple() - - def test_toordinal(self): - - result = self.arrow.toordinal() - - assert result == self.arrow._datetime.toordinal() - - def test_weekday(self): - - result = self.arrow.weekday() - - assert result == self.arrow._datetime.weekday() - - def test_isoweekday(self): - - result = self.arrow.isoweekday() - - assert result == self.arrow._datetime.isoweekday() - - def test_isocalendar(self): - - result = self.arrow.isocalendar() - - assert result == self.arrow._datetime.isocalendar() - - def test_isoformat(self): - - result = self.arrow.isoformat() - - assert result == self.arrow._datetime.isoformat() - - def test_simplejson(self): - - result = json.dumps({"v": self.arrow.for_json()}, for_json=True) - - assert json.loads(result)["v"] == self.arrow._datetime.isoformat() - - def test_ctime(self): - - result = self.arrow.ctime() - - assert result == self.arrow._datetime.ctime() - - def test_strftime(self): - - result = self.arrow.strftime("%Y") - - assert result == self.arrow._datetime.strftime("%Y") - - -class TestArrowFalsePositiveDst: - """These tests relate to issues #376 and #551. - The key points in both issues are that arrow will assign a UTC timezone if none is provided and - .to() will change other attributes to be correct whereas .replace() only changes the specified attribute. 
- - Issue 376 - >>> arrow.get('2016-11-06').to('America/New_York').ceil('day') - < Arrow [2016-11-05T23:59:59.999999-04:00] > - - Issue 551 - >>> just_before = arrow.get('2018-11-04T01:59:59.999999') - >>> just_before - 2018-11-04T01:59:59.999999+00:00 - >>> just_after = just_before.shift(microseconds=1) - >>> just_after - 2018-11-04T02:00:00+00:00 - >>> just_before_eastern = just_before.replace(tzinfo='US/Eastern') - >>> just_before_eastern - 2018-11-04T01:59:59.999999-04:00 - >>> just_after_eastern = just_after.replace(tzinfo='US/Eastern') - >>> just_after_eastern - 2018-11-04T02:00:00-05:00 - """ - - def test_dst(self): - self.before_1 = arrow.Arrow( - 2016, 11, 6, 3, 59, tzinfo=tz.gettz("America/New_York") - ) - self.before_2 = arrow.Arrow(2016, 11, 6, tzinfo=tz.gettz("America/New_York")) - self.after_1 = arrow.Arrow(2016, 11, 6, 4, tzinfo=tz.gettz("America/New_York")) - self.after_2 = arrow.Arrow( - 2016, 11, 6, 23, 59, tzinfo=tz.gettz("America/New_York") - ) - self.before_3 = arrow.Arrow( - 2018, 11, 4, 3, 59, tzinfo=tz.gettz("America/New_York") - ) - self.before_4 = arrow.Arrow(2018, 11, 4, tzinfo=tz.gettz("America/New_York")) - self.after_3 = arrow.Arrow(2018, 11, 4, 4, tzinfo=tz.gettz("America/New_York")) - self.after_4 = arrow.Arrow( - 2018, 11, 4, 23, 59, tzinfo=tz.gettz("America/New_York") - ) - assert self.before_1.day == self.before_2.day - assert self.after_1.day == self.after_2.day - assert self.before_3.day == self.before_4.day - assert self.after_3.day == self.after_4.day - - -class TestArrowConversion: - def test_to(self): - - dt_from = datetime.now() - arrow_from = arrow.Arrow.fromdatetime(dt_from, tz.gettz("US/Pacific")) - - self.expected = dt_from.replace(tzinfo=tz.gettz("US/Pacific")).astimezone( - tz.tzutc() - ) - - assert arrow_from.to("UTC").datetime == self.expected - assert arrow_from.to(tz.tzutc()).datetime == self.expected - - # issue #368 - def test_to_pacific_then_utc(self): - result = arrow.Arrow(2018, 11, 4, 1, tzinfo="-08:00").to("US/Pacific").to("UTC") - assert result == arrow.Arrow(2018, 11, 4, 9) - - # issue #368 - def test_to_amsterdam_then_utc(self): - result = arrow.Arrow(2016, 10, 30).to("Europe/Amsterdam") - assert result.utcoffset() == timedelta(seconds=7200) - - # regression test for #690 - def test_to_israel_same_offset(self): - - result = arrow.Arrow(2019, 10, 27, 2, 21, 1, tzinfo="+03:00").to("Israel") - expected = arrow.Arrow(2019, 10, 27, 1, 21, 1, tzinfo="Israel") - - assert result == expected - assert result.utcoffset() != expected.utcoffset() - - # issue 315 - def test_anchorage_dst(self): - before = arrow.Arrow(2016, 3, 13, 1, 59, tzinfo="America/Anchorage") - after = arrow.Arrow(2016, 3, 13, 2, 1, tzinfo="America/Anchorage") - - assert before.utcoffset() != after.utcoffset() - - # issue 476 - def test_chicago_fall(self): - - result = arrow.Arrow(2017, 11, 5, 2, 1, tzinfo="-05:00").to("America/Chicago") - expected = arrow.Arrow(2017, 11, 5, 1, 1, tzinfo="America/Chicago") - - assert result == expected - assert result.utcoffset() != expected.utcoffset() - - def test_toronto_gap(self): - - before = arrow.Arrow(2011, 3, 13, 6, 30, tzinfo="UTC").to("America/Toronto") - after = arrow.Arrow(2011, 3, 13, 7, 30, tzinfo="UTC").to("America/Toronto") - - assert before.datetime.replace(tzinfo=None) == datetime(2011, 3, 13, 1, 30) - assert after.datetime.replace(tzinfo=None) == datetime(2011, 3, 13, 3, 30) - - assert before.utcoffset() != after.utcoffset() - - def test_sydney_gap(self): - - before = arrow.Arrow(2012, 10, 6, 15, 30, 
tzinfo="UTC").to("Australia/Sydney") - after = arrow.Arrow(2012, 10, 6, 16, 30, tzinfo="UTC").to("Australia/Sydney") - - assert before.datetime.replace(tzinfo=None) == datetime(2012, 10, 7, 1, 30) - assert after.datetime.replace(tzinfo=None) == datetime(2012, 10, 7, 3, 30) - - assert before.utcoffset() != after.utcoffset() - - -class TestArrowPickling: - def test_pickle_and_unpickle(self): - - dt = arrow.Arrow.utcnow() - - pickled = pickle.dumps(dt) - - unpickled = pickle.loads(pickled) - - assert unpickled == dt - - -class TestArrowReplace: - def test_not_attr(self): - - with pytest.raises(AttributeError): - arrow.Arrow.utcnow().replace(abc=1) - - def test_replace(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - assert arw.replace(year=2012) == arrow.Arrow(2012, 5, 5, 12, 30, 45) - assert arw.replace(month=1) == arrow.Arrow(2013, 1, 5, 12, 30, 45) - assert arw.replace(day=1) == arrow.Arrow(2013, 5, 1, 12, 30, 45) - assert arw.replace(hour=1) == arrow.Arrow(2013, 5, 5, 1, 30, 45) - assert arw.replace(minute=1) == arrow.Arrow(2013, 5, 5, 12, 1, 45) - assert arw.replace(second=1) == arrow.Arrow(2013, 5, 5, 12, 30, 1) - - def test_replace_tzinfo(self): - - arw = arrow.Arrow.utcnow().to("US/Eastern") - - result = arw.replace(tzinfo=tz.gettz("US/Pacific")) - - assert result == arw.datetime.replace(tzinfo=tz.gettz("US/Pacific")) - - def test_replace_fold(self): - - before = arrow.Arrow(2017, 11, 5, 1, tzinfo="America/New_York") - after = before.replace(fold=1) - - assert before.fold == 0 - assert after.fold == 1 - assert before == after - assert before.utcoffset() != after.utcoffset() - - def test_replace_fold_and_other(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - assert arw.replace(fold=1, minute=50) == arrow.Arrow(2013, 5, 5, 12, 50, 45) - assert arw.replace(minute=50, fold=1) == arrow.Arrow(2013, 5, 5, 12, 50, 45) - - def test_replace_week(self): - - with pytest.raises(AttributeError): - arrow.Arrow.utcnow().replace(week=1) - - def test_replace_quarter(self): - - with pytest.raises(AttributeError): - arrow.Arrow.utcnow().replace(quarter=1) - - def test_replace_quarter_and_fold(self): - with pytest.raises(AttributeError): - arrow.utcnow().replace(fold=1, quarter=1) - - with pytest.raises(AttributeError): - arrow.utcnow().replace(quarter=1, fold=1) - - def test_replace_other_kwargs(self): - - with pytest.raises(AttributeError): - arrow.utcnow().replace(abc="def") - - -class TestArrowShift: - def test_not_attr(self): - - now = arrow.Arrow.utcnow() - - with pytest.raises(AttributeError): - now.shift(abc=1) - - with pytest.raises(AttributeError): - now.shift(week=1) - - def test_shift(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - assert arw.shift(years=1) == arrow.Arrow(2014, 5, 5, 12, 30, 45) - assert arw.shift(quarters=1) == arrow.Arrow(2013, 8, 5, 12, 30, 45) - assert arw.shift(quarters=1, months=1) == arrow.Arrow(2013, 9, 5, 12, 30, 45) - assert arw.shift(months=1) == arrow.Arrow(2013, 6, 5, 12, 30, 45) - assert arw.shift(weeks=1) == arrow.Arrow(2013, 5, 12, 12, 30, 45) - assert arw.shift(days=1) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(hours=1) == arrow.Arrow(2013, 5, 5, 13, 30, 45) - assert arw.shift(minutes=1) == arrow.Arrow(2013, 5, 5, 12, 31, 45) - assert arw.shift(seconds=1) == arrow.Arrow(2013, 5, 5, 12, 30, 46) - assert arw.shift(microseconds=1) == arrow.Arrow(2013, 5, 5, 12, 30, 45, 1) - - # Remember: Python's weekday 0 is Monday - assert arw.shift(weekday=0) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(weekday=1) == 
arrow.Arrow(2013, 5, 7, 12, 30, 45) - assert arw.shift(weekday=2) == arrow.Arrow(2013, 5, 8, 12, 30, 45) - assert arw.shift(weekday=3) == arrow.Arrow(2013, 5, 9, 12, 30, 45) - assert arw.shift(weekday=4) == arrow.Arrow(2013, 5, 10, 12, 30, 45) - assert arw.shift(weekday=5) == arrow.Arrow(2013, 5, 11, 12, 30, 45) - assert arw.shift(weekday=6) == arw - - with pytest.raises(IndexError): - arw.shift(weekday=7) - - # Use dateutil.relativedelta's convenient day instances - assert arw.shift(weekday=MO) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(weekday=MO(0)) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(weekday=MO(1)) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(weekday=MO(2)) == arrow.Arrow(2013, 5, 13, 12, 30, 45) - assert arw.shift(weekday=TU) == arrow.Arrow(2013, 5, 7, 12, 30, 45) - assert arw.shift(weekday=TU(0)) == arrow.Arrow(2013, 5, 7, 12, 30, 45) - assert arw.shift(weekday=TU(1)) == arrow.Arrow(2013, 5, 7, 12, 30, 45) - assert arw.shift(weekday=TU(2)) == arrow.Arrow(2013, 5, 14, 12, 30, 45) - assert arw.shift(weekday=WE) == arrow.Arrow(2013, 5, 8, 12, 30, 45) - assert arw.shift(weekday=WE(0)) == arrow.Arrow(2013, 5, 8, 12, 30, 45) - assert arw.shift(weekday=WE(1)) == arrow.Arrow(2013, 5, 8, 12, 30, 45) - assert arw.shift(weekday=WE(2)) == arrow.Arrow(2013, 5, 15, 12, 30, 45) - assert arw.shift(weekday=TH) == arrow.Arrow(2013, 5, 9, 12, 30, 45) - assert arw.shift(weekday=TH(0)) == arrow.Arrow(2013, 5, 9, 12, 30, 45) - assert arw.shift(weekday=TH(1)) == arrow.Arrow(2013, 5, 9, 12, 30, 45) - assert arw.shift(weekday=TH(2)) == arrow.Arrow(2013, 5, 16, 12, 30, 45) - assert arw.shift(weekday=FR) == arrow.Arrow(2013, 5, 10, 12, 30, 45) - assert arw.shift(weekday=FR(0)) == arrow.Arrow(2013, 5, 10, 12, 30, 45) - assert arw.shift(weekday=FR(1)) == arrow.Arrow(2013, 5, 10, 12, 30, 45) - assert arw.shift(weekday=FR(2)) == arrow.Arrow(2013, 5, 17, 12, 30, 45) - assert arw.shift(weekday=SA) == arrow.Arrow(2013, 5, 11, 12, 30, 45) - assert arw.shift(weekday=SA(0)) == arrow.Arrow(2013, 5, 11, 12, 30, 45) - assert arw.shift(weekday=SA(1)) == arrow.Arrow(2013, 5, 11, 12, 30, 45) - assert arw.shift(weekday=SA(2)) == arrow.Arrow(2013, 5, 18, 12, 30, 45) - assert arw.shift(weekday=SU) == arw - assert arw.shift(weekday=SU(0)) == arw - assert arw.shift(weekday=SU(1)) == arw - assert arw.shift(weekday=SU(2)) == arrow.Arrow(2013, 5, 12, 12, 30, 45) - - def test_shift_negative(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - assert arw.shift(years=-1) == arrow.Arrow(2012, 5, 5, 12, 30, 45) - assert arw.shift(quarters=-1) == arrow.Arrow(2013, 2, 5, 12, 30, 45) - assert arw.shift(quarters=-1, months=-1) == arrow.Arrow(2013, 1, 5, 12, 30, 45) - assert arw.shift(months=-1) == arrow.Arrow(2013, 4, 5, 12, 30, 45) - assert arw.shift(weeks=-1) == arrow.Arrow(2013, 4, 28, 12, 30, 45) - assert arw.shift(days=-1) == arrow.Arrow(2013, 5, 4, 12, 30, 45) - assert arw.shift(hours=-1) == arrow.Arrow(2013, 5, 5, 11, 30, 45) - assert arw.shift(minutes=-1) == arrow.Arrow(2013, 5, 5, 12, 29, 45) - assert arw.shift(seconds=-1) == arrow.Arrow(2013, 5, 5, 12, 30, 44) - assert arw.shift(microseconds=-1) == arrow.Arrow(2013, 5, 5, 12, 30, 44, 999999) - - # Not sure how practical these negative weekdays are - assert arw.shift(weekday=-1) == arw.shift(weekday=SU) - assert arw.shift(weekday=-2) == arw.shift(weekday=SA) - assert arw.shift(weekday=-3) == arw.shift(weekday=FR) - assert arw.shift(weekday=-4) == arw.shift(weekday=TH) - assert arw.shift(weekday=-5) == arw.shift(weekday=WE) - 
assert arw.shift(weekday=-6) == arw.shift(weekday=TU) - assert arw.shift(weekday=-7) == arw.shift(weekday=MO) - - with pytest.raises(IndexError): - arw.shift(weekday=-8) - - assert arw.shift(weekday=MO(-1)) == arrow.Arrow(2013, 4, 29, 12, 30, 45) - assert arw.shift(weekday=TU(-1)) == arrow.Arrow(2013, 4, 30, 12, 30, 45) - assert arw.shift(weekday=WE(-1)) == arrow.Arrow(2013, 5, 1, 12, 30, 45) - assert arw.shift(weekday=TH(-1)) == arrow.Arrow(2013, 5, 2, 12, 30, 45) - assert arw.shift(weekday=FR(-1)) == arrow.Arrow(2013, 5, 3, 12, 30, 45) - assert arw.shift(weekday=SA(-1)) == arrow.Arrow(2013, 5, 4, 12, 30, 45) - assert arw.shift(weekday=SU(-1)) == arw - assert arw.shift(weekday=SU(-2)) == arrow.Arrow(2013, 4, 28, 12, 30, 45) - - def test_shift_quarters_bug(self): - - arw = arrow.Arrow(2013, 5, 5, 12, 30, 45) - - # The value of the last-read argument was used instead of the ``quarters`` argument. - # Recall that the keyword argument dict, like all dicts, is unordered, so only certain - # combinations of arguments would exhibit this. - assert arw.shift(quarters=0, years=1) == arrow.Arrow(2014, 5, 5, 12, 30, 45) - assert arw.shift(quarters=0, months=1) == arrow.Arrow(2013, 6, 5, 12, 30, 45) - assert arw.shift(quarters=0, weeks=1) == arrow.Arrow(2013, 5, 12, 12, 30, 45) - assert arw.shift(quarters=0, days=1) == arrow.Arrow(2013, 5, 6, 12, 30, 45) - assert arw.shift(quarters=0, hours=1) == arrow.Arrow(2013, 5, 5, 13, 30, 45) - assert arw.shift(quarters=0, minutes=1) == arrow.Arrow(2013, 5, 5, 12, 31, 45) - assert arw.shift(quarters=0, seconds=1) == arrow.Arrow(2013, 5, 5, 12, 30, 46) - assert arw.shift(quarters=0, microseconds=1) == arrow.Arrow( - 2013, 5, 5, 12, 30, 45, 1 - ) - - def test_shift_positive_imaginary(self): - - # Avoid shifting into imaginary datetimes, take into account DST and other timezone changes. - - new_york = arrow.Arrow(2017, 3, 12, 1, 30, tzinfo="America/New_York") - assert new_york.shift(hours=+1) == arrow.Arrow( - 2017, 3, 12, 3, 30, tzinfo="America/New_York" - ) - - # pendulum example - paris = arrow.Arrow(2013, 3, 31, 1, 50, tzinfo="Europe/Paris") - assert paris.shift(minutes=+20) == arrow.Arrow( - 2013, 3, 31, 3, 10, tzinfo="Europe/Paris" - ) - - canberra = arrow.Arrow(2018, 10, 7, 1, 30, tzinfo="Australia/Canberra") - assert canberra.shift(hours=+1) == arrow.Arrow( - 2018, 10, 7, 3, 30, tzinfo="Australia/Canberra" - ) - - kiev = arrow.Arrow(2018, 3, 25, 2, 30, tzinfo="Europe/Kiev") - assert kiev.shift(hours=+1) == arrow.Arrow( - 2018, 3, 25, 4, 30, tzinfo="Europe/Kiev" - ) - - # Edge case, the entire day of 2011-12-30 is imaginary in this zone! 
- apia = arrow.Arrow(2011, 12, 29, 23, tzinfo="Pacific/Apia") - assert apia.shift(hours=+2) == arrow.Arrow( - 2011, 12, 31, 1, tzinfo="Pacific/Apia" - ) - - def test_shift_negative_imaginary(self): - - new_york = arrow.Arrow(2011, 3, 13, 3, 30, tzinfo="America/New_York") - assert new_york.shift(hours=-1) == arrow.Arrow( - 2011, 3, 13, 3, 30, tzinfo="America/New_York" - ) - assert new_york.shift(hours=-2) == arrow.Arrow( - 2011, 3, 13, 1, 30, tzinfo="America/New_York" - ) - - london = arrow.Arrow(2019, 3, 31, 2, tzinfo="Europe/London") - assert london.shift(hours=-1) == arrow.Arrow( - 2019, 3, 31, 2, tzinfo="Europe/London" - ) - assert london.shift(hours=-2) == arrow.Arrow( - 2019, 3, 31, 0, tzinfo="Europe/London" - ) - - # edge case, crossing the international dateline - apia = arrow.Arrow(2011, 12, 31, 1, tzinfo="Pacific/Apia") - assert apia.shift(hours=-2) == arrow.Arrow( - 2011, 12, 31, 23, tzinfo="Pacific/Apia" - ) - - @pytest.mark.skipif( - dateutil.__version__ < "2.7.1", reason="old tz database (2018d needed)" - ) - def test_shift_kiritimati(self): - # corrected 2018d tz database release, will fail in earlier versions - - kiritimati = arrow.Arrow(1994, 12, 30, 12, 30, tzinfo="Pacific/Kiritimati") - assert kiritimati.shift(days=+1) == arrow.Arrow( - 1995, 1, 1, 12, 30, tzinfo="Pacific/Kiritimati" - ) - - @pytest.mark.skipif( - sys.version_info < (3, 6), reason="unsupported before python 3.6" - ) - def shift_imaginary_seconds(self): - # offset has a seconds component - monrovia = arrow.Arrow(1972, 1, 6, 23, tzinfo="Africa/Monrovia") - assert monrovia.shift(hours=+1, minutes=+30) == arrow.Arrow( - 1972, 1, 7, 1, 14, 30, tzinfo="Africa/Monrovia" - ) - - -class TestArrowRange: - def test_year(self): - - result = list( - arrow.Arrow.range( - "year", datetime(2013, 1, 2, 3, 4, 5), datetime(2016, 4, 5, 6, 7, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2014, 1, 2, 3, 4, 5), - arrow.Arrow(2015, 1, 2, 3, 4, 5), - arrow.Arrow(2016, 1, 2, 3, 4, 5), - ] - - def test_quarter(self): - - result = list( - arrow.Arrow.range( - "quarter", datetime(2013, 2, 3, 4, 5, 6), datetime(2013, 5, 6, 7, 8, 9) - ) - ) - - assert result == [ - arrow.Arrow(2013, 2, 3, 4, 5, 6), - arrow.Arrow(2013, 5, 3, 4, 5, 6), - ] - - def test_month(self): - - result = list( - arrow.Arrow.range( - "month", datetime(2013, 2, 3, 4, 5, 6), datetime(2013, 5, 6, 7, 8, 9) - ) - ) - - assert result == [ - arrow.Arrow(2013, 2, 3, 4, 5, 6), - arrow.Arrow(2013, 3, 3, 4, 5, 6), - arrow.Arrow(2013, 4, 3, 4, 5, 6), - arrow.Arrow(2013, 5, 3, 4, 5, 6), - ] - - def test_week(self): - - result = list( - arrow.Arrow.range( - "week", datetime(2013, 9, 1, 2, 3, 4), datetime(2013, 10, 1, 2, 3, 4) - ) - ) - - assert result == [ - arrow.Arrow(2013, 9, 1, 2, 3, 4), - arrow.Arrow(2013, 9, 8, 2, 3, 4), - arrow.Arrow(2013, 9, 15, 2, 3, 4), - arrow.Arrow(2013, 9, 22, 2, 3, 4), - arrow.Arrow(2013, 9, 29, 2, 3, 4), - ] - - def test_day(self): - - result = list( - arrow.Arrow.range( - "day", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 5, 6, 7, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 3, 3, 4, 5), - arrow.Arrow(2013, 1, 4, 3, 4, 5), - arrow.Arrow(2013, 1, 5, 3, 4, 5), - ] - - def test_hour(self): - - result = list( - arrow.Arrow.range( - "hour", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 2, 6, 7, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 2, 4, 4, 5), - arrow.Arrow(2013, 1, 2, 5, 4, 5), - arrow.Arrow(2013, 1, 2, 
6, 4, 5), - ] - - result = list( - arrow.Arrow.range( - "hour", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 2, 3, 4, 5) - ) - ) - - assert result == [arrow.Arrow(2013, 1, 2, 3, 4, 5)] - - def test_minute(self): - - result = list( - arrow.Arrow.range( - "minute", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 2, 3, 7, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 2, 3, 5, 5), - arrow.Arrow(2013, 1, 2, 3, 6, 5), - arrow.Arrow(2013, 1, 2, 3, 7, 5), - ] - - def test_second(self): - - result = list( - arrow.Arrow.range( - "second", datetime(2013, 1, 2, 3, 4, 5), datetime(2013, 1, 2, 3, 4, 8) - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 2, 3, 4, 6), - arrow.Arrow(2013, 1, 2, 3, 4, 7), - arrow.Arrow(2013, 1, 2, 3, 4, 8), - ] - - def test_arrow(self): - - result = list( - arrow.Arrow.range( - "day", - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 5, 6, 7, 8), - ) - ) - - assert result == [ - arrow.Arrow(2013, 1, 2, 3, 4, 5), - arrow.Arrow(2013, 1, 3, 3, 4, 5), - arrow.Arrow(2013, 1, 4, 3, 4, 5), - arrow.Arrow(2013, 1, 5, 3, 4, 5), - ] - - def test_naive_tz(self): - - result = arrow.Arrow.range( - "year", datetime(2013, 1, 2, 3), datetime(2016, 4, 5, 6), "US/Pacific" - ) - - for r in result: - assert r.tzinfo == tz.gettz("US/Pacific") - - def test_aware_same_tz(self): - - result = arrow.Arrow.range( - "day", - arrow.Arrow(2013, 1, 1, tzinfo=tz.gettz("US/Pacific")), - arrow.Arrow(2013, 1, 3, tzinfo=tz.gettz("US/Pacific")), - ) - - for r in result: - assert r.tzinfo == tz.gettz("US/Pacific") - - def test_aware_different_tz(self): - - result = arrow.Arrow.range( - "day", - datetime(2013, 1, 1, tzinfo=tz.gettz("US/Eastern")), - datetime(2013, 1, 3, tzinfo=tz.gettz("US/Pacific")), - ) - - for r in result: - assert r.tzinfo == tz.gettz("US/Eastern") - - def test_aware_tz(self): - - result = arrow.Arrow.range( - "day", - datetime(2013, 1, 1, tzinfo=tz.gettz("US/Eastern")), - datetime(2013, 1, 3, tzinfo=tz.gettz("US/Pacific")), - tz=tz.gettz("US/Central"), - ) - - for r in result: - assert r.tzinfo == tz.gettz("US/Central") - - def test_imaginary(self): - # issue #72, avoid duplication in utc column - - before = arrow.Arrow(2018, 3, 10, 23, tzinfo="US/Pacific") - after = arrow.Arrow(2018, 3, 11, 4, tzinfo="US/Pacific") - - pacific_range = [t for t in arrow.Arrow.range("hour", before, after)] - utc_range = [t.to("utc") for t in arrow.Arrow.range("hour", before, after)] - - assert len(pacific_range) == len(set(pacific_range)) - assert len(utc_range) == len(set(utc_range)) - - def test_unsupported(self): - - with pytest.raises(AttributeError): - next(arrow.Arrow.range("abc", datetime.utcnow(), datetime.utcnow())) - - def test_range_over_months_ending_on_different_days(self): - # regression test for issue #842 - result = list(arrow.Arrow.range("month", datetime(2015, 1, 31), limit=4)) - assert result == [ - arrow.Arrow(2015, 1, 31), - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2015, 3, 31), - arrow.Arrow(2015, 4, 30), - ] - - result = list(arrow.Arrow.range("month", datetime(2015, 1, 30), limit=3)) - assert result == [ - arrow.Arrow(2015, 1, 30), - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2015, 3, 30), - ] - - result = list(arrow.Arrow.range("month", datetime(2015, 2, 28), limit=3)) - assert result == [ - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2015, 3, 28), - arrow.Arrow(2015, 4, 28), - ] - - result = list(arrow.Arrow.range("month", datetime(2015, 3, 31), limit=3)) - assert result == [ - arrow.Arrow(2015, 
3, 31), - arrow.Arrow(2015, 4, 30), - arrow.Arrow(2015, 5, 31), - ] - - def test_range_over_quarter_months_ending_on_different_days(self): - result = list(arrow.Arrow.range("quarter", datetime(2014, 11, 30), limit=3)) - assert result == [ - arrow.Arrow(2014, 11, 30), - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2015, 5, 30), - ] - - def test_range_over_year_maintains_end_date_across_leap_year(self): - result = list(arrow.Arrow.range("year", datetime(2012, 2, 29), limit=5)) - assert result == [ - arrow.Arrow(2012, 2, 29), - arrow.Arrow(2013, 2, 28), - arrow.Arrow(2014, 2, 28), - arrow.Arrow(2015, 2, 28), - arrow.Arrow(2016, 2, 29), - ] - - -class TestArrowSpanRange: - def test_year(self): - - result = list( - arrow.Arrow.span_range("year", datetime(2013, 2, 1), datetime(2016, 3, 31)) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1), - arrow.Arrow(2013, 12, 31, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2014, 1, 1), - arrow.Arrow(2014, 12, 31, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2015, 1, 1), - arrow.Arrow(2015, 12, 31, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2016, 1, 1), - arrow.Arrow(2016, 12, 31, 23, 59, 59, 999999), - ), - ] - - def test_quarter(self): - - result = list( - arrow.Arrow.span_range( - "quarter", datetime(2013, 2, 2), datetime(2013, 5, 15) - ) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 1), arrow.Arrow(2013, 3, 31, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 4, 1), arrow.Arrow(2013, 6, 30, 23, 59, 59, 999999)), - ] - - def test_month(self): - - result = list( - arrow.Arrow.span_range("month", datetime(2013, 1, 2), datetime(2013, 4, 15)) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 1), arrow.Arrow(2013, 1, 31, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 2, 1), arrow.Arrow(2013, 2, 28, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 3, 1), arrow.Arrow(2013, 3, 31, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 4, 1), arrow.Arrow(2013, 4, 30, 23, 59, 59, 999999)), - ] - - def test_week(self): - - result = list( - arrow.Arrow.span_range("week", datetime(2013, 2, 2), datetime(2013, 2, 28)) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 28), arrow.Arrow(2013, 2, 3, 23, 59, 59, 999999)), - (arrow.Arrow(2013, 2, 4), arrow.Arrow(2013, 2, 10, 23, 59, 59, 999999)), - ( - arrow.Arrow(2013, 2, 11), - arrow.Arrow(2013, 2, 17, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 2, 18), - arrow.Arrow(2013, 2, 24, 23, 59, 59, 999999), - ), - (arrow.Arrow(2013, 2, 25), arrow.Arrow(2013, 3, 3, 23, 59, 59, 999999)), - ] - - def test_day(self): - - result = list( - arrow.Arrow.span_range( - "day", datetime(2013, 1, 1, 12), datetime(2013, 1, 4, 12) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0), - arrow.Arrow(2013, 1, 1, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 2, 0), - arrow.Arrow(2013, 1, 2, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 3, 0), - arrow.Arrow(2013, 1, 3, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 4, 0), - arrow.Arrow(2013, 1, 4, 23, 59, 59, 999999), - ), - ] - - def test_days(self): - - result = list( - arrow.Arrow.span_range( - "days", datetime(2013, 1, 1, 12), datetime(2013, 1, 4, 12) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0), - arrow.Arrow(2013, 1, 1, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 2, 0), - arrow.Arrow(2013, 1, 2, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 3, 0), - arrow.Arrow(2013, 1, 3, 23, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 4, 0), - arrow.Arrow(2013, 1, 4, 23, 59, 59, 999999), - ), - ] - - def test_hour(self): - - result = list( - 
arrow.Arrow.span_range( - "hour", datetime(2013, 1, 1, 0, 30), datetime(2013, 1, 1, 3, 30) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0), - arrow.Arrow(2013, 1, 1, 0, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 1), - arrow.Arrow(2013, 1, 1, 1, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 2), - arrow.Arrow(2013, 1, 1, 2, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 3), - arrow.Arrow(2013, 1, 1, 3, 59, 59, 999999), - ), - ] - - result = list( - arrow.Arrow.span_range( - "hour", datetime(2013, 1, 1, 3, 30), datetime(2013, 1, 1, 3, 30) - ) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 1, 3), arrow.Arrow(2013, 1, 1, 3, 59, 59, 999999)) - ] - - def test_minute(self): - - result = list( - arrow.Arrow.span_range( - "minute", datetime(2013, 1, 1, 0, 0, 30), datetime(2013, 1, 1, 0, 3, 30) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0, 0), - arrow.Arrow(2013, 1, 1, 0, 0, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 1), - arrow.Arrow(2013, 1, 1, 0, 1, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 2), - arrow.Arrow(2013, 1, 1, 0, 2, 59, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 3), - arrow.Arrow(2013, 1, 1, 0, 3, 59, 999999), - ), - ] - - def test_second(self): - - result = list( - arrow.Arrow.span_range( - "second", datetime(2013, 1, 1), datetime(2013, 1, 1, 0, 0, 3) - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 1, 1, 0, 0, 0), - arrow.Arrow(2013, 1, 1, 0, 0, 0, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 0, 1), - arrow.Arrow(2013, 1, 1, 0, 0, 1, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 0, 2), - arrow.Arrow(2013, 1, 1, 0, 0, 2, 999999), - ), - ( - arrow.Arrow(2013, 1, 1, 0, 0, 3), - arrow.Arrow(2013, 1, 1, 0, 0, 3, 999999), - ), - ] - - def test_naive_tz(self): - - tzinfo = tz.gettz("US/Pacific") - - result = arrow.Arrow.span_range( - "hour", datetime(2013, 1, 1, 0), datetime(2013, 1, 1, 3, 59), "US/Pacific" - ) - - for f, c in result: - assert f.tzinfo == tzinfo - assert c.tzinfo == tzinfo - - def test_aware_same_tz(self): - - tzinfo = tz.gettz("US/Pacific") - - result = arrow.Arrow.span_range( - "hour", - datetime(2013, 1, 1, 0, tzinfo=tzinfo), - datetime(2013, 1, 1, 2, 59, tzinfo=tzinfo), - ) - - for f, c in result: - assert f.tzinfo == tzinfo - assert c.tzinfo == tzinfo - - def test_aware_different_tz(self): - - tzinfo1 = tz.gettz("US/Pacific") - tzinfo2 = tz.gettz("US/Eastern") - - result = arrow.Arrow.span_range( - "hour", - datetime(2013, 1, 1, 0, tzinfo=tzinfo1), - datetime(2013, 1, 1, 2, 59, tzinfo=tzinfo2), - ) - - for f, c in result: - assert f.tzinfo == tzinfo1 - assert c.tzinfo == tzinfo1 - - def test_aware_tz(self): - - result = arrow.Arrow.span_range( - "hour", - datetime(2013, 1, 1, 0, tzinfo=tz.gettz("US/Eastern")), - datetime(2013, 1, 1, 2, 59, tzinfo=tz.gettz("US/Eastern")), - tz="US/Central", - ) - - for f, c in result: - assert f.tzinfo == tz.gettz("US/Central") - assert c.tzinfo == tz.gettz("US/Central") - - def test_bounds_param_is_passed(self): - - result = list( - arrow.Arrow.span_range( - "quarter", datetime(2013, 2, 2), datetime(2013, 5, 15), bounds="[]" - ) - ) - - assert result == [ - (arrow.Arrow(2013, 1, 1), arrow.Arrow(2013, 4, 1)), - (arrow.Arrow(2013, 4, 1), arrow.Arrow(2013, 7, 1)), - ] - - -class TestArrowInterval: - def test_incorrect_input(self): - with pytest.raises(ValueError): - list( - arrow.Arrow.interval( - "month", datetime(2013, 1, 2), datetime(2013, 4, 15), 0 - ) - ) - - def test_correct(self): - result = list( - arrow.Arrow.interval( - "hour", datetime(2013, 
5, 5, 12, 30), datetime(2013, 5, 5, 17, 15), 2 - ) - ) - - assert result == [ - ( - arrow.Arrow(2013, 5, 5, 12), - arrow.Arrow(2013, 5, 5, 13, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 5, 5, 14), - arrow.Arrow(2013, 5, 5, 15, 59, 59, 999999), - ), - ( - arrow.Arrow(2013, 5, 5, 16), - arrow.Arrow(2013, 5, 5, 17, 59, 59, 999999), - ), - ] - - def test_bounds_param_is_passed(self): - result = list( - arrow.Arrow.interval( - "hour", - datetime(2013, 5, 5, 12, 30), - datetime(2013, 5, 5, 17, 15), - 2, - bounds="[]", - ) - ) - - assert result == [ - (arrow.Arrow(2013, 5, 5, 12), arrow.Arrow(2013, 5, 5, 14)), - (arrow.Arrow(2013, 5, 5, 14), arrow.Arrow(2013, 5, 5, 16)), - (arrow.Arrow(2013, 5, 5, 16), arrow.Arrow(2013, 5, 5, 18)), - ] - - -@pytest.mark.usefixtures("time_2013_02_15") -class TestArrowSpan: - def test_span_attribute(self): - - with pytest.raises(AttributeError): - self.arrow.span("span") - - def test_span_year(self): - - floor, ceil = self.arrow.span("year") - - assert floor == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 12, 31, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_quarter(self): - - floor, ceil = self.arrow.span("quarter") - - assert floor == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 3, 31, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_quarter_count(self): - - floor, ceil = self.arrow.span("quarter", 2) - - assert floor == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 6, 30, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_year_count(self): - - floor, ceil = self.arrow.span("year", 2) - - assert floor == datetime(2013, 1, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2014, 12, 31, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_month(self): - - floor, ceil = self.arrow.span("month") - - assert floor == datetime(2013, 2, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 28, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_week(self): - - floor, ceil = self.arrow.span("week") - - assert floor == datetime(2013, 2, 11, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 17, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_day(self): - - floor, ceil = self.arrow.span("day") - - assert floor == datetime(2013, 2, 15, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 23, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_hour(self): - - floor, ceil = self.arrow.span("hour") - - assert floor == datetime(2013, 2, 15, 3, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_minute(self): - - floor, ceil = self.arrow.span("minute") - - assert floor == datetime(2013, 2, 15, 3, 41, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 41, 59, 999999, tzinfo=tz.tzutc()) - - def test_span_second(self): - - floor, ceil = self.arrow.span("second") - - assert floor == datetime(2013, 2, 15, 3, 41, 22, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 41, 22, 999999, tzinfo=tz.tzutc()) - - def test_span_microsecond(self): - - floor, ceil = self.arrow.span("microsecond") - - assert floor == datetime(2013, 2, 15, 3, 41, 22, 8923, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 41, 22, 8923, tzinfo=tz.tzutc()) - - def test_floor(self): - - floor, ceil = self.arrow.span("month") - - assert floor == self.arrow.floor("month") - assert ceil == self.arrow.ceil("month") - - def test_span_inclusive_inclusive(self): - - floor, ceil = 
self.arrow.span("hour", bounds="[]") - - assert floor == datetime(2013, 2, 15, 3, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 4, tzinfo=tz.tzutc()) - - def test_span_exclusive_inclusive(self): - - floor, ceil = self.arrow.span("hour", bounds="(]") - - assert floor == datetime(2013, 2, 15, 3, 0, 0, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 4, tzinfo=tz.tzutc()) - - def test_span_exclusive_exclusive(self): - - floor, ceil = self.arrow.span("hour", bounds="()") - - assert floor == datetime(2013, 2, 15, 3, 0, 0, 1, tzinfo=tz.tzutc()) - assert ceil == datetime(2013, 2, 15, 3, 59, 59, 999999, tzinfo=tz.tzutc()) - - def test_bounds_are_validated(self): - - with pytest.raises(ValueError): - floor, ceil = self.arrow.span("hour", bounds="][") - - -@pytest.mark.usefixtures("time_2013_01_01") -class TestArrowHumanize: - def test_granularity(self): - - assert self.now.humanize(granularity="second") == "just now" - - later1 = self.now.shift(seconds=1) - assert self.now.humanize(later1, granularity="second") == "just now" - assert later1.humanize(self.now, granularity="second") == "just now" - assert self.now.humanize(later1, granularity="minute") == "0 minutes ago" - assert later1.humanize(self.now, granularity="minute") == "in 0 minutes" - - later100 = self.now.shift(seconds=100) - assert self.now.humanize(later100, granularity="second") == "100 seconds ago" - assert later100.humanize(self.now, granularity="second") == "in 100 seconds" - assert self.now.humanize(later100, granularity="minute") == "a minute ago" - assert later100.humanize(self.now, granularity="minute") == "in a minute" - assert self.now.humanize(later100, granularity="hour") == "0 hours ago" - assert later100.humanize(self.now, granularity="hour") == "in 0 hours" - - later4000 = self.now.shift(seconds=4000) - assert self.now.humanize(later4000, granularity="minute") == "66 minutes ago" - assert later4000.humanize(self.now, granularity="minute") == "in 66 minutes" - assert self.now.humanize(later4000, granularity="hour") == "an hour ago" - assert later4000.humanize(self.now, granularity="hour") == "in an hour" - assert self.now.humanize(later4000, granularity="day") == "0 days ago" - assert later4000.humanize(self.now, granularity="day") == "in 0 days" - - later105 = self.now.shift(seconds=10 ** 5) - assert self.now.humanize(later105, granularity="hour") == "27 hours ago" - assert later105.humanize(self.now, granularity="hour") == "in 27 hours" - assert self.now.humanize(later105, granularity="day") == "a day ago" - assert later105.humanize(self.now, granularity="day") == "in a day" - assert self.now.humanize(later105, granularity="week") == "0 weeks ago" - assert later105.humanize(self.now, granularity="week") == "in 0 weeks" - assert self.now.humanize(later105, granularity="month") == "0 months ago" - assert later105.humanize(self.now, granularity="month") == "in 0 months" - assert self.now.humanize(later105, granularity=["month"]) == "0 months ago" - assert later105.humanize(self.now, granularity=["month"]) == "in 0 months" - - later106 = self.now.shift(seconds=3 * 10 ** 6) - assert self.now.humanize(later106, granularity="day") == "34 days ago" - assert later106.humanize(self.now, granularity="day") == "in 34 days" - assert self.now.humanize(later106, granularity="week") == "4 weeks ago" - assert later106.humanize(self.now, granularity="week") == "in 4 weeks" - assert self.now.humanize(later106, granularity="month") == "a month ago" - assert later106.humanize(self.now, granularity="month") == "in a 
month" - assert self.now.humanize(later106, granularity="year") == "0 years ago" - assert later106.humanize(self.now, granularity="year") == "in 0 years" - - later506 = self.now.shift(seconds=50 * 10 ** 6) - assert self.now.humanize(later506, granularity="week") == "82 weeks ago" - assert later506.humanize(self.now, granularity="week") == "in 82 weeks" - assert self.now.humanize(later506, granularity="month") == "18 months ago" - assert later506.humanize(self.now, granularity="month") == "in 18 months" - assert self.now.humanize(later506, granularity="year") == "a year ago" - assert later506.humanize(self.now, granularity="year") == "in a year" - - later108 = self.now.shift(seconds=10 ** 8) - assert self.now.humanize(later108, granularity="year") == "3 years ago" - assert later108.humanize(self.now, granularity="year") == "in 3 years" - - later108onlydistance = self.now.shift(seconds=10 ** 8) - assert ( - self.now.humanize( - later108onlydistance, only_distance=True, granularity="year" - ) - == "3 years" - ) - assert ( - later108onlydistance.humanize( - self.now, only_distance=True, granularity="year" - ) - == "3 years" - ) - - with pytest.raises(AttributeError): - self.now.humanize(later108, granularity="years") - - def test_multiple_granularity(self): - assert self.now.humanize(granularity="second") == "just now" - assert self.now.humanize(granularity=["second"]) == "just now" - assert ( - self.now.humanize(granularity=["year", "month", "day", "hour", "second"]) - == "in 0 years 0 months 0 days 0 hours and 0 seconds" - ) - - later4000 = self.now.shift(seconds=4000) - assert ( - later4000.humanize(self.now, granularity=["hour", "minute"]) - == "in an hour and 6 minutes" - ) - assert ( - self.now.humanize(later4000, granularity=["hour", "minute"]) - == "an hour and 6 minutes ago" - ) - assert ( - later4000.humanize( - self.now, granularity=["hour", "minute"], only_distance=True - ) - == "an hour and 6 minutes" - ) - assert ( - later4000.humanize(self.now, granularity=["day", "hour", "minute"]) - == "in 0 days an hour and 6 minutes" - ) - assert ( - self.now.humanize(later4000, granularity=["day", "hour", "minute"]) - == "0 days an hour and 6 minutes ago" - ) - - later105 = self.now.shift(seconds=10 ** 5) - assert ( - self.now.humanize(later105, granularity=["hour", "day", "minute"]) - == "a day 3 hours and 46 minutes ago" - ) - with pytest.raises(AttributeError): - self.now.humanize(later105, granularity=["error", "second"]) - - later108onlydistance = self.now.shift(seconds=10 ** 8) - assert ( - self.now.humanize( - later108onlydistance, only_distance=True, granularity=["year"] - ) - == "3 years" - ) - assert ( - self.now.humanize( - later108onlydistance, only_distance=True, granularity=["month", "week"] - ) - == "37 months and 4 weeks" - ) - assert ( - self.now.humanize( - later108onlydistance, only_distance=True, granularity=["year", "second"] - ) - == "3 years and 5327200 seconds" - ) - - one_min_one_sec_ago = self.now.shift(minutes=-1, seconds=-1) - assert ( - one_min_one_sec_ago.humanize(self.now, granularity=["minute", "second"]) - == "a minute and a second ago" - ) - - one_min_two_secs_ago = self.now.shift(minutes=-1, seconds=-2) - assert ( - one_min_two_secs_ago.humanize(self.now, granularity=["minute", "second"]) - == "a minute and 2 seconds ago" - ) - - def test_seconds(self): - - later = self.now.shift(seconds=10) - - # regression test for issue #727 - assert self.now.humanize(later) == "10 seconds ago" - assert later.humanize(self.now) == "in 10 seconds" - - assert 
self.now.humanize(later, only_distance=True) == "10 seconds" - assert later.humanize(self.now, only_distance=True) == "10 seconds" - - def test_minute(self): - - later = self.now.shift(minutes=1) - - assert self.now.humanize(later) == "a minute ago" - assert later.humanize(self.now) == "in a minute" - - assert self.now.humanize(later, only_distance=True) == "a minute" - assert later.humanize(self.now, only_distance=True) == "a minute" - - def test_minutes(self): - - later = self.now.shift(minutes=2) - - assert self.now.humanize(later) == "2 minutes ago" - assert later.humanize(self.now) == "in 2 minutes" - - assert self.now.humanize(later, only_distance=True) == "2 minutes" - assert later.humanize(self.now, only_distance=True) == "2 minutes" - - def test_hour(self): - - later = self.now.shift(hours=1) - - assert self.now.humanize(later) == "an hour ago" - assert later.humanize(self.now) == "in an hour" - - assert self.now.humanize(later, only_distance=True) == "an hour" - assert later.humanize(self.now, only_distance=True) == "an hour" - - def test_hours(self): - - later = self.now.shift(hours=2) - - assert self.now.humanize(later) == "2 hours ago" - assert later.humanize(self.now) == "in 2 hours" - - assert self.now.humanize(later, only_distance=True) == "2 hours" - assert later.humanize(self.now, only_distance=True) == "2 hours" - - def test_day(self): - - later = self.now.shift(days=1) - - assert self.now.humanize(later) == "a day ago" - assert later.humanize(self.now) == "in a day" - - # regression test for issue #697 - less_than_48_hours = self.now.shift( - days=1, hours=23, seconds=59, microseconds=999999 - ) - assert self.now.humanize(less_than_48_hours) == "a day ago" - assert less_than_48_hours.humanize(self.now) == "in a day" - - less_than_48_hours_date = less_than_48_hours._datetime.date() - with pytest.raises(TypeError): - # humanize other argument does not take raw datetime.date objects - self.now.humanize(less_than_48_hours_date) - - # convert from date to arrow object - less_than_48_hours_date = arrow.Arrow.fromdate(less_than_48_hours_date) - assert self.now.humanize(less_than_48_hours_date) == "a day ago" - assert less_than_48_hours_date.humanize(self.now) == "in a day" - - assert self.now.humanize(later, only_distance=True) == "a day" - assert later.humanize(self.now, only_distance=True) == "a day" - - def test_days(self): - - later = self.now.shift(days=2) - - assert self.now.humanize(later) == "2 days ago" - assert later.humanize(self.now) == "in 2 days" - - assert self.now.humanize(later, only_distance=True) == "2 days" - assert later.humanize(self.now, only_distance=True) == "2 days" - - # Regression tests for humanize bug referenced in issue 541 - later = self.now.shift(days=3) - assert later.humanize(self.now) == "in 3 days" - - later = self.now.shift(days=3, seconds=1) - assert later.humanize(self.now) == "in 3 days" - - later = self.now.shift(days=4) - assert later.humanize(self.now) == "in 4 days" - - def test_week(self): - - later = self.now.shift(weeks=1) - - assert self.now.humanize(later) == "a week ago" - assert later.humanize(self.now) == "in a week" - - assert self.now.humanize(later, only_distance=True) == "a week" - assert later.humanize(self.now, only_distance=True) == "a week" - - def test_weeks(self): - - later = self.now.shift(weeks=2) - - assert self.now.humanize(later) == "2 weeks ago" - assert later.humanize(self.now) == "in 2 weeks" - - assert self.now.humanize(later, only_distance=True) == "2 weeks" - assert later.humanize(self.now, 
only_distance=True) == "2 weeks" - - def test_month(self): - - later = self.now.shift(months=1) - - assert self.now.humanize(later) == "a month ago" - assert later.humanize(self.now) == "in a month" - - assert self.now.humanize(later, only_distance=True) == "a month" - assert later.humanize(self.now, only_distance=True) == "a month" - - def test_months(self): - - later = self.now.shift(months=2) - earlier = self.now.shift(months=-2) - - assert earlier.humanize(self.now) == "2 months ago" - assert later.humanize(self.now) == "in 2 months" - - assert self.now.humanize(later, only_distance=True) == "2 months" - assert later.humanize(self.now, only_distance=True) == "2 months" - - def test_year(self): - - later = self.now.shift(years=1) - - assert self.now.humanize(later) == "a year ago" - assert later.humanize(self.now) == "in a year" - - assert self.now.humanize(later, only_distance=True) == "a year" - assert later.humanize(self.now, only_distance=True) == "a year" - - def test_years(self): - - later = self.now.shift(years=2) - - assert self.now.humanize(later) == "2 years ago" - assert later.humanize(self.now) == "in 2 years" - - assert self.now.humanize(later, only_distance=True) == "2 years" - assert later.humanize(self.now, only_distance=True) == "2 years" - - arw = arrow.Arrow(2014, 7, 2) - - result = arw.humanize(self.datetime) - - assert result == "in 2 years" - - def test_arrow(self): - - arw = arrow.Arrow.fromdatetime(self.datetime) - - result = arw.humanize(arrow.Arrow.fromdatetime(self.datetime)) - - assert result == "just now" - - def test_datetime_tzinfo(self): - - arw = arrow.Arrow.fromdatetime(self.datetime) - - result = arw.humanize(self.datetime.replace(tzinfo=tz.tzutc())) - - assert result == "just now" - - def test_other(self): - - arw = arrow.Arrow.fromdatetime(self.datetime) - - with pytest.raises(TypeError): - arw.humanize(object()) - - def test_invalid_locale(self): - - arw = arrow.Arrow.fromdatetime(self.datetime) - - with pytest.raises(ValueError): - arw.humanize(locale="klingon") - - def test_none(self): - - arw = arrow.Arrow.utcnow() - - result = arw.humanize() - - assert result == "just now" - - result = arw.humanize(None) - - assert result == "just now" - - def test_untranslated_granularity(self, mocker): - - arw = arrow.Arrow.utcnow() - later = arw.shift(weeks=1) - - # simulate an untranslated timeframe key - mocker.patch.dict("arrow.locales.EnglishLocale.timeframes") - del arrow.locales.EnglishLocale.timeframes["week"] - with pytest.raises(ValueError): - arw.humanize(later, granularity="week") - - -@pytest.mark.usefixtures("time_2013_01_01") -class TestArrowHumanizeTestsWithLocale: - def test_now(self): - - arw = arrow.Arrow(2013, 1, 1, 0, 0, 0) - - result = arw.humanize(self.datetime, locale="ru") - - assert result == "сейчас" - - def test_seconds(self): - arw = arrow.Arrow(2013, 1, 1, 0, 0, 44) - - result = arw.humanize(self.datetime, locale="ru") - - assert result == "через 44 несколько секунд" - - def test_years(self): - - arw = arrow.Arrow(2011, 7, 2) - - result = arw.humanize(self.datetime, locale="ru") - - assert result == "2 года назад" - - -class TestArrowIsBetween: - def test_start_before_end(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 8)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 5)) - result = target.is_between(start, end) - assert not result - - def test_exclusive_exclusive_bounds(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 5, 12, 30, 27)) - start = 
arrow.Arrow.fromdatetime(datetime(2013, 5, 5, 12, 30, 10)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 5, 12, 30, 36)) - result = target.is_between(start, end, "()") - assert result - result = target.is_between(start, end) - assert result - - def test_exclusive_exclusive_bounds_same_date(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - result = target.is_between(start, end, "()") - assert not result - - def test_inclusive_exclusive_bounds(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 6)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 4)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 6)) - result = target.is_between(start, end, "[)") - assert not result - - def test_exclusive_inclusive_bounds(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 5)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - result = target.is_between(start, end, "(]") - assert result - - def test_inclusive_inclusive_bounds_same_date(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - result = target.is_between(start, end, "[]") - assert result - - def test_type_error_exception(self): - with pytest.raises(TypeError): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = datetime(2013, 5, 5) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 8)) - target.is_between(start, end) - - with pytest.raises(TypeError): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 5)) - end = datetime(2013, 5, 8) - target.is_between(start, end) - - with pytest.raises(TypeError): - target.is_between(None, None) - - def test_value_error_exception(self): - target = arrow.Arrow.fromdatetime(datetime(2013, 5, 7)) - start = arrow.Arrow.fromdatetime(datetime(2013, 5, 5)) - end = arrow.Arrow.fromdatetime(datetime(2013, 5, 8)) - with pytest.raises(ValueError): - target.is_between(start, end, "][") - with pytest.raises(ValueError): - target.is_between(start, end, "") - with pytest.raises(ValueError): - target.is_between(start, end, "]") - with pytest.raises(ValueError): - target.is_between(start, end, "[") - with pytest.raises(ValueError): - target.is_between(start, end, "hello") - - -class TestArrowUtil: - def test_get_datetime(self): - - get_datetime = arrow.Arrow._get_datetime - - arw = arrow.Arrow.utcnow() - dt = datetime.utcnow() - timestamp = time.time() - - assert get_datetime(arw) == arw.datetime - assert get_datetime(dt) == dt - assert ( - get_datetime(timestamp) == arrow.Arrow.utcfromtimestamp(timestamp).datetime - ) - - with pytest.raises(ValueError) as raise_ctx: - get_datetime("abc") - assert "not recognized as a datetime or timestamp" in str(raise_ctx.value) - - def test_get_tzinfo(self): - - get_tzinfo = arrow.Arrow._get_tzinfo - - with pytest.raises(ValueError) as raise_ctx: - get_tzinfo("abc") - assert "not recognized as a timezone" in str(raise_ctx.value) - - def test_get_iteration_params(self): - - assert arrow.Arrow._get_iteration_params("end", None) == ("end", sys.maxsize) - assert arrow.Arrow._get_iteration_params(None, 100) == (arrow.Arrow.max, 100) - assert arrow.Arrow._get_iteration_params(100, 120) == (100, 120) - - with pytest.raises(ValueError): - arrow.Arrow._get_iteration_params(None, None) 
diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_factory.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_factory.py
deleted file mode 100644
index 2b8df5168f..0000000000
--- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_factory.py
+++ /dev/null
@@ -1,390 +0,0 @@
-# -*- coding: utf-8 -*-
-import time
-from datetime import date, datetime
-
-import pytest
-from dateutil import tz
-
-from arrow.parser import ParserError
-
-from .utils import assert_datetime_equality
-
-
-@pytest.mark.usefixtures("arrow_factory")
-class TestGet:
-    def test_no_args(self):
-
-        assert_datetime_equality(
-            self.factory.get(), datetime.utcnow().replace(tzinfo=tz.tzutc())
-        )
-
-    def test_timestamp_one_arg_no_arg(self):
-
-        no_arg = self.factory.get(1406430900).timestamp
-        one_arg = self.factory.get("1406430900", "X").timestamp
-
-        assert no_arg == one_arg
-
-    def test_one_arg_none(self):
-
-        assert_datetime_equality(
-            self.factory.get(None), datetime.utcnow().replace(tzinfo=tz.tzutc())
-        )
-
-    def test_struct_time(self):
-
-        assert_datetime_equality(
-            self.factory.get(time.gmtime()),
-            datetime.utcnow().replace(tzinfo=tz.tzutc()),
-        )
-
-    def test_one_arg_timestamp(self):
-
-        int_timestamp = int(time.time())
-        timestamp_dt = datetime.utcfromtimestamp(int_timestamp).replace(
-            tzinfo=tz.tzutc()
-        )
-
-        assert self.factory.get(int_timestamp) == timestamp_dt
-
-        with pytest.raises(ParserError):
-            self.factory.get(str(int_timestamp))
-
-        float_timestamp = time.time()
-        timestamp_dt = datetime.utcfromtimestamp(float_timestamp).replace(
-            tzinfo=tz.tzutc()
-        )
-
-        assert self.factory.get(float_timestamp) == timestamp_dt
-
-        with pytest.raises(ParserError):
-            self.factory.get(str(float_timestamp))
-
-        # Regression test for issue #216
-        # Python 3 raises OverflowError, Python 2 raises ValueError
-        timestamp = 99999999999999999999999999.99999999999999999999999999
-        with pytest.raises((OverflowError, ValueError)):
-            self.factory.get(timestamp)
-
-    def test_one_arg_expanded_timestamp(self):
-
-        millisecond_timestamp = 1591328104308
-        microsecond_timestamp = 1591328104308505
-
-        # Regression test for issue #796
-        assert self.factory.get(millisecond_timestamp) == datetime.utcfromtimestamp(
-            1591328104.308
-        ).replace(tzinfo=tz.tzutc())
-        assert self.factory.get(microsecond_timestamp) == datetime.utcfromtimestamp(
-            1591328104.308505
-        ).replace(tzinfo=tz.tzutc())
-
-    def test_one_arg_timestamp_with_tzinfo(self):
-
-        timestamp = time.time()
-        timestamp_dt = datetime.fromtimestamp(timestamp, tz=tz.tzutc()).astimezone(
-            tz.gettz("US/Pacific")
-        )
-        timezone = tz.gettz("US/Pacific")
-
-        assert_datetime_equality(
-            self.factory.get(timestamp, tzinfo=timezone), timestamp_dt
-        )
-
-    def test_one_arg_arrow(self):
-
-        arw = self.factory.utcnow()
-        result = self.factory.get(arw)
-
-        assert arw == result
-
-    def test_one_arg_datetime(self):
-
-        dt = datetime.utcnow().replace(tzinfo=tz.tzutc())
-
-        assert self.factory.get(dt) == dt
-
-    def test_one_arg_date(self):
-
-        d = date.today()
-        dt = datetime(d.year, d.month, d.day, tzinfo=tz.tzutc())
-
-        assert self.factory.get(d) == dt
-
-    def test_one_arg_tzinfo(self):
-
-        self.expected = (
-            datetime.utcnow()
-            .replace(tzinfo=tz.tzutc())
-            .astimezone(tz.gettz("US/Pacific"))
-        )
-
-        assert_datetime_equality(
-            self.factory.get(tz.gettz("US/Pacific")), self.expected
-        )
-
-    # regression test for issue #658
-    def test_one_arg_dateparser_datetime(self):
-        dateparser = pytest.importorskip("dateparser")
-        expected = datetime(1990, 1, 1).replace(tzinfo=tz.tzutc())
-        # dateparser outputs: datetime.datetime(1990, 1, 1, 0, 0, tzinfo=)
-        parsed_date = dateparser.parse("1990-01-01T00:00:00+00:00")
-        dt_output = self.factory.get(parsed_date)._datetime.replace(tzinfo=tz.tzutc())
-        assert dt_output == expected
-
-    def test_kwarg_tzinfo(self):
-
-        self.expected = (
-            datetime.utcnow()
-            .replace(tzinfo=tz.tzutc())
-            .astimezone(tz.gettz("US/Pacific"))
-        )
-
-        assert_datetime_equality(
-            self.factory.get(tzinfo=tz.gettz("US/Pacific")), self.expected
-        )
-
-    def test_kwarg_tzinfo_string(self):
-
-        self.expected = (
-            datetime.utcnow()
-            .replace(tzinfo=tz.tzutc())
-            .astimezone(tz.gettz("US/Pacific"))
-        )
-
-        assert_datetime_equality(self.factory.get(tzinfo="US/Pacific"), self.expected)
-
-        with pytest.raises(ParserError):
-            self.factory.get(tzinfo="US/PacificInvalidTzinfo")
-
-    def test_kwarg_normalize_whitespace(self):
-        result = self.factory.get(
-            "Jun 1 2005 1:33PM",
-            "MMM D YYYY H:mmA",
-            tzinfo=tz.tzutc(),
-            normalize_whitespace=True,
-        )
-        assert result._datetime == datetime(2005, 6, 1, 13, 33, tzinfo=tz.tzutc())
-
-        result = self.factory.get(
-            "\t 2013-05-05T12:30:45.123456 \t \n",
-            tzinfo=tz.tzutc(),
-            normalize_whitespace=True,
-        )
-        assert result._datetime == datetime(
-            2013, 5, 5, 12, 30, 45, 123456, tzinfo=tz.tzutc()
-        )
-
-    def test_one_arg_iso_str(self):
-
-        dt = datetime.utcnow()
-
-        assert_datetime_equality(
-            self.factory.get(dt.isoformat()), dt.replace(tzinfo=tz.tzutc())
-        )
-
-    def test_one_arg_iso_calendar(self):
-
-        pairs = [
-            (datetime(2004, 1, 4), (2004, 1, 7)),
-            (datetime(2008, 12, 30), (2009, 1, 2)),
-            (datetime(2010, 1, 2), (2009, 53, 6)),
-            (datetime(2000, 2, 29), (2000, 9, 2)),
-            (datetime(2005, 1, 1), (2004, 53, 6)),
-            (datetime(2010, 1, 4), (2010, 1, 1)),
-            (datetime(2010, 1, 3), (2009, 53, 7)),
-            (datetime(2003, 12, 29), (2004, 1, 1)),
-        ]
-
-        for pair in pairs:
-            dt, iso = pair
-            assert self.factory.get(iso) == self.factory.get(dt)
-
-        with pytest.raises(TypeError):
-            self.factory.get((2014, 7, 1, 4))
-
-        with pytest.raises(TypeError):
-            self.factory.get((2014, 7))
-
-        with pytest.raises(ValueError):
-            self.factory.get((2014, 70, 1))
-
-        with pytest.raises(ValueError):
-            self.factory.get((2014, 7, 10))
-
-    def test_one_arg_other(self):
-
-        with pytest.raises(TypeError):
-            self.factory.get(object())
-
-    def test_one_arg_bool(self):
-
-        with pytest.raises(TypeError):
-            self.factory.get(False)
-
-        with pytest.raises(TypeError):
-            self.factory.get(True)
-
-    def test_two_args_datetime_tzinfo(self):
-
-        result = self.factory.get(datetime(2013, 1, 1), tz.gettz("US/Pacific"))
-
-        assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific"))
-
-    def test_two_args_datetime_tz_str(self):
-
-        result = self.factory.get(datetime(2013, 1, 1), "US/Pacific")
-
-        assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific"))
-
-    def test_two_args_date_tzinfo(self):
-
-        result = self.factory.get(date(2013, 1, 1), tz.gettz("US/Pacific"))
-
-        assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific"))
-
-    def test_two_args_date_tz_str(self):
-
-        result = self.factory.get(date(2013, 1, 1), "US/Pacific")
-
-        assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific"))
-
-    def test_two_args_datetime_other(self):
-
-        with pytest.raises(TypeError):
-            self.factory.get(datetime.utcnow(), object())
-
-    def test_two_args_date_other(self):
-
-        with pytest.raises(TypeError):
-            self.factory.get(date.today(), object())
-
-    def test_two_args_str_str(self):
-
-        result = self.factory.get("2013-01-01", "YYYY-MM-DD")
-
-        assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.tzutc())
-
-    def test_two_args_str_tzinfo(self):
-
-        result = self.factory.get("2013-01-01", tzinfo=tz.gettz("US/Pacific"))
-
-        assert_datetime_equality(
-            result._datetime, datetime(2013, 1, 1, tzinfo=tz.gettz("US/Pacific"))
-        )
-
-    def test_two_args_twitter_format(self):
-
-        # format returned by twitter API for created_at:
-        twitter_date = "Fri Apr 08 21:08:54 +0000 2016"
-        result = self.factory.get(twitter_date, "ddd MMM DD HH:mm:ss Z YYYY")
-
-        assert result._datetime == datetime(2016, 4, 8, 21, 8, 54, tzinfo=tz.tzutc())
-
-    def test_two_args_str_list(self):
-
-        result = self.factory.get("2013-01-01", ["MM/DD/YYYY", "YYYY-MM-DD"])
-
-        assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.tzutc())
-
-    def test_two_args_unicode_unicode(self):
-
-        result = self.factory.get(u"2013-01-01", u"YYYY-MM-DD")
-
-        assert result._datetime == datetime(2013, 1, 1, tzinfo=tz.tzutc())
-
-    def test_two_args_other(self):
-
-        with pytest.raises(TypeError):
-            self.factory.get(object(), object())
-
-    def test_three_args_with_tzinfo(self):
-
-        timefmt = "YYYYMMDD"
-        d = "20150514"
-
-        assert self.factory.get(d, timefmt, tzinfo=tz.tzlocal()) == datetime(
-            2015, 5, 14, tzinfo=tz.tzlocal()
-        )
-
-    def test_three_args(self):
-
-        assert self.factory.get(2013, 1, 1) == datetime(2013, 1, 1, tzinfo=tz.tzutc())
-
-    def test_full_kwargs(self):
-
-        assert (
-            self.factory.get(
-                year=2016,
-                month=7,
-                day=14,
-                hour=7,
-                minute=16,
-                second=45,
-                microsecond=631092,
-            )
-            == datetime(2016, 7, 14, 7, 16, 45, 631092, tzinfo=tz.tzutc())
-        )
-
-    def test_three_kwargs(self):
-
-        assert self.factory.get(year=2016, month=7, day=14) == datetime(
-            2016, 7, 14, 0, 0, tzinfo=tz.tzutc()
-        )
-
-    def test_tzinfo_string_kwargs(self):
-        result = self.factory.get("2019072807", "YYYYMMDDHH", tzinfo="UTC")
-        assert result._datetime == datetime(2019, 7, 28, 7, 0, 0, 0, tzinfo=tz.tzutc())
-
-    def test_insufficient_kwargs(self):
-
-        with pytest.raises(TypeError):
-            self.factory.get(year=2016)
-
-        with pytest.raises(TypeError):
-            self.factory.get(year=2016, month=7)
-
-    def test_locale(self):
-        result = self.factory.get("2010", "YYYY", locale="ja")
-        assert result._datetime == datetime(2010, 1, 1, 0, 0, 0, 0, tzinfo=tz.tzutc())
-
-        # regression test for issue #701
-        result = self.factory.get(
-            "Montag, 9. September 2019, 16:15-20:00", "dddd, D. MMMM YYYY", locale="de"
-        )
-        assert result._datetime == datetime(2019, 9, 9, 0, 0, 0, 0, tzinfo=tz.tzutc())
-
-    def test_locale_kwarg_only(self):
-        res = self.factory.get(locale="ja")
-        assert res.tzinfo == tz.tzutc()
-
-    def test_locale_with_tzinfo(self):
-        res = self.factory.get(locale="ja", tzinfo=tz.gettz("Asia/Tokyo"))
-        assert res.tzinfo == tz.gettz("Asia/Tokyo")
-
-
-@pytest.mark.usefixtures("arrow_factory")
-class TestUtcNow:
-    def test_utcnow(self):
-
-        assert_datetime_equality(
-            self.factory.utcnow()._datetime,
-            datetime.utcnow().replace(tzinfo=tz.tzutc()),
-        )
-
-
-@pytest.mark.usefixtures("arrow_factory")
-class TestNow:
-    def test_no_tz(self):
-
-        assert_datetime_equality(self.factory.now(), datetime.now(tz.tzlocal()))
-
-    def test_tzinfo(self):
-
-        assert_datetime_equality(
-            self.factory.now(tz.gettz("EST")), datetime.now(tz.gettz("EST"))
-        )
-
-    def test_tz_str(self):
-
-        assert_datetime_equality(self.factory.now("EST"), datetime.now(tz.gettz("EST")))
diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_formatter.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_formatter.py
deleted file mode 100644
index e97aeb5dcc..0000000000
--- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_formatter.py
+++ /dev/null
@@ -1,282 +0,0 @@
-# -*- coding: utf-8 -*-
-from datetime import datetime
-
-import pytest
-import pytz
-from dateutil import tz as dateutil_tz
-
-from arrow import (
-    FORMAT_ATOM,
-    FORMAT_COOKIE,
-    FORMAT_RFC822,
-    FORMAT_RFC850,
-    FORMAT_RFC1036,
-    FORMAT_RFC1123,
-    FORMAT_RFC2822,
-    FORMAT_RFC3339,
-    FORMAT_RSS,
-    FORMAT_W3C,
-)
-
-from .utils import make_full_tz_list
-
-
-@pytest.mark.usefixtures("arrow_formatter")
-class TestFormatterFormatToken:
-    def test_format(self):
-
-        dt = datetime(2013, 2, 5, 12, 32, 51)
-
-        result = self.formatter.format(dt, "MM-DD-YYYY hh:mm:ss a")
-
-        assert result == "02-05-2013 12:32:51 pm"
-
-    def test_year(self):
-
-        dt = datetime(2013, 1, 1)
-        assert self.formatter._format_token(dt, "YYYY") == "2013"
-        assert self.formatter._format_token(dt, "YY") == "13"
-
-    def test_month(self):
-
-        dt = datetime(2013, 1, 1)
-        assert self.formatter._format_token(dt, "MMMM") == "January"
-        assert self.formatter._format_token(dt, "MMM") == "Jan"
-        assert self.formatter._format_token(dt, "MM") == "01"
-        assert self.formatter._format_token(dt, "M") == "1"
-
-    def test_day(self):
-
-        dt = datetime(2013, 2, 1)
-        assert self.formatter._format_token(dt, "DDDD") == "032"
-        assert self.formatter._format_token(dt, "DDD") == "32"
-        assert self.formatter._format_token(dt, "DD") == "01"
-        assert self.formatter._format_token(dt, "D") == "1"
-        assert self.formatter._format_token(dt, "Do") == "1st"
-
-        assert self.formatter._format_token(dt, "dddd") == "Friday"
-        assert self.formatter._format_token(dt, "ddd") == "Fri"
-        assert self.formatter._format_token(dt, "d") == "5"
-
-    def test_hour(self):
-
-        dt = datetime(2013, 1, 1, 2)
-        assert self.formatter._format_token(dt, "HH") == "02"
-        assert self.formatter._format_token(dt, "H") == "2"
-
-        dt = datetime(2013, 1, 1, 13)
-        assert self.formatter._format_token(dt, "HH") == "13"
-        assert self.formatter._format_token(dt, "H") == "13"
-
-        dt = datetime(2013, 1, 1, 2)
-        assert self.formatter._format_token(dt, "hh") == "02"
-        assert self.formatter._format_token(dt, "h") == "2"
-
-        dt = datetime(2013, 1, 1, 13)
-        assert self.formatter._format_token(dt, "hh") == "01"
-        assert self.formatter._format_token(dt, "h") == "1"
-
-        # test that 12-hour time converts to '12' at midnight
-        dt = datetime(2013, 1, 1, 0)
-        assert self.formatter._format_token(dt, "hh") == "12"
-        assert self.formatter._format_token(dt, "h") == "12"
-
-    def test_minute(self):
-
-        dt = datetime(2013, 1, 1, 0, 1)
-        assert self.formatter._format_token(dt, "mm") == "01"
-        assert self.formatter._format_token(dt, "m") == "1"
-
-    def test_second(self):
-
-        dt = datetime(2013, 1, 1, 0, 0, 1)
-        assert self.formatter._format_token(dt, "ss") == "01"
-        assert self.formatter._format_token(dt, "s") == "1"
-
-    def test_sub_second(self):
-
-        dt = datetime(2013, 1, 1, 0, 0, 0, 123456)
-        assert self.formatter._format_token(dt, "SSSSSS") == "123456"
-        assert self.formatter._format_token(dt, "SSSSS") == "12345"
-        assert self.formatter._format_token(dt, "SSSS") == "1234"
-        assert self.formatter._format_token(dt, "SSS") == "123"
-        assert self.formatter._format_token(dt, "SS") == "12"
-        assert self.formatter._format_token(dt, "S") == "1"
-
-        dt = datetime(2013, 1, 1, 0, 0, 0, 2000)
-        assert self.formatter._format_token(dt, "SSSSSS") == "002000"
-        assert self.formatter._format_token(dt, "SSSSS") == "00200"
-        assert self.formatter._format_token(dt, "SSSS") == "0020"
-        assert self.formatter._format_token(dt, "SSS") == "002"
-        assert self.formatter._format_token(dt, "SS") == "00"
-        assert self.formatter._format_token(dt, "S") == "0"
-
-    def test_timestamp(self):
-
-        timestamp = 1588437009.8952794
-        dt = datetime.utcfromtimestamp(timestamp)
-        expected = str(int(timestamp))
-        assert self.formatter._format_token(dt, "X") == expected
-
-        # Must round because time.time() may return a float with greater
-        # than 6 digits of precision
-        expected = str(int(timestamp * 1000000))
-        assert self.formatter._format_token(dt, "x") == expected
-
-    def test_timezone(self):
-
-        dt = datetime.utcnow().replace(tzinfo=dateutil_tz.gettz("US/Pacific"))
-
-        result = self.formatter._format_token(dt, "ZZ")
-        assert result == "-07:00" or result == "-08:00"
-
-        result = self.formatter._format_token(dt, "Z")
-        assert result == "-0700" or result == "-0800"
-
-    @pytest.mark.parametrize("full_tz_name", make_full_tz_list())
-    def test_timezone_formatter(self, full_tz_name):
-
-        # This test will fail if we use "now" as date as soon as we change from/to DST
-        dt = datetime(1986, 2, 14, tzinfo=pytz.timezone("UTC")).replace(
-            tzinfo=dateutil_tz.gettz(full_tz_name)
-        )
-        abbreviation = dt.tzname()
-
-        result = self.formatter._format_token(dt, "ZZZ")
-        assert result == abbreviation
-
-    def test_am_pm(self):
-
-        dt = datetime(2012, 1, 1, 11)
-        assert self.formatter._format_token(dt, "a") == "am"
-        assert self.formatter._format_token(dt, "A") == "AM"
-
-        dt = datetime(2012, 1, 1, 13)
-        assert self.formatter._format_token(dt, "a") == "pm"
-        assert self.formatter._format_token(dt, "A") == "PM"
-
-    def test_week(self):
-        dt = datetime(2017, 5, 19)
-        assert self.formatter._format_token(dt, "W") == "2017-W20-5"
-
-        # make sure week is zero padded when needed
-        dt_early = datetime(2011, 1, 20)
-        assert self.formatter._format_token(dt_early, "W") == "2011-W03-4"
-
-    def test_nonsense(self):
-        dt = datetime(2012, 1, 1, 11)
-        assert self.formatter._format_token(dt, None) is None
-        assert self.formatter._format_token(dt, "NONSENSE") is None
-
-    def test_escape(self):
-
-        assert (
-            self.formatter.format(
-                datetime(2015, 12, 10, 17, 9), "MMMM D, YYYY [at] h:mma"
-            )
-            == "December 10, 2015 at 5:09pm"
-        )
-
-        assert (
-            self.formatter.format(
-                datetime(2015, 12, 10, 17, 9), "[MMMM] M D, YYYY [at] h:mma"
-            )
-            == "MMMM 12 10, 2015 at 5:09pm"
-        )
-
-        assert (
-            self.formatter.format(
-                datetime(1990, 11, 25),
-                "[It happened on] MMMM Do [in the year] YYYY [a long time ago]",
-            )
-            == "It happened on November 25th in the year 1990 a long time ago"
-        )
-
-        assert (
-            self.formatter.format(
-                datetime(1990, 11, 25),
-                "[It happened on] MMMM Do [in the][ year] YYYY [a long time ago]",
-            )
-            == "It happened on November 25th in the year 1990 a long time ago"
-        )
-
-        assert (
-            self.formatter.format(
-                datetime(1, 1, 1), "[I'm][ entirely][ escaped,][ weee!]"
-            )
-            == "I'm entirely escaped, weee!"
-        )
-
-        # Special RegEx characters
-        assert (
-            self.formatter.format(
-                datetime(2017, 12, 31, 2, 0), "MMM DD, YYYY |^${}().*+?<>-& h:mm A"
-            )
-            == "Dec 31, 2017 |^${}().*+?<>-& 2:00 AM"
-        )
-
-        # Escaping is atomic: brackets inside brackets are treated literally
-        assert self.formatter.format(datetime(1, 1, 1), "[[[ ]]") == "[[ ]"
-
-
-@pytest.mark.usefixtures("arrow_formatter", "time_1975_12_25")
-class TestFormatterBuiltinFormats:
-    def test_atom(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_ATOM)
-            == "1975-12-25 14:15:16-05:00"
-        )
-
-    def test_cookie(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_COOKIE)
-            == "Thursday, 25-Dec-1975 14:15:16 EST"
-        )
-
-    def test_rfc_822(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_RFC822)
-            == "Thu, 25 Dec 75 14:15:16 -0500"
-        )
-
-    def test_rfc_850(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_RFC850)
-            == "Thursday, 25-Dec-75 14:15:16 EST"
-        )
-
-    def test_rfc_1036(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_RFC1036)
-            == "Thu, 25 Dec 75 14:15:16 -0500"
-        )
-
-    def test_rfc_1123(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_RFC1123)
-            == "Thu, 25 Dec 1975 14:15:16 -0500"
-        )
-
-    def test_rfc_2822(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_RFC2822)
-            == "Thu, 25 Dec 1975 14:15:16 -0500"
-        )
-
-    def test_rfc3339(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_RFC3339)
-            == "1975-12-25 14:15:16-05:00"
-        )
-
-    def test_rss(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_RSS)
-            == "Thu, 25 Dec 1975 14:15:16 -0500"
-        )
-
-    def test_w3c(self):
-        assert (
-            self.formatter.format(self.datetime, FORMAT_W3C)
-            == "1975-12-25 14:15:16-05:00"
-        )
diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_locales.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_locales.py
deleted file mode 100644
index 006ccdd5ba..0000000000
--- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_locales.py
+++ /dev/null
@@ -1,1352 +0,0 @@
-# -*- coding: utf-8 -*-
-from __future__ import unicode_literals
-
-import pytest
-
-from arrow import arrow, locales
-
-
-@pytest.mark.usefixtures("lang_locales")
-class TestLocaleValidation:
-    """Validate locales to ensure that translations are valid and complete"""
-
-    def test_locale_validation(self):
-
-        for _, locale_cls in self.locales.items():
-            # 7 days + 1 spacer to allow for 1-indexing of months
-            assert len(locale_cls.day_names) == 8
-            assert locale_cls.day_names[0] == ""
-            # ensure that all string from index 1 onward are valid (not blank or None)
-            assert all(locale_cls.day_names[1:])
-
-            assert len(locale_cls.day_abbreviations) == 8
-            assert locale_cls.day_abbreviations[0] == ""
-            assert all(locale_cls.day_abbreviations[1:])
-
-            # 12 months + 1 spacer to allow for 1-indexing of months
-            assert len(locale_cls.month_names) == 13
-            assert locale_cls.month_names[0] == ""
-            assert all(locale_cls.month_names[1:])
-
-            assert len(locale_cls.month_abbreviations) == 13
-            assert locale_cls.month_abbreviations[0] == ""
-            assert all(locale_cls.month_abbreviations[1:])
-
-            assert len(locale_cls.names) > 0
-            assert locale_cls.past is not None
-            assert locale_cls.future is not None
-
-
-class TestModule:
-    def test_get_locale(self, mocker):
-        mock_locale = mocker.Mock()
-        mock_locale_cls = mocker.Mock()
-        mock_locale_cls.return_value = mock_locale
-
-        with pytest.raises(ValueError):
-            arrow.locales.get_locale("locale_name")
-
-        cls_dict = arrow.locales._locales
-        mocker.patch.dict(cls_dict, {"locale_name": mock_locale_cls})
-
-        result = arrow.locales.get_locale("locale_name")
-
-        assert result == mock_locale
-
-    def test_get_locale_by_class_name(self, mocker):
-        mock_locale_cls = mocker.Mock()
-        mock_locale_obj = mock_locale_cls.return_value = mocker.Mock()
-
-        globals_fn = mocker.Mock()
-        globals_fn.return_value = {"NonExistentLocale": mock_locale_cls}
-
-        with pytest.raises(ValueError):
-            arrow.locales.get_locale_by_class_name("NonExistentLocale")
-
-        mocker.patch.object(locales, "globals", globals_fn)
-        result = arrow.locales.get_locale_by_class_name("NonExistentLocale")
-
-        mock_locale_cls.assert_called_once_with()
-        assert result == mock_locale_obj
-
-    def test_locales(self):
-
-        assert len(locales._locales) > 0
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestEnglishLocale:
-    def test_describe(self):
-        assert self.locale.describe("now", only_distance=True) == "instantly"
-        assert self.locale.describe("now", only_distance=False) == "just now"
-
-    def test_format_timeframe(self):
-
-        assert self.locale._format_timeframe("hours", 2) == "2 hours"
-        assert self.locale._format_timeframe("hour", 0) == "an hour"
-
-    def test_format_relative_now(self):
-
-        result = self.locale._format_relative("just now", "now", 0)
-
-        assert result == "just now"
-
-    def test_format_relative_past(self):
-
-        result = self.locale._format_relative("an hour", "hour", 1)
-
-        assert result == "in an hour"
-
-    def test_format_relative_future(self):
-
-        result = self.locale._format_relative("an hour", "hour", -1)
-
-        assert result == "an hour ago"
-
-    def test_ordinal_number(self):
-        assert self.locale.ordinal_number(0) == "0th"
-        assert self.locale.ordinal_number(1) == "1st"
-        assert self.locale.ordinal_number(2) == "2nd"
-        assert self.locale.ordinal_number(3) == "3rd"
-        assert self.locale.ordinal_number(4) == "4th"
-        assert self.locale.ordinal_number(10) == "10th"
-        assert self.locale.ordinal_number(11) == "11th"
-        assert self.locale.ordinal_number(12) == "12th"
-        assert self.locale.ordinal_number(13) == "13th"
-        assert self.locale.ordinal_number(14) == "14th"
-        assert self.locale.ordinal_number(21) == "21st"
-        assert self.locale.ordinal_number(22) == "22nd"
-        assert self.locale.ordinal_number(23) == "23rd"
-        assert self.locale.ordinal_number(24) == "24th"
-
-        assert self.locale.ordinal_number(100) == "100th"
-        assert self.locale.ordinal_number(101) == "101st"
-        assert self.locale.ordinal_number(102) == "102nd"
-        assert self.locale.ordinal_number(103) == "103rd"
-        assert self.locale.ordinal_number(104) == "104th"
-        assert self.locale.ordinal_number(110) == "110th"
-        assert self.locale.ordinal_number(111) == "111th"
-        assert self.locale.ordinal_number(112) == "112th"
-        assert self.locale.ordinal_number(113) == "113th"
-        assert self.locale.ordinal_number(114) == "114th"
-        assert self.locale.ordinal_number(121) == "121st"
-        assert self.locale.ordinal_number(122) == "122nd"
-        assert self.locale.ordinal_number(123) == "123rd"
-        assert self.locale.ordinal_number(124) == "124th"
-
-    def test_meridian_invalid_token(self):
-        assert self.locale.meridian(7, None) is None
-        assert self.locale.meridian(7, "B") is None
-        assert self.locale.meridian(7, "NONSENSE") is None
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestItalianLocale:
-    def test_ordinal_number(self):
-        assert self.locale.ordinal_number(1) == "1º"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestSpanishLocale:
-    def test_ordinal_number(self):
-        assert self.locale.ordinal_number(1) == "1º"
-
-    def test_format_timeframe(self):
-        assert self.locale._format_timeframe("now", 0) == "ahora"
-        assert self.locale._format_timeframe("seconds", 1) == "1 segundos"
-        assert self.locale._format_timeframe("seconds", 3) == "3 segundos"
-        assert self.locale._format_timeframe("seconds", 30) == "30 segundos"
-        assert self.locale._format_timeframe("minute", 1) == "un minuto"
-        assert self.locale._format_timeframe("minutes", 4) == "4 minutos"
-        assert self.locale._format_timeframe("minutes", 40) == "40 minutos"
-        assert self.locale._format_timeframe("hour", 1) == "una hora"
-        assert self.locale._format_timeframe("hours", 5) == "5 horas"
-        assert self.locale._format_timeframe("hours", 23) == "23 horas"
-        assert self.locale._format_timeframe("day", 1) == "un día"
-        assert self.locale._format_timeframe("days", 6) == "6 días"
-        assert self.locale._format_timeframe("days", 12) == "12 días"
-        assert self.locale._format_timeframe("week", 1) == "una semana"
-        assert self.locale._format_timeframe("weeks", 2) == "2 semanas"
-        assert self.locale._format_timeframe("weeks", 3) == "3 semanas"
-        assert self.locale._format_timeframe("month", 1) == "un mes"
-        assert self.locale._format_timeframe("months", 7) == "7 meses"
-        assert self.locale._format_timeframe("months", 11) == "11 meses"
-        assert self.locale._format_timeframe("year", 1) == "un año"
-        assert self.locale._format_timeframe("years", 8) == "8 años"
-        assert self.locale._format_timeframe("years", 12) == "12 años"
-
-        assert self.locale._format_timeframe("now", 0) == "ahora"
-        assert self.locale._format_timeframe("seconds", -1) == "1 segundos"
-        assert self.locale._format_timeframe("seconds", -9) == "9 segundos"
-        assert self.locale._format_timeframe("seconds", -12) == "12 segundos"
-        assert self.locale._format_timeframe("minute", -1) == "un minuto"
-        assert self.locale._format_timeframe("minutes", -2) == "2 minutos"
-        assert self.locale._format_timeframe("minutes", -10) == "10 minutos"
-        assert self.locale._format_timeframe("hour", -1) == "una hora"
-        assert self.locale._format_timeframe("hours", -3) == "3 horas"
-        assert self.locale._format_timeframe("hours", -11) == "11 horas"
-        assert self.locale._format_timeframe("day", -1) == "un día"
-        assert self.locale._format_timeframe("days", -2) == "2 días"
-        assert self.locale._format_timeframe("days", -12) == "12 días"
-        assert self.locale._format_timeframe("week", -1) == "una semana"
-        assert self.locale._format_timeframe("weeks", -2) == "2 semanas"
-        assert self.locale._format_timeframe("weeks", -3) == "3 semanas"
-        assert self.locale._format_timeframe("month", -1) == "un mes"
-        assert self.locale._format_timeframe("months", -3) == "3 meses"
-        assert self.locale._format_timeframe("months", -13) == "13 meses"
-        assert self.locale._format_timeframe("year", -1) == "un año"
-        assert self.locale._format_timeframe("years", -4) == "4 años"
-        assert self.locale._format_timeframe("years", -14) == "14 años"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestFrenchLocale:
-    def test_ordinal_number(self):
-        assert self.locale.ordinal_number(1) == "1er"
-        assert self.locale.ordinal_number(2) == "2e"
-
-    def test_month_abbreviation(self):
-        assert "juil" in self.locale.month_abbreviations
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestFrenchCanadianLocale:
-    def test_month_abbreviation(self):
-        assert "juill" in self.locale.month_abbreviations
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestRussianLocale:
-    def test_plurals2(self):
-        assert self.locale._format_timeframe("hours", 0) == "0 часов"
-        assert self.locale._format_timeframe("hours", 1) == "1 час"
-        assert self.locale._format_timeframe("hours", 2) == "2 часа"
-        assert self.locale._format_timeframe("hours", 4) == "4 часа"
-        assert self.locale._format_timeframe("hours", 5) == "5 часов"
-        assert self.locale._format_timeframe("hours", 21) == "21 час"
-        assert self.locale._format_timeframe("hours", 22) == "22 часа"
-        assert self.locale._format_timeframe("hours", 25) == "25 часов"
-
-        # feminine grammatical gender should be tested separately
-        assert self.locale._format_timeframe("minutes", 0) == "0 минут"
-        assert self.locale._format_timeframe("minutes", 1) == "1 минуту"
-        assert self.locale._format_timeframe("minutes", 2) == "2 минуты"
-        assert self.locale._format_timeframe("minutes", 4) == "4 минуты"
-        assert self.locale._format_timeframe("minutes", 5) == "5 минут"
-        assert self.locale._format_timeframe("minutes", 21) == "21 минуту"
-        assert self.locale._format_timeframe("minutes", 22) == "22 минуты"
-        assert self.locale._format_timeframe("minutes", 25) == "25 минут"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestPolishLocale:
-    def test_plurals(self):
-
-        assert self.locale._format_timeframe("seconds", 0) == "0 sekund"
-        assert self.locale._format_timeframe("second", 1) == "sekundę"
-        assert self.locale._format_timeframe("seconds", 2) == "2 sekundy"
-        assert self.locale._format_timeframe("seconds", 5) == "5 sekund"
-        assert self.locale._format_timeframe("seconds", 21) == "21 sekund"
-        assert self.locale._format_timeframe("seconds", 22) == "22 sekundy"
-        assert self.locale._format_timeframe("seconds", 25) == "25 sekund"
-
-        assert self.locale._format_timeframe("minutes", 0) == "0 minut"
-        assert self.locale._format_timeframe("minute", 1) == "minutę"
-        assert self.locale._format_timeframe("minutes", 2) == "2 minuty"
-        assert self.locale._format_timeframe("minutes", 5) == "5 minut"
-        assert self.locale._format_timeframe("minutes", 21) == "21 minut"
-        assert self.locale._format_timeframe("minutes", 22) == "22 minuty"
-        assert self.locale._format_timeframe("minutes", 25) == "25 minut"
-
-        assert self.locale._format_timeframe("hours", 0) == "0 godzin"
-        assert self.locale._format_timeframe("hour", 1) == "godzinę"
-        assert self.locale._format_timeframe("hours", 2) == "2 godziny"
-        assert self.locale._format_timeframe("hours", 5) == "5 godzin"
-        assert self.locale._format_timeframe("hours", 21) == "21 godzin"
-        assert self.locale._format_timeframe("hours", 22) == "22 godziny"
-        assert self.locale._format_timeframe("hours", 25) == "25 godzin"
-
-        assert self.locale._format_timeframe("weeks", 0) == "0 tygodni"
-        assert self.locale._format_timeframe("week", 1) == "tydzień"
-        assert self.locale._format_timeframe("weeks", 2) == "2 tygodnie"
-        assert self.locale._format_timeframe("weeks", 5) == "5 tygodni"
-        assert self.locale._format_timeframe("weeks", 21) == "21 tygodni"
-        assert self.locale._format_timeframe("weeks", 22) == "22 tygodnie"
-        assert self.locale._format_timeframe("weeks", 25) == "25 tygodni"
-
-        assert self.locale._format_timeframe("months", 0) == "0 miesięcy"
-        assert self.locale._format_timeframe("month", 1) == "miesiąc"
-        assert self.locale._format_timeframe("months", 2) == "2 miesiące"
-        assert self.locale._format_timeframe("months", 5) == "5 miesięcy"
-        assert self.locale._format_timeframe("months", 21) == "21 miesięcy"
-        assert self.locale._format_timeframe("months", 22) == "22 miesiące"
-        assert self.locale._format_timeframe("months", 25) == "25 miesięcy"
-
-        assert self.locale._format_timeframe("years", 0) == "0 lat"
-        assert self.locale._format_timeframe("year", 1) == "rok"
-        assert self.locale._format_timeframe("years", 2) == "2 lata"
-        assert self.locale._format_timeframe("years", 5) == "5 lat"
-        assert self.locale._format_timeframe("years", 21) == "21 lat"
-        assert self.locale._format_timeframe("years", 22) == "22 lata"
-        assert self.locale._format_timeframe("years", 25) == "25 lat"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestIcelandicLocale:
-    def test_format_timeframe(self):
-
-        assert self.locale._format_timeframe("minute", -1) == "einni mínútu"
-        assert self.locale._format_timeframe("minute", 1) == "eina mínútu"
-
-        assert self.locale._format_timeframe("hours", -2) == "2 tímum"
-        assert self.locale._format_timeframe("hours", 2) == "2 tíma"
-        assert self.locale._format_timeframe("now", 0) == "rétt í þessu"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestMalayalamLocale:
-    def test_format_timeframe(self):
-
-        assert self.locale._format_timeframe("hours", 2) == "2 മണിക്കൂർ"
-        assert self.locale._format_timeframe("hour", 0) == "ഒരു മണിക്കൂർ"
-
-    def test_format_relative_now(self):
-
-        result = self.locale._format_relative("ഇപ്പോൾ", "now", 0)
-
-        assert result == "ഇപ്പോൾ"
-
-    def test_format_relative_past(self):
-
-        result = self.locale._format_relative("ഒരു മണിക്കൂർ", "hour", 1)
-        assert result == "ഒരു മണിക്കൂർ ശേഷം"
-
-    def test_format_relative_future(self):
-
-        result = self.locale._format_relative("ഒരു മണിക്കൂർ", "hour", -1)
-        assert result == "ഒരു മണിക്കൂർ മുമ്പ്"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestHindiLocale:
-    def test_format_timeframe(self):
-
-        assert self.locale._format_timeframe("hours", 2) == "2 घंटे"
-        assert self.locale._format_timeframe("hour", 0) == "एक घंटा"
-
-    def test_format_relative_now(self):
-
-        result = self.locale._format_relative("अभी", "now", 0)
-        assert result == "अभी"
-
-    def test_format_relative_past(self):
-
-        result = self.locale._format_relative("एक घंटा", "hour", 1)
-        assert result == "एक घंटा बाद"
-
-    def test_format_relative_future(self):
-
-        result = self.locale._format_relative("एक घंटा", "hour", -1)
-        assert result == "एक घंटा पहले"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestCzechLocale:
-    def test_format_timeframe(self):
-
-        assert self.locale._format_timeframe("hours", 2) == "2 hodiny"
-        assert self.locale._format_timeframe("hours", 5) == "5 hodin"
-        assert self.locale._format_timeframe("hour", 0) == "0 hodin"
-        assert self.locale._format_timeframe("hours", -2) == "2 hodinami"
-        assert self.locale._format_timeframe("hours", -5) == "5 hodinami"
-        assert self.locale._format_timeframe("now", 0) == "Teď"
-
-        assert self.locale._format_timeframe("weeks", 2) == "2 týdny"
-        assert self.locale._format_timeframe("weeks", 5) == "5 týdnů"
-        assert self.locale._format_timeframe("week", 0) == "0 týdnů"
-        assert self.locale._format_timeframe("weeks", -2) == "2 týdny"
-        assert self.locale._format_timeframe("weeks", -5) == "5 týdny"
-
-    def test_format_relative_now(self):
-
-        result = self.locale._format_relative("Teď", "now", 0)
-        assert result == "Teď"
-
-    def test_format_relative_future(self):
-
-        result = self.locale._format_relative("hodinu", "hour", 1)
-        assert result == "Za hodinu"
-
-    def test_format_relative_past(self):
-
-        result = self.locale._format_relative("hodinou", "hour", -1)
-        assert result == "Před hodinou"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestSlovakLocale:
-    def test_format_timeframe(self):
-
-        assert self.locale._format_timeframe("seconds", -5) == "5 sekundami"
-        assert self.locale._format_timeframe("seconds", -2) == "2 sekundami"
-        assert self.locale._format_timeframe("second", -1) == "sekundou"
-        assert self.locale._format_timeframe("second", 0) == "0 sekúnd"
-        assert self.locale._format_timeframe("second", 1) == "sekundu"
-        assert self.locale._format_timeframe("seconds", 2) == "2 sekundy"
-        assert self.locale._format_timeframe("seconds", 5) == "5 sekúnd"
-
-        assert self.locale._format_timeframe("minutes", -5) == "5 minútami"
-        assert self.locale._format_timeframe("minutes", -2) == "2 minútami"
-        assert self.locale._format_timeframe("minute", -1) == "minútou"
-        assert self.locale._format_timeframe("minute", 0) == "0 minút"
-        assert self.locale._format_timeframe("minute", 1) == "minútu"
-        assert self.locale._format_timeframe("minutes", 2) == "2 minúty"
-        assert self.locale._format_timeframe("minutes", 5) == "5 minút"
-
-        assert self.locale._format_timeframe("hours", -5) == "5 hodinami"
-        assert self.locale._format_timeframe("hours", -2) == "2 hodinami"
-        assert self.locale._format_timeframe("hour", -1) == "hodinou"
-        assert self.locale._format_timeframe("hour", 0) == "0 hodín"
-        assert self.locale._format_timeframe("hour", 1) == "hodinu"
-        assert self.locale._format_timeframe("hours", 2) == "2 hodiny"
-        assert self.locale._format_timeframe("hours", 5) == "5 hodín"
-
-        assert self.locale._format_timeframe("days", -5) == "5 dňami"
-        assert self.locale._format_timeframe("days", -2) == "2 dňami"
-        assert self.locale._format_timeframe("day", -1) == "dňom"
-        assert self.locale._format_timeframe("day", 0) == "0 dní"
-        assert self.locale._format_timeframe("day", 1) == "deň"
-        assert self.locale._format_timeframe("days", 2) == "2 dni"
-        assert self.locale._format_timeframe("days", 5) == "5 dní"
-
-        assert self.locale._format_timeframe("weeks", -5) == "5 týždňami"
-        assert self.locale._format_timeframe("weeks", -2) == "2 týždňami"
-        assert self.locale._format_timeframe("week", -1) == "týždňom"
-        assert self.locale._format_timeframe("week", 0) == "0 týždňov"
-        assert self.locale._format_timeframe("week", 1) == "týždeň"
-        assert self.locale._format_timeframe("weeks", 2) == "2 týždne"
-        assert self.locale._format_timeframe("weeks", 5) == "5 týždňov"
-
-        assert self.locale._format_timeframe("months", -5) == "5 mesiacmi"
-        assert self.locale._format_timeframe("months", -2) == "2 mesiacmi"
-        assert self.locale._format_timeframe("month", -1) == "mesiacom"
-        assert self.locale._format_timeframe("month", 0) == "0 mesiacov"
-        assert self.locale._format_timeframe("month", 1) == "mesiac"
-        assert self.locale._format_timeframe("months", 2) == "2 mesiace"
-        assert self.locale._format_timeframe("months", 5) == "5 mesiacov"
-
-        assert self.locale._format_timeframe("years", -5) == "5 rokmi"
-        assert self.locale._format_timeframe("years", -2) == "2 rokmi"
-        assert self.locale._format_timeframe("year", -1) == "rokom"
-        assert self.locale._format_timeframe("year", 0) == "0 rokov"
-        assert self.locale._format_timeframe("year", 1) == "rok"
-        assert self.locale._format_timeframe("years", 2) == "2 roky"
-        assert self.locale._format_timeframe("years", 5) == "5 rokov"
-
-        assert self.locale._format_timeframe("now", 0) == "Teraz"
-
-    def test_format_relative_now(self):
-
-        result = self.locale._format_relative("Teraz", "now", 0)
-        assert result == "Teraz"
-
-    def test_format_relative_future(self):
-
-        result = self.locale._format_relative("hodinu", "hour", 1)
-        assert result == "O hodinu"
-
-    def test_format_relative_past(self):
-
-        result = self.locale._format_relative("hodinou", "hour", -1)
-        assert result == "Pred hodinou"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestBulgarianLocale:
-    def test_plurals2(self):
-        assert self.locale._format_timeframe("hours", 0) == "0 часа"
-        assert self.locale._format_timeframe("hours", 1) == "1 час"
-        assert self.locale._format_timeframe("hours", 2) == "2 часа"
-        assert self.locale._format_timeframe("hours", 4) == "4 часа"
-        assert self.locale._format_timeframe("hours", 5) == "5 часа"
-        assert self.locale._format_timeframe("hours", 21) == "21 час"
-        assert self.locale._format_timeframe("hours", 22) == "22 часа"
-        assert self.locale._format_timeframe("hours", 25) == "25 часа"
-
-        # feminine grammatical gender should be tested separately
-        assert self.locale._format_timeframe("minutes", 0) == "0 минути"
-        assert self.locale._format_timeframe("minutes", 1) == "1 минута"
-        assert self.locale._format_timeframe("minutes", 2) == "2 минути"
-        assert self.locale._format_timeframe("minutes", 4) == "4 минути"
-        assert self.locale._format_timeframe("minutes", 5) == "5 минути"
-        assert self.locale._format_timeframe("minutes", 21) == "21 минута"
-        assert self.locale._format_timeframe("minutes", 22) == "22 минути"
-        assert self.locale._format_timeframe("minutes", 25) == "25 минути"
-
-
-@pytest.mark.usefixtures("lang_locale")
-class TestMacedonianLocale:
-    def test_singles_mk(self):
-        assert self.locale._format_timeframe("second", 1) == "една секунда"
-        assert self.locale._format_timeframe("minute", 1) == "една минута"
-        assert self.locale._format_timeframe("hour", 1) == "еден саат"
-        assert self.locale._format_timeframe("day", 1) == "еден ден"
-        assert self.locale._format_timeframe("week", 1) == "една недела"
-        assert self.locale._format_timeframe("month", 1) == "еден месец"
-        assert self.locale._format_timeframe("year", 1) == "една година"
-
-    def test_meridians_mk(self):
-        assert self.locale.meridian(7, "A") == "претпладне"
-        assert self.locale.meridian(18, "A") == "попладне"
-        assert self.locale.meridian(10, "a") == "дп"
-        assert self.locale.meridian(22, "a") == "пп"
-
-    def test_describe_mk(self):
-        assert self.locale.describe("second", only_distance=True) == "една секунда"
-        assert self.locale.describe("second", only_distance=False) == "за една секунда"
-        assert self.locale.describe("minute", only_distance=True) == "една минута"
-        assert self.locale.describe("minute", only_distance=False) == "за една минута"
-        assert self.locale.describe("hour", only_distance=True) == "еден саат"
-        assert self.locale.describe("hour", only_distance=False) == "за еден саат"
-        assert self.locale.describe("day", only_distance=True) == "еден ден"
-        assert self.locale.describe("day", only_distance=False) == "за еден ден"
-        assert self.locale.describe("week", only_distance=True) == "една недела"
-        assert self.locale.describe("week", only_distance=False) == "за една недела"
-        assert self.locale.describe("month", only_distance=True) == "еден месец"
-        assert self.locale.describe("month", only_distance=False) == "за еден месец"
- assert self.locale.describe("year", only_distance=True) == "една година" - assert self.locale.describe("year", only_distance=False) == "за една година" - - def test_relative_mk(self): - # time - assert self.locale._format_relative("сега", "now", 0) == "сега" - assert self.locale._format_relative("1 секунда", "seconds", 1) == "за 1 секунда" - assert self.locale._format_relative("1 минута", "minutes", 1) == "за 1 минута" - assert self.locale._format_relative("1 саат", "hours", 1) == "за 1 саат" - assert self.locale._format_relative("1 ден", "days", 1) == "за 1 ден" - assert self.locale._format_relative("1 недела", "weeks", 1) == "за 1 недела" - assert self.locale._format_relative("1 месец", "months", 1) == "за 1 месец" - assert self.locale._format_relative("1 година", "years", 1) == "за 1 година" - assert ( - self.locale._format_relative("1 секунда", "seconds", -1) == "пред 1 секунда" - ) - assert ( - self.locale._format_relative("1 минута", "minutes", -1) == "пред 1 минута" - ) - assert self.locale._format_relative("1 саат", "hours", -1) == "пред 1 саат" - assert self.locale._format_relative("1 ден", "days", -1) == "пред 1 ден" - assert self.locale._format_relative("1 недела", "weeks", -1) == "пред 1 недела" - assert self.locale._format_relative("1 месец", "months", -1) == "пред 1 месец" - assert self.locale._format_relative("1 година", "years", -1) == "пред 1 година" - - def test_plurals_mk(self): - # Seconds - assert self.locale._format_timeframe("seconds", 0) == "0 секунди" - assert self.locale._format_timeframe("seconds", 1) == "1 секунда" - assert self.locale._format_timeframe("seconds", 2) == "2 секунди" - assert self.locale._format_timeframe("seconds", 4) == "4 секунди" - assert self.locale._format_timeframe("seconds", 5) == "5 секунди" - assert self.locale._format_timeframe("seconds", 21) == "21 секунда" - assert self.locale._format_timeframe("seconds", 22) == "22 секунди" - assert self.locale._format_timeframe("seconds", 25) == "25 секунди" - - # Minutes - assert self.locale._format_timeframe("minutes", 0) == "0 минути" - assert self.locale._format_timeframe("minutes", 1) == "1 минута" - assert self.locale._format_timeframe("minutes", 2) == "2 минути" - assert self.locale._format_timeframe("minutes", 4) == "4 минути" - assert self.locale._format_timeframe("minutes", 5) == "5 минути" - assert self.locale._format_timeframe("minutes", 21) == "21 минута" - assert self.locale._format_timeframe("minutes", 22) == "22 минути" - assert self.locale._format_timeframe("minutes", 25) == "25 минути" - - # Hours - assert self.locale._format_timeframe("hours", 0) == "0 саати" - assert self.locale._format_timeframe("hours", 1) == "1 саат" - assert self.locale._format_timeframe("hours", 2) == "2 саати" - assert self.locale._format_timeframe("hours", 4) == "4 саати" - assert self.locale._format_timeframe("hours", 5) == "5 саати" - assert self.locale._format_timeframe("hours", 21) == "21 саат" - assert self.locale._format_timeframe("hours", 22) == "22 саати" - assert self.locale._format_timeframe("hours", 25) == "25 саати" - - # Days - assert self.locale._format_timeframe("days", 0) == "0 дена" - assert self.locale._format_timeframe("days", 1) == "1 ден" - assert self.locale._format_timeframe("days", 2) == "2 дена" - assert self.locale._format_timeframe("days", 3) == "3 дена" - assert self.locale._format_timeframe("days", 21) == "21 ден" - - # Weeks - assert self.locale._format_timeframe("weeks", 0) == "0 недели" - assert self.locale._format_timeframe("weeks", 1) == "1 недела" - assert 
self.locale._format_timeframe("weeks", 2) == "2 недели" - assert self.locale._format_timeframe("weeks", 4) == "4 недели" - assert self.locale._format_timeframe("weeks", 5) == "5 недели" - assert self.locale._format_timeframe("weeks", 21) == "21 недела" - assert self.locale._format_timeframe("weeks", 22) == "22 недели" - assert self.locale._format_timeframe("weeks", 25) == "25 недели" - - # Months - assert self.locale._format_timeframe("months", 0) == "0 месеци" - assert self.locale._format_timeframe("months", 1) == "1 месец" - assert self.locale._format_timeframe("months", 2) == "2 месеци" - assert self.locale._format_timeframe("months", 4) == "4 месеци" - assert self.locale._format_timeframe("months", 5) == "5 месеци" - assert self.locale._format_timeframe("months", 21) == "21 месец" - assert self.locale._format_timeframe("months", 22) == "22 месеци" - assert self.locale._format_timeframe("months", 25) == "25 месеци" - - # Years - assert self.locale._format_timeframe("years", 1) == "1 година" - assert self.locale._format_timeframe("years", 2) == "2 години" - assert self.locale._format_timeframe("years", 5) == "5 години" - - def test_multi_describe_mk(self): - describe = self.locale.describe_multi - - fulltest = [("years", 5), ("weeks", 1), ("hours", 1), ("minutes", 6)] - assert describe(fulltest) == "за 5 години 1 недела 1 саат 6 минути" - seconds4000_0days = [("days", 0), ("hours", 1), ("minutes", 6)] - assert describe(seconds4000_0days) == "за 0 дена 1 саат 6 минути" - seconds4000 = [("hours", 1), ("minutes", 6)] - assert describe(seconds4000) == "за 1 саат 6 минути" - assert describe(seconds4000, only_distance=True) == "1 саат 6 минути" - seconds3700 = [("hours", 1), ("minutes", 1)] - assert describe(seconds3700) == "за 1 саат 1 минута" - seconds300_0hours = [("hours", 0), ("minutes", 5)] - assert describe(seconds300_0hours) == "за 0 саати 5 минути" - seconds300 = [("minutes", 5)] - assert describe(seconds300) == "за 5 минути" - seconds60 = [("minutes", 1)] - assert describe(seconds60) == "за 1 минута" - assert describe(seconds60, only_distance=True) == "1 минута" - seconds60 = [("seconds", 1)] - assert describe(seconds60) == "за 1 секунда" - assert describe(seconds60, only_distance=True) == "1 секунда" - - -@pytest.mark.usefixtures("time_2013_01_01") -@pytest.mark.usefixtures("lang_locale") -class TestHebrewLocale: - def test_couple_of_timeframe(self): - assert self.locale._format_timeframe("days", 1) == "יום" - assert self.locale._format_timeframe("days", 2) == "יומיים" - assert self.locale._format_timeframe("days", 3) == "3 ימים" - - assert self.locale._format_timeframe("hours", 1) == "שעה" - assert self.locale._format_timeframe("hours", 2) == "שעתיים" - assert self.locale._format_timeframe("hours", 3) == "3 שעות" - - assert self.locale._format_timeframe("week", 1) == "שבוע" - assert self.locale._format_timeframe("weeks", 2) == "שבועיים" - assert self.locale._format_timeframe("weeks", 3) == "3 שבועות" - - assert self.locale._format_timeframe("months", 1) == "חודש" - assert self.locale._format_timeframe("months", 2) == "חודשיים" - assert self.locale._format_timeframe("months", 4) == "4 חודשים" - - assert self.locale._format_timeframe("years", 1) == "שנה" - assert self.locale._format_timeframe("years", 2) == "שנתיים" - assert self.locale._format_timeframe("years", 5) == "5 שנים" - - def test_describe_multi(self): - describe = self.locale.describe_multi - - fulltest = [("years", 5), ("weeks", 1), ("hours", 1), ("minutes", 6)] - assert describe(fulltest) == "בעוד 5 שנים, שבוע, שעה ו־6 
דקות" - seconds4000_0days = [("days", 0), ("hours", 1), ("minutes", 6)] - assert describe(seconds4000_0days) == "בעוד 0 ימים, שעה ו־6 דקות" - seconds4000 = [("hours", 1), ("minutes", 6)] - assert describe(seconds4000) == "בעוד שעה ו־6 דקות" - assert describe(seconds4000, only_distance=True) == "שעה ו־6 דקות" - seconds3700 = [("hours", 1), ("minutes", 1)] - assert describe(seconds3700) == "בעוד שעה ודקה" - seconds300_0hours = [("hours", 0), ("minutes", 5)] - assert describe(seconds300_0hours) == "בעוד 0 שעות ו־5 דקות" - seconds300 = [("minutes", 5)] - assert describe(seconds300) == "בעוד 5 דקות" - seconds60 = [("minutes", 1)] - assert describe(seconds60) == "בעוד דקה" - assert describe(seconds60, only_distance=True) == "דקה" - - -@pytest.mark.usefixtures("lang_locale") -class TestMarathiLocale: - def test_dateCoreFunctionality(self): - dt = arrow.Arrow(2015, 4, 11, 17, 30, 00) - assert self.locale.month_name(dt.month) == "एप्रिल" - assert self.locale.month_abbreviation(dt.month) == "एप्रि" - assert self.locale.day_name(dt.isoweekday()) == "शनिवार" - assert self.locale.day_abbreviation(dt.isoweekday()) == "शनि" - - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 2) == "2 तास" - assert self.locale._format_timeframe("hour", 0) == "एक तास" - - def test_format_relative_now(self): - result = self.locale._format_relative("सद्य", "now", 0) - assert result == "सद्य" - - def test_format_relative_past(self): - result = self.locale._format_relative("एक तास", "hour", 1) - assert result == "एक तास नंतर" - - def test_format_relative_future(self): - result = self.locale._format_relative("एक तास", "hour", -1) - assert result == "एक तास आधी" - - # Not currently implemented - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1" - - -@pytest.mark.usefixtures("lang_locale") -class TestFinnishLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 2) == ("2 tuntia", "2 tunnin") - assert self.locale._format_timeframe("hour", 0) == ("tunti", "tunnin") - - def test_format_relative_now(self): - result = self.locale._format_relative(["juuri nyt", "juuri nyt"], "now", 0) - assert result == "juuri nyt" - - def test_format_relative_past(self): - result = self.locale._format_relative(["tunti", "tunnin"], "hour", 1) - assert result == "tunnin kuluttua" - - def test_format_relative_future(self): - result = self.locale._format_relative(["tunti", "tunnin"], "hour", -1) - assert result == "tunti sitten" - - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1." - - -@pytest.mark.usefixtures("lang_locale") -class TestGermanLocale: - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1." 
- - def test_define(self): - assert self.locale.describe("minute", only_distance=True) == "eine Minute" - assert self.locale.describe("minute", only_distance=False) == "in einer Minute" - assert self.locale.describe("hour", only_distance=True) == "eine Stunde" - assert self.locale.describe("hour", only_distance=False) == "in einer Stunde" - assert self.locale.describe("day", only_distance=True) == "ein Tag" - assert self.locale.describe("day", only_distance=False) == "in einem Tag" - assert self.locale.describe("week", only_distance=True) == "eine Woche" - assert self.locale.describe("week", only_distance=False) == "in einer Woche" - assert self.locale.describe("month", only_distance=True) == "ein Monat" - assert self.locale.describe("month", only_distance=False) == "in einem Monat" - assert self.locale.describe("year", only_distance=True) == "ein Jahr" - assert self.locale.describe("year", only_distance=False) == "in einem Jahr" - - def test_weekday(self): - dt = arrow.Arrow(2015, 4, 11, 17, 30, 00) - assert self.locale.day_name(dt.isoweekday()) == "Samstag" - assert self.locale.day_abbreviation(dt.isoweekday()) == "Sa" - - -@pytest.mark.usefixtures("lang_locale") -class TestHungarianLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 2) == "2 óra" - assert self.locale._format_timeframe("hour", 0) == "egy órával" - assert self.locale._format_timeframe("hours", -2) == "2 órával" - assert self.locale._format_timeframe("now", 0) == "éppen most" - - -@pytest.mark.usefixtures("lang_locale") -class TestEsperantoLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 2) == "2 horoj" - assert self.locale._format_timeframe("hour", 0) == "un horo" - assert self.locale._format_timeframe("hours", -2) == "2 horoj" - assert self.locale._format_timeframe("now", 0) == "nun" - - def test_ordinal_number(self): - assert self.locale.ordinal_number(1) == "1a" - - -@pytest.mark.usefixtures("lang_locale") -class TestThaiLocale: - def test_year_full(self): - assert self.locale.year_full(2015) == "2558" - - def test_year_abbreviation(self): - assert self.locale.year_abbreviation(2015) == "58" - - def test_format_relative_now(self): - result = self.locale._format_relative("ขณะนี้", "now", 0) - assert result == "ขณะนี้" - - def test_format_relative_past(self): - result = self.locale._format_relative("1 ชั่วโมง", "hour", 1) - assert result == "ในอีก 1 ชั่วโมง" - result = self.locale._format_relative("{0} ชั่วโมง", "hours", 2) - assert result == "ในอีก {0} ชั่วโมง" - result = self.locale._format_relative("ไม่กี่วินาที", "seconds", 42) - assert result == "ในอีกไม่กี่วินาที" - - def test_format_relative_future(self): - result = self.locale._format_relative("1 ชั่วโมง", "hour", -1) - assert result == "1 ชั่วโมง ที่ผ่านมา" - - -@pytest.mark.usefixtures("lang_locale") -class TestBengaliLocale: - def test_ordinal_number(self): - assert self.locale._ordinal_number(0) == "0তম" - assert self.locale._ordinal_number(1) == "1ম" - assert self.locale._ordinal_number(3) == "3য়" - assert self.locale._ordinal_number(4) == "4র্থ" - assert self.locale._ordinal_number(5) == "5ম" - assert self.locale._ordinal_number(6) == "6ষ্ঠ" - assert self.locale._ordinal_number(10) == "10ম" - assert self.locale._ordinal_number(11) == "11তম" - assert self.locale._ordinal_number(42) == "42তম" - assert self.locale._ordinal_number(-1) is None - - -@pytest.mark.usefixtures("lang_locale") -class TestRomanianLocale: - def test_timeframes(self): - - assert 
self.locale._format_timeframe("hours", 2) == "2 ore" - assert self.locale._format_timeframe("months", 2) == "2 luni" - - assert self.locale._format_timeframe("days", 2) == "2 zile" - assert self.locale._format_timeframe("years", 2) == "2 ani" - - assert self.locale._format_timeframe("hours", 3) == "3 ore" - assert self.locale._format_timeframe("months", 4) == "4 luni" - assert self.locale._format_timeframe("days", 3) == "3 zile" - assert self.locale._format_timeframe("years", 5) == "5 ani" - - def test_relative_timeframes(self): - assert self.locale._format_relative("acum", "now", 0) == "acum" - assert self.locale._format_relative("o oră", "hour", 1) == "peste o oră" - assert self.locale._format_relative("o oră", "hour", -1) == "o oră în urmă" - assert self.locale._format_relative("un minut", "minute", 1) == "peste un minut" - assert ( - self.locale._format_relative("un minut", "minute", -1) == "un minut în urmă" - ) - assert ( - self.locale._format_relative("câteva secunde", "seconds", -1) - == "câteva secunde în urmă" - ) - assert ( - self.locale._format_relative("câteva secunde", "seconds", 1) - == "peste câteva secunde" - ) - assert self.locale._format_relative("o zi", "day", -1) == "o zi în urmă" - assert self.locale._format_relative("o zi", "day", 1) == "peste o zi" - - -@pytest.mark.usefixtures("lang_locale") -class TestArabicLocale: - def test_timeframes(self): - - # single - assert self.locale._format_timeframe("minute", 1) == "دقيقة" - assert self.locale._format_timeframe("hour", 1) == "ساعة" - assert self.locale._format_timeframe("day", 1) == "يوم" - assert self.locale._format_timeframe("month", 1) == "شهر" - assert self.locale._format_timeframe("year", 1) == "سنة" - - # double - assert self.locale._format_timeframe("minutes", 2) == "دقيقتين" - assert self.locale._format_timeframe("hours", 2) == "ساعتين" - assert self.locale._format_timeframe("days", 2) == "يومين" - assert self.locale._format_timeframe("months", 2) == "شهرين" - assert self.locale._format_timeframe("years", 2) == "سنتين" - - # up to ten - assert self.locale._format_timeframe("minutes", 3) == "3 دقائق" - assert self.locale._format_timeframe("hours", 4) == "4 ساعات" - assert self.locale._format_timeframe("days", 5) == "5 أيام" - assert self.locale._format_timeframe("months", 6) == "6 أشهر" - assert self.locale._format_timeframe("years", 10) == "10 سنوات" - - # more than ten - assert self.locale._format_timeframe("minutes", 11) == "11 دقيقة" - assert self.locale._format_timeframe("hours", 19) == "19 ساعة" - assert self.locale._format_timeframe("months", 24) == "24 شهر" - assert self.locale._format_timeframe("days", 50) == "50 يوم" - assert self.locale._format_timeframe("years", 115) == "115 سنة" - - -@pytest.mark.usefixtures("lang_locale") -class TestNepaliLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("hours", 3) == "3 घण्टा" - assert self.locale._format_timeframe("hour", 0) == "एक घण्टा" - - def test_format_relative_now(self): - result = self.locale._format_relative("अहिले", "now", 0) - assert result == "अहिले" - - def test_format_relative_future(self): - result = self.locale._format_relative("एक घण्टा", "hour", 1) - assert result == "एक घण्टा पछी" - - def test_format_relative_past(self): - result = self.locale._format_relative("एक घण्टा", "hour", -1) - assert result == "एक घण्टा पहिले" - - -@pytest.mark.usefixtures("lang_locale") -class TestIndonesianLocale: - def test_timeframes(self): - assert self.locale._format_timeframe("hours", 2) == "2 jam" - assert 
self.locale._format_timeframe("months", 2) == "2 bulan" - - assert self.locale._format_timeframe("days", 2) == "2 hari" - assert self.locale._format_timeframe("years", 2) == "2 tahun" - - assert self.locale._format_timeframe("hours", 3) == "3 jam" - assert self.locale._format_timeframe("months", 4) == "4 bulan" - assert self.locale._format_timeframe("days", 3) == "3 hari" - assert self.locale._format_timeframe("years", 5) == "5 tahun" - - def test_format_relative_now(self): - assert self.locale._format_relative("baru saja", "now", 0) == "baru saja" - - def test_format_relative_past(self): - assert self.locale._format_relative("1 jam", "hour", 1) == "dalam 1 jam" - assert self.locale._format_relative("1 detik", "seconds", 1) == "dalam 1 detik" - - def test_format_relative_future(self): - assert self.locale._format_relative("1 jam", "hour", -1) == "1 jam yang lalu" - - -@pytest.mark.usefixtures("lang_locale") -class TestTagalogLocale: - def test_singles_tl(self): - assert self.locale._format_timeframe("second", 1) == "isang segundo" - assert self.locale._format_timeframe("minute", 1) == "isang minuto" - assert self.locale._format_timeframe("hour", 1) == "isang oras" - assert self.locale._format_timeframe("day", 1) == "isang araw" - assert self.locale._format_timeframe("week", 1) == "isang linggo" - assert self.locale._format_timeframe("month", 1) == "isang buwan" - assert self.locale._format_timeframe("year", 1) == "isang taon" - - def test_meridians_tl(self): - assert self.locale.meridian(7, "A") == "ng umaga" - assert self.locale.meridian(18, "A") == "ng hapon" - assert self.locale.meridian(10, "a") == "nu" - assert self.locale.meridian(22, "a") == "nh" - - def test_describe_tl(self): - assert self.locale.describe("second", only_distance=True) == "isang segundo" - assert ( - self.locale.describe("second", only_distance=False) - == "isang segundo mula ngayon" - ) - assert self.locale.describe("minute", only_distance=True) == "isang minuto" - assert ( - self.locale.describe("minute", only_distance=False) - == "isang minuto mula ngayon" - ) - assert self.locale.describe("hour", only_distance=True) == "isang oras" - assert ( - self.locale.describe("hour", only_distance=False) - == "isang oras mula ngayon" - ) - assert self.locale.describe("day", only_distance=True) == "isang araw" - assert ( - self.locale.describe("day", only_distance=False) == "isang araw mula ngayon" - ) - assert self.locale.describe("week", only_distance=True) == "isang linggo" - assert ( - self.locale.describe("week", only_distance=False) - == "isang linggo mula ngayon" - ) - assert self.locale.describe("month", only_distance=True) == "isang buwan" - assert ( - self.locale.describe("month", only_distance=False) - == "isang buwan mula ngayon" - ) - assert self.locale.describe("year", only_distance=True) == "isang taon" - assert ( - self.locale.describe("year", only_distance=False) - == "isang taon mula ngayon" - ) - - def test_relative_tl(self): - # time - assert self.locale._format_relative("ngayon", "now", 0) == "ngayon" - assert ( - self.locale._format_relative("1 segundo", "seconds", 1) - == "1 segundo mula ngayon" - ) - assert ( - self.locale._format_relative("1 minuto", "minutes", 1) - == "1 minuto mula ngayon" - ) - assert ( - self.locale._format_relative("1 oras", "hours", 1) == "1 oras mula ngayon" - ) - assert self.locale._format_relative("1 araw", "days", 1) == "1 araw mula ngayon" - assert ( - self.locale._format_relative("1 linggo", "weeks", 1) - == "1 linggo mula ngayon" - ) - assert ( - 
self.locale._format_relative("1 buwan", "months", 1) - == "1 buwan mula ngayon" - ) - assert ( - self.locale._format_relative("1 taon", "years", 1) == "1 taon mula ngayon" - ) - assert ( - self.locale._format_relative("1 segundo", "seconds", -1) - == "nakaraang 1 segundo" - ) - assert ( - self.locale._format_relative("1 minuto", "minutes", -1) - == "nakaraang 1 minuto" - ) - assert self.locale._format_relative("1 oras", "hours", -1) == "nakaraang 1 oras" - assert self.locale._format_relative("1 araw", "days", -1) == "nakaraang 1 araw" - assert ( - self.locale._format_relative("1 linggo", "weeks", -1) - == "nakaraang 1 linggo" - ) - assert ( - self.locale._format_relative("1 buwan", "months", -1) == "nakaraang 1 buwan" - ) - assert self.locale._format_relative("1 taon", "years", -1) == "nakaraang 1 taon" - - def test_plurals_tl(self): - # Seconds - assert self.locale._format_timeframe("seconds", 0) == "0 segundo" - assert self.locale._format_timeframe("seconds", 1) == "1 segundo" - assert self.locale._format_timeframe("seconds", 2) == "2 segundo" - assert self.locale._format_timeframe("seconds", 4) == "4 segundo" - assert self.locale._format_timeframe("seconds", 5) == "5 segundo" - assert self.locale._format_timeframe("seconds", 21) == "21 segundo" - assert self.locale._format_timeframe("seconds", 22) == "22 segundo" - assert self.locale._format_timeframe("seconds", 25) == "25 segundo" - - # Minutes - assert self.locale._format_timeframe("minutes", 0) == "0 minuto" - assert self.locale._format_timeframe("minutes", 1) == "1 minuto" - assert self.locale._format_timeframe("minutes", 2) == "2 minuto" - assert self.locale._format_timeframe("minutes", 4) == "4 minuto" - assert self.locale._format_timeframe("minutes", 5) == "5 minuto" - assert self.locale._format_timeframe("minutes", 21) == "21 minuto" - assert self.locale._format_timeframe("minutes", 22) == "22 minuto" - assert self.locale._format_timeframe("minutes", 25) == "25 minuto" - - # Hours - assert self.locale._format_timeframe("hours", 0) == "0 oras" - assert self.locale._format_timeframe("hours", 1) == "1 oras" - assert self.locale._format_timeframe("hours", 2) == "2 oras" - assert self.locale._format_timeframe("hours", 4) == "4 oras" - assert self.locale._format_timeframe("hours", 5) == "5 oras" - assert self.locale._format_timeframe("hours", 21) == "21 oras" - assert self.locale._format_timeframe("hours", 22) == "22 oras" - assert self.locale._format_timeframe("hours", 25) == "25 oras" - - # Days - assert self.locale._format_timeframe("days", 0) == "0 araw" - assert self.locale._format_timeframe("days", 1) == "1 araw" - assert self.locale._format_timeframe("days", 2) == "2 araw" - assert self.locale._format_timeframe("days", 3) == "3 araw" - assert self.locale._format_timeframe("days", 21) == "21 araw" - - # Weeks - assert self.locale._format_timeframe("weeks", 0) == "0 linggo" - assert self.locale._format_timeframe("weeks", 1) == "1 linggo" - assert self.locale._format_timeframe("weeks", 2) == "2 linggo" - assert self.locale._format_timeframe("weeks", 4) == "4 linggo" - assert self.locale._format_timeframe("weeks", 5) == "5 linggo" - assert self.locale._format_timeframe("weeks", 21) == "21 linggo" - assert self.locale._format_timeframe("weeks", 22) == "22 linggo" - assert self.locale._format_timeframe("weeks", 25) == "25 linggo" - - # Months - assert self.locale._format_timeframe("months", 0) == "0 buwan" - assert self.locale._format_timeframe("months", 1) == "1 buwan" - assert self.locale._format_timeframe("months", 2) == "2 buwan" 
- assert self.locale._format_timeframe("months", 4) == "4 buwan" - assert self.locale._format_timeframe("months", 5) == "5 buwan" - assert self.locale._format_timeframe("months", 21) == "21 buwan" - assert self.locale._format_timeframe("months", 22) == "22 buwan" - assert self.locale._format_timeframe("months", 25) == "25 buwan" - - # Years - assert self.locale._format_timeframe("years", 1) == "1 taon" - assert self.locale._format_timeframe("years", 2) == "2 taon" - assert self.locale._format_timeframe("years", 5) == "5 taon" - - def test_multi_describe_tl(self): - describe = self.locale.describe_multi - - fulltest = [("years", 5), ("weeks", 1), ("hours", 1), ("minutes", 6)] - assert describe(fulltest) == "5 taon 1 linggo 1 oras 6 minuto mula ngayon" - seconds4000_0days = [("days", 0), ("hours", 1), ("minutes", 6)] - assert describe(seconds4000_0days) == "0 araw 1 oras 6 minuto mula ngayon" - seconds4000 = [("hours", 1), ("minutes", 6)] - assert describe(seconds4000) == "1 oras 6 minuto mula ngayon" - assert describe(seconds4000, only_distance=True) == "1 oras 6 minuto" - seconds3700 = [("hours", 1), ("minutes", 1)] - assert describe(seconds3700) == "1 oras 1 minuto mula ngayon" - seconds300_0hours = [("hours", 0), ("minutes", 5)] - assert describe(seconds300_0hours) == "0 oras 5 minuto mula ngayon" - seconds300 = [("minutes", 5)] - assert describe(seconds300) == "5 minuto mula ngayon" - seconds60 = [("minutes", 1)] - assert describe(seconds60) == "1 minuto mula ngayon" - assert describe(seconds60, only_distance=True) == "1 minuto" - seconds60 = [("seconds", 1)] - assert describe(seconds60) == "1 segundo mula ngayon" - assert describe(seconds60, only_distance=True) == "1 segundo" - - def test_ordinal_number_tl(self): - assert self.locale.ordinal_number(0) == "ika-0" - assert self.locale.ordinal_number(1) == "ika-1" - assert self.locale.ordinal_number(2) == "ika-2" - assert self.locale.ordinal_number(3) == "ika-3" - assert self.locale.ordinal_number(10) == "ika-10" - assert self.locale.ordinal_number(23) == "ika-23" - assert self.locale.ordinal_number(100) == "ika-100" - assert self.locale.ordinal_number(103) == "ika-103" - assert self.locale.ordinal_number(114) == "ika-114" - - -@pytest.mark.usefixtures("lang_locale") -class TestEstonianLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "just nüüd" - assert self.locale._format_timeframe("second", 1) == "ühe sekundi" - assert self.locale._format_timeframe("seconds", 3) == "3 sekundi" - assert self.locale._format_timeframe("seconds", 30) == "30 sekundi" - assert self.locale._format_timeframe("minute", 1) == "ühe minuti" - assert self.locale._format_timeframe("minutes", 4) == "4 minuti" - assert self.locale._format_timeframe("minutes", 40) == "40 minuti" - assert self.locale._format_timeframe("hour", 1) == "tunni aja" - assert self.locale._format_timeframe("hours", 5) == "5 tunni" - assert self.locale._format_timeframe("hours", 23) == "23 tunni" - assert self.locale._format_timeframe("day", 1) == "ühe päeva" - assert self.locale._format_timeframe("days", 6) == "6 päeva" - assert self.locale._format_timeframe("days", 12) == "12 päeva" - assert self.locale._format_timeframe("month", 1) == "ühe kuu" - assert self.locale._format_timeframe("months", 7) == "7 kuu" - assert self.locale._format_timeframe("months", 11) == "11 kuu" - assert self.locale._format_timeframe("year", 1) == "ühe aasta" - assert self.locale._format_timeframe("years", 8) == "8 aasta" - assert self.locale._format_timeframe("years", 12) 
== "12 aasta" - - assert self.locale._format_timeframe("now", 0) == "just nüüd" - assert self.locale._format_timeframe("second", -1) == "üks sekund" - assert self.locale._format_timeframe("seconds", -9) == "9 sekundit" - assert self.locale._format_timeframe("seconds", -12) == "12 sekundit" - assert self.locale._format_timeframe("minute", -1) == "üks minut" - assert self.locale._format_timeframe("minutes", -2) == "2 minutit" - assert self.locale._format_timeframe("minutes", -10) == "10 minutit" - assert self.locale._format_timeframe("hour", -1) == "tund aega" - assert self.locale._format_timeframe("hours", -3) == "3 tundi" - assert self.locale._format_timeframe("hours", -11) == "11 tundi" - assert self.locale._format_timeframe("day", -1) == "üks päev" - assert self.locale._format_timeframe("days", -2) == "2 päeva" - assert self.locale._format_timeframe("days", -12) == "12 päeva" - assert self.locale._format_timeframe("month", -1) == "üks kuu" - assert self.locale._format_timeframe("months", -3) == "3 kuud" - assert self.locale._format_timeframe("months", -13) == "13 kuud" - assert self.locale._format_timeframe("year", -1) == "üks aasta" - assert self.locale._format_timeframe("years", -4) == "4 aastat" - assert self.locale._format_timeframe("years", -14) == "14 aastat" - - -@pytest.mark.usefixtures("lang_locale") -class TestPortugueseLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "agora" - assert self.locale._format_timeframe("second", 1) == "um segundo" - assert self.locale._format_timeframe("seconds", 30) == "30 segundos" - assert self.locale._format_timeframe("minute", 1) == "um minuto" - assert self.locale._format_timeframe("minutes", 40) == "40 minutos" - assert self.locale._format_timeframe("hour", 1) == "uma hora" - assert self.locale._format_timeframe("hours", 23) == "23 horas" - assert self.locale._format_timeframe("day", 1) == "um dia" - assert self.locale._format_timeframe("days", 12) == "12 dias" - assert self.locale._format_timeframe("month", 1) == "um mês" - assert self.locale._format_timeframe("months", 11) == "11 meses" - assert self.locale._format_timeframe("year", 1) == "um ano" - assert self.locale._format_timeframe("years", 12) == "12 anos" - - -@pytest.mark.usefixtures("lang_locale") -class TestBrazilianPortugueseLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "agora" - assert self.locale._format_timeframe("second", 1) == "um segundo" - assert self.locale._format_timeframe("seconds", 30) == "30 segundos" - assert self.locale._format_timeframe("minute", 1) == "um minuto" - assert self.locale._format_timeframe("minutes", 40) == "40 minutos" - assert self.locale._format_timeframe("hour", 1) == "uma hora" - assert self.locale._format_timeframe("hours", 23) == "23 horas" - assert self.locale._format_timeframe("day", 1) == "um dia" - assert self.locale._format_timeframe("days", 12) == "12 dias" - assert self.locale._format_timeframe("month", 1) == "um mês" - assert self.locale._format_timeframe("months", 11) == "11 meses" - assert self.locale._format_timeframe("year", 1) == "um ano" - assert self.locale._format_timeframe("years", 12) == "12 anos" - assert self.locale._format_relative("uma hora", "hour", -1) == "faz uma hora" - - -@pytest.mark.usefixtures("lang_locale") -class TestHongKongLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "剛才" - assert self.locale._format_timeframe("second", 1) == "1秒" - assert 
self.locale._format_timeframe("seconds", 30) == "30秒" - assert self.locale._format_timeframe("minute", 1) == "1分鐘" - assert self.locale._format_timeframe("minutes", 40) == "40分鐘" - assert self.locale._format_timeframe("hour", 1) == "1小時" - assert self.locale._format_timeframe("hours", 23) == "23小時" - assert self.locale._format_timeframe("day", 1) == "1天" - assert self.locale._format_timeframe("days", 12) == "12天" - assert self.locale._format_timeframe("week", 1) == "1星期" - assert self.locale._format_timeframe("weeks", 38) == "38星期" - assert self.locale._format_timeframe("month", 1) == "1個月" - assert self.locale._format_timeframe("months", 11) == "11個月" - assert self.locale._format_timeframe("year", 1) == "1年" - assert self.locale._format_timeframe("years", 12) == "12年" - - -@pytest.mark.usefixtures("lang_locale") -class TestChineseTWLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "剛才" - assert self.locale._format_timeframe("second", 1) == "1秒" - assert self.locale._format_timeframe("seconds", 30) == "30秒" - assert self.locale._format_timeframe("minute", 1) == "1分鐘" - assert self.locale._format_timeframe("minutes", 40) == "40分鐘" - assert self.locale._format_timeframe("hour", 1) == "1小時" - assert self.locale._format_timeframe("hours", 23) == "23小時" - assert self.locale._format_timeframe("day", 1) == "1天" - assert self.locale._format_timeframe("days", 12) == "12天" - assert self.locale._format_timeframe("week", 1) == "1週" - assert self.locale._format_timeframe("weeks", 38) == "38週" - assert self.locale._format_timeframe("month", 1) == "1個月" - assert self.locale._format_timeframe("months", 11) == "11個月" - assert self.locale._format_timeframe("year", 1) == "1年" - assert self.locale._format_timeframe("years", 12) == "12年" - - -@pytest.mark.usefixtures("lang_locale") -class TestSwahiliLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "sasa hivi" - assert self.locale._format_timeframe("second", 1) == "sekunde" - assert self.locale._format_timeframe("seconds", 3) == "sekunde 3" - assert self.locale._format_timeframe("seconds", 30) == "sekunde 30" - assert self.locale._format_timeframe("minute", 1) == "dakika moja" - assert self.locale._format_timeframe("minutes", 4) == "dakika 4" - assert self.locale._format_timeframe("minutes", 40) == "dakika 40" - assert self.locale._format_timeframe("hour", 1) == "saa moja" - assert self.locale._format_timeframe("hours", 5) == "saa 5" - assert self.locale._format_timeframe("hours", 23) == "saa 23" - assert self.locale._format_timeframe("day", 1) == "siku moja" - assert self.locale._format_timeframe("days", 6) == "siku 6" - assert self.locale._format_timeframe("days", 12) == "siku 12" - assert self.locale._format_timeframe("month", 1) == "mwezi moja" - assert self.locale._format_timeframe("months", 7) == "miezi 7" - assert self.locale._format_timeframe("week", 1) == "wiki moja" - assert self.locale._format_timeframe("weeks", 2) == "wiki 2" - assert self.locale._format_timeframe("months", 11) == "miezi 11" - assert self.locale._format_timeframe("year", 1) == "mwaka moja" - assert self.locale._format_timeframe("years", 8) == "miaka 8" - assert self.locale._format_timeframe("years", 12) == "miaka 12" - - def test_format_relative_now(self): - result = self.locale._format_relative("sasa hivi", "now", 0) - assert result == "sasa hivi" - - def test_format_relative_past(self): - result = self.locale._format_relative("saa moja", "hour", 1) - assert result == "muda wa saa moja" - - 
def test_format_relative_future(self): - result = self.locale._format_relative("saa moja", "hour", -1) - assert result == "saa moja iliyopita" - - -@pytest.mark.usefixtures("lang_locale") -class TestKoreanLocale: - def test_format_timeframe(self): - assert self.locale._format_timeframe("now", 0) == "지금" - assert self.locale._format_timeframe("second", 1) == "1초" - assert self.locale._format_timeframe("seconds", 2) == "2초" - assert self.locale._format_timeframe("minute", 1) == "1분" - assert self.locale._format_timeframe("minutes", 2) == "2분" - assert self.locale._format_timeframe("hour", 1) == "한시간" - assert self.locale._format_timeframe("hours", 2) == "2시간" - assert self.locale._format_timeframe("day", 1) == "하루" - assert self.locale._format_timeframe("days", 2) == "2일" - assert self.locale._format_timeframe("week", 1) == "1주" - assert self.locale._format_timeframe("weeks", 2) == "2주" - assert self.locale._format_timeframe("month", 1) == "한달" - assert self.locale._format_timeframe("months", 2) == "2개월" - assert self.locale._format_timeframe("year", 1) == "1년" - assert self.locale._format_timeframe("years", 2) == "2년" - - def test_format_relative(self): - assert self.locale._format_relative("지금", "now", 0) == "지금" - - assert self.locale._format_relative("1초", "second", 1) == "1초 후" - assert self.locale._format_relative("2초", "seconds", 2) == "2초 후" - assert self.locale._format_relative("1분", "minute", 1) == "1분 후" - assert self.locale._format_relative("2분", "minutes", 2) == "2분 후" - assert self.locale._format_relative("한시간", "hour", 1) == "한시간 후" - assert self.locale._format_relative("2시간", "hours", 2) == "2시간 후" - assert self.locale._format_relative("하루", "day", 1) == "내일" - assert self.locale._format_relative("2일", "days", 2) == "모레" - assert self.locale._format_relative("3일", "days", 3) == "글피" - assert self.locale._format_relative("4일", "days", 4) == "그글피" - assert self.locale._format_relative("5일", "days", 5) == "5일 후" - assert self.locale._format_relative("1주", "week", 1) == "1주 후" - assert self.locale._format_relative("2주", "weeks", 2) == "2주 후" - assert self.locale._format_relative("한달", "month", 1) == "한달 후" - assert self.locale._format_relative("2개월", "months", 2) == "2개월 후" - assert self.locale._format_relative("1년", "year", 1) == "내년" - assert self.locale._format_relative("2년", "years", 2) == "내후년" - assert self.locale._format_relative("3년", "years", 3) == "3년 후" - - assert self.locale._format_relative("1초", "second", -1) == "1초 전" - assert self.locale._format_relative("2초", "seconds", -2) == "2초 전" - assert self.locale._format_relative("1분", "minute", -1) == "1분 전" - assert self.locale._format_relative("2분", "minutes", -2) == "2분 전" - assert self.locale._format_relative("한시간", "hour", -1) == "한시간 전" - assert self.locale._format_relative("2시간", "hours", -2) == "2시간 전" - assert self.locale._format_relative("하루", "day", -1) == "어제" - assert self.locale._format_relative("2일", "days", -2) == "그제" - assert self.locale._format_relative("3일", "days", -3) == "그끄제" - assert self.locale._format_relative("4일", "days", -4) == "4일 전" - assert self.locale._format_relative("1주", "week", -1) == "1주 전" - assert self.locale._format_relative("2주", "weeks", -2) == "2주 전" - assert self.locale._format_relative("한달", "month", -1) == "한달 전" - assert self.locale._format_relative("2개월", "months", -2) == "2개월 전" - assert self.locale._format_relative("1년", "year", -1) == "작년" - assert self.locale._format_relative("2년", "years", -2) == "제작년" - assert self.locale._format_relative("3년", "years", -3) == "3년 전" 
-
-    def test_ordinal_number(self):
-        assert self.locale.ordinal_number(0) == "0번째"
-        assert self.locale.ordinal_number(1) == "첫번째"
-        assert self.locale.ordinal_number(2) == "두번째"
-        assert self.locale.ordinal_number(3) == "세번째"
-        assert self.locale.ordinal_number(4) == "네번째"
-        assert self.locale.ordinal_number(5) == "다섯번째"
-        assert self.locale.ordinal_number(6) == "여섯번째"
-        assert self.locale.ordinal_number(7) == "일곱번째"
-        assert self.locale.ordinal_number(8) == "여덟번째"
-        assert self.locale.ordinal_number(9) == "아홉번째"
-        assert self.locale.ordinal_number(10) == "열번째"
-        assert self.locale.ordinal_number(11) == "11번째"
-        assert self.locale.ordinal_number(12) == "12번째"
-        assert self.locale.ordinal_number(100) == "100번째"
diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_parser.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_parser.py
deleted file mode 100644
index 9fb4e68f3c..0000000000
--- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_parser.py
+++ /dev/null
@@ -1,1657 +0,0 @@
-# -*- coding: utf-8 -*-
-from __future__ import unicode_literals
-
-import calendar
-import os
-import time
-from datetime import datetime
-
-import pytest
-from dateutil import tz
-
-import arrow
-from arrow import formatter, parser
-from arrow.constants import MAX_TIMESTAMP_US
-from arrow.parser import DateTimeParser, ParserError, ParserMatchError
-
-from .utils import make_full_tz_list
-
-
-@pytest.mark.usefixtures("dt_parser")
-class TestDateTimeParser:
-    def test_parse_multiformat(self, mocker):
-        mocker.patch(
-            "arrow.parser.DateTimeParser.parse",
-            string="str",
-            fmt="fmt_a",
-            side_effect=parser.ParserError,
-        )
-
-        with pytest.raises(parser.ParserError):
-            self.parser._parse_multiformat("str", ["fmt_a"])
-
-        mock_datetime = mocker.Mock()
-        mocker.patch(
-            "arrow.parser.DateTimeParser.parse",
-            string="str",
-            fmt="fmt_b",
-            return_value=mock_datetime,
-        )
-
-        result = self.parser._parse_multiformat("str", ["fmt_a", "fmt_b"])
-        assert result == mock_datetime
-
-    def test_parse_multiformat_all_fail(self, mocker):
-        mocker.patch(
-            "arrow.parser.DateTimeParser.parse",
-            string="str",
-            fmt="fmt_a",
-            side_effect=parser.ParserError,
-        )
-
-        mocker.patch(
-            "arrow.parser.DateTimeParser.parse",
-            string="str",
-            fmt="fmt_b",
-            side_effect=parser.ParserError,
-        )
-
-        with pytest.raises(parser.ParserError):
-            self.parser._parse_multiformat("str", ["fmt_a", "fmt_b"])
-
-    def test_parse_multiformat_unexpected_fail(self, mocker):
-        class UnexpectedError(Exception):
-            pass
-
-        mocker.patch(
-            "arrow.parser.DateTimeParser.parse",
-            string="str",
-            fmt="fmt_a",
-            side_effect=UnexpectedError,
-        )
-
-        with pytest.raises(UnexpectedError):
-            self.parser._parse_multiformat("str", ["fmt_a", "fmt_b"])
-
-    def test_parse_token_nonsense(self):
-        parts = {}
-        self.parser._parse_token("NONSENSE", "1900", parts)
-        assert parts == {}
-
-    def test_parse_token_invalid_meridians(self):
-        parts = {}
-        self.parser._parse_token("A", "a..m", parts)
-        assert parts == {}
-        self.parser._parse_token("a", "p..m", parts)
-        assert parts == {}
-
-    def test_parser_no_caching(self, mocker):
-
-        mocked_parser = mocker.patch(
-            "arrow.parser.DateTimeParser._generate_pattern_re", fmt="fmt_a"
-        )
-        self.parser = parser.DateTimeParser(cache_size=0)
-        for _ in range(100):
-            self.parser._generate_pattern_re("fmt_a")
-        assert mocked_parser.call_count == 100
-
-    def test_parser_1_line_caching(self, mocker):
-        mocked_parser = mocker.patch("arrow.parser.DateTimeParser._generate_pattern_re")
-        self.parser = parser.DateTimeParser(cache_size=1)
-
-        for _ in range(100):
-            self.parser._generate_pattern_re(fmt="fmt_a")
-        assert mocked_parser.call_count == 1
-        assert mocked_parser.call_args_list[0] == mocker.call(fmt="fmt_a")
-
-        for _ in range(100):
-            self.parser._generate_pattern_re(fmt="fmt_b")
-        assert mocked_parser.call_count == 2
-        assert mocked_parser.call_args_list[1] == mocker.call(fmt="fmt_b")
-
-        for _ in range(100):
-            self.parser._generate_pattern_re(fmt="fmt_a")
-        assert mocked_parser.call_count == 3
-        assert mocked_parser.call_args_list[2] == mocker.call(fmt="fmt_a")
-
-    def test_parser_multiple_line_caching(self, mocker):
-        mocked_parser = mocker.patch("arrow.parser.DateTimeParser._generate_pattern_re")
-        self.parser = parser.DateTimeParser(cache_size=2)
-
-        for _ in range(100):
-            self.parser._generate_pattern_re(fmt="fmt_a")
-        assert mocked_parser.call_count == 1
-        assert mocked_parser.call_args_list[0] == mocker.call(fmt="fmt_a")
-
-        for _ in range(100):
-            self.parser._generate_pattern_re(fmt="fmt_b")
-        assert mocked_parser.call_count == 2
-        assert mocked_parser.call_args_list[1] == mocker.call(fmt="fmt_b")
-
-        # fmt_a and fmt_b are in the cache, so no new calls should be made
-        for _ in range(100):
-            self.parser._generate_pattern_re(fmt="fmt_a")
-        for _ in range(100):
-            self.parser._generate_pattern_re(fmt="fmt_b")
-        assert mocked_parser.call_count == 2
-        assert mocked_parser.call_args_list[0] == mocker.call(fmt="fmt_a")
-        assert mocked_parser.call_args_list[1] == mocker.call(fmt="fmt_b")
-
-    def test_YY_and_YYYY_format_list(self):
-
-        assert self.parser.parse("15/01/19", ["DD/MM/YY", "DD/MM/YYYY"]) == datetime(
-            2019, 1, 15
-        )
-
-        # Regression test for issue #580
-        assert self.parser.parse("15/01/2019", ["DD/MM/YY", "DD/MM/YYYY"]) == datetime(
-            2019, 1, 15
-        )
-
-        assert (
-            self.parser.parse(
-                "15/01/2019T04:05:06.789120Z",
-                ["D/M/YYThh:mm:ss.SZ", "D/M/YYYYThh:mm:ss.SZ"],
-            )
-            == datetime(2019, 1, 15, 4, 5, 6, 789120, tzinfo=tz.tzutc())
-        )
-
-    # regression test for issue #447
-    def test_timestamp_format_list(self):
-        # should not match on the "X" token
-        assert (
-            self.parser.parse(
-                "15 Jul 2000",
-                ["MM/DD/YYYY", "YYYY-MM-DD", "X", "DD-MMMM-YYYY", "D MMM YYYY"],
-            )
-            == datetime(2000, 7, 15)
-        )
-
-        with pytest.raises(ParserError):
-            self.parser.parse("15 Jul", "X")
-
-
-@pytest.mark.usefixtures("dt_parser")
-class TestDateTimeParserParse:
-    def test_parse_list(self, mocker):
-
-        mocker.patch(
-            "arrow.parser.DateTimeParser._parse_multiformat",
-            string="str",
-            formats=["fmt_a", "fmt_b"],
-            return_value="result",
-        )
-
-        result = self.parser.parse("str", ["fmt_a", "fmt_b"])
-        assert result == "result"
-
-    def test_parse_unrecognized_token(self, mocker):
-
-        mocker.patch.dict("arrow.parser.DateTimeParser._BASE_INPUT_RE_MAP")
-        del arrow.parser.DateTimeParser._BASE_INPUT_RE_MAP["YYYY"]
-
-        # need to make another local parser to apply patch changes
-        _parser = parser.DateTimeParser()
-        with pytest.raises(parser.ParserError):
-            _parser.parse("2013-01-01", "YYYY-MM-DD")
-
-    def test_parse_parse_no_match(self):
-
-        with pytest.raises(ParserError):
-            self.parser.parse("01-01", "YYYY-MM-DD")
-
-    def test_parse_separators(self):
-
-        with pytest.raises(ParserError):
-            self.parser.parse("1403549231", "YYYY-MM-DD")
-
-    def test_parse_numbers(self):
-
-        self.expected = datetime(2012, 1, 1, 12, 5, 10)
-        assert (
-            self.parser.parse("2012-01-01 12:05:10", "YYYY-MM-DD HH:mm:ss")
-            == self.expected
-        )
-
-    def test_parse_year_two_digit(self):
-
-        self.expected = datetime(1979, 1, 1, 12, 5, 10)
-        assert (
-            self.parser.parse("79-01-01 12:05:10", "YY-MM-DD HH:mm:ss") == self.expected
-        )
-
-    def test_parse_timestamp(self):
-
-        tz_utc = tz.tzutc()
-        int_timestamp = int(time.time())
-        self.expected = datetime.fromtimestamp(int_timestamp, tz=tz_utc)
-        assert self.parser.parse("{:d}".format(int_timestamp), "X") == self.expected
-
-        float_timestamp = time.time()
-        self.expected = datetime.fromtimestamp(float_timestamp, tz=tz_utc)
-        assert self.parser.parse("{:f}".format(float_timestamp), "X") == self.expected
-
-        # test handling of ns timestamp (arrow will round to 6 digits regardless)
-        self.expected = datetime.fromtimestamp(float_timestamp, tz=tz_utc)
-        assert (
-            self.parser.parse("{:f}123".format(float_timestamp), "X") == self.expected
-        )
-
-        # test ps timestamp (arrow will round to 6 digits regardless)
-        self.expected = datetime.fromtimestamp(float_timestamp, tz=tz_utc)
-        assert (
-            self.parser.parse("{:f}123456".format(float_timestamp), "X")
-            == self.expected
-        )
-
-        # NOTE: negative timestamps cannot be handled by datetime on Windows
-        # Must use timedelta to handle them. ref: https://stackoverflow.com/questions/36179914
-        if os.name != "nt":
-            # regression test for issue #662
-            negative_int_timestamp = -int_timestamp
-            self.expected = datetime.fromtimestamp(negative_int_timestamp, tz=tz_utc)
-            assert (
-                self.parser.parse("{:d}".format(negative_int_timestamp), "X")
-                == self.expected
-            )
-
-            negative_float_timestamp = -float_timestamp
-            self.expected = datetime.fromtimestamp(negative_float_timestamp, tz=tz_utc)
-            assert (
-                self.parser.parse("{:f}".format(negative_float_timestamp), "X")
-                == self.expected
-            )
-
-        # NOTE: timestamps cannot be parsed from natural language strings (by removing the ^...$) because it will
-        # break cases like "15 Jul 2000" and a format list (see issue #447)
-        with pytest.raises(ParserError):
-            natural_lang_string = "Meet me at {} at the restaurant.".format(
-                float_timestamp
-            )
-            self.parser.parse(natural_lang_string, "X")
-
-        with pytest.raises(ParserError):
-            self.parser.parse("1565982019.", "X")
-
-        with pytest.raises(ParserError):
-            self.parser.parse(".1565982019", "X")
-
-    def test_parse_expanded_timestamp(self):
-        # test expanded timestamps that include milliseconds
-        # and microseconds as multiples rather than decimals
-        # requested in issue #357
-
-        tz_utc = tz.tzutc()
-        timestamp = 1569982581.413132
-        timestamp_milli = int(round(timestamp * 1000))
-        timestamp_micro = int(round(timestamp * 1000000))
-
-        # "x" token should parse integer timestamps below MAX_TIMESTAMP normally
-        self.expected = datetime.fromtimestamp(int(timestamp), tz=tz_utc)
-        assert self.parser.parse("{:d}".format(int(timestamp)), "x") == self.expected
-
-        self.expected = datetime.fromtimestamp(round(timestamp, 3), tz=tz_utc)
-        assert self.parser.parse("{:d}".format(timestamp_milli), "x") == self.expected
-
-        self.expected = datetime.fromtimestamp(timestamp, tz=tz_utc)
-        assert self.parser.parse("{:d}".format(timestamp_micro), "x") == self.expected
-
-        # anything above max µs timestamp should fail
-        with pytest.raises(ValueError):
-            self.parser.parse("{:d}".format(int(MAX_TIMESTAMP_US) + 1), "x")
-
-        # floats are not allowed with the "x" token
-        with pytest.raises(ParserMatchError):
-            self.parser.parse("{:f}".format(timestamp), "x")
-
-    def test_parse_names(self):
-
-        self.expected = datetime(2012, 1, 1)
-
-        assert self.parser.parse("January 1, 2012", "MMMM D, YYYY") == self.expected
-        assert self.parser.parse("Jan 1, 2012", "MMM D, YYYY") == self.expected
-
-    def test_parse_pm(self):
-
-        self.expected = datetime(1, 1, 1, 13, 0, 0)
-        assert self.parser.parse("1 pm", "H a") == self.expected
-        assert self.parser.parse("1 pm", "h a") == self.expected
-
-        self.expected = datetime(1, 1, 1, 1, 0, 0)
-        assert self.parser.parse("1 am", "H A") == self.expected
-        assert self.parser.parse("1 am", "h A") == self.expected
-
-        self.expected = datetime(1, 1, 1, 0, 0, 0)
-        assert self.parser.parse("12 am", "H A") == self.expected
-        assert self.parser.parse("12 am", "h A") == self.expected
-
-        self.expected = datetime(1, 1, 1, 12, 0, 0)
-        assert self.parser.parse("12 pm", "H A") == self.expected
-        assert self.parser.parse("12 pm", "h A") == self.expected
-
-    def test_parse_tz_hours_only(self):
-
-        self.expected = datetime(2025, 10, 17, 5, 30, 10, tzinfo=tz.tzoffset(None, 0))
-        parsed = self.parser.parse("2025-10-17 05:30:10+00", "YYYY-MM-DD HH:mm:ssZ")
-        assert parsed == self.expected
-
-    def test_parse_tz_zz(self):
-
-        self.expected = datetime(2013, 1, 1, tzinfo=tz.tzoffset(None, -7 * 3600))
-        assert self.parser.parse("2013-01-01 -07:00", "YYYY-MM-DD ZZ") == self.expected
-
-    @pytest.mark.parametrize("full_tz_name", make_full_tz_list())
-    def test_parse_tz_name_zzz(self, full_tz_name):
-
-        self.expected = datetime(2013, 1, 1, tzinfo=tz.gettz(full_tz_name))
-        assert (
-            self.parser.parse("2013-01-01 {}".format(full_tz_name), "YYYY-MM-DD ZZZ")
-            == self.expected
-        )
-
-        # note that offsets are not timezones
-        with pytest.raises(ParserError):
-            self.parser.parse("2013-01-01 12:30:45.9+1000", "YYYY-MM-DDZZZ")
-
-        with pytest.raises(ParserError):
-            self.parser.parse("2013-01-01 12:30:45.9+10:00", "YYYY-MM-DDZZZ")
-
-        with pytest.raises(ParserError):
-            self.parser.parse("2013-01-01 12:30:45.9-10", "YYYY-MM-DDZZZ")
-
-    def test_parse_subsecond(self):
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 900000)
-        assert (
-            self.parser.parse("2013-01-01 12:30:45.9", "YYYY-MM-DD HH:mm:ss.S")
-            == self.expected
-        )
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 980000)
-        assert (
-            self.parser.parse("2013-01-01 12:30:45.98", "YYYY-MM-DD HH:mm:ss.SS")
-            == self.expected
-        )
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987000)
-        assert (
-            self.parser.parse("2013-01-01 12:30:45.987", "YYYY-MM-DD HH:mm:ss.SSS")
-            == self.expected
-        )
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987600)
-        assert (
-            self.parser.parse("2013-01-01 12:30:45.9876", "YYYY-MM-DD HH:mm:ss.SSSS")
-            == self.expected
-        )
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987650)
-        assert (
-            self.parser.parse("2013-01-01 12:30:45.98765", "YYYY-MM-DD HH:mm:ss.SSSSS")
-            == self.expected
-        )
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987654)
-        assert (
-            self.parser.parse(
-                "2013-01-01 12:30:45.987654", "YYYY-MM-DD HH:mm:ss.SSSSSS"
-            )
-            == self.expected
-        )
-
-    def test_parse_subsecond_rounding(self):
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987654)
-        datetime_format = "YYYY-MM-DD HH:mm:ss.S"
-
-        # round up
-        string = "2013-01-01 12:30:45.9876539"
-        assert self.parser.parse(string, datetime_format) == self.expected
-        assert self.parser.parse_iso(string) == self.expected
-
-        # round down
-        string = "2013-01-01 12:30:45.98765432"
-        assert self.parser.parse(string, datetime_format) == self.expected
-        assert self.parser.parse_iso(string) == self.expected
-
-        # round half-up
-        string = "2013-01-01 12:30:45.987653521"
-        assert self.parser.parse(string, datetime_format) == self.expected
-        assert self.parser.parse_iso(string) == self.expected
-
-        # round half-down
string = "2013-01-01 12:30:45.9876545210" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # overflow (zero out the subseconds and increment the seconds) - # regression tests for issue #636 - def test_parse_subsecond_rounding_overflow(self): - datetime_format = "YYYY-MM-DD HH:mm:ss.S" - - self.expected = datetime(2013, 1, 1, 12, 30, 46) - string = "2013-01-01 12:30:45.9999995" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - self.expected = datetime(2013, 1, 1, 12, 31, 0) - string = "2013-01-01 12:30:59.9999999" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - self.expected = datetime(2013, 1, 2, 0, 0, 0) - string = "2013-01-01 23:59:59.9999999" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # 6 digits should remain unrounded - self.expected = datetime(2013, 1, 1, 12, 30, 45, 999999) - string = "2013-01-01 12:30:45.999999" - assert self.parser.parse(string, datetime_format) == self.expected - assert self.parser.parse_iso(string) == self.expected - - # Regression tests for issue #560 - def test_parse_long_year(self): - with pytest.raises(ParserError): - self.parser.parse("09 January 123456789101112", "DD MMMM YYYY") - - with pytest.raises(ParserError): - self.parser.parse("123456789101112 09 January", "YYYY DD MMMM") - - with pytest.raises(ParserError): - self.parser.parse("68096653015/01/19", "YY/M/DD") - - def test_parse_with_extra_words_at_start_and_end_invalid(self): - input_format_pairs = [ - ("blah2016", "YYYY"), - ("blah2016blah", "YYYY"), - ("2016blah", "YYYY"), - ("2016-05blah", "YYYY-MM"), - ("2016-05-16blah", "YYYY-MM-DD"), - ("2016-05-16T04:05:06.789120blah", "YYYY-MM-DDThh:mm:ss.S"), - ("2016-05-16T04:05:06.789120ZblahZ", "YYYY-MM-DDThh:mm:ss.SZ"), - ("2016-05-16T04:05:06.789120Zblah", "YYYY-MM-DDThh:mm:ss.SZ"), - ("2016-05-16T04:05:06.789120blahZ", "YYYY-MM-DDThh:mm:ss.SZ"), - ] - - for pair in input_format_pairs: - with pytest.raises(ParserError): - self.parser.parse(pair[0], pair[1]) - - def test_parse_with_extra_words_at_start_and_end_valid(self): - # Spaces surrounding the parsable date are ok because we - # allow the parsing of natural language input. Additionally, a single - # character of specific punctuation before or after the date is okay. - # See docs for full list of valid punctuation. 
- - assert self.parser.parse("blah 2016 blah", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse("blah 2016", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse("2016 blah", "YYYY") == datetime(2016, 1, 1) - - # test one additional space along with space divider - assert self.parser.parse( - "blah 2016-05-16 04:05:06.789120", "YYYY-MM-DD hh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - assert self.parser.parse( - "2016-05-16 04:05:06.789120 blah", "YYYY-MM-DD hh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - # test one additional space along with T divider - assert self.parser.parse( - "blah 2016-05-16T04:05:06.789120", "YYYY-MM-DDThh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - assert self.parser.parse( - "2016-05-16T04:05:06.789120 blah", "YYYY-MM-DDThh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - assert ( - self.parser.parse( - "Meet me at 2016-05-16T04:05:06.789120 at the restaurant.", - "YYYY-MM-DDThh:mm:ss.S", - ) - == datetime(2016, 5, 16, 4, 5, 6, 789120) - ) - - assert ( - self.parser.parse( - "Meet me at 2016-05-16 04:05:06.789120 at the restaurant.", - "YYYY-MM-DD hh:mm:ss.S", - ) - == datetime(2016, 5, 16, 4, 5, 6, 789120) - ) - - # regression test for issue #701 - # tests cases of a partial match surrounded by punctuation - # for the list of valid punctuation, see documentation - def test_parse_with_punctuation_fences(self): - assert self.parser.parse( - "Meet me at my house on Halloween (2019-31-10)", "YYYY-DD-MM" - ) == datetime(2019, 10, 31) - - assert self.parser.parse( - "Monday, 9. September 2019, 16:15-20:00", "dddd, D. MMMM YYYY" - ) == datetime(2019, 9, 9) - - assert self.parser.parse("A date is 11.11.2011.", "DD.MM.YYYY") == datetime( - 2011, 11, 11 - ) - - with pytest.raises(ParserMatchError): - self.parser.parse("11.11.2011.1 is not a valid date.", "DD.MM.YYYY") - - with pytest.raises(ParserMatchError): - self.parser.parse( - "This date has too many punctuation marks following it (11.11.2011).", - "DD.MM.YYYY", - ) - - def test_parse_with_leading_and_trailing_whitespace(self): - assert self.parser.parse(" 2016", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse("2016 ", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse(" 2016 ", "YYYY") == datetime(2016, 1, 1) - - assert self.parser.parse( - " 2016-05-16 04:05:06.789120 ", "YYYY-MM-DD hh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - assert self.parser.parse( - " 2016-05-16T04:05:06.789120 ", "YYYY-MM-DDThh:mm:ss.S" - ) == datetime(2016, 5, 16, 4, 5, 6, 789120) - - def test_parse_YYYY_DDDD(self): - assert self.parser.parse("1998-136", "YYYY-DDDD") == datetime(1998, 5, 16) - - assert self.parser.parse("1998-006", "YYYY-DDDD") == datetime(1998, 1, 6) - - with pytest.raises(ParserError): - self.parser.parse("1998-456", "YYYY-DDDD") - - def test_parse_YYYY_DDD(self): - assert self.parser.parse("1998-6", "YYYY-DDD") == datetime(1998, 1, 6) - - assert self.parser.parse("1998-136", "YYYY-DDD") == datetime(1998, 5, 16) - - with pytest.raises(ParserError): - self.parser.parse("1998-756", "YYYY-DDD") - - # month cannot be passed with DDD and DDDD tokens - def test_parse_YYYY_MM_DDDD(self): - with pytest.raises(ParserError): - self.parser.parse("2015-01-009", "YYYY-MM-DDDD") - - # year is required with the DDD and DDDD tokens - def test_parse_DDD_only(self): - with pytest.raises(ParserError): - self.parser.parse("5", "DDD") - - def test_parse_DDDD_only(self): - with pytest.raises(ParserError): - self.parser.parse("145", 
"DDDD") - - def test_parse_ddd_and_dddd(self): - fr_parser = parser.DateTimeParser("fr") - - # Day of week should be ignored when a day is passed - # 2019-10-17 is a Thursday, so we know day of week - # is ignored if the same date is outputted - expected = datetime(2019, 10, 17) - assert self.parser.parse("Tue 2019-10-17", "ddd YYYY-MM-DD") == expected - assert fr_parser.parse("mar 2019-10-17", "ddd YYYY-MM-DD") == expected - assert self.parser.parse("Tuesday 2019-10-17", "dddd YYYY-MM-DD") == expected - assert fr_parser.parse("mardi 2019-10-17", "dddd YYYY-MM-DD") == expected - - # Get first Tuesday after epoch - expected = datetime(1970, 1, 6) - assert self.parser.parse("Tue", "ddd") == expected - assert fr_parser.parse("mar", "ddd") == expected - assert self.parser.parse("Tuesday", "dddd") == expected - assert fr_parser.parse("mardi", "dddd") == expected - - # Get first Tuesday in 2020 - expected = datetime(2020, 1, 7) - assert self.parser.parse("Tue 2020", "ddd YYYY") == expected - assert fr_parser.parse("mar 2020", "ddd YYYY") == expected - assert self.parser.parse("Tuesday 2020", "dddd YYYY") == expected - assert fr_parser.parse("mardi 2020", "dddd YYYY") == expected - - # Get first Tuesday in February 2020 - expected = datetime(2020, 2, 4) - assert self.parser.parse("Tue 02 2020", "ddd MM YYYY") == expected - assert fr_parser.parse("mar 02 2020", "ddd MM YYYY") == expected - assert self.parser.parse("Tuesday 02 2020", "dddd MM YYYY") == expected - assert fr_parser.parse("mardi 02 2020", "dddd MM YYYY") == expected - - # Get first Tuesday in February after epoch - expected = datetime(1970, 2, 3) - assert self.parser.parse("Tue 02", "ddd MM") == expected - assert fr_parser.parse("mar 02", "ddd MM") == expected - assert self.parser.parse("Tuesday 02", "dddd MM") == expected - assert fr_parser.parse("mardi 02", "dddd MM") == expected - - # Times remain intact - expected = datetime(2020, 2, 4, 10, 25, 54, 123456, tz.tzoffset(None, -3600)) - assert ( - self.parser.parse( - "Tue 02 2020 10:25:54.123456-01:00", "ddd MM YYYY HH:mm:ss.SZZ" - ) - == expected - ) - assert ( - fr_parser.parse( - "mar 02 2020 10:25:54.123456-01:00", "ddd MM YYYY HH:mm:ss.SZZ" - ) - == expected - ) - assert ( - self.parser.parse( - "Tuesday 02 2020 10:25:54.123456-01:00", "dddd MM YYYY HH:mm:ss.SZZ" - ) - == expected - ) - assert ( - fr_parser.parse( - "mardi 02 2020 10:25:54.123456-01:00", "dddd MM YYYY HH:mm:ss.SZZ" - ) - == expected - ) - - def test_parse_ddd_and_dddd_ignore_case(self): - # Regression test for issue #851 - expected = datetime(2019, 6, 24) - assert ( - self.parser.parse("MONDAY, June 24, 2019", "dddd, MMMM DD, YYYY") - == expected - ) - - def test_parse_ddd_and_dddd_then_format(self): - # Regression test for issue #446 - arw_formatter = formatter.DateTimeFormatter() - assert arw_formatter.format(self.parser.parse("Mon", "ddd"), "ddd") == "Mon" - assert ( - arw_formatter.format(self.parser.parse("Monday", "dddd"), "dddd") - == "Monday" - ) - assert arw_formatter.format(self.parser.parse("Tue", "ddd"), "ddd") == "Tue" - assert ( - arw_formatter.format(self.parser.parse("Tuesday", "dddd"), "dddd") - == "Tuesday" - ) - assert arw_formatter.format(self.parser.parse("Wed", "ddd"), "ddd") == "Wed" - assert ( - arw_formatter.format(self.parser.parse("Wednesday", "dddd"), "dddd") - == "Wednesday" - ) - assert arw_formatter.format(self.parser.parse("Thu", "ddd"), "ddd") == "Thu" - assert ( - arw_formatter.format(self.parser.parse("Thursday", "dddd"), "dddd") - == "Thursday" - ) - assert 
arw_formatter.format(self.parser.parse("Fri", "ddd"), "ddd") == "Fri" - assert ( - arw_formatter.format(self.parser.parse("Friday", "dddd"), "dddd") - == "Friday" - ) - assert arw_formatter.format(self.parser.parse("Sat", "ddd"), "ddd") == "Sat" - assert ( - arw_formatter.format(self.parser.parse("Saturday", "dddd"), "dddd") - == "Saturday" - ) - assert arw_formatter.format(self.parser.parse("Sun", "ddd"), "ddd") == "Sun" - assert ( - arw_formatter.format(self.parser.parse("Sunday", "dddd"), "dddd") - == "Sunday" - ) - - def test_parse_HH_24(self): - assert self.parser.parse( - "2019-10-30T24:00:00", "YYYY-MM-DDTHH:mm:ss" - ) == datetime(2019, 10, 31, 0, 0, 0, 0) - assert self.parser.parse("2019-10-30T24:00", "YYYY-MM-DDTHH:mm") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse("2019-10-30T24", "YYYY-MM-DDTHH") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse( - "2019-10-30T24:00:00.0", "YYYY-MM-DDTHH:mm:ss.S" - ) == datetime(2019, 10, 31, 0, 0, 0, 0) - assert self.parser.parse( - "2019-10-31T24:00:00", "YYYY-MM-DDTHH:mm:ss" - ) == datetime(2019, 11, 1, 0, 0, 0, 0) - assert self.parser.parse( - "2019-12-31T24:00:00", "YYYY-MM-DDTHH:mm:ss" - ) == datetime(2020, 1, 1, 0, 0, 0, 0) - assert self.parser.parse( - "2019-12-31T23:59:59.9999999", "YYYY-MM-DDTHH:mm:ss.S" - ) == datetime(2020, 1, 1, 0, 0, 0, 0) - - with pytest.raises(ParserError): - self.parser.parse("2019-12-31T24:01:00", "YYYY-MM-DDTHH:mm:ss") - - with pytest.raises(ParserError): - self.parser.parse("2019-12-31T24:00:01", "YYYY-MM-DDTHH:mm:ss") - - with pytest.raises(ParserError): - self.parser.parse("2019-12-31T24:00:00.1", "YYYY-MM-DDTHH:mm:ss.S") - - with pytest.raises(ParserError): - self.parser.parse("2019-12-31T24:00:00.999999", "YYYY-MM-DDTHH:mm:ss.S") - - def test_parse_W(self): - - assert self.parser.parse("2011-W05-4", "W") == datetime(2011, 2, 3) - assert self.parser.parse("2011W054", "W") == datetime(2011, 2, 3) - assert self.parser.parse("2011-W05", "W") == datetime(2011, 1, 31) - assert self.parser.parse("2011W05", "W") == datetime(2011, 1, 31) - assert self.parser.parse("2011-W05-4T14:17:01", "WTHH:mm:ss") == datetime( - 2011, 2, 3, 14, 17, 1 - ) - assert self.parser.parse("2011W054T14:17:01", "WTHH:mm:ss") == datetime( - 2011, 2, 3, 14, 17, 1 - ) - assert self.parser.parse("2011-W05T14:17:01", "WTHH:mm:ss") == datetime( - 2011, 1, 31, 14, 17, 1 - ) - assert self.parser.parse("2011W05T141701", "WTHHmmss") == datetime( - 2011, 1, 31, 14, 17, 1 - ) - assert self.parser.parse("2011W054T141701", "WTHHmmss") == datetime( - 2011, 2, 3, 14, 17, 1 - ) - - bad_formats = [ - "201W22", - "1995-W1-4", - "2001-W34-90", - "2001--W34", - "2011-W03--3", - "thstrdjtrsrd676776r65", - "2002-W66-1T14:17:01", - "2002-W23-03T14:17:01", - ] - - for fmt in bad_formats: - with pytest.raises(ParserError): - self.parser.parse(fmt, "W") - - def test_parse_normalize_whitespace(self): - assert self.parser.parse( - "Jun 1 2005 1:33PM", "MMM D YYYY H:mmA", normalize_whitespace=True - ) == datetime(2005, 6, 1, 13, 33) - - with pytest.raises(ParserError): - self.parser.parse("Jun 1 2005 1:33PM", "MMM D YYYY H:mmA") - - assert ( - self.parser.parse( - "\t 2013-05-05 T \n 12:30:45\t123456 \t \n", - "YYYY-MM-DD T HH:mm:ss S", - normalize_whitespace=True, - ) - == datetime(2013, 5, 5, 12, 30, 45, 123456) - ) - - with pytest.raises(ParserError): - self.parser.parse( - "\t 2013-05-05 T \n 12:30:45\t123456 \t \n", - "YYYY-MM-DD T HH:mm:ss S", - ) - - assert self.parser.parse( - " \n Jun 1\t 2005\n ", "MMM D YYYY", 
-
-        with pytest.raises(ParserError):
-            self.parser.parse(" \n Jun 1\t 2005\n ", "MMM D YYYY")
-
-
-@pytest.mark.usefixtures("dt_parser_regex")
-class TestDateTimeParserRegex:
-    def test_format_year(self):
-
-        assert self.format_regex.findall("YYYY-YY") == ["YYYY", "YY"]
-
-    def test_format_month(self):
-
-        assert self.format_regex.findall("MMMM-MMM-MM-M") == ["MMMM", "MMM", "MM", "M"]
-
-    def test_format_day(self):
-
-        assert self.format_regex.findall("DDDD-DDD-DD-D") == ["DDDD", "DDD", "DD", "D"]
-
-    def test_format_hour(self):
-
-        assert self.format_regex.findall("HH-H-hh-h") == ["HH", "H", "hh", "h"]
-
-    def test_format_minute(self):
-
-        assert self.format_regex.findall("mm-m") == ["mm", "m"]
-
-    def test_format_second(self):
-
-        assert self.format_regex.findall("ss-s") == ["ss", "s"]
-
-    def test_format_subsecond(self):
-
-        assert self.format_regex.findall("SSSSSS-SSSSS-SSSS-SSS-SS-S") == [
-            "SSSSSS",
-            "SSSSS",
-            "SSSS",
-            "SSS",
-            "SS",
-            "S",
-        ]
-
-    def test_format_tz(self):
-
-        assert self.format_regex.findall("ZZZ-ZZ-Z") == ["ZZZ", "ZZ", "Z"]
-
-    def test_format_am_pm(self):
-
-        assert self.format_regex.findall("A-a") == ["A", "a"]
-
-    def test_format_timestamp(self):
-
-        assert self.format_regex.findall("X") == ["X"]
-
-    def test_format_timestamp_milli(self):
-
-        assert self.format_regex.findall("x") == ["x"]
-
-    def test_escape(self):
-
-        escape_regex = parser.DateTimeParser._ESCAPE_RE
-
-        assert escape_regex.findall("2018-03-09 8 [h] 40 [hello]") == ["[h]", "[hello]"]
-
-    def test_month_names(self):
-        p = parser.DateTimeParser("en_us")
-
-        text = "_".join(calendar.month_name[1:])
-
-        result = p._input_re_map["MMMM"].findall(text)
-
-        assert result == calendar.month_name[1:]
-
-    def test_month_abbreviations(self):
-        p = parser.DateTimeParser("en_us")
-
-        text = "_".join(calendar.month_abbr[1:])
-
-        result = p._input_re_map["MMM"].findall(text)
-
-        assert result == calendar.month_abbr[1:]
-
-    def test_digits(self):
-
-        assert parser.DateTimeParser._ONE_OR_TWO_DIGIT_RE.findall("4-56") == ["4", "56"]
-        assert parser.DateTimeParser._ONE_OR_TWO_OR_THREE_DIGIT_RE.findall(
-            "4-56-789"
-        ) == ["4", "56", "789"]
-        assert parser.DateTimeParser._ONE_OR_MORE_DIGIT_RE.findall(
-            "4-56-789-1234-12345"
-        ) == ["4", "56", "789", "1234", "12345"]
-        assert parser.DateTimeParser._TWO_DIGIT_RE.findall("12-3-45") == ["12", "45"]
-        assert parser.DateTimeParser._THREE_DIGIT_RE.findall("123-4-56") == ["123"]
-        assert parser.DateTimeParser._FOUR_DIGIT_RE.findall("1234-56") == ["1234"]
-
-    def test_tz(self):
-        tz_z_re = parser.DateTimeParser._TZ_Z_RE
-        assert tz_z_re.findall("-0700") == [("-", "07", "00")]
-        assert tz_z_re.findall("+07") == [("+", "07", "")]
-        assert tz_z_re.search("15/01/2019T04:05:06.789120Z") is not None
-        assert tz_z_re.search("15/01/2019T04:05:06.789120") is None
-
-        tz_zz_re = parser.DateTimeParser._TZ_ZZ_RE
-        assert tz_zz_re.findall("-07:00") == [("-", "07", "00")]
-        assert tz_zz_re.findall("+07") == [("+", "07", "")]
-        assert tz_zz_re.search("15/01/2019T04:05:06.789120Z") is not None
-        assert tz_zz_re.search("15/01/2019T04:05:06.789120") is None
-
-        tz_name_re = parser.DateTimeParser._TZ_NAME_RE
-        assert tz_name_re.findall("Europe/Warsaw") == ["Europe/Warsaw"]
-        assert tz_name_re.findall("GMT") == ["GMT"]
-
-    def test_timestamp(self):
-        timestamp_re = parser.DateTimeParser._TIMESTAMP_RE
-        assert timestamp_re.findall("1565707550.452729") == ["1565707550.452729"]
-        assert timestamp_re.findall("-1565707550.452729") == ["-1565707550.452729"]
-        assert timestamp_re.findall("-1565707550") == ["-1565707550"]
-        assert timestamp_re.findall("1565707550") == ["1565707550"]
-        assert timestamp_re.findall("1565707550.") == []
-        assert timestamp_re.findall(".1565707550") == []
-
-    def test_timestamp_milli(self):
-        timestamp_expanded_re = parser.DateTimeParser._TIMESTAMP_EXPANDED_RE
-        assert timestamp_expanded_re.findall("-1565707550") == ["-1565707550"]
-        assert timestamp_expanded_re.findall("1565707550") == ["1565707550"]
-        assert timestamp_expanded_re.findall("1565707550.452729") == []
-        assert timestamp_expanded_re.findall("1565707550.") == []
-        assert timestamp_expanded_re.findall(".1565707550") == []
-
-    def test_time(self):
-        time_re = parser.DateTimeParser._TIME_RE
-        time_separators = [":", ""]
-
-        for sep in time_separators:
-            assert time_re.findall("12") == [("12", "", "", "", "")]
-            assert time_re.findall("12{sep}35".format(sep=sep)) == [
-                ("12", "35", "", "", "")
-            ]
-            assert time_re.findall("12{sep}35{sep}46".format(sep=sep)) == [
-                ("12", "35", "46", "", "")
-            ]
-            assert time_re.findall("12{sep}35{sep}46.952313".format(sep=sep)) == [
-                ("12", "35", "46", ".", "952313")
-            ]
-            assert time_re.findall("12{sep}35{sep}46,952313".format(sep=sep)) == [
-                ("12", "35", "46", ",", "952313")
-            ]
-
-        assert time_re.findall("12:") == []
-        assert time_re.findall("12:35:46.") == []
-        assert time_re.findall("12:35:46,") == []
-
-
-@pytest.mark.usefixtures("dt_parser")
-class TestDateTimeParserISO:
-    def test_YYYY(self):
-
-        assert self.parser.parse_iso("2013") == datetime(2013, 1, 1)
-
-    def test_YYYY_DDDD(self):
-        assert self.parser.parse_iso("1998-136") == datetime(1998, 5, 16)
-
-        assert self.parser.parse_iso("1998-006") == datetime(1998, 1, 6)
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("1998-456")
-
-        # 2016 is a leap year, so Feb 29 exists (leap day)
-        assert self.parser.parse_iso("2016-059") == datetime(2016, 2, 28)
-        assert self.parser.parse_iso("2016-060") == datetime(2016, 2, 29)
-        assert self.parser.parse_iso("2016-061") == datetime(2016, 3, 1)
-
-        # 2017 is not a leap year, so Feb 29 does not exist
-        assert self.parser.parse_iso("2017-059") == datetime(2017, 2, 28)
-        assert self.parser.parse_iso("2017-060") == datetime(2017, 3, 1)
-        assert self.parser.parse_iso("2017-061") == datetime(2017, 3, 2)
-
-        # Since 2016 is a leap year, the 366th day falls in the same year
-        assert self.parser.parse_iso("2016-366") == datetime(2016, 12, 31)
-
-        # Since 2017 is not a leap year, the 366th day falls in the next year
-        assert self.parser.parse_iso("2017-366") == datetime(2018, 1, 1)
-
-    def test_YYYY_DDDD_HH_mm_ssZ(self):
-
-        assert self.parser.parse_iso("2013-036 04:05:06+01:00") == datetime(
-            2013, 2, 5, 4, 5, 6, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-        assert self.parser.parse_iso("2013-036 04:05:06Z") == datetime(
-            2013, 2, 5, 4, 5, 6, tzinfo=tz.tzutc()
-        )
-
-    def test_YYYY_MM_DDDD(self):
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2014-05-125")
-
-    def test_YYYY_MM(self):
-
-        for separator in DateTimeParser.SEPARATORS:
-            assert self.parser.parse_iso(separator.join(("2013", "02"))) == datetime(
-                2013, 2, 1
-            )
-
-    def test_YYYY_MM_DD(self):
-
-        for separator in DateTimeParser.SEPARATORS:
-            assert self.parser.parse_iso(
-                separator.join(("2013", "02", "03"))
-            ) == datetime(2013, 2, 3)
-
-    def test_YYYY_MM_DDTHH_mmZ(self):
-
-        assert self.parser.parse_iso("2013-02-03T04:05+01:00") == datetime(
-            2013, 2, 3, 4, 5, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-    def test_YYYY_MM_DDTHH_mm(self):
-
-        assert self.parser.parse_iso("2013-02-03T04:05") == datetime(2013, 2, 3, 4, 5)
-
-    def test_YYYY_MM_DDTHH(self):
-
-        assert self.parser.parse_iso("2013-02-03T04") == datetime(2013, 2, 3, 4)
-
-    def test_YYYY_MM_DDTHHZ(self):
-
-        assert self.parser.parse_iso("2013-02-03T04+01:00") == datetime(
-            2013, 2, 3, 4, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-    def test_YYYY_MM_DDTHH_mm_ssZ(self):
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06+01:00") == datetime(
-            2013, 2, 3, 4, 5, 6, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-    def test_YYYY_MM_DDTHH_mm_ss(self):
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06") == datetime(
-            2013, 2, 3, 4, 5, 6
-        )
-
-    def test_YYYY_MM_DD_HH_mmZ(self):
-
-        assert self.parser.parse_iso("2013-02-03 04:05+01:00") == datetime(
-            2013, 2, 3, 4, 5, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-    def test_YYYY_MM_DD_HH_mm(self):
-
-        assert self.parser.parse_iso("2013-02-03 04:05") == datetime(2013, 2, 3, 4, 5)
-
-    def test_YYYY_MM_DD_HH(self):
-
-        assert self.parser.parse_iso("2013-02-03 04") == datetime(2013, 2, 3, 4)
-
-    def test_invalid_time(self):
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03T")
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03 044")
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03 04:05:06.")
-
-    def test_YYYY_MM_DD_HH_mm_ssZ(self):
-
-        assert self.parser.parse_iso("2013-02-03 04:05:06+01:00") == datetime(
-            2013, 2, 3, 4, 5, 6, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-    def test_YYYY_MM_DD_HH_mm_ss(self):
-
-        assert self.parser.parse_iso("2013-02-03 04:05:06") == datetime(
-            2013, 2, 3, 4, 5, 6
-        )
-
-    def test_YYYY_MM_DDTHH_mm_ss_S(self):
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.7") == datetime(
-            2013, 2, 3, 4, 5, 6, 700000
-        )
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.78") == datetime(
-            2013, 2, 3, 4, 5, 6, 780000
-        )
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.789") == datetime(
-            2013, 2, 3, 4, 5, 6, 789000
-        )
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.7891") == datetime(
-            2013, 2, 3, 4, 5, 6, 789100
-        )
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.78912") == datetime(
-            2013, 2, 3, 4, 5, 6, 789120
-        )
-
-        # ISO 8601:2004(E), ISO, 2004-12-01, 4.2.2.4 ... the decimal fraction
-        # shall be divided from the integer part by the decimal sign specified
-        # in ISO 31-0, i.e. the comma [,] or full stop [.]. Of these, the comma
-        # is the preferred sign.
-        assert self.parser.parse_iso("2013-02-03T04:05:06,789123678") == datetime(
-            2013, 2, 3, 4, 5, 6, 789124
-        )
-
-        # there is no limit on the number of decimal places
-        assert self.parser.parse_iso("2013-02-03T04:05:06.789123678") == datetime(
-            2013, 2, 3, 4, 5, 6, 789124
-        )
-
-    def test_YYYY_MM_DDTHH_mm_ss_SZ(self):
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.7+01:00") == datetime(
-            2013, 2, 3, 4, 5, 6, 700000, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.78+01:00") == datetime(
-            2013, 2, 3, 4, 5, 6, 780000, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.789+01:00") == datetime(
-            2013, 2, 3, 4, 5, 6, 789000, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.7891+01:00") == datetime(
-            2013, 2, 3, 4, 5, 6, 789100, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-        assert self.parser.parse_iso("2013-02-03T04:05:06.78912+01:00") == datetime(
-            2013, 2, 3, 4, 5, 6, 789120, tzinfo=tz.tzoffset(None, 3600)
-        )
-
-        assert self.parser.parse_iso("2013-02-03 04:05:06.78912Z") == datetime(
-            2013, 2, 3, 4, 5, 6, 789120, tzinfo=tz.tzutc()
-        )
-
-    def test_W(self):
-
-        assert self.parser.parse_iso("2011-W05-4") == datetime(2011, 2, 3)
-
-        assert self.parser.parse_iso("2011-W05-4T14:17:01") == datetime(
-            2011, 2, 3, 14, 17, 1
-        )
-
-        assert self.parser.parse_iso("2011W054") == datetime(2011, 2, 3)
-
-        assert self.parser.parse_iso("2011W054T141701") == datetime(
-            2011, 2, 3, 14, 17, 1
-        )
-
-    def test_invalid_Z(self):
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03T04:05:06.78912z")
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03T04:05:06.78912zz")
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03T04:05:06.78912Zz")
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03T04:05:06.78912ZZ")
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03T04:05:06.78912+Z")
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03T04:05:06.78912-Z")
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-02-03T04:05:06.78912 Z")
-
-    def test_parse_subsecond(self):
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 900000)
-        assert self.parser.parse_iso("2013-01-01 12:30:45.9") == self.expected
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 980000)
-        assert self.parser.parse_iso("2013-01-01 12:30:45.98") == self.expected
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987000)
-        assert self.parser.parse_iso("2013-01-01 12:30:45.987") == self.expected
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987600)
-        assert self.parser.parse_iso("2013-01-01 12:30:45.9876") == self.expected
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987650)
-        assert self.parser.parse_iso("2013-01-01 12:30:45.98765") == self.expected
-
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987654)
-        assert self.parser.parse_iso("2013-01-01 12:30:45.987654") == self.expected
-
-        # use comma as subsecond separator
-        self.expected = datetime(2013, 1, 1, 12, 30, 45, 987654)
-        assert self.parser.parse_iso("2013-01-01 12:30:45,987654") == self.expected
-
-    def test_gnu_date(self):
-        """Regression tests for parsing output from GNU date."""
-        # date -Ins
-        assert self.parser.parse_iso("2016-11-16T09:46:30,895636557-0800") == datetime(
-            2016, 11, 16, 9, 46, 30, 895636, tzinfo=tz.tzoffset(None, -3600 * 8)
-        )
-
-        # date --rfc-3339=ns
-        assert self.parser.parse_iso("2016-11-16 09:51:14.682141526-08:00") == datetime(
-            2016, 11, 16, 9, 51, 14, 682142, tzinfo=tz.tzoffset(None, -3600 * 8)
-        )
-
-    def test_isoformat(self):
-
-        dt = datetime.utcnow()
-
-        assert self.parser.parse_iso(dt.isoformat()) == dt
-
-    def test_parse_iso_normalize_whitespace(self):
-        assert self.parser.parse_iso(
-            "2013-036 \t 04:05:06Z", normalize_whitespace=True
-        ) == datetime(2013, 2, 5, 4, 5, 6, tzinfo=tz.tzutc())
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("2013-036 \t 04:05:06Z")
-
-        assert self.parser.parse_iso(
-            "\t 2013-05-05T12:30:45.123456 \t \n", normalize_whitespace=True
-        ) == datetime(2013, 5, 5, 12, 30, 45, 123456)
-
-        with pytest.raises(ParserError):
-            self.parser.parse_iso("\t 2013-05-05T12:30:45.123456 \t \n")
-
-    def test_parse_iso_with_leading_and_trailing_whitespace(self):
-        datetime_string = "    2016-11-15T06:37:19.123456"
-        with pytest.raises(ParserError):
-            self.parser.parse_iso(datetime_string)
-
-        datetime_string = "    2016-11-15T06:37:19.123456     "
-        with pytest.raises(ParserError):
-            self.parser.parse_iso(datetime_string)
-
-        datetime_string = "2016-11-15T06:37:19.123456     "
-        with pytest.raises(ParserError):
-            self.parser.parse_iso(datetime_string)
-
-        datetime_string = "2016-11-15T 06:37:19.123456"
-        with pytest.raises(ParserError):
-            self.parser.parse_iso(datetime_string)
-
-        # leading whitespace
-        datetime_string = "    2016-11-15 06:37:19.123456"
-        with pytest.raises(ParserError):
-            self.parser.parse_iso(datetime_string)
-
-        # trailing whitespace
-        datetime_string = "2016-11-15 06:37:19.123456    "
-        with pytest.raises(ParserError):
-            self.parser.parse_iso(datetime_string)
-
-        datetime_string = "    2016-11-15 06:37:19.123456    "
-        with pytest.raises(ParserError):
-            self.parser.parse_iso(datetime_string)
-
-        # two dividing spaces
-        datetime_string = "2016-11-15  06:37:19.123456"
-        with pytest.raises(ParserError):
-            self.parser.parse_iso(datetime_string)
-
-    def test_parse_iso_with_extra_words_at_start_and_end_invalid(self):
-        test_inputs = [
-            "blah2016",
-            "blah2016blah",
-            "blah 2016 blah",
-            "blah 2016",
-            "2016 blah",
-            "blah 2016-05-16 04:05:06.789120",
-            "2016-05-16 04:05:06.789120 blah",
-            "blah 2016-05-16T04:05:06.789120",
-            "2016-05-16T04:05:06.789120 blah",
-            "2016blah",
-            "2016-05blah",
-            "2016-05-16blah",
-            "2016-05-16T04:05:06.789120blah",
-            "2016-05-16T04:05:06.789120ZblahZ",
-            "2016-05-16T04:05:06.789120Zblah",
-            "2016-05-16T04:05:06.789120blahZ",
-            "Meet me at 2016-05-16T04:05:06.789120 at the restaurant.",
-            "Meet me at 2016-05-16 04:05:06.789120 at the restaurant.",
-        ]
-
-        for ti in test_inputs:
-            with pytest.raises(ParserError):
-                self.parser.parse_iso(ti)
-
-    def test_iso8601_basic_format(self):
-        assert self.parser.parse_iso("20180517") == datetime(2018, 5, 17)
-
-        assert self.parser.parse_iso("20180517T10") == datetime(2018, 5, 17, 10)
-
-        assert self.parser.parse_iso("20180517T105513.843456") == datetime(
-            2018, 5, 17, 10, 55, 13, 843456
-        )
-
-        assert self.parser.parse_iso("20180517T105513Z") == datetime(
-            2018, 5, 17, 10, 55, 13, tzinfo=tz.tzutc()
-        )
-
-        assert self.parser.parse_iso("20180517T105513.843456-0700") == datetime(
-            2018, 5, 17, 10, 55, 13, 843456, tzinfo=tz.tzoffset(None, -25200)
-        )
-
-        assert self.parser.parse_iso("20180517T105513-0700") == datetime(
-            2018, 5, 17, 10, 55, 13, tzinfo=tz.tzoffset(None, -25200)
-        )
-
-        assert self.parser.parse_iso("20180517T105513-07") == datetime(
-            2018, 5, 17, 10, 55, 13, tzinfo=tz.tzoffset(None, -25200)
-        )
-
-        # ordinal in basic format: YYYYDDDD
-
assert self.parser.parse_iso("1998136") == datetime(1998, 5, 16) - - # timezone requires +- seperator - with pytest.raises(ParserError): - self.parser.parse_iso("20180517T1055130700") - - with pytest.raises(ParserError): - self.parser.parse_iso("20180517T10551307") - - # too many digits in date - with pytest.raises(ParserError): - self.parser.parse_iso("201860517T105513Z") - - # too many digits in time - with pytest.raises(ParserError): - self.parser.parse_iso("20180517T1055213Z") - - def test_midnight_end_day(self): - assert self.parser.parse_iso("2019-10-30T24:00:00") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-10-30T24:00") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-10-30T24:00:00.0") == datetime( - 2019, 10, 31, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-10-31T24:00:00") == datetime( - 2019, 11, 1, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-12-31T24:00:00") == datetime( - 2020, 1, 1, 0, 0, 0, 0 - ) - assert self.parser.parse_iso("2019-12-31T23:59:59.9999999") == datetime( - 2020, 1, 1, 0, 0, 0, 0 - ) - - with pytest.raises(ParserError): - self.parser.parse_iso("2019-12-31T24:01:00") - - with pytest.raises(ParserError): - self.parser.parse_iso("2019-12-31T24:00:01") - - with pytest.raises(ParserError): - self.parser.parse_iso("2019-12-31T24:00:00.1") - - with pytest.raises(ParserError): - self.parser.parse_iso("2019-12-31T24:00:00.999999") - - -@pytest.mark.usefixtures("tzinfo_parser") -class TestTzinfoParser: - def test_parse_local(self): - - assert self.parser.parse("local") == tz.tzlocal() - - def test_parse_utc(self): - - assert self.parser.parse("utc") == tz.tzutc() - assert self.parser.parse("UTC") == tz.tzutc() - - def test_parse_iso(self): - - assert self.parser.parse("01:00") == tz.tzoffset(None, 3600) - assert self.parser.parse("11:35") == tz.tzoffset(None, 11 * 3600 + 2100) - assert self.parser.parse("+01:00") == tz.tzoffset(None, 3600) - assert self.parser.parse("-01:00") == tz.tzoffset(None, -3600) - - assert self.parser.parse("0100") == tz.tzoffset(None, 3600) - assert self.parser.parse("+0100") == tz.tzoffset(None, 3600) - assert self.parser.parse("-0100") == tz.tzoffset(None, -3600) - - assert self.parser.parse("01") == tz.tzoffset(None, 3600) - assert self.parser.parse("+01") == tz.tzoffset(None, 3600) - assert self.parser.parse("-01") == tz.tzoffset(None, -3600) - - def test_parse_str(self): - - assert self.parser.parse("US/Pacific") == tz.gettz("US/Pacific") - - def test_parse_fails(self): - - with pytest.raises(parser.ParserError): - self.parser.parse("fail") - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserMonthName: - def test_shortmonth_capitalized(self): - - assert self.parser.parse("2013-Jan-01", "YYYY-MMM-DD") == datetime(2013, 1, 1) - - def test_shortmonth_allupper(self): - - assert self.parser.parse("2013-JAN-01", "YYYY-MMM-DD") == datetime(2013, 1, 1) - - def test_shortmonth_alllower(self): - - assert self.parser.parse("2013-jan-01", "YYYY-MMM-DD") == datetime(2013, 1, 1) - - def test_month_capitalized(self): - - assert self.parser.parse("2013-January-01", "YYYY-MMMM-DD") == datetime( - 2013, 1, 1 - ) - - def test_month_allupper(self): - - assert self.parser.parse("2013-JANUARY-01", "YYYY-MMMM-DD") == datetime( - 2013, 1, 1 - ) - - def test_month_alllower(self): - - assert self.parser.parse("2013-january-01", "YYYY-MMMM-DD") == datetime( - 2013, 1, 1 - ) - - def test_localized_month_name(self): - parser_ = parser.DateTimeParser("fr_fr") - - assert 
parser_.parse("2013-Janvier-01", "YYYY-MMMM-DD") == datetime(2013, 1, 1) - - def test_localized_month_abbreviation(self): - parser_ = parser.DateTimeParser("it_it") - - assert parser_.parse("2013-Gen-01", "YYYY-MMM-DD") == datetime(2013, 1, 1) - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserMeridians: - def test_meridians_lowercase(self): - assert self.parser.parse("2013-01-01 5am", "YYYY-MM-DD ha") == datetime( - 2013, 1, 1, 5 - ) - - assert self.parser.parse("2013-01-01 5pm", "YYYY-MM-DD ha") == datetime( - 2013, 1, 1, 17 - ) - - def test_meridians_capitalized(self): - assert self.parser.parse("2013-01-01 5AM", "YYYY-MM-DD hA") == datetime( - 2013, 1, 1, 5 - ) - - assert self.parser.parse("2013-01-01 5PM", "YYYY-MM-DD hA") == datetime( - 2013, 1, 1, 17 - ) - - def test_localized_meridians_lowercase(self): - parser_ = parser.DateTimeParser("hu_hu") - assert parser_.parse("2013-01-01 5 de", "YYYY-MM-DD h a") == datetime( - 2013, 1, 1, 5 - ) - - assert parser_.parse("2013-01-01 5 du", "YYYY-MM-DD h a") == datetime( - 2013, 1, 1, 17 - ) - - def test_localized_meridians_capitalized(self): - parser_ = parser.DateTimeParser("hu_hu") - assert parser_.parse("2013-01-01 5 DE", "YYYY-MM-DD h A") == datetime( - 2013, 1, 1, 5 - ) - - assert parser_.parse("2013-01-01 5 DU", "YYYY-MM-DD h A") == datetime( - 2013, 1, 1, 17 - ) - - # regression test for issue #607 - def test_es_meridians(self): - parser_ = parser.DateTimeParser("es") - - assert parser_.parse( - "Junio 30, 2019 - 08:00 pm", "MMMM DD, YYYY - hh:mm a" - ) == datetime(2019, 6, 30, 20, 0) - - with pytest.raises(ParserError): - parser_.parse( - "Junio 30, 2019 - 08:00 pasdfasdfm", "MMMM DD, YYYY - hh:mm a" - ) - - def test_fr_meridians(self): - parser_ = parser.DateTimeParser("fr") - - # the French locale always uses a 24 hour clock, so it does not support meridians - with pytest.raises(ParserError): - parser_.parse("Janvier 30, 2019 - 08:00 pm", "MMMM DD, YYYY - hh:mm a") - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserMonthOrdinalDay: - def test_english(self): - parser_ = parser.DateTimeParser("en_us") - - assert parser_.parse("January 1st, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 1 - ) - assert parser_.parse("January 2nd, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 2 - ) - assert parser_.parse("January 3rd, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 3 - ) - assert parser_.parse("January 4th, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 4 - ) - assert parser_.parse("January 11th, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 11 - ) - assert parser_.parse("January 12th, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 12 - ) - assert parser_.parse("January 13th, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 13 - ) - assert parser_.parse("January 21st, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 21 - ) - assert parser_.parse("January 31st, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 31 - ) - - with pytest.raises(ParserError): - parser_.parse("January 1th, 2013", "MMMM Do, YYYY") - - with pytest.raises(ParserError): - parser_.parse("January 11st, 2013", "MMMM Do, YYYY") - - def test_italian(self): - parser_ = parser.DateTimeParser("it_it") - - assert parser_.parse("Gennaio 1º, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 1 - ) - - def test_spanish(self): - parser_ = parser.DateTimeParser("es_es") - - assert parser_.parse("Enero 1º, 2013", "MMMM Do, YYYY") == datetime(2013, 1, 1) - - def test_french(self): - parser_ = parser.DateTimeParser("fr_fr") - - assert 
parser_.parse("Janvier 1er, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 1 - ) - - assert parser_.parse("Janvier 2e, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 2 - ) - - assert parser_.parse("Janvier 11e, 2013", "MMMM Do, YYYY") == datetime( - 2013, 1, 11 - ) - - -@pytest.mark.usefixtures("dt_parser") -class TestDateTimeParserSearchDate: - def test_parse_search(self): - - assert self.parser.parse( - "Today is 25 of September of 2003", "DD of MMMM of YYYY" - ) == datetime(2003, 9, 25) - - def test_parse_search_with_numbers(self): - - assert self.parser.parse( - "2000 people met the 2012-01-01 12:05:10", "YYYY-MM-DD HH:mm:ss" - ) == datetime(2012, 1, 1, 12, 5, 10) - - assert self.parser.parse( - "Call 01-02-03 on 79-01-01 12:05:10", "YY-MM-DD HH:mm:ss" - ) == datetime(1979, 1, 1, 12, 5, 10) - - def test_parse_search_with_names(self): - - assert self.parser.parse("June was born in May 1980", "MMMM YYYY") == datetime( - 1980, 5, 1 - ) - - def test_parse_search_locale_with_names(self): - p = parser.DateTimeParser("sv_se") - - assert p.parse("Jan föddes den 31 Dec 1980", "DD MMM YYYY") == datetime( - 1980, 12, 31 - ) - - assert p.parse("Jag föddes den 25 Augusti 1975", "DD MMMM YYYY") == datetime( - 1975, 8, 25 - ) - - def test_parse_search_fails(self): - - with pytest.raises(parser.ParserError): - self.parser.parse("Jag föddes den 25 Augusti 1975", "DD MMMM YYYY") - - def test_escape(self): - - format = "MMMM D, YYYY [at] h:mma" - assert self.parser.parse( - "Thursday, December 10, 2015 at 5:09pm", format - ) == datetime(2015, 12, 10, 17, 9) - - format = "[MMMM] M D, YYYY [at] h:mma" - assert self.parser.parse("MMMM 12 10, 2015 at 5:09pm", format) == datetime( - 2015, 12, 10, 17, 9 - ) - - format = "[It happened on] MMMM Do [in the year] YYYY [a long time ago]" - assert self.parser.parse( - "It happened on November 25th in the year 1990 a long time ago", format - ) == datetime(1990, 11, 25) - - format = "[It happened on] MMMM Do [in the][ year] YYYY [a long time ago]" - assert self.parser.parse( - "It happened on November 25th in the year 1990 a long time ago", format - ) == datetime(1990, 11, 25) - - format = "[I'm][ entirely][ escaped,][ weee!]" - assert self.parser.parse("I'm entirely escaped, weee!", format) == datetime( - 1, 1, 1 - ) - - # Special RegEx characters - format = "MMM DD, YYYY |^${}().*+?<>-& h:mm A" - assert self.parser.parse( - "Dec 31, 2017 |^${}().*+?<>-& 2:00 AM", format - ) == datetime(2017, 12, 31, 2, 0) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_util.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/test_util.py deleted file mode 100644 index e48b4de066..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/test_util.py +++ /dev/null @@ -1,81 +0,0 @@ -# -*- coding: utf-8 -*- -import time -from datetime import datetime - -import pytest - -from arrow import util - - -class TestUtil: - def test_next_weekday(self): - # Get first Monday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 0) == datetime(1970, 1, 5) - - # Get first Tuesday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 1) == datetime(1970, 1, 6) - - # Get first Wednesday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 2) == datetime(1970, 1, 7) - - # Get first Thursday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 3) == datetime(1970, 1, 1) - - # Get first Friday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 4) == datetime(1970, 1, 2) - - # Get first Saturday after epoch - assert 
util.next_weekday(datetime(1970, 1, 1), 5) == datetime(1970, 1, 3) - - # Get first Sunday after epoch - assert util.next_weekday(datetime(1970, 1, 1), 6) == datetime(1970, 1, 4) - - # Weekdays are 0-indexed - with pytest.raises(ValueError): - util.next_weekday(datetime(1970, 1, 1), 7) - - with pytest.raises(ValueError): - util.next_weekday(datetime(1970, 1, 1), -1) - - def test_total_seconds(self): - td = datetime(2019, 1, 1) - datetime(2018, 1, 1) - assert util.total_seconds(td) == td.total_seconds() - - def test_is_timestamp(self): - timestamp_float = time.time() - timestamp_int = int(timestamp_float) - - assert util.is_timestamp(timestamp_int) - assert util.is_timestamp(timestamp_float) - assert util.is_timestamp(str(timestamp_int)) - assert util.is_timestamp(str(timestamp_float)) - - assert not util.is_timestamp(True) - assert not util.is_timestamp(False) - - class InvalidTimestamp: - pass - - assert not util.is_timestamp(InvalidTimestamp()) - - full_datetime = "2019-06-23T13:12:42" - assert not util.is_timestamp(full_datetime) - - def test_normalize_timestamp(self): - timestamp = 1591161115.194556 - millisecond_timestamp = 1591161115194 - microsecond_timestamp = 1591161115194556 - - assert util.normalize_timestamp(timestamp) == timestamp - assert util.normalize_timestamp(millisecond_timestamp) == 1591161115.194 - assert util.normalize_timestamp(microsecond_timestamp) == 1591161115.194556 - - with pytest.raises(ValueError): - util.normalize_timestamp(3e17) - - def test_iso_gregorian(self): - with pytest.raises(ValueError): - util.iso_to_gregorian(2013, 0, 5) - - with pytest.raises(ValueError): - util.iso_to_gregorian(2013, 8, 0) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/utils.py b/openpype/modules/ftrack/python2_vendor/arrow/tests/utils.py deleted file mode 100644 index 2a048feb3f..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tests/utils.py +++ /dev/null @@ -1,16 +0,0 @@ -# -*- coding: utf-8 -*- -import pytz -from dateutil.zoneinfo import get_zonefile_instance - -from arrow import util - - -def make_full_tz_list(): - dateutil_zones = set(get_zonefile_instance().zones) - pytz_zones = set(pytz.all_timezones) - return dateutil_zones.union(pytz_zones) - - -def assert_datetime_equality(dt1, dt2, within=10): - assert dt1.tzinfo == dt2.tzinfo - assert abs(util.total_seconds(dt1 - dt2)) < within diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tox.ini b/openpype/modules/ftrack/python2_vendor/arrow/tox.ini deleted file mode 100644 index 46576b12e3..0000000000 --- a/openpype/modules/ftrack/python2_vendor/arrow/tox.ini +++ /dev/null @@ -1,53 +0,0 @@ -[tox] -minversion = 3.18.0 -envlist = py{py3,27,35,36,37,38,39},lint,docs -skip_missing_interpreters = true - -[gh-actions] -python = - pypy3: pypy3 - 2.7: py27 - 3.5: py35 - 3.6: py36 - 3.7: py37 - 3.8: py38 - 3.9: py39 - -[testenv] -deps = -rrequirements.txt -allowlist_externals = pytest -commands = pytest - -[testenv:lint] -basepython = python3 -skip_install = true -deps = pre-commit -commands = - pre-commit install - pre-commit run --all-files --show-diff-on-failure - -[testenv:docs] -basepython = python3 -skip_install = true -changedir = docs -deps = - doc8 - sphinx - python-dateutil -allowlist_externals = make -commands = - doc8 index.rst ../README.rst --extension .rst --ignore D001 - make html SPHINXOPTS="-W --keep-going" - -[pytest] -addopts = -v --cov-branch --cov=arrow --cov-fail-under=100 --cov-report=term-missing --cov-report=xml -testpaths = tests - -[isort] -line_length = 88 
-multi_line_output = 3 -include_trailing_comma = true - -[flake8] -per-file-ignores = arrow/__init__.py:F401 -ignore = E203,E501,W503 diff --git a/openpype/modules/log_viewer/log_view_module.py b/openpype/modules/log_viewer/log_view_module.py index e9dba2041c..1cafbe4fbd 100644 --- a/openpype/modules/log_viewer/log_view_module.py +++ b/openpype/modules/log_viewer/log_view_module.py @@ -1,3 +1,4 @@ +from openpype import AYON_SERVER_ENABLED from openpype.modules import OpenPypeModule, ITrayModule @@ -7,6 +8,8 @@ class LogViewModule(OpenPypeModule, ITrayModule): def initialize(self, modules_settings): logging_settings = modules_settings[self.name] self.enabled = logging_settings["enabled"] + if AYON_SERVER_ENABLED: + self.enabled = False # Tray attributes self.window = None diff --git a/openpype/hosts/maya/plugins/publish/validate_muster_connection.py b/openpype/modules/muster/plugins/publish/validate_muster_connection.py similarity index 100% rename from openpype/hosts/maya/plugins/publish/validate_muster_connection.py rename to openpype/modules/muster/plugins/publish/validate_muster_connection.py diff --git a/openpype/modules/project_manager_action.py b/openpype/modules/project_manager_action.py index 5f74dd9ee5..bf55e1544d 100644 --- a/openpype/modules/project_manager_action.py +++ b/openpype/modules/project_manager_action.py @@ -1,3 +1,4 @@ +from openpype import AYON_SERVER_ENABLED from openpype.modules import OpenPypeModule, ITrayAction @@ -11,6 +12,9 @@ class ProjectManagerAction(OpenPypeModule, ITrayAction): module_settings = modules_settings.get(self.name) if module_settings: enabled = module_settings.get("enabled", enabled) + + if AYON_SERVER_ENABLED: + enabled = False self.enabled = enabled # Tray attributes diff --git a/openpype/modules/royalrender/api.py b/openpype/modules/royalrender/api.py index de1dba8724..e610a0c8a8 100644 --- a/openpype/modules/royalrender/api.py +++ b/openpype/modules/royalrender/api.py @@ -3,10 +3,10 @@ import sys import os -from openpype.settings import get_project_settings from openpype.lib.local_settings import OpenPypeSettingsRegistry from openpype.lib import Logger, run_subprocess from .rr_job import RRJob, SubmitFile, SubmitterParameter +from openpype.lib.vendor_bin_utils import find_tool_in_custom_paths class Api: @@ -15,69 +15,57 @@ class Api: RR_SUBMIT_CONSOLE = 1 RR_SUBMIT_API = 2 - def __init__(self, settings, project=None): + def __init__(self, rr_path=None): self.log = Logger.get_logger("RoyalRender") - self._settings = settings - self._initialize_rr(project) + self._rr_path = rr_path + if rr_path: + os.environ["RR_ROOT"] = rr_path - def _initialize_rr(self, project=None): - # type: (str) -> None - """Initialize RR Path. + @staticmethod + def get_rr_bin_path(rr_root, tool_name=None): + # type: (str, str) -> str + """Get path to RR bin folder or to a specific tool inside it. Args: - project (str, Optional): Project name to set RR api in - context. + rr_root (str): Path to the RR root folder. + tool_name (str, Optional): Name of the RR executable to find; + when omitted, the path to the bin folder itself is returned. + + Returns: + str: Path to the tool based on current platform.
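+ + Example (illustrative; assumes RR installed in /opt/rr): on 64bit + Linux Python, get_rr_bin_path("/opt/rr") returns "/opt/rr/bin/lx64", + while get_rr_bin_path("/opt/rr", "rrSubmitterconsole") looks for the + executable in both "bin/lx64" and "bin/lx".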
""" - if project: - project_settings = get_project_settings(project) - rr_path = ( - project_settings - ["royalrender"] - ["rr_paths"] - ) - else: - rr_path = ( - self._settings - ["modules"] - ["royalrender"] - ["rr_path"] - ["default"] - ) - os.environ["RR_ROOT"] = rr_path - self._rr_path = rr_path - - def _get_rr_bin_path(self, rr_root=None): - # type: (str) -> str - """Get path to RR bin folder.""" - rr_root = rr_root or self._rr_path is_64bit_python = sys.maxsize > 2 ** 32 - rr_bin_path = "" + rr_bin_parts = [rr_root, "bin"] if sys.platform.lower() == "win32": - rr_bin_path = "/bin/win64" - if not is_64bit_python: - # we are using 64bit python - rr_bin_path = "/bin/win" - rr_bin_path = rr_bin_path.replace( - "/", os.path.sep - ) + rr_bin_parts.append("win") if sys.platform.lower() == "darwin": - rr_bin_path = "/bin/mac64" - if not is_64bit_python: - rr_bin_path = "/bin/mac" + rr_bin_parts.append("mac") - if sys.platform.lower() == "linux": - rr_bin_path = "/bin/lx64" + if sys.platform.lower().startswith("linux"): + rr_bin_parts.append("lx") - return os.path.join(rr_root, rr_bin_path) + rr_bin_path = os.sep.join(rr_bin_parts) + + paths_to_check = [] + # if we use 64bit python, append 64bit specific path first + if is_64bit_python: + if not tool_name: + return rr_bin_path + "64" + paths_to_check.append(rr_bin_path + "64") + + # otherwise use 32bit + if not tool_name: + return rr_bin_path + paths_to_check.append(rr_bin_path) + + return find_tool_in_custom_paths(paths_to_check, tool_name) def _initialize_module_path(self): # type: () -> None """Set RR modules for Python.""" # default for linux - rr_bin = self._get_rr_bin_path() + rr_bin = self.get_rr_bin_path(self._rr_path) rr_module_path = os.path.join(rr_bin, "lx64/lib") if sys.platform.lower() == "win32": @@ -91,51 +79,46 @@ class Api: sys.path.append(os.path.join(self._rr_path, rr_module_path)) - def create_submission(self, jobs, submitter_attributes, file_name=None): - # type: (list[RRJob], list[SubmitterParameter], str) -> SubmitFile + @staticmethod + def create_submission(jobs, submitter_attributes): + # type: (list[RRJob], list[SubmitterParameter]) -> SubmitFile """Create jobs submission file. Args: jobs (list): List of :class:`RRJob` submitter_attributes (list): List of submitter attributes :class:`SubmitterParameter` for whole submission batch. - file_name (str), optional): File path to write data to. Returns: str: XML data of job submission files. 
""" - raise NotImplementedError + return SubmitFile(SubmitterParameters=submitter_attributes, Jobs=jobs) def submit_file(self, file, mode=RR_SUBMIT_CONSOLE): # type: (SubmitFile, int) -> None if mode == self.RR_SUBMIT_CONSOLE: self._submit_using_console(file) + return - # RR v7 supports only Python 2.7 so we bail out in fear + # RR v7 supports only Python 2.7, so we bail out in fear # until there is support for Python 3 😰 raise NotImplementedError( "Submission via RoyalRender API is not supported yet") # self._submit_using_api(file) - def _submit_using_console(self, file): - # type: (SubmitFile) -> bool - rr_console = os.path.join( - self._get_rr_bin_path(), - "rrSubmitterconsole" - ) + def _submit_using_console(self, job_file): + # type: (SubmitFile) -> None + rr_start_local = self.get_rr_bin_path( + self._rr_path, "rrStartLocal") - if sys.platform.lower() == "darwin": - if "/bin/mac64" in rr_console: - rr_console = rr_console.replace("/bin/mac64", "/bin/mac") + self.log.info("rr_console: {}".format(rr_start_local)) - if sys.platform.lower() == "win32": - if "/bin/win64" in rr_console: - rr_console = rr_console.replace("/bin/win64", "/bin/win") - rr_console += ".exe" - - args = [rr_console, file] - run_subprocess(" ".join(args), logger=self.log) + args = [rr_start_local, "rrSubmitterconsole", job_file] + self.log.info("Executing: {}".format(" ".join(args))) + env = os.environ + env["RR_ROOT"] = self._rr_path + run_subprocess(args, logger=self.log, env=env) def _submit_using_api(self, file): # type: (SubmitFile) -> None diff --git a/openpype/modules/royalrender/lib.py b/openpype/modules/royalrender/lib.py new file mode 100644 index 0000000000..4708d25eed --- /dev/null +++ b/openpype/modules/royalrender/lib.py @@ -0,0 +1,304 @@ +# -*- coding: utf-8 -*- +"""Submitting render job to RoyalRender.""" +import os +import re +import platform +from datetime import datetime + +import pyblish.api +from openpype.tests.lib import is_in_tests +from openpype.pipeline.publish.lib import get_published_workfile_instance +from openpype.pipeline.publish import KnownPublishError +from openpype.modules.royalrender.api import Api as rrApi +from openpype.modules.royalrender.rr_job import ( + RRJob, CustomAttribute, get_rr_platform) +from openpype.lib import ( + is_running_from_build, + BoolDef, + NumberDef, +) +from openpype.pipeline import OpenPypePyblishPluginMixin + + +class BaseCreateRoyalRenderJob(pyblish.api.InstancePlugin, + OpenPypePyblishPluginMixin): + """Creates separate rendering job for Royal Render""" + label = "Create Nuke Render job in RR" + order = pyblish.api.IntegratorOrder + 0.1 + hosts = ["nuke"] + families = ["render", "prerender"] + targets = ["local"] + optional = True + + priority = 50 + chunk_size = 1 + concurrent_tasks = 1 + use_gpu = True + use_published = True + + @classmethod + def get_attribute_defs(cls): + return [ + NumberDef( + "priority", + label="Priority", + default=cls.priority, + decimals=0 + ), + NumberDef( + "chunk", + label="Frames Per Task", + default=cls.chunk_size, + decimals=0, + minimum=1, + maximum=1000 + ), + NumberDef( + "concurrency", + label="Concurrency", + default=cls.concurrent_tasks, + decimals=0, + minimum=1, + maximum=10 + ), + BoolDef( + "use_gpu", + default=cls.use_gpu, + label="Use GPU" + ), + BoolDef( + "suspend_publish", + default=False, + label="Suspend publish" + ), + BoolDef( + "use_published", + default=cls.use_published, + label="Use published workfile" + ) + ] + + def __init__(self, *args, **kwargs): + self._rr_root = None + self.scene_path = 
None + self.job = None + self.submission_parameters = None + self.rr_api = None + + def process(self, instance): + if not instance.data.get("farm"): + self.log.info("Skipping local instance.") + return + + instance.data["attributeValues"] = self.get_attr_values_from_data( + instance.data) + + # add suspend_publish attributeValue to instance data + instance.data["suspend_publish"] = instance.data["attributeValues"][ + "suspend_publish"] + + context = instance.context + + self._rr_root = self._resolve_rr_path(context, instance.data.get( + "rrPathName")) # noqa + self.log.debug(self._rr_root) + if not self._rr_root: + raise KnownPublishError( + ("Missing RoyalRender root. " + "You need to configure RoyalRender module.")) + + self.rr_api = rrApi(self._rr_root) + + self.scene_path = context.data["currentFile"] + if self.use_published: + published_workfile = get_published_workfile_instance(context) + + # fallback if nothing was set + if published_workfile is None: + self.log.warning("Falling back to workfile") + file_path = context.data["currentFile"] + else: + workfile_repre = published_workfile.data["representations"][0] + file_path = workfile_repre["published_path"] + + self.scene_path = file_path + self.log.info( + "Using published scene for render {}".format(self.scene_path) + ) + + if not instance.data.get("expectedFiles"): + instance.data["expectedFiles"] = [] + + if not instance.data.get("rrJobs"): + instance.data["rrJobs"] = [] + + def get_job(self, instance, script_path, render_path, node_name): + """Get RR job based on current instance. + + Args: + script_path (str): Path to Nuke script. + render_path (str): Output path. + node_name (str): Name of the render node. + + Returns: + RRJob: RoyalRender Job instance. + + """ + start_frame = int(instance.data["frameStartHandle"]) + end_frame = int(instance.data["frameEndHandle"]) + + batch_name = os.path.basename(script_path) + jobname = "%s - %s" % (batch_name, instance.name) + if is_in_tests(): + batch_name += datetime.now().strftime("%d%m%Y%H%M%S") + + render_dir = os.path.normpath(os.path.dirname(render_path)) + output_filename_0 = self.pad_file_name(render_path, str(start_frame)) + file_name, file_ext = os.path.splitext( + os.path.basename(output_filename_0)) + + custom_attributes = [] + if is_running_from_build(): + custom_attributes = [ + CustomAttribute( + name="OpenPypeVersion", + value=os.environ.get("OPENPYPE_VERSION")) + ] + + # this will append expected files to instance as needed. 
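+        # (the expected files collected here are what the later publish + # job plugin turns into representations for integration)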
+        expected_files = self.expected_files( + instance, render_path, start_frame, end_frame) + instance.data["expectedFiles"].extend(expected_files) + + job = RRJob( + Software="", + Renderer="", + SeqStart=int(start_frame), + SeqEnd=int(end_frame), + SeqStep=int(instance.data.get("byFrameStep", 1)), + SeqFileOffset=0, + Version=0, + SceneName=script_path, + IsActive=True, + ImageDir=render_dir.replace("\\", "/"), + ImageFilename=file_name, + ImageExtension=file_ext, + ImagePreNumberLetter="", + ImageSingleOutputFile=False, + SceneOS=get_rr_platform(), + Layer=node_name, + SceneDatabaseDir=script_path, + CustomSHotName=jobname, + CompanyProjectName=instance.context.data["projectName"], + ImageWidth=instance.data["resolutionWidth"], + ImageHeight=instance.data["resolutionHeight"], + CustomAttributes=custom_attributes + ) + + return job + + def update_job_with_host_specific(self, instance, job): + """Host specific mapping for RRJob""" + raise NotImplementedError + + @staticmethod + def _resolve_rr_path(context, rr_path_name): + # type: (pyblish.api.Context, str) -> str + rr_settings = ( + context.data + ["system_settings"] + ["modules"] + ["royalrender"] + ) + try: + default_servers = rr_settings["rr_paths"] + project_servers = ( + context.data + ["project_settings"] + ["royalrender"] + ["rr_paths"] + ) + rr_servers = { + k: default_servers[k] + for k in project_servers + if k in default_servers + } + + except (AttributeError, KeyError): + # Handle the situation where we had only one URL for Royal Render. + return context.data["defaultRRPath"][platform.system().lower()] + + return rr_servers[rr_path_name][platform.system().lower()] + + def expected_files(self, instance, path, start_frame, end_frame): + """Get expected files. + + This function generates expected files from the provided + path and start/end frames. + + It was taken from the Deadline module, but this should + probably be handled better in a collector to support more + flexible scenarios. + + Args: + instance (Instance): Source instance. + path (str): Output path. + start_frame (int): Start frame. + end_frame (int): End frame. + + Returns: + list: List of expected files. + + """ + dir_name = os.path.dirname(path) + file = os.path.basename(path) + + expected_files = [] + + if "#" in file: + pparts = file.split("#") + padding = "%0{}d".format(len(pparts) - 1) + file = pparts[0] + padding + pparts[-1] + + if "%" not in file: + expected_files.append(path) + return expected_files + + if instance.data.get("slate"): + start_frame -= 1 + + expected_files.extend( + os.path.join(dir_name, (file % i)).replace("\\", "/") + for i in range(start_frame, (end_frame + 1)) + ) + return expected_files + + def pad_file_name(self, path, first_frame): + """Return output file path with #### for padding. + + RR requires the path to be formatted with # in place of numbers.
+        For example `/path/to/render.####.png` + + Args: + path (str): Path to the rendered image. + first_frame (str): First frame from the representation, used to + cleanly replace the frame number with "#" padding. + + Returns: + str: Path with the frame number expressed as # padding. + + """ + self.log.debug("pad_file_name path: `{}`".format(path)) + if "%" in path: + match = re.search(r"%0(\d+)d", path) + self.log.debug("_ padding token: `{}`".format(match.group(0))) + # replace the printf-style token (e.g. %04d) with # characters + return path.replace(match.group(0), "#" * int(match.group(1))) + if "#" in path: + self.log.debug("already padded: `{}`".format(path)) + return path + + if first_frame: + padding = len(first_frame) + path = path.replace(first_frame, "#" * padding) + + return path diff --git a/openpype/modules/royalrender/plugins/publish/collect_default_rr_path.py b/openpype/modules/royalrender/plugins/publish/collect_default_rr_path.py deleted file mode 100644 index 3ce95e0c50..0000000000 --- a/openpype/modules/royalrender/plugins/publish/collect_default_rr_path.py +++ /dev/null @@ -1,23 +0,0 @@ -# -*- coding: utf-8 -*- -"""Collect default Deadline server.""" -import pyblish.api - - -class CollectDefaultRRPath(pyblish.api.ContextPlugin): - """Collect default Royal Render path.""" - - order = pyblish.api.CollectorOrder - label = "Default Royal Render Path" - - def process(self, context): - try: - rr_module = context.data.get( - "openPypeModules")["royalrender"] - except AttributeError: - msg = "Cannot get OpenPype Royal Render module." - self.log.error(msg) - raise AssertionError(msg) - - # get default deadline webservice url from deadline module - self.log.debug(rr_module.rr_paths) - context.data["defaultRRPath"] = rr_module.rr_paths["default"] # noqa: E501 diff --git a/openpype/modules/royalrender/plugins/publish/collect_rr_path_from_instance.py b/openpype/modules/royalrender/plugins/publish/collect_rr_path_from_instance.py index 6a3dc276f3..e978ce5bed 100644 --- a/openpype/modules/royalrender/plugins/publish/collect_rr_path_from_instance.py +++ b/openpype/modules/royalrender/plugins/publish/collect_rr_path_from_instance.py @@ -5,29 +5,31 @@ import pyblish.api class CollectRRPathFromInstance(pyblish.api.InstancePlugin): """Collect RR Path from instance.""" - order = pyblish.api.CollectorOrder + 0.01 - label = "Royal Render Path from the Instance" - families = ["rendering"] + order = pyblish.api.CollectorOrder + label = "Collect Royal Render path name from the Instance" + families = ["render", "prerender", "renderlayer"] def process(self, instance): - instance.data["rrPath"] = self._collect_rr_path(instance) + instance.data["rrPathName"] = self._collect_rr_path_name(instance) self.log.info( - "Using {} for submission.".format(instance.data["rrPath"])) + "Using '{}' for submission.".format(instance.data["rrPathName"])) @staticmethod - def _collect_rr_path(render_instance): + def _collect_rr_path_name(instance): # type: (pyblish.api.Instance) -> str - """Get Royal Render path from render instance.""" + """Get Royal Render path name from render instance.""" rr_settings = ( - render_instance.context.data + instance.context.data ["system_settings"] ["modules"] ["royalrender"] ) + if not instance.data.get("rrPaths"): + return "default" try: default_servers = rr_settings["rr_paths"] project_servers = ( - render_instance.context.data + instance.context.data ["project_settings"] ["royalrender"] ["rr_paths"] @@ -40,10 +42,6 @@ class CollectRRPathFromInstance(pyblish.api.InstancePlugin): except (AttributeError, KeyError): # Handle situation were we had only one url for royal render.
- return render_instance.context.data["defaultRRPath"] + return rr_settings["rr_paths"]["default"] - return rr_servers[ - list(rr_servers.keys())[ - int(render_instance.data.get("rrPaths")) - ] - ] + return list(rr_servers.keys())[int(instance.data.get("rrPaths"))] diff --git a/openpype/modules/royalrender/plugins/publish/collect_sequences_from_job.py b/openpype/modules/royalrender/plugins/publish/collect_sequences_from_job.py index 65af90e8a6..1bfee19e3d 100644 --- a/openpype/modules/royalrender/plugins/publish/collect_sequences_from_job.py +++ b/openpype/modules/royalrender/plugins/publish/collect_sequences_from_job.py @@ -71,7 +71,7 @@ class CollectSequencesFromJob(pyblish.api.ContextPlugin): """Gather file sequences from job directory. When "OPENPYPE_PUBLISH_DATA" environment variable is set these paths - (folders or .json files) are parsed for image sequences. Otherwise the + (folders or .json files) are parsed for image sequences. Otherwise, the current working directory is searched for file sequences. """ @@ -189,7 +189,7 @@ class CollectSequencesFromJob(pyblish.api.ContextPlugin): "families": list(families), "subset": subset, "asset": data.get( - "asset", legacy_io.Session["AVALON_ASSET"] + "asset", context.data["asset"] ), "stagingDir": root, "frameStart": start, diff --git a/openpype/modules/royalrender/plugins/publish/create_maya_royalrender_job.py b/openpype/modules/royalrender/plugins/publish/create_maya_royalrender_job.py new file mode 100644 index 0000000000..22d910b7cd --- /dev/null +++ b/openpype/modules/royalrender/plugins/publish/create_maya_royalrender_job.py @@ -0,0 +1,42 @@ +# -*- coding: utf-8 -*- +"""Submitting render job to RoyalRender.""" +import os + +from maya.OpenMaya import MGlobal + +from openpype.modules.royalrender import lib +from openpype.pipeline.farm.tools import iter_expected_files + + +class CreateMayaRoyalRenderJob(lib.BaseCreateRoyalRenderJob): + label = "Create Maya Render job in RR" + hosts = ["maya"] + families = ["renderlayer"] + + def update_job_with_host_specific(self, instance, job): + job.Software = "Maya" + job.Version = "{0:.2f}".format(MGlobal.apiVersion() / 10000) + if instance.data.get("cameras"): + job.Camera = instance.data["cameras"][0].replace("'", '"') + workspace = instance.context.data["workspaceDir"] + job.SceneDatabaseDir = workspace + + return job + + def process(self, instance): + """Plugin entry point.""" + super(CreateMayaRoyalRenderJob, self).process(instance) + + expected_files = instance.data["expectedFiles"] + first_file_path = next(iter_expected_files(expected_files)) + output_dir = os.path.dirname(first_file_path) + instance.data["outputDir"] = output_dir + + layer = instance.data["setMembers"] # type: str + layer_name = layer.removeprefix("rs_") + + job = self.get_job(instance, self.scene_path, first_file_path, + layer_name) + job = self.update_job_with_host_specific(instance, job) + + instance.data["rrJobs"].append(job) diff --git a/openpype/modules/royalrender/plugins/publish/create_nuke_royalrender_job.py b/openpype/modules/royalrender/plugins/publish/create_nuke_royalrender_job.py new file mode 100644 index 0000000000..71daa6edf8 --- /dev/null +++ b/openpype/modules/royalrender/plugins/publish/create_nuke_royalrender_job.py @@ -0,0 +1,69 @@ +# -*- coding: utf-8 -*- +"""Submitting render job to RoyalRender.""" +import re + +from openpype.modules.royalrender import lib + + +class CreateNukeRoyalRenderJob(lib.BaseCreateRoyalRenderJob): + """Creates separate rendering job for Royal Render""" + label = "Create Nuke 
Render job in RR" + hosts = ["nuke"] + families = ["render", "prerender"] + + def process(self, instance): + super(CreateNukeRoyalRenderJob, self).process(instance) + + # redefinition of families + if "render" in instance.data["family"]: + instance.data["family"] = "write" + instance.data["families"].insert(0, "render2d") + elif "prerender" in instance.data["family"]: + instance.data["family"] = "write" + instance.data["families"].insert(0, "prerender") + + jobs = self.create_jobs(instance) + for job in jobs: + job = self.update_job_with_host_specific(instance, job) + + instance.data["rrJobs"].append(job) + + def update_job_with_host_specific(self, instance, job): + nuke_version = re.search( + r"\d+\.\d+", instance.context.data.get("hostVersion")) + + job.Software = "Nuke" + job.Version = nuke_version.group() + + return job + + def create_jobs(self, instance): + """Nuke creates multiple RR jobs - for baking etc.""" + # get output path + render_path = instance.data['path'] + script_path = self.scene_path + node = instance.data["transientData"]["node"] + + # main job + jobs = [ + self.get_job( + instance, + script_path, + render_path, + node.name() + ) + ] + + for baking_script in instance.data.get("bakingNukeScripts", []): + render_path = baking_script["bakeRenderPath"] + script_path = baking_script["bakeScriptPath"] + exe_node_name = baking_script["bakeWriteNodeName"] + + jobs.append(self.get_job( + instance, + script_path, + render_path, + exe_node_name + )) + + return jobs diff --git a/openpype/modules/royalrender/plugins/publish/create_publish_royalrender_job.py b/openpype/modules/royalrender/plugins/publish/create_publish_royalrender_job.py new file mode 100644 index 0000000000..3eb49a39ee --- /dev/null +++ b/openpype/modules/royalrender/plugins/publish/create_publish_royalrender_job.py @@ -0,0 +1,286 @@ +# -*- coding: utf-8 -*- +"""Create publishing job on RoyalRender.""" +import os +import attr +import json +import re + +import pyblish.api + +from openpype.modules.royalrender.rr_job import ( + RRJob, + RREnvList, + get_rr_platform +) +from openpype.pipeline.publish import KnownPublishError +from openpype.pipeline import ( + legacy_io, +) +from openpype.pipeline.farm.pyblish_functions import ( + create_skeleton_instance, + create_instances_for_aov, + attach_instances_to_subset, + prepare_representations, + create_metadata_path +) +from openpype.pipeline import publish + + +class CreatePublishRoyalRenderJob(pyblish.api.InstancePlugin, + publish.ColormanagedPyblishPluginMixin): + """Creates job which publishes rendered files to publish area. + + Job waits until all rendering jobs are finished, triggers `publish` command + where it reads from prepared .json file with metadata about what should + be published, renames prepared images and publishes them. + + When triggered it produces .log file next to .json file in work area. 
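+ + The created job is appended to instance.data["rrJobs"] and depends + on all previously created render jobs, so SubmitJobsToRoyalRender + can submit the whole batch with correct ordering.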
+ """ + label = "Create publish job in RR" + order = pyblish.api.IntegratorOrder + 0.2 + icon = "tractor" + targets = ["local"] + hosts = ["fusion", "maya", "nuke", "celaction", "aftereffects", "harmony"] + families = ["render.farm", "prerender.farm", + "renderlayer", "imagesequence", "vrayscene"] + aov_filter = {"maya": [r".*([Bb]eauty).*"], + "aftereffects": [r".*"], # for everything from AE + "harmony": [r".*"], # for everything from AE + "celaction": [r".*"]} + + skip_integration_repre_list = [] + + # mapping of instance properties to be transferred to new instance + # for every specified family + instance_transfer = { + "slate": ["slateFrames", "slate"], + "review": ["lutPath"], + "render2d": ["bakingNukeScripts", "version"], + "renderlayer": ["convertToScanline"] + } + + # list of family names to transfer to new family if present + families_transfer = ["render3d", "render2d", "ftrack", "slate"] + + environ_job_filter = [ + "OPENPYPE_METADATA_FILE" + ] + + environ_keys = [ + "FTRACK_API_USER", + "FTRACK_API_KEY", + "FTRACK_SERVER", + "AVALON_APP_NAME", + "OPENPYPE_USERNAME", + "OPENPYPE_SG_USER", + "OPENPYPE_MONGO" + ] + priority = 50 + + def process(self, instance): + context = instance.context + self.context = context + self.anatomy = instance.context.data["anatomy"] + + if not instance.data.get("farm"): + self.log.info("Skipping local instance.") + return + + instance_skeleton_data = create_skeleton_instance( + instance, + families_transfer=self.families_transfer, + instance_transfer=self.instance_transfer) + + do_not_add_review = False + if instance.data.get("review") is False: + self.log.debug("Instance has review explicitly disabled.") + do_not_add_review = True + + if isinstance(instance.data.get("expectedFiles")[0], dict): + instances = create_instances_for_aov( + instance, instance_skeleton_data, + self.aov_filter, self.skip_integration_repre_list, + do_not_add_review) + + else: + representations = prepare_representations( + instance_skeleton_data, + instance.data.get("expectedFiles"), + self.anatomy, + self.aov_filter, + self.skip_integration_repre_list, + do_not_add_review, + instance.context, + self + ) + + if "representations" not in instance_skeleton_data.keys(): + instance_skeleton_data["representations"] = [] + + # add representation + instance_skeleton_data["representations"] += representations + instances = [instance_skeleton_data] + + # attach instances to subset + if instance.data.get("attachTo"): + instances = attach_instances_to_subset( + instance.data.get("attachTo"), instances + ) + + self.log.info("Creating RoyalRender Publish job ...") + + if not instance.data.get("rrJobs"): + self.log.error(("There is no prior RoyalRender " + "job on the instance.")) + raise KnownPublishError( + "Can't create publish job without prior rendering jobs first") + + rr_job = self.get_job(instance, instances) + instance.data["rrJobs"].append(rr_job) + + # publish job file + publish_job = { + "asset": instance_skeleton_data["asset"], + "frameStart": instance_skeleton_data["frameStart"], + "frameEnd": instance_skeleton_data["frameEnd"], + "fps": instance_skeleton_data["fps"], + "source": instance_skeleton_data["source"], + "user": instance.context.data["user"], + "version": instance.context.data["version"], # workfile version + "intent": instance.context.data.get("intent"), + "comment": instance.context.data.get("comment"), + "job": attr.asdict(rr_job), + "session": legacy_io.Session.copy(), + "instances": instances + } + + metadata_path, rootless_metadata_path = \ + 
create_metadata_path(instance, self.anatomy) + + self.log.info("Writing json file: {}".format(metadata_path)) + with open(metadata_path, "w") as f: + json.dump(publish_job, f, indent=4, sort_keys=True) + + def get_job(self, instance, instances): + """Create RR publishing job. + + Based on provided original instance and additional instances, + create publishing job and return it to be submitted to farm. + + Args: + instance (Instance): Original instance. + instances (list of Instance): List of instances to + be published on farm. + + Returns: + RRJob: RoyalRender publish job. + + """ + data = instance.data.copy() + subset = data["subset"] + jobname = "Publish - {subset}".format(subset=subset) + + # Transfer the environment from the original job to this dependent + # job, so they use the same environment + metadata_path, rootless_metadata_path = \ + create_metadata_path(instance, self.anatomy) + + anatomy_data = instance.context.data["anatomyData"] + + environment = RREnvList({ + "AVALON_PROJECT": anatomy_data["project"]["name"], + "AVALON_ASSET": anatomy_data["asset"], + "AVALON_TASK": anatomy_data["task"]["name"], + "OPENPYPE_USERNAME": anatomy_data["user"] + }) + + # add environments from self.environ_keys + for env_key in self.environ_keys: + if os.getenv(env_key): + environment[env_key] = os.environ[env_key] + + # pass environment keys from self.environ_job_filter + # and collect all pre_ids to wait for + job_environ = {} + jobs_pre_ids = [] + for job in instance.data["rrJobs"]: # type: RRJob + if job.rrEnvList: + job_environ.update( + dict(RREnvList.parse(job.rrEnvList)) + ) + jobs_pre_ids.append(job.PreID) + + for env_j_key in self.environ_job_filter: + if job_environ.get(env_j_key): + environment[env_j_key] = job_environ[env_j_key] + + priority = self.priority or instance.data.get("priority", 50) + + # RR requires an absolute path or all jobs won't show up in rrControl + abs_metadata_path = self.anatomy.fill_root(rootless_metadata_path) + + # command line set in E01__OpenPype__PublishJob.cfg, here only + # additional logging + args = [ + ">", os.path.join(os.path.dirname(abs_metadata_path), + "rr_out.log"), + "2>&1" + ] + + job = RRJob( + Software="OpenPype", + Renderer="Once", + SeqStart=1, + SeqEnd=1, + SeqStep=1, + SeqFileOffset=0, + Version=self._sanitize_version(os.environ.get("OPENPYPE_VERSION")), + SceneName=abs_metadata_path, + # command line arguments + CustomAddCmdFlags=" ".join(args), + IsActive=True, + ImageFilename="execOnce.file", + ImageDir="", + ImageExtension="", + ImagePreNumberLetter="", + SceneOS=get_rr_platform(), + rrEnvList=environment.serialize(), + Priority=priority, + CustomSHotName=jobname, + CompanyProjectName=instance.context.data["projectName"] + ) + + # add assembly jobs as dependencies + if instance.data.get("tileRendering"): + self.log.info("Adding tile assembly jobs as dependencies...") + job.WaitForPreIDs += instance.data.get("assemblySubmissionJobs") + elif instance.data.get("bakingSubmissionJobs"): + self.log.info("Adding baking submission jobs as dependencies...") + job.WaitForPreIDs += instance.data["bakingSubmissionJobs"] + else: + job.WaitForPreIDs += jobs_pre_ids + + return job + + def _sanitize_version(self, version): + """Returns version in format MAJOR.MINORPATCH + + 3.15.7-nightly.2 >> 3.157 + """ + VERSION_REGEX = re.compile( + r"(?P<major>0|[1-9]\d*)" + r"\.(?P<minor>0|[1-9]\d*)" + r"\.(?P<patch>0|[1-9]\d*)" + r"(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?" + r"(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?"
+        ) + + valid_parts = VERSION_REGEX.findall(version) + if len(valid_parts) != 1: + # Return the string unchanged when it is not a valid version + return version + + # Unpack found version + major, minor, patch, pre, post = valid_parts[0] + + return "{}.{}{}".format(major, minor, patch) diff --git a/openpype/modules/royalrender/plugins/publish/submit_jobs_to_royalrender.py b/openpype/modules/royalrender/plugins/publish/submit_jobs_to_royalrender.py new file mode 100644 index 0000000000..8fc8604b83 --- /dev/null +++ b/openpype/modules/royalrender/plugins/publish/submit_jobs_to_royalrender.py @@ -0,0 +1,131 @@ +# -*- coding: utf-8 -*- +"""Submit jobs to RoyalRender.""" +import tempfile +import platform + +import pyblish.api +from openpype.modules.royalrender.api import ( + RRJob, + Api as rrApi, + SubmitterParameter +) +from openpype.pipeline.publish import KnownPublishError + + +class SubmitJobsToRoyalRender(pyblish.api.ContextPlugin): + """Find all jobs, create submission XML and submit it to RoyalRender.""" + label = "Submit jobs to RoyalRender" + order = pyblish.api.IntegratorOrder + 0.3 + targets = ["local"] + + def __init__(self): + super(SubmitJobsToRoyalRender, self).__init__() + self._rr_root = None + self._rr_api = None + self._submission_parameters = [] + + def process(self, context): + rr_settings = ( + context.data + ["system_settings"] + ["modules"] + ["royalrender"] + ) + + if rr_settings["enabled"] is not True: + self.log.warning("RoyalRender module is disabled.") + return + + # iterate over all instances and try to find RRJobs + jobs = [] + instance_rr_path = None + for instance in context: + if isinstance(instance.data.get("rrJob"), RRJob): + jobs.append(instance.data.get("rrJob")) + if instance.data.get("rrJobs"): + if all( + isinstance(job, RRJob) + for job in instance.data.get("rrJobs")): + jobs += instance.data.get("rrJobs") + if instance.data.get("rrPathName"): + instance_rr_path = instance.data["rrPathName"] + + if jobs: + self._rr_root = self._resolve_rr_path(context, instance_rr_path) + if not self._rr_root: + raise KnownPublishError( + ("Missing RoyalRender root.
" + "You need to configure RoyalRender module.")) + self._rr_api = rrApi(self._rr_root) + self._submission_parameters = self.get_submission_parameters() + self.process_submission(jobs) + return + + self.log.info("No RoyalRender jobs found") + + def process_submission(self, jobs): + # type: ([RRJob]) -> None + + idx_pre_id = 0 + for job in jobs: + job.PreID = idx_pre_id + if idx_pre_id > 0: + job.WaitForPreIDs.append(idx_pre_id - 1) + idx_pre_id += 1 + + submission = rrApi.create_submission( + jobs, + self._submission_parameters) + + xml = tempfile.NamedTemporaryFile(suffix=".xml", delete=False) + with open(xml.name, "w") as f: + f.write(submission.serialize()) + + self.log.info("submitting job(s) file: {}".format(xml.name)) + self._rr_api.submit_file(file=xml.name) + + def create_file(self, name, ext, contents=None): + temp = tempfile.NamedTemporaryFile( + dir=self.tempdir, + suffix=ext, + prefix=name + '.', + delete=False, + ) + + if contents: + with open(temp.name, 'w') as f: + f.write(contents) + + return temp.name + + def get_submission_parameters(self): + return [SubmitterParameter("RequiredMemory", "0")] + + @staticmethod + def _resolve_rr_path(context, rr_path_name): + # type: (pyblish.api.Context, str) -> str + rr_settings = ( + context.data + ["system_settings"] + ["modules"] + ["royalrender"] + ) + try: + default_servers = rr_settings["rr_paths"] + project_servers = ( + context.data + ["project_settings"] + ["royalrender"] + ["rr_paths"] + ) + rr_servers = { + k: default_servers[k] + for k in project_servers + if k in default_servers + } + + except (AttributeError, KeyError): + # Handle situation were we had only one url for royal render. + return context.data["defaultRRPath"][platform.system().lower()] + + return rr_servers[rr_path_name][platform.system().lower()] diff --git a/openpype/modules/royalrender/rr_job.py b/openpype/modules/royalrender/rr_job.py index c660eceac7..b85ac592f8 100644 --- a/openpype/modules/royalrender/rr_job.py +++ b/openpype/modules/royalrender/rr_job.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- """Python wrapper for RoyalRender XML job file.""" +import sys from xml.dom import minidom as md import attr from collections import namedtuple, OrderedDict @@ -8,8 +9,36 @@ from collections import namedtuple, OrderedDict CustomAttribute = namedtuple("CustomAttribute", ["name", "value"]) +def get_rr_platform(): + # type: () -> str + """Returns name of platform used in rr jobs.""" + if sys.platform.lower() in ["win32", "win64"]: + return "windows" + elif sys.platform.lower() == "darwin": + return "mac" + else: + return "linux" + + +class RREnvList(dict): + def serialize(self): + # VariableA=ValueA~~~VariableB=ValueB + return "~~~".join( + ["{}={}".format(k, v) for k, v in sorted(self.items())]) + + @staticmethod + def parse(data): + # type: (str) -> RREnvList + """Parse rrEnvList string and return it as RREnvList object.""" + out = RREnvList() + for var in data.split("~~~"): + k, v = var.split("=") + out[k] = v + return out + + @attr.s -class RRJob: +class RRJob(object): """Mapping of Royal Render job file to a data class.""" # Required @@ -35,7 +64,7 @@ class RRJob: # Is the job enabled for submission? # enabled by default - IsActive = attr.ib() # type: str + IsActive = attr.ib() # type: bool # Sequence settings of this job SeqStart = attr.ib() # type: int @@ -60,7 +89,7 @@ class RRJob: # If you render a single file, e.g. Quicktime or Avi, then you have to # set this value. Videos have to be rendered at once on one client. 
- ImageSingleOutputFile = attr.ib(default="false") # type: str + ImageSingleOutputFile = attr.ib(default=False) # type: bool # Semi-Required (required for some render applications) # ----------------------------------------------------- @@ -87,7 +116,7 @@ class RRJob: # Frame Padding of the frame number in the rendered filename. # Some render config files are setting the padding at render time. - ImageFramePadding = attr.ib(default=None) # type: str + ImageFramePadding = attr.ib(default=None) # type: int # Some render applications support overriding the image format at # the render commandline. @@ -108,7 +137,7 @@ class RRJob: # jobs send from this machine. If a job with the PreID was found, then # this jobs waits for the other job. Note: This flag can be used multiple # times to wait for multiple jobs. - WaitForPreID = attr.ib(default=None) # type: int + WaitForPreIDs = attr.ib(factory=list) # type: list # List of submitter options per job # list item must be of `SubmitterParameter` type @@ -120,6 +149,9 @@ class RRJob: # list item must be of `CustomAttribute` named tuple CustomAttributes = attr.ib(factory=list) # type: list + # This is used to hold command line arguments for Execute job + CustomAddCmdFlags = attr.ib(default=None) # type: str + # Additional information for subsequent publish script and # for better display in rrControl UserName = attr.ib(default=None) # type: str @@ -129,6 +161,7 @@ class RRJob: CustomUserInfo = attr.ib(default=None) # type: str SubmitMachine = attr.ib(default=None) # type: str Color_ID = attr.ib(default=2) # type: int + CompanyProjectName = attr.ib(default=None) # type: str RequiredLicenses = attr.ib(default=None) # type: str @@ -137,6 +170,10 @@ class RRJob: TotalFrames = attr.ib(default=None) # type: int Tiled = attr.ib(default=None) # type: str + # Environment + # only used in RR 8.3 and newer + rrEnvList = attr.ib(default=None) # type: str + class SubmitterParameter: """Wrapper for Submitter Parameters.""" @@ -160,7 +197,7 @@ class SubmitterParameter: @attr.s -class SubmitFile: +class SubmitFile(object): """Class wrapping Royal Render submission XML file.""" # Syntax version of the submission file. @@ -169,11 +206,11 @@ class SubmitFile: # Delete submission file after processing DeleteXML = attr.ib(default=1) # type: int - # List of submitter options per job + # List of the submitter options per job. # list item must be of `SubmitterParameter` type SubmitterParameters = attr.ib(factory=list) # type: list - # List of job is submission batch. + # List of the jobs in submission batch. # list item must be of type `RRJob` Jobs = attr.ib(factory=list) # type: list @@ -225,7 +262,7 @@ class SubmitFile: # foo=bar~baz~goo self._process_submitter_parameters( self.SubmitterParameters, root, job_file) - + root.appendChild(job_file) for job in self.Jobs: # type: RRJob if not isinstance(job, RRJob): raise AttributeError( @@ -241,16 +278,28 @@ class SubmitFile: job, dict_factory=OrderedDict, filter=filter_data) serialized_job.pop("CustomAttributes") serialized_job.pop("SubmitterParameters") + # we are handling `WaitForPreIDs` separately. 
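+            # (RR expects one `WaitForPreID` XML element per dependency
+            # instead of a single joined value, so the popped list is
+            # written out as repeated elements further below.)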
+ wait_pre_ids = serialized_job.pop("WaitForPreIDs", []) for custom_attr in job_custom_attributes: # type: CustomAttribute serialized_job["Custom{}".format( custom_attr.name)] = custom_attr.value for item, value in serialized_job.items(): - xml_attr = root.create(item) + xml_attr = root.createElement(item) xml_attr.appendChild( - root.createTextNode(value) + root.createTextNode(str(value)) ) xml_job.appendChild(xml_attr) + # WaitForPreID - can be used multiple times + for pre_id in wait_pre_ids: + xml_attr = root.createElement("WaitForPreID") + xml_attr.appendChild( + root.createTextNode(str(pre_id)) + ) + xml_job.appendChild(xml_attr) + + job_file.appendChild(xml_job) + return root.toprettyxml(indent="\t") diff --git a/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype.png b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype.png new file mode 100644 index 0000000000..68c5aec117 Binary files /dev/null and b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype.png differ diff --git a/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype__PublishJob.cfg b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype__PublishJob.cfg new file mode 100644 index 0000000000..864eeaf15a --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype__PublishJob.cfg @@ -0,0 +1,71 @@ +IconApp= E01__OpenPype.png +Name= OpenPype +rendererName= Once +Version= 1 +Version_Minor= 0 +Type=Execute +TYPEv9=Execute +ExecuteJobType=Once + + +################################# [Windows] [Linux] [Osx] ################################## + + +CommandLine=> + +CommandLine= + + +::win CommandLine= set "CUDA_VISIBLE_DEVICES=" +::lx CommandLine= setenv CUDA_VISIBLE_DEVICES +::osx CommandLine= setenv CUDA_VISIBLE_DEVICES + + +CommandLine= + + +CommandLine= + + +CommandLine= + + +CommandLine= "" --headless publish + --targets royalrender + --targets farm + + + +CommandLine= + + + + +################################## Render Settings ################################## + + + +################################## Submitter Settings ################################## +StartMultipleInstances= 0~0 +SceneFileExtension= *.json +AllowImageNameChange= 0 +AllowImageDirChange= 0 +SequenceDivide= 0~1 +PPSequenceCheck=0~0 +PPCreateSmallVideo=0~0 +PPCreateFullVideo=0~0 +AllowLocalRenderOut= 0~0 + + +################################## Client Settings ################################## + +IconApp=E01__OpenPype.png + +licenseFailLine= + +errorSearchLine= + +permanentErrorSearchLine = + +Frozen_MinCoreUsage=0.3 +Frozen_Minutes=30 diff --git a/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype___global.inc b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype___global.inc new file mode 100644 index 0000000000..ba38337340 --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_config/E01__OpenPype___global.inc @@ -0,0 +1,2 @@ +IconApp= E01__OpenPype.png +Name= OpenPype diff --git a/openpype/modules/royalrender/rr_root/render_apps/_install_paths/OpenPype.cfg b/openpype/modules/royalrender/rr_root/render_apps/_install_paths/OpenPype.cfg new file mode 100644 index 0000000000..07f7547d29 --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_install_paths/OpenPype.cfg @@ -0,0 +1,12 @@ +[Windows] +Executable= openpype_console.exe +Path= OS; \OpenPype\*\openpype_console.exe +Path= 32; \OpenPype\*\openpype_console.exe + +[Linux] +Executable= openpype_console +Path= OS; 
/opt/openpype/*/openpype_console + +[Mac] +Executable= openpype_console +Path= OS; /Applications/OpenPype*/Content/MacOS/openpype_console diff --git a/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/OpenPypeEnvironment.cfg b/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/OpenPypeEnvironment.cfg new file mode 100644 index 0000000000..70f0bc2e24 --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/OpenPypeEnvironment.cfg @@ -0,0 +1,11 @@ +PrePostType= pre +CommandLine= + +CommandLine= rrPythonconsole" > "render_apps/_prepost_scripts/PreOpenPypeInjectEnvironments.py" + +CommandLine= + + +CommandLine= "" +CommandLine= + diff --git a/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/PreOpenPypeInjectEnvironments.py b/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/PreOpenPypeInjectEnvironments.py new file mode 100644 index 0000000000..891de9594c --- /dev/null +++ b/openpype/modules/royalrender/rr_root/render_apps/_prepost_scripts/PreOpenPypeInjectEnvironments.py @@ -0,0 +1,4 @@ +# -*- coding: utf-8 -*- +import os + +os.environ["OPENYPYPE_TESTVAR"] = "OpenPype was here" diff --git a/openpype/modules/settings_action.py b/openpype/modules/settings_action.py index 90092a133d..5950fbd910 100644 --- a/openpype/modules/settings_action.py +++ b/openpype/modules/settings_action.py @@ -1,3 +1,4 @@ +from openpype import AYON_SERVER_ENABLED from openpype.modules import OpenPypeModule, ITrayAction @@ -10,6 +11,8 @@ class SettingsAction(OpenPypeModule, ITrayAction): def initialize(self, _modules_settings): # This action is always enabled self.enabled = True + if AYON_SERVER_ENABLED: + self.enabled = False # User role # TODO should be changeable @@ -80,6 +83,8 @@ class LocalSettingsAction(OpenPypeModule, ITrayAction): def initialize(self, _modules_settings): # This action is always enabled self.enabled = True + if AYON_SERVER_ENABLED: + self.enabled = False # Tray attributes self.settings_window = None diff --git a/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py b/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py index 0b03ac2e5d..db2e4eadc5 100644 --- a/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py +++ b/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py @@ -1,7 +1,5 @@ -import os - import pyblish.api -from openpype.lib.mongo import OpenPypeMongoConnection +from openpype.client.mongo import OpenPypeMongoConnection class CollectShotgridEntities(pyblish.api.ContextPlugin): @@ -14,7 +12,7 @@ class CollectShotgridEntities(pyblish.api.ContextPlugin): avalon_project = context.data.get("projectEntity") avalon_asset = context.data.get("assetEntity") - avalon_task_name = os.getenv("AVALON_TASK") + avalon_task_name = context.data.get("task") self.log.info(avalon_project) self.log.info(avalon_asset) diff --git a/openpype/modules/slack/launch_hooks/pre_python2_vendor.py b/openpype/modules/slack/launch_hooks/pre_python2_vendor.py index 0f4bc22a34..891c92bb7a 100644 --- a/openpype/modules/slack/launch_hooks/pre_python2_vendor.py +++ b/openpype/modules/slack/launch_hooks/pre_python2_vendor.py @@ -1,5 +1,5 @@ import os -from openpype.lib import PreLaunchHook +from openpype.lib.applications import PreLaunchHook from openpype_modules.slack import SLACK_MODULE_DIR @@ -8,6 +8,7 @@ class PrePython2Support(PreLaunchHook): Path to vendor modules is added to the beginning of PYTHONPATH. 
""" + launch_types = set() def execute(self): if not self.application.use_python_2: diff --git a/openpype/modules/slack/plugins/publish/integrate_slack_api.py b/openpype/modules/slack/plugins/publish/integrate_slack_api.py index 86c97586d2..4c5a39318a 100644 --- a/openpype/modules/slack/plugins/publish/integrate_slack_api.py +++ b/openpype/modules/slack/plugins/publish/integrate_slack_api.py @@ -350,6 +350,10 @@ class SlackPython3Operations(AbstractSlackOperations): self.log.warning("Cannot pull user info, " "mentions won't work", exc_info=True) return [], [] + except Exception: + self.log.warning("Cannot pull user info, " + "mentions won't work", exc_info=True) + return [], [] return users, groups @@ -377,8 +381,12 @@ class SlackPython3Operations(AbstractSlackOperations): return response.data["ts"], file_ids except SlackApiError as e: # # You will get a SlackApiError if "ok" is False - error_str = self._enrich_error(str(e.response["error"]), channel) - self.log.warning("Error happened {}".format(error_str)) + if e.response.get("error"): + error_str = self._enrich_error(str(e.response["error"]), channel) + else: + error_str = self._enrich_error(str(e), channel) + self.log.warning("Error happened: {}".format(error_str), + exc_info=True) except Exception as e: error_str = self._enrich_error(str(e), channel) self.log.warning("Not SlackAPI error", exc_info=True) @@ -448,12 +456,14 @@ class SlackPython2Operations(AbstractSlackOperations): if response.get("error"): error_str = self._enrich_error(str(response.get("error")), channel) - self.log.warning("Error happened: {}".format(error_str)) + self.log.warning("Error happened: {}".format(error_str), + exc_info=True) else: return response["ts"], file_ids except Exception as e: # You will get a SlackApiError if "ok" is False error_str = self._enrich_error(str(e), channel) - self.log.warning("Error happened: {}".format(error_str)) + self.log.warning("Error happened: {}".format(error_str), + exc_info=True) return None, [] diff --git a/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py index bbc220945c..047e35e3ac 100644 --- a/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py +++ b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py @@ -1,12 +1,8 @@ import os import shutil -from openpype.client.entities import ( - get_representations, - get_project -) - -from openpype.lib import PreLaunchHook +from openpype.client.entities import get_representations +from openpype.lib.applications import PreLaunchHook, LaunchTypes from openpype.lib.profiles_filtering import filter_profiles from openpype.modules.sync_server.sync_server import ( download_last_published_workfile, @@ -32,6 +28,7 @@ class CopyLastPublishedWorkfile(PreLaunchHook): "nuke", "nukeassist", "nukex", "hiero", "nukestudio", "maya", "harmony", "celaction", "flame", "fusion", "houdini", "tvpaint"] + launch_types = {LaunchTypes.local} def execute(self): """Check if local workfile doesn't exist, else copy it. 
@@ -119,6 +116,18 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
             "task": {"name": task_name, "type": task_type}
         }
 
+        # Add version filter
+        workfile_version = self.launch_context.data.get("workfile_version", -1)
+        if workfile_version not in {None, "last"} and workfile_version > 0:
+            context_filters["version"] = self.launch_context.data[
+                "workfile_version"
+            ]
+
+            # Only one version will be matched
+            version_index = 0
+        else:
+            version_index = workfile_version
+
         workfile_representations = list(get_representations(
             project_name,
             context_filters=context_filters
@@ -136,9 +145,10 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
             lambda r: r["context"].get("version") is not None,
             workfile_representations
         )
-        workfile_representation = max(
+        # Get workfile version
+        workfile_representation = sorted(
             filtered_repres, key=lambda r: r["context"]["version"]
-        )
+        )[version_index]
 
         # Copy file and substitute path
         last_published_workfile_path = download_last_published_workfile(
diff --git a/openpype/plugins/load/add_site.py b/openpype/modules/sync_server/plugins/load/add_site.py
similarity index 100%
rename from openpype/plugins/load/add_site.py
rename to openpype/modules/sync_server/plugins/load/add_site.py
diff --git a/openpype/plugins/load/remove_site.py b/openpype/modules/sync_server/plugins/load/remove_site.py
similarity index 100%
rename from openpype/plugins/load/remove_site.py
rename to openpype/modules/sync_server/plugins/load/remove_site.py
diff --git a/openpype/modules/sync_server/sync_server.py b/openpype/modules/sync_server/sync_server.py
index 98065b68a0..1b7b2dc3a6 100644
--- a/openpype/modules/sync_server/sync_server.py
+++ b/openpype/modules/sync_server/sync_server.py
@@ -536,8 +536,8 @@ class SyncServerThread(threading.Thread):
                     _site_is_working(self.module, project_name, remote_site,
                                      remote_site_config)]):
                 self.log.debug(
-                    "Some of the sites {} - {} is not working properly".format(
-                        local_site, remote_site
+                    "Some of the sites {} - {} in {} are not working properly".format(  # noqa
+                        local_site, remote_site, project_name
                     )
                 )
diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py
index b85b045bd9..8a92697920 100644
--- a/openpype/modules/sync_server/sync_server_module.py
+++ b/openpype/modules/sync_server/sync_server_module.py
@@ -15,7 +15,7 @@ from openpype.client import (
     get_representations,
     get_representation_by_id,
 )
-from openpype.modules import OpenPypeModule, ITrayModule
+from openpype.modules import OpenPypeModule, ITrayModule, IPluginPaths
 from openpype.settings import (
     get_project_settings,
     get_system_settings,
@@ -34,12 +34,17 @@ from openpype.settings.constants import (
 
 from .providers.local_drive import LocalDriveHandler
 from .providers import lib
-from .utils import time_function, SyncStatus, SiteAlreadyPresentError
+from .utils import (
+    time_function,
+    SyncStatus,
+    SiteAlreadyPresentError,
+    SYNC_SERVER_ROOT,
+)
 
 log = Logger.get_logger("SyncServer")
 
 
-class SyncServerModule(OpenPypeModule, ITrayModule):
+class SyncServerModule(OpenPypeModule, ITrayModule, IPluginPaths):
     """
        Synchronization server that is syncing published files from local to
       any of implemented providers (like GDrive, S3 etc.) 
@@ -136,6 +141,27 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
         # projects that long tasks are running on
         self.projects_processed = set()
 
+    def get_plugin_paths(self):
+        """Sync server plugin paths."""
+        return {
+            "load": [os.path.join(SYNC_SERVER_ROOT, "plugins", "load")]
+        }
+
+    def get_site_icons(self):
+        """Icons for sites.
+
+        Returns:
+            dict[str, str]: Path to icon by site.
+        """
+
+        resource_path = os.path.join(
+            SYNC_SERVER_ROOT, "providers", "resources"
+        )
+        return {
+            provider: "{}/{}.png".format(resource_path, provider)
+            for provider in ["studio", "local_drive", "gdrive"]
+        }
+
     """ Start of Public API """
     def add_site(self, project_name, representation_id, site_name=None,
                  force=False, priority=None, reset_timer=False):
@@ -204,6 +230,58 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
         if remove_local_files:
             self._remove_local_file(project_name, representation_id,
                                     site_name)
 
+    def get_progress_for_repre(self, doc, active_site, remote_site):
+        """
+            Calculates average progress for representation.
+            If site has created_dt >> fully available >> progress == 1
+            Could be calculated in aggregate if it would be too slow
+            Args:
+                doc(dict): representation dict
+                active_site(str): active (local) site name
+                remote_site(str): remote site name
+            Returns:
+                (dict) with active and remote sites progress
+                {'studio': 1.0, 'gdrive': -1} - gdrive site is not present
+                    -1 is used to highlight the site should be added
+                {'studio': 1.0, 'gdrive': 0.0} - gdrive site is present, not
+                    uploaded yet
+        """
+        progress = {active_site: -1,
+                    remote_site: -1}
+        if not doc:
+            return progress
+
+        files = {active_site: 0, remote_site: 0}
+        doc_files = doc.get("files") or []
+        for doc_file in doc_files:
+            if not isinstance(doc_file, dict):
+                continue
+
+            sites = doc_file.get("sites") or []
+            for site in sites:
+                if (
+                    # Pype 2 compatibility
+                    not isinstance(site, dict)
+                    # Check if site name is one of progress sites
+                    or site["name"] not in progress
+                ):
+                    continue
+
+                files[site["name"]] += 1
+                norm_progress = max(progress[site["name"]], 0)
+                if site.get("created_dt"):
+                    progress[site["name"]] = norm_progress + 1
+                elif site.get("progress"):
+                    progress[site["name"]] = norm_progress + site["progress"]
+                else:  # site exists, might be failed, do not add again
+                    progress[site["name"]] = 0
+
+        # for example 13 fully avail. files out of 26 >> 13/26 = 0.5
+        avg_progress = {}
+        avg_progress[active_site] = \
+            progress[active_site] / max(files[active_site], 1)
+        avg_progress[remote_site] = \
+            progress[remote_site] / max(files[remote_site], 1)
+        return avg_progress
+
     def compute_resource_sync_sites(self, project_name):
        """Get available resource sync sites state for publish process. 
@@ -845,10 +923,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule): (str): full absolut path to directory with hooks for the module """ - return os.path.join( - os.path.dirname(os.path.abspath(__file__)), - "launch_hooks" - ) + return os.path.join(SYNC_SERVER_ROOT, "launch_hooks") # Needs to be refactored after Settings are updated # # Methods for Settings to get appriate values to fill forms diff --git a/openpype/modules/sync_server/utils.py b/openpype/modules/sync_server/utils.py index 4caa01e9d7..b2f855539f 100644 --- a/openpype/modules/sync_server/utils.py +++ b/openpype/modules/sync_server/utils.py @@ -1,9 +1,12 @@ +import os import time from openpype.lib import Logger log = Logger.get_logger("SyncServer") +SYNC_SERVER_ROOT = os.path.dirname(os.path.abspath(__file__)) + class ResumableError(Exception): """Error which could be temporary, skip current loop, try next time""" diff --git a/openpype/modules/timers_manager/launch_hooks/post_start_timer.py b/openpype/modules/timers_manager/launch_hooks/post_start_timer.py index d6ae013403..76c3cca33e 100644 --- a/openpype/modules/timers_manager/launch_hooks/post_start_timer.py +++ b/openpype/modules/timers_manager/launch_hooks/post_start_timer.py @@ -1,4 +1,4 @@ -from openpype.lib import PostLaunchHook +from openpype.lib.applications import PostLaunchHook, LaunchTypes class PostStartTimerHook(PostLaunchHook): @@ -7,6 +7,7 @@ class PostStartTimerHook(PostLaunchHook): This module requires enabled TimerManager module. """ order = None + launch_types = {LaunchTypes.local} def execute(self): project_name = self.data.get("project_name") diff --git a/openpype/pipeline/__init__.py b/openpype/pipeline/__init__.py index d656d58adc..8f370d389b 100644 --- a/openpype/pipeline/__init__.py +++ b/openpype/pipeline/__init__.py @@ -13,6 +13,7 @@ from .create import ( BaseCreator, Creator, AutoCreator, + HiddenCreator, CreatedInstance, CreatorError, @@ -88,11 +89,12 @@ from .context_tools import ( deregister_host, get_process_id, + get_global_context, get_current_context, get_current_host_name, get_current_project_name, get_current_asset_name, - get_current_task_name, + get_current_task_name ) install = install_host uninstall = uninstall_host @@ -113,6 +115,7 @@ __all__ = ( "BaseCreator", "Creator", "AutoCreator", + "HiddenCreator", "CreatedInstance", "CreatorError", @@ -186,6 +189,7 @@ __all__ = ( "deregister_host", "get_process_id", + "get_global_context", "get_current_context", "get_current_host_name", "get_current_project_name", diff --git a/openpype/pipeline/anatomy.py b/openpype/pipeline/anatomy.py index 30748206a3..029b5cc1ff 100644 --- a/openpype/pipeline/anatomy.py +++ b/openpype/pipeline/anatomy.py @@ -5,17 +5,19 @@ import platform import collections import numbers +import ayon_api import six import time +from openpype import AYON_SERVER_ENABLED from openpype.settings.lib import ( get_local_settings, ) from openpype.settings.constants import ( DEFAULT_PROJECT_KEY ) - from openpype.client import get_project +from openpype.lib import Logger, get_local_site_id from openpype.lib.path_templates import ( TemplateUnsolved, TemplateResult, @@ -23,7 +25,6 @@ from openpype.lib.path_templates import ( TemplatesDict, FormatObject, ) -from openpype.lib.log import Logger from openpype.modules import ModulesManager log = Logger.get_logger(__name__) @@ -475,6 +476,13 @@ class Anatomy(BaseAnatomy): Union[Dict[str, str], None]): Local root overrides. 
""" + if AYON_SERVER_ENABLED: + if not project_name: + return + return ayon_api.get_project_roots_for_site( + project_name, get_local_site_id() + ) + if local_settings is None: local_settings = get_local_settings() diff --git a/openpype/pipeline/colorspace.py b/openpype/pipeline/colorspace.py index 13b235d5dd..02a6b90f25 100644 --- a/openpype/pipeline/colorspace.py +++ b/openpype/pipeline/colorspace.py @@ -448,7 +448,8 @@ def get_imageio_config( host_name, project_settings=None, anatomy_data=None, - anatomy=None + anatomy=None, + env=None ): """Returns config data from settings @@ -461,6 +462,7 @@ def get_imageio_config( project_settings (Optional[dict]): Project settings. anatomy_data (Optional[dict]): anatomy formatting data. anatomy (Optional[Anatomy]): Anatomy object. + env (Optional[dict]): Environment variables. Returns: dict: config path data or empty dict @@ -533,13 +535,13 @@ def get_imageio_config( if override_global_config: config_data = _get_config_data( - host_ocio_config["filepath"], formatting_data + host_ocio_config["filepath"], formatting_data, env ) else: # get config path from global config_global = imageio_global["ocio_config"] config_data = _get_config_data( - config_global["filepath"], formatting_data + config_global["filepath"], formatting_data, env ) if not config_data: @@ -551,7 +553,7 @@ def get_imageio_config( return config_data -def _get_config_data(path_list, anatomy_data): +def _get_config_data(path_list, anatomy_data, env=None): """Return first existing path in path list. If template is used in path inputs, @@ -561,14 +563,17 @@ def _get_config_data(path_list, anatomy_data): Args: path_list (list[str]): list of abs paths anatomy_data (dict): formatting data + env (Optional[dict]): Environment variables. Returns: dict: config data """ formatting_data = deepcopy(anatomy_data) + environment_vars = env or dict(**os.environ) + # format the path for potential env vars - formatting_data.update(dict(**os.environ)) + formatting_data.update(environment_vars) # first try host config paths for path_ in path_list: diff --git a/openpype/pipeline/context_tools.py b/openpype/pipeline/context_tools.py index 97a5c1ba69..f567118062 100644 --- a/openpype/pipeline/context_tools.py +++ b/openpype/pipeline/context_tools.py @@ -21,6 +21,7 @@ from openpype.client import ( from openpype.lib.events import emit_event from openpype.modules import load_modules, ModulesManager from openpype.settings import get_project_settings +from openpype.tests.lib import is_in_tests from .publish.lib import filter_pyblish_plugins from .anatomy import Anatomy @@ -35,7 +36,7 @@ from . import ( register_inventory_action_path, register_creator_plugin_path, deregister_loader_plugin_path, - deregister_inventory_action_path, + deregister_inventory_action_path ) @@ -142,6 +143,10 @@ def install_host(host): else: pyblish.api.register_target("local") + if is_in_tests(): + print("Registering pyblish target: automated") + pyblish.api.register_target("automated") + project_name = os.environ.get("AVALON_PROJECT") host_name = os.environ.get("AVALON_APP") @@ -320,7 +325,7 @@ def get_current_host_name(): """Current host name. Function is based on currently registered host integration or environment - variant 'AVALON_APP'. + variable 'AVALON_APP'. Returns: Union[str, None]: Name of host integration in current process or None. @@ -333,6 +338,26 @@ def get_current_host_name(): def get_global_context(): + """Global context defined in environment variables. 
+
+    Values here may not reflect current context of host integration. The
+    function can be used on startup before a host is registered.
+
+    Use 'get_current_context' to make sure you'll get current host integration
+    context info.
+
+    Example:
+        {
+            "project_name": "Commercial",
+            "asset_name": "Bunny",
+            "task_name": "Animation",
+        }
+
+    Returns:
+        dict[str, Union[str, None]]: Context defined with environment
+            variables.
+    """
+
     return {
         "project_name": os.environ.get("AVALON_PROJECT"),
         "asset_name": os.environ.get("AVALON_ASSET"),
diff --git a/openpype/pipeline/create/__init__.py b/openpype/pipeline/create/__init__.py
index 5eee18df0f..94d575a776 100644
--- a/openpype/pipeline/create/__init__.py
+++ b/openpype/pipeline/create/__init__.py
@@ -2,6 +2,7 @@ from .constants import (
     SUBSET_NAME_ALLOWED_SYMBOLS,
     DEFAULT_SUBSET_TEMPLATE,
     PRE_CREATE_THUMBNAIL_KEY,
+    DEFAULT_VARIANT_VALUE,
 )
 
 from .utils import (
@@ -50,6 +51,7 @@ __all__ = (
     "SUBSET_NAME_ALLOWED_SYMBOLS",
     "DEFAULT_SUBSET_TEMPLATE",
     "PRE_CREATE_THUMBNAIL_KEY",
+    "DEFAULT_VARIANT_VALUE",
 
     "get_last_versions_for_instances",
     "get_next_versions_for_instances",
@@ -74,6 +76,8 @@ __all__ = (
     "register_creator_plugin_path",
     "deregister_creator_plugin_path",
 
+    "cache_and_get_instances",
+
     "CreatedInstance",
 
     "CreateContext",
diff --git a/openpype/pipeline/create/constants.py b/openpype/pipeline/create/constants.py
index 375cfc4a12..7d1d0154e9 100644
--- a/openpype/pipeline/create/constants.py
+++ b/openpype/pipeline/create/constants.py
@@ -1,10 +1,12 @@
 SUBSET_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_."
 DEFAULT_SUBSET_TEMPLATE = "{family}{Variant}"
 PRE_CREATE_THUMBNAIL_KEY = "thumbnail_source"
+DEFAULT_VARIANT_VALUE = "Main"
 
 
 __all__ = (
     "SUBSET_NAME_ALLOWED_SYMBOLS",
     "DEFAULT_SUBSET_TEMPLATE",
     "PRE_CREATE_THUMBNAIL_KEY",
+    "DEFAULT_VARIANT_VALUE",
 )
diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py
index 98fcee5fe5..3076efcde7 100644
--- a/openpype/pipeline/create/context.py
+++ b/openpype/pipeline/create/context.py
@@ -1165,8 +1165,8 @@ class CreatedInstance:
         Args:
             instance_data (Dict[str, Any]): Data in a structure ready for
                 'CreatedInstance' object.
-            creator (Creator): Creator plugin which is creating the instance
-                of for which the instance belong.
+            creator (BaseCreator): Creator plugin which is creating the
+                instance, or to which the instance belongs.
         """
 
         instance_data = copy.deepcopy(instance_data)
@@ -1979,7 +1979,11 @@ class CreateContext:
         if pre_create_data is None:
             pre_create_data = {}
 
-        precreate_attr_defs = creator.get_pre_create_attr_defs() or []
+        precreate_attr_defs = []
+        # Hidden creators do not have or need the pre-create attributes.
+        if isinstance(creator, Creator):
+            precreate_attr_defs = creator.get_pre_create_attr_defs()
+
         # Create default values of precreate data
         _pre_create_data = get_default_values(precreate_attr_defs)
         # Update passed precreate data to default values
@@ -2121,7 +2125,7 @@ class CreateContext:
     def reset_instances(self):
         """Reload instances"""
 
-        self._instances_by_id = {}
+        self._instances_by_id = collections.OrderedDict()
 
         # Collect instances
        error_message = "Collection of instances for creator {} failed. 
{}" diff --git a/openpype/pipeline/create/creator_plugins.py b/openpype/pipeline/create/creator_plugins.py index 947a90ef08..38d6b6f465 100644 --- a/openpype/pipeline/create/creator_plugins.py +++ b/openpype/pipeline/create/creator_plugins.py @@ -1,4 +1,3 @@ -import os import copy import collections @@ -20,6 +19,7 @@ from openpype.pipeline.plugin_discover import ( deregister_plugin_path ) +from .constants import DEFAULT_VARIANT_VALUE from .subset_name import get_subset_name from .utils import get_next_versions_for_instances from .legacy_create import LegacyCreator @@ -517,7 +517,7 @@ class Creator(BaseCreator): default_variants = [] # Default variant used in 'get_default_variant' - default_variant = None + _default_variant = None # Short description of family # - may not be used if `get_description` is overriden @@ -543,6 +543,21 @@ class Creator(BaseCreator): # - similar to instance attribute definitions pre_create_attr_defs = [] + def __init__(self, *args, **kwargs): + cls = self.__class__ + + # Fix backwards compatibility for plugins which override + # 'default_variant' attribute directly + if not isinstance(cls.default_variant, property): + # Move value from 'default_variant' to '_default_variant' + self._default_variant = self.default_variant + # Create property 'default_variant' on the class + cls.default_variant = property( + cls._get_default_variant_wrap, + cls._set_default_variant_wrap + ) + super(Creator, self).__init__(*args, **kwargs) + @property def show_order(self): """Order in which is creator shown in UI. @@ -595,10 +610,10 @@ class Creator(BaseCreator): def get_default_variants(self): """Default variant values for UI tooltips. - Replacement of `defatults` attribute. Using method gives ability to - have some "logic" other than attribute values. + Replacement of `default_variants` attribute. Using method gives + ability to have some "logic" other than attribute values. - By default returns `default_variants` value. + By default, returns `default_variants` value. Returns: List[str]: Whisper variants for user input. @@ -606,17 +621,63 @@ class Creator(BaseCreator): return copy.deepcopy(self.default_variants) - def get_default_variant(self): + def get_default_variant(self, only_explicit=False): """Default variant value that will be used to prefill variant input. This is for user input and value may not be content of result from `get_default_variants`. - Can return `None`. In that case first element from - `get_default_variants` should be used. + Note: + This method does not allow to have empty string as + default variant. + + Args: + only_explicit (Optional[bool]): If True, only explicit default + variant from '_default_variant' will be returned. + + Returns: + str: Variant value. """ - return self.default_variant + if only_explicit or self._default_variant: + return self._default_variant + + for variant in self.get_default_variants(): + return variant + return DEFAULT_VARIANT_VALUE + + def _get_default_variant_wrap(self): + """Default variant value that will be used to prefill variant input. + + Wrapper for 'get_default_variant'. + + Notes: + This method is wrapper for 'get_default_variant' + for 'default_variant' property, so creator can override + the method. + + Returns: + str: Variant value. + """ + + return self.get_default_variant() + + def _set_default_variant_wrap(self, variant): + """Set default variant value. + + This method is needed for automated settings overrides which are + changing attributes based on keys in settings. 
+ + Args: + variant (str): New default variant value. + """ + + self._default_variant = variant + + default_variant = property( + _get_default_variant_wrap, + _set_default_variant_wrap + ) def get_pre_create_attr_defs(self): """Plugin attribute definitions needed for creation. @@ -660,12 +721,12 @@ def discover_convertor_plugins(*args, **kwargs): def discover_legacy_creator_plugins(): - from openpype.lib import Logger + from openpype.pipeline import get_current_project_name log = Logger.get_logger("CreatorDiscover") plugins = discover(LegacyCreator) - project_name = os.environ.get("AVALON_PROJECT") + project_name = get_current_project_name() system_settings = get_system_settings() project_settings = get_project_settings(project_name) for plugin in plugins: diff --git a/openpype/pipeline/delivery.py b/openpype/pipeline/delivery.py index ddde45d4da..bbd01f7a4e 100644 --- a/openpype/pipeline/delivery.py +++ b/openpype/pipeline/delivery.py @@ -178,7 +178,9 @@ def deliver_sequence( anatomy_data, format_dict, report_items, - log + log, + has_renumbered_frame=False, + new_frame_start=0 ): """ For Pype2(mainly - works in 3 too) where representation might not contain files. @@ -294,17 +296,30 @@ def deliver_sequence( src_head = src_collection.head src_tail = src_collection.tail uploaded = 0 + first_frame = min(src_collection.indexes) for index in src_collection.indexes: src_padding = src_collection.format("{padding}") % index src_file_name = "{}{}{}".format(src_head, src_padding, src_tail) src = os.path.normpath( os.path.join(dir_path, src_file_name) ) - - dst_padding = dst_collection.format("{padding}") % index + dst_index = index + if has_renumbered_frame: + # Calculate offset between first frame and current frame + # - '0' for first frame + offset = new_frame_start - first_frame + # Add offset to new frame start + dst_index = index + offset + if dst_index < 0: + msg = "Renumber frame has a smaller number than original frame" # noqa + report_items[msg].append(src_file_name) + log.warning("{} <{}>".format(msg, context)) + return report_items, 0 + dst_padding = dst_collection.format("{padding}") % dst_index dst = "{}{}{}".format(dst_head, dst_padding, dst_tail) log.debug("Copying single: {} -> {}".format(src, dst)) _copy_file(src, dst) + uploaded += 1 return report_items, uploaded diff --git a/openpype/pipeline/farm/pyblish_functions.py b/openpype/pipeline/farm/pyblish_functions.py new file mode 100644 index 0000000000..fe3ab97de8 --- /dev/null +++ b/openpype/pipeline/farm/pyblish_functions.py @@ -0,0 +1,889 @@ +import copy +import attr +import pyblish.api +import os +import clique +from copy import deepcopy +import re +import warnings + +from openpype.pipeline import ( + get_current_project_name, + get_representation_path, + Anatomy, +) +from openpype.client import ( + get_last_version_by_subset_name, + get_representations +) +from openpype.lib import Logger +from openpype.pipeline.publish import KnownPublishError +from openpype.pipeline.farm.patterning import match_aov_pattern + + +@attr.s +class TimeData(object): + """Structure used to handle time related data.""" + start = attr.ib(type=int) + end = attr.ib(type=int) + fps = attr.ib() + step = attr.ib(default=1, type=int) + handle_start = attr.ib(default=0, type=int) + handle_end = attr.ib(default=0, type=int) + + +def remap_source(path, anatomy): + """Try to remap path to rootless path. + + Args: + path (str): Path to be remapped to rootless. + anatomy (Anatomy): Anatomy object to handle remapping + itself. + + Returns: + str: Remapped path. 
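+
+    Example (illustrative; assumes an anatomy whose "work" root is
+    mounted at "/mnt/work"):
+        >>> remap_source("/mnt/work/prj/scene.ma", anatomy)
+        '{root[work]}/prj/scene.ma'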
+
+    Throws:
+        ValueError: if the root cannot be found.
+
+    """
+    success, rootless_path = (
+        anatomy.find_root_template_from_path(path)
+    )
+    if success:
+        source = rootless_path
+    else:
+        raise ValueError(
+            "Root from template path cannot be found: {}".format(path))
+    return source
+
+
+def extend_frames(asset, subset, start, end):
+    """Get latest version of asset and update frame range.
+
+    Based on minimum and maximum values.
+
+    Arguments:
+        asset (str): asset name
+        subset (str): subset name
+        start (int): start frame
+        end (int): end frame
+
+    Returns:
+        (int, int): updated frame start/end
+
+    """
+    # Frame comparison
+    prev_start = None
+    prev_end = None
+
+    project_name = get_current_project_name()
+    version = get_last_version_by_subset_name(
+        project_name,
+        subset,
+        asset_name=asset
+    )
+
+    # Set prev start / end frames for comparison
+    if not prev_start and not prev_end:
+        prev_start = version["data"]["frameStart"]
+        prev_end = version["data"]["frameEnd"]
+
+    updated_start = min(start, prev_start)
+    updated_end = max(end, prev_end)
+
+    return updated_start, updated_end
+
+
+def get_time_data_from_instance_or_context(instance):
+    """Get time data from instance (or context).
+
+    If time data is not found on instance, data from context will be used.
+
+    Args:
+        instance (pyblish.api.Instance): Source instance.
+
+    Returns:
+        TimeData: dataclass holding time information.
+
+    """
+    return TimeData(
+        start=(instance.data.get("frameStart") or
+               instance.context.data.get("frameStart")),
+        end=(instance.data.get("frameEnd") or
+             instance.context.data.get("frameEnd")),
+        fps=(instance.data.get("fps") or
+             instance.context.data.get("fps")),
+        handle_start=(instance.data.get("handleStart") or
+                      instance.context.data.get("handleStart")),  # noqa: E501
+        handle_end=(instance.data.get("handleEnd") or
+                    instance.context.data.get("handleEnd"))
+    )
+
+
+def get_transferable_representations(instance):
+    """Transfer representations from original instance.
+
+    This will get all representations on the original instance that
+    are flagged with `publish_on_farm` and return them to be included
+    on skeleton instance if needed.
+
+    Args:
+        instance (pyblish.api.Instance): Original instance to be processed.
+
+    Returns:
+        list of dicts: List of transferable representations.
+
+    """
+    anatomy = instance.context.data["anatomy"]  # type: Anatomy
+    to_transfer = []
+
+    for representation in instance.data.get("representations", []):
+        if "publish_on_farm" not in representation.get("tags", []):
+            continue
+
+        trans_rep = representation.copy()
+
+        staging_dir = trans_rep.get("stagingDir")
+
+        if staging_dir:
+            try:
+                trans_rep["stagingDir"] = remap_source(staging_dir, anatomy)
+            except ValueError:
+                log = Logger.get_logger("farm_publishing")
+                log.warning(
+                    ("Could not find root path for remapping \"{}\". "
+                     "This may cause issues on farm.").format(staging_dir))
+
+        to_transfer.append(trans_rep)
+    return to_transfer
+
+
+def create_skeleton_instance(
+        instance, families_transfer=None, instance_transfer=None):
+    # type: (pyblish.api.Instance, list, dict) -> dict
+    """Create skeleton instance from original instance data.
+
+    This will create a dictionary containing the skeleton (common) data
+    used for publishing rendered instances.
+    This skeleton instance is then extended with additional data
+    and serialized to be processed by farm job.
+
+    Args:
+        instance (pyblish.api.Instance): Original instance to
+           be used as a source of data. 
+ families_transfer (list): List of family names to transfer + from the original instance to the skeleton. + instance_transfer (dict): Dict with keys as families and + values as a list of property names to transfer to the + new skeleton. + + Returns: + dict: Dictionary with skeleton instance data. + + """ + # list of family names to transfer to new family if present + + context = instance.context + data = instance.data.copy() + anatomy = instance.context.data["anatomy"] # type: Anatomy + + # get time related data from instance (or context) + time_data = get_time_data_from_instance_or_context(instance) + + if data.get("extendFrames", False): + time_data.start, time_data.end = extend_frames( + data["asset"], + data["subset"], + time_data.start, + time_data.end, + ) + + source = data.get("source") or context.data.get("currentFile") + success, rootless_path = ( + anatomy.find_root_template_from_path(source) + ) + if success: + source = rootless_path + else: + # `rootless_path` is not set to `source` if none of roots match + log = Logger.get_logger("farm_publishing") + log.warning(("Could not find root path for remapping \"{}\". " + "This may cause issues.").format(source)) + + family = ("render" + if "prerender.farm" not in instance.data["families"] + else "prerender") + families = [family] + + # pass review to families if marked as review + if data.get("review"): + families.append("review") + + instance_skeleton_data = { + "family": family, + "subset": data["subset"], + "families": families, + "asset": data["asset"], + "frameStart": time_data.start, + "frameEnd": time_data.end, + "handleStart": time_data.handle_start, + "handleEnd": time_data.handle_end, + "frameStartHandle": time_data.start - time_data.handle_start, + "frameEndHandle": time_data.end + time_data.handle_end, + "comment": data.get("comment"), + "fps": time_data.fps, + "source": source, + "extendFrames": data.get("extendFrames"), + "overrideExistingFrame": data.get("overrideExistingFrame"), + "pixelAspect": data.get("pixelAspect", 1), + "resolutionWidth": data.get("resolutionWidth", 1920), + "resolutionHeight": data.get("resolutionHeight", 1080), + "multipartExr": data.get("multipartExr", False), + "jobBatchName": data.get("jobBatchName", ""), + "useSequenceForReview": data.get("useSequenceForReview", True), + # map inputVersions `ObjectId` -> `str` so json supports it + "inputVersions": list(map(str, data.get("inputVersions", []))), + "colorspace": data.get("colorspace") + } + + # skip locking version if we are creating v01 + instance_version = data.get("version") # take this if exists + if instance_version != 1: + instance_skeleton_data["version"] = instance_version + + # transfer specific families from original instance to new render + for item in families_transfer: + if item in instance.data.get("families", []): + instance_skeleton_data["families"] += [item] + + # transfer specific properties from original instance based on + # mapping dictionary `instance_transfer` + for key, values in instance_transfer.items(): + if key in instance.data.get("families", []): + for v in values: + instance_skeleton_data[v] = instance.data.get(v) + + representations = get_transferable_representations(instance) + instance_skeleton_data["representations"] = representations + + persistent = instance.data.get("stagingDir_persistent") is True + instance_skeleton_data["stagingDir_persistent"] = persistent + + return instance_skeleton_data + + +def _add_review_families(families): + """Adds review flag to families. 
+ + Handles situation when new instances are created which should have review + in families. In that case they should have 'ftrack' too. + + TODO: This is ugly and needs to be refactored. Ftrack family should be + added in different way (based on if the module is enabled?) + + """ + # if we have one representation with preview tag + # flag whole instance for review and for ftrack + if "ftrack" not in families and os.environ.get("FTRACK_SERVER"): + families.append("ftrack") + if "review" not in families: + families.append("review") + return families + + +def prepare_representations(skeleton_data, exp_files, anatomy, aov_filter, + skip_integration_repre_list, + do_not_add_review, + context, + color_managed_plugin): + """Create representations for file sequences. + + This will return representations of expected files if they are not + in hierarchy of aovs. There should be only one sequence of files for + most cases, but if not - we create representation from each of them. + + Arguments: + skeleton_data (dict): instance data for which we are + setting representations + exp_files (list): list of expected files + anatomy (Anatomy): + aov_filter (dict): add review for specific aov names + skip_integration_repre_list (list): exclude specific extensions, + do_not_add_review (bool): explicitly skip review + color_managed_plugin (publish.ColormanagedPyblishPluginMixin) + Returns: + list of representations + + """ + representations = [] + host_name = os.environ.get("AVALON_APP", "") + collections, remainders = clique.assemble(exp_files) + + log = Logger.get_logger("farm_publishing") + + # create representation for every collected sequence + for collection in collections: + ext = collection.tail.lstrip(".") + preview = False + # TODO 'useSequenceForReview' is temporary solution which does + # not work for 100% of cases. We must be able to tell what + # expected files contains more explicitly and from what + # should be review made. + # - "review" tag is never added when is set to 'False' + if skeleton_data["useSequenceForReview"]: + # toggle preview on if multipart is on + if skeleton_data.get("multipartExr", False): + log.debug( + "Adding preview tag because its multipartExr" + ) + preview = True + else: + render_file_name = list(collection)[0] + # if filtered aov name is found in filename, toggle it for + # preview video rendering + preview = match_aov_pattern( + host_name, aov_filter, render_file_name + ) + + staging = os.path.dirname(list(collection)[0]) + success, rootless_staging_dir = ( + anatomy.find_root_template_from_path(staging) + ) + if success: + staging = rootless_staging_dir + else: + log.warning(( + "Could not find root path for remapping \"{}\"." + " This may cause issues on farm." 
+            ).format(staging))
+
+        frame_start = int(skeleton_data.get("frameStartHandle"))
+        if skeleton_data.get("slate"):
+            frame_start -= 1
+
+        # explicitly disable review by user
+        preview = preview and not do_not_add_review
+        rep = {
+            "name": ext,
+            "ext": ext,
+            "files": [os.path.basename(f) for f in list(collection)],
+            "frameStart": frame_start,
+            "frameEnd": int(skeleton_data.get("frameEndHandle")),
+            # If expectedFile are absolute, we need only filenames
+            "stagingDir": staging,
+            "fps": skeleton_data.get("fps"),
+            "tags": ["review"] if preview else [],
+        }
+
+        # poor man's exclusion
+        if ext in skip_integration_repre_list:
+            rep["tags"].append("delete")
+
+        if skeleton_data.get("multipartExr", False):
+            rep["tags"].append("multipartExr")
+
+        # support conversion from tiled to scanline
+        if skeleton_data.get("convertToScanline"):
+            log.info("Adding scanline conversion.")
+            rep["tags"].append("toScanline")
+
+        representations.append(rep)
+
+        if preview:
+            skeleton_data["families"] = _add_review_families(
+                skeleton_data["families"])
+
+    # add remainders as representations
+    for remainder in remainders:
+        ext = remainder.split(".")[-1]
+
+        staging = os.path.dirname(remainder)
+        success, rootless_staging_dir = (
+            anatomy.find_root_template_from_path(staging)
+        )
+        if success:
+            staging = rootless_staging_dir
+        else:
+            log.warning((
+                "Could not find root path for remapping \"{}\"."
+                " This may cause issues on farm."
+            ).format(staging))
+
+        rep = {
+            "name": ext,
+            "ext": ext,
+            "files": os.path.basename(remainder),
+            "stagingDir": staging,
+        }
+
+        preview = match_aov_pattern(
+            host_name, aov_filter, remainder
+        )
+        preview = preview and not do_not_add_review
+        if preview:
+            rep.update({
+                "fps": skeleton_data.get("fps"),
+                "tags": ["review"]
+            })
+            skeleton_data["families"] = \
+                _add_review_families(skeleton_data["families"])
+
+        already_there = False
+        for repre in skeleton_data.get("representations", []):
+            # might be added explicitly before by publish_on_farm
+            already_there = repre.get("files") == rep["files"]
+            if already_there:
+                log.debug("repre {} already_there".format(repre))
+                break
+
+        if not already_there:
+            representations.append(rep)
+
+    for rep in representations:
+        # inject colorspace data
+        color_managed_plugin.set_representation_colorspace(
+            rep, context,
+            colorspace=skeleton_data["colorspace"]
+        )
+
+    return representations
+
+
+def create_instances_for_aov(instance, skeleton, aov_filter,
+                             skip_integration_repre_list,
+                             do_not_add_review):
+    """Create instances from AOVs.
+
+    This will create new pyblish.api.Instances by going over expected
+    files defined on original instance.
+
+    Args:
+        instance (pyblish.api.Instance): Original instance.
+        skeleton (dict): Skeleton instance data.
+        aov_filter (dict): AOV filter.
+        skip_integration_repre_list (list): list of extensions that
+            should be skipped on integration.
+        do_not_add_review (bool): explicitly disable review.
+
+    Returns:
+        list of pyblish.api.Instance: Instances created from
+            expected files.
+
+    """
+    # we cannot attach AOVs to other subsets as we consider every
+    # AOV a subset of its own.
+
+    log = Logger.get_logger("farm_publishing")
+    additional_color_data = {
+        "renderProducts": instance.data["renderProducts"],
+        "colorspaceConfig": instance.data["colorspaceConfig"],
+        "display": instance.data["colorspaceDisplay"],
+        "view": instance.data["colorspaceView"]
+    }
+
+    # Get templated path from absolute config path.
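+    # (illustrative mapping, depending on configured project roots:
+    # "/mnt/work/configs/aces.ocio" -> "{root[work]}/configs/aces.ocio")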
+    anatomy = instance.context.data["anatomy"]
+    colorspace_template = instance.data["colorspaceConfig"]
+    try:
+        additional_color_data["colorspaceTemplate"] = remap_source(
+            colorspace_template, anatomy)
+    except ValueError as e:
+        log.warning(e)
+        additional_color_data["colorspaceTemplate"] = colorspace_template
+
+    # if there are subsets to attach to and more than one AOV,
+    # we cannot proceed.
+    if (
+        len(instance.data.get("attachTo", [])) > 0
+        and len(instance.data.get("expectedFiles")[0].keys()) != 1
+    ):
+        raise KnownPublishError(
+            "attaching multiple AOVs or renderable cameras to "
+            "subset is not supported yet.")
+
+    # create instances for every AOV we found in expected files.
+    # NOTE: this is done for every AOV and every render camera (if
+    #       there are multiple renderable cameras in scene)
+    return _create_instances_for_aov(
+        instance,
+        skeleton,
+        aov_filter,
+        additional_color_data,
+        skip_integration_repre_list,
+        do_not_add_review
+    )
+
+
+def _create_instances_for_aov(instance, skeleton, aov_filter, additional_data,
+                              skip_integration_repre_list, do_not_add_review):
+    """Create instance for each AOV found.
+
+    This will create new instance for every AOV it can detect in expected
+    files list.
+
+    Args:
+        instance (pyblish.api.Instance): Original instance.
+        skeleton (dict): Skeleton data for instance (those needed) later
+            by collector.
+        additional_data (dict): ..
+        skip_integration_repre_list (list): list of extensions that shouldn't
+            be published
+        do_not_add_review (bool): explicitly disable review
+
+
+    Returns:
+        list of instances
+
+    Throws:
+        ValueError:
+
+    """
+    # TODO: this needs to be taking the task from context or instance
+    task = os.environ["AVALON_TASK"]
+
+    anatomy = instance.context.data["anatomy"]
+    subset = skeleton["subset"]
+    cameras = instance.data.get("cameras", [])
+    exp_files = instance.data["expectedFiles"]
+    log = Logger.get_logger("farm_publishing")
+
+    instances = []
+    # go through AOVs in expected files
+    for aov, files in exp_files[0].items():
+        cols, rem = clique.assemble(files)
+        # we shouldn't have any remainders. And if we do, it should
+        # be just one item for single frame renders.
+        if not cols and rem:
+            if len(rem) != 1:
+                raise ValueError("Found multiple non related files "
+                                 "to render, don't know what to do "
+                                 "with them.")
+            col = rem[0]
+            ext = os.path.splitext(col)[1].lstrip(".")
+        else:
+            # but we really expect only one collection.
+            # Nothing else makes sense.
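+            # (e.g. "beauty.1001.exr" ... "beauty.1005.exr" assembles into
+            # one clique collection; an illustrative example)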
+ if len(cols) != 1: + raise ValueError("Only one image sequence type is expected.") # noqa: E501 + ext = cols[0].tail.lstrip(".") + col = list(cols[0]) + + # create subset name `familyTaskSubset_AOV` + # TODO refactor/remove me + family = skeleton["family"] + if not subset.startswith(family): + group_name = '{}{}{}{}{}'.format( + family, + task[0].upper(), task[1:], + subset[0].upper(), subset[1:]) + else: + group_name = subset + + # if there are multiple cameras, we need to add camera name + if isinstance(col, (list, tuple)): + cam = [c for c in cameras if c in col[0]] + else: + # in case of single frame + cam = [c for c in cameras if c in col] + if cam: + if aov: + subset_name = '{}_{}_{}'.format(group_name, cam, aov) + else: + subset_name = '{}_{}'.format(group_name, cam) + else: + if aov: + subset_name = '{}_{}'.format(group_name, aov) + else: + subset_name = '{}'.format(group_name) + + if isinstance(col, (list, tuple)): + staging = os.path.dirname(col[0]) + else: + staging = os.path.dirname(col) + + try: + staging = remap_source(staging, anatomy) + except ValueError as e: + log.warning(e) + + log.info("Creating data for: {}".format(subset_name)) + + app = os.environ.get("AVALON_APP", "") + + if isinstance(col, list): + render_file_name = os.path.basename(col[0]) + else: + render_file_name = os.path.basename(col) + aov_patterns = aov_filter + + preview = match_aov_pattern(app, aov_patterns, render_file_name) + # toggle preview on if multipart is on + if instance.data.get("multipartExr"): + log.debug("Adding preview tag because its multipartExr") + preview = True + + new_instance = deepcopy(skeleton) + new_instance["subset"] = subset_name + new_instance["subsetGroup"] = group_name + + # explicitly disable review by user + preview = preview and not do_not_add_review + if preview: + new_instance["review"] = True + + # create representation + if isinstance(col, (list, tuple)): + files = [os.path.basename(f) for f in col] + else: + files = os.path.basename(col) + + # Copy render product "colorspace" data to representation. 
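+        # (the render products are searched for a productName matching the
+        # current AOV name; the first match provides the colorspace)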
+ colorspace = "" + products = additional_data["renderProducts"].layer_data.products + for product in products: + if product.productName == aov: + colorspace = product.colorspace + break + + rep = { + "name": ext, + "ext": ext, + "files": files, + "frameStart": int(skeleton["frameStartHandle"]), + "frameEnd": int(skeleton["frameEndHandle"]), + # If expectedFile are absolute, we need only filenames + "stagingDir": staging, + "fps": new_instance.get("fps"), + "tags": ["review"] if preview else [], + "colorspaceData": { + "colorspace": colorspace, + "config": { + "path": additional_data["colorspaceConfig"], + "template": additional_data["colorspaceTemplate"] + }, + "display": additional_data["display"], + "view": additional_data["view"] + } + } + + # support conversion from tiled to scanline + if instance.data.get("convertToScanline"): + log.info("Adding scanline conversion.") + rep["tags"].append("toScanline") + + # poor man exclusion + if ext in skip_integration_repre_list: + rep["tags"].append("delete") + + if preview: + new_instance["families"] = _add_review_families( + new_instance["families"]) + + new_instance["representations"] = [rep] + + # if extending frames from existing version, copy files from there + # into our destination directory + if new_instance.get("extendFrames", False): + copy_extend_frames(new_instance, rep) + instances.append(new_instance) + log.debug("instances:{}".format(instances)) + return instances + + +def get_resources(project_name, version, extension=None): + """Get the files from the specific version. + + This will return all get all files from representation. + + Todo: + This is really weird function, and it's use is + highly controversial. First, it will not probably work + ar all in final release of AYON, second, the logic isn't sound. + It should try to find representation matching the current one - + because it is used to pull out files from previous version to + be included in this one. + + .. deprecated:: 3.15.5 + This won't work in AYON and even the logic must be refactored. + + Args: + project_name (str): Name of the project. + version (dict): Version document. + extension (str): extension used to filter + representations. + + Returns: + list: of files + + """ + warnings.warn(( + "This won't work in AYON and even " + "the logic must be refactored."), DeprecationWarning) + extensions = [] + if extension: + extensions = [extension] + + # there is a `context_filter` argument that won't probably work in + # final release of AYON. SO we'll rather not use it + repre_docs = list(get_representations( + project_name, version_ids=[version["_id"]])) + + filtered = [] + for doc in repre_docs: + if doc["context"]["ext"] in extensions: + filtered.append(doc) + + representation = filtered[0] + directory = get_representation_path(representation) + print("Source: ", directory) + resources = sorted( + [ + os.path.normpath(os.path.join(directory, file_name)) + for file_name in os.listdir(directory) + ] + ) + + return resources + + +def copy_extend_frames(instance, representation): + """Copy existing frames from latest version. + + This will copy all existing frames from subset's latest version back + to render directory and rename them to what renderer is expecting. 
+
+
+def copy_extend_frames(instance, representation):
+    """Copy existing frames from latest version.
+
+    This will copy all existing frames from the subset's latest version
+    back to the render directory and rename them to what the renderer
+    is expecting.
+
+    Arguments:
+        instance (pyblish.plugin.Instance): Instance to get required
+            data from.
+        representation (dict): Representation to operate on.
+
+    """
+    import speedcopy
+
+    R_FRAME_NUMBER = re.compile(
+        r".+\.(?P<frame>[0-9]+)\..+")
+
+    log = Logger.get_logger("farm_publishing")
+    log.info("Preparing to copy ...")
+    start = instance.data.get("frameStart")
+    end = instance.data.get("frameEnd")
+    project_name = instance.context.data["project"]
+    anatomy = instance.context.data["anatomy"]  # type: Anatomy
+
+    # get latest version of subset
+    # this will stop if subset wasn't published yet
+    version = get_last_version_by_subset_name(
+        project_name,
+        instance.data.get("subset"),
+        asset_name=instance.data.get("asset")
+    )
+
+    # get its files based on extension
+    subset_resources = get_resources(
+        project_name, version, representation.get("ext")
+    )
+    r_col, _ = clique.assemble(subset_resources)
+
+    # if override remove all frames we are expecting to be rendered,
+    # so we'll copy only those missing from current render
+    if instance.data.get("overrideExistingFrame"):
+        for frame in range(start, end + 1):
+            if frame not in r_col.indexes:
+                continue
+            r_col.indexes.remove(frame)
+
+    # now we need to translate published names from representation
+    # back. This is tricky, right now we'll just use same naming
+    # and only switch frame numbers
+    resource_files = []
+    r_filename = os.path.basename(
+        representation.get("files")[0])  # first file
+    op = re.search(R_FRAME_NUMBER, r_filename)
+    assert op is not None, "padding string wasn't found"
+    pre = r_filename[:op.start("frame")]
+    post = r_filename[op.end("frame"):]
+    for frame in list(r_col):
+        fn = re.search(R_FRAME_NUMBER, frame)
+        # silencing linter as we need to compare to True, not to
+        # type
+        assert fn is not None, "padding string wasn't found"
+        # list of tuples (source, destination)
+        staging = representation.get("stagingDir")
+        staging = anatomy.fill_root(staging)
+        resource_files.append(
+            (frame, os.path.join(
+                staging, "{}{}{}".format(pre, fn["frame"], post)))
+        )
+
+    # test if destination dir exists and create it if not
+    output_dir = os.path.dirname(representation.get("files")[0])
+    if not os.path.isdir(output_dir):
+        os.makedirs(output_dir)
+
+    # copy files
+    for source in resource_files:
+        speedcopy.copy(source[0], source[1])
+        log.info("  > {}".format(source[1]))
+
+    log.info("Finished copying %i files" % len(resource_files))
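The pre/frame/post split above is easiest to see on a concrete filename; a self-contained sketch (the filename is illustrative):

import re

R_FRAME_NUMBER = re.compile(r".+\.(?P<frame>[0-9]+)\..+")

r_filename = "renderLightingBeauty.0001.exr"  # hypothetical published file
op = re.search(R_FRAME_NUMBER, r_filename)
assert op is not None, "padding string wasn't found"
pre = r_filename[:op.start("frame")]   # "renderLightingBeauty."
post = r_filename[op.end("frame"):]    # ".exr"
# rebuild the same name with a different frame number
print("{}{}{}".format(pre, "0042", post))  # renderLightingBeauty.0042.exr

+
+
+def attach_instances_to_subset(attach_to, instances):
+    """Attach instance to subset.
+
+    If we are attaching to other subsets, create copy of existing
+    instances, change data to match its subset and replace
+    existing instances with modified data.
+
+    Args:
+        attach_to (list): List of instances to attach to.
+        instances (list): List of instances to attach.
+
+    Returns:
+        list: List of attached instances.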
+ + """ + new_instances = [] + for attach_instance in attach_to: + for i in instances: + new_inst = copy.deepcopy(i) + new_inst["version"] = attach_instance.get("version") + new_inst["subset"] = attach_instance.get("subset") + new_inst["family"] = attach_instance.get("family") + new_inst["append"] = True + # don't set subsetGroup if we are attaching + new_inst.pop("subsetGroup") + new_instances.append(new_inst) + return new_instances + + +def create_metadata_path(instance, anatomy): + ins_data = instance.data + # Ensure output dir exists + output_dir = ins_data.get( + "publishRenderMetadataFolder", ins_data["outputDir"]) + + log = Logger.get_logger("farm_publishing") + + try: + if not os.path.isdir(output_dir): + os.makedirs(output_dir) + except OSError: + # directory is not available + log.warning("Path is unreachable: `{}`".format(output_dir)) + + metadata_filename = "{}_metadata.json".format(ins_data["subset"]) + + metadata_path = os.path.join(output_dir, metadata_filename) + + # Convert output dir to `{root}/rest/of/path/...` with Anatomy + success, rootless_mtdt_p = anatomy.find_root_template_from_path( + metadata_path) + if not success: + # `rootless_path` is not set to `output_dir` if none of roots match + log.warning(( + "Could not find root path for remapping \"{}\"." + " This may cause issues on farm." + ).format(output_dir)) + rootless_mtdt_p = metadata_path + + return metadata_path, rootless_mtdt_p diff --git a/openpype/pipeline/farm/pyblish_functions.pyi b/openpype/pipeline/farm/pyblish_functions.pyi new file mode 100644 index 0000000000..76f7c34dcd --- /dev/null +++ b/openpype/pipeline/farm/pyblish_functions.pyi @@ -0,0 +1,24 @@ +import pyblish.api +from openpype.pipeline import Anatomy +from typing import Tuple, Union, List + + +class TimeData: + start: int + end: int + fps: float | int + step: int + handle_start: int + handle_end: int + + def __init__(self, start: int, end: int, fps: float | int, step: int, handle_start: int, handle_end: int): + ... + ... + +def remap_source(source: str, anatomy: Anatomy): ... +def extend_frames(asset: str, subset: str, start: int, end: int) -> Tuple[int, int]: ... +def get_time_data_from_instance_or_context(instance: pyblish.api.Instance) -> TimeData: ... +def get_transferable_representations(instance: pyblish.api.Instance) -> list: ... +def create_skeleton_instance(instance: pyblish.api.Instance, families_transfer: list = ..., instance_transfer: dict = ...) -> dict: ... +def create_instances_for_aov(instance: pyblish.api.Instance, skeleton: dict, aov_filter: dict) -> List[pyblish.api.Instance]: ... +def attach_instances_to_subset(attach_to: list, instances: list) -> list: ... diff --git a/openpype/pipeline/farm/tools.py b/openpype/pipeline/farm/tools.py new file mode 100644 index 0000000000..f3acac7a32 --- /dev/null +++ b/openpype/pipeline/farm/tools.py @@ -0,0 +1,112 @@ +import os + + +def get_published_workfile_instance(context): + """Find workfile instance in context""" + for i in context: + is_workfile = ( + "workfile" in i.data.get("families", []) or + i.data["family"] == "workfile" + ) + if not is_workfile: + continue + + # test if there is instance of workfile waiting + # to be published. + if i.data["publish"] is not True: + continue + + return i + + +def from_published_scene(instance, replace_in_path=True): + """Switch work scene for published scene. + + If rendering/exporting from published scenes is enabled, this will + replace paths from working scene to published scene. 
+
+    Args:
+        instance (pyblish.api.Instance): Instance data to process.
+        replace_in_path (bool): If True, it will try to find the old
+            scene name in the path of expected files and replace it
+            with the name of the published scene.
+
+    Returns:
+        str: Published scene path.
+        None: If no published scene is found.
+
+    Note:
+        Published scene path is actually determined from project Anatomy,
+        as the scene may still be unpublished at the time this plugin
+        is running.
+
+    """
+    workfile_instance = get_published_workfile_instance(instance.context)
+    if workfile_instance is None:
+        return
+
+    # determine published path from Anatomy.
+    template_data = workfile_instance.data.get("anatomyData")
+    rep = workfile_instance.data["representations"][0]
+    template_data["representation"] = rep.get("name")
+    template_data["ext"] = rep.get("ext")
+    template_data["comment"] = None
+
+    anatomy = instance.context.data['anatomy']
+    template_obj = anatomy.templates_obj["publish"]["path"]
+    template_filled = template_obj.format_strict(template_data)
+    file_path = os.path.normpath(template_filled)
+
+    if not os.path.exists(file_path):
+        raise FileNotFoundError(
+            "Published scene does not exist: {}".format(file_path))
+
+    if not replace_in_path:
+        return file_path
+
+    # now we need to switch scene in expected files
+    # because token will now point to published
+    # scene file and that might differ from current one
+    def _clean_name(path):
+        return os.path.splitext(os.path.basename(path))[0]
+
+    new_scene = _clean_name(file_path)
+    orig_scene = _clean_name(instance.context.data["currentFile"])
+    expected_files = instance.data.get("expectedFiles")
+
+    if isinstance(expected_files[0], dict):
+        # we have aovs and we need to iterate over them
+        new_exp = {}
+        for aov, files in expected_files[0].items():
+            replaced_files = []
+            for f in files:
+                replaced_files.append(
+                    str(f).replace(orig_scene, new_scene)
+                )
+            new_exp[aov] = replaced_files
+        # [] might be too much here, TODO
+        instance.data["expectedFiles"] = [new_exp]
+    else:
+        new_exp = []
+        for f in expected_files:
+            new_exp.append(
+                str(f).replace(orig_scene, new_scene)
+            )
+        instance.data["expectedFiles"] = new_exp
+
+    metadata_folder = instance.data.get("publishRenderMetadataFolder")
+    if metadata_folder:
+        metadata_folder = metadata_folder.replace(orig_scene,
+                                                  new_scene)
+        instance.data["publishRenderMetadataFolder"] = metadata_folder
+
+    return file_path
+
+
+def iter_expected_files(exp):
+    if isinstance(exp[0], dict):
+        for _aov, files in exp[0].items():
+            for file in files:
+                yield file
+    else:
+        for file in exp:
+            yield file
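The generator above flattens both `expectedFiles` layouts. A short sketch, with `iter_expected_files` as defined above and purely illustrative paths:

exp_with_aovs = [{
    "beauty": ["/renders/sh010_beauty.0001.exr"],
    "crypto": ["/renders/sh010_crypto.0001.exr"],
}]
exp_flat = ["/renders/sh010.0001.exr", "/renders/sh010.0002.exr"]

print(list(iter_expected_files(exp_with_aovs)))  # both AOV lists, flattened
print(list(iter_expected_files(exp_flat)))       # the list unchanged

diff --git a/openpype/pipeline/legacy_io.py b/openpype/pipeline/legacy_io.py
index bde2b24c2a..60fa035c22 100644
--- a/openpype/pipeline/legacy_io.py
+++ b/openpype/pipeline/legacy_io.py
@@ -4,6 +4,7 @@ import sys
 import logging
 import functools
+from openpype import AYON_SERVER_ENABLED
 from .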
import schema from .mongodb import AvalonMongoDB, session_data_from_environment @@ -39,8 +40,9 @@ def install(): _connection_object.Session.update(session) _connection_object.install() - module._mongo_client = _connection_object.mongo_client - module._database = module.database = _connection_object.database + if not AYON_SERVER_ENABLED: + module._mongo_client = _connection_object.mongo_client + module._database = module.database = _connection_object.database module._is_installed = True diff --git a/openpype/pipeline/load/plugins.py b/openpype/pipeline/load/plugins.py index e380d65bbe..f87fb3312d 100644 --- a/openpype/pipeline/load/plugins.py +++ b/openpype/pipeline/load/plugins.py @@ -39,9 +39,6 @@ class LoaderPlugin(list): log = logging.getLogger("SubsetLoader") log.propagate = True - def __init__(self, context): - self.fname = self.filepath_from_context(context) - @classmethod def apply_settings(cls, project_settings, system_settings): host_name = os.environ.get("AVALON_APP") @@ -246,9 +243,6 @@ class SubsetLoaderPlugin(LoaderPlugin): namespace (str, optional): Use pre-defined namespace """ - def __init__(self, context): - pass - def discover_loader_plugins(project_name=None): from openpype.lib import Logger diff --git a/openpype/pipeline/load/utils.py b/openpype/pipeline/load/utils.py index 2c40280ccd..42418be40e 100644 --- a/openpype/pipeline/load/utils.py +++ b/openpype/pipeline/load/utils.py @@ -314,7 +314,12 @@ def load_with_repre_context( ) ) - loader = Loader(repre_context) + loader = Loader() + + # Backwards compatibility: Originally the loader's __init__ required the + # representation context to set `fname` attribute to the filename to load + loader.fname = get_representation_path_from_context(repre_context) + return loader.load(repre_context, name, namespace, options) @@ -338,8 +343,7 @@ def load_with_subset_context( ) ) - loader = Loader(subset_context) - return loader.load(subset_context, name, namespace, options) + return Loader().load(subset_context, name, namespace, options) def load_with_subset_contexts( @@ -364,8 +368,7 @@ def load_with_subset_contexts( "Running '{}' on '{}'".format(Loader.__name__, joined_subset_names) ) - loader = Loader(subset_contexts) - return loader.load(subset_contexts, name, namespace, options) + return Loader().load(subset_contexts, name, namespace, options) def load_container( @@ -447,8 +450,7 @@ def remove_container(container): .format(container.get("loader")) ) - loader = Loader(get_representation_context(container["representation"])) - return loader.remove(container) + return Loader().remove(container) def update_container(container, version=-1): @@ -498,8 +500,7 @@ def update_container(container, version=-1): .format(container.get("loader")) ) - loader = Loader(get_representation_context(container["representation"])) - return loader.update(container, new_representation) + return Loader().update(container, new_representation) def switch_container(container, representation, loader_plugin=None): @@ -635,7 +636,7 @@ def get_representation_path(representation, root=None, dbcon=None): root = registered_root() - def path_from_represenation(): + def path_from_representation(): try: template = representation["data"]["template"] except KeyError: @@ -759,7 +760,7 @@ def get_representation_path(representation, root=None, dbcon=None): return os.path.normpath(path) return ( - path_from_represenation() or + path_from_representation() or path_from_config() or path_from_data() ) diff --git a/openpype/pipeline/mongodb.py b/openpype/pipeline/mongodb.py index 
be2b67a5e7..41a44c7373 100644
--- a/openpype/pipeline/mongodb.py
+++ b/openpype/pipeline/mongodb.py
@@ -5,6 +5,7 @@ import logging
 import pymongo
 from uuid import uuid4
+from openpype import AYON_SERVER_ENABLED
 from openpype.client import OpenPypeMongoConnection
 from . import schema
@@ -187,7 +188,8 @@ class AvalonMongoDB:
             return
         self._installed = True
-        self._database = self.mongo_client[str(os.environ["AVALON_DB"])]
+        if not AYON_SERVER_ENABLED:
+            self._database = self.mongo_client[str(os.environ["AVALON_DB"])]

     def uninstall(self):
         """Close any connection to the database"""
diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py
index 0961d79234..ada12800a9 100644
--- a/openpype/pipeline/publish/lib.py
+++ b/openpype/pipeline/publish/lib.py
@@ -537,44 +537,24 @@ def filter_pyblish_plugins(plugins):
             plugins.remove(plugin)


-def find_close_plugin(close_plugin_name, log):
-    if close_plugin_name:
-        plugins = pyblish.api.discover()
-        for plugin in plugins:
-            if plugin.__name__ == close_plugin_name:
-                return plugin
-
-    log.debug("Close plugin not found, app might not close.")
-
-
-def remote_publish(log, close_plugin_name=None, raise_error=False):
+def remote_publish(log):
     """Loops through all plugins, logs to console. Used for tests.

     Args:
         log (Logger)
-        close_plugin_name (str): name of plugin with responsibility to
-            close host app
     """
-    # Error exit as soon as any error occurs.
-    error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}"
-    close_plugin = find_close_plugin(close_plugin_name, log)
+    # Error exit as soon as any error occurs.
+    error_format = "Failed {plugin.__name__}: {error}\n{error.traceback}"

     for result in pyblish.util.publish_iter():
-        for record in result["records"]:
-            log.info("{}: {}".format(
-                result["plugin"].label, record.msg))
+        if not result["error"]:
+            continue

-        if result["error"]:
-            error_message = error_format.format(**result)
-            log.error(error_message)
-            if close_plugin:  # close host app explicitly after error
-                context = pyblish.api.Context()
-                close_plugin().process(context)
-            if raise_error:
-                # Fatal Error is because of Deadline
-                error_message = "Fatal Error: " + error_format.format(**result)
-                raise RuntimeError(error_message)
+        error_message = error_format.format(**result)
+        log.error(error_message)
+        # 'Fatal Error: ' is because of Deadline
+        raise RuntimeError("Fatal Error: {}".format(error_message))


 def get_errored_instances_from_context(context, plugin=None):
@@ -869,6 +849,110 @@ def _validate_transient_template(project_name, template_name, anatomy):
         ).format(template_name, project_name))


+def get_published_workfile_instance(context):
+    """Find workfile instance in context"""
+    for i in context:
+        is_workfile = (
+            "workfile" in i.data.get("families", []) or
+            i.data["family"] == "workfile"
+        )
+        if not is_workfile:
+            continue
+
+        # test if there is instance of workfile waiting
+        # to be published.
+        if not i.data.get("publish", True):
+            continue
+
+        return i
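A hedged usage sketch of the simplified `remote_publish` above; the logger name is illustrative and a pyblish context with registered plugins is assumed:

from openpype.lib import Logger
from openpype.pipeline.publish.lib import remote_publish

log = Logger.get_logger("headless_publish")
try:
    remote_publish(log)
except RuntimeError as exc:
    # Deadline scans job output for the "Fatal Error:" prefix.
    print(exc)
    raise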
+
+
+def replace_with_published_scene_path(instance, replace_in_path=True):
+    """Switch work scene path for published scene.
+
+    If rendering/exporting from published scenes is enabled, this will
+    replace paths from working scene to published scene.
+    This only works if publish contains a workfile instance!
+
+    Args:
+        instance (pyblish.api.Instance): Pyblish instance.
+        replace_in_path (bool): If True, it will try to find the old
+            scene name in the path of expected files and replace it
+            with the name of the published scene.
+
+    Returns:
+        str: Published scene path.
+        None: If no published scene is found.
+
+    Note:
+        Published scene path is actually determined from project Anatomy,
+        as the scene may still be unpublished at the time this plugin
+        is running.
+
+    """
+    log = Logger.get_logger("published_workfile")
+    workfile_instance = get_published_workfile_instance(instance.context)
+    if workfile_instance is None:
+        return
+
+    # determine published path from Anatomy.
+    template_data = workfile_instance.data.get("anatomyData")
+    rep = workfile_instance.data["representations"][0]
+    template_data["representation"] = rep.get("name")
+    template_data["ext"] = rep.get("ext")
+    template_data["comment"] = None
+
+    anatomy = instance.context.data['anatomy']
+    anatomy_filled = anatomy.format(template_data)
+    template_filled = anatomy_filled["publish"]["path"]
+    file_path = os.path.normpath(template_filled)
+
+    log.info("Using published scene for render {}".format(file_path))
+
+    if not os.path.exists(file_path):
+        log.error("Published scene does not exist!")
+        raise FileNotFoundError(
+            "Published scene does not exist: {}".format(file_path))
+
+    if not replace_in_path:
+        return file_path
+
+    # now we need to switch scene in expected files
+    # because token will now point to published
+    # scene file and that might differ from current one
+    def _clean_name(path):
+        return os.path.splitext(os.path.basename(path))[0]
+
+    new_scene = _clean_name(file_path)
+    orig_scene = _clean_name(instance.context.data["currentFile"])
+    expected_files = instance.data.get("expectedFiles")
+
+    if isinstance(expected_files[0], dict):
+        # we have aovs and we need to iterate over them
+        new_exp = {}
+        for aov, files in expected_files[0].items():
+            replaced_files = []
+            for f in files:
+                replaced_files.append(
+                    str(f).replace(orig_scene, new_scene)
+                )
+            new_exp[aov] = replaced_files
+        # [] might be too much here, TODO
+        instance.data["expectedFiles"] = [new_exp]
+    else:
+        new_exp = []
+        for f in expected_files:
+            new_exp.append(
+                str(f).replace(orig_scene, new_scene)
+            )
+        instance.data["expectedFiles"] = new_exp
+
+    metadata_folder = instance.data.get("publishRenderMetadataFolder")
+    if metadata_folder:
+        metadata_folder = metadata_folder.replace(orig_scene,
+                                                  new_scene)
+        instance.data["publishRenderMetadataFolder"] = metadata_folder
+
+    log.info("Scene name was switched {} -> {}".format(
+        orig_scene, new_scene
+    ))
+
+    return file_path
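The core of the path switch above is a plain scene-token swap; a self-contained sketch with illustrative paths:

import os

def _clean_name(path):
    return os.path.splitext(os.path.basename(path))[0]

orig_scene = _clean_name("/work/sh010_lighting_v017.ma")    # sh010_lighting_v017
new_scene = _clean_name("/publish/sh010_lighting_v018.ma")  # sh010_lighting_v018

expected_files = ["/renders/sh010_lighting_v017.0001.exr"]
print([f.replace(orig_scene, new_scene) for f in expected_files])
# ['/renders/sh010_lighting_v018.0001.exr']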
+
+
 def add_repre_files_for_cleanup(instance, repre):
     """ Explicitly mark repre files to be deleted.
@@ -877,7 +961,7 @@ def add_repre_files_for_cleanup(instance, repre):
     """
     files = repre["files"]
     staging_dir = repre.get("stagingDir")
-    if not staging_dir:
+    if not staging_dir or instance.data.get("stagingDir_persistent"):
         return

     if isinstance(files, str):
diff --git a/openpype/pipeline/schema.py b/openpype/pipeline/schema/__init__.py
similarity index 92%
rename from openpype/pipeline/schema.py
rename to openpype/pipeline/schema/__init__.py
index 7e96bfe1b1..d7b33f2621 100644
--- a/openpype/pipeline/schema.py
+++ b/openpype/pipeline/schema/__init__.py
@@ -24,6 +24,7 @@ log_ = logging.getLogger(__name__)
 ValidationError = jsonschema.ValidationError
 SchemaError = jsonschema.SchemaError
+CURRENT_DIR = os.path.dirname(os.path.abspath(__file__))

 _CACHED = False
@@ -121,17 +122,14 @@ def _precache():
     """Store available schemas in-memory for reduced disk access"""
     global _CACHED
-    repos_root = os.environ["OPENPYPE_REPOS_ROOT"]
-    schema_dir = os.path.join(repos_root, "schema")
-
-    for schema in os.listdir(schema_dir):
+    for schema in os.listdir(CURRENT_DIR):
         if schema.startswith(("_", ".")):
             continue
         if not schema.endswith(".json"):
             continue
-        if not os.path.isfile(os.path.join(schema_dir, schema)):
+        if not os.path.isfile(os.path.join(CURRENT_DIR, schema)):
             continue
-        with open(os.path.join(schema_dir, schema)) as f:
+        with open(os.path.join(CURRENT_DIR, schema)) as f:
             log_.debug("Installing schema '%s'.." % schema)
             _cache[schema] = json.load(f)
     _CACHED = True
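With the schemas now bundled next to `openpype/pipeline/schema/__init__.py`, validation no longer needs `OPENPYPE_REPOS_ROOT`. A hedged sketch, assuming the module's Avalon-inherited `validate()` helper that dispatches on the document's own "schema" key; the asset payload is illustrative and targets one of the schema files added below:

from openpype.pipeline import schema

asset_doc = {
    "schema": "openpype:asset-3.0",
    "type": "asset",
    "name": "Bruce",
    "data": {"fps": 24},
}
schema.validate(asset_doc)  # raises schema.ValidationError on a mismatch

diff --git a/openpype/pipeline/schema/application-1.0.json b/openpype/pipeline/schema/application-1.0.json
new file mode 100644
index 0000000000..953abee569
--- /dev/null
+++ b/openpype/pipeline/schema/application-1.0.json
@@ -0,0 +1,68 @@
+{
+    "$schema": "http://json-schema.org/draft-04/schema#",
+
+    "title": "openpype:application-1.0",
+    "description": "An application definition.",
+
+    "type": "object",
+
+    "additionalProperties": true,
+
+    "required": [
+        "schema",
+        "label",
+        "application_dir",
+        "executable"
+    ],
+
+    "properties": {
+        "schema": {
+            "description": "Schema identifier for payload",
+            "type": "string"
+        },
+        "label": {
+            "description": "Nice name of application.",
+            "type": "string"
+        },
+        "application_dir": {
+            "description": "Name of directory used for application resources.",
+            "type": "string"
+        },
+        "executable": {
+            "description": "Name of callable executable, this is called to launch the application",
+            "type": "string"
+        },
+        "description": {
+            "description": "Description of application.",
+            "type": "string"
+        },
+        "environment": {
+            "description": "Key/value pairs for environment variables related to this application.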
Supports lists for paths, such as PYTHONPATH.", + "type": "object", + "items": { + "oneOf": [ + {"type": "string"}, + {"type": "array", "items": {"type": "string"}} + ] + } + }, + "default_dirs": { + "type": "array", + "items": { + "type": "string" + } + }, + "copy": { + "type": "object", + "patternProperties": { + "^.*$": { + "anyOf": [ + {"type": "string"}, + {"type": "null"} + ] + } + }, + "additionalProperties": false + } + } +} diff --git a/openpype/pipeline/schema/asset-1.0.json b/openpype/pipeline/schema/asset-1.0.json new file mode 100644 index 0000000000..ab104c002a --- /dev/null +++ b/openpype/pipeline/schema/asset-1.0.json @@ -0,0 +1,35 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-1.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "name", + "subsets" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "name": { + "description": "Name of directory", + "type": "string" + }, + "subsets": { + "type": "array", + "items": { + "$ref": "subset.json" + } + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/asset-2.0.json b/openpype/pipeline/schema/asset-2.0.json new file mode 100644 index 0000000000..b894d79792 --- /dev/null +++ b/openpype/pipeline/schema/asset-2.0.json @@ -0,0 +1,55 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-2.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "silo", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:asset-2.0"], + "example": "openpype:asset-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["asset"], + "example": "asset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of asset", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "Bruce" + }, + "silo": { + "description": "Group or container of asset", + "type": "string", + "example": "assets" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/asset-3.0.json b/openpype/pipeline/schema/asset-3.0.json new file mode 100644 index 0000000000..948704d2a1 --- /dev/null +++ b/openpype/pipeline/schema/asset-3.0.json @@ -0,0 +1,55 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-3.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:asset-3.0"], + "example": "openpype:asset-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["asset"], + "example": "asset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of asset", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "Bruce" + }, + "silo": { + "description": "Group or container of asset", + "type": "string", + "pattern": 
"^[a-zA-Z0-9_.]*$", + "example": "assets" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/config-1.0.json b/openpype/pipeline/schema/config-1.0.json new file mode 100644 index 0000000000..49398a57cd --- /dev/null +++ b/openpype/pipeline/schema/config-1.0.json @@ -0,0 +1,85 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.0", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "template": { + "type": "object", + "additionalProperties": false, + "patternProperties": { + "^.*$": { + "type": "string" + } + } + }, + "tasks": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + "order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/config-1.1.json b/openpype/pipeline/schema/config-1.1.json new file mode 100644 index 0000000000..6e15514aaf --- /dev/null +++ b/openpype/pipeline/schema/config-1.1.json @@ -0,0 +1,87 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.1", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "template": { + "type": "object", + "additionalProperties": false, + "patternProperties": { + "^.*$": { + "type": "string" + } + } + }, + "tasks": { + "type": "object", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": [ + "short_name" + ] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + 
"order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/config-2.0.json b/openpype/pipeline/schema/config-2.0.json new file mode 100644 index 0000000000..54b226711a --- /dev/null +++ b/openpype/pipeline/schema/config-2.0.json @@ -0,0 +1,87 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-2.0", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "templates": { + "type": "object" + }, + "roots": { + "type": "object" + }, + "imageio": { + "type": "object" + }, + "tasks": { + "type": "object", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": [ + "short_name" + ] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + "order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/container-1.0.json b/openpype/pipeline/schema/container-1.0.json new file mode 100644 index 0000000000..012e8499e6 --- /dev/null +++ b/openpype/pipeline/schema/container-1.0.json @@ -0,0 +1,100 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:container-1.0", + "description": "A loaded asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "id", + "objectName", + "name", + "author", + "loader", + "families", + "time", + "subset", + "asset", + "representation", + "version", + "silo", + "path", + "source" + ], + "properties": { + "id": { + "description": "Identifier for finding object in host", + "type": "string", + "enum": ["pyblish.mindbender.container"], + "example": "pyblish.mindbender.container" + }, + "objectName": { + "description": "Name of internal object, such as the objectSet in Maya.", + "type": "string", + "example": "Bruce_:rigDefault_CON" + }, + "name": { + "description": "Full name of application object", + "type": "string", + "example": "modelDefault" + }, + "author": { + "description": "Name of the author of the published version", + "type": "string", + "example": "Marcus Ottosson" + }, + "loader": { + "description": "Name of loader plug-in used to produce this container", + "type": "string", + "example": "ModelLoader" + }, + "families": { + "description": "Families associated with the this subset", + "type": "string", + "example": "mindbender.model" + }, + "time": { + "description": "File-system safe, formatted time", + "type": "string", + "example": "20170329T131545Z" + }, + "subset": { + "description": "Name of source subset", + "type": "string", + 
"example": "modelDefault" + }, + "asset": { + "description": "Name of source asset", + "type": "string" , + "example": "Bruce" + }, + "representation": { + "description": "Name of source representation", + "type": "string" , + "example": ".ma" + }, + "version": { + "description": "Version number", + "type": "number", + "example": 12 + }, + "silo": { + "description": "Silo of parent asset", + "type": "string", + "example": "assets" + }, + "path": { + "description": "Absolute path on disk", + "type": "string", + "example": "{root}/assets/Bruce/publish/rigDefault/v002" + }, + "source": { + "description": "Absolute path to file from which this version was published", + "type": "string", + "example": "{root}/assets/Bruce/work/rigging/maya/scenes/rig_v001.ma" + } + } +} diff --git a/openpype/pipeline/schema/container-2.0.json b/openpype/pipeline/schema/container-2.0.json new file mode 100644 index 0000000000..1673ee5d1d --- /dev/null +++ b/openpype/pipeline/schema/container-2.0.json @@ -0,0 +1,59 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:container-2.0", + "description": "A loaded asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "id", + "objectName", + "name", + "namespace", + "loader", + "representation" + ], + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:container-2.0"], + "example": "openpype:container-2.0" + }, + "id": { + "description": "Identifier for finding object in host", + "type": "string", + "enum": ["pyblish.avalon.container"], + "example": "pyblish.avalon.container" + }, + "objectName": { + "description": "Name of internal object, such as the objectSet in Maya.", + "type": "string", + "example": "Bruce_:rigDefault_CON" + }, + "loader": { + "description": "Name of loader plug-in used to produce this container", + "type": "string", + "example": "ModelLoader" + }, + "name": { + "description": "Internal object name of container in application", + "type": "string", + "example": "modelDefault_01" + }, + "namespace": { + "description": "Internal namespace of container in application", + "type": "string", + "example": "Bruce_" + }, + "representation": { + "description": "Unique id of representation in database", + "type": "string", + "example": "59523f355f8c1b5f6c5e8348" + } + } +} diff --git a/openpype/pipeline/schema/hero_version-1.0.json b/openpype/pipeline/schema/hero_version-1.0.json new file mode 100644 index 0000000000..b720dc2887 --- /dev/null +++ b/openpype/pipeline/schema/hero_version-1.0.json @@ -0,0 +1,44 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:hero_version-1.0", + "description": "Hero version of asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "version_id", + "schema", + "type", + "parent" + ], + + "properties": { + "_id": { + "description": "Document's id (database will create it's if not entered)", + "example": "ObjectId(592c33475f8c1b064c4d1696)" + }, + "version_id": { + "description": "The version ID from which it was created", + "example": "ObjectId(592c33475f8c1b064c4d1695)" + }, + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:hero_version-1.0"], + "example": "openpype:hero_version-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["hero_version"], + "example": "hero_version" + }, + "parent": { + "description": "Unique 
+            "description": "Unique identifier to parent document",
+            "example": "ObjectId(592c33475f8c1b064c4d1697)"
+        }
+    }
+}
diff --git a/openpype/pipeline/schema/inventory-1.0.json b/openpype/pipeline/schema/inventory-1.0.json
new file mode 100644
index 0000000000..2fe78794ab
--- /dev/null
+++ b/openpype/pipeline/schema/inventory-1.0.json
@@ -0,0 +1,10 @@
+{
+    "$schema": "http://json-schema.org/draft-04/schema#",
+
+    "title": "openpype:inventory-1.0",
+    "description": "An inventory definition.",
+
+    "type": "object",
+
+    "additionalProperties": true
+}
diff --git a/openpype/pipeline/schema/inventory-1.1.json b/openpype/pipeline/schema/inventory-1.1.json
new file mode 100644
index 0000000000..b61a76b32a
--- /dev/null
+++ b/openpype/pipeline/schema/inventory-1.1.json
@@ -0,0 +1,10 @@
+{
+    "$schema": "http://json-schema.org/draft-04/schema#",
+
+    "title": "openpype:inventory-1.1",
+    "description": "An inventory definition.",
+
+    "type": "object",
+
+    "additionalProperties": true
+}
diff --git a/openpype/pipeline/schema/project-2.0.json b/openpype/pipeline/schema/project-2.0.json
new file mode 100644
index 0000000000..0ed5a55599
--- /dev/null
+++ b/openpype/pipeline/schema/project-2.0.json
@@ -0,0 +1,86 @@
+{
+    "$schema": "http://json-schema.org/draft-04/schema#",
+
+    "title": "openpype:project-2.0",
+    "description": "A unit of data",
+
+    "type": "object",
+
+    "additionalProperties": true,
+
+    "required": [
+        "schema",
+        "type",
+        "name",
+        "data",
+        "config"
+    ],
+
+    "properties": {
+        "schema": {
+            "description": "Schema identifier for payload",
+            "type": "string",
+            "enum": ["openpype:project-2.0"],
+            "example": "openpype:project-2.0"
+        },
+        "type": {
+            "description": "The type of document",
+            "type": "string",
+ "enum": ["project"], + "example": "project" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "hulk" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "fps": 24, + "width": 1920, + "height": 1080 + } + }, + "config": { + "type": "object", + "description": "Document metadata", + "example": { + "schema": "openpype:config-1.1", + "apps": [ + { + "name": "maya2016", + "label": "Autodesk Maya 2016" + }, + { + "name": "nuke10", + "label": "The Foundry Nuke 10.0" + } + ], + "tasks": { + "Model": {"short_name": "mdl"}, + "Render": {"short_name": "rnd"}, + "Animate": {"short_name": "anim"}, + "Rig": {"short_name": "rig"}, + "Lookdev": {"short_name": "look"}, + "Layout": {"short_name": "lay"} + }, + "template": { + "work": + "{root}/{project}/{silo}/{asset}/work/{task}/{app}", + "publish": + "{root}/{project}/{silo}/{asset}/publish/{subset}/v{version:0>3}/{subset}.{representation}" + } + }, + "$ref": "config-1.1.json" + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/project-3.0.json b/openpype/pipeline/schema/project-3.0.json new file mode 100644 index 0000000000..be23e10c93 --- /dev/null +++ b/openpype/pipeline/schema/project-3.0.json @@ -0,0 +1,59 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:project-3.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data", + "config" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:project-3.0"], + "example": "openpype:project-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["project"], + "example": "project" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "hulk" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "fps": 24, + "width": 1920, + "height": 1080 + } + }, + "config": { + "type": "object", + "description": "Document metadata", + "$ref": "config-2.0.json" + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/representation-1.0.json b/openpype/pipeline/schema/representation-1.0.json new file mode 100644 index 0000000000..347c585f52 --- /dev/null +++ b/openpype/pipeline/schema/representation-1.0.json @@ -0,0 +1,28 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:representation-1.0", + "description": "The inverse of an instance", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "format", + "path" + ], + + "properties": { + "schema": {"type": "string"}, + "format": { + "description": "File extension, including '.'", + "type": "string" + }, + "path": { + "description": "Unformatted path to version.", + "type": "string" + } + } +} diff --git a/openpype/pipeline/schema/representation-2.0.json b/openpype/pipeline/schema/representation-2.0.json new file mode 100644 index 0000000000..f47c16a10a --- /dev/null +++ b/openpype/pipeline/schema/representation-2.0.json @@ -0,0 +1,78 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + 
+ "title": "openpype:representation-2.0", + "description": "The inverse of an instance", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:representation-2.0"], + "example": "openpype:representation-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["representation"], + "example": "representation" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of representation", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "abc" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "label": "Alembic" + } + }, + "dependencies": { + "description": "Other representation that this representation depends on", + "type": "array", + "items": {"type": "string"}, + "example": [ + "592d547a5f8c1b388093c145" + ] + }, + "context": { + "description": "Summary of the context to which this representation belong.", + "type": "object", + "properties": { + "project": {"type": "object"}, + "asset": {"type": "string"}, + "silo": {"type": ["string", "null"]}, + "subset": {"type": "string"}, + "version": {"type": "number"}, + "representation": {"type": "string"} + }, + "example": { + "project": "hulk", + "asset": "Bruce", + "silo": "assets", + "subset": "rigDefault", + "version": 12, + "representation": "ma" + } + } + } +} diff --git a/openpype/pipeline/schema/session-1.0.json b/openpype/pipeline/schema/session-1.0.json new file mode 100644 index 0000000000..5ced0a6f08 --- /dev/null +++ b/openpype/pipeline/schema/session-1.0.json @@ -0,0 +1,143 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-1.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECTS", + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_SILO", + "AVALON_CONFIG" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": "string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_SILO": { + "description": "Name of asset group or container", + "type": "string", + "pattern": "^\\w*$", + "example": "assets" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_CONFIG": { + "description": "Name of Avalon configuration", + "type": "string", + "pattern": "^\\w*$", + "example": "polly" + }, + "AVALON_APP": { + "description": "Name of application", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_MONGO": { + "description": "Address to the asset database", + "type": "string", + "pattern": "^mongodb://[\\w/@:.]*$", + "example": "mongodb://localhost:27017", + "default": "mongodb://localhost:27017" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. 
graphical user interfaces", + "type": "string", + "example": "Mindbender", + "default": "Avalon" + }, + "AVALON_SENTRY": { + "description": "Address to Sentry", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "https://5b872b280de742919b115bdc8da076a5:8d278266fe764361b8fa6024af004a9c@logs.mindbender.com/2", + "default": null + }, + "AVALON_DEADLINE": { + "description": "Address to Deadline", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "http://192.168.99.101", + "default": null + }, + "AVALON_TIMEOUT": { + "description": "Wherever there is a need for a timeout, this is the default value.", + "type": "string", + "pattern": "^[0-9]*$", + "default": "1000", + "example": "1000" + }, + "AVALON_UPLOAD": { + "description": "Boolean of whether to upload published material to central asset repository", + "type": "string", + "default": null, + "example": "True" + }, + "AVALON_USERNAME": { + "description": "Generic username", + "type": "string", + "pattern": "^\\w*$", + "default": "avalon", + "example": "myself" + }, + "AVALON_PASSWORD": { + "description": "Generic password", + "type": "string", + "pattern": "^\\w*$", + "default": "secret", + "example": "abc123" + }, + "AVALON_INSTANCE_ID": { + "description": "Unique identifier for instances in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.instance", + "example": "avalon.instance" + }, + "AVALON_CONTAINER_ID": { + "description": "Unique identifier for a loaded representation in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.container", + "example": "avalon.container" + }, + "AVALON_DEBUG": { + "description": "Enable debugging mode. Some applications may use this for e.g. extended verbosity or mock plug-ins.", + "type": "string", + "default": null, + "example": "True" + } + } +} diff --git a/openpype/pipeline/schema/session-2.0.json b/openpype/pipeline/schema/session-2.0.json new file mode 100644 index 0000000000..0a4d51beb2 --- /dev/null +++ b/openpype/pipeline/schema/session-2.0.json @@ -0,0 +1,134 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-2.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_CONFIG" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": "string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_SILO": { + "description": "Name of asset group or container", + "type": "string", + "pattern": "^\\w*$", + "example": "assets" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_CONFIG": { + "description": "Name of Avalon configuration", + "type": "string", + "pattern": "^\\w*$", + "example": "polly" + }, + "AVALON_APP": { + "description": "Name of application", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. 
graphical user interfaces", + "type": "string", + "example": "Mindbender", + "default": "Avalon" + }, + "AVALON_SENTRY": { + "description": "Address to Sentry", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "https://5b872b280de742919b115bdc8da076a5:8d278266fe764361b8fa6024af004a9c@logs.mindbender.com/2", + "default": null + }, + "AVALON_DEADLINE": { + "description": "Address to Deadline", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "http://192.168.99.101", + "default": null + }, + "AVALON_TIMEOUT": { + "description": "Wherever there is a need for a timeout, this is the default value.", + "type": "string", + "pattern": "^[0-9]*$", + "default": "1000", + "example": "1000" + }, + "AVALON_UPLOAD": { + "description": "Boolean of whether to upload published material to central asset repository", + "type": "string", + "default": null, + "example": "True" + }, + "AVALON_USERNAME": { + "description": "Generic username", + "type": "string", + "pattern": "^\\w*$", + "default": "avalon", + "example": "myself" + }, + "AVALON_PASSWORD": { + "description": "Generic password", + "type": "string", + "pattern": "^\\w*$", + "default": "secret", + "example": "abc123" + }, + "AVALON_INSTANCE_ID": { + "description": "Unique identifier for instances in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.instance", + "example": "avalon.instance" + }, + "AVALON_CONTAINER_ID": { + "description": "Unique identifier for a loaded representation in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.container", + "example": "avalon.container" + }, + "AVALON_DEBUG": { + "description": "Enable debugging mode. Some applications may use this for e.g. extended verbosity or mock plug-ins.", + "type": "string", + "default": null, + "example": "True" + } + } +} diff --git a/openpype/pipeline/schema/session-3.0.json b/openpype/pipeline/schema/session-3.0.json new file mode 100644 index 0000000000..9f785939e4 --- /dev/null +++ b/openpype/pipeline/schema/session-3.0.json @@ -0,0 +1,81 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-3.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECT", + "AVALON_ASSET" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": "string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_APP": { + "description": "Name of host", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. 
graphical user interfaces",
+            "type": "string",
+            "example": "Mindbender",
+            "default": "Avalon"
+        },
+        "AVALON_TIMEOUT": {
+            "description": "Wherever there is a need for a timeout, this is the default value.",
+            "type": "string",
+            "pattern": "^[0-9]*$",
+            "default": "1000",
+            "example": "1000"
+        },
+        "AVALON_INSTANCE_ID": {
+            "description": "Unique identifier for instances in a working file",
+            "type": "string",
+            "pattern": "^[\\w.]*$",
+            "default": "avalon.instance",
+            "example": "avalon.instance"
+        },
+        "AVALON_CONTAINER_ID": {
+            "description": "Unique identifier for a loaded representation in a working file",
+            "type": "string",
+            "pattern": "^[\\w.]*$",
+            "default": "avalon.container",
+            "example": "avalon.container"
+        }
+    }
+}
diff --git a/openpype/pipeline/schema/shaders-1.0.json b/openpype/pipeline/schema/shaders-1.0.json
new file mode 100644
index 0000000000..7102ba1861
--- /dev/null
+++ b/openpype/pipeline/schema/shaders-1.0.json
@@ -0,0 +1,32 @@
+{
+    "$schema": "http://json-schema.org/draft-04/schema#",
+
+    "title": "openpype:shaders-1.0",
+    "description": "Relationships between shaders and Avalon IDs",
+
+    "type": "object",
+
+    "additionalProperties": true,
+
+    "required": [
+        "schema",
+        "shader"
+    ],
+
+    "properties": {
+        "schema": {
+            "description": "Schema identifier for payload",
+            "type": "string"
+        },
+        "shader": {
+            "description": "List of shader relationship entries",
+            "type": "array",
+            "items": {
+                "type": "string",
+                "description": "Avalon ID and optional face indexes, e.g. 'f9520572-ac1d-11e6-b39e-3085a99791c9.f[5002:5185]'"
+            }
+        }
+    },
+
+    "definitions": {}
+}
diff --git a/openpype/pipeline/schema/subset-1.0.json b/openpype/pipeline/schema/subset-1.0.json
new file mode 100644
index 0000000000..a299a6d341
--- /dev/null
+++ b/openpype/pipeline/schema/subset-1.0.json
@@ -0,0 +1,35 @@
+{
+    "$schema": "http://json-schema.org/draft-04/schema#",
+
+    "title": "openpype:subset-1.0",
+    "description": "A container of instances",
+
+    "type": "object",
+
+    "additionalProperties": true,
+
+    "required": [
+        "schema",
+        "name",
+        "versions"
+    ],
+
+    "properties": {
+        "schema": {
+            "description": "Schema identifier for payload",
+            "type": "string"
+        },
+        "name": {
+            "description": "Name of directory",
+            "type": "string"
+        },
+        "versions": {
+            "type": "array",
+            "items": {
+                "$ref": "version.json"
+            }
+        }
+    },
+
+    "definitions": {}
+}
diff --git a/openpype/pipeline/schema/subset-2.0.json b/openpype/pipeline/schema/subset-2.0.json
new file mode 100644
index 0000000000..db256ec7fb
--- /dev/null
+++ b/openpype/pipeline/schema/subset-2.0.json
@@ -0,0 +1,51 @@
+{
+    "$schema": "http://json-schema.org/draft-04/schema#",
+
+    "title": "openpype:subset-2.0",
+    "description": "A container of instances",
+
+    "type": "object",
+
+    "additionalProperties": true,
+
+    "required": [
+        "schema",
+        "type",
+        "parent",
+        "name",
+        "data"
+    ],
+
+    "properties": {
+        "schema": {
+            "description": "The schema associated with this document",
+            "type": "string",
+            "enum": ["openpype:subset-2.0"],
+            "example": "openpype:subset-2.0"
+        },
+        "type": {
+            "description": "The type of document",
+            "type": "string",
+            "enum": ["subset"],
+            "example": "subset"
+        },
+        "parent": {
+            "description": "Unique identifier to parent document",
+            "example": "592c33475f8c1b064c4d1696"
+        },
+        "name": {
+            "description": "Name of directory",
+            "type": "string",
+            "pattern": "^[a-zA-Z0-9_.]*$",
+            "example": "shot01"
+        },
+        "data": {
+            "type": "object",
+            "description": "Document metadata",
+            "example": {
+                "frameStart": 1000,
+                "frameEnd": 1201
+            }
} + } +} diff --git a/openpype/pipeline/schema/subset-3.0.json b/openpype/pipeline/schema/subset-3.0.json new file mode 100644 index 0000000000..1a0db53c04 --- /dev/null +++ b/openpype/pipeline/schema/subset-3.0.json @@ -0,0 +1,62 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:subset-3.0", + "description": "A container of instances", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:subset-3.0"], + "example": "openpype:subset-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["subset"], + "example": "subset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "shot01" + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["families"], + "properties": { + "families": { + "type": "array", + "items": {"type": "string"}, + "description": "One or more families associated with this subset" + } + }, + "example": { + "families" : [ + "avalon.camera" + ], + "frameStart": 1000, + "frameEnd": 1201 + } + } + } +} diff --git a/openpype/pipeline/schema/thumbnail-1.0.json b/openpype/pipeline/schema/thumbnail-1.0.json new file mode 100644 index 0000000000..5bdf78a4b1 --- /dev/null +++ b/openpype/pipeline/schema/thumbnail-1.0.json @@ -0,0 +1,42 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:thumbnail-1.0", + "description": "Entity with thumbnail data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:thumbnail-1.0"], + "example": "openpype:thumbnail-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["thumbnail"], + "example": "thumbnail" + }, + "data": { + "description": "Thumbnail data", + "type": "object", + "example": { + "binary_data": "Binary({byte data of image})", + "template": "{thumbnail_root}/{project[name]}/{_id}{ext}}", + "template_data": { + "ext": ".jpg" + } + } + } + } +} diff --git a/openpype/pipeline/schema/version-1.0.json b/openpype/pipeline/schema/version-1.0.json new file mode 100644 index 0000000000..daa1997721 --- /dev/null +++ b/openpype/pipeline/schema/version-1.0.json @@ -0,0 +1,50 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-1.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "version", + "path", + "time", + "author", + "source", + "representations" + ], + + "properties": { + "schema": {"type": "string"}, + "representations": { + "type": "array", + "items": { + "$ref": "representation.json" + } + }, + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + } + } +} diff --git a/openpype/pipeline/schema/version-2.0.json b/openpype/pipeline/schema/version-2.0.json new file mode 100644 index 0000000000..099e9be70a --- /dev/null +++ b/openpype/pipeline/schema/version-2.0.json @@ -0,0 +1,92 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-2.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:version-2.0"], + "example": "openpype:version-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["version"], + "example": "version" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Number of version", + "type": "number", + "example": 12 + }, + "locations": { + "description": "Where on the planet this version can be found.", + "type": "array", + "items": {"type": "string"}, + "example": ["data.avalon.com"] + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["families", "author", "source", "time"], + "properties": { + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "timeFormat": { + "description": "ISO format of time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + }, + "families": { + "type": "array", + "items": {"type": "string"}, + "description": "One or more families associated with this version" + } + }, + "example": { + "source" : "{root}/f02_prod/assets/BubbleWitch/work/modeling/marcus/maya/scenes/model_v001.ma", + "author" : "marcus", + "families" : [ + "avalon.model" + ], + "time" : "20170510T090203Z" + } + } + } +} diff --git a/openpype/pipeline/schema/version-3.0.json b/openpype/pipeline/schema/version-3.0.json new file mode 100644 index 0000000000..3e07fc4499 --- /dev/null +++ b/openpype/pipeline/schema/version-3.0.json @@ -0,0 +1,84 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-3.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:version-3.0"], + "example": "openpype:version-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["version"], + "example": "version" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Number of version", + "type": "number", + "example": 12 + }, + "locations": { + "description": "Where on the planet this version can be found.", + "type": "array", + "items": {"type": "string"}, + "example": ["data.avalon.com"] + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["author", "source", "time"], + "properties": { + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "timeFormat": { + "description": "ISO format of time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + } + }, + "example": { + "source" : "{root}/f02_prod/assets/BubbleWitch/work/modeling/marcus/maya/scenes/model_v001.ma", + "author" : "marcus", + "time" : "20170510T090203Z" + } + } + } +} diff --git a/openpype/pipeline/schema/workfile-1.0.json b/openpype/pipeline/schema/workfile-1.0.json new file mode 100644 index 0000000000..5f9600ef20 --- /dev/null +++ b/openpype/pipeline/schema/workfile-1.0.json @@ -0,0 +1,52 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:workfile-1.0", + "description": "Workfile additional information.", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "filename", + "task_name", + "parent" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:workfile-1.0"], + "example": "openpype:workfile-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["workfile"], + "example": "workfile" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "filename": { + "description": "Workfile's filename", + "type": "string", + "example": "kuba_each_case_Alpaca_01_animation_v001.ma" + }, + "task_name": { + "description": "Task name", + "type": "string", + "example": "animation" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + } +} diff --git a/openpype/pipeline/template_data.py b/openpype/pipeline/template_data.py index 627eba5c3d..a48f0721b6 100644 --- a/openpype/pipeline/template_data.py +++ b/openpype/pipeline/template_data.py @@ -94,6 +94,9 @@ def get_asset_template_data(asset_doc, project_name): return { "asset": asset_doc["name"], + "folder": { + "name": asset_doc["name"] + }, "hierarchy": hierarchy, "parent": parent_name } @@ -128,7 +131,7 @@ def get_task_template_data(project_doc, asset_doc, task_name): Args: project_doc (Dict[str, Any]): Queried project document. asset_doc (Dict[str, Any]): Queried asset document. - tas_name (str): Name of task for which data should be returned. + task_name (str): Name of task for which data should be returned. Returns: Dict[str, Dict[str, str]]: Template data diff --git a/openpype/pipeline/thumbnail.py b/openpype/pipeline/thumbnail.py index 39f3e17893..b2b3679450 100644 --- a/openpype/pipeline/thumbnail.py +++ b/openpype/pipeline/thumbnail.py @@ -2,6 +2,8 @@ import os import copy import logging +from openpype import AYON_SERVER_ENABLED +from openpype.lib import Logger from openpype.client import get_project from . 
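# Note on the 'get_asset_template_data' change above: the asset name is now
# exposed both as the legacy "asset" key and as the AYON-style "folder" key,
# so anatomy templates can use either placeholder. A minimal sketch with
# illustrative values (not part of the diff):
data = {
    "asset": "sh010",
    "folder": {"name": "sh010"},
    "hierarchy": "shots/sq01",
    "parent": "sq01",
}
legacy = "{hierarchy}/{asset}".format(**data)
ayon_style = "{hierarchy}/{folder[name]}".format(**data)
assert legacy == ayon_style == "shots/sq01/sh010"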
import legacy_io from .anatomy import Anatomy @@ -10,13 +12,13 @@ from .plugin_discover import ( register_plugin, register_plugin_path, ) -log = logging.getLogger(__name__) def get_thumbnail_binary(thumbnail_entity, thumbnail_type, dbcon=None): if not thumbnail_entity: return + log = Logger.get_logger(__name__) resolvers = discover_thumbnail_resolvers() resolvers = sorted(resolvers, key=lambda cls: cls.priority) if dbcon is None: @@ -131,6 +133,62 @@ class BinaryThumbnail(ThumbnailResolver): return thumbnail_entity["data"].get("binary_data") +class ServerThumbnailResolver(ThumbnailResolver): + _cache = None + + @classmethod + def _get_cache(cls): + if cls._cache is None: + from openpype.client.server.thumbnails import AYONThumbnailCache + + cls._cache = AYONThumbnailCache() + return cls._cache + + def process(self, thumbnail_entity, thumbnail_type): + if not AYON_SERVER_ENABLED: + return None + data = thumbnail_entity["data"] + entity_type = data.get("entity_type") + entity_id = data.get("entity_id") + if not entity_type or not entity_id: + return None + + import ayon_api + + project_name = self.dbcon.active_project() + thumbnail_id = thumbnail_entity["_id"] + + cache = self._get_cache() + filepath = cache.get_thumbnail_filepath(project_name, thumbnail_id) + if filepath: + with open(filepath, "rb") as stream: + return stream.read() + + # This is new way how thumbnails can be received from server + # - output is 'ThumbnailContent' object + if hasattr(ayon_api, "get_thumbnail_by_id"): + result = ayon_api.get_thumbnail_by_id(thumbnail_id) + if result.is_valid: + filepath = cache.store_thumbnail( + project_name, + thumbnail_id, + result.content, + result.content_type + ) + else: + # Backwards compatibility for ayon api where 'get_thumbnail_by_id' + # is not implemented and output is filepath + filepath = ayon_api.get_thumbnail( + project_name, entity_type, entity_id, thumbnail_id + ) + + if not filepath: + return None + + with open(filepath, "rb") as stream: + return stream.read() + + # Thumbnail resolvers def discover_thumbnail_resolvers(): return discover(ThumbnailResolver) @@ -146,3 +204,4 @@ def register_thumbnail_resolver_path(path): register_thumbnail_resolver(TemplateResolver) register_thumbnail_resolver(BinaryThumbnail) +register_thumbnail_resolver(ServerThumbnailResolver) diff --git a/openpype/pipeline/version_start.py b/openpype/pipeline/version_start.py new file mode 100644 index 0000000000..0240ab0c7a --- /dev/null +++ b/openpype/pipeline/version_start.py @@ -0,0 +1,37 @@ +from openpype.lib.profiles_filtering import filter_profiles +from openpype.settings import get_project_settings + + +def get_versioning_start( + project_name, + host_name, + task_name=None, + task_type=None, + family=None, + subset=None, + project_settings=None, +): + """Get anatomy versioning start""" + if not project_settings: + project_settings = get_project_settings(project_name) + + version_start = 1 + settings = project_settings["global"] + profiles = settings.get("version_start_category", {}).get("profiles", []) + + if not profiles: + return version_start + + filtering_criteria = { + "host_names": host_name, + "families": family, + "task_names": task_name, + "task_types": task_type, + "subsets": subset + } + profile = filter_profiles(profiles, filtering_criteria) + + if profile is None: + return version_start + + return profile["version_start"] diff --git a/openpype/pipeline/workfile/build_workfile.py b/openpype/pipeline/workfile/build_workfile.py index 8329487839..7b153d37b9 100644 --- 
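# Sketch of the settings shape that 'get_versioning_start' reads. The profile
# keys mirror the filtering criteria built in the function; the concrete
# values are illustrative, and empty lists are treated by 'filter_profiles'
# as "match anything":
project_settings = {
    "global": {
        "version_start_category": {
            "profiles": [
                {
                    "host_names": ["maya"],
                    "families": ["workfile"],
                    "task_names": [],
                    "task_types": [],
                    "subsets": [],
                    "version_start": 0,
                }
            ]
        }
    }
}
# With this profile, e.g.:
# get_versioning_start("demo", "maya", family="workfile",
#                      project_settings=project_settings) -> 0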
a/openpype/pipeline/workfile/build_workfile.py +++ b/openpype/pipeline/workfile/build_workfile.py @@ -9,7 +9,6 @@ from '~/openpype/pipeline/workfile/workfile_template_builder'. Which gives more abilities to define how build happens but require more code to achive it. """ -import os import re import collections import json @@ -26,7 +25,6 @@ from openpype.lib import ( filter_profiles, Logger, ) -from openpype.pipeline import legacy_io from openpype.pipeline.load import ( discover_loader_plugins, IncompatibleLoaderError, @@ -102,11 +100,17 @@ class BuildWorkfile: List[Dict[str, Any]]: Loaded containers during build. """ + from openpype.pipeline.context_tools import ( + get_current_project_name, + get_current_asset_name, + get_current_task_name, + ) + loaded_containers = [] # Get current asset name and entity - project_name = legacy_io.active_project() - current_asset_name = legacy_io.Session["AVALON_ASSET"] + project_name = get_current_project_name() + current_asset_name = get_current_asset_name() current_asset_entity = get_asset_by_name( project_name, current_asset_name ) @@ -135,7 +139,7 @@ class BuildWorkfile: return loaded_containers # Get current task name - current_task_name = legacy_io.Session["AVALON_TASK"] + current_task_name = get_current_task_name() # Load workfile presets for task self.build_presets = self.get_build_presets( @@ -236,9 +240,14 @@ class BuildWorkfile: Dict[str, Any]: preset per entered task name """ - host_name = os.environ["AVALON_APP"] + from openpype.pipeline.context_tools import ( + get_current_host_name, + get_current_project_name, + ) + + host_name = get_current_host_name() project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] + get_current_project_name() ) host_settings = project_settings.get(host_name) or {} @@ -651,13 +660,15 @@ class BuildWorkfile: ``` """ + from openpype.pipeline.context_tools import get_current_project_name + output = {} if not asset_docs: return output asset_docs_by_ids = {asset["_id"]: asset for asset in asset_docs} - project_name = legacy_io.active_project() + project_name = get_current_project_name() subsets = list(get_subsets( project_name, asset_ids=asset_docs_by_ids.keys() )) diff --git a/openpype/pipeline/workfile/path_resolving.py b/openpype/pipeline/workfile/path_resolving.py index 15689f4d99..78acee20da 100644 --- a/openpype/pipeline/workfile/path_resolving.py +++ b/openpype/pipeline/workfile/path_resolving.py @@ -10,7 +10,7 @@ from openpype.lib import ( Logger, StringTemplate, ) -from openpype.pipeline import Anatomy +from openpype.pipeline import version_start, Anatomy from openpype.pipeline.template_data import get_template_data @@ -316,7 +316,13 @@ def get_last_workfile( ) if filename is None: data = copy.deepcopy(fill_data) - data["version"] = 1 + data["version"] = version_start.get_versioning_start( + data["project"]["name"], + data["app"], + task_name=data["task"]["name"], + task_type=data["task"]["type"], + family="workfile" + ) data.pop("comment", None) if not data.get("ext"): data["ext"] = extensions[0] diff --git a/openpype/pipeline/workfile/workfile_template_builder.py b/openpype/pipeline/workfile/workfile_template_builder.py index e1013b2645..b218a34868 100644 --- a/openpype/pipeline/workfile/workfile_template_builder.py +++ b/openpype/pipeline/workfile/workfile_template_builder.py @@ -28,8 +28,7 @@ from openpype.settings import ( get_project_settings, get_system_settings, ) -from openpype.host import IWorkfileHost -from openpype.host import HostBase +from openpype.host import 
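# The 'get_last_workfile' change above replaces the hardcoded first version
# with the configured start value. Reduced to plain Python (hypothetical
# helper, for illustration only):
def first_workfile_version(last_version, versioning_start):
    if last_version is not None:
        return last_version + 1
    return versioning_start

assert first_workfile_version(3, 0) == 4
assert first_workfile_version(None, 0) == 0  # previously always 1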
IWorkfileHost, HostBase from openpype.lib import ( Logger, StringTemplate, @@ -37,7 +36,7 @@ from openpype.lib import ( attribute_definitions, ) from openpype.lib.attribute_definitions import get_attributes_keys -from openpype.pipeline import legacy_io, Anatomy +from openpype.pipeline import Anatomy from openpype.pipeline.load import ( get_loaders_by_name, get_contexts_for_repre_docs, @@ -125,15 +124,30 @@ class AbstractTemplateBuilder(object): @property def project_name(self): - return legacy_io.active_project() + if isinstance(self._host, HostBase): + return self._host.get_current_project_name() + return os.getenv("AVALON_PROJECT") @property def current_asset_name(self): - return legacy_io.Session["AVALON_ASSET"] + if isinstance(self._host, HostBase): + return self._host.get_current_asset_name() + return os.getenv("AVALON_ASSET") @property def current_task_name(self): - return legacy_io.Session["AVALON_TASK"] + if isinstance(self._host, HostBase): + return self._host.get_current_task_name() + return os.getenv("AVALON_TASK") + + def get_current_context(self): + if isinstance(self._host, HostBase): + return self._host.get_current_context() + return { + "project_name": self.project_name, + "asset_name": self.current_asset_name, + "task_name": self.current_task_name + } @property def system_settings(self): @@ -790,10 +804,9 @@ class AbstractTemplateBuilder(object): fill_data["root"] = anatomy.roots fill_data["project"] = { "name": project_name, - "code": anatomy["attributes"]["code"] + "code": anatomy.project_code, } - result = StringTemplate.format_template(path, fill_data) if result.solved: path = result.normalized() @@ -1599,7 +1612,7 @@ class PlaceholderLoadMixin(object): pass - def delete_placeholder(self, placeholder, failed): + def delete_placeholder(self, placeholder): """Called when all item population is done.""" self.log.debug("Clean up of placeholder is not implemented.") @@ -1705,9 +1718,10 @@ class PlaceholderCreateMixin(object): creator_plugin = self.builder.get_creators_by_name()[creator_name] # create subset name - project_name = legacy_io.Session["AVALON_PROJECT"] - task_name = legacy_io.Session["AVALON_TASK"] - asset_name = legacy_io.Session["AVALON_ASSET"] + context = self._builder.get_current_context() + project_name = context["project_name"] + asset_name = context["asset_name"] + task_name = context["task_name"] if legacy_create: asset_doc = get_asset_by_name( @@ -1767,6 +1781,17 @@ class PlaceholderCreateMixin(object): self.post_placeholder_process(placeholder, failed) + if failed: + self.log.debug( + "Placeholder cleanup skipped due to failed placeholder " + "population." + ) + return + + if not placeholder.data.get("keep_placeholder", True): + self.delete_placeholder(placeholder) + + def create_failed(self, placeholder, creator_data): if hasattr(placeholder, "create_failed"): placeholder.create_failed(creator_data) @@ -1786,9 +1811,12 @@ class PlaceholderCreateMixin(object): representation. failed (bool): Loading of representation failed. """ - pass + def delete_placeholder(self, placeholder): + """Called when all item population is done.""" + self.log.debug("Clean up of placeholder is not implemented.") + def _before_instance_create(self, placeholder): """Can be overriden. 
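# The properties above prefer the host integration's own context and only
# fall back to the legacy environment variables. A simplified stand-in (the
# real code checks isinstance(host, HostBase) rather than hasattr):
import os

class _DemoHost:
    def get_current_project_name(self):
        return "demo_project"

def current_project_name(host):
    if hasattr(host, "get_current_project_name"):
        return host.get_current_project_name()
    return os.getenv("AVALON_PROJECT")

assert current_project_name(_DemoHost()) == "demo_project"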
Is called before instance is created.""" diff --git a/openpype/plugin.py b/openpype/plugin.py deleted file mode 100644 index 7e906b4451..0000000000 --- a/openpype/plugin.py +++ /dev/null @@ -1,128 +0,0 @@ -import functools -import warnings - -import pyblish.api - -# New location of orders: openpype.pipeline.publish.constants -# - can be imported as -# 'from openpype.pipeline.publish import ValidatePipelineOrder' -ValidatePipelineOrder = pyblish.api.ValidatorOrder + 0.05 -ValidateContentsOrder = pyblish.api.ValidatorOrder + 0.1 -ValidateSceneOrder = pyblish.api.ValidatorOrder + 0.2 -ValidateMeshOrder = pyblish.api.ValidatorOrder + 0.3 - - -class PluginDeprecatedWarning(DeprecationWarning): - pass - - -def _deprecation_warning(item_name, warning_message): - warnings.simplefilter("always", PluginDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(item_name, warning_message), - category=PluginDeprecatedWarning, - stacklevel=4 - ) - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - _deprecation_warning(decorated_func.__name__, warning_message) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -# Classes just inheriting from pyblish classes -# - seems to be unused in code (not 100% sure) -# - they should be removed but because it is not clear if they're used -# we'll keep then and log deprecation warning -# Deprecated since 3.14.* will be removed in 3.16.* -class ContextPlugin(pyblish.api.ContextPlugin): - def __init__(self, *args, **kwargs): - _deprecation_warning( - "openpype.plugin.ContextPlugin", - " Please replace your usage with 'pyblish.api.ContextPlugin'." - ) - super(ContextPlugin, self).__init__(*args, **kwargs) - - -# Deprecated since 3.14.* will be removed in 3.16.* -class InstancePlugin(pyblish.api.InstancePlugin): - def __init__(self, *args, **kwargs): - _deprecation_warning( - "openpype.plugin.ContextPlugin", - " Please replace your usage with 'pyblish.api.InstancePlugin'." - ) - super(InstancePlugin, self).__init__(*args, **kwargs) - - -class Extractor(pyblish.api.InstancePlugin): - """Extractor base class. - - The extractor base class implements a "staging_dir" function used to - generate a temporary directory for an instance to extract to. - - This temporary directory is generated through `tempfile.mkdtemp()` - - """ - - order = 2.0 - - def staging_dir(self, instance): - """Provide a temporary directory in which to store extracted files - - Upon calling this method the staging directory is stored inside - the instance.data['stagingDir'] - """ - - from openpype.pipeline.publish import get_instance_staging_dir - - return get_instance_staging_dir(instance) - - -@deprecated("openpype.pipeline.publish.context_plugin_should_run") -def contextplugin_should_run(plugin, context): - """Return whether the ContextPlugin should run on the given context. 
- - This is a helper function to work around a bug pyblish-base#250 - Whenever a ContextPlugin sets specific families it will still trigger even - when no instances are present that have those families. - - This actually checks it correctly and returns whether it should run. - - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import context_plugin_should_run - - return context_plugin_should_run(plugin, context) diff --git a/openpype/plugins/actions/open_file_explorer.py b/openpype/plugins/actions/open_file_explorer.py new file mode 100644 index 0000000000..e4fbd91143 --- /dev/null +++ b/openpype/plugins/actions/open_file_explorer.py @@ -0,0 +1,125 @@ +import os +import platform +import subprocess + +from string import Formatter +from openpype.client import ( + get_project, + get_asset_by_name, +) +from openpype.pipeline import ( + Anatomy, + LauncherAction, +) +from openpype.pipeline.template_data import get_template_data + + +class OpenTaskPath(LauncherAction): + name = "open_task_path" + label = "Explore here" + icon = "folder-open" + order = 500 + + def is_compatible(self, session): + """Return whether the action is compatible with the session""" + return bool(session.get("AVALON_ASSET")) + + def process(self, session, **kwargs): + from qtpy import QtCore, QtWidgets + + project_name = session["AVALON_PROJECT"] + asset_name = session["AVALON_ASSET"] + task_name = session.get("AVALON_TASK", None) + + path = self._get_workdir(project_name, asset_name, task_name) + if not path: + return + + app = QtWidgets.QApplication.instance() + ctrl_pressed = QtCore.Qt.ControlModifier & app.keyboardModifiers() + if ctrl_pressed: + # Copy path to clipboard + self.copy_path_to_clipboard(path) + else: + self.open_in_explorer(path) + + def _find_first_filled_path(self, path): + if not path: + return "" + + fields = set() + for item in Formatter().parse(path): + _, field_name, format_spec, conversion = item + if not field_name: + continue + conversion = "!{}".format(conversion) if conversion else "" + format_spec = ":{}".format(format_spec) if format_spec else "" + orig_key = "{{{}{}{}}}".format( + field_name, conversion, format_spec) + fields.add(orig_key) + + for field in fields: + path = path.split(field, 1)[0] + return path + + def _get_workdir(self, project_name, asset_name, task_name): + project = get_project(project_name) + asset = get_asset_by_name(project_name, asset_name) + + data = get_template_data(project, asset, task_name) + + anatomy = Anatomy(project_name) + workdir = anatomy.templates_obj["work"]["folder"].format(data) + + # Remove any potential un-formatted parts of the path + valid_workdir = self._find_first_filled_path(workdir) + + # Path is not filled at all + if not valid_workdir: + raise AssertionError("Failed to calculate workdir.") + + # Normalize + valid_workdir = os.path.normpath(valid_workdir) + if os.path.exists(valid_workdir): + return valid_workdir + + # If task was selected, try to find asset path only to asset + if not task_name: + raise AssertionError("Folder does not exist.") + + data.pop("task", None) + workdir = anatomy.templates_obj["work"]["folder"].format(data) + valid_workdir = self._find_first_filled_path(workdir) + if valid_workdir: + # Normalize + valid_workdir = os.path.normpath(valid_workdir) + if os.path.exists(valid_workdir): + return valid_workdir + raise AssertionError("Folder does not exist.") + + @staticmethod + def open_in_explorer(path): + platform_name = platform.system().lower() + if platform_name 
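# '_find_first_filled_path' above truncates a partially formatted template at
# the first remaining placeholder, so an existing parent folder can still be
# opened. A simplified standalone version of the same idea:
from string import Formatter

def first_filled_part(path):
    for _literal, field_name, _spec, _conv in Formatter().parse(path):
        if field_name is not None:
            # cut everything from the first unfilled field onwards
            return path.split("{" + field_name, 1)[0]
    return path

assert first_filled_part(
    "P:/proj/sh010/work/{task[name]}/v{version:0>3}"
) == "P:/proj/sh010/work/"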
== "windows": + args = ["start", path] + elif platform_name == "darwin": + args = ["open", "-na", path] + elif platform_name == "linux": + args = ["xdg-open", path] + else: + raise RuntimeError(f"Unknown platform {platform.system()}") + # Make sure path is converted correctly for 'os.system' + os.system(subprocess.list2cmdline(args)) + + @staticmethod + def copy_path_to_clipboard(path): + from qtpy import QtWidgets + + path = path.replace("\\", "/") + print(f"Copied to clipboard: {path}") + app = QtWidgets.QApplication.instance() + assert app, "Must have running QApplication instance" + + # Set to Clipboard + clipboard = QtWidgets.QApplication.clipboard() + clipboard.setText(os.path.normpath(path)) diff --git a/openpype/plugins/load/copy_file.py b/openpype/plugins/load/copy_file.py index 163f56a83a..7fd56c8a6a 100644 --- a/openpype/plugins/load/copy_file.py +++ b/openpype/plugins/load/copy_file.py @@ -14,8 +14,9 @@ class CopyFile(load.LoaderPlugin): color = get_default_entity_icon_color() def load(self, context, name=None, namespace=None, data=None): - self.log.info("Added copy to clipboard: {0}".format(self.fname)) - self.copy_file_to_clipboard(self.fname) + path = self.filepath_from_context(context) + self.log.info("Added copy to clipboard: {0}".format(path)) + self.copy_file_to_clipboard(path) @staticmethod def copy_file_to_clipboard(path): diff --git a/openpype/plugins/load/copy_file_path.py b/openpype/plugins/load/copy_file_path.py index 569e5c8780..b055494e85 100644 --- a/openpype/plugins/load/copy_file_path.py +++ b/openpype/plugins/load/copy_file_path.py @@ -14,8 +14,9 @@ class CopyFilePath(load.LoaderPlugin): color = "#999999" def load(self, context, name=None, namespace=None, data=None): - self.log.info("Added file path to clipboard: {0}".format(self.fname)) - self.copy_path_to_clipboard(self.fname) + path = self.filepath_from_context(context) + self.log.info("Added file path to clipboard: {0}".format(path)) + self.copy_path_to_clipboard(path) @staticmethod def copy_path_to_clipboard(path): diff --git a/openpype/plugins/load/delivery.py b/openpype/plugins/load/delivery.py index 4bd4f6e9cf..3b493989bd 100644 --- a/openpype/plugins/load/delivery.py +++ b/openpype/plugins/load/delivery.py @@ -95,6 +95,12 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): template_label.setCursor(QtGui.QCursor(QtCore.Qt.IBeamCursor)) template_label.setTextInteractionFlags(QtCore.Qt.TextSelectableByMouse) + renumber_frame = QtWidgets.QCheckBox() + + first_frame_start = QtWidgets.QSpinBox() + max_int = (1 << 32) // 2 + first_frame_start.setRange(0, max_int - 1) + root_line_edit = QtWidgets.QLineEdit() repre_checkboxes_layout = QtWidgets.QFormLayout() @@ -118,6 +124,8 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): input_layout.addRow("Selected representations", selected_label) input_layout.addRow("Delivery template", dropdown) input_layout.addRow("Template value", template_label) + input_layout.addRow("Renumber Frame", renumber_frame) + input_layout.addRow("Renumber start frame", first_frame_start) input_layout.addRow("Root", root_line_edit) input_layout.addRow("Representations", repre_checkboxes_layout) @@ -145,6 +153,8 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): self.selected_label = selected_label self.template_label = template_label self.dropdown = dropdown + self.first_frame_start = first_frame_start + self.renumber_frame = renumber_frame self.root_line_edit = root_line_edit self.progress_bar = progress_bar self.text_area = text_area @@ -181,6 +191,8 @@ class 
DeliveryOptionsDialog(QtWidgets.QDialog): datetime_data = get_datetime_data() template_name = self.dropdown.currentText() format_dict = get_format_dict(self.anatomy, self.root_line_edit.text()) + renumber_frame = self.renumber_frame.isChecked() + frame_offset = self.first_frame_start.value() for repre in self._representations: if repre["name"] not in selected_repres: continue @@ -218,9 +230,31 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): src_paths.append(src_path) sources_and_frames = collect_frames(src_paths) + frames = set(sources_and_frames.values()) + frames.discard(None) + first_frame = None + if frames: + first_frame = min(frames) + for src_path, frame in sources_and_frames.items(): args[0] = src_path - if frame: + # Renumber frames + if renumber_frame and frame is not None: + # Calculate offset between + # first frame and current frame + # - '0' for first frame + offset = frame_offset - int(first_frame) + # Add offset to new frame start + dst_frame = int(frame) + offset + if dst_frame < 0: + msg = "Renumber frame has a smaller number than original frame" # noqa + report_items[msg].append(src_path) + self.log.warning("{} <{}>".format( + msg, dst_frame)) + continue + frame = dst_frame + + if frame is not None: anatomy_data["frame"] = frame new_report_items, uploaded = deliver_single_file(*args) report_items.update(new_report_items) diff --git a/openpype/plugins/load/open_djv.py b/openpype/plugins/load/open_djv.py index 9c36e7f405..5c679f6a51 100644 --- a/openpype/plugins/load/open_djv.py +++ b/openpype/plugins/load/open_djv.py @@ -33,9 +33,11 @@ class OpenInDJV(load.LoaderPlugin): color = "orange" def load(self, context, name, namespace, data): - directory = os.path.dirname(self.fname) import clique + path = self.filepath_from_context(context) + directory = os.path.dirname(path) + pattern = clique.PATTERNS["frames"] files = os.listdir(directory) collections, remainder = clique.assemble( @@ -48,7 +50,7 @@ class OpenInDJV(load.LoaderPlugin): sequence = collections[0] first_image = list(sequence)[0] else: - first_image = self.fname + first_image = path filepath = os.path.normpath(os.path.join(directory, first_image)) self.log.info("Opening : {}".format(filepath)) diff --git a/openpype/plugins/load/open_file.py b/openpype/plugins/load/open_file.py index 00b2ecd7c5..5c4f4901d1 100644 --- a/openpype/plugins/load/open_file.py +++ b/openpype/plugins/load/open_file.py @@ -28,7 +28,7 @@ class OpenFile(load.LoaderPlugin): def load(self, context, name, namespace, data): - path = self.fname + path = self.filepath_from_context(context) if not os.path.exists(path): raise RuntimeError("File not found: {}".format(path)) diff --git a/openpype/plugins/publish/collect_anatomy_instance_data.py b/openpype/plugins/publish/collect_anatomy_instance_data.py index 128ad90b4f..b4f4d6a16a 100644 --- a/openpype/plugins/publish/collect_anatomy_instance_data.py +++ b/openpype/plugins/publish/collect_anatomy_instance_data.py @@ -32,6 +32,7 @@ from openpype.client import ( get_subsets, get_last_versions ) +from openpype.pipeline.version_start import get_versioning_start class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): @@ -187,25 +188,13 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): project_task_types = project_doc["config"]["tasks"] for instance in context: - if self.follow_workfile_version: - version_number = context.data('version') - else: - version_number = instance.data.get("version") - # If version is not specified for instance or context - if version_number is None: - # TODO 
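# The renumbering below applies one constant offset to every frame of the
# sequence. A worked example with illustrative numbers:
first_frame = 1001   # lowest frame found in the source sequence
frame_offset = 1     # "Renumber start frame" value from the dialog
offset = frame_offset - first_frame
renumbered = [frame + offset for frame in (1001, 1002, 1003)]
assert renumbered == [1, 2, 3]
# negative results would be reported and the file skipped, as in the code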
we should be able to change default version by studio - # preferences (like start with version number `0`) - version_number = 1 - # use latest version (+1) if already any exist - latest_version = instance.data["latestVersion"] - if latest_version is not None: - version_number += int(latest_version) - anatomy_updates = { "asset": instance.data["asset"], + "folder": { + "name": instance.data["asset"], + }, "family": instance.data["family"], "subset": instance.data["subset"], - "version": version_number } # Hierarchy @@ -225,6 +214,7 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): anatomy_updates["parent"] = parent_name # Task + task_type = None task_name = instance.data.get("task") if task_name: asset_tasks = asset_doc["data"]["tasks"] @@ -240,6 +230,30 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): "short": task_code } + # Define version + if self.follow_workfile_version: + version_number = context.data('version') + else: + version_number = instance.data.get("version") + + # use latest version (+1) if already any exist + if version_number is None: + latest_version = instance.data["latestVersion"] + if latest_version is not None: + version_number = int(latest_version) + 1 + + # If version is not specified for instance or context + if version_number is None: + version_number = get_versioning_start( + context.data["projectName"], + instance.context.data["hostName"], + task_name=task_name, + task_type=task_type, + family=instance.data["family"], + subset=instance.data["subset"] + ) + anatomy_updates["version"] = version_number + # Additional data resolution_width = instance.data.get("resolutionWidth") if resolution_width: diff --git a/openpype/plugins/publish/collect_current_context.py b/openpype/plugins/publish/collect_current_context.py index 7e42700d7d..166d75e5de 100644 --- a/openpype/plugins/publish/collect_current_context.py +++ b/openpype/plugins/publish/collect_current_context.py @@ -6,7 +6,7 @@ Provides: """ import pyblish.api -from openpype.pipeline import legacy_io +from openpype.pipeline import get_current_context class CollectCurrentContext(pyblish.api.ContextPlugin): @@ -19,24 +19,20 @@ class CollectCurrentContext(pyblish.api.ContextPlugin): label = "Collect Current context" def process(self, context): - # Make sure 'legacy_io' is intalled - legacy_io.install() - # Check if values are already set project_name = context.data.get("projectName") asset_name = context.data.get("asset") task_name = context.data.get("task") + + current_context = get_current_context() if not project_name: - project_name = legacy_io.current_project() - context.data["projectName"] = project_name + context.data["projectName"] = current_context["project_name"] if not asset_name: - asset_name = legacy_io.Session.get("AVALON_ASSET") - context.data["asset"] = asset_name + context.data["asset"] = current_context["asset_name"] if not task_name: - task_name = legacy_io.Session.get("AVALON_TASK") - context.data["task"] = task_name + context.data["task"] = current_context["task_name"] # QUESTION should we be explicit with keys? 
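# Version resolution above, condensed: an explicit version wins, then
# "latest + 1", and only then the configured start value. A plain-Python
# sketch (helper name is made up):
def resolve_publish_version(explicit, latest, versioning_start):
    if explicit is not None:
        return explicit
    if latest is not None:
        return int(latest) + 1
    return versioning_start

assert resolve_publish_version(5, 2, 1) == 5
assert resolve_publish_version(None, 2, 1) == 3
assert resolve_publish_version(None, None, 0) == 0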
(the same on instances) # - 'asset' -> 'assetName' diff --git a/openpype/plugins/publish/collect_scene_version.py b/openpype/plugins/publish/collect_scene_version.py index cd3231a07d..70a0aca296 100644 --- a/openpype/plugins/publish/collect_scene_version.py +++ b/openpype/plugins/publish/collect_scene_version.py @@ -3,6 +3,7 @@ import pyblish.api from openpype.lib import get_version_from_path from openpype.tests.lib import is_in_tests +from openpype.pipeline import KnownPublishError class CollectSceneVersion(pyblish.api.ContextPlugin): @@ -38,11 +39,15 @@ class CollectSceneVersion(pyblish.api.ContextPlugin): if ( os.environ.get("HEADLESS_PUBLISH") and not is_in_tests() - and context.data["hostName"] in self.skip_hosts_headless_publish): + and context.data["hostName"] in self.skip_hosts_headless_publish + ): self.log.debug("Skipping for headless publishing") return - assert context.data.get('currentFile'), "Cannot get current file" + if not context.data.get('currentFile'): + raise KnownPublishError("Cannot get current workfile path. " + "Make sure your scene is saved.") + filename = os.path.basename(context.data.get('currentFile')) if '<shell>' in filename: @@ -53,8 +58,9 @@ class CollectSceneVersion(pyblish.api.ContextPlugin): ) version = get_version_from_path(filename) - assert version, "Cannot determine version" + if version is None: + raise KnownPublishError("Unable to retrieve version number from " + "filename: {}".format(filename)) - rootVersion = int(version) - context.data['version'] = rootVersion + context.data['version'] = int(version) self.log.info('Scene Version: %s' % context.data.get('version')) diff --git a/openpype/plugins/publish/collect_sequence_frame_data.py b/openpype/plugins/publish/collect_sequence_frame_data.py new file mode 100644 index 0000000000..c200b245e9 --- /dev/null +++ b/openpype/plugins/publish/collect_sequence_frame_data.py @@ -0,0 +1,53 @@ +import pyblish.api +import clique + + +class CollectSequenceFrameData(pyblish.api.InstancePlugin): + """Collect Sequence Frame Data + If the representation includes files with frame numbers, + then set `frameStart` and `frameEnd` for the instance to the + start and end frame respectively + """ + + order = pyblish.api.CollectorOrder + 0.2 + label = "Collect Sequence Frame Data" + families = ["plate", "pointcache", + "vdbcache", "online", + "render"] + hosts = ["traypublisher"] + + def process(self, instance): + frame_data = self.get_frame_data_from_repre_sequence(instance) + if not frame_data: + # if no dict data skip collecting the frame range data + return + for key, value in frame_data.items(): + if key not in instance.data: + instance.data[key] = value + self.log.debug(f"Collected Frame range data '{key}':{value} ") + + def get_frame_data_from_repre_sequence(self, instance): + repres = instance.data.get("representations") + if repres: + first_repre = repres[0] + if "ext" not in first_repre: + self.log.warning("Cannot find file extension" + " in representation data") + return + + files = first_repre["files"] + collections, remainder = clique.assemble(files) + if not collections: + # No sequences detected and we can't retrieve + # frame range + self.log.debug( + "No sequences detected in the representation data." 
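# How 'get_frame_data_from_repre_sequence' derives the frame range with
# 'clique' (assuming the clique package is installed; file names are
# illustrative):
import clique

files = ["plate.1001.exr", "plate.1002.exr", "plate.1003.exr"]
collections, _remainder = clique.assemble(files)
indexes = list(collections[0].indexes)
assert {"frameStart": indexes[0], "frameEnd": indexes[-1]} == {
    "frameStart": 1001, "frameEnd": 1003
}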
+ " Skipping collecting frame range data.") + return + collection = collections[0] + repres_frames = list(collection.indexes) + + return { + "frameStart": repres_frames[0], + "frameEnd": repres_frames[-1], + } diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index e67739e842..e5b37ee3b4 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -52,8 +52,9 @@ class ExtractBurnin(publish.Extractor): "photoshop", "flame", "houdini", - "max" - # "resolve" + "max", + "blender", + "unreal" ] optional = True diff --git a/openpype/plugins/publish/extract_hierarchy_avalon.py b/openpype/plugins/publish/extract_hierarchy_avalon.py index 493780645c..1d57545bc0 100644 --- a/openpype/plugins/publish/extract_hierarchy_avalon.py +++ b/openpype/plugins/publish/extract_hierarchy_avalon.py @@ -1,6 +1,7 @@ import collections from copy import deepcopy import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_assets, get_archived_assets @@ -16,6 +17,9 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin): families = ["clip", "shot"] def process(self, context): + if AYON_SERVER_ENABLED: + return + if "hierarchyContext" not in context.data: self.log.info("skipping IntegrateHierarchyToAvalon") return diff --git a/openpype/plugins/publish/extract_hierarchy_to_ayon.py b/openpype/plugins/publish/extract_hierarchy_to_ayon.py new file mode 100644 index 0000000000..915650ae41 --- /dev/null +++ b/openpype/plugins/publish/extract_hierarchy_to_ayon.py @@ -0,0 +1,234 @@ +import collections +import copy +import json +import uuid +import pyblish.api + +from ayon_api import slugify_string +from ayon_api.entity_hub import EntityHub + +from openpype import AYON_SERVER_ENABLED + + +def _default_json_parse(value): + return str(value) + + +class ExtractHierarchyToAYON(pyblish.api.ContextPlugin): + """Create entities in AYON based on collected data.""" + + order = pyblish.api.ExtractorOrder - 0.01 + label = "Extract Hierarchy To AYON" + families = ["clip", "shot"] + + def process(self, context): + if not AYON_SERVER_ENABLED: + return + + hierarchy_context = context.data.get("hierarchyContext") + if not hierarchy_context: + self.log.info("Skipping") + return + + project_name = context.data["projectName"] + hierarchy_context = self._filter_hierarchy(context) + if not hierarchy_context: + self.log.info("All folders were filtered out") + return + + self.log.debug("Hierarchy_context: {}".format( + json.dumps(hierarchy_context, default=_default_json_parse) + )) + + entity_hub = EntityHub(project_name) + project = entity_hub.project_entity + + hierarchy_match_queue = collections.deque() + hierarchy_match_queue.append((project, hierarchy_context)) + while hierarchy_match_queue: + item = hierarchy_match_queue.popleft() + entity, entity_info = item + + # Update attributes of entities + for attr_name, attr_value in entity_info["attributes"].items(): + if attr_name in entity.attribs: + entity.attribs[attr_name] = attr_value + + # Check if info has any children to sync + children_info = entity_info["children"] + tasks_info = entity_info["tasks"] + if not tasks_info and not children_info: + continue + + # Prepare children by lowered name to easily find matching entities + children_by_low_name = { + child.name.lower(): child + for child in entity.children + } + + # Create tasks if are not available + for task_info in tasks_info: + task_label = task_info["name"] + task_name = slugify_string(task_label) + 
if task_name == task_label: + task_label = None + task_entity = children_by_low_name.get(task_name.lower()) + # TODO propagate updates of tasks if there are any + # TODO check if existing entity have 'task' type + if task_entity is None: + task_entity = entity_hub.add_new_task( + task_info["type"], + parent_id=entity.id, + name=task_name + ) + + if task_label: + task_entity.label = task_label + + # Create/Update sub-folders + for child_info in children_info: + child_label = child_info["name"] + child_name = slugify_string(child_label) + if child_name == child_label: + child_label = None + # TODO check if existing entity have 'folder' type + child_entity = children_by_low_name.get(child_name.lower()) + if child_entity is None: + child_entity = entity_hub.add_new_folder( + child_info["entity_type"], + parent_id=entity.id, + name=child_name + ) + + if child_label: + child_entity.label = child_label + + # Add folder to queue + hierarchy_match_queue.append((child_entity, child_info)) + + entity_hub.commit_changes() + + def _filter_hierarchy(self, context): + """Filter hierarchy context by active folder names. + + Hierarchy context is filtered to folder names on active instances. + + Change hierarchy context to unified structure which suits logic in + entity creation. + + Output example: + { + "name": "MyProject", + "entity_type": "Project", + "attributes": {}, + "tasks": [], + "children": [ + { + "name": "seq_01", + "entity_type": "Sequence", + "attributes": {}, + "tasks": [], + "children": [ + ... + ] + }, + ... + ] + } + + Todos: + Change how active folder are defined (names won't be enough in + AYON). + + Args: + context (pyblish.api.Context): Pyblish context. + + Returns: + dict[str, Any]: Hierarchy structure filtered by folder names. + """ + + # filter only the active publishing instances + active_folder_names = set() + for instance in context: + if instance.data.get("publish") is not False: + active_folder_names.add(instance.data.get("asset")) + + active_folder_names.discard(None) + + self.log.debug("Active folder names: {}".format(active_folder_names)) + if not active_folder_names: + return None + + project_item = None + project_children_context = None + for key, value in context.data["hierarchyContext"].items(): + project_item = copy.deepcopy(value) + project_children_context = project_item.pop("childs", None) + project_item["name"] = key + project_item["tasks"] = [] + project_item["attributes"] = project_item.pop( + "custom_attributes", {} + ) + project_item["children"] = [] + + if not project_children_context: + return None + + project_id = uuid.uuid4().hex + items_by_id = {project_id: project_item} + parent_id_by_item_id = {project_id: None} + valid_ids = set() + + hierarchy_queue = collections.deque() + hierarchy_queue.append((project_id, project_children_context)) + while hierarchy_queue: + queue_item = hierarchy_queue.popleft() + parent_id, children_context = queue_item + if not children_context: + continue + + for asset_name, asset_info in children_context.items(): + if ( + asset_name not in active_folder_names + and not asset_info.get("childs") + ): + continue + item_id = uuid.uuid4().hex + new_item = copy.deepcopy(asset_info) + new_item["name"] = asset_name + new_item["children"] = [] + new_children_context = new_item.pop("childs", None) + tasks = new_item.pop("tasks", {}) + task_items = [] + for task_name, task_info in tasks.items(): + task_info["name"] = task_name + task_items.append(task_info) + new_item["tasks"] = task_items + new_item["attributes"] = 
new_item.pop("custom_attributes", {}) + + items_by_id[item_id] = new_item + parent_id_by_item_id[item_id] = parent_id + + if asset_name in active_folder_names: + valid_ids.add(item_id) + hierarchy_queue.append((item_id, new_children_context)) + + if not valid_ids: + return None + + for item_id in set(valid_ids): + parent_id = parent_id_by_item_id[item_id] + while parent_id is not None and parent_id not in valid_ids: + valid_ids.add(parent_id) + parent_id = parent_id_by_item_id[parent_id] + + valid_ids.discard(project_id) + for item_id in valid_ids: + parent_id = parent_id_by_item_id[item_id] + item = items_by_id[item_id] + parent_item = items_by_id[parent_id] + parent_item["children"].append(item) + + if not project_item["children"]: + return None + return project_item diff --git a/openpype/plugins/publish/extract_otio_audio_tracks.py b/openpype/plugins/publish/extract_otio_audio_tracks.py index e19b7eeb13..4f17731452 100644 --- a/openpype/plugins/publish/extract_otio_audio_tracks.py +++ b/openpype/plugins/publish/extract_otio_audio_tracks.py @@ -1,7 +1,7 @@ import os import pyblish from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess ) import tempfile @@ -20,9 +20,6 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): label = "Extract OTIO Audio Tracks" hosts = ["hiero", "resolve", "flame"] - # FFmpeg tools paths - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - def process(self, context): """Convert otio audio track's content to audio representations @@ -91,13 +88,13 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): # temp audio file audio_fpath = self.create_temp_file(name) - cmd = [ - self.ffmpeg_path, + cmd = get_ffmpeg_tool_args( + "ffmpeg", "-ss", str(start_sec), "-t", str(duration_sec), "-i", audio_file, audio_fpath - ] + ) # run subprocess self.log.debug("Executing: {}".format(" ".join(cmd))) @@ -210,13 +207,13 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): max_duration_sec = max(end_secs) # create empty cmd - cmd = [ - self.ffmpeg_path, + cmd = get_ffmpeg_tool_args( + "ffmpeg", "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=48000", "-t", str(max_duration_sec), empty_fpath - ] + ) # generate empty with ffmpeg # run subprocess @@ -295,7 +292,7 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): filters_tmp_filepath = tmp_file.name tmp_file.write(",".join(filters)) - args = [self.ffmpeg_path] + args = get_ffmpeg_tool_args("ffmpeg") args.extend(input_args) args.extend([ "-filter_complex_script", filters_tmp_filepath, diff --git a/openpype/plugins/publish/extract_otio_review.py b/openpype/plugins/publish/extract_otio_review.py index 9ebcad2af1..699207df8a 100644 --- a/openpype/plugins/publish/extract_otio_review.py +++ b/openpype/plugins/publish/extract_otio_review.py @@ -20,7 +20,7 @@ import opentimelineio as otio from pyblish import api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -338,8 +338,6 @@ class ExtractOTIOReview(publish.Extractor): Returns: otio.time.TimeRange: trimmed available range """ - # get rendering app path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path and frame start to destination output_path, out_frame_start = self._get_ffmpeg_output() @@ -348,7 +346,7 @@ class ExtractOTIOReview(publish.Extractor): out_frame_start += end_offset # start command list - command = [ffmpeg_path] + command = get_ffmpeg_tool_args("ffmpeg") input_extension = None if sequence: diff --git 
a/openpype/plugins/publish/extract_otio_trimming_video.py b/openpype/plugins/publish/extract_otio_trimming_video.py index 70726338aa..67ff6c538c 100644 --- a/openpype/plugins/publish/extract_otio_trimming_video.py +++ b/openpype/plugins/publish/extract_otio_trimming_video.py @@ -11,7 +11,7 @@ from copy import deepcopy import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -75,14 +75,12 @@ class ExtractOTIOTrimmingVideo(publish.Extractor): otio_range (opentime.TimeRange): range to trim to """ - # get rendering app path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path to destination output_path = self._get_ffmpeg_output(input_file_path) # start command list - command = [ffmpeg_path] + command = get_ffmpeg_tool_args("ffmpeg") video_path = input_file_path frame_start = otio_range.start_time.value diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py index f053d1b500..9cc456872e 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -3,6 +3,7 @@ import re import copy import json import shutil +import subprocess from abc import ABCMeta, abstractmethod import six @@ -11,7 +12,7 @@ import speedcopy import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, filter_profiles, path_to_subprocess_arg, run_subprocess, @@ -72,9 +73,6 @@ class ExtractReview(pyblish.api.InstancePlugin): alpha_exts = ["exr", "png", "dpx"] - # FFmpeg tools paths - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - # Preset attributes profiles = None @@ -787,8 +785,9 @@ class ExtractReview(pyblish.api.InstancePlugin): arg = arg.replace(identifier, "").strip() audio_filters.append(arg) - all_args = [] - all_args.append(path_to_subprocess_arg(self.ffmpeg_path)) + all_args = [ + subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg")) + ] all_args.extend(input_args) if video_filters: all_args.append("-filter:v") diff --git a/openpype/plugins/publish/extract_review_slate.py b/openpype/plugins/publish/extract_review_slate.py index fca3d96ca6..886384fee6 100644 --- a/openpype/plugins/publish/extract_review_slate.py +++ b/openpype/plugins/publish/extract_review_slate.py @@ -1,5 +1,6 @@ import os import re +import subprocess from pprint import pformat import pyblish.api @@ -7,7 +8,7 @@ import pyblish.api from openpype.lib import ( path_to_subprocess_arg, run_subprocess, - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffprobe_data, get_ffprobe_streams, get_ffmpeg_codec_args, @@ -47,8 +48,6 @@ class ExtractReviewSlate(publish.Extractor): self.log.info("_ slates_data: {}".format(pformat(slates_data))) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - if "reviewToWidth" in inst_data: use_legacy_code = True else: @@ -86,8 +85,11 @@ class ExtractReviewSlate(publish.Extractor): input_width, input_height, input_timecode, - input_frame_rate + input_frame_rate, + input_pixel_aspect ) = self._get_video_metadata(streams) + if input_pixel_aspect: + pixel_aspect = input_pixel_aspect # Raise exception of any stream didn't define input resolution if input_width is None: @@ -260,7 +262,7 @@ class ExtractReviewSlate(publish.Extractor): _remove_at_end.append(slate_v_path) slate_args = [ - path_to_subprocess_arg(ffmpeg_path), + subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg")), " ".join(input_args), " ".join(output_args) ] @@ -281,7 +283,6 @@ class ExtractReviewSlate(publish.Extractor): 
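# The refactor running through these extractors swaps a single ffmpeg
# executable path for an argument list, so bundled tools that must be
# launched through a wrapper keep working. Usage mirrors the calls in this
# diff; the file paths are illustrative and an OpenPype environment is
# assumed:
from openpype.lib import get_ffmpeg_tool_args, run_subprocess

cmd = get_ffmpeg_tool_args(
    "ffmpeg",
    "-i", "/path/to/input.mov",
    "/path/to/output.mp4",
)
run_subprocess(cmd)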
os.path.splitext(slate_v_path)) _remove_at_end.append(slate_silent_path) self._create_silent_slate( - ffmpeg_path, slate_v_path, slate_silent_path, audio_codec, @@ -309,12 +310,12 @@ class ExtractReviewSlate(publish.Extractor): "[0:v] [1:v] concat=n=2:v=1:a=0 [v]", "-map", '[v]' ] - concat_args = [ - ffmpeg_path, + concat_args = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", slate_v_path, "-i", input_path, - ] + ) concat_args.extend(fmap) if offset_timecode: concat_args.extend(["-timecode", offset_timecode]) @@ -421,6 +422,7 @@ class ExtractReviewSlate(publish.Extractor): input_width = None input_height = None input_frame_rate = None + input_pixel_aspect = None for stream in streams: if stream.get("codec_type") != "video": continue @@ -438,6 +440,16 @@ class ExtractReviewSlate(publish.Extractor): input_width = width input_height = height + input_pixel_aspect = stream.get("sample_aspect_ratio") + if input_pixel_aspect is not None: + try: + input_pixel_aspect = float( + eval(str(input_pixel_aspect).replace(':', '/'))) + except Exception: + self.log.debug( + "__Converting pixel aspect to float failed: {}".format( + input_pixel_aspect)) + tags = stream.get("tags") or {} input_timecode = tags.get("timecode") or "" @@ -448,7 +460,8 @@ class ExtractReviewSlate(publish.Extractor): input_width, input_height, input_timecode, - input_frame_rate + input_frame_rate, + input_pixel_aspect ) def _get_audio_metadata(self, streams): @@ -490,7 +503,6 @@ class ExtractReviewSlate(publish.Extractor): def _create_silent_slate( self, - ffmpeg_path, src_path, dst_path, audio_codec, @@ -515,8 +527,8 @@ class ExtractReviewSlate(publish.Extractor): one_frame_duration = str(int(one_frame_duration)) + "us" self.log.debug("One frame duration is {}".format(one_frame_duration)) - slate_silent_args = [ - ffmpeg_path, + slate_silent_args = get_ffmpeg_tool_args( + "ffmpeg", "-i", src_path, "-f", "lavfi", "-i", "anullsrc=r={}:cl={}:d={}".format( @@ -531,7 +543,7 @@ class ExtractReviewSlate(publish.Extractor): "-shortest", "-y", dst_path - ] + ) # run slate generation subprocess self.log.debug("Silent Slate Executing: {}".format( " ".join(slate_silent_args) diff --git a/openpype/plugins/publish/extract_scanline_exr.py b/openpype/plugins/publish/extract_scanline_exr.py index 0e4c0ca65f..9f22794a79 100644 --- a/openpype/plugins/publish/extract_scanline_exr.py +++ b/openpype/plugins/publish/extract_scanline_exr.py @@ -5,7 +5,12 @@ import shutil import pyblish.api -from openpype.lib import run_subprocess, get_oiio_tools_path +from openpype.lib import ( + run_subprocess, + get_oiio_tool_args, + ToolNotFoundError, +) +from openpype.pipeline import KnownPublishError class ExtractScanlineExr(pyblish.api.InstancePlugin): @@ -45,11 +50,11 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin): stagingdir = os.path.normpath(repre.get("stagingDir")) - oiio_tool_path = get_oiio_tools_path() - if not os.path.exists(oiio_tool_path): - self.log.error( - "OIIO tool not found in {}".format(oiio_tool_path)) - raise AssertionError("OIIO tool not found") + try: + oiio_tool_args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + self.log.error("OIIO tool not found.") + raise KnownPublishError("OIIO tool not found") for file in input_files: @@ -57,8 +62,7 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin): temp_name = os.path.join(stagingdir, "__{}".format(file)) # move original render to temp location shutil.move(original_name, temp_name) - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = oiio_tool_args + [ os.path.join(stagingdir, 
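# The sample_aspect_ratio handling above eval()s strings like "16:15". An
# equivalent eval-free parse (a sketch, not part of the diff):
def parse_pixel_aspect(value):
    try:
        num, den = str(value).split(":")
        return float(num) / float(den)
    except (ValueError, ZeroDivisionError):
        return None

assert parse_pixel_aspect("16:15") == 16 / 15
assert parse_pixel_aspect("garbage") is None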
temp_name), "--scanline", "-o", os.path.join(stagingdir, original_name) ] diff --git a/openpype/plugins/publish/extract_thumbnail.py b/openpype/plugins/publish/extract_thumbnail.py index b98ab64f56..b72a6d02ad 100644 --- a/openpype/plugins/publish/extract_thumbnail.py +++ b/openpype/plugins/publish/extract_thumbnail.py @@ -1,10 +1,11 @@ import os +import subprocess import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, run_subprocess, @@ -174,12 +175,11 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): def create_thumbnail_oiio(self, src_path, dst_path): self.log.info("Extracting thumbnail {}".format(dst_path)) - oiio_tool_path = get_oiio_tools_path() - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-a", src_path, "-o", dst_path - ] + ) self.log.debug("running: {}".format(" ".join(oiio_cmd))) try: run_subprocess(oiio_cmd, logger=self.log) @@ -194,27 +194,27 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): def create_thumbnail_ffmpeg(self, src_path, dst_path): self.log.info("outputting {}".format(dst_path)) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_path_args = get_ffmpeg_tool_args("ffmpeg") ffmpeg_args = self.ffmpeg_args or {} - jpeg_items = [] - jpeg_items.append(path_to_subprocess_arg(ffmpeg_path)) - # override file if already exists - jpeg_items.append("-y") + jpeg_items = [ + subprocess.list2cmdline(ffmpeg_path_args) + ] # flag for large file sizes max_int = 2147483647 - jpeg_items.append("-analyzeduration {}".format(max_int)) - jpeg_items.append("-probesize {}".format(max_int)) + jpeg_items.extend([ + "-y", + "-analyzeduration", str(max_int), + "-probesize", str(max_int), + ]) # use same input args like with mov jpeg_items.extend(ffmpeg_args.get("input") or []) # input file - jpeg_items.append("-i {}".format( - path_to_subprocess_arg(src_path) - )) + jpeg_items.extend(["-i", path_to_subprocess_arg(src_path)]) # output arguments from presets jpeg_items.extend(ffmpeg_args.get("output") or []) # we just want one frame from movie files - jpeg_items.append("-vframes 1") + jpeg_items.extend(["-vframes", "1"]) # output file jpeg_items.append(path_to_subprocess_arg(dst_path)) subprocess_command = " ".join(jpeg_items) diff --git a/openpype/plugins/publish/extract_thumbnail_from_source.py b/openpype/plugins/publish/extract_thumbnail_from_source.py index a9c95d6065..1b9f0a8bae 100644 --- a/openpype/plugins/publish/extract_thumbnail_from_source.py +++ b/openpype/plugins/publish/extract_thumbnail_from_source.py @@ -17,8 +17,8 @@ import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, run_subprocess, @@ -128,7 +128,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): if thumbnail_created: return full_output_path - self.log.warning("Thumbanil has not been created.") + self.log.warning("Thumbnail has not been created.") def _instance_has_thumbnail(self, instance): if "representations" not in instance.data: @@ -144,12 +144,12 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): def create_thumbnail_oiio(self, src_path, dst_path): self.log.info("outputting {}".format(dst_path)) - oiio_tool_path = get_oiio_tools_path() - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-a", src_path, + "--ch", "R,G,B", "-o", dst_path - ] + ) self.log.info("Running: {}".format(" 
".join(oiio_cmd))) try: run_subprocess(oiio_cmd, logger=self.log) @@ -162,18 +162,16 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): return False def create_thumbnail_ffmpeg(self, src_path, dst_path): - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - max_int = str(2147483647) - ffmpeg_cmd = [ - ffmpeg_path, + ffmpeg_cmd = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-analyzeduration", max_int, "-probesize", max_int, "-i", src_path, "-vframes", "1", dst_path - ] + ) self.log.info("Running: {}".format(" ".join(ffmpeg_cmd))) try: diff --git a/openpype/plugins/publish/extract_trim_video_audio.py b/openpype/plugins/publish/extract_trim_video_audio.py index b951136391..2907ae1839 100644 --- a/openpype/plugins/publish/extract_trim_video_audio.py +++ b/openpype/plugins/publish/extract_trim_video_audio.py @@ -4,7 +4,7 @@ from pprint import pformat import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -32,7 +32,7 @@ class ExtractTrimVideoAudio(publish.Extractor): instance.data["representations"] = list() # get ffmpet path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_tool_args = get_ffmpeg_tool_args("ffmpeg") # get staging dir staging_dir = self.staging_dir(instance) @@ -76,8 +76,7 @@ class ExtractTrimVideoAudio(publish.Extractor): if "trimming" not in fml ] - ffmpeg_args = [ - ffmpeg_path, + ffmpeg_args = ffmpeg_tool_args + [ "-ss", str(clip_start_h / fps), "-i", video_file_path, "-t", str(clip_dur_h / fps) diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index f392cf67f7..be07cffe72 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -2,9 +2,10 @@ import os import logging import sys import copy +import datetime + import clique import six - from bson.objectid import ObjectId import pyblish.api @@ -137,7 +138,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "mvUsdOverride", "simpleUnrealTexture", "online", - "uasset" + "uasset", + "blendScene" ] default_template_name = "publish" @@ -148,14 +150,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "project", "asset", "task", "subset", "version", "representation", "family", "hierarchy", "username", "user", "output" ] - skip_host_families = [] def process(self, instance): - if self._temp_skip_instance_by_settings(instance): - return - - # Mark instance as processed for legacy integrator - instance.data["processedWithNewIntegrator"] = True # Instance should be integrated on a farm if instance.data.get("farm"): @@ -201,39 +197,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin): # the try, except. file_transactions.finalize() - def _temp_skip_instance_by_settings(self, instance): - """Decide if instance will be processed with new or legacy integrator. - - This is temporary solution until we test all usecases with new (this) - integrator plugin. 
- """ - - host_name = instance.context.data["hostName"] - instance_family = instance.data["family"] - instance_families = set(instance.data.get("families") or []) - - skip = False - for item in self.skip_host_families: - if host_name not in item["host"]: - continue - - families = set(item["families"]) - if instance_family in families: - skip = True - break - - for family in instance_families: - if family in families: - skip = True - break - - if skip: - break - - if skip: - self.log.debug("Instance is marked to be skipped by settings.") - return skip - def filter_representations(self, instance): # Prepare repsentations that should be integrated repres = instance.data.get("representations") @@ -358,10 +321,16 @@ class IntegrateAsset(pyblish.api.InstancePlugin): # Get the accessible sites for Site Sync modules_by_name = instance.context.data["openPypeModules"] - sync_server_module = modules_by_name["sync_server"] - sites = sync_server_module.compute_resource_sync_sites( - project_name=instance.data["projectEntity"]["name"] - ) + sync_server_module = modules_by_name.get("sync_server") + if sync_server_module is None: + sites = [{ + "name": "studio", + "created_dt": datetime.datetime.now() + }] + else: + sites = sync_server_module.compute_resource_sync_sites( + project_name=instance.data["projectEntity"]["name"] + ) self.log.debug("Sync Server Sites: {}".format(sites)) # Compute the resource file infos once (files belonging to the diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py index b71207c24f..6c21664b78 100644 --- a/openpype/plugins/publish/integrate_hero_version.py +++ b/openpype/plugins/publish/integrate_hero_version.py @@ -6,6 +6,7 @@ import shutil import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_version_by_id, get_hero_version_by_subset_id, @@ -141,6 +142,12 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): )) return + if AYON_SERVER_ENABLED and src_version_entity["name"] == 0: + self.log.debug( + "Version 0 cannot have hero version. Skipping." + ) + return + all_copied_files = [] transfers = instance.data.get("transfers", list()) for _src, dst in transfers: @@ -195,11 +202,20 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): entity_id = None if old_version: entity_id = old_version["_id"] - new_hero_version = new_hero_version_doc( - src_version_entity["_id"], - src_version_entity["parent"], - entity_id=entity_id - ) + + if AYON_SERVER_ENABLED: + new_hero_version = new_hero_version_doc( + src_version_entity["parent"], + copy.deepcopy(src_version_entity["data"]), + src_version_entity["name"], + entity_id=entity_id + ) + else: + new_hero_version = new_hero_version_doc( + src_version_entity["_id"], + src_version_entity["parent"], + entity_id=entity_id + ) if old_version: self.log.debug("Replacing old hero version.") diff --git a/openpype/plugins/publish/integrate_inputlinks.py b/openpype/plugins/publish/integrate_inputlinks.py index 6964f2d938..3baa462a81 100644 --- a/openpype/plugins/publish/integrate_inputlinks.py +++ b/openpype/plugins/publish/integrate_inputlinks.py @@ -3,6 +3,7 @@ from collections import OrderedDict from bson.objectid import ObjectId import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import legacy_io @@ -34,6 +35,7 @@ class IntegrateInputLinks(pyblish.api.ContextPlugin): plugin. 
""" + workfile = None publishing = [] @@ -133,3 +135,7 @@ class IntegrateInputLinks(pyblish.api.ContextPlugin): {"_id": version_doc["_id"]}, {"$set": {"data.inputLinks": input_links}} ) + + +if AYON_SERVER_ENABLED: + del IntegrateInputLinks diff --git a/openpype/plugins/publish/integrate_inputlinks_ayon.py b/openpype/plugins/publish/integrate_inputlinks_ayon.py new file mode 100644 index 0000000000..180524cd08 --- /dev/null +++ b/openpype/plugins/publish/integrate_inputlinks_ayon.py @@ -0,0 +1,160 @@ +import collections + +import pyblish.api +from ayon_api import create_link, make_sure_link_type_exists + +from openpype import AYON_SERVER_ENABLED + + +class IntegrateInputLinksAYON(pyblish.api.ContextPlugin): + """Connecting version level dependency links""" + + order = pyblish.api.IntegratorOrder + 0.2 + label = "Connect Dependency InputLinks AYON" + + def process(self, context): + """Connect dependency links for all instances, globally + + Code steps: + - filter instances that integrated version + - have "versionEntity" entry in data + - separate workfile instance within filtered instances + - when workfile instance is available: + - link all `loadedVersions` as input of the workfile + - link workfile as input of all other integrated versions + - link version's inputs if it's instance have "inputVersions" entry + - + + inputVersions: + The "inputVersions" in instance.data should be a list of + version ids (str), which are the dependencies of the publishing + instance that should be extracted from working scene by the DCC + specific publish plugin. + """ + + workfile_instance, other_instances = self.split_instances(context) + + # Variable where links are stored in submethods + new_links_by_type = collections.defaultdict(list) + + self.create_workfile_links( + workfile_instance, other_instances, new_links_by_type) + + self.create_generative_links(other_instances, new_links_by_type) + + self.create_links_on_server(context, new_links_by_type) + + def split_instances(self, context): + workfile_instance = None + other_instances = [] + + for instance in context: + # Skip inactive instances + if not instance.data.get("publish", True): + continue + + version_doc = instance.data.get("versionEntity") + if not version_doc: + self.log.debug( + "Instance {} doesn't have version.".format(instance)) + continue + + family = instance.data.get("family") + if family == "workfile": + workfile_instance = instance + else: + other_instances.append(instance) + return workfile_instance, other_instances + + def add_link(self, new_links_by_type, link_type, input_id, output_id): + """Add dependency link data into temporary variable. + + Args: + new_links_by_type (dict[str, list[dict[str, Any]]]): Object where + output is stored. + link_type (str): Type of link, one of 'reference' or 'generative' + input_id (str): Input version id. + output_id (str): Output version id. 
+ """ + + new_links_by_type[link_type].append((input_id, output_id)) + + def create_workfile_links( + self, workfile_instance, other_instances, new_links_by_type + ): + if workfile_instance is None: + self.log.warn("No workfile in this publish session.") + return + + workfile_version_id = workfile_instance.data["versionEntity"]["_id"] + # link workfile to all publishing versions + for instance in other_instances: + self.add_link( + new_links_by_type, + "generative", + workfile_version_id, + instance.data["versionEntity"]["_id"], + ) + + loaded_versions = workfile_instance.context.get("loadedVersions") + if not loaded_versions: + return + + # link all loaded versions in scene into workfile + for version in loaded_versions: + self.add_link( + new_links_by_type, + "reference", + version["version"], + workfile_version_id, + ) + + def create_generative_links(self, other_instances, new_links_by_type): + for instance in other_instances: + input_versions = instance.data.get("inputVersions") + if not input_versions: + continue + + version_entity = instance.data["versionEntity"] + for input_version in input_versions: + self.add_link( + new_links_by_type, + "generative", + input_version, + version_entity["_id"], + ) + + def create_links_on_server(self, context, new_links): + """Create new links on server. + + Args: + dict[str, list[tuple[str, str]]]: Version links by link type. + """ + + if not new_links: + return + + project_name = context.data["projectName"] + + # Make sure link types are available on server + for link_type in new_links.keys(): + make_sure_link_type_exists( + project_name, link_type, "version", "version" + ) + + # Create link themselves + for link_type, items in new_links.items(): + for item in items: + input_id, output_id = item + create_link( + project_name, + link_type, + input_id, + "version", + output_id, + "version" + ) + + +if not AYON_SERVER_ENABLED: + del IntegrateInputLinksAYON diff --git a/openpype/plugins/publish/integrate_legacy.py b/openpype/plugins/publish/integrate_legacy.py deleted file mode 100644 index c238cca633..0000000000 --- a/openpype/plugins/publish/integrate_legacy.py +++ /dev/null @@ -1,1299 +0,0 @@ -import os -from os.path import getsize -import logging -import sys -import copy -import clique -import errno -import six -import re -import shutil -from collections import deque, defaultdict -from datetime import datetime - -from bson.objectid import ObjectId -from pymongo import DeleteOne, InsertOne -import pyblish.api - -from openpype.client import ( - get_asset_by_name, - get_subset_by_id, - get_subset_by_name, - get_version_by_id, - get_version_by_name, - get_representations, - get_archived_representations, -) -from openpype.lib import ( - prepare_template_data, - create_hard_link, - StringTemplate, - TemplateUnsolved, - source_hash, - filter_profiles, - get_local_site_id, -) -from openpype.pipeline import legacy_io -from openpype.pipeline.publish import get_publish_template_name - -# this is needed until speedcopy for linux is fixed -if sys.platform == "win32": - from speedcopy import copyfile -else: - from shutil import copyfile - -log = logging.getLogger(__name__) - - -class IntegrateAssetNew(pyblish.api.InstancePlugin): - """Resolve any dependency issues - - This plug-in resolves any paths which, if not updated might break - the published file. - - The order of families is important, when working with lookdev you want to - first publish the texture, update the texture paths in the nodes and then - publish the shading network. 
Same goes for file dependent assets. - - Requirements for instance to be correctly integrated - - instance.data['representations'] - must be a list and each member - must be a dictionary with following data: - 'files': list of filenames for sequence, string for single file. - Only the filename is allowed, without the folder path. - 'stagingDir': "path/to/folder/with/files" - 'name': representation name (usually the same as extension) - 'ext': file extension - optional data - "frameStart" - "frameEnd" - 'fps' - "data": additional metadata for each representation. - """ - - label = "Integrate Asset (legacy)" - # Make sure it happens after new integrator - order = pyblish.api.IntegratorOrder + 0.00001 - families = ["workfile", - "pointcache", - "pointcloud", - "proxyAbc", - "camera", - "animation", - "model", - "maxScene", - "mayaAscii", - "mayaScene", - "setdress", - "layout", - "ass", - "vdbcache", - "scene", - "vrayproxy", - "vrayscene_layer", - "render", - "prerender", - "imagesequence", - "review", - "rendersetup", - "rig", - "plate", - "look", - "audio", - "yetiRig", - "yeticache", - "nukenodes", - "gizmo", - "source", - "matchmove", - "image", - "assembly", - "fbx", - "gltf", - "textures", - "action", - "harmony.template", - "harmony.palette", - "editorial", - "background", - "camerarig", - "redshiftproxy", - "effect", - "xgen", - "hda", - "usd", - "staticMesh", - "skeletalMesh", - "mvLook", - "mvUsdComposition", - "mvUsdOverride", - "simpleUnrealTexture" - ] - exclude_families = ["render.farm"] - db_representation_context_keys = [ - "project", "asset", "task", "subset", "version", "representation", - "family", "hierarchy", "task", "username", "user" - ] - default_template_name = "publish" - - # suffix to denote temporary files, use without '.' - TMP_FILE_EXT = 'tmp' - - # file_url : file_size of all published and uploaded files - integrated_file_sizes = {} - - # Attributes set by settings - subset_grouping_profiles = None - - def process(self, instance): - if instance.data.get("processedWithNewIntegrator"): - self.log.debug( - "Instance was already processed with new integrator" - ) - return - - for ef in self.exclude_families: - if ( - instance.data["family"] == ef or - ef in instance.data["families"]): - self.log.debug("Excluded family '{}' in '{}' or {}".format( - ef, instance.data["family"], instance.data["families"])) - return - - # instance should be published on a farm - if instance.data.get("farm"): - return - - # Prepare repsentations that should be integrated - repres = instance.data.get("representations") - # Raise error if instance don't have any representations - if not repres: - raise ValueError( - "Instance {} has no files to transfer".format( - instance.data["family"] - ) - ) - - # Validate type of stored representations - if not isinstance(repres, (list, tuple)): - raise TypeError( - "Instance 'files' must be a list, got: {0} {1}".format( - str(type(repres)), str(repres) - ) - ) - - # Filter representations - filtered_repres = [] - for repre in repres: - if "delete" in repre.get("tags", []): - continue - filtered_repres.append(repre) - - # Skip instance if there are not representations to integrate - # all representations should not be integrated - if not filtered_repres: - self.log.warning(( - "Skipping, there are no representations" - " to integrate for instance {}" - ).format(instance.data["family"])) - return - - self.integrated_file_sizes = {} - try: - self.register(instance, filtered_repres) - self.log.info("Integrated Asset in to the database ...") - 
self.log.info("instance.data: {}".format(instance.data)) - self.handle_destination_files(self.integrated_file_sizes, - 'finalize') - except Exception: - # clean destination - self.log.critical("Error when registering", exc_info=True) - self.handle_destination_files(self.integrated_file_sizes, 'remove') - six.reraise(*sys.exc_info()) - - def register(self, instance, repres): - # Required environment variables - anatomy_data = instance.data["anatomyData"] - - legacy_io.install() - - context = instance.context - - project_entity = instance.data["projectEntity"] - project_name = project_entity["name"] - - context_asset_name = None - context_asset_doc = context.data.get("assetEntity") - if context_asset_doc: - context_asset_name = context_asset_doc["name"] - - asset_name = instance.data["asset"] - asset_entity = instance.data.get("assetEntity") - if not asset_entity or asset_entity["name"] != context_asset_name: - asset_entity = get_asset_by_name(project_name, asset_name) - assert asset_entity, ( - "No asset found by the name \"{0}\" in project \"{1}\"" - ).format(asset_name, project_entity["name"]) - - instance.data["assetEntity"] = asset_entity - - # update anatomy data with asset specific keys - # - name should already been set - hierarchy = "" - parents = asset_entity["data"]["parents"] - if parents: - hierarchy = "/".join(parents) - anatomy_data["hierarchy"] = hierarchy - - # Make sure task name in anatomy data is same as on instance.data - asset_tasks = ( - asset_entity.get("data", {}).get("tasks") - ) or {} - task_name = instance.data.get("task") - if task_name: - task_info = asset_tasks.get(task_name) or {} - task_type = task_info.get("type") - - project_task_types = project_entity["config"]["tasks"] - task_code = project_task_types.get(task_type, {}).get("short_name") - anatomy_data["task"] = { - "name": task_name, - "type": task_type, - "short": task_code - } - - elif "task" in anatomy_data: - # Just set 'task_name' variable to context task - task_name = anatomy_data["task"]["name"] - task_type = anatomy_data["task"]["type"] - - else: - task_name = None - task_type = None - - # Fill family in anatomy data - anatomy_data["family"] = instance.data.get("family") - - stagingdir = instance.data.get("stagingDir") - if not stagingdir: - self.log.debug(( - "{0} is missing reference to staging directory." - " Will try to get it from representation." - ).format(instance)) - - else: - self.log.debug( - "Establishing staging directory @ {0}".format(stagingdir) - ) - - subset = self.get_subset(project_name, asset_entity, instance) - instance.data["subsetEntity"] = subset - - version_number = instance.data["version"] - self.log.debug("Next version: v{}".format(version_number)) - - version_data = self.create_version_data(context, instance) - - version_data_instance = instance.data.get('versionData') - if version_data_instance: - version_data.update(version_data_instance) - - # TODO rename method from `create_version` to - # `prepare_version` or similar... 
- version = self.create_version( - subset=subset, - version_number=version_number, - data=version_data - ) - - self.log.debug("Creating version ...") - - new_repre_names_low = [ - _repre["name"].lower() - for _repre in repres - ] - - existing_version = get_version_by_name( - project_name, version_number, subset["_id"] - ) - - if existing_version is None: - version_id = legacy_io.insert_one(version).inserted_id - else: - # Check if instance have set `append` mode which cause that - # only replicated representations are set to archive - append_repres = instance.data.get("append", False) - - # Update version data - # TODO query by _id and - legacy_io.update_many({ - 'type': 'version', - 'parent': subset["_id"], - 'name': version_number - }, { - '$set': version - }) - version_id = existing_version['_id'] - - # Find representations of existing version and archive them - current_repres = list(get_representations( - project_name, version_ids=[version_id] - )) - bulk_writes = [] - for repre in current_repres: - if append_repres: - # archive only duplicated representations - if repre["name"].lower() not in new_repre_names_low: - continue - # Representation must change type, - # `_id` must be stored to other key and replaced with new - # - that is because new representations should have same ID - repre_id = repre["_id"] - bulk_writes.append(DeleteOne({"_id": repre_id})) - - repre["orig_id"] = repre_id - repre["_id"] = ObjectId() - repre["type"] = "archived_representation" - bulk_writes.append(InsertOne(repre)) - - # bulk updates - if bulk_writes: - legacy_io.database[project_name].bulk_write( - bulk_writes - ) - - version = get_version_by_id(project_name, version_id) - instance.data["versionEntity"] = version - - existing_repres = list(get_archived_representations( - project_name, - version_ids=[version_id] - )) - - instance.data['version'] = version['name'] - - intent_value = instance.context.data.get("intent") - if intent_value and isinstance(intent_value, dict): - intent_value = intent_value.get("value") - - if intent_value: - anatomy_data["intent"] = intent_value - - anatomy = instance.context.data['anatomy'] - - # Find the representations to transfer amongst the files - # Each should be a single representation (as such, a single extension) - representations = [] - destination_list = [] - - orig_transfers = [] - if 'transfers' not in instance.data: - instance.data['transfers'] = [] - else: - orig_transfers = list(instance.data['transfers']) - - family = self.main_family_from_instance(instance) - - template_name = get_publish_template_name( - project_name, - instance.context.data["hostName"], - family, - task_name=task_info.get("name"), - task_type=task_info.get("type"), - project_settings=instance.context.data["project_settings"], - logger=self.log - ) - - published_representations = {} - for idx, repre in enumerate(repres): - published_files = [] - - # create template data for Anatomy - template_data = copy.deepcopy(anatomy_data) - if intent_value is not None: - template_data["intent"] = intent_value - - resolution_width = repre.get("resolutionWidth") - resolution_height = repre.get("resolutionHeight") - fps = instance.data.get("fps") - - if resolution_width: - template_data["resolution_width"] = resolution_width - if resolution_width: - template_data["resolution_height"] = resolution_height - if resolution_width: - template_data["fps"] = fps - - if "originalBasename" in instance.data: - template_data.update({ - "originalBasename": instance.data.get("originalBasename") - }) - - files = 
repre['files'] - if repre.get('stagingDir'): - stagingdir = repre['stagingDir'] - - if repre.get("outputName"): - template_data["output"] = repre['outputName'] - - template_data["representation"] = repre["name"] - - ext = repre["ext"] - if ext.startswith("."): - self.log.warning(( - "Implementaion warning: <\"{}\">" - " Representation's extension stored under \"ext\" key " - " started with dot (\"{}\")." - ).format(repre["name"], ext)) - ext = ext[1:] - repre["ext"] = ext - template_data["ext"] = ext - - self.log.info(template_name) - template = os.path.normpath( - anatomy.templates[template_name]["path"]) - - sequence_repre = isinstance(files, list) - repre_context = None - if sequence_repre: - self.log.debug( - "files: {}".format(files)) - src_collections, remainder = clique.assemble(files) - self.log.debug( - "src_tail_collections: {}".format(str(src_collections))) - src_collection = src_collections[0] - - # Assert that each member has identical suffix - src_head = src_collection.format("{head}") - src_tail = src_collection.format("{tail}") - - # fix dst_padding - valid_files = [x for x in files if src_collection.match(x)] - padd_len = len( - valid_files[0].replace(src_head, "").replace(src_tail, "") - ) - src_padding_exp = "%0{}d".format(padd_len) - - test_dest_files = list() - for i in [1, 2]: - template_data["representation"] = repre['ext'] - if not repre.get("udim"): - template_data["frame"] = src_padding_exp % i - else: - template_data["udim"] = src_padding_exp % i - - template_obj = anatomy.templates_obj[template_name]["path"] - template_filled = template_obj.format_strict(template_data) - if repre_context is None: - repre_context = template_filled.used_values - test_dest_files.append( - os.path.normpath(template_filled) - ) - if not repre.get("udim"): - template_data["frame"] = repre_context["frame"] - else: - template_data["udim"] = repre_context["udim"] - - self.log.debug( - "test_dest_files: {}".format(str(test_dest_files))) - - dst_collections, remainder = clique.assemble(test_dest_files) - dst_collection = dst_collections[0] - dst_head = dst_collection.format("{head}") - dst_tail = dst_collection.format("{tail}") - - index_frame_start = None - - # TODO use frame padding from right template group - if repre.get("frameStart") is not None: - frame_start_padding = int( - anatomy.templates["render"].get( - "frame_padding", - anatomy.templates["render"].get("padding") - ) - ) - - index_frame_start = int(repre.get("frameStart")) - - # exception for slate workflow - if index_frame_start and "slate" in instance.data["families"]: - index_frame_start -= 1 - - dst_padding_exp = src_padding_exp - dst_start_frame = None - collection_start = list(src_collection.indexes)[0] - for i in src_collection.indexes: - # TODO 1.) do not count padding in each index iteration - # 2.) 
do not count dst_padding from src_padding before - # index_frame_start check - frame_number = i - collection_start - src_padding = src_padding_exp % i - - src_file_name = "{0}{1}{2}".format( - src_head, src_padding, src_tail) - - dst_padding = src_padding_exp % frame_number - - if index_frame_start is not None: - dst_padding_exp = "%0{}d".format(frame_start_padding) - dst_padding = dst_padding_exp % (index_frame_start + frame_number) # noqa: E501 - elif repre.get("udim"): - dst_padding = int(i) - - dst = "{0}{1}{2}".format( - dst_head, - dst_padding, - dst_tail - ) - - self.log.debug("destination: `{}`".format(dst)) - src = os.path.join(stagingdir, src_file_name) - - self.log.debug("source: {}".format(src)) - instance.data["transfers"].append([src, dst]) - - published_files.append(dst) - - # for adding first frame into db - if not dst_start_frame: - dst_start_frame = dst_padding - - # Store used frame value to template data - if repre.get("frame"): - template_data["frame"] = dst_start_frame - - dst = "{0}{1}{2}".format( - dst_head, - dst_start_frame, - dst_tail - ) - repre['published_path'] = dst - - else: - # Single file - # _______ - # | |\ - # | | - # | | - # | | - # |_______| - # - template_data.pop("frame", None) - fname = files - assert not os.path.isabs(fname), ( - "Given file name is a full path" - ) - - template_data["representation"] = repre['ext'] - # Store used frame value to template data - if repre.get("udim"): - template_data["udim"] = repre["udim"][0] - src = os.path.join(stagingdir, fname) - template_obj = anatomy.templates_obj[template_name]["path"] - template_filled = template_obj.format_strict(template_data) - repre_context = template_filled.used_values - dst = os.path.normpath(template_filled) - - instance.data["transfers"].append([src, dst]) - - published_files.append(dst) - repre['published_path'] = dst - self.log.debug("__ dst: {}".format(dst)) - - if not instance.data.get("publishDir"): - instance.data["publishDir"] = ( - anatomy.templates_obj[template_name]["folder"] - .format_strict(template_data) - ) - if repre.get("udim"): - repre_context["udim"] = repre.get("udim") # store list - - repre["publishedFiles"] = published_files - - for key in self.db_representation_context_keys: - value = template_data.get(key) - if not value: - continue - repre_context[key] = template_data[key] - - # Use previous representation's id if there are any - repre_id = None - repre_name_low = repre["name"].lower() - for _repre in existing_repres: - # NOTE should we check lowered names? - if repre_name_low == _repre["name"]: - repre_id = _repre["orig_id"] - break - - # Create new id if existing representations does not match - if repre_id is None: - repre_id = ObjectId() - - data = repre.get("data") or {} - data.update({'path': dst, 'template': template}) - representation = { - "_id": repre_id, - "schema": "openpype:representation-2.0", - "type": "representation", - "parent": version_id, - "name": repre['name'], - "data": data, - "dependencies": instance.data.get("dependencies", "").split(), - - # Imprint shortcut to context - # for performance reasons. 
- "context": repre_context - } - - if repre.get("outputName"): - representation["context"]["output"] = repre['outputName'] - - if sequence_repre and repre.get("frameStart") is not None: - representation['context']['frame'] = ( - dst_padding_exp % int(repre.get("frameStart")) - ) - - # any file that should be physically copied is expected in - # 'transfers' or 'hardlinks' - if instance.data.get('transfers', False) or \ - instance.data.get('hardlinks', False): - # could throw exception, will be caught in 'process' - # all integration to DB is being done together lower, - # so no rollback needed - self.log.debug("Integrating source files to destination ...") - self.integrated_file_sizes.update(self.integrate(instance)) - self.log.debug("Integrated files {}". - format(self.integrated_file_sizes)) - - # get 'files' info for representation and all attached resources - self.log.debug("Preparing files information ...") - representation["files"] = self.get_files_info( - instance, - self.integrated_file_sizes) - - self.log.debug("__ representation: {}".format(representation)) - destination_list.append(dst) - self.log.debug("__ destination_list: {}".format(destination_list)) - instance.data['destination_list'] = destination_list - representations.append(representation) - published_representations[repre_id] = { - "representation": representation, - "anatomy_data": template_data, - "published_files": published_files - } - self.log.debug("__ representations: {}".format(representations)) - # reset transfers for next representation - # instance.data['transfers'] is used as a global variable - # in current codebase - instance.data['transfers'] = list(orig_transfers) - - # Remove old representations if there are any (before insertion of new) - if existing_repres: - repre_ids_to_remove = [] - for repre in existing_repres: - repre_ids_to_remove.append(repre["_id"]) - legacy_io.delete_many({"_id": {"$in": repre_ids_to_remove}}) - - for rep in instance.data["representations"]: - self.log.debug("__ rep: {}".format(rep)) - - legacy_io.insert_many(representations) - instance.data["published_representations"] = ( - published_representations - ) - # self.log.debug("Representation: {}".format(representations)) - self.log.info("Registered {} items".format(len(representations))) - - def integrate(self, instance): - """ Move the files. - - Through `instance.data["transfers"]` - - Args: - instance: the instance to integrate - Returns: - integrated_file_sizes: dictionary of destination file url and - its size in bytes - """ - # store destination url and size for reporting and rollback - integrated_file_sizes = {} - transfers = list(instance.data.get("transfers", list())) - for src, dest in transfers: - if os.path.normpath(src) != os.path.normpath(dest): - dest = self.get_dest_temp_url(dest) - self.copy_file(src, dest) - # TODO needs to be updated during site implementation - integrated_file_sizes[dest] = os.path.getsize(dest) - - # Produce hardlinked copies - # Note: hardlink can only be produced between two files on the same - # server/disk and editing one of the two will edit both files at once. - # As such it is recommended to only make hardlinks between static files - # to ensure publishes remain safe and non-edited. - hardlinks = instance.data.get("hardlinks", list()) - for src, dest in hardlinks: - dest = self.get_dest_temp_url(dest) - self.log.debug("Hardlinking file ... 
{} -> {}".format(src, dest)) - if not os.path.exists(dest): - self.hardlink_file(src, dest) - - # TODO needs to be updated during site implementation - integrated_file_sizes[dest] = os.path.getsize(dest) - - return integrated_file_sizes - - def copy_file(self, src, dst): - """ Copy given source to destination - - Arguments: - src (str): the source file which needs to be copied - dst (str): the destination of the sourc file - Returns: - None - """ - src = os.path.normpath(src) - dst = os.path.normpath(dst) - self.log.debug("Copying file ... {} -> {}".format(src, dst)) - dirname = os.path.dirname(dst) - try: - os.makedirs(dirname) - except OSError as e: - if e.errno == errno.EEXIST: - pass - else: - self.log.critical("An unexpected error occurred.") - six.reraise(*sys.exc_info()) - - # copy file with speedcopy and check if size of files are simetrical - while True: - if not shutil._samefile(src, dst): - copyfile(src, dst) - else: - self.log.critical( - "files are the same {} to {}".format(src, dst) - ) - os.remove(dst) - try: - shutil.copyfile(src, dst) - self.log.debug("Copying files with shutil...") - except OSError as e: - self.log.critical("Cannot copy {} to {}".format(src, dst)) - self.log.critical(e) - six.reraise(*sys.exc_info()) - if str(getsize(src)) in str(getsize(dst)): - break - - def hardlink_file(self, src, dst): - dirname = os.path.dirname(dst) - - try: - os.makedirs(dirname) - except OSError as e: - if e.errno == errno.EEXIST: - pass - else: - self.log.critical("An unexpected error occurred.") - six.reraise(*sys.exc_info()) - - create_hard_link(src, dst) - - def get_subset(self, project_name, asset, instance): - subset_name = instance.data["subset"] - subset = get_subset_by_name(project_name, subset_name, asset["_id"]) - - if subset is None: - self.log.info("Subset '%s' not found, creating ..." % subset_name) - self.log.debug("families. %s" % instance.data.get('families')) - self.log.debug( - "families. %s" % type(instance.data.get('families'))) - - family = instance.data.get("family") - families = [] - if family: - families.append(family) - - for _family in (instance.data.get("families") or []): - if _family not in families: - families.append(_family) - - _id = legacy_io.insert_one({ - "schema": "openpype:subset-3.0", - "type": "subset", - "name": subset_name, - "data": { - "families": families - }, - "parent": asset["_id"] - }).inserted_id - - subset = get_subset_by_id(project_name, _id) - - # QUESTION Why is changing of group and updating it's - # families in 'get_subset'? - self._set_subset_group(instance, subset["_id"]) - - # Update families on subset. - families = [instance.data["family"]] - families.extend(instance.data.get("families", [])) - legacy_io.update_many( - {"type": "subset", "_id": ObjectId(subset["_id"])}, - {"$set": {"data.families": families}} - ) - - return subset - - def _set_subset_group(self, instance, subset_id): - """ - Mark subset as belonging to group in DB. - - Uses Settings > Global > Publish plugins > IntegrateAssetNew - - Args: - instance (dict): processed instance - subset_id (str): DB's subset _id - - """ - # Fist look into instance data - subset_group = instance.data.get("subsetGroup") - if not subset_group: - subset_group = self._get_subset_group(instance) - - if subset_group: - legacy_io.update_many({ - 'type': 'subset', - '_id': ObjectId(subset_id) - }, {'$set': {'data.subsetGroup': subset_group}}) - - def _get_subset_group(self, instance): - """Look into subset group profiles set by settings. 
- - Attribute 'subset_grouping_profiles' is defined by OpenPype settings. - """ - # Skip if 'subset_grouping_profiles' is empty - if not self.subset_grouping_profiles: - return None - - # QUESTION - # - is there a chance that task name is not filled in anatomy - # data? - # - should we use context task in that case? - anatomy_data = instance.data["anatomyData"] - task_name = None - task_type = None - if "task" in anatomy_data: - task_name = anatomy_data["task"]["name"] - task_type = anatomy_data["task"]["type"] - filtering_criteria = { - "families": instance.data["family"], - "hosts": instance.context.data["hostName"], - "tasks": task_name, - "task_types": task_type - } - matching_profile = filter_profiles( - self.subset_grouping_profiles, - filtering_criteria - ) - # Skip if there is not matchin profile - if not matching_profile: - return None - - filled_template = None - template = matching_profile["template"] - fill_pairs = ( - ("family", filtering_criteria["families"]), - ("task", filtering_criteria["tasks"]), - ("host", filtering_criteria["hosts"]), - ("subset", instance.data["subset"]), - ("renderlayer", instance.data.get("renderlayer")) - ) - fill_pairs = prepare_template_data(fill_pairs) - - try: - filled_template = StringTemplate.format_strict_template( - template, fill_pairs - ) - except (KeyError, TemplateUnsolved): - keys = [] - if fill_pairs: - keys = fill_pairs.keys() - - msg = "Subset grouping failed. " \ - "Only {} are expected in Settings".format(','.join(keys)) - self.log.warning(msg) - - return filled_template - - def create_version(self, subset, version_number, data=None): - """ Copy given source to destination - - Args: - subset (dict): the registered subset of the asset - version_number (int): the version number - - Returns: - dict: collection of data to create a version - """ - - return {"schema": "openpype:version-3.0", - "type": "version", - "parent": subset["_id"], - "name": version_number, - "data": data} - - def create_version_data(self, context, instance): - """Create the data collection for the version - - Args: - context: the current context - instance: the current instance being published - - Returns: - dict: the required information with instance.data as key - """ - - families = [] - current_families = instance.data.get("families", list()) - instance_family = instance.data.get("family", None) - - if instance_family is not None: - families.append(instance_family) - families += current_families - - # create relative source path for DB - source = instance.data.get("source") - if not source: - source = context.data["currentFile"] - anatomy = instance.context.data["anatomy"] - source = self.get_rootless_path(anatomy, source) - - self.log.debug("Source: {}".format(source)) - version_data = { - "families": families, - "time": context.data["time"], - "author": context.data["user"], - "source": source, - "comment": instance.data["comment"], - "machine": context.data.get("machine"), - "fps": context.data.get( - "fps", instance.data.get("fps") - ) - } - - intent_value = instance.context.data.get("intent") - if intent_value and isinstance(intent_value, dict): - intent_value = intent_value.get("value") - - if intent_value: - version_data["intent"] = intent_value - - # Include optional data if present in - optionals = [ - "frameStart", "frameEnd", "step", - "handleEnd", "handleStart", "sourceHashes" - ] - for key in optionals: - if key in instance.data: - version_data[key] = instance.data[key] - - return version_data - - def main_family_from_instance(self, instance): - 
"""Returns main family of entered instance.""" - family = instance.data.get("family") - if not family: - family = instance.data["families"][0] - return family - - def get_rootless_path(self, anatomy, path): - """ Returns, if possible, path without absolute portion from host - (eg. 'c:\' or '/opt/..') - This information is host dependent and shouldn't be captured. - Example: - 'c:/projects/MyProject1/Assets/publish...' > - '{root}/MyProject1/Assets...' - - Args: - anatomy: anatomy part from instance - path: path (absolute) - Returns: - path: modified path if possible, or unmodified path - + warning logged - """ - success, rootless_path = ( - anatomy.find_root_template_from_path(path) - ) - if success: - path = rootless_path - else: - self.log.warning(( - "Could not find root path for remapping \"{}\"." - " This may cause issues on farm." - ).format(path)) - return path - - def get_files_info(self, instance, integrated_file_sizes): - """ Prepare 'files' portion for attached resources and main asset. - Combining records from 'transfers' and 'hardlinks' parts from - instance. - All attached resources should be added, currently without - Context info. - - Arguments: - instance: the current instance being published - integrated_file_sizes: dictionary of destination path (absolute) - and its file size - Returns: - output_resources: array of dictionaries to be added to 'files' key - in representation - """ - resources = list(instance.data.get("transfers", [])) - resources.extend(list(instance.data.get("hardlinks", []))) - - self.log.debug("get_resource_files_info.resources:{}". - format(resources)) - - output_resources = [] - anatomy = instance.context.data["anatomy"] - for _src, dest in resources: - path = self.get_rootless_path(anatomy, dest) - dest = self.get_dest_temp_url(dest) - file_hash = source_hash(dest) - if self.TMP_FILE_EXT and \ - ',{}'.format(self.TMP_FILE_EXT) in file_hash: - file_hash = file_hash.replace(',{}'.format(self.TMP_FILE_EXT), - '') - - file_info = self.prepare_file_info(path, - integrated_file_sizes[dest], - file_hash, - instance=instance) - output_resources.append(file_info) - - return output_resources - - def get_dest_temp_url(self, dest): - """ Enhance destination path with TMP_FILE_EXT to denote temporary - file. 
- Temporary files will be renamed after successful registration - into DB and full copy to destination - - Arguments: - dest: destination url of published file (absolute) - Returns: - dest: destination path + '.TMP_FILE_EXT' - """ - if self.TMP_FILE_EXT and '.{}'.format(self.TMP_FILE_EXT) not in dest: - dest += '.{}'.format(self.TMP_FILE_EXT) - return dest - - def prepare_file_info(self, path, size=None, file_hash=None, - sites=None, instance=None): - """ Prepare information for one file (asset or resource) - - Arguments: - path: destination url of published file (rootless) - size(optional): size of file in bytes - file_hash(optional): hash of file for synchronization validation - sites(optional): array of published locations, - [ {'name':'studio', 'created_dt':date} by default - keys expected ['studio', 'site1', 'gdrive1'] - instance(dict, optional): to get collected settings - Returns: - rec: dictionary with filled info - """ - local_site = 'studio' # default - remote_site = None - always_accesible = [] - sync_project_presets = None - - rec = { - "_id": ObjectId(), - "path": path - } - if size: - rec["size"] = size - - if file_hash: - rec["hash"] = file_hash - - if sites: - rec["sites"] = sites - else: - system_sync_server_presets = ( - instance.context.data["system_settings"] - ["modules"] - ["sync_server"]) - log.debug("system_sett:: {}".format(system_sync_server_presets)) - - if system_sync_server_presets["enabled"]: - sync_project_presets = ( - instance.context.data["project_settings"] - ["global"] - ["sync_server"]) - - if sync_project_presets and sync_project_presets["enabled"]: - local_site, remote_site = self._get_sites(sync_project_presets) - - always_accesible = sync_project_presets["config"]. \ - get("always_accessible_on", []) - - already_attached_sites = {} - meta = {"name": local_site, "created_dt": datetime.now()} - rec["sites"] = [meta] - already_attached_sites[meta["name"]] = meta["created_dt"] - - if sync_project_presets and sync_project_presets["enabled"]: - if remote_site and \ - remote_site not in already_attached_sites.keys(): - # add remote - meta = {"name": remote_site.strip()} - rec["sites"].append(meta) - already_attached_sites[meta["name"]] = None - - # add alternative sites - rec, already_attached_sites = self._add_alternative_sites( - system_sync_server_presets, already_attached_sites, rec) - - # add skeleton for site where it should be always synced to - for always_on_site in set(always_accesible): - if always_on_site not in already_attached_sites.keys(): - meta = {"name": always_on_site.strip()} - rec["sites"].append(meta) - already_attached_sites[meta["name"]] = None - - log.debug("final sites:: {}".format(rec["sites"])) - - return rec - - def _get_sites(self, sync_project_presets): - """Returns tuple (local_site, remote_site)""" - local_site_id = get_local_site_id() - local_site = sync_project_presets["config"]. \ - get("active_site", "studio").strip() - - if local_site == 'local': - local_site = local_site_id - - remote_site = sync_project_presets["config"].get("remote_site") - - if remote_site == 'local': - remote_site = local_site_id - - return local_site, remote_site - - def _add_alternative_sites(self, - system_sync_server_presets, - already_attached_sites, - rec): - """Loop through all configured sites and add alternatives. 
- - See SyncServerModule.handle_alternate_site - """ - conf_sites = system_sync_server_presets.get("sites", {}) - - alt_site_pairs = self._get_alt_site_pairs(conf_sites) - - already_attached_keys = list(already_attached_sites.keys()) - for added_site in already_attached_keys: - real_created = already_attached_sites[added_site] - for alt_site in alt_site_pairs.get(added_site, []): - if alt_site in already_attached_sites.keys(): - continue - meta = {"name": alt_site} - # alt site inherits state of 'created_dt' - if real_created: - meta["created_dt"] = real_created - rec["sites"].append(meta) - already_attached_sites[meta["name"]] = real_created - - return rec, already_attached_sites - - def _get_alt_site_pairs(self, conf_sites): - """Returns dict of site and its alternative sites. - - If `site` has alternative site, it means that alt_site has 'site' as - alternative site - Args: - conf_sites (dict) - Returns: - (dict): {'site': [alternative sites]...} - """ - alt_site_pairs = defaultdict(list) - for site_name, site_info in conf_sites.items(): - alt_sites = set(site_info.get("alternative_sites", [])) - alt_site_pairs[site_name].extend(alt_sites) - - for alt_site in alt_sites: - alt_site_pairs[alt_site].append(site_name) - - for site_name, alt_sites in alt_site_pairs.items(): - sites_queue = deque(alt_sites) - while sites_queue: - alt_site = sites_queue.popleft() - - # safety against wrong config - # {"SFTP": {"alternative_site": "SFTP"} - if alt_site == site_name or alt_site not in alt_site_pairs: - continue - - for alt_alt_site in alt_site_pairs[alt_site]: - if ( - alt_alt_site != site_name - and alt_alt_site not in alt_sites - ): - alt_sites.append(alt_alt_site) - sites_queue.append(alt_alt_site) - - return alt_site_pairs - - def handle_destination_files(self, integrated_file_sizes, mode): - """ Clean destination files - Called when error happened during integrating to DB or to disk - OR called to rename uploaded files from temporary name to final to - highlight publishing in progress/broken - Used to clean unwanted files - - Arguments: - integrated_file_sizes: dictionary, file urls as keys, size as value - mode: 'remove' - clean files, - 'finalize' - rename files, - remove TMP_FILE_EXT suffix denoting temp file - """ - if integrated_file_sizes: - for file_url, _file_size in integrated_file_sizes.items(): - if not os.path.exists(file_url): - self.log.debug( - "File {} was not found.".format(file_url) - ) - continue - - try: - if mode == 'remove': - self.log.debug("Removing file {}".format(file_url)) - os.remove(file_url) - if mode == 'finalize': - new_name = re.sub( - r'\.{}$'.format(self.TMP_FILE_EXT), - '', - file_url - ) - - if os.path.exists(new_name): - self.log.debug( - "Overwriting file {} to {}".format( - file_url, new_name - ) - ) - shutil.copy(file_url, new_name) - os.remove(file_url) - else: - self.log.debug( - "Renaming file {} to {}".format( - file_url, new_name - ) - ) - os.rename(file_url, new_name) - except OSError: - self.log.error("Cannot {} file {}".format(mode, file_url), - exc_info=True) - six.reraise(*sys.exc_info()) diff --git a/openpype/plugins/publish/integrate_thumbnail.py b/openpype/plugins/publish/integrate_thumbnail.py index 2e87d8fc86..9929d8f754 100644 --- a/openpype/plugins/publish/integrate_thumbnail.py +++ b/openpype/plugins/publish/integrate_thumbnail.py @@ -18,6 +18,7 @@ import collections import six import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.client import get_versions from openpype.client.operations import 
OperationsSession, new_thumbnail_doc from openpype.pipeline.publish import get_publish_instance_label @@ -39,6 +40,10 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): ] def process(self, context): + if AYON_SERVER_ENABLED: + self.log.info("AYON is enabled. Skipping v3 thumbnail integration") + return + # Filter instances which can be used for integration filtered_instance_items = self._prepare_instances(context) if not filtered_instance_items: diff --git a/openpype/plugins/publish/integrate_thumbnail_ayon.py b/openpype/plugins/publish/integrate_thumbnail_ayon.py new file mode 100644 index 0000000000..ba5664c69f --- /dev/null +++ b/openpype/plugins/publish/integrate_thumbnail_ayon.py @@ -0,0 +1,207 @@ +""" Integrate Thumbnails for Openpype use in Loaders. + + This thumbnail is different from 'thumbnail' representation which could + be uploaded to Ftrack, or used as any other representation in Loaders to + pull into a scene. + + This one is used only as image describing content of published item and + shows up only in Loader in right column section. +""" + +import os +import collections + +import pyblish.api + +from openpype import AYON_SERVER_ENABLED +from openpype.client import get_versions +from openpype.client.operations import OperationsSession + +InstanceFilterResult = collections.namedtuple( + "InstanceFilterResult", + ["instance", "thumbnail_path", "version_id"] +) + + +class IntegrateThumbnailsAYON(pyblish.api.ContextPlugin): + """Integrate Thumbnails for Openpype use in Loaders.""" + + label = "Integrate Thumbnails to AYON" + order = pyblish.api.IntegratorOrder + 0.01 + + required_context_keys = [ + "project", "asset", "task", "subset", "version" + ] + + def process(self, context): + if not AYON_SERVER_ENABLED: + self.log.info("AYON is not enabled. Skipping") + return + + # Filter instances which can be used for integration + filtered_instance_items = self._prepare_instances(context) + if not filtered_instance_items: + self.log.info( + "All instances were filtered. Thumbnail integration skipped." + ) + return + + project_name = context.data["projectName"] + + # Collect version ids from all filtered instance + version_ids = { + instance_items.version_id + for instance_items in filtered_instance_items + } + # Query versions + version_docs = get_versions( + project_name, + version_ids=version_ids, + hero=True, + fields=["_id", "type", "name"] + ) + # Store version by their id (converted to string) + version_docs_by_str_id = { + str(version_doc["_id"]): version_doc + for version_doc in version_docs + } + self._integrate_thumbnails( + filtered_instance_items, + version_docs_by_str_id, + project_name + ) + + def _prepare_instances(self, context): + context_thumbnail_path = context.get("thumbnailPath") + valid_context_thumbnail = bool( + context_thumbnail_path + and os.path.exists(context_thumbnail_path) + ) + + filtered_instances = [] + for instance in context: + instance_label = self._get_instance_label(instance) + # Skip instances without published representations + # - there is no place where to put the thumbnail + published_repres = instance.data.get("published_representations") + if not published_repres: + self.log.debug(( + "There are no published representations" + " on the instance {}." + ).format(instance_label)) + continue + + # Find thumbnail path on instance + thumbnail_path = self._get_instance_thumbnail_path( + published_repres) + if thumbnail_path: + self.log.debug(( + "Found thumbnail path for instance \"{}\"." 
+ " Thumbnail path: {}" + ).format(instance_label, thumbnail_path)) + + elif valid_context_thumbnail: + # Use context thumbnail path if is available + thumbnail_path = context_thumbnail_path + self.log.debug(( + "Using context thumbnail path for instance \"{}\"." + " Thumbnail path: {}" + ).format(instance_label, thumbnail_path)) + + # Skip instance if thumbnail path is not available for it + if not thumbnail_path: + self.log.info(( + "Skipping thumbnail integration for instance \"{}\"." + " Instance and context" + " thumbnail paths are not available." + ).format(instance_label)) + continue + + version_id = str(self._get_version_id(published_repres)) + filtered_instances.append( + InstanceFilterResult(instance, thumbnail_path, version_id) + ) + return filtered_instances + + def _get_version_id(self, published_representations): + for repre_info in published_representations.values(): + return repre_info["representation"]["parent"] + + def _get_instance_thumbnail_path(self, published_representations): + thumb_repre_doc = None + for repre_info in published_representations.values(): + repre_doc = repre_info["representation"] + if repre_doc["name"].lower() == "thumbnail": + thumb_repre_doc = repre_doc + break + + if thumb_repre_doc is None: + self.log.debug( + "There is not representation with name \"thumbnail\"" + ) + return None + + path = thumb_repre_doc["data"]["path"] + if not os.path.exists(path): + self.log.warning( + "Thumbnail file cannot be found. Path: {}".format(path) + ) + return None + return os.path.normpath(path) + + def _integrate_thumbnails( + self, + filtered_instance_items, + version_docs_by_str_id, + project_name + ): + from openpype.client.server.operations import create_thumbnail + + op_session = OperationsSession() + + for instance_item in filtered_instance_items: + instance, thumbnail_path, version_id = instance_item + instance_label = self._get_instance_label(instance) + version_doc = version_docs_by_str_id.get(version_id) + if not version_doc: + self.log.warning(( + "Version entity for instance \"{}\" was not found." + ).format(instance_label)) + continue + + thumbnail_id = create_thumbnail(project_name, thumbnail_path) + + # Set thumbnail id for version + op_session.update_entity( + project_name, + version_doc["type"], + version_doc["_id"], + {"data.thumbnail_id": thumbnail_id} + ) + if version_doc["type"] == "hero_version": + version_name = "Hero" + else: + version_name = version_doc["name"] + self.log.debug("Setting thumbnail for version \"{}\" <{}>".format( + version_name, version_id + )) + + asset_entity = instance.data["assetEntity"] + op_session.update_entity( + project_name, + asset_entity["type"], + asset_entity["_id"], + {"data.thumbnail_id": thumbnail_id} + ) + self.log.debug("Setting thumbnail for asset \"{}\" <{}>".format( + asset_entity["name"], version_id + )) + + op_session.commit() + + def _get_instance_label(self, instance): + return ( + instance.data.get("label") + or instance.data.get("name") + or "N/A" + ) diff --git a/openpype/plugins/publish/integrate_version_attrs.py b/openpype/plugins/publish/integrate_version_attrs.py new file mode 100644 index 0000000000..ed179ae319 --- /dev/null +++ b/openpype/plugins/publish/integrate_version_attrs.py @@ -0,0 +1,93 @@ +import pyblish.api +import ayon_api + +from openpype import AYON_SERVER_ENABLED +from openpype.client.operations import OperationsSession + + +class IntegrateVersionAttributes(pyblish.api.ContextPlugin): + """Integrate version attributes from predefined key. 
+ + Any integration after 'IntegrateAsset' can fill 'versionAttributes' with + attribute key & value to be updated on created version. + + The integration must make sure the attribute is available for the version + entity otherwise an error would be raised. + + Example of 'versionAttributes': + { + "ftrack_id": "0123456789-101112-131415", + "syncsketch_id": "987654321-012345-678910" + } + """ + + label = "Integrate Version Attributes" + order = pyblish.api.IntegratorOrder + 0.5 + + def process(self, context): + available_attributes = ayon_api.get_attributes_for_type("version") + skipped_attributes = set() + project_name = context.data["projectName"] + op_session = OperationsSession() + for instance in context: + label = self.get_instance_label(instance) + version_entity = instance.data.get("versionEntity") + if not version_entity: + continue + attributes = instance.data.get("versionAttributes") + if not attributes: + self.log.debug(( + "Skipping instance {} because it does not specify" + " version attributes to set." + ).format(label)) + continue + + filtered_attributes = {} + for attr, value in attributes.items(): + if attr not in available_attributes: + skipped_attributes.add(attr) + else: + filtered_attributes[attr] = value + + if not filtered_attributes: + self.log.debug(( + "Skipping instance {} because all version attributes were" + " filtered out." + ).format(label)) + continue + + self.log.debug("Updating attributes on version {} to {}".format( + version_entity["_id"], str(filtered_attributes) + )) + op_session.update_entity( + project_name, + "version", + version_entity["_id"], + {"attrib": filtered_attributes} + ) + + if skipped_attributes: + self.log.warning(( + "Skipped version attributes integration because they're" + " not available on the server: {}" + ).format(str(skipped_attributes))) + + if len(op_session): + op_session.commit() + self.log.info("Updated version attributes") + else: + self.log.debug("There are no version attributes to update") + + @staticmethod + def get_instance_label(instance): + return ( + instance.data.get("label") + or instance.data.get("name") + or instance.data.get("subset") + or str(instance) + ) + + +# Discover the plugin only in AYON mode +if not AYON_SERVER_ENABLED: + del IntegrateVersionAttributes diff --git a/openpype/pype_commands.py b/openpype/pype_commands.py index 56a0fe60cd..7f1c3b01e2 100644 --- a/openpype/pype_commands.py +++ b/openpype/pype_commands.py @@ -88,9 +88,15 @@ class PypeCommands: """ from openpype.lib import Logger - from openpype.lib.applications import get_app_environments_for_context + from openpype.lib.applications import ( + get_app_environments_for_context, + LaunchTypes, + ) from openpype.modules import ModulesManager - from openpype.pipeline import install_openpype_plugins + from openpype.pipeline import ( + install_openpype_plugins, + get_global_context, + ) from openpype.tools.utils.host_tools import show_publish from openpype.tools.utils.lib import qt_app_context @@ -112,12 +118,15 @@ class PypeCommands: if not any(paths): raise RuntimeError("No publish paths specified") - if os.getenv("AVALON_APP_NAME"): + app_full_name = os.getenv("AVALON_APP_NAME") + if app_full_name: + context = get_global_context() env = get_app_environments_for_context( - os.environ["AVALON_PROJECT"], - os.environ["AVALON_ASSET"], - os.environ["AVALON_TASK"], - os.environ["AVALON_APP_NAME"] + context["project_name"], + context["asset_name"], + context["task_name"], + app_full_name, + launch_type=LaunchTypes.farm_publish, ) 
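
Stepping back to the new `IntegrateVersionAttributes` plugin above: it filters the requested attributes against the server schema and batches all updates through an `OperationsSession`. A minimal sketch of that flow, with hypothetical ids and attribute values:

```python
import ayon_api

from openpype.client.operations import OperationsSession

# Hypothetical project, version id and requested attributes.
project_name = "demo_project"
version_id = "a1b2c3d4e5f6a7b8c9d0e1f2"
requested = {
    "ftrack_id": "0123456789-101112-131415",
    "not_on_server": "dropped",
}

# Only attributes the server knows for the 'version' entity type can be
# written; anything else is filtered out, as the plugin above does.
available = ayon_api.get_attributes_for_type("version")
filtered = {
    key: value
    for key, value in requested.items()
    if key in available
}

op_session = OperationsSession()
op_session.update_entity(
    project_name, "version", version_id, {"attrib": filtered}
)

# Operations are only queued; nothing is sent until commit().
if len(op_session):
    op_session.commit()
```
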
os.environ.update(env) @@ -156,74 +165,6 @@ class PypeCommands: log.info("Publish finished.") - @staticmethod - def remotepublishfromapp(project_name, batch_path, host_name, - user_email, targets=None): - """Opens installed variant of 'host' and run remote publish there. - - Eventually should be yanked out to Webpublisher cli. - - Currently implemented and tested for Photoshop where customer - wants to process uploaded .psd file and publish collected layers - from there. Triggered by Webpublisher. - - Checks if no other batches are running (status =='in_progress). If - so, it sleeps for SLEEP (this is separate process), - waits for WAIT_FOR seconds altogether. - - Requires installed host application on the machine. - - Runs publish process as user would, in automatic fashion. - - Args: - project_name (str): project to publish (only single context is - expected per call of remotepublish - batch_path (str): Path batch folder. Contains subfolders with - resources (workfile, another subfolder 'renders' etc.) - host_name (str): 'photoshop' - user_email (string): email address for webpublisher - used to - find Ftrack user with same email - targets (list): Pyblish targets - (to choose validator for example) - """ - - from openpype.hosts.webpublisher.publish_functions import ( - cli_publish_from_app - ) - - cli_publish_from_app( - project_name, batch_path, host_name, user_email, targets - ) - - @staticmethod - def remotepublish(project, batch_path, user_email, targets=None): - """Start headless publishing. - - Used to publish rendered assets, workfiles etc via Webpublisher. - Eventually should be yanked out to Webpublisher cli. - - Publish use json from passed paths argument. - - Args: - project (str): project to publish (only single context is expected - per call of remotepublish - batch_path (str): Path batch folder. Contains subfolders with - resources (workfile, another subfolder 'renders' etc.) - user_email (string): email address for webpublisher - used to - find Ftrack user with same email - targets (list): Pyblish targets - (to choose validator for example) - - Raises: - RuntimeError: When there is no path to process. - """ - - from openpype.hosts.webpublisher.publish_functions import ( - cli_publish - ) - - cli_publish(project, batch_path, user_email, targets) - @staticmethod def extractenvironments(output_json_path, project, asset, task, app, env_group): @@ -232,11 +173,19 @@ class PypeCommands: Called by Deadline plugin to propagate environment into render jobs. 
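+
+        The resolved environment is written as JSON to 'output_json_path' so
+        the farm job can re-create it before the render process starts.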
""" - from openpype.lib.applications import get_app_environments_for_context + from openpype.lib.applications import ( + get_app_environments_for_context, + LaunchTypes, + ) if all((project, asset, task, app)): env = get_app_environments_for_context( - project, asset, task, app, env_group + project, + asset, + task, + app, + env_group=env_group, + launch_type=LaunchTypes.farm_render, ) else: env = os.environ.copy() @@ -260,12 +209,6 @@ class PypeCommands: main(output_path, project_name, asset_name, strict) - def texture_copy(self, project, asset, path): - pass - - def run_application(self, app, project, asset, task, tools, arguments): - pass - def validate_jsons(self): pass @@ -291,7 +234,14 @@ class PypeCommands: folder = "../tests" # disable warnings and show captured stdout even if success - args = ["--disable-pytest-warnings", "-rP", folder] + args = [ + "--disable-pytest-warnings", + "--capture=sys", + "--print", + "-W ignore::DeprecationWarning", + "-rP", + folder + ] if mark: args.extend(["-m", mark]) @@ -318,34 +268,6 @@ class PypeCommands: import pytest pytest.main(args) - def syncserver(self, active_site): - """Start running sync_server in background. - - This functionality is available in directly in module cli commands. - `~/openpype_console module sync_server syncservice` - """ - - os.environ["OPENPYPE_LOCAL_ID"] = active_site - - def signal_handler(sig, frame): - print("You pressed Ctrl+C. Process ended.") - sync_server_module.server_exit() - sys.exit(0) - - signal.signal(signal.SIGINT, signal_handler) - signal.signal(signal.SIGTERM, signal_handler) - - from openpype.modules import ModulesManager - - manager = ModulesManager() - sync_server_module = manager.modules_by_name["sync_server"] - - sync_server_module.server_init() - sync_server_module.server_start() - - while True: - time.sleep(1.0) - def repack_version(self, directory): """Repacking OpenPype version.""" from openpype.tools.repack_version import VersionRepacker diff --git a/openpype/resources/__init__.py b/openpype/resources/__init__.py index 0d7778e546..b8671f517a 100644 --- a/openpype/resources/__init__.py +++ b/openpype/resources/__init__.py @@ -1,4 +1,5 @@ import os +from openpype import AYON_SERVER_ENABLED from openpype.lib.openpype_version import is_running_staging RESOURCES_DIR = os.path.dirname(os.path.abspath(__file__)) @@ -40,11 +41,17 @@ def get_liberation_font_path(bold=False, italic=False): def get_openpype_production_icon_filepath(): - return get_resource("icons", "openpype_icon.png") + filename = "openpype_icon.png" + if AYON_SERVER_ENABLED: + filename = "AYON_icon.png" + return get_resource("icons", filename) def get_openpype_staging_icon_filepath(): - return get_resource("icons", "openpype_icon_staging.png") + filename = "openpype_icon_staging.png" + if AYON_SERVER_ENABLED: + filename = "AYON_icon_staging.png" + return get_resource("icons", filename) def get_openpype_icon_filepath(staging=None): @@ -60,7 +67,12 @@ def get_openpype_splash_filepath(staging=None): if staging is None: staging = is_running_staging() - if staging: + if AYON_SERVER_ENABLED: + if staging: + splash_file_name = "AYON_splash_staging.png" + else: + splash_file_name = "AYON_splash.png" + elif staging: splash_file_name = "openpype_splash_staging.png" else: splash_file_name = "openpype_splash.png" diff --git a/openpype/resources/icons/AYON_icon.png b/openpype/resources/icons/AYON_icon.png new file mode 100644 index 0000000000..ed13aeea52 Binary files /dev/null and b/openpype/resources/icons/AYON_icon.png differ diff --git 
a/openpype/resources/icons/AYON_icon_staging.png b/openpype/resources/icons/AYON_icon_staging.png new file mode 100644 index 0000000000..9da5b0488e Binary files /dev/null and b/openpype/resources/icons/AYON_icon_staging.png differ diff --git a/openpype/resources/icons/AYON_splash.png b/openpype/resources/icons/AYON_splash.png new file mode 100644 index 0000000000..734aefb740 Binary files /dev/null and b/openpype/resources/icons/AYON_splash.png differ diff --git a/openpype/resources/icons/AYON_splash_staging.png b/openpype/resources/icons/AYON_splash_staging.png new file mode 100644 index 0000000000..ab2537e8a8 Binary files /dev/null and b/openpype/resources/icons/AYON_splash_staging.png differ diff --git a/openpype/scripts/fusion_switch_shot.py b/openpype/scripts/fusion_switch_shot.py index fc22f060a2..1cc728226f 100644 --- a/openpype/scripts/fusion_switch_shot.py +++ b/openpype/scripts/fusion_switch_shot.py @@ -15,9 +15,11 @@ from openpype.pipeline import ( install_host, registered_host, legacy_io, + get_current_project_name, ) from openpype.pipeline.context_tools import get_workdir_from_session +from openpype.pipeline.version_start import get_versioning_start log = logging.getLogger("Update Slap Comp") @@ -25,9 +27,6 @@ log = logging.getLogger("Update Slap Comp") def _format_version_folder(folder): """Format a version folder based on the filepath - Assumption here is made that, if the path does not exists the folder - will be "v001" - Args: folder: file path to a folder @@ -35,9 +34,13 @@ def _format_version_folder(folder): str: new version folder name """ - new_version = 1 + new_version = get_versioning_start( + get_current_project_name(), + "fusion", + family="workfile" + ) if os.path.isdir(folder): - re_version = re.compile("v\d+$") + re_version = re.compile(r"v\d+$") versions = [i for i in os.listdir(folder) if os.path.isdir(i) and re_version.match(i)] if versions: @@ -130,7 +133,7 @@ def update_frame_range(comp, representations): """ version_ids = [r["parent"] for r in representations] - project_name = legacy_io.active_project() + project_name = get_current_project_name() versions = list(get_versions(project_name, version_ids=version_ids)) start = min(v["data"]["frameStart"] for v in versions) @@ -161,7 +164,7 @@ def switch(asset_name, filepath=None, new=True): # Assert asset name exists # It is better to do this here then to wait till switch_shot does it - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset = get_asset_by_name(project_name, asset_name) assert asset, "Could not find '%s' in the database" % asset_name diff --git a/openpype/scripts/otio_burnin.py b/openpype/scripts/otio_burnin.py index 085b62501c..189feaee3a 100644 --- a/openpype/scripts/otio_burnin.py +++ b/openpype/scripts/otio_burnin.py @@ -8,21 +8,15 @@ from string import Formatter import opentimelineio_contrib.adapters.ffmpeg_burnins as ffmpeg_burnins from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffmpeg_codec_args, get_ffmpeg_format_args, convert_ffprobe_fps_value, - convert_ffprobe_fps_to_float, ) - -ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") -ffprobe_path = get_ffmpeg_tool_path("ffprobe") - - FFMPEG = ( - '"{}"%(input_args)s -i "%(input)s" %(filters)s %(args)s%(output)s' -).format(ffmpeg_path) + '{}%(input_args)s -i "%(input)s" %(filters)s %(args)s%(output)s' +).format(subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg"))) DRAWTEXT = ( "drawtext@'%(label)s'=fontfile='%(font)s':text=\\'%(text)s\\':" @@ -46,14 +40,14 @@ def 
_get_ffprobe_data(source):
     :param str source: source media file
     :rtype: [{}, ...]
     """
-    command = [
-        ffprobe_path,
+    command = get_ffmpeg_tool_args(
+        "ffprobe",
         "-v", "quiet",
         "-print_format", "json",
         "-show_format",
         "-show_streams",
         source
-    ]
+    )
     kwargs = {
         "stdout": subprocess.PIPE,
     }
diff --git a/openpype/scripts/remote_publish.py b/openpype/scripts/remote_publish.py
index 37df35e36c..d362f7abdc 100644
--- a/openpype/scripts/remote_publish.py
+++ b/openpype/scripts/remote_publish.py
@@ -9,4 +9,4 @@ except ImportError as exc:
 if __name__ == "__main__":
     # Perform remote publish with thorough error checking
     log = Logger.get_logger(__name__)
-    remote_publish(log, raise_error=True)
+    remote_publish(log)
diff --git a/openpype/settings/ayon_settings.py b/openpype/settings/ayon_settings.py
new file mode 100644
index 0000000000..50abfe4839
--- /dev/null
+++ b/openpype/settings/ayon_settings.py
@@ -0,0 +1,1400 @@
+"""Helper functionality to convert AYON settings to OpenPype v3 settings.
+
+The settings are converted so that v3 code can be used with AYON settings.
+Once the code of an addon is converted to a full AYON addon which expects
+AYON settings, the conversion function can be removed.
+
+The conversion is hardcoded -> there is no other way to achieve the result.
+
+Main entrypoints are functions:
+- convert_project_settings - convert settings to project settings
+- convert_system_settings - convert settings to system settings
+# Both getters cache values
+- get_ayon_project_settings - replacement for 'get_project_settings'
+- get_ayon_system_settings - replacement for 'get_system_settings'
+"""
+import os
+import collections
+import json
+import copy
+import time
+
+import six
+import ayon_api
+
+
+def _convert_color(color_value):
+    if isinstance(color_value, six.string_types):
+        color_value = color_value.lstrip("#")
+        color_value_len = len(color_value)
+        _color_value = []
+        for idx in range(color_value_len // 2):
+            _color_value.append(
+                int(color_value[idx * 2:(idx * 2) + 2], 16)
+            )
+        for _ in range(4 - len(_color_value)):
+            _color_value.append(255)
+        return _color_value
+
+    if isinstance(color_value, list):
+        # WARNING R,G,B can be 'int' or 'float'
+        # - the 'float' variant ranges from 0 to 1
+        if len(color_value) == 3:
+            # Add alpha
+            color_value.append(255)
+        else:
+            # Convert float alpha to int
+            alpha = int(color_value[3] * 255)
+            if alpha > 255:
+                alpha = 255
+            elif alpha < 0:
+                alpha = 0
+            color_value[3] = alpha
+        return color_value
+
+
+def _convert_host_imageio(host_settings):
+    if "imageio" not in host_settings:
+        return
+
+    # --- imageio ---
+    ayon_imageio = host_settings["imageio"]
+    # TODO remove when fixed on server
+    if "ocio_config" in ayon_imageio["ocio_config"]:
+        ayon_imageio["ocio_config"]["filepath"] = (
+            ayon_imageio["ocio_config"].pop("ocio_config")
+        )
+    # Convert file rules
+    imageio_file_rules = ayon_imageio["file_rules"]
+    new_rules = {}
+    for rule in imageio_file_rules["rules"]:
+        name = rule.pop("name")
+        new_rules[name] = rule
+    imageio_file_rules["rules"] = new_rules
+
+
+def _convert_applications_groups(groups, clear_metadata):
+    environment_key = "environment"
+    if isinstance(groups, dict):
+        new_groups = []
+        for name, item in groups.items():
+            item["name"] = name
+            new_groups.append(item)
+        groups = new_groups
+
+    output = {}
+    group_dynamic_labels = {}
+    for group in groups:
+        group_name = group.pop("name")
+        if "label" in group:
+            group_dynamic_labels[group_name] = group["label"]
+
+        tool_group_envs = group[environment_key]
+        if 
isinstance(tool_group_envs, six.string_types): + group[environment_key] = json.loads(tool_group_envs) + + variants = {} + variant_dynamic_labels = {} + for variant in group.pop("variants"): + variant_name = variant.pop("name") + label = variant.get("label") + if label and label != variant_name: + variant_dynamic_labels[variant_name] = label + variant_envs = variant[environment_key] + if isinstance(variant_envs, six.string_types): + variant[environment_key] = json.loads(variant_envs) + variants[variant_name] = variant + group["variants"] = variants + + if not clear_metadata: + variants["__dynamic_keys_labels__"] = variant_dynamic_labels + output[group_name] = group + + if not clear_metadata: + output["__dynamic_keys_labels__"] = group_dynamic_labels + return output + + +def _convert_applications_system_settings( + ayon_settings, output, clear_metadata +): + # Addon settings + addon_settings = ayon_settings["applications"] + + # Remove project settings + addon_settings.pop("only_available", None) + + # Applications settings + ayon_apps = addon_settings["applications"] + + additional_apps = ayon_apps.pop("additional_apps") + applications = _convert_applications_groups( + ayon_apps, clear_metadata + ) + applications["additional_apps"] = _convert_applications_groups( + additional_apps, clear_metadata + ) + + # Tools settings + tools = _convert_applications_groups( + addon_settings["tool_groups"], clear_metadata + ) + + output["applications"] = applications + output["tools"] = {"tool_groups": tools} + + +def _convert_general(ayon_settings, output, default_settings): + # TODO get studio name/code + core_settings = ayon_settings["core"] + environments = core_settings["environments"] + if isinstance(environments, six.string_types): + environments = json.loads(environments) + + general = default_settings["general"] + general.update({ + "log_to_server": False, + "studio_name": core_settings["studio_name"], + "studio_code": core_settings["studio_code"], + "environment": environments + }) + output["general"] = general + + +def _convert_kitsu_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("kitsu") is not None + kitsu_settings = default_settings["modules"]["kitsu"] + kitsu_settings["enabled"] = enabled + if enabled: + kitsu_settings["server"] = ayon_settings["kitsu"]["server"] + output["modules"]["kitsu"] = kitsu_settings + + +def _convert_timers_manager_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("timers_manager") is not None + manager_settings = default_settings["modules"]["timers_manager"] + manager_settings["enabled"] = enabled + if enabled: + ayon_manager = ayon_settings["timers_manager"] + manager_settings.update({ + key: ayon_manager[key] + for key in { + "auto_stop", + "full_time", + "message_time", + "disregard_publishing" + } + }) + output["modules"]["timers_manager"] = manager_settings + + +def _convert_clockify_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("clockify") is not None + clockify_settings = default_settings["modules"]["clockify"] + clockify_settings["enabled"] = enabled + if enabled: + clockify_settings["workspace_name"] = ( + ayon_settings["clockify"]["workspace_name"] + ) + output["modules"]["clockify"] = clockify_settings + + +def _convert_deadline_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("deadline") is not None + 
deadline_settings = default_settings["modules"]["deadline"] + deadline_settings["enabled"] = enabled + if enabled: + ayon_deadline = ayon_settings["deadline"] + deadline_settings["deadline_urls"] = { + item["name"]: item["value"] + for item in ayon_deadline["deadline_urls"] + } + + output["modules"]["deadline"] = deadline_settings + + +def _convert_muster_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("muster") is not None + muster_settings = default_settings["modules"]["muster"] + muster_settings["enabled"] = enabled + if enabled: + ayon_muster = ayon_settings["muster"] + muster_settings["MUSTER_REST_URL"] = ayon_muster["MUSTER_REST_URL"] + muster_settings["templates_mapping"] = { + item["name"]: item["value"] + for item in ayon_muster["templates_mapping"] + } + output["modules"]["muster"] = muster_settings + + +def _convert_royalrender_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("royalrender") is not None + rr_settings = default_settings["modules"]["royalrender"] + rr_settings["enabled"] = enabled + if enabled: + ayon_royalrender = ayon_settings["royalrender"] + rr_settings["rr_paths"] = { + item["name"]: item["value"] + for item in ayon_royalrender["rr_paths"] + } + output["modules"]["royalrender"] = rr_settings + + +def _convert_modules_system( + ayon_settings, output, addon_versions, default_settings +): + # TODO add all modules + # TODO add 'enabled' values + for func in ( + _convert_kitsu_system_settings, + _convert_timers_manager_system_settings, + _convert_clockify_system_settings, + _convert_deadline_system_settings, + _convert_muster_system_settings, + _convert_royalrender_system_settings, + ): + func(ayon_settings, output, addon_versions, default_settings) + + modules_settings = output["modules"] + for module_name in ( + "sync_server", + "log_viewer", + "standalonepublish_tool", + "project_manager", + "job_queue", + "avalon", + "addon_paths", + ): + settings = default_settings["modules"][module_name] + if "enabled" in settings: + settings["enabled"] = False + modules_settings[module_name] = settings + + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value + + # Make sure addons have access to settings in initialization + # - ModulesManager passes only modules settings into initialization + if key not in modules_settings: + modules_settings[key] = value + + +def convert_system_settings(ayon_settings, default_settings, addon_versions): + default_settings = copy.deepcopy(default_settings) + output = { + "modules": {} + } + if "applications" in ayon_settings: + _convert_applications_system_settings(ayon_settings, output, False) + + if "core" in ayon_settings: + _convert_general(ayon_settings, output, default_settings) + + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value + + for key, value in default_settings.items(): + if key not in output: + output[key] = value + + _convert_modules_system( + ayon_settings, + output, + addon_versions, + default_settings + ) + return output + + +# --------- Project settings --------- +def _convert_applications_project_settings(ayon_settings, output): + if "applications" not in ayon_settings: + return + + output["applications"] = { + "only_available": ayon_settings["applications"]["only_available"] + } + + +def _convert_blender_project_settings(ayon_settings, output): + if "blender" not in ayon_settings: + return + ayon_blender = 
ayon_settings["blender"] + _convert_host_imageio(ayon_blender) + + ayon_publish = ayon_blender["publish"] + + for plugin in ("ExtractThumbnail", "ExtractPlayblast"): + plugin_settings = ayon_publish[plugin] + plugin_settings["presets"] = json.loads(plugin_settings["presets"]) + + output["blender"] = ayon_blender + + +def _convert_celaction_project_settings(ayon_settings, output): + if "celaction" not in ayon_settings: + return + + ayon_celaction = ayon_settings["celaction"] + _convert_host_imageio(ayon_celaction) + + output["celaction"] = ayon_celaction + + +def _convert_flame_project_settings(ayon_settings, output): + if "flame" not in ayon_settings: + return + + ayon_flame = ayon_settings["flame"] + + ayon_publish_flame = ayon_flame["publish"] + # Plugin 'ExtractSubsetResources' renamed to 'ExtractProductResources' + if "ExtractSubsetResources" in ayon_publish_flame: + ayon_product_resources = ayon_publish_flame["ExtractSubsetResources"] + else: + ayon_product_resources = ( + ayon_publish_flame.pop("ExtractProductResources")) + ayon_publish_flame["ExtractSubsetResources"] = ayon_product_resources + + # 'ExtractSubsetResources' changed model of 'export_presets_mapping' + # - some keys were moved under 'other_parameters' + new_subset_resources = {} + for item in ayon_product_resources.pop("export_presets_mapping"): + name = item.pop("name") + if "other_parameters" in item: + other_parameters = item.pop("other_parameters") + item.update(other_parameters) + new_subset_resources[name] = item + + ayon_product_resources["export_presets_mapping"] = new_subset_resources + + # 'imageio' changed model + # - missing subkey 'project' which is in root of 'imageio' model + _convert_host_imageio(ayon_flame) + ayon_imageio_flame = ayon_flame["imageio"] + if "project" not in ayon_imageio_flame: + profile_mapping = ayon_imageio_flame.pop("profilesMapping") + ayon_flame["imageio"] = { + "project": ayon_imageio_flame, + "profilesMapping": profile_mapping + } + + ayon_load_flame = ayon_flame["load"] + for plugin_name in ("LoadClip", "LoadClipBatch"): + plugin_settings = ayon_load_flame[plugin_name] + plugin_settings["families"] = plugin_settings.pop("product_types") + plugin_settings["clip_name_template"] = ( + plugin_settings["clip_name_template"] + .replace("{folder[name]}", "{asset}") + .replace("{product[name]}", "{subset}") + ) + plugin_settings["layer_rename_template"] = ( + plugin_settings["layer_rename_template"] + .replace("{folder[name]}", "{asset}") + .replace("{product[name]}", "{subset}") + ) + + output["flame"] = ayon_flame + + +def _convert_fusion_project_settings(ayon_settings, output): + if "fusion" not in ayon_settings: + return + + ayon_fusion = ayon_settings["fusion"] + _convert_host_imageio(ayon_fusion) + + ayon_imageio_fusion = ayon_fusion["imageio"] + + if "ocioSettings" in ayon_imageio_fusion: + ayon_ocio_setting = ayon_imageio_fusion.pop("ocioSettings") + paths = ayon_ocio_setting.pop("ocioPathModel") + for key, value in tuple(paths.items()): + new_value = [] + if value: + new_value.append(value) + paths[key] = new_value + + ayon_ocio_setting["configFilePath"] = paths + ayon_imageio_fusion["ocio"] = ayon_ocio_setting + elif "ocio" in ayon_imageio_fusion: + paths = ayon_imageio_fusion["ocio"].pop("configFilePath") + for key, value in tuple(paths.items()): + new_value = [] + if value: + new_value.append(value) + paths[key] = new_value + ayon_imageio_fusion["ocio"]["configFilePath"] = paths + + _convert_host_imageio(ayon_imageio_fusion) + + ayon_create_saver = 
ayon_fusion["create"]["CreateSaver"] + ayon_create_saver["temp_rendering_path_template"] = ( + ayon_create_saver["temp_rendering_path_template"] + .replace("{product[name]}", "{subset}") + .replace("{product[type]}", "{family}") + .replace("{folder[name]}", "{asset}") + .replace("{task[name]}", "{task}") + ) + + output["fusion"] = ayon_fusion + + +def _convert_maya_project_settings(ayon_settings, output): + if "maya" not in ayon_settings: + return + + ayon_maya = ayon_settings["maya"] + + # Change key of render settings + ayon_maya["RenderSettings"] = ayon_maya.pop("render_settings") + + # Convert extensions mapping + ayon_maya["ext_mapping"] = { + item["name"]: item["value"] + for item in ayon_maya["ext_mapping"] + } + + # Publish UI filters + new_filters = {} + for item in ayon_maya["filters"]: + new_filters[item["name"]] = { + subitem["name"]: subitem["value"] + for subitem in item["value"] + } + ayon_maya["filters"] = new_filters + + # Maya dirmap + ayon_maya_dirmap = ayon_maya.pop("maya_dirmap") + ayon_maya_dirmap_path = ayon_maya_dirmap["paths"] + ayon_maya_dirmap_path["source-path"] = ( + ayon_maya_dirmap_path.pop("source_path") + ) + ayon_maya_dirmap_path["destination-path"] = ( + ayon_maya_dirmap_path.pop("destination_path") + ) + ayon_maya["maya-dirmap"] = ayon_maya_dirmap + + # Create plugins + ayon_create = ayon_maya["create"] + ayon_create_static_mesh = ayon_create["CreateUnrealStaticMesh"] + if "static_mesh_prefixes" in ayon_create_static_mesh: + ayon_create_static_mesh["static_mesh_prefix"] = ( + ayon_create_static_mesh.pop("static_mesh_prefixes") + ) + + # --- Publish (START) --- + ayon_publish = ayon_maya["publish"] + try: + attributes = json.loads( + ayon_publish["ValidateAttributes"]["attributes"] + ) + except ValueError: + attributes = {} + ayon_publish["ValidateAttributes"]["attributes"] = attributes + + try: + SUFFIX_NAMING_TABLE = json.loads( + ayon_publish + ["ValidateTransformNamingSuffix"] + ["SUFFIX_NAMING_TABLE"] + ) + except ValueError: + SUFFIX_NAMING_TABLE = {} + ayon_publish["ValidateTransformNamingSuffix"]["SUFFIX_NAMING_TABLE"] = ( + SUFFIX_NAMING_TABLE + ) + + validate_frame_range = ayon_publish["ValidateFrameRange"] + if "exclude_product_types" in validate_frame_range: + validate_frame_range["exclude_families"] = ( + validate_frame_range.pop("exclude_product_types")) + + # Extract playblast capture settings + validate_rendern_settings = ayon_publish["ValidateRenderSettings"] + for key in ( + "arnold_render_attributes", + "vray_render_attributes", + "redshift_render_attributes", + "renderman_render_attributes", + ): + if key not in validate_rendern_settings: + continue + validate_rendern_settings[key] = [ + [item["type"], item["value"]] + for item in validate_rendern_settings[key] + ] + + plugin_path_attributes = ayon_publish["ValidatePluginPathAttributes"] + plugin_path_attributes["attribute"] = { + item["name"]: item["value"] + for item in plugin_path_attributes["attribute"] + } + + ayon_capture_preset = ayon_publish["ExtractPlayblast"]["capture_preset"] + display_options = ayon_capture_preset["DisplayOptions"] + for key in ("background", "backgroundBottom", "backgroundTop"): + display_options[key] = _convert_color(display_options[key]) + + for src_key, dst_key in ( + ("DisplayOptions", "Display Options"), + ("ViewportOptions", "Viewport Options"), + ("CameraOptions", "Camera Options"), + ): + ayon_capture_preset[dst_key] = ayon_capture_preset.pop(src_key) + + viewport_options = ayon_capture_preset["Viewport Options"] + viewport_options["pluginObjects"] 
= { + item["name"]: item["value"] + for item in viewport_options["pluginObjects"] + } + + # Extract Camera Alembic bake attributes + try: + bake_attributes = json.loads( + ayon_publish["ExtractCameraAlembic"]["bake_attributes"] + ) + except ValueError: + bake_attributes = [] + ayon_publish["ExtractCameraAlembic"]["bake_attributes"] = bake_attributes + + # --- Publish (END) --- + for renderer_settings in ayon_maya["RenderSettings"].values(): + if ( + not isinstance(renderer_settings, dict) + or "additional_options" not in renderer_settings + ): + continue + renderer_settings["additional_options"] = [ + [item["attribute"], item["value"]] + for item in renderer_settings["additional_options"] + ] + + # Workfile build + ayon_workfile_build = ayon_maya["workfile_build"] + for item in ayon_workfile_build["profiles"]: + for key in ("current_context", "linked_assets"): + for subitem in item[key]: + if "families" in subitem: + break + subitem["families"] = subitem.pop("product_types") + subitem["subset_name_filters"] = subitem.pop( + "product_name_filters") + + _convert_host_imageio(ayon_maya) + + ayon_maya_load = ayon_maya["load"] + load_colors = ayon_maya_load["colors"] + for key, color in tuple(load_colors.items()): + load_colors[key] = _convert_color(color) + + reference_loader = ayon_maya_load["reference_loader"] + reference_loader["namespace"] = ( + reference_loader["namespace"] + .replace("{product[name]}", "{subset}") + ) + + if ayon_maya_load.get("import_loader"): + import_loader = ayon_maya_load["import_loader"] + import_loader["namespace"] = ( + import_loader["namespace"] + .replace("{product[name]}", "{subset}") + ) + + output["maya"] = ayon_maya + + +def _convert_nuke_knobs(knobs): + new_knobs = [] + for knob in knobs: + knob_type = knob["type"] + + if knob_type == "boolean": + knob_type = "bool" + + if knob_type != "bool": + value = knob[knob_type] + elif knob_type in knob: + value = knob[knob_type] + else: + value = knob["boolean"] + + new_knob = { + "type": knob_type, + "name": knob["name"], + } + new_knobs.append(new_knob) + + if knob_type == "formatable": + new_knob["template"] = value["template"] + new_knob["to_type"] = value["to_type"] + continue + + value_key = "value" + if knob_type == "expression": + value_key = "expression" + + elif knob_type == "color_gui": + value = _convert_color(value) + + elif knob_type == "vector_2d": + value = [value["x"], value["y"]] + + elif knob_type == "vector_3d": + value = [value["x"], value["y"], value["z"]] + + elif knob_type == "box": + value = [value["x"], value["y"], value["r"], value["t"]] + + new_knob[value_key] = value + return new_knobs + + +def _convert_nuke_project_settings(ayon_settings, output): + if "nuke" not in ayon_settings: + return + + ayon_nuke = ayon_settings["nuke"] + + # --- Dirmap --- + dirmap = ayon_nuke.pop("dirmap") + for src_key, dst_key in ( + ("source_path", "source-path"), + ("destination_path", "destination-path"), + ): + dirmap["paths"][dst_key] = dirmap["paths"].pop(src_key) + ayon_nuke["nuke-dirmap"] = dirmap + + # --- Filters --- + new_gui_filters = {} + for item in ayon_nuke.pop("filters"): + subvalue = {} + key = item["name"] + for subitem in item["value"]: + subvalue[subitem["name"]] = subitem["value"] + new_gui_filters[key] = subvalue + ayon_nuke["filters"] = new_gui_filters + + # --- Load --- + ayon_load = ayon_nuke["load"] + ayon_load["LoadClip"]["_representations"] = ( + ayon_load["LoadClip"].pop("representations_include") + ) + ayon_load["LoadImage"]["_representations"] = ( + 
ayon_load["LoadImage"].pop("representations_include") + ) + + # --- Create --- + ayon_create = ayon_nuke["create"] + for creator_name in ( + "CreateWritePrerender", + "CreateWriteImage", + "CreateWriteRender", + ): + create_plugin_settings = ayon_create[creator_name] + create_plugin_settings["temp_rendering_path_template"] = ( + create_plugin_settings["temp_rendering_path_template"] + .replace("{product[name]}", "{subset}") + .replace("{product[type]}", "{family}") + .replace("{task[name]}", "{task}") + .replace("{folder[name]}", "{asset}") + ) + new_prenodes = {} + for prenode in create_plugin_settings["prenodes"]: + name = prenode.pop("name") + prenode["knobs"] = _convert_nuke_knobs(prenode["knobs"]) + new_prenodes[name] = prenode + + create_plugin_settings["prenodes"] = new_prenodes + + # --- Publish --- + ayon_publish = ayon_nuke["publish"] + slate_mapping = ayon_publish["ExtractSlateFrame"]["key_value_mapping"] + for key in tuple(slate_mapping.keys()): + value = slate_mapping[key] + slate_mapping[key] = [value["enabled"], value["template"]] + + ayon_publish["ValidateKnobs"]["knobs"] = json.loads( + ayon_publish["ValidateKnobs"]["knobs"] + ) + + new_review_data_outputs = {} + for item in ayon_publish["ExtractReviewDataMov"]["outputs"]: + item_filter = item["filter"] + if "product_names" in item_filter: + item_filter["subsets"] = item_filter.pop("product_names") + item_filter["families"] = item_filter.pop("product_types") + + name = item.pop("name") + new_review_data_outputs[name] = item + ayon_publish["ExtractReviewDataMov"]["outputs"] = new_review_data_outputs + + collect_instance_data = ayon_publish["CollectInstanceData"] + if "sync_workfile_version_on_product_types" in collect_instance_data: + collect_instance_data["sync_workfile_version_on_families"] = ( + collect_instance_data.pop( + "sync_workfile_version_on_product_types")) + + # TODO 'ExtractThumbnail' does not have ideal schema in v3 + ayon_extract_thumbnail = ayon_publish["ExtractThumbnail"] + new_thumbnail_nodes = {} + for item in ayon_extract_thumbnail["nodes"]: + name = item["nodeclass"] + value = [] + for knob in _convert_nuke_knobs(item["knobs"]): + knob_name = knob["name"] + # This may crash + if knob["type"] == "expression": + knob_value = knob["expression"] + else: + knob_value = knob["value"] + value.append([knob_name, knob_value]) + new_thumbnail_nodes[name] = value + + ayon_extract_thumbnail["nodes"] = new_thumbnail_nodes + + if "reposition_nodes" in ayon_extract_thumbnail: + for item in ayon_extract_thumbnail["reposition_nodes"]: + item["knobs"] = _convert_nuke_knobs(item["knobs"]) + + # --- ImageIO --- + # NOTE 'monitorOutLut' is maybe not yet in v3 (ut should be) + _convert_host_imageio(ayon_nuke) + ayon_imageio = ayon_nuke["imageio"] + for item in ayon_imageio["nodes"]["requiredNodes"]: + item["knobs"] = _convert_nuke_knobs(item["knobs"]) + for item in ayon_imageio["nodes"]["overrideNodes"]: + item["knobs"] = _convert_nuke_knobs(item["knobs"]) + + output["nuke"] = ayon_nuke + + +def _convert_hiero_project_settings(ayon_settings, output): + if "hiero" not in ayon_settings: + return + + ayon_hiero = ayon_settings["hiero"] + _convert_host_imageio(ayon_hiero) + + new_gui_filters = {} + for item in ayon_hiero.pop("filters"): + subvalue = {} + key = item["name"] + for subitem in item["value"]: + subvalue[subitem["name"]] = subitem["value"] + new_gui_filters[key] = subvalue + ayon_hiero["filters"] = new_gui_filters + + ayon_load_clip = ayon_hiero["load"]["LoadClip"] + if "product_types" in ayon_load_clip: + 
ayon_load_clip["families"] = ayon_load_clip.pop("product_types") + + ayon_load_clip = ayon_hiero["load"]["LoadClip"] + ayon_load_clip["clip_name_template"] = ( + ayon_load_clip["clip_name_template"] + .replace("{folder[name]}", "{asset}") + .replace("{product[name]}", "{subset}") + ) + + output["hiero"] = ayon_hiero + + +def _convert_photoshop_project_settings(ayon_settings, output): + if "photoshop" not in ayon_settings: + return + + ayon_photoshop = ayon_settings["photoshop"] + _convert_host_imageio(ayon_photoshop) + + ayon_publish_photoshop = ayon_photoshop["publish"] + + ayon_colorcoded = ayon_publish_photoshop["CollectColorCodedInstances"] + if "flatten_product_type_template" in ayon_colorcoded: + ayon_colorcoded["flatten_subset_template"] = ( + ayon_colorcoded.pop("flatten_product_type_template")) + + collect_review = ayon_publish_photoshop["CollectReview"] + if "active" in collect_review: + collect_review["publish"] = collect_review.pop("active") + + output["photoshop"] = ayon_photoshop + + +def _convert_tvpaint_project_settings(ayon_settings, output): + if "tvpaint" not in ayon_settings: + return + ayon_tvpaint = ayon_settings["tvpaint"] + + _convert_host_imageio(ayon_tvpaint) + + filters = {} + for item in ayon_tvpaint["filters"]: + value = item["value"] + try: + value = json.loads(value) + + except ValueError: + value = {} + filters[item["name"]] = value + ayon_tvpaint["filters"] = filters + + ayon_publish_settings = ayon_tvpaint["publish"] + for plugin_name in ( + "ValidateProjectSettings", + "ValidateMarks", + "ValidateStartFrame", + "ValidateAssetName", + ): + ayon_value = ayon_publish_settings[plugin_name] + for src_key, dst_key in ( + ("action_enabled", "optional"), + ("action_enable", "active"), + ): + if src_key in ayon_value: + ayon_value[dst_key] = ayon_value.pop(src_key) + + extract_sequence_setting = ayon_publish_settings["ExtractSequence"] + extract_sequence_setting["review_bg"] = _convert_color( + extract_sequence_setting["review_bg"] + ) + + output["tvpaint"] = ayon_tvpaint + + +def _convert_traypublisher_project_settings(ayon_settings, output): + if "traypublisher" not in ayon_settings: + return + + ayon_traypublisher = ayon_settings["traypublisher"] + + _convert_host_imageio(ayon_traypublisher) + + ayon_editorial_simple = ( + ayon_traypublisher["editorial_creators"]["editorial_simple"] + ) + # Subset -> Product type conversion + if "product_type_presets" in ayon_editorial_simple: + family_presets = ayon_editorial_simple.pop("product_type_presets") + for item in family_presets: + item["family"] = item.pop("product_type") + ayon_editorial_simple["family_presets"] = family_presets + + if "shot_metadata_creator" in ayon_editorial_simple: + shot_metadata_creator = ayon_editorial_simple.pop( + "shot_metadata_creator" + ) + if isinstance(shot_metadata_creator["clip_name_tokenizer"], dict): + shot_metadata_creator["clip_name_tokenizer"] = [ + {"name": "_sequence_", "regex": "(sc\\d{3})"}, + {"name": "_shot_", "regex": "(sh\\d{3})"}, + ] + ayon_editorial_simple.update(shot_metadata_creator) + + ayon_editorial_simple["clip_name_tokenizer"] = { + item["name"]: item["regex"] + for item in ayon_editorial_simple["clip_name_tokenizer"] + } + + if "shot_subset_creator" in ayon_editorial_simple: + ayon_editorial_simple.update( + ayon_editorial_simple.pop("shot_subset_creator")) + for item in ayon_editorial_simple["shot_hierarchy"]["parents"]: + item["type"] = item.pop("parent_type") + + # Simple creators + ayon_simple_creators = ayon_traypublisher["simple_creators"] + for item in 
ayon_simple_creators: + if "product_type" not in item: + break + item["family"] = item.pop("product_type") + + shot_add_tasks = ayon_editorial_simple["shot_add_tasks"] + if isinstance(shot_add_tasks, dict): + shot_add_tasks = [] + new_shot_add_tasks = { + item["name"]: item["task_type"] + for item in shot_add_tasks + } + ayon_editorial_simple["shot_add_tasks"] = new_shot_add_tasks + + output["traypublisher"] = ayon_traypublisher + + +def _convert_webpublisher_project_settings(ayon_settings, output): + if "webpublisher" not in ayon_settings: + return + + ayon_webpublisher = ayon_settings["webpublisher"] + _convert_host_imageio(ayon_webpublisher) + + ayon_publish = ayon_webpublisher["publish"] + + ayon_collect_files = ayon_publish["CollectPublishedFiles"] + ayon_collect_files["task_type_to_family"] = { + item["name"]: item["value"] + for item in ayon_collect_files["task_type_to_family"] + } + + output["webpublisher"] = ayon_webpublisher + + +def _convert_deadline_project_settings(ayon_settings, output): + if "deadline" not in ayon_settings: + return + + ayon_deadline = ayon_settings["deadline"] + + for key in ("deadline_urls",): + ayon_deadline.pop(key) + + ayon_deadline_publish = ayon_deadline["publish"] + limit_groups = { + item["name"]: item["value"] + for item in ayon_deadline_publish["NukeSubmitDeadline"]["limit_groups"] + } + ayon_deadline_publish["NukeSubmitDeadline"]["limit_groups"] = limit_groups + + maya_submit = ayon_deadline_publish["MayaSubmitDeadline"] + for json_key in ("jobInfo", "pluginInfo"): + src_text = maya_submit.pop(json_key) + try: + value = json.loads(src_text) + except ValueError: + value = {} + maya_submit[json_key] = value + + nuke_submit = ayon_deadline_publish["NukeSubmitDeadline"] + nuke_submit["env_search_replace_values"] = { + item["name"]: item["value"] + for item in nuke_submit.pop("env_search_replace_values") + } + nuke_submit["limit_groups"] = { + item["name"]: item["value"] for item in nuke_submit.pop("limit_groups") + } + + process_subsetted_job = ayon_deadline_publish["ProcessSubmittedJobOnFarm"] + process_subsetted_job["aov_filter"] = { + item["name"]: item["value"] + for item in process_subsetted_job.pop("aov_filter") + } + + output["deadline"] = ayon_deadline + + +def _convert_royalrender_project_settings(ayon_settings, output): + if "royalrender" not in ayon_settings: + return + ayon_royalrender = ayon_settings["royalrender"] + rr_paths = ayon_royalrender.get("selected_rr_paths", []) + + output["royalrender"] = { + "publish": ayon_royalrender["publish"], + "rr_paths": rr_paths, + } + + +def _convert_kitsu_project_settings(ayon_settings, output): + if "kitsu" not in ayon_settings: + return + + ayon_kitsu_settings = ayon_settings["kitsu"] + ayon_kitsu_settings.pop("server") + + integrate_note = ayon_kitsu_settings["publish"]["IntegrateKitsuNote"] + status_change_conditions = integrate_note["status_change_conditions"] + if "product_type_requirements" in status_change_conditions: + status_change_conditions["family_requirements"] = ( + status_change_conditions.pop("product_type_requirements")) + + output["kitsu"] = ayon_kitsu_settings + + +def _convert_shotgrid_project_settings(ayon_settings, output): + if "shotgrid" not in ayon_settings: + return + + ayon_shotgrid = ayon_settings["shotgrid"] + # This means that a different variant of addon is used + if "leecher_backend_url" not in ayon_shotgrid: + return + + for key in { + "leecher_backend_url", + "filter_projects_by_login", + "shotgrid_settings", + "leecher_manager_url", + }: + ayon_shotgrid.pop(key) 
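+    # The popped keys exist only in this (AYON) variant of the addon and are
+    # not part of the v3 settings model, so they are dropped before the
+    # remaining fields are remapped to the v3 schema below.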
+ + asset_field = ayon_shotgrid["fields"]["asset"] + asset_field["type"] = asset_field.pop("asset_type") + + task_field = ayon_shotgrid["fields"]["task"] + if "task" in task_field: + task_field["step"] = task_field.pop("task") + + output["shotgrid"] = ayon_settings["shotgrid"] + + +def _convert_slack_project_settings(ayon_settings, output): + if "slack" not in ayon_settings: + return + + ayon_slack = ayon_settings["slack"] + ayon_slack.pop("enabled", None) + for profile in ayon_slack["publish"]["CollectSlackFamilies"]["profiles"]: + profile["tasks"] = profile.pop("task_names") + profile["subsets"] = profile.pop("subset_names") + + output["slack"] = ayon_slack + + +def _convert_global_project_settings(ayon_settings, output, default_settings): + if "core" not in ayon_settings: + return + + ayon_core = ayon_settings["core"] + + _convert_host_imageio(ayon_core) + + for key in ( + "environments", + "studio_name", + "studio_code", + ): + ayon_core.pop(key) + + # Publish conversion + ayon_publish = ayon_core["publish"] + + ayon_collect_audio = ayon_publish["CollectAudio"] + if "audio_product_name" in ayon_collect_audio: + ayon_collect_audio["audio_subset_name"] = ( + ayon_collect_audio.pop("audio_product_name")) + + for profile in ayon_publish["ExtractReview"]["profiles"]: + if "product_types" in profile: + profile["families"] = profile.pop("product_types") + new_outputs = {} + for output_def in profile.pop("outputs"): + name = output_def.pop("name") + new_outputs[name] = output_def + + output_def_filter = output_def["filter"] + if "product_names" in output_def_filter: + output_def_filter["subsets"] = ( + output_def_filter.pop("product_names")) + + for color_key in ("overscan_color", "bg_color"): + output_def[color_key] = _convert_color(output_def[color_key]) + + letter_box = output_def["letter_box"] + for color_key in ("fill_color", "line_color"): + letter_box[color_key] = _convert_color(letter_box[color_key]) + + if "output_width" in output_def: + output_def["width"] = output_def.pop("output_width") + + if "output_height" in output_def: + output_def["height"] = output_def.pop("output_height") + + profile["outputs"] = new_outputs + + # Extract Burnin plugin + extract_burnin = ayon_publish["ExtractBurnin"] + extract_burnin_options = extract_burnin["options"] + for color_key in ("font_color", "bg_color"): + extract_burnin_options[color_key] = _convert_color( + extract_burnin_options[color_key] + ) + + for profile in extract_burnin["profiles"]: + extract_burnin_defs = profile["burnins"] + if "product_names" in profile: + profile["subsets"] = profile.pop("product_names") + profile["families"] = profile.pop("product_types") + + for burnin_def in extract_burnin_defs: + for key in ( + "TOP_LEFT", + "TOP_CENTERED", + "TOP_RIGHT", + "BOTTOM_LEFT", + "BOTTOM_CENTERED", + "BOTTOM_RIGHT", + ): + burnin_def[key] = ( + burnin_def[key] + .replace("{product[name]}", "{subset}") + .replace("{Product[name]}", "{Subset}") + .replace("{PRODUCT[NAME]}", "{SUBSET}") + .replace("{product[type]}", "{family}") + .replace("{Product[type]}", "{Family}") + .replace("{PRODUCT[TYPE]}", "{FAMILY}") + .replace("{folder[name]}", "{asset}") + .replace("{Folder[name]}", "{Asset}") + .replace("{FOLDER[NAME]}", "{ASSET}") + ) + profile["burnins"] = { + extract_burnin_def.pop("name"): extract_burnin_def + for extract_burnin_def in extract_burnin_defs + } + + ayon_integrate_hero = ayon_publish["IntegrateHeroVersion"] + for profile in ayon_integrate_hero["template_name_profiles"]: + if "product_types" not in profile: + break + 
profile["families"] = profile.pop("product_types") + + if "IntegrateProductGroup" in ayon_publish: + subset_group = ayon_publish.pop("IntegrateProductGroup") + subset_group_profiles = subset_group.pop("product_grouping_profiles") + for profile in subset_group_profiles: + profile["families"] = profile.pop("product_types") + subset_group["subset_grouping_profiles"] = subset_group_profiles + ayon_publish["IntegrateSubsetGroup"] = subset_group + + # Cleanup plugin + ayon_cleanup = ayon_publish["CleanUp"] + if "patterns" in ayon_cleanup: + ayon_cleanup["paterns"] = ayon_cleanup.pop("patterns") + + # Project root settings - json string to dict + ayon_core["project_environments"] = json.loads( + ayon_core["project_environments"] + ) + ayon_core["project_folder_structure"] = json.dumps(json.loads( + ayon_core["project_folder_structure"] + )) + + # Tools settings + ayon_tools = ayon_core["tools"] + ayon_create_tool = ayon_tools["creator"] + if "product_name_profiles" in ayon_create_tool: + product_name_profiles = ayon_create_tool.pop("product_name_profiles") + for profile in product_name_profiles: + profile["families"] = profile.pop("product_types") + ayon_create_tool["subset_name_profiles"] = product_name_profiles + + for profile in ayon_create_tool["subset_name_profiles"]: + template = profile["template"] + profile["template"] = ( + template + .replace("{task[name]}", "{task}") + .replace("{Task[name]}", "{Task}") + .replace("{TASK[NAME]}", "{TASK}") + .replace("{product[type]}", "{family}") + .replace("{Product[type]}", "{Family}") + .replace("{PRODUCT[TYPE]}", "{FAMILY}") + .replace("{folder[name]}", "{asset}") + .replace("{Folder[name]}", "{Asset}") + .replace("{FOLDER[NAME]}", "{ASSET}") + ) + + product_smart_select_key = "families_smart_select" + if "product_types_smart_select" in ayon_create_tool: + product_smart_select_key = "product_types_smart_select" + + new_smart_select_families = { + item["name"]: item["task_names"] + for item in ayon_create_tool.pop(product_smart_select_key) + } + ayon_create_tool["families_smart_select"] = new_smart_select_families + + ayon_loader_tool = ayon_tools["loader"] + if "product_type_filter_profiles" in ayon_loader_tool: + product_type_filter_profiles = ( + ayon_loader_tool.pop("product_type_filter_profiles")) + for profile in product_type_filter_profiles: + profile["filter_families"] = profile.pop("filter_product_types") + + ayon_loader_tool["family_filter_profiles"] = ( + product_type_filter_profiles) + + ayon_publish_tool = ayon_tools["publish"] + for profile in ayon_publish_tool["hero_template_name_profiles"]: + if "product_types" in profile: + profile["families"] = profile.pop("product_types") + + for profile in ayon_publish_tool["template_name_profiles"]: + if "product_types" in profile: + profile["families"] = profile.pop("product_types") + + ayon_core["sync_server"] = ( + default_settings["global"]["sync_server"] + ) + output["global"] = ayon_core + + +def convert_project_settings(ayon_settings, default_settings): + # Missing settings + # - standalonepublisher + default_settings = copy.deepcopy(default_settings) + output = {} + exact_match = { + "aftereffects", + "harmony", + "houdini", + "resolve", + "unreal", + } + for key in exact_match: + if key in ayon_settings: + output[key] = ayon_settings[key] + _convert_host_imageio(output[key]) + + _convert_applications_project_settings(ayon_settings, output) + _convert_blender_project_settings(ayon_settings, output) + _convert_celaction_project_settings(ayon_settings, output) + 
_convert_flame_project_settings(ayon_settings, output) + _convert_fusion_project_settings(ayon_settings, output) + _convert_maya_project_settings(ayon_settings, output) + _convert_nuke_project_settings(ayon_settings, output) + _convert_hiero_project_settings(ayon_settings, output) + _convert_photoshop_project_settings(ayon_settings, output) + _convert_tvpaint_project_settings(ayon_settings, output) + _convert_traypublisher_project_settings(ayon_settings, output) + _convert_webpublisher_project_settings(ayon_settings, output) + + _convert_deadline_project_settings(ayon_settings, output) + _convert_royalrender_project_settings(ayon_settings, output) + _convert_kitsu_project_settings(ayon_settings, output) + _convert_shotgrid_project_settings(ayon_settings, output) + _convert_slack_project_settings(ayon_settings, output) + + _convert_global_project_settings(ayon_settings, output, default_settings) + + for key, value in ayon_settings.items(): + if key not in output: + output[key] = value + + for key, value in default_settings.items(): + if key not in output: + output[key] = value + + return output + + +class CacheItem: + lifetime = 10 + + def __init__(self, value, outdate_time=None): + self._value = value + if outdate_time is None: + outdate_time = time.time() + self.lifetime + self._outdate_time = outdate_time + + @classmethod + def create_outdated(cls): + return cls({}, 0) + + def get_value(self): + return copy.deepcopy(self._value) + + def update_value(self, value): + self._value = value + self._outdate_time = time.time() + self.lifetime + + @property + def is_outdated(self): + return time.time() > self._outdate_time + + +class _AyonSettingsCache: + use_bundles = None + variant = None + addon_versions = CacheItem.create_outdated() + studio_settings = CacheItem.create_outdated() + cache_by_project_name = collections.defaultdict( + CacheItem.create_outdated) + + @classmethod + def _use_bundles(cls): + if _AyonSettingsCache.use_bundles is None: + major, minor, _, _, _ = ayon_api.get_server_version_tuple() + _AyonSettingsCache.use_bundles = major == 0 and minor >= 3 + return _AyonSettingsCache.use_bundles + + @classmethod + def _get_variant(cls): + if _AyonSettingsCache.variant is None: + from openpype.lib.openpype_version import is_staging_enabled + + _AyonSettingsCache.variant = ( + "staging" if is_staging_enabled() else "production" + ) + return _AyonSettingsCache.variant + + @classmethod + def _get_bundle_name(cls): + return os.environ["AYON_BUNDLE_NAME"] + + @classmethod + def get_value_by_project(cls, project_name): + cache_item = _AyonSettingsCache.cache_by_project_name[project_name] + if cache_item.is_outdated: + if cls._use_bundles(): + value = ayon_api.get_addons_settings( + bundle_name=cls._get_bundle_name(), + project_name=project_name + ) + else: + value = ayon_api.get_addons_settings(project_name) + cache_item.update_value(value) + return cache_item.get_value() + + @classmethod + def _get_addon_versions_from_bundle(cls): + expected_bundle = cls._get_bundle_name() + bundles = ayon_api.get_bundles()["bundles"] + bundle = next( + ( + bundle + for bundle in bundles + if bundle["name"] == expected_bundle + ), + None + ) + if bundle is not None: + return bundle["addons"] + return {} + + @classmethod + def get_addon_versions(cls): + cache_item = _AyonSettingsCache.addon_versions + if cache_item.is_outdated: + if cls._use_bundles(): + addons = cls._get_addon_versions_from_bundle() + else: + settings_data = ayon_api.get_addons_settings( + only_values=False, variant=cls._get_variant()) + 
addons = settings_data["versions"] + cache_item.update_value(addons) + + return cache_item.get_value() + + +def get_ayon_project_settings(default_values, project_name): + ayon_settings = _AyonSettingsCache.get_value_by_project(project_name) + return convert_project_settings(ayon_settings, default_values) + + +def get_ayon_system_settings(default_values): + addon_versions = _AyonSettingsCache.get_addon_versions() + ayon_settings = _AyonSettingsCache.get_value_by_project(None) + + return convert_system_settings( + ayon_settings, default_values, addon_versions + ) diff --git a/openpype/settings/defaults/project_settings/aftereffects.json b/openpype/settings/defaults/project_settings/aftereffects.json index 63f544e536..77ccb74410 100644 --- a/openpype/settings/defaults/project_settings/aftereffects.json +++ b/openpype/settings/defaults/project_settings/aftereffects.json @@ -12,7 +12,7 @@ }, "create": { "RenderCreator": { - "defaults": [ + "default_variants": [ "Main" ], "mark_for_review": true diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index eae5b239c8..df865adeba 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -4,6 +4,8 @@ "apply_on_opening": false, "base_file_unit_scale": 0.01 }, + "set_resolution_startup": true, + "set_frames_startup": true, "imageio": { "activate_host_color_management": true, "ocio_config": { @@ -54,7 +56,8 @@ "camera", "rig", "action", - "layout" + "layout", + "blendScene" ] }, "ExtractFBX": { @@ -82,6 +85,11 @@ "optional": true, "active": true }, + "ExtractCameraABC": { + "enabled": true, + "optional": true, + "active": true + }, "ExtractLayout": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/ftrack.json b/openpype/settings/defaults/project_settings/ftrack.json index b87c45666d..e2ca334b5f 100644 --- a/openpype/settings/defaults/project_settings/ftrack.json +++ b/openpype/settings/defaults/project_settings/ftrack.json @@ -1,9 +1,10 @@ { "events": { "sync_to_avalon": { - "statuses_name_change": [ - "ready", - "not ready" + "role_list": [ + "Pypeclub", + "Administrator", + "Project manager" ] }, "prepare_project": { diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 802b964375..06a595d1c5 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -1,10 +1,13 @@ { + "version_start_category": { + "profiles": [] + }, "imageio": { "activate_global_color_management": false, "ocio_config": { "filepath": [ - "{OPENPYPE_ROOT}/vendor/bin/ocioconfig/OpenColorIOConfigs/aces_1.2/config.ocio", - "{OPENPYPE_ROOT}/vendor/bin/ocioconfig/OpenColorIOConfigs/nuke-default/config.ocio" + "{BUILTIN_OCIO_ROOT}/aces_1.2/config.ocio", + "{BUILTIN_OCIO_ROOT}/nuke-default/config.ocio" ] }, "file_rules": { @@ -53,7 +56,8 @@ }, "ValidateEditorialAssetName": { "enabled": true, - "optional": false + "optional": false, + "active": true }, "ValidateVersion": { "enabled": true, @@ -300,74 +304,6 @@ } ] }, - "IntegrateAssetNew": { - "subset_grouping_profiles": [ - { - "families": [], - "hosts": [], - "task_types": [], - "tasks": [], - "template": "" - } - ], - "template_name_profiles": [ - { - "families": [], - "hosts": [], - "task_types": [], - "tasks": [], - "template_name": "publish" - }, - { - "families": [ - "review", - "render", - 
"prerender" - ], - "hosts": [], - "task_types": [], - "tasks": [], - "template_name": "render" - }, - { - "families": [ - "simpleUnrealTexture" - ], - "hosts": [ - "standalonepublisher" - ], - "task_types": [], - "tasks": [], - "template_name": "simpleUnrealTexture" - }, - { - "families": [ - "staticMesh", - "skeletalMesh" - ], - "hosts": [ - "maya" - ], - "task_types": [], - "tasks": [], - "template_name": "maya2unreal" - }, - { - "families": [ - "online" - ], - "hosts": [ - "traypublisher" - ], - "task_types": [], - "tasks": [], - "template_name": "online" - } - ] - }, - "IntegrateAsset": { - "skip_host_families": [] - }, "IntegrateHeroVersion": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/harmony.json b/openpype/settings/defaults/project_settings/harmony.json index 02f51d1d2b..b424b43cc1 100644 --- a/openpype/settings/defaults/project_settings/harmony.json +++ b/openpype/settings/defaults/project_settings/harmony.json @@ -10,22 +10,6 @@ "rules": {} } }, - "load": { - "ImageSequenceLoader": { - "family": [ - "shot", - "render", - "image", - "plate", - "reference" - ], - "representations": [ - "jpeg", - "png", - "jpg" - ] - } - }, "publish": { "CollectPalettes": { "allowed_tasks": [ diff --git a/openpype/settings/defaults/project_settings/houdini.json b/openpype/settings/defaults/project_settings/houdini.json index a53f1ff202..9d047c28bd 100644 --- a/openpype/settings/defaults/project_settings/houdini.json +++ b/openpype/settings/defaults/project_settings/houdini.json @@ -14,48 +14,70 @@ "create": { "CreateArnoldAss": { "enabled": true, - "defaults": [], + "default_variants": [ + "Main" + ], "ext": ".ass" }, "CreateAlembicCamera": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateCompositeSequence": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreatePointCache": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateRedshiftROP": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateRemotePublish": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateVDBCache": { "enabled": true, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSD": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSDModel": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "USDCreateShadingWorkspace": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] }, "CreateUSDRender": { "enabled": false, - "defaults": [] + "default_variants": [ + "Main" + ] } }, "publish": { diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index e3fc5f0723..38f14ec022 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -521,19 +521,19 @@ "enabled": true, "make_tx": true, "rs_tex": false, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRender": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateUnrealStaticMesh": { "enabled": true, - "defaults": [ + "default_variants": [ "", "_Main" ], @@ -547,7 +547,9 @@ }, "CreateUnrealSkeletalMesh": { "enabled": true, - "defaults": [], + "default_variants": [ + "Main" + ], "joint_hints": "jnt_org" }, "CreateMultiverseLook": { @@ -555,12 +557,11 @@ "publish_mip_map": true }, "CreateAnimation": { - "enabled": false, 
"write_color_sets": false, "write_face_sets": false, "include_parent_hierarchy": false, "include_user_defined_attributes": false, - "defaults": [ + "default_variants": [ "Main" ] }, @@ -568,7 +569,7 @@ "enabled": true, "write_color_sets": false, "write_face_sets": false, - "defaults": [ + "default_variants": [ "Main", "Proxy", "Sculpt" @@ -579,7 +580,7 @@ "write_color_sets": false, "write_face_sets": false, "include_user_defined_attributes": false, - "defaults": [ + "default_variants": [ "Main" ] }, @@ -587,20 +588,20 @@ "enabled": true, "write_color_sets": false, "write_face_sets": false, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateReview": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ], "useMayaTimeline": true }, "CreateAss": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ], "expandProcedurals": false, @@ -615,68 +616,68 @@ "maskOverride": false, "maskDriver": false, "maskFilter": false, - "maskColor_manager": false, - "maskOperator": false + "maskOperator": false, + "maskColor_manager": false }, "CreateVrayProxy": { "enabled": true, "vrmesh": true, "alembic": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsd": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsdComp": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMultiverseUsdOver": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateAssembly": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateCamera": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateLayout": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateMayaScene": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRenderSetup": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateRig": { "enabled": true, - "defaults": [ + "default_variants": [ "Main", "Sim", "Cloth" @@ -684,20 +685,20 @@ }, "CreateSetDress": { "enabled": true, - "defaults": [ + "default_variants": [ "Main", "Anim" ] }, "CreateVRayScene": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] }, "CreateYetiRig": { "enabled": true, - "defaults": [ + "default_variants": [ "Main" ] } @@ -1464,6 +1465,10 @@ "namespace": "{asset_name}_{subset}_##_", "group_name": "_GRP", "display_handle": true + }, + "import_loader": { + "namespace": "{asset_name}_{subset}_##_", + "group_name": "_GRP" } }, "workfile_build": { diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index 85e3c0d3c3..b736c462ff 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -465,34 +465,6 @@ "viewer_process_override": "", "bake_viewer_process": true, "bake_viewer_input_process": true, - "reformat_node_add": false, - "reformat_node_config": [ - { - "type": "text", - "name": "type", - "value": "to format" - }, - { - "type": "text", - "name": "format", - "value": "HD_1080" - }, - { - "type": "text", - "name": "filter", - "value": "Lanczos6" - }, - { - "type": "bool", - "name": "black_outside", - "value": true - }, - { - "type": "bool", - "name": "pbb", - "value": false - } - ], "reformat_nodes_config": { "enabled": false, "reposition_nodes": [ diff --git a/openpype/settings/defaults/project_settings/royalrender.json b/openpype/settings/defaults/project_settings/royalrender.json index 
b72fed8474..14e36058aa 100644 --- a/openpype/settings/defaults/project_settings/royalrender.json +++ b/openpype/settings/defaults/project_settings/royalrender.json @@ -1,4 +1,7 @@ { + "rr_paths": [ + "default" + ], "publish": { "CollectSequencesFromJob": { "review": true diff --git a/openpype/settings/defaults/project_settings/substancepainter.json b/openpype/settings/defaults/project_settings/substancepainter.json index 4adeff98ef..2f9344d435 100644 --- a/openpype/settings/defaults/project_settings/substancepainter.json +++ b/openpype/settings/defaults/project_settings/substancepainter.json @@ -2,11 +2,11 @@ "imageio": { "activate_host_color_management": true, "ocio_config": { - "override_global_config": true, + "override_global_config": false, "filepath": [] }, "file_rules": { - "activate_host_rules": true, + "activate_host_rules": false, "rules": {} } }, diff --git a/openpype/settings/defaults/project_settings/traypublisher.json b/openpype/settings/defaults/project_settings/traypublisher.json index 4c2c2f1391..dda958ebcd 100644 --- a/openpype/settings/defaults/project_settings/traypublisher.json +++ b/openpype/settings/defaults/project_settings/traypublisher.json @@ -329,6 +329,11 @@ } }, "publish": { + "CollectFrameDataFromAssetEntity": { + "enabled": true, + "optional": true, + "active": true + }, "ValidateFrameRange": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/tvpaint.json b/openpype/settings/defaults/project_settings/tvpaint.json index 1f4f468656..fdbd6d5d0f 100644 --- a/openpype/settings/defaults/project_settings/tvpaint.json +++ b/openpype/settings/defaults/project_settings/tvpaint.json @@ -60,11 +60,6 @@ 255, 255, 255 - ], - "families_to_review": [ - "review", - "renderlayer", - "renderscene" ] }, "ValidateProjectSettings": { diff --git a/openpype/settings/defaults/system_settings/general.json b/openpype/settings/defaults/system_settings/general.json index d2994d1a62..496c37cd4d 100644 --- a/openpype/settings/defaults/system_settings/general.json +++ b/openpype/settings/defaults/system_settings/general.json @@ -15,6 +15,11 @@ "darwin": [], "linux": [] }, + "local_openpype_path": { + "windows": "", + "darwin": "", + "linux": "" + }, "production_version": "", "staging_version": "", "version_check_interval": 5 diff --git a/openpype/settings/defaults/system_settings/modules.json b/openpype/settings/defaults/system_settings/modules.json index 1ddbfd2726..f524f01d45 100644 --- a/openpype/settings/defaults/system_settings/modules.json +++ b/openpype/settings/defaults/system_settings/modules.json @@ -185,9 +185,9 @@ "enabled": false, "rr_paths": { "default": { - "windows": "", - "darwin": "", - "linux": "" + "windows": "C:\\RR8", + "darwin": "/Volumes/share/RR8", + "linux": "/mnt/studio/RR8" } } }, diff --git a/openpype/settings/entities/__init__.py b/openpype/settings/entities/__init__.py index 5e3a76094e..00db2b33a7 100644 --- a/openpype/settings/entities/__init__.py +++ b/openpype/settings/entities/__init__.py @@ -107,7 +107,8 @@ from .enum_entity import ( TaskTypeEnumEntity, DeadlineUrlEnumEntity, AnatomyTemplatesEnumEntity, - ShotgridUrlEnumEntity + ShotgridUrlEnumEntity, + RoyalRenderRootEnumEntity ) from .list_entity import ListEntity @@ -170,6 +171,7 @@ __all__ = ( "TaskTypeEnumEntity", "DeadlineUrlEnumEntity", "ShotgridUrlEnumEntity", + "RoyalRenderRootEnumEntity", "AnatomyTemplatesEnumEntity", "ListEntity", diff --git a/openpype/settings/entities/enum_entity.py b/openpype/settings/entities/enum_entity.py index 
de3bd353eb..26ecd33551 100644 --- a/openpype/settings/entities/enum_entity.py +++ b/openpype/settings/entities/enum_entity.py @@ -1,3 +1,5 @@ +import abc +import six import copy from .input_entities import InputEntity from .exceptions import EntitySchemaError @@ -477,8 +479,8 @@ class TaskTypeEnumEntity(BaseEnumEntity): self.set(value_on_not_set) -class DeadlineUrlEnumEntity(BaseEnumEntity): - schema_types = ["deadline_url-enum"] +class DynamicEnumEntity(BaseEnumEntity): + schema_types = [] def _item_initialization(self): self.multiselection = self.schema_data.get("multiselection", True) @@ -496,22 +498,8 @@ class DeadlineUrlEnumEntity(BaseEnumEntity): # GUI attribute self.placeholder = self.schema_data.get("placeholder") - def _get_enum_values(self): - deadline_urls_entity = self.get_entity_from_path( - "system_settings/modules/deadline/deadline_urls" - ) - - valid_keys = set() - enum_items_list = [] - for server_name, url_entity in deadline_urls_entity.items(): - enum_items_list.append( - {server_name: "{}: {}".format(server_name, url_entity.value)} - ) - valid_keys.add(server_name) - return enum_items_list, valid_keys - def set_override_state(self, *args, **kwargs): - super(DeadlineUrlEnumEntity, self).set_override_state(*args, **kwargs) + super(DynamicEnumEntity, self).set_override_state(*args, **kwargs) self.enum_items, self.valid_keys = self._get_enum_values() if self.multiselection: @@ -528,22 +516,50 @@ class DeadlineUrlEnumEntity(BaseEnumEntity): elif self._current_value not in self.valid_keys: self._current_value = tuple(self.valid_keys)[0] + @abc.abstractmethod + def _get_enum_values(self): + pass -class ShotgridUrlEnumEntity(BaseEnumEntity): + +class DeadlineUrlEnumEntity(DynamicEnumEntity): + schema_types = ["deadline_url-enum"] + + def _get_enum_values(self): + deadline_urls_entity = self.get_entity_from_path( + "system_settings/modules/deadline/deadline_urls" + ) + + valid_keys = set() + enum_items_list = [] + for server_name, url_entity in deadline_urls_entity.items(): + enum_items_list.append( + {server_name: "{}: {}".format(server_name, url_entity.value)} + ) + valid_keys.add(server_name) + return enum_items_list, valid_keys + + +class RoyalRenderRootEnumEntity(DynamicEnumEntity): + schema_types = ["rr_root-enum"] + + def _get_enum_values(self): + rr_root_entity = self.get_entity_from_path( + "system_settings/modules/royalrender/rr_paths" + ) + + valid_keys = set() + enum_items_list = [] + for server_name, url_entity in rr_root_entity.items(): + enum_items_list.append( + {server_name: "{}: {}".format(server_name, url_entity.value)} + ) + valid_keys.add(server_name) + return enum_items_list, valid_keys + + +class ShotgridUrlEnumEntity(DynamicEnumEntity): schema_types = ["shotgrid_url-enum"] - def _item_initialization(self): - self.multiselection = False - - self.enum_items = [] - self.valid_keys = set() - - self.valid_value_types = (STRING_TYPE,) - self.value_on_not_set = "" - - # GUI attribute - self.placeholder = self.schema_data.get("placeholder") - def _get_enum_values(self): shotgrid_settings = self.get_entity_from_path( "system_settings/modules/shotgrid/shotgrid_settings" @@ -562,16 +578,6 @@ class ShotgridUrlEnumEntity(BaseEnumEntity): valid_keys.add(server_name) return enum_items_list, valid_keys - def set_override_state(self, *args, **kwargs): - super(ShotgridUrlEnumEntity, self).set_override_state(*args, **kwargs) - - self.enum_items, self.valid_keys = self._get_enum_values() - if not self.valid_keys: - self._current_value = "" - - elif self._current_value not in 
self.valid_keys: - self._current_value = tuple(self.valid_keys)[0] - class AnatomyTemplatesEnumEntity(BaseEnumEntity): schema_types = ["anatomy-templates-enum"] diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json index 35b8fede86..72f09a641d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json @@ -32,7 +32,7 @@ "children": [ { "type": "list", - "key": "defaults", + "key": "default_variants", "label": "Default Variants", "object_type": "text", "docstring": "Fill default variant(s) (like 'Main' or 'Default') used in subset name creation." diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json index c549b577b2..aeb70dfd8c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json @@ -31,6 +31,16 @@ } ] }, + { + "key": "set_resolution_startup", + "type": "boolean", + "label": "Set Resolution on Startup" + }, + { + "key": "set_frames_startup", + "type": "boolean", + "label": "Set Start/End Frames and FPS on Startup" + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json index 157a8d297e..d6efb118b9 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json @@ -21,12 +21,9 @@ }, { "type": "list", - "key": "statuses_name_change", - "label": "Statuses", - "object_type": { - "type": "text", - "multiline": false - } + "key": "role_list", + "label": "Roles", + "object_type": "text" } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_global.json b/openpype/settings/entities/schemas/projects_schema/schema_project_global.json index 953361935c..4094632c72 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_global.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_global.json @@ -5,6 +5,61 @@ "label": "Global", "is_file": true, "children": [ + { + "type": "dict", + "key": "version_start_category", + "label": "Version Start", + "collapsible": true, + "collapsible_key": true, + "children": [ + { + "type": "list", + "collapsible": true, + "key": "profiles", + "label": "Profiles", + "object_type": { + "type": "dict", + "children": [ + { + "key": "host_names", + "label": "Host names", + "type": "hosts-enum", + "multiselection": true + }, + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "task_names", + "label": "Task names", + "type": "list", + "object_type": "text" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + }, + { + "key": "subsets", + "label": "Subset names", + "type": "list", + "object_type": "text" + }, + { + "key": "version_start", + "label": "Version Start", + "type": "number", + "minimum": 0 + } + ] + } + } + ] + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json 
b/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json index 98a815f2d4..f081c48b23 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_harmony.json @@ -18,34 +18,6 @@ } ] }, - { - "type": "dict", - "collapsible": true, - "key": "load", - "label": "Loader plugins", - "children": [ - { - "type": "dict", - "collapsible": true, - "key": "ImageSequenceLoader", - "label": "Load Image Sequence", - "children": [ - { - "type": "list", - "key": "family", - "label": "Families", - "object_type": "text" - }, - { - "type": "list", - "key": "representations", - "label": "Representations", - "object_type": "text" - } - ] - } - ] - }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json index 26c64e6219..6b516ddf4a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json @@ -284,6 +284,10 @@ "type": "schema_template", "name": "template_workfile_options" }, + { + "type": "label", + "label": "^ Settings and for Workfile Builder is deprecated and will be soon removed.
Please use Template Workfile Build Settings instead." + }, { "type": "schema", "name": "schema_templated_workfile_build" diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_royalrender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_royalrender.json index cabb4747d5..f4bf2f51ba 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_royalrender.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_royalrender.json @@ -5,6 +5,12 @@ "collapsible": true, "is_file": true, "children": [ + { + "type": "rr_root-enum", + "key": "rr_paths", + "label": "Royal Render Roots", + "multiselect": true + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_shotgrid.json b/openpype/settings/entities/schemas/projects_schema/schema_project_shotgrid.json index 4faeca89f3..a5f1c57121 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_shotgrid.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_shotgrid.json @@ -13,7 +13,8 @@ { "type": "shotgrid_url-enum", "key": "shotgrid_server", - "label": "Shotgrid Server" + "label": "Shotgrid Server", + "multiselection": false }, { "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json b/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json index e75e2887db..184fc657be 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json @@ -349,6 +349,10 @@ "type": "schema_template", "name": "template_validate_plugin", "template_data": [ + { + "key": "CollectFrameDataFromAssetEntity", + "label": "Collect frame range from asset entity" + }, { "key": "ValidateFrameRange", "label": "Validate frame range" diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json b/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json index 45fc13bdde..e9255f426e 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json @@ -273,18 +273,6 @@ "key": "review_bg", "label": "Review BG color", "use_alpha": false - }, - { - "type": "enum", - "key": "families_to_review", - "label": "Families to review", - "multiselection": true, - "enum_items": [ - {"review": "review"}, - {"renderpass": "renderPass"}, - {"renderlayer": "renderLayer"}, - {"renderscene": "renderScene"} - ] } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json index 1037519f57..2f0bf0a831 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json @@ -105,7 +105,11 @@ }, { "key": "ExtractCamera", - "label": "Extract FBX Camera as FBX" + "label": "Extract Camera as FBX" + }, + { + "key": "ExtractCameraABC", + "label": "Extract Camera as ABC" }, { "key": "ExtractLayout", @@ -174,4 +178,4 @@ ] } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json 
b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json index 3164cfb62d..c7e91fd22d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json @@ -118,6 +118,11 @@ "type": "boolean", "key": "optional", "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" } ] }, @@ -888,142 +893,6 @@ } ] }, - { - "type": "dict", - "collapsible": true, - "key": "IntegrateAssetNew", - "label": "IntegrateAsset (Legacy)", - "is_group": true, - "children": [ - { - "type": "label", - "label": "NOTE: Subset grouping profiles settings were moved to Integrate Subset Group. Please move values there." - }, - { - "type": "list", - "key": "subset_grouping_profiles", - "label": "Subset grouping profiles (DEPRECATED)", - "use_label_wrap": true, - "object_type": { - "type": "dict", - "children": [ - { - "key": "families", - "label": "Families", - "type": "list", - "object_type": "text" - }, - { - "type": "hosts-enum", - "key": "hosts", - "label": "Hosts", - "multiselection": true - }, - { - "key": "task_types", - "label": "Task types", - "type": "task-types-enum" - }, - { - "key": "tasks", - "label": "Task names", - "type": "list", - "object_type": "text" - }, - { - "type": "separator" - }, - { - "type": "text", - "key": "template", - "label": "Template" - } - ] - } - }, - { - "type": "label", - "label": "NOTE: Publish template profiles settings were moved to Tools/Publish/Template name profiles. Please move values there." - }, - { - "type": "list", - "key": "template_name_profiles", - "label": "Template name profiles (DEPRECATED)", - "use_label_wrap": true, - "object_type": { - "type": "dict", - "children": [ - { - "type": "label", - "label": "" - }, - { - "key": "families", - "label": "Families", - "type": "list", - "object_type": "text" - }, - { - "type": "hosts-enum", - "key": "hosts", - "label": "Hosts", - "multiselection": true - }, - { - "key": "task_types", - "label": "Task types", - "type": "task-types-enum" - }, - { - "key": "tasks", - "label": "Task names", - "type": "list", - "object_type": "text" - }, - { - "type": "separator" - }, - { - "type": "text", - "key": "template_name", - "label": "Template name" - } - ] - } - } - ] - }, - { - "type": "dict", - "collapsible": true, - "key": "IntegrateAsset", - "label": "Integrate Asset", - "is_group": true, - "children": [ - { - "type": "list", - "key": "skip_host_families", - "label": "Skip hosts and families", - "use_label_wrap": true, - "object_type": { - "type": "dict", - "children": [ - { - "type": "hosts-enum", - "key": "host", - "label": "Host" - }, - { - "type": "list", - "key": "families", - "label": "Families", - "object_type": "text" - } - ] - } - } - ] - }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json index 85ec482e73..23fc7c9351 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json @@ -320,10 +320,6 @@ "key": "publish", "label": "Publish", "children": [ - { - "type": "label", - "label": "NOTE: For backwards compatibility can be value empty and in that case are used values from IntegrateAssetNew. 
This will change in future so please move all values here as soon as possible." - }, { "type": "list", "key": "template_name_profiles", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json index 83e0cf789a..799bc0e81a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_create.json @@ -18,8 +18,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -39,51 +39,51 @@ ] }, - { - "type": "schema_template", - "name": "template_create_plugin", - "template_data": [ - { - "key": "CreateAlembicCamera", - "label": "Create Alembic Camera" - }, - { - "key": "CreateCompositeSequence", - "label": "Create Composite (Image Sequence)" - }, - { - "key": "CreatePointCache", - "label": "Create Point Cache" - }, - { - "key": "CreateRedshiftROP", - "label": "Create Redshift ROP" - }, - { - "key": "CreateRemotePublish", - "label": "Create Remote Publish" - }, - { - "key": "CreateVDBCache", - "label": "Create VDB Cache" - }, - { - "key": "CreateUSD", - "label": "Create USD" - }, - { - "key": "CreateUSDModel", - "label": "Create USD Model" - }, - { - "key": "USDCreateShadingWorkspace", - "label": "Create USD Shading Workspace" - }, - { - "key": "CreateUSDRender", - "label": "Create USD Render" - } - ] - } + { + "type": "schema_template", + "name": "template_create_plugin", + "template_data": [ + { + "key": "CreateAlembicCamera", + "label": "Create Alembic Camera" + }, + { + "key": "CreateCompositeSequence", + "label": "Create Composite (Image Sequence)" + }, + { + "key": "CreatePointCache", + "label": "Create Point Cache" + }, + { + "key": "CreateRedshiftROP", + "label": "Create Redshift ROP" + }, + { + "key": "CreateRemotePublish", + "label": "Create Remote Publish" + }, + { + "key": "CreateVDBCache", + "label": "Create VDB Cache" + }, + { + "key": "CreateUSD", + "label": "Create USD" + }, + { + "key": "CreateUSDModel", + "label": "Create USD Model" + }, + { + "key": "USDCreateShadingWorkspace", + "label": "Create USD Shading Workspace" + }, + { + "key": "CreateUSDRender", + "label": "Create USD Render" + } + ] + } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json index a8b76a0331..b56e381c1d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json @@ -28,15 +28,21 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] }, - { - "type": "schema", - "name": "schema_maya_create_render" + { + "type": "schema_template", + "name": "template_create_plugin", + "template_data": [ + { + "key": "CreateRender", + "label": "Create Render" + } + ] }, { "type": "dict", @@ -52,8 +58,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -84,8 +90,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": 
"Default Variants", "object_type": "text" }, { @@ -120,12 +126,10 @@ "collapsible": true, "key": "CreateAnimation", "label": "Create Animation", - "checkbox_key": "enabled", "children": [ { - "type": "boolean", - "key": "enabled", - "label": "Enabled" + "type": "label", + "label": "This plugin is not optional due to implicit creation through loading the \"rig\" family.\nThis family is also hidden from creation due to complexity in setup." }, { "type": "boolean", @@ -149,8 +153,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -179,8 +183,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -214,8 +218,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -244,8 +248,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] @@ -264,8 +268,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -289,8 +293,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" }, { @@ -318,52 +322,52 @@ { "type": "boolean", "key": "maskOptions", - "label": "Mask Options" + "label": "Export Options" }, { "type": "boolean", "key": "maskCamera", - "label": "Mask Camera" + "label": "Export Cameras" }, { "type": "boolean", "key": "maskLight", - "label": "Mask Light" + "label": "Export Lights" }, { "type": "boolean", "key": "maskShape", - "label": "Mask Shape" + "label": "Export Shapes" }, { "type": "boolean", "key": "maskShader", - "label": "Mask Shader" + "label": "Export Shaders" }, { "type": "boolean", "key": "maskOverride", - "label": "Mask Override" + "label": "Export Override Nodes" }, { "type": "boolean", "key": "maskDriver", - "label": "Mask Driver" + "label": "Export Drivers" }, { "type": "boolean", "key": "maskFilter", - "label": "Mask Filter" - }, - { - "type": "boolean", - "key": "maskColor_manager", - "label": "Mask Color Manager" + "label": "Export Filters" }, { "type": "boolean", "key": "maskOperator", - "label": "Mask Operator" + "label": "Export Operators" + }, + { + "type": "boolean", + "key": "maskColor_manager", + "label": "Export Color Managers" } ] }, @@ -391,8 +395,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json deleted file mode 100644 index 68ad7ad63d..0000000000 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create_render.json +++ /dev/null @@ -1,20 +0,0 @@ -{ - "type": "dict", - "collapsible": true, - "key": "CreateRender", - "label": "Create Render", - "checkbox_key": "enabled", - "children": [ - { - "type": "boolean", - "key": "enabled", - "label": "Enabled" - }, - { - "type": "list", - "key": "defaults", - "label": "Default Subsets", - "object_type": "text" - } - ] -} \ No newline at end of file diff 
--git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json index 4b6b97ab4e..e73d39c06d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json @@ -121,6 +121,28 @@ "label": "Display Handle On Load References" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "import_loader", + "label": "Import Loader", + "children": [ + { + "type": "text", + "label": "Namespace", + "key": "namespace" + }, + { + "type": "text", + "label": "Group name", + "key": "group_name" + }, + { + "type": "label", + "label": "Here's a link to the doc where you can find explanations about customing the naming of referenced assets: https://openpype.io/docs/admin_hosts_maya#load-plugins" + } + ] } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 07c8d8715b..b115ee3faa 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -103,7 +103,7 @@ }, { "key": "exclude_families", - "label": "Families", + "label": "Exclude Families", "type": "list", "object_type": "text" } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index 3019c9b1b5..f006392bef 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -308,26 +308,6 @@ { "type": "separator" }, - { - "type": "label", - "label": "Currently we are supporting also multiple reposition nodes.
Older single reformat node is still supported and if it is activated then preference will be on it. If you want to use multiple reformat nodes then you need to disable single reformat
node and enable multiple Reformat nodes here." - }, - { - "type": "boolean", - "key": "reformat_node_add", - "label": "Add Reformat Node", - "default": false - }, - { - "type": "schema_template", - "name": "template_nuke_knob_inputs", - "template_data": [ - { - "label": "Reformat Node Knobs", - "key": "reformat_node_config" - } - ] - }, { "key": "reformat_nodes_config", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json index 14d15e7840..3d2ed9f3d4 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_create_plugin.json @@ -13,8 +13,8 @@ }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json index c9dee8681a..51c78ce8f0 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_knob_inputs.json @@ -213,7 +213,7 @@ }, { "type": "number", - "key": "y", + "key": "z", "default": 1, "decimal": 4, "maximum": 99999999 @@ -238,29 +238,75 @@ "object_types": [ { "type": "number", - "key": "x", + "key": "r", "default": 1, "decimal": 4, "maximum": 99999999 }, { "type": "number", - "key": "x", + "key": "g", "default": 1, "decimal": 4, "maximum": 99999999 }, + { + "type": "number", + "key": "b", + "default": 1, + "decimal": 4, + "maximum": 99999999 + }, + { + "type": "number", + "key": "a", + "default": 1, + "decimal": 4, + "maximum": 99999999 + } + ] + } + ] + }, + { + "key": "box", + "label": "Box", + "children": [ + { + "type": "text", + "key": "name", + "label": "Name" + }, + { + "type": "list-strict", + "key": "value", + "label": "Value", + "object_types": [ + { + "type": "number", + "key": "x", + "default": 0, + "decimal": 4, + "maximum": 99999999 + }, { "type": "number", "key": "y", - "default": 1, + "default": 0, "decimal": 4, "maximum": 99999999 }, { "type": "number", - "key": "y", - "default": 1, + "key": "r", + "default": 1920, + "decimal": 4, + "maximum": 99999999 + }, + { + "type": "number", + "key": "t", + "default": 1080, "decimal": 4, "maximum": 99999999 } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json index 8be48e669d..3a34858f4e 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/template_nuke_write_attrs.json @@ -13,6 +13,12 @@ }, { "use_range_limit": "Use range limit" + }, + { + "ordered": "Defined order" + }, + { + "channels": "Channels override" } ] } diff --git a/openpype/settings/entities/schemas/system_schema/schema_general.json b/openpype/settings/entities/schemas/system_schema/schema_general.json index d6c22fe54c..2609441061 100644 --- a/openpype/settings/entities/schemas/system_schema/schema_general.json +++ b/openpype/settings/entities/schemas/system_schema/schema_general.json @@ -128,8 +128,12 @@ { "type": "collapsible-wrap", 
"label": "OpenPype deployment control", - "collapsible": false, + "collapsible": true, "children": [ + { + "type": "label", + "label": "Define location accessible by artist machine to check for zip updates with Openpype code." + }, { "type": "path", "key": "openpype_path", @@ -138,6 +142,18 @@ "multipath": true, "require_restart": true }, + { + "type": "label", + "label": "Define custom location for artist machine where to unzip versions of Openpype code. By default it is in user app data folder." + }, + { + "type": "path", + "key": "local_openpype_path", + "label": "Custom Local Versions Folder", + "multiplatform": true, + "multipath": false, + "require_restart": true + }, { "type": "splitter" }, diff --git a/openpype/settings/handlers.py b/openpype/settings/handlers.py index a1f3331ccc..671cabfbc2 100644 --- a/openpype/settings/handlers.py +++ b/openpype/settings/handlers.py @@ -7,10 +7,14 @@ from abc import ABCMeta, abstractmethod import six import openpype.version -from openpype.client.mongo import OpenPypeMongoConnection -from openpype.client.entities import get_project_connection, get_project +from openpype.client.mongo import ( + OpenPypeMongoConnection, + get_project_connection, +) +from openpype.client.entities import get_project from openpype.lib.pype_info import get_workstation_info + from .constants import ( GLOBAL_SETTINGS_KEY, SYSTEM_SETTINGS_KEY, @@ -185,6 +189,7 @@ class SettingsStateInfo: class SettingsHandler(object): global_keys = { "openpype_path", + "local_openpype_path", "admin_password", "log_to_server", "disk_mapping", @@ -1798,10 +1803,7 @@ class MongoLocalSettingsHandler(LocalSettingsHandler): def __init__(self, local_site_id=None): # Get mongo connection - from openpype.lib import ( - OpenPypeMongoConnection, - get_local_site_id - ) + from openpype.lib import get_local_site_id if local_site_id is None: local_site_id = get_local_site_id() diff --git a/openpype/settings/lib.py b/openpype/settings/lib.py index 73554df236..ce62dde43f 100644 --- a/openpype/settings/lib.py +++ b/openpype/settings/lib.py @@ -4,6 +4,9 @@ import functools import logging import platform import copy + +from openpype import AYON_SERVER_ENABLED + from .exceptions import ( SaveWarningExc ) @@ -18,6 +21,11 @@ from .constants import ( DEFAULT_PROJECT_KEY ) +from .ayon_settings import ( + get_ayon_project_settings, + get_ayon_system_settings +) + log = logging.getLogger(__name__) # Py2 + Py3 json decode exception @@ -40,36 +48,17 @@ _SETTINGS_HANDLER = None _LOCAL_SETTINGS_HANDLER = None -def require_handler(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - global _SETTINGS_HANDLER - if _SETTINGS_HANDLER is None: - _SETTINGS_HANDLER = create_settings_handler() - return func(*args, **kwargs) - return wrapper - - -def require_local_handler(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - global _LOCAL_SETTINGS_HANDLER - if _LOCAL_SETTINGS_HANDLER is None: - _LOCAL_SETTINGS_HANDLER = create_local_settings_handler() - return func(*args, **kwargs) - return wrapper - - -def create_settings_handler(): - from .handlers import MongoSettingsHandler - # Handler can't be created in global space on initialization but only when - # needed. Plus here may be logic: Which handler is used (in future). 
- return MongoSettingsHandler() - - -def create_local_settings_handler(): - from .handlers import MongoLocalSettingsHandler - return MongoLocalSettingsHandler() +def clear_metadata_from_settings(values): + """Remove all metadata keys from loaded settings.""" + if isinstance(values, dict): + for key in tuple(values.keys()): + if key in METADATA_KEYS: + values.pop(key) + else: + clear_metadata_from_settings(values[key]) + elif isinstance(values, list): + for item in values: + clear_metadata_from_settings(item) def calculate_changes(old_value, new_value): @@ -91,6 +80,42 @@ def calculate_changes(old_value, new_value): return changes +def create_settings_handler(): + if AYON_SERVER_ENABLED: + raise RuntimeError("Mongo settings handler was triggered in AYON mode") + from .handlers import MongoSettingsHandler + # Handler can't be created in global space on initialization but only when + # needed. Plus here may be logic: Which handler is used (in future). + return MongoSettingsHandler() + + +def create_local_settings_handler(): + if AYON_SERVER_ENABLED: + raise RuntimeError("Mongo settings handler was triggered in AYON mode") + from .handlers import MongoLocalSettingsHandler + return MongoLocalSettingsHandler() + + +def require_handler(func): + @functools.wraps(func) + def wrapper(*args, **kwargs): + global _SETTINGS_HANDLER + if _SETTINGS_HANDLER is None: + _SETTINGS_HANDLER = create_settings_handler() + return func(*args, **kwargs) + return wrapper + + +def require_local_handler(func): + @functools.wraps(func) + def wrapper(*args, **kwargs): + global _LOCAL_SETTINGS_HANDLER + if _LOCAL_SETTINGS_HANDLER is None: + _LOCAL_SETTINGS_HANDLER = create_local_settings_handler() + return func(*args, **kwargs) + return wrapper + + @require_handler def get_system_last_saved_info(): return _SETTINGS_HANDLER.get_system_last_saved_info() @@ -494,10 +519,17 @@ def save_local_settings(data): @require_local_handler -def get_local_settings(): +def _get_local_settings(): return _LOCAL_SETTINGS_HANDLER.get_local_settings() +def get_local_settings(): + if not AYON_SERVER_ENABLED: + return _get_local_settings() + # TODO implement ayon implementation + return {} + + def load_openpype_default_settings(): """Load openpype default settings.""" return load_jsons_from_dir(DEFAULTS_DIR) @@ -890,7 +922,7 @@ def apply_local_settings_on_project_settings( sync_server_config["remote_site"] = remote_site -def get_system_settings(clear_metadata=True, exclude_locals=None): +def _get_system_settings(clear_metadata=True, exclude_locals=None): """System settings with applied studio overrides.""" default_values = get_default_settings()[SYSTEM_SETTINGS_KEY] studio_values = get_studio_system_settings_overrides() @@ -992,7 +1024,7 @@ def get_anatomy_settings( return result -def get_project_settings( +def _get_project_settings( project_name, clear_metadata=True, exclude_locals=None ): """Project settings with applied studio and project overrides.""" @@ -1043,7 +1075,7 @@ def get_current_project_settings(): @require_handler -def get_global_settings(): +def _get_global_settings(): default_settings = load_openpype_default_settings() default_values = default_settings["system_settings"]["general"] studio_values = _SETTINGS_HANDLER.get_global_settings() @@ -1053,7 +1085,14 @@ def get_global_settings(): } -def get_general_environments(): +def get_global_settings(): + if not AYON_SERVER_ENABLED: + return _get_global_settings() + default_settings = load_openpype_default_settings() + return default_settings["system_settings"]["general"] + + +def 
_get_general_environments(): """Get general environments. Function is implemented to be able load general environments without using @@ -1082,14 +1121,24 @@ def get_general_environments(): return environments -def clear_metadata_from_settings(values): - """Remove all metadata keys from loaded settings.""" - if isinstance(values, dict): - for key in tuple(values.keys()): - if key in METADATA_KEYS: - values.pop(key) - else: - clear_metadata_from_settings(values[key]) - elif isinstance(values, list): - for item in values: - clear_metadata_from_settings(item) +def get_general_environments(): + if not AYON_SERVER_ENABLED: + return _get_general_environments() + value = get_system_settings() + return value["general"]["environment"] + + +def get_system_settings(*args, **kwargs): + if not AYON_SERVER_ENABLED: + return _get_system_settings(*args, **kwargs) + + default_settings = get_default_settings()[SYSTEM_SETTINGS_KEY] + return get_ayon_system_settings(default_settings) + + +def get_project_settings(project_name, *args, **kwargs): + if not AYON_SERVER_ENABLED: + return _get_project_settings(project_name, *args, **kwargs) + + default_settings = get_default_settings()[PROJECT_SETTINGS_KEY] + return get_ayon_project_settings(default_settings, project_name) diff --git a/openpype/tests/lib.py b/openpype/tests/lib.py index 1fa5fb8054..c7d4423aba 100644 --- a/openpype/tests/lib.py +++ b/openpype/tests/lib.py @@ -5,7 +5,6 @@ import tempfile import contextlib import pyblish -import pyblish.cli import pyblish.plugin from pyblish.vendor import six diff --git a/openpype/tools/adobe_webserver/app.py b/openpype/tools/adobe_webserver/app.py index 3911baf7ac..49d61d3883 100644 --- a/openpype/tools/adobe_webserver/app.py +++ b/openpype/tools/adobe_webserver/app.py @@ -16,7 +16,7 @@ from wsrpc_aiohttp import ( WSRPCClient ) -from openpype.pipeline import legacy_io +from openpype.pipeline import get_global_context log = logging.getLogger(__name__) @@ -80,9 +80,10 @@ class WebServerTool: loop=asyncio.get_event_loop()) await client.connect() - project = legacy_io.Session["AVALON_PROJECT"] - asset = legacy_io.Session["AVALON_ASSET"] - task = legacy_io.Session["AVALON_TASK"] + context = get_global_context() + project = context["project_name"] + asset = context["asset_name"] + task = context["task_name"] log.info("Sending context change to {}-{}-{}".format(project, asset, task)) diff --git a/openpype/tools/attribute_defs/widgets.py b/openpype/tools/attribute_defs/widgets.py index d46c238da1..7967416e9f 100644 --- a/openpype/tools/attribute_defs/widgets.py +++ b/openpype/tools/attribute_defs/widgets.py @@ -343,6 +343,7 @@ class TextAttrWidget(_BaseAttrDefWidget): return self._input_widget.text() def set_value(self, value, multivalue=False): + block_signals = False if multivalue: set_value = set(value) if None in set_value: @@ -352,13 +353,18 @@ class TextAttrWidget(_BaseAttrDefWidget): if len(set_value) == 1: value = tuple(set_value)[0] else: + block_signals = True value = "< Multiselection >" if value != self.current_value(): + if block_signals: + self._input_widget.blockSignals(True) if self.multiline: self._input_widget.setPlainText(value) else: self._input_widget.setText(value) + if block_signals: + self._input_widget.blockSignals(False) class BoolAttrWidget(_BaseAttrDefWidget): @@ -391,7 +397,9 @@ class BoolAttrWidget(_BaseAttrDefWidget): set_value.add(self.attr_def.default) if len(set_value) > 1: + self._input_widget.blockSignals(True) self._input_widget.setCheckState(QtCore.Qt.PartiallyChecked) + 
self._input_widget.blockSignals(False) return value = tuple(set_value)[0] diff --git a/openpype/tools/context_dialog/window.py b/openpype/tools/context_dialog/window.py index 86c53b55c5..4fe41c9949 100644 --- a/openpype/tools/context_dialog/window.py +++ b/openpype/tools/context_dialog/window.py @@ -5,7 +5,7 @@ from qtpy import QtWidgets, QtCore, QtGui from openpype import style from openpype.pipeline import AvalonMongoDB -from openpype.tools.utils.lib import center_window +from openpype.tools.utils.lib import center_window, get_openpype_qt_app from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget from openpype.tools.utils.constants import ( PROJECT_NAME_ROLE @@ -376,9 +376,7 @@ def main( strict=True ): # Run Qt application - app = QtWidgets.QApplication.instance() - if app is None: - app = QtWidgets.QApplication([]) + app = get_openpype_qt_app() window = ContextDialog() window.set_strict(strict) window.set_context(project_name, asset_name) diff --git a/openpype/tools/creator/widgets.py b/openpype/tools/creator/widgets.py index 74f75811ff..0ebbd905e5 100644 --- a/openpype/tools/creator/widgets.py +++ b/openpype/tools/creator/widgets.py @@ -5,6 +5,7 @@ from qtpy import QtWidgets, QtCore, QtGui import qtawesome +from openpype import AYON_SERVER_ENABLED from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS from openpype.tools.utils import ErrorMessageBox @@ -42,10 +43,13 @@ class CreateErrorMessageBox(ErrorMessageBox): def _get_report_data(self): report_message = ( - "Failed to create Subset: \"{subset}\" Family: \"{family}\"" + "Failed to create {subset_label}: \"{subset}\"" + " {family_label}: \"{family}\"" " in Asset: \"{asset}\"" "\n\nError: {message}" ).format( + subset_label="Product" if AYON_SERVER_ENABLED else "Subset", + family_label="Type" if AYON_SERVER_ENABLED else "Family", subset=self._subset_name, family=self._family, asset=self._asset_name, @@ -57,9 +61,13 @@ class CreateErrorMessageBox(ErrorMessageBox): def _create_content(self, content_layout): item_name_template = ( - "Family: {}
" - "Subset: {}
" - "Asset: {}
" + "{}: {{}}
" + "{}: {{}}
" + "{}: {{}}
" + ).format( + "Product type" if AYON_SERVER_ENABLED else "Family", + "Product name" if AYON_SERVER_ENABLED else "Subset", + "Folder" if AYON_SERVER_ENABLED else "Asset" ) exc_msg_template = "{}" @@ -151,15 +159,21 @@ class VariantLineEdit(QtWidgets.QLineEdit): def as_empty(self): self._set_border("empty") - self.report.emit("Empty subset name ..") + self.report.emit("Empty {} name ..".format( + "product" if AYON_SERVER_ENABLED else "subset" + )) def as_exists(self): self._set_border("exists") - self.report.emit("Existing subset, appending next version.") + self.report.emit("Existing {}, appending next version.".format( + "product" if AYON_SERVER_ENABLED else "subset" + )) def as_new(self): self._set_border("new") - self.report.emit("New subset, creating first version.") + self.report.emit("New {}, creating first version.".format( + "product" if AYON_SERVER_ENABLED else "subset" + )) def _set_border(self, status): qcolor, style = self.colors[status] diff --git a/openpype/tools/creator/window.py b/openpype/tools/creator/window.py index 57e2c49576..47f27a262a 100644 --- a/openpype/tools/creator/window.py +++ b/openpype/tools/creator/window.py @@ -8,7 +8,11 @@ from openpype.client import get_asset_by_name, get_subsets from openpype import style from openpype.settings import get_current_project_settings from openpype.tools.utils.lib import qt_app_context -from openpype.pipeline import legacy_io +from openpype.pipeline import ( + get_current_project_name, + get_current_asset_name, + get_current_task_name, +) from openpype.pipeline.create import ( SUBSET_NAME_ALLOWED_SYMBOLS, legacy_create, @@ -216,7 +220,7 @@ class CreatorWindow(QtWidgets.QDialog): self._set_valid_state(False) return - project_name = legacy_io.active_project() + project_name = get_current_project_name() asset_doc = None if creator_plugin: # Get the asset from the database which match with the name @@ -237,7 +241,7 @@ class CreatorWindow(QtWidgets.QDialog): return asset_id = asset_doc["_id"] - task_name = legacy_io.Session["AVALON_TASK"] + task_name = get_current_task_name() # Calculate subset name with Creator plugin subset_name = creator_plugin.get_subset_name( @@ -369,7 +373,7 @@ class CreatorWindow(QtWidgets.QDialog): self.setStyleSheet(style.load_stylesheet()) def refresh(self): - self._asset_name_input.setText(legacy_io.Session["AVALON_ASSET"]) + self._asset_name_input.setText(get_current_asset_name()) self._creators_model.reset() @@ -382,7 +386,7 @@ class CreatorWindow(QtWidgets.QDialog): ) current_index = None family = None - task_name = legacy_io.Session.get("AVALON_TASK", None) + task_name = get_current_task_name() or None lowered_task_name = task_name.lower() if task_name: for _family, _task_names in pype_project_setting.items(): diff --git a/openpype/tools/libraryloader/app.py b/openpype/tools/libraryloader/app.py index bd10595333..e68e9a5931 100644 --- a/openpype/tools/libraryloader/app.py +++ b/openpype/tools/libraryloader/app.py @@ -114,9 +114,10 @@ class LibraryLoaderWindow(QtWidgets.QDialog): manager = ModulesManager() sync_server = manager.modules_by_name.get("sync_server") - sync_server_enabled = False - if sync_server is not None: - sync_server_enabled = sync_server.enabled + sync_server_enabled = ( + sync_server is not None + and sync_server.enabled + ) repres_widget = None if sync_server_enabled: diff --git a/openpype/tools/loader/app.py b/openpype/tools/loader/app.py index 302fe6c366..b305233247 100644 --- a/openpype/tools/loader/app.py +++ b/openpype/tools/loader/app.py @@ -223,7 +223,7 @@ class 
LoaderWindow(QtWidgets.QDialog): lib.schedule(self._refresh, 50, channel="mongo") def on_assetschanged(self, *args): - self.echo("Fetching asset..") + self.echo("Fetching hierarchy..") lib.schedule(self._assetschanged, 50, channel="mongo") def on_subsetschanged(self, *args): diff --git a/openpype/tools/loader/model.py b/openpype/tools/loader/model.py index e58e02f89a..69b7e593b1 100644 --- a/openpype/tools/loader/model.py +++ b/openpype/tools/loader/model.py @@ -7,6 +7,7 @@ from uuid import uuid4 from qtpy import QtCore, QtGui import qtawesome +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_assets, get_subsets, @@ -63,6 +64,7 @@ class BaseRepresentationModel(object): """Sets/Resets sync server vars after every change (refresh.)""" repre_icons = {} sync_server = None + sync_server_enabled = False active_site = active_provider = None remote_site = remote_provider = None @@ -74,6 +76,7 @@ class BaseRepresentationModel(object): if not project_name: self.repre_icons = repre_icons self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = active_site self.active_provider = active_provider self.remote_site = remote_site @@ -99,8 +102,13 @@ class BaseRepresentationModel(object): self._modules_manager = ModulesManager() self._last_manager_cache = now_time - sync_server = self._modules_manager.modules_by_name["sync_server"] - if sync_server.is_project_enabled(project_name, single=True): + sync_server = self._modules_manager.modules_by_name.get("sync_server") + if ( + sync_server is not None + and sync_server.enabled + and sync_server.is_project_enabled(project_name, single=True) + ): + sync_server_enabled = True active_site = sync_server.get_active_site(project_name) active_provider = sync_server.get_provider_for_site( project_name, active_site) @@ -117,6 +125,7 @@ class BaseRepresentationModel(object): self.repre_icons = repre_icons self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = active_site self.active_provider = active_provider self.remote_site = remote_site @@ -143,9 +152,9 @@ class SubsetsModel(BaseRepresentationModel, TreeModel): ] column_labels_mapping = { - "subset": "Subset", - "asset": "Asset", - "family": "Family", + "subset": "Product" if AYON_SERVER_ENABLED else "Subset", + "asset": "Folder" if AYON_SERVER_ENABLED else "Asset", + "family": "Product type" if AYON_SERVER_ENABLED else "Family", "version": "Version", "time": "Time", "author": "Author", @@ -212,6 +221,7 @@ class SubsetsModel(BaseRepresentationModel, TreeModel): self.repre_icons = {} self.sync_server = None + self.sync_server_enabled = False self.active_site = self.active_provider = None self.columns_index = dict( @@ -281,7 +291,7 @@ class SubsetsModel(BaseRepresentationModel, TreeModel): ) # update availability on active site when version changes - if self.sync_server.enabled and version_doc: + if self.sync_server_enabled and version_doc: repres_info = list( self.sync_server.get_repre_info_for_versions( project_name, @@ -506,7 +516,7 @@ class SubsetsModel(BaseRepresentationModel, TreeModel): return repre_info_by_version_id = {} - if self.sync_server.enabled: + if self.sync_server_enabled: versions_by_id = {} for _subset_id, doc in last_versions_by_subset_id.items(): versions_by_id[doc["_id"]] = doc @@ -1032,12 +1042,16 @@ class RepresentationModel(TreeModel, BaseRepresentationModel): self._version_ids = [] manager = ModulesManager() - sync_server = active_site = remote_site = None + active_site = remote_site 
= None active_provider = remote_provider = None + sync_server = manager.modules_by_name.get("sync_server") + sync_server_enabled = ( + sync_server is not None + and sync_server.enabled + ) project_name = dbcon.current_project() - if project_name: - sync_server = manager.modules_by_name["sync_server"] + if sync_server_enabled and project_name: active_site = sync_server.get_active_site(project_name) remote_site = sync_server.get_remote_site(project_name) @@ -1056,6 +1070,7 @@ class RepresentationModel(TreeModel, BaseRepresentationModel): remote_provider = 'studio' self.sync_server = sync_server + self.sync_server_enabled = sync_server_enabled self.active_site = active_site self.active_provider = active_provider self.remote_site = remote_site @@ -1173,9 +1188,15 @@ class RepresentationModel(TreeModel, BaseRepresentationModel): repre_groups_items[doc["name"]] = 0 group = group_item - progress = lib.get_progress_for_repre( - doc, self.active_site, self.remote_site - ) + progress = { + self.active_site: 0, + self.remote_site: 0, + } + if self.sync_server_enabled: + progress = self.sync_server.get_progress_for_repre( + doc, + self.active_site, + self.remote_site) active_site_icon = self._icons.get(self.active_provider) remote_site_icon = self._icons.get(self.remote_provider) diff --git a/openpype/tools/loader/widgets.py b/openpype/tools/loader/widgets.py index b3aa381d14..5dd3af08d6 100644 --- a/openpype/tools/loader/widgets.py +++ b/openpype/tools/loader/widgets.py @@ -886,7 +886,9 @@ class ThumbnailWidget(QtWidgets.QLabel): self.set_pixmap() return - thumbnail_ent = get_thumbnail(project_name, thumbnail_id) + thumbnail_ent = get_thumbnail( + project_name, thumbnail_id, src_type, src_id + ) if not thumbnail_ent: return diff --git a/openpype/tools/project_manager/project_manager/model.py b/openpype/tools/project_manager/project_manager/model.py index 29a26f700f..f6c98d6f6c 100644 --- a/openpype/tools/project_manager/project_manager/model.py +++ b/openpype/tools/project_manager/project_manager/model.py @@ -84,6 +84,13 @@ class ProjectProxyFilter(QtCore.QSortFilterProxyModel): super(ProjectProxyFilter, self).__init__(*args, **kwargs) self._filter_default = False + def lessThan(self, left, right): + if left.data(PROJECT_NAME_ROLE) is None: + return True + if right.data(PROJECT_NAME_ROLE) is None: + return False + return super(ProjectProxyFilter, self).lessThan(left, right) + def set_filter_default(self, enabled=True): """Set if filtering of default item is enabled.""" if enabled == self._filter_default: diff --git a/openpype/tools/project_manager/project_manager/multiselection_combobox.py b/openpype/tools/project_manager/project_manager/multiselection_combobox.py index 4b5d468982..4100ada221 100644 --- a/openpype/tools/project_manager/project_manager/multiselection_combobox.py +++ b/openpype/tools/project_manager/project_manager/multiselection_combobox.py @@ -1,6 +1,14 @@ from qtpy import QtCore, QtWidgets -from openpype.tools.utils.lib import checkstate_int_to_enum +from openpype.tools.utils.lib import ( + checkstate_int_to_enum, + checkstate_enum_to_int, +) +from openpype.tools.utils.constants import ( + CHECKED_INT, + UNCHECKED_INT, + ITEM_IS_USER_TRISTATE, +) class ComboItemDelegate(QtWidgets.QStyledItemDelegate): @@ -107,9 +115,9 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): return if state == QtCore.Qt.Unchecked: - new_state = QtCore.Qt.Checked + new_state = CHECKED_INT else: - new_state = QtCore.Qt.Unchecked + new_state = UNCHECKED_INT elif event.type() == QtCore.QEvent.KeyPress: # 
TODO: handle QtCore.Qt.Key_Enter, Key_Return? @@ -117,15 +125,15 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): # toggle the current items check state if ( index_flags & QtCore.Qt.ItemIsUserCheckable - and index_flags & QtCore.Qt.ItemIsTristate + and index_flags & ITEM_IS_USER_TRISTATE ): - new_state = QtCore.Qt.CheckState((int(state) + 1) % 3) + new_state = (checkstate_enum_to_int(state) + 1) % 3 elif index_flags & QtCore.Qt.ItemIsUserCheckable: if state != QtCore.Qt.Checked: - new_state = QtCore.Qt.Checked + new_state = CHECKED_INT else: - new_state = QtCore.Qt.Unchecked + new_state = UNCHECKED_INT if new_state is not None: model.setData(current_index, new_state, QtCore.Qt.CheckStateRole) @@ -180,9 +188,9 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): for idx in range(self.count()): value = self.itemData(idx, role=QtCore.Qt.UserRole) if value in values: - check_state = QtCore.Qt.Checked + check_state = CHECKED_INT else: - check_state = QtCore.Qt.Unchecked + check_state = UNCHECKED_INT self.setItemData(idx, check_state, QtCore.Qt.CheckStateRole) def value(self): diff --git a/openpype/tools/publisher/widgets/assets_widget.py b/openpype/tools/publisher/widgets/assets_widget.py index a750d8d540..c536f93c9b 100644 --- a/openpype/tools/publisher/widgets/assets_widget.py +++ b/openpype/tools/publisher/widgets/assets_widget.py @@ -2,6 +2,7 @@ import collections from qtpy import QtWidgets, QtCore, QtGui +from openpype import AYON_SERVER_ENABLED from openpype.tools.utils import ( PlaceholderLineEdit, RecursiveSortFilterProxyModel, @@ -187,7 +188,8 @@ class AssetsDialog(QtWidgets.QDialog): proxy_model.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive) filter_input = PlaceholderLineEdit(self) - filter_input.setPlaceholderText("Filter assets..") + filter_input.setPlaceholderText("Filter {}..".format( + "folders" if AYON_SERVER_ENABLED else "assets")) asset_view = AssetDialogView(self) asset_view.setModel(proxy_model) diff --git a/openpype/tools/publisher/widgets/create_widget.py b/openpype/tools/publisher/widgets/create_widget.py index b7605b1188..64fed1d70c 100644 --- a/openpype/tools/publisher/widgets/create_widget.py +++ b/openpype/tools/publisher/widgets/create_widget.py @@ -2,9 +2,11 @@ import re from qtpy import QtWidgets, QtCore, QtGui +from openpype import AYON_SERVER_ENABLED from openpype.pipeline.create import ( SUBSET_NAME_ALLOWED_SYMBOLS, PRE_CREATE_THUMBNAIL_KEY, + DEFAULT_VARIANT_VALUE, TaskNotSetError, ) @@ -203,7 +205,9 @@ class CreateWidget(QtWidgets.QWidget): variant_subset_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING) variant_subset_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING) variant_subset_layout.addRow("Variant", variant_widget) - variant_subset_layout.addRow("Subset", subset_name_input) + variant_subset_layout.addRow( + "Product" if AYON_SERVER_ENABLED else "Subset", + subset_name_input) creator_basics_layout = QtWidgets.QVBoxLayout(creator_basics_widget) creator_basics_layout.setContentsMargins(0, 0, 0, 0) @@ -623,7 +627,7 @@ class CreateWidget(QtWidgets.QWidget): default_variants = creator_item.default_variants if not default_variants: - default_variants = ["Main"] + default_variants = [DEFAULT_VARIANT_VALUE] default_variant = creator_item.default_variant if not default_variant: @@ -639,7 +643,7 @@ class CreateWidget(QtWidgets.QWidget): elif variant: self.variant_hints_menu.addAction(variant) - variant_text = default_variant or "Main" + variant_text = default_variant or DEFAULT_VARIANT_VALUE # Make sure subset name is updated to new plugin if 
variant_text == self.variant_input.text(): self._on_variant_change() diff --git a/openpype/tools/publisher/widgets/images/browse.png b/openpype/tools/publisher/widgets/images/browse.png new file mode 100644 index 0000000000..b115bb6766 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/browse.png differ diff --git a/openpype/tools/publisher/widgets/images/options.png b/openpype/tools/publisher/widgets/images/options.png new file mode 100644 index 0000000000..b394dbd4ce Binary files /dev/null and b/openpype/tools/publisher/widgets/images/options.png differ diff --git a/openpype/tools/publisher/widgets/images/paste.png b/openpype/tools/publisher/widgets/images/paste.png new file mode 100644 index 0000000000..14a6050da1 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/paste.png differ diff --git a/openpype/tools/publisher/widgets/images/take_screenshot.png b/openpype/tools/publisher/widgets/images/take_screenshot.png new file mode 100644 index 0000000000..242a36a026 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/take_screenshot.png differ diff --git a/openpype/tools/publisher/widgets/overview_widget.py b/openpype/tools/publisher/widgets/overview_widget.py index 25fff73134..778aa1139f 100644 --- a/openpype/tools/publisher/widgets/overview_widget.py +++ b/openpype/tools/publisher/widgets/overview_widget.py @@ -28,12 +28,14 @@ class OverviewWidget(QtWidgets.QFrame): self._refreshing_instances = False self._controller = controller - create_widget = CreateWidget(controller, self) + subset_content_widget = QtWidgets.QWidget(self) + + create_widget = CreateWidget(controller, subset_content_widget) # --- Created Subsets/Instances --- # Common widget for creation and overview subset_views_widget = BorderedLabelWidget( - "Subsets to publish", self + "Subsets to publish", subset_content_widget ) subset_view_cards = InstanceCardView(controller, subset_views_widget) @@ -45,14 +47,14 @@ class OverviewWidget(QtWidgets.QFrame): subset_views_layout.setCurrentWidget(subset_view_cards) # Buttons at the bottom of subset view - create_btn = CreateInstanceBtn(self) - delete_btn = RemoveInstanceBtn(self) - change_view_btn = ChangeViewBtn(self) + create_btn = CreateInstanceBtn(subset_views_widget) + delete_btn = RemoveInstanceBtn(subset_views_widget) + change_view_btn = ChangeViewBtn(subset_views_widget) # --- Overview --- # Subset details widget subset_attributes_wrap = BorderedLabelWidget( - "Publish options", self + "Publish options", subset_content_widget ) subset_attributes_widget = SubsetAttributesWidget( controller, subset_attributes_wrap @@ -81,7 +83,6 @@ class OverviewWidget(QtWidgets.QFrame): subset_views_widget.set_center_widget(subset_view_widget) # Whole subset layout with attributes and details - subset_content_widget = QtWidgets.QWidget(self) subset_content_layout = QtWidgets.QHBoxLayout(subset_content_widget) subset_content_layout.setContentsMargins(0, 0, 0, 0) subset_content_layout.addWidget(create_widget, 7) @@ -161,44 +162,62 @@ class OverviewWidget(QtWidgets.QFrame): self._change_anim = change_anim # Start in create mode - self._create_widget_policy = create_widget.sizePolicy() - self._subset_views_widget_policy = subset_views_widget.sizePolicy() - self._subset_attributes_wrap_policy = ( - subset_attributes_wrap.sizePolicy() - ) - self._max_widget_width = None self._current_state = "create" subset_attributes_wrap.setVisible(False) + def make_sure_animation_is_finished(self): + if self._change_anim.state() == 
QtCore.QAbstractAnimation.Running: + self._change_anim.stop() + self._on_change_anim_finished() + def set_state(self, new_state, animate): if new_state == self._current_state: return self._current_state = new_state - anim_is_running = ( - self._change_anim.state() == QtCore.QAbstractAnimation.Running - ) if not animate: - self._change_visibility_for_state() - if anim_is_running: - self._change_anim.stop() + self.make_sure_animation_is_finished() return - if self._max_widget_width is None: - self._max_widget_width = self._subset_views_widget.maximumWidth() - if new_state == "create": direction = QtCore.QAbstractAnimation.Backward else: direction = QtCore.QAbstractAnimation.Forward self._change_anim.setDirection(direction) - if not anim_is_running: - view_width = self._subset_views_widget.width() - self._subset_views_widget.setMinimumWidth(view_width) - self._subset_views_widget.setMaximumWidth(view_width) + if ( + self._change_anim.state() != QtCore.QAbstractAnimation.Running + ): + self._start_animation() + + def _start_animation(self): + views_geo = self._subset_views_widget.geometry() + layout_spacing = self._subset_content_layout.spacing() + if self._create_widget.isVisible(): + create_geo = self._create_widget.geometry() + subset_geo = QtCore.QRect(create_geo) + subset_geo.moveTop(views_geo.top()) + subset_geo.moveLeft(views_geo.right() + layout_spacing) + self._subset_attributes_wrap.setVisible(True) + + elif self._subset_attributes_wrap.isVisible(): + subset_geo = self._subset_attributes_wrap.geometry() + create_geo = QtCore.QRect(subset_geo) + create_geo.moveTop(views_geo.top()) + create_geo.moveRight(views_geo.left() - (layout_spacing + 1)) + self._create_widget.setVisible(True) + else: self._change_anim.start() + return + + while self._subset_content_layout.count(): + self._subset_content_layout.takeAt(0) + self._subset_views_widget.setGeometry(views_geo) + self._create_widget.setGeometry(create_geo) + self._subset_attributes_wrap.setGeometry(subset_geo) + + self._change_anim.start() def get_subset_views_geo(self): parent = self._subset_views_widget.parent() @@ -281,41 +300,39 @@ class OverviewWidget(QtWidgets.QFrame): def _on_change_anim(self, value): self._create_widget.setVisible(True) self._subset_attributes_wrap.setVisible(True) - width = ( - self._subset_content_widget.width() - - ( - self._subset_views_widget.width() - + (self._subset_content_layout.spacing() * 2) - ) - ) - subset_attrs_width = int((float(width) / self.anim_end_value) * value) - if subset_attrs_width > width: - subset_attrs_width = width + layout_spacing = self._subset_content_layout.spacing() + content_width = ( + self._subset_content_widget.width() - (layout_spacing * 2) + ) + content_height = self._subset_content_widget.height() + views_width = max( + int(content_width * 0.3), + self._subset_views_widget.minimumWidth() + ) + width = content_width - views_width + # Visible widths of other widgets + subset_attrs_width = int((float(width) / self.anim_end_value) * value) create_width = width - subset_attrs_width - self._create_widget.setMinimumWidth(create_width) - self._create_widget.setMaximumWidth(create_width) - self._subset_attributes_wrap.setMinimumWidth(subset_attrs_width) - self._subset_attributes_wrap.setMaximumWidth(subset_attrs_width) + views_geo = QtCore.QRect( + create_width + layout_spacing, 0, + views_width, content_height + ) + create_geo = QtCore.QRect(0, 0, width, content_height) + subset_attrs_geo = QtCore.QRect(create_geo) + create_geo.moveRight(views_geo.left() - (layout_spacing + 1)) + 
subset_attrs_geo.moveLeft(views_geo.right() + layout_spacing) + + self._subset_views_widget.setGeometry(views_geo) + self._create_widget.setGeometry(create_geo) + self._subset_attributes_wrap.setGeometry(subset_attrs_geo) def _on_change_anim_finished(self): self._change_visibility_for_state() - self._create_widget.setMinimumWidth(0) - self._create_widget.setMaximumWidth(self._max_widget_width) - self._subset_attributes_wrap.setMinimumWidth(0) - self._subset_attributes_wrap.setMaximumWidth(self._max_widget_width) - self._subset_views_widget.setMinimumWidth(0) - self._subset_views_widget.setMaximumWidth(self._max_widget_width) - self._create_widget.setSizePolicy( - self._create_widget_policy - ) - self._subset_attributes_wrap.setSizePolicy( - self._subset_attributes_wrap_policy - ) - self._subset_views_widget.setSizePolicy( - self._subset_views_widget_policy - ) + self._subset_content_layout.addWidget(self._create_widget, 7) + self._subset_content_layout.addWidget(self._subset_views_widget, 3) + self._subset_content_layout.addWidget(self._subset_attributes_wrap, 7) def _change_visibility_for_state(self): self._create_widget.setVisible( diff --git a/openpype/tools/publisher/widgets/screenshot_widget.py b/openpype/tools/publisher/widgets/screenshot_widget.py new file mode 100644 index 0000000000..4ccf920571 --- /dev/null +++ b/openpype/tools/publisher/widgets/screenshot_widget.py @@ -0,0 +1,314 @@ +import os +import tempfile + +from qtpy import QtCore, QtGui, QtWidgets + + +class ScreenMarquee(QtWidgets.QDialog): + """Dialog to interactively define screen area. + + This allows to select a screen area through a marquee selection. + + You can use any of its classmethods for easily saving an image, + capturing to QClipboard or returning a QPixmap, respectively + `capture_to_file`, `capture_to_clipboard` and `capture_to_pixmap`. + """ + + def __init__(self, parent=None): + super(ScreenMarquee, self).__init__(parent=parent) + + self.setWindowFlags( + QtCore.Qt.FramelessWindowHint + | QtCore.Qt.WindowStaysOnTopHint + | QtCore.Qt.CustomizeWindowHint + | QtCore.Qt.Tool) + self.setAttribute(QtCore.Qt.WA_TranslucentBackground) + self.setCursor(QtCore.Qt.CrossCursor) + self.setMouseTracking(True) + + fade_anim = QtCore.QVariantAnimation() + fade_anim.setStartValue(0) + fade_anim.setEndValue(50) + fade_anim.setDuration(200) + fade_anim.setEasingCurve(QtCore.QEasingCurve.OutCubic) + fade_anim.start(QtCore.QAbstractAnimation.DeleteWhenStopped) + + fade_anim.valueChanged.connect(self._on_fade_anim) + + app = QtWidgets.QApplication.instance() + if hasattr(app, "screenAdded"): + app.screenAdded.connect(self._on_screen_added) + app.screenRemoved.connect(self._fit_screen_geometry) + elif hasattr(app, "desktop"): + desktop = app.desktop() + desktop.screenCountChanged.connect(self._fit_screen_geometry) + + for screen in QtWidgets.QApplication.screens(): + screen.geometryChanged.connect(self._fit_screen_geometry) + + self._opacity = fade_anim.currentValue() + self._click_pos = None + self._capture_rect = None + + self._fade_anim = fade_anim + + def get_captured_pixmap(self): + if self._capture_rect is None: + return QtGui.QPixmap() + + return self.get_desktop_pixmap(self._capture_rect) + + def paintEvent(self, event): + """Paint event""" + + # Convert click and current mouse positions to local space. + mouse_pos = self.mapFromGlobal(QtGui.QCursor.pos()) + click_pos = None + if self._click_pos is not None: + click_pos = self.mapFromGlobal(self._click_pos) + + painter = QtGui.QPainter(self) + + # Draw background. 
Aside from aesthetics, this makes the full
+ # tool region accept mouse events.
+ painter.setBrush(QtGui.QColor(0, 0, 0, self._opacity))
+ painter.setPen(QtCore.Qt.NoPen)
+ painter.drawRect(event.rect())
+
+ # Clear the capture area
+ if click_pos is not None:
+ capture_rect = QtCore.QRect(click_pos, mouse_pos)
+ painter.setCompositionMode(
+ QtGui.QPainter.CompositionMode_Clear)
+ painter.drawRect(capture_rect)
+ painter.setCompositionMode(
+ QtGui.QPainter.CompositionMode_SourceOver)
+
+ pen_color = QtGui.QColor(255, 255, 255, 64)
+ pen = QtGui.QPen(pen_color, 1, QtCore.Qt.DotLine)
+ painter.setPen(pen)
+
+ # Draw cropping markers at click position
+ rect = event.rect()
+ if click_pos is not None:
+ painter.drawLine(
+ rect.left(), click_pos.y(),
+ rect.right(), click_pos.y()
+ )
+ painter.drawLine(
+ click_pos.x(), rect.top(),
+ click_pos.x(), rect.bottom()
+ )
+
+ # Draw cropping markers at current mouse position
+ painter.drawLine(
+ rect.left(), mouse_pos.y(),
+ rect.right(), mouse_pos.y()
+ )
+ painter.drawLine(
+ mouse_pos.x(), rect.top(),
+ mouse_pos.x(), rect.bottom()
+ )
+
+ def mousePressEvent(self, event):
+ """Mouse click event"""
+
+ if event.button() == QtCore.Qt.LeftButton:
+ # Begin click drag operation
+ self._click_pos = event.globalPos()
+
+ def mouseReleaseEvent(self, event):
+ """Mouse release event"""
+ if (
+ self._click_pos is not None
+ and event.button() == QtCore.Qt.LeftButton
+ ):
+ # End click drag operation and commit the current capture rect
+ self._capture_rect = QtCore.QRect(
+ self._click_pos, event.globalPos()
+ ).normalized()
+ self._click_pos = None
+ self.close()
+
+ def mouseMoveEvent(self, event):
+ """Mouse move event"""
+ self.repaint()
+
+ def keyPressEvent(self, event):
+ """Key press event"""
+ if event.key() == QtCore.Qt.Key_Escape:
+ self._click_pos = None
+ self._capture_rect = None
+ self.close()
+ return
+ return super(ScreenMarquee, self).keyPressEvent(event)
+
+ def showEvent(self, event):
+ self._fit_screen_geometry()
+ self._fade_anim.start()
+
+ def _fit_screen_geometry(self):
+ # Compute the union of all screen geometries, and resize to fit.
+ workspace_rect = QtCore.QRect()
+ for screen in QtWidgets.QApplication.screens():
+ workspace_rect = workspace_rect.united(screen.geometry())
+ self.setGeometry(workspace_rect)
+
+ def _on_fade_anim(self):
+ """Animation callback for opacity."""
+
+ self._opacity = self._fade_anim.currentValue()
+ self.repaint()
+
+ def _on_screen_added(self):
+ for screen in QtGui.QGuiApplication.screens():
+ screen.geometryChanged.connect(self._fit_screen_geometry)
+
+ @classmethod
+ def get_desktop_pixmap(cls, rect):
+ """Performs a screen capture on the specified rectangle.
+
+ Args:
+ rect (QtCore.QRect): The rectangle to capture.
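Because `get_desktop_pixmap` stitches per-screen grabs into a single output pixmap, it also handles rectangles that span multiple monitors. A minimal sketch, assuming a running `QApplication`; the coordinates and output path are illustrative only:

```python
# Capture an arbitrary region of the virtual desktop without showing
# the marquee dialog. Coordinates and output path are made up.
from qtpy import QtCore, QtWidgets

from openpype.tools.publisher.widgets.screenshot_widget import ScreenMarquee

app = QtWidgets.QApplication.instance() or QtWidgets.QApplication([])

region = QtCore.QRect(100, 100, 640, 360)
pixmap = ScreenMarquee.get_desktop_pixmap(region)
if not pixmap.isNull():
    pixmap.save("/tmp/region_capture.png")
```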
+ + Returns: + QtGui.QPixmap: Captured pixmap image + """ + + if rect.width() < 1 or rect.height() < 1: + return QtGui.QPixmap() + + screen_pixes = [] + for screen in QtWidgets.QApplication.screens(): + screen_geo = screen.geometry() + if not screen_geo.intersects(rect): + continue + + screen_pix_rect = screen_geo.intersected(rect) + screen_pix = screen.grabWindow( + 0, + screen_pix_rect.x() - screen_geo.x(), + screen_pix_rect.y() - screen_geo.y(), + screen_pix_rect.width(), screen_pix_rect.height() + ) + paste_point = QtCore.QPoint( + screen_pix_rect.x() - rect.x(), + screen_pix_rect.y() - rect.y() + ) + screen_pixes.append((screen_pix, paste_point)) + + output_pix = QtGui.QPixmap(rect.width(), rect.height()) + output_pix.fill(QtCore.Qt.transparent) + pix_painter = QtGui.QPainter() + pix_painter.begin(output_pix) + for item in screen_pixes: + (screen_pix, offset) = item + pix_painter.drawPixmap(offset, screen_pix) + + pix_painter.end() + + return output_pix + + @classmethod + def capture_to_pixmap(cls): + """Take screenshot with marquee into pixmap. + + Note: + The pixmap can be invalid (use 'isNull' to check). + + Returns: + QtGui.QPixmap: Captured pixmap image. + """ + + tool = cls() + tool.exec_() + return tool.get_captured_pixmap() + + @classmethod + def capture_to_file(cls, filepath=None): + """Take screenshot with marquee into file. + + Args: + filepath (Optional[str]): Path where screenshot will be saved. + + Returns: + Union[str, None]: Path to the saved screenshot, or None if user + cancelled the operation. + """ + + pixmap = cls.capture_to_pixmap() + if pixmap.isNull(): + return None + + if filepath is None: + with tempfile.NamedTemporaryFile( + prefix="screenshot_", suffix=".png", delete=False + ) as tmpfile: + filepath = tmpfile.name + + else: + output_dir = os.path.dirname(filepath) + if not os.path.exists(output_dir): + os.makedirs(output_dir) + + pixmap.save(filepath) + return filepath + + @classmethod + def capture_to_clipboard(cls): + """Take screenshot with marquee into clipboard. + + Notes: + Screenshot is not in clipboard if user cancelled the operation. + + Returns: + bool: Screenshot was added to clipboard. + """ + + clipboard = QtWidgets.QApplication.clipboard() + pixmap = cls.capture_to_pixmap() + if pixmap.isNull(): + return False + image = pixmap.toImage() + clipboard.setImage(image, QtGui.QClipboard.Clipboard) + return True + + +def capture_to_pixmap(): + """Take screenshot with marquee into pixmap. + + Note: + The pixmap can be invalid (use 'isNull' to check). + + Returns: + QtGui.QPixmap: Captured pixmap image. + """ + + return ScreenMarquee.capture_to_pixmap() + + +def capture_to_file(filepath=None): + """Take screenshot with marquee into file. + + Args: + filepath (Optional[str]): Path where screenshot will be saved. + + Returns: + Union[str, None]: Path to the saved screenshot, or None if user + cancelled the operation. + """ + + return ScreenMarquee.capture_to_file(filepath) + + +def capture_to_clipboard(): + """Take screenshot with marquee into clipboard. + + Notes: + Screenshot is not in clipboard if user cancelled the operation. + + Returns: + bool: Screenshot was added to clipboard. 
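The module-level wrappers above mirror the `ScreenMarquee` classmethods one-to-one. A rough usage sketch, assuming the process has, or can create, a `QApplication`; this is not part of the diff itself:

```python
from qtpy import QtWidgets

from openpype.tools.publisher.widgets.screenshot_widget import (
    capture_to_clipboard,
    capture_to_file,
    capture_to_pixmap,
)

app = QtWidgets.QApplication.instance() or QtWidgets.QApplication([])

# Each call opens the interactive marquee dialog.
path = capture_to_file()         # returns None if the user pressed Escape
pixmap = capture_to_pixmap()     # check pixmap.isNull() before using it
copied = capture_to_clipboard()  # False when the user cancelled
```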
+ """ + + return ScreenMarquee.capture_to_clipboard() diff --git a/openpype/tools/publisher/widgets/thumbnail_widget.py b/openpype/tools/publisher/widgets/thumbnail_widget.py index b17ca0adc8..60970710d8 100644 --- a/openpype/tools/publisher/widgets/thumbnail_widget.py +++ b/openpype/tools/publisher/widgets/thumbnail_widget.py @@ -7,8 +7,8 @@ from openpype.style import get_objected_colors from openpype.lib import ( run_subprocess, is_oiio_supported, - get_oiio_tools_path, - get_ffmpeg_tool_path, + get_oiio_tool_args, + get_ffmpeg_tool_args, ) from openpype.lib.transcoding import ( IMAGE_EXTENSIONS, @@ -22,6 +22,7 @@ from openpype.tools.utils import ( from openpype.tools.publisher.control import CardMessageTypes from .icons import get_image +from .screenshot_widget import capture_to_file class ThumbnailPainterWidget(QtWidgets.QWidget): @@ -306,20 +307,43 @@ class ThumbnailWidget(QtWidgets.QWidget): thumbnail_painter = ThumbnailPainterWidget(self) + icon_color = get_objected_colors("bg-view-selection").get_qcolor() + icon_color.setAlpha(255) + buttons_widget = QtWidgets.QWidget(self) buttons_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) - icon_color = get_objected_colors("bg-view-selection").get_qcolor() - icon_color.setAlpha(255) clear_image = get_image("clear_thumbnail") clear_pix = paint_image_with_color(clear_image, icon_color) - clear_button = PixmapButton(clear_pix, buttons_widget) clear_button.setObjectName("ThumbnailPixmapHoverButton") + clear_button.setToolTip("Clear thumbnail") + + take_screenshot_image = get_image("take_screenshot") + take_screenshot_pix = paint_image_with_color( + take_screenshot_image, icon_color) + take_screenshot_btn = PixmapButton( + take_screenshot_pix, buttons_widget) + take_screenshot_btn.setObjectName("ThumbnailPixmapHoverButton") + take_screenshot_btn.setToolTip("Take screenshot") + + paste_image = get_image("paste") + paste_pix = paint_image_with_color(paste_image, icon_color) + paste_btn = PixmapButton(paste_pix, buttons_widget) + paste_btn.setObjectName("ThumbnailPixmapHoverButton") + paste_btn.setToolTip("Paste from clipboard") + + browse_image = get_image("browse") + browse_pix = paint_image_with_color(browse_image, icon_color) + browse_btn = PixmapButton(browse_pix, buttons_widget) + browse_btn.setObjectName("ThumbnailPixmapHoverButton") + browse_btn.setToolTip("Browse...") buttons_layout = QtWidgets.QHBoxLayout(buttons_widget) - buttons_layout.setContentsMargins(3, 3, 3, 3) - buttons_layout.addStretch(1) + buttons_layout.setContentsMargins(0, 0, 0, 0) + buttons_layout.addWidget(take_screenshot_btn, 0) + buttons_layout.addWidget(paste_btn, 0) + buttons_layout.addWidget(browse_btn, 0) buttons_layout.addWidget(clear_button, 0) layout = QtWidgets.QHBoxLayout(self) @@ -327,6 +351,9 @@ class ThumbnailWidget(QtWidgets.QWidget): layout.addWidget(thumbnail_painter) clear_button.clicked.connect(self._on_clear_clicked) + take_screenshot_btn.clicked.connect(self._on_take_screenshot) + paste_btn.clicked.connect(self._on_paste_from_clipboard) + browse_btn.clicked.connect(self._on_browse_clicked) self._controller = controller self._output_dir = controller.get_thumbnail_temp_dir_path() @@ -338,9 +365,16 @@ class ThumbnailWidget(QtWidgets.QWidget): self._adapted_to_size = True self._last_width = None self._last_height = None + self._hide_on_finish = False self._buttons_widget = buttons_widget self._thumbnail_painter = thumbnail_painter + self._clear_button = clear_button + self._take_screenshot_btn = take_screenshot_btn + self._paste_btn = paste_btn + 
self._browse_btn = browse_btn + + clear_button.setEnabled(False) @property def width_ratio(self): @@ -430,13 +464,75 @@ class ThumbnailWidget(QtWidgets.QWidget): self._thumbnail_painter.clear_cache() + def _set_current_thumbails(self, thumbnail_paths): + self._thumbnail_painter.set_current_thumbnails(thumbnail_paths) + self._update_buttons_position() + def set_current_thumbnails(self, thumbnail_paths=None): self._thumbnail_painter.set_current_thumbnails(thumbnail_paths) self._update_buttons_position() + self._clear_button.setEnabled(self._thumbnail_painter.has_pixes) def _on_clear_clicked(self): self.set_current_thumbnails() self.thumbnail_cleared.emit() + self._clear_button.setEnabled(False) + + def _on_take_screenshot(self): + window = self.window() + state = window.windowState() + window.setWindowState(QtCore.Qt.WindowMinimized) + output_path = os.path.join( + self._output_dir, uuid.uuid4().hex + ".png") + if capture_to_file(output_path): + self.thumbnail_created.emit(output_path) + # restore original window state + window.setWindowState(state) + + def _on_paste_from_clipboard(self): + """Set thumbnail from a pixmap image in the system clipboard""" + + clipboard = QtWidgets.QApplication.clipboard() + pixmap = clipboard.pixmap() + if pixmap.isNull(): + return + + # Save as temporary file + output_path = os.path.join( + self._output_dir, uuid.uuid4().hex + ".png") + + output_dir = os.path.dirname(output_path) + if not os.path.exists(output_dir): + os.makedirs(output_dir) + + if pixmap.save(output_path): + self.thumbnail_created.emit(output_path) + + def _on_browse_clicked(self): + ext_filter = "Source (*{0})".format( + " *".join(self._review_extensions) + ) + filepath, _ = QtWidgets.QFileDialog.getOpenFileName( + self, "Choose thumbnail", os.path.expanduser("~"), ext_filter + ) + if not filepath: + return + valid_path = False + ext = os.path.splitext(filepath)[-1].lower() + if ext in self._review_extensions: + valid_path = True + + output = None + if valid_path: + output = export_thumbnail(filepath, self._output_dir) + + if output: + self.thumbnail_created.emit(output) + else: + self._controller.emit_card_message( + "Couldn't convert the source for thumbnail", + CardMessageTypes.error + ) def _adapt_to_size(self): if not self._adapted_to_size: @@ -452,13 +548,25 @@ class ThumbnailWidget(QtWidgets.QWidget): self._thumbnail_painter.clear_cache() def _update_buttons_position(self): - self._buttons_widget.setVisible(self._thumbnail_painter.has_pixes) size = self.size() + my_width = size.width() my_height = size.height() - height = self._buttons_widget.sizeHint().height() + buttons_sh = self._buttons_widget.sizeHint() + buttons_height = buttons_sh.height() + buttons_width = buttons_sh.width() + pos_x = my_width - (buttons_width + 3) + pos_y = my_height - (buttons_height + 3) + if pos_x < 0: + pos_x = 0 + buttons_width = my_width + if pos_y < 0: + pos_y = 0 + buttons_height = my_height self._buttons_widget.setGeometry( - 0, my_height - height, - size.width(), height + pos_x, + pos_y, + buttons_width, + buttons_height ) def resizeEvent(self, event): @@ -481,12 +589,12 @@ def _convert_thumbnail_oiio(src_path, dst_path): if not is_oiio_supported(): return None - oiio_cmd = [ - get_oiio_tools_path(), + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-i", src_path, "--subimage", "0", "-o", dst_path - ] + ) try: _run_silent_subprocess(oiio_cmd) except Exception: @@ -495,12 +603,12 @@ def _convert_thumbnail_oiio(src_path, dst_path): def _convert_thumbnail_ffmpeg(src_path, dst_path): - ffmpeg_cmd = [ - 
get_ffmpeg_tool_path(),
+ ffmpeg_cmd = get_ffmpeg_tool_args(
+ "ffmpeg",
 "-y",
 "-i", src_path,
 dst_path
- ]
+ )
 try:
 _run_silent_subprocess(ffmpeg_cmd)
 except Exception:
diff --git a/openpype/tools/publisher/widgets/widgets.py b/openpype/tools/publisher/widgets/widgets.py
index 0b13f26d57..1bbe73381f 100644
--- a/openpype/tools/publisher/widgets/widgets.py
+++ b/openpype/tools/publisher/widgets/widgets.py
@@ -9,6 +9,7 @@ import collections
 from qtpy import QtWidgets, QtCore, QtGui
 import qtawesome
+from openpype import AYON_SERVER_ENABLED
 from openpype.lib.attribute_definitions import UnknownDef
 from openpype.tools.attribute_defs import create_widget_for_attr_def
 from openpype.tools import resources
@@ -1116,10 +1117,16 @@ class GlobalAttrsWidget(QtWidgets.QWidget):
 main_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
 main_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
 main_layout.addRow("Variant", variant_input)
- main_layout.addRow("Asset", asset_value_widget)
+ main_layout.addRow(
+ "Folder" if AYON_SERVER_ENABLED else "Asset",
+ asset_value_widget)
 main_layout.addRow("Task", task_value_widget)
- main_layout.addRow("Family", family_value_widget)
- main_layout.addRow("Subset", subset_value_widget)
+ main_layout.addRow(
+ "Product type" if AYON_SERVER_ENABLED else "Family",
+ family_value_widget)
+ main_layout.addRow(
+ "Product name" if AYON_SERVER_ENABLED else "Subset",
+ subset_value_widget)
 main_layout.addRow(btns_layout)
 variant_input.value_changed.connect(self._on_variant_change)
diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py
index 2bda0c1cfe..39e78c01bb 100644
--- a/openpype/tools/publisher/window.py
+++ b/openpype/tools/publisher/window.py
@@ -634,16 +634,7 @@ class PublisherWindow(QtWidgets.QDialog):
 if old_tab == "details":
 self._publish_details_widget.close_details_popup()
- if new_tab in ("create", "publish"):
- animate = True
- if old_tab not in ("create", "publish"):
- animate = False
- self._content_stacked_layout.setCurrentWidget(
- self._overview_widget
- )
- self._overview_widget.set_state(new_tab, animate)
-
- elif new_tab == "details":
+ if new_tab == "details":
 self._content_stacked_layout.setCurrentWidget(
 self._publish_details_widget
 )
@@ -654,6 +645,21 @@
 self._report_widget
 )
+ old_on_overview = old_tab in ("create", "publish")
+ if new_tab in ("create", "publish"):
+ self._content_stacked_layout.setCurrentWidget(
+ self._overview_widget
+ )
+ # Overview state is animated only when switching between
+ # the 'create' and 'publish' tabs
+ self._overview_widget.set_state(new_tab, old_on_overview)
+
+ elif old_on_overview:
+ # Make sure the animation is finished if the previous tab was
+ # 'create' or 'publish'. This is just a safety measure to avoid
+ # a stuck animation when the user clicks too fast.
+ self._overview_widget.make_sure_animation_is_finished()
+
 is_create = new_tab == "create"
 if is_create:
 self._install_app_event_listener()
diff --git a/openpype/tools/push_to_project/app.py b/openpype/tools/push_to_project/app.py
index 9ca5fd83e9..b3ec33f353 100644
--- a/openpype/tools/push_to_project/app.py
+++ b/openpype/tools/push_to_project/app.py
@@ -1,6 +1,6 @@
 import click
-from qtpy import QtWidgets, QtCore
+from openpype.tools.utils import get_openpype_qt_app
 from openpype.tools.push_to_project.window import PushToContextSelectWindow
@@ -15,20 +15,7 @@ def main(project, version):
 version (str): Version id.
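The hunk below replaces the hand-rolled High-DPI `QApplication` bootstrapping with the shared `get_openpype_qt_app` helper that this diff adds to `openpype/tools/utils/lib.py`. A minimal sketch of the resulting launch flow; the trailing `app.exec_()` is an assumption, since the tail of `main()` is outside the hunk:

```python
from openpype.tools.utils import get_openpype_qt_app
from openpype.tools.push_to_project.window import PushToContextSelectWindow

# One call now covers instance reuse, High-DPI attributes and the
# default window icon.
app = get_openpype_qt_app()

window = PushToContextSelectWindow()
window.show()

app.exec_()  # assumed: enter the Qt event loop as before
```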
""" - app = QtWidgets.QApplication.instance() - if not app: - # 'AA_EnableHighDpiScaling' must be set before app instance creation - high_dpi_scale_attr = getattr( - QtCore.Qt, "AA_EnableHighDpiScaling", None - ) - if high_dpi_scale_attr is not None: - QtWidgets.QApplication.setAttribute(high_dpi_scale_attr) - - app = QtWidgets.QApplication([]) - - attr = getattr(QtCore.Qt, "AA_UseHighDpiPixmaps", None) - if attr is not None: - app.setAttribute(attr) + app = get_openpype_qt_app() window = PushToContextSelectWindow() window.show() diff --git a/openpype/tools/push_to_project/control_integrate.py b/openpype/tools/push_to_project/control_integrate.py index 37a0512d59..a822339ccf 100644 --- a/openpype/tools/push_to_project/control_integrate.py +++ b/openpype/tools/push_to_project/control_integrate.py @@ -40,6 +40,7 @@ from openpype.lib import ( from openpype.lib.file_transaction import FileTransaction from openpype.settings import get_project_settings from openpype.pipeline import Anatomy +from openpype.pipeline.version_start import get_versioning_start from openpype.pipeline.template_data import get_template_data from openpype.pipeline.publish import get_publish_template_name from openpype.pipeline.create import get_subset_name @@ -940,9 +941,17 @@ class ProjectPushItemProcess: last_version_doc = get_last_version_by_subset_id( project_name, subset_id ) - version = 1 if last_version_doc: - version += int(last_version_doc["name"]) + version = int(last_version_doc["name"]) + 1 + else: + version = get_versioning_start( + project_name, + self.host_name, + task_name=self.task_info["name"], + task_type=self.task_info["type"], + family=families[0], + subset=subset_doc["name"] + ) existing_version_doc = get_version_by_name( project_name, version, subset_id @@ -966,14 +975,6 @@ class ProjectPushItemProcess: return - if version is None: - last_version_doc = get_last_version_by_subset_id( - project_name, subset_id - ) - version = 1 - if last_version_doc: - version += int(last_version_doc["name"]) - version_doc = new_version_doc( version, subset_id, version_data ) diff --git a/openpype/tools/sceneinventory/lib.py b/openpype/tools/sceneinventory/lib.py index 5db3c479c5..0ac7622d65 100644 --- a/openpype/tools/sceneinventory/lib.py +++ b/openpype/tools/sceneinventory/lib.py @@ -1,9 +1,3 @@ -import os -from openpype_modules import sync_server - -from qtpy import QtGui - - def walk_hierarchy(node): """Recursively yield group node.""" for child in node.children(): @@ -12,71 +6,3 @@ def walk_hierarchy(node): for _child in walk_hierarchy(child): yield _child - - -def get_site_icons(): - resource_path = os.path.join( - os.path.dirname(sync_server.sync_server_module.__file__), - "providers", - "resources" - ) - icons = {} - # TODO get from sync module - for provider in ["studio", "local_drive", "gdrive"]: - pix_url = "{}/{}.png".format(resource_path, provider) - icons[provider] = QtGui.QIcon(pix_url) - - return icons - - -def get_progress_for_repre(repre_doc, active_site, remote_site): - """ - Calculates average progress for representation. 
- - If site has created_dt >> fully available >> progress == 1 - - Could be calculated in aggregate if it would be too slow - Args: - repre_doc(dict): representation dict - Returns: - (dict) with active and remote sites progress - {'studio': 1.0, 'gdrive': -1} - gdrive site is not present - -1 is used to highlight the site should be added - {'studio': 1.0, 'gdrive': 0.0} - gdrive site is present, not - uploaded yet - """ - progress = {active_site: -1, remote_site: -1} - if not repre_doc: - return progress - - files = {active_site: 0, remote_site: 0} - doc_files = repre_doc.get("files") or [] - for doc_file in doc_files: - if not isinstance(doc_file, dict): - continue - - sites = doc_file.get("sites") or [] - for site in sites: - if ( - # Pype 2 compatibility - not isinstance(site, dict) - # Check if site name is one of progress sites - or site["name"] not in progress - ): - continue - - files[site["name"]] += 1 - norm_progress = max(progress[site["name"]], 0) - if site.get("created_dt"): - progress[site["name"]] = norm_progress + 1 - elif site.get("progress"): - progress[site["name"]] = norm_progress + site["progress"] - else: # site exists, might be failed, do not add again - progress[site["name"]] = 0 - - # for example 13 fully avail. files out of 26 >> 13/26 = 0.5 - avg_progress = { - active_site: progress[active_site] / max(files[active_site], 1), - remote_site: progress[remote_site] / max(files[remote_site], 1) - } - return avg_progress diff --git a/openpype/tools/sceneinventory/model.py b/openpype/tools/sceneinventory/model.py index 5cc849bb9e..4fd82f04a4 100644 --- a/openpype/tools/sceneinventory/model.py +++ b/openpype/tools/sceneinventory/model.py @@ -15,7 +15,7 @@ from openpype.client import ( get_representation_by_id, ) from openpype.pipeline import ( - legacy_io, + get_current_project_name, schema, HeroVersionType, registered_host, @@ -24,11 +24,7 @@ from openpype.style import get_default_entity_icon_color from openpype.tools.utils.models import TreeModel, Item from openpype.modules import ModulesManager -from .lib import ( - get_site_icons, - walk_hierarchy, - get_progress_for_repre -) +from .lib import walk_hierarchy class InventoryModel(TreeModel): @@ -54,8 +50,10 @@ class InventoryModel(TreeModel): self._default_icon_color = get_default_entity_icon_color() manager = ModulesManager() - sync_server = manager.modules_by_name["sync_server"] - self.sync_enabled = sync_server.enabled + sync_server = manager.modules_by_name.get("sync_server") + self.sync_enabled = ( + sync_server is not None and sync_server.enabled + ) self._site_icons = {} self.active_site = self.remote_site = None self.active_provider = self.remote_provider = None @@ -63,7 +61,7 @@ class InventoryModel(TreeModel): if not self.sync_enabled: return - project_name = legacy_io.current_project() + project_name = get_current_project_name() active_site = sync_server.get_active_site(project_name) remote_site = sync_server.get_remote_site(project_name) @@ -80,12 +78,15 @@ class InventoryModel(TreeModel): project_name, remote_site ) - # self.sync_server = sync_server + self.sync_server = sync_server self.active_site = active_site self.active_provider = active_provider self.remote_site = remote_site self.remote_provider = remote_provider - self._site_icons = get_site_icons() + self._site_icons = { + provider: QtGui.QIcon(icon_path) + for provider, icon_path in sync_server.get_site_icons().items() + } if "active_site" not in self.Columns: self.Columns.append("active_site") if "remote_site" not in self.Columns: @@ -321,7 
+322,7 @@ class InventoryModel(TreeModel): """ # NOTE: @iLLiCiTiT this need refactor - project_name = legacy_io.active_project() + project_name = get_current_project_name() self.beginResetModel() @@ -445,7 +446,7 @@ class InventoryModel(TreeModel): group_node["group"] = subset["data"].get("subsetGroup") if self.sync_enabled: - progress = get_progress_for_repre( + progress = self.sync_server.get_progress_for_repre( representation, self.active_site, self.remote_site ) group_node["active_site"] = self.active_site diff --git a/openpype/tools/sceneinventory/view.py b/openpype/tools/sceneinventory/view.py index 57e6e24411..af463e4867 100644 --- a/openpype/tools/sceneinventory/view.py +++ b/openpype/tools/sceneinventory/view.py @@ -23,7 +23,6 @@ from openpype.pipeline import ( ) from openpype.modules import ModulesManager from openpype.tools.utils.lib import ( - get_progress_for_repre, iter_model_rows, format_version ) @@ -55,8 +54,11 @@ class SceneInventoryView(QtWidgets.QTreeView): self._selected = None manager = ModulesManager() - self.sync_server = manager.modules_by_name["sync_server"] - self.sync_enabled = self.sync_server.enabled + sync_server = manager.modules_by_name.get("sync_server") + sync_enabled = sync_server is not None and sync_server.enabled + + self.sync_server = sync_server + self.sync_enabled = sync_enabled def _set_hierarchy_view(self, enabled): if enabled == self._hierarchy_view: @@ -361,7 +363,7 @@ class SceneInventoryView(QtWidgets.QTreeView): if not repre_doc: continue - progress = get_progress_for_repre( + progress = self.sync_server.get_progress_for_repre( repre_doc, active_site, remote_site diff --git a/openpype/tools/settings/__init__.py b/openpype/tools/settings/__init__.py index a5b1ea51a5..04f64e13f1 100644 --- a/openpype/tools/settings/__init__.py +++ b/openpype/tools/settings/__init__.py @@ -1,7 +1,8 @@ import sys -from qtpy import QtWidgets, QtGui +from qtpy import QtGui from openpype import style +from openpype.tools.utils import get_openpype_qt_app from .lib import ( BTN_FIXED_SIZE, CHILD_OFFSET @@ -24,9 +25,7 @@ def main(user_role=None): user_role, ", ".join(allowed_roles) )) - app = QtWidgets.QApplication.instance() - if not app: - app = QtWidgets.QApplication(sys.argv) + app = get_openpype_qt_app() app.setWindowIcon(QtGui.QIcon(style.app_icon_path())) widget = MainWidget(user_role) diff --git a/openpype/tools/settings/local_settings/projects_widget.py b/openpype/tools/settings/local_settings/projects_widget.py index 4a4148d7cd..f2b6535115 100644 --- a/openpype/tools/settings/local_settings/projects_widget.py +++ b/openpype/tools/settings/local_settings/projects_widget.py @@ -267,25 +267,26 @@ class SitesWidget(QtWidgets.QWidget): self.input_objects = {} def _get_sites_inputs(self): - sync_server_module = ( - self.modules_manager.modules_by_name["sync_server"] - ) + output = [] + if self._project_name is None: + return output + + sync_server_module = self.modules_manager.modules_by_name.get( + "sync_server") + if sync_server_module is None or not sync_server_module.enabled: + return output site_configs = sync_server_module.get_all_site_configs( self._project_name, local_editable_only=True) - roots_entity = ( - self.project_settings[PROJECT_ANATOMY_KEY][LOCAL_ROOTS_KEY] - ) site_names = [self.active_site_widget.current_text(), self.remote_site_widget.current_text()] - output = [] for site_name in site_names: if not site_name: continue site_inputs = [] - site_config = site_configs[site_name] + site_config = site_configs.get(site_name, {}) for root_name, 
path_entity in site_config.get("root", {}).items(): if not path_entity: continue @@ -350,9 +351,6 @@ class SitesWidget(QtWidgets.QWidget): def refresh(self): self._clear_widgets() - if self._project_name is None: - return - # Site label for site_name, site_inputs in self._get_sites_inputs(): site_widget = QtWidgets.QWidget(self.content_widget) diff --git a/openpype/tools/settings/settings/multiselection_combobox.py b/openpype/tools/settings/settings/multiselection_combobox.py index 896be3c06c..d64fc83745 100644 --- a/openpype/tools/settings/settings/multiselection_combobox.py +++ b/openpype/tools/settings/settings/multiselection_combobox.py @@ -1,5 +1,13 @@ from qtpy import QtCore, QtGui, QtWidgets -from openpype.tools.utils.lib import checkstate_int_to_enum +from openpype.tools.utils.lib import ( + checkstate_int_to_enum, + checkstate_enum_to_int, +) +from openpype.tools.utils.constants import ( + CHECKED_INT, + UNCHECKED_INT, + ITEM_IS_USER_TRISTATE, +) class ComboItemDelegate(QtWidgets.QStyledItemDelegate): @@ -30,7 +38,7 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): QtCore.Qt.Key_PageDown, QtCore.Qt.Key_PageUp, QtCore.Qt.Key_Home, - QtCore.Qt.Key_End + QtCore.Qt.Key_End, } top_bottom_padding = 2 @@ -127,25 +135,25 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): return if state == QtCore.Qt.Unchecked: - new_state = QtCore.Qt.Checked + new_state = CHECKED_INT else: - new_state = QtCore.Qt.Unchecked + new_state = UNCHECKED_INT elif event.type() == QtCore.QEvent.KeyPress: # TODO: handle QtCore.Qt.Key_Enter, Key_Return? if event.key() == QtCore.Qt.Key_Space: - # toggle the current items check state if ( index_flags & QtCore.Qt.ItemIsUserCheckable - and index_flags & QtCore.Qt.ItemIsTristate + and index_flags & ITEM_IS_USER_TRISTATE ): - new_state = QtCore.Qt.CheckState((int(state) + 1) % 3) + new_state = (checkstate_enum_to_int(state) + 1) % 3 elif index_flags & QtCore.Qt.ItemIsUserCheckable: + # toggle the current items check state if state != QtCore.Qt.Checked: - new_state = QtCore.Qt.Checked + new_state = CHECKED_INT else: - new_state = QtCore.Qt.Unchecked + new_state = UNCHECKED_INT if new_state is not None: model.setData(current_index, new_state, QtCore.Qt.CheckStateRole) @@ -249,7 +257,6 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): QtWidgets.QStyle.SC_ComboBoxArrow ) total_width = option.rect.width() - btn_rect.width() - font_metricts = self.fontMetrics() line = 0 self.lines = {line: []} @@ -305,9 +312,9 @@ class MultiSelectionComboBox(QtWidgets.QComboBox): for idx in range(self.count()): value = self.itemData(idx, role=QtCore.Qt.UserRole) if value in values: - check_state = QtCore.Qt.Checked + check_state = CHECKED_INT else: - check_state = QtCore.Qt.Unchecked + check_state = UNCHECKED_INT self.setItemData(idx, check_state, QtCore.Qt.CheckStateRole) self.update_size_hint() diff --git a/openpype/tools/standalonepublish/widgets/widget_asset.py b/openpype/tools/standalonepublish/widgets/widget_asset.py index 5da25a0c3e..669366dd1d 100644 --- a/openpype/tools/standalonepublish/widgets/widget_asset.py +++ b/openpype/tools/standalonepublish/widgets/widget_asset.py @@ -2,6 +2,7 @@ import contextlib from qtpy import QtWidgets, QtCore import qtawesome +from openpype import AYON_SERVER_ENABLED from openpype.client import ( get_projects, get_project, @@ -181,7 +182,8 @@ class AssetWidget(QtWidgets.QWidget): filter = PlaceholderLineEdit() filter.textChanged.connect(proxy.setFilterFixedString) - filter.setPlaceholderText("Filter assets..") + 
filter.setPlaceholderText("Filter {}..".format( + "folders" if AYON_SERVER_ENABLED else "assets")) header.addWidget(filter) header.addWidget(refresh) diff --git a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py index f46e31786c..306c43e85d 100644 --- a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py +++ b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py @@ -5,6 +5,8 @@ import clique import subprocess import openpype.lib from qtpy import QtWidgets, QtCore + +from openpype.lib import get_ffprobe_data from . import DropEmpty, ComponentsList, ComponentItem @@ -269,26 +271,8 @@ class DropDataFrame(QtWidgets.QFrame): self._process_data(data) def load_data_with_probe(self, filepath): - ffprobe_path = openpype.lib.get_ffmpeg_tool_path("ffprobe") - args = [ - "\"{}\"".format(ffprobe_path), - '-v', 'quiet', - '-print_format json', - '-show_format', - '-show_streams', - '"{}"'.format(filepath) - ] - ffprobe_p = subprocess.Popen( - ' '.join(args), - stdout=subprocess.PIPE, - shell=True - ) - ffprobe_output = ffprobe_p.communicate()[0] - if ffprobe_p.returncode != 0: - raise RuntimeError( - 'Failed on ffprobe: check if ffprobe path is set in PATH env' - ) - return json.loads(ffprobe_output)['streams'][0] + ffprobe_data = get_ffprobe_data(filepath) + return ffprobe_data["streams"][0] def get_file_data(self, data): filepath = data['files'][0] diff --git a/openpype/tools/standalonepublish/widgets/widget_family.py b/openpype/tools/standalonepublish/widgets/widget_family.py index 8c18a93a00..73dc2122db 100644 --- a/openpype/tools/standalonepublish/widgets/widget_family.py +++ b/openpype/tools/standalonepublish/widgets/widget_family.py @@ -10,6 +10,7 @@ from openpype.client import ( ) from openpype.settings import get_project_settings from openpype.pipeline import LegacyCreator +from openpype.pipeline.version_start import get_versioning_start from openpype.pipeline.create import ( SUBSET_NAME_ALLOWED_SYMBOLS, TaskNotSetError, @@ -299,7 +300,15 @@ class FamilyWidget(QtWidgets.QWidget): project_name = self.dbcon.active_project() asset_name = self.asset_name subset_name = self.input_result.text() - version = 1 + plugin = self.list_families.currentItem().data(PluginRole) + family = plugin.family.rsplit(".", 1)[-1] + version = get_versioning_start( + project_name, + "standalonepublisher", + task_name=self.dbcon.Session["AVALON_TASK"], + family=family, + subset=subset_name + ) asset_doc = None subset_doc = None diff --git a/openpype/tools/tray/pype_info_widget.py b/openpype/tools/tray/pype_info_widget.py index c616ad4dba..dc222b79b5 100644 --- a/openpype/tools/tray/pype_info_widget.py +++ b/openpype/tools/tray/pype_info_widget.py @@ -2,11 +2,14 @@ import os import json import collections +import ayon_api from qtpy import QtCore, QtGui, QtWidgets from openpype import style from openpype import resources +from openpype import AYON_SERVER_ENABLED from openpype.settings.lib import get_local_settings +from openpype.lib import get_openpype_execute_args from openpype.lib.pype_info import ( get_all_current_info, get_openpype_info, @@ -327,8 +330,9 @@ class PypeInfoSubWidget(QtWidgets.QWidget): main_layout.addWidget(self._create_openpype_info_widget(), 0) main_layout.addWidget(self._create_separator(), 0) main_layout.addWidget(self._create_workstation_widget(), 0) - main_layout.addWidget(self._create_separator(), 0) - main_layout.addWidget(self._create_local_settings_widget(), 0) + if not AYON_SERVER_ENABLED: + 
main_layout.addWidget(self._create_separator(), 0) + main_layout.addWidget(self._create_local_settings_widget(), 0) main_layout.addWidget(self._create_separator(), 0) main_layout.addWidget(self._create_environ_widget(), 1) @@ -425,31 +429,59 @@ class PypeInfoSubWidget(QtWidgets.QWidget): def _create_openpype_info_widget(self): """Create widget with information about OpenPype application.""" - # Get pype info data - pype_info = get_openpype_info() - # Modify version key/values - version_value = "{} ({})".format( - pype_info.pop("version", self.not_applicable), - pype_info.pop("version_type", self.not_applicable) - ) - pype_info["version_value"] = version_value - # Prepare label mapping - key_label_mapping = { - "version_value": "Running version:", - "build_verison": "Build version:", - "executable": "OpenPype executable:", - "pype_root": "OpenPype location:", - "mongo_url": "OpenPype Mongo URL:" - } - # Prepare keys order - keys_order = [ - "version_value", - "build_verison", - "executable", - "pype_root", - "mongo_url" - ] - for key in pype_info.keys(): + if AYON_SERVER_ENABLED: + executable_args = get_openpype_execute_args() + username = "N/A" + user_info = ayon_api.get_user() + if user_info: + username = user_info.get("name") or username + full_name = user_info.get("attrib", {}).get("fullName") + if full_name: + username = "{} ({})".format(full_name, username) + info_values = { + "executable": executable_args[-1], + "server_url": os.environ["AYON_SERVER_URL"], + "username": username + } + key_label_mapping = { + "executable": "AYON Executable:", + "server_url": "AYON Server:", + "username": "AYON Username:" + } + # Prepare keys order + keys_order = [ + "server_url", + "username", + "executable", + ] + + else: + # Get pype info data + info_values = get_openpype_info() + # Modify version key/values + version_value = "{} ({})".format( + info_values.pop("version", self.not_applicable), + info_values.pop("version_type", self.not_applicable) + ) + info_values["version_value"] = version_value + # Prepare label mapping + key_label_mapping = { + "version_value": "Running version:", + "build_verison": "Build version:", + "executable": "OpenPype executable:", + "pype_root": "OpenPype location:", + "mongo_url": "OpenPype Mongo URL:" + } + # Prepare keys order + keys_order = [ + "version_value", + "build_verison", + "executable", + "pype_root", + "mongo_url" + ] + + for key in info_values.keys(): if key not in keys_order: keys_order.append(key) @@ -466,9 +498,9 @@ class PypeInfoSubWidget(QtWidgets.QWidget): info_layout.addWidget(title_label, 0, 0, 1, 2) for key in keys_order: - if key not in pype_info: + if key not in info_values: continue - value = pype_info[key] + value = info_values[key] label = key_label_mapping.get(key, key) row = info_layout.rowCount() info_layout.addWidget( diff --git a/openpype/tools/tray/pype_tray.py b/openpype/tools/tray/pype_tray.py index fdc0a8094d..a5876ca721 100644 --- a/openpype/tools/tray/pype_tray.py +++ b/openpype/tools/tray/pype_tray.py @@ -8,6 +8,7 @@ import platform from qtpy import QtCore, QtGui, QtWidgets import openpype.version +from openpype import AYON_SERVER_ENABLED from openpype import resources, style from openpype.lib import ( Logger, @@ -35,7 +36,8 @@ from openpype.settings import ( from openpype.tools.utils import ( WrappedCallbackItem, paint_image_with_color, - get_warning_pixmap + get_warning_pixmap, + get_openpype_qt_app, ) from .pype_info_widget import PypeInfoWidget @@ -589,6 +591,11 @@ class TrayManager: self.tray_widget.showMessage(*args, 
**kwargs) def _add_version_item(self): + if AYON_SERVER_ENABLED: + login_action = QtWidgets.QAction("Login", self.tray_widget) + login_action.triggered.connect(self._on_ayon_login) + self.tray_widget.menu.addAction(login_action) + subversion = os.environ.get("OPENPYPE_SUBVERSION") client_name = os.environ.get("OPENPYPE_CLIENT") @@ -614,6 +621,19 @@ class TrayManager: self._restart_action = restart_action + def _on_ayon_login(self): + self.execute_in_main_thread(self._show_ayon_login) + + def _show_ayon_login(self): + from ayon_common.connection.credentials import change_user_ui + + result = change_user_ui() + if result.shutdown: + self.exit() + + elif result.restart or result.token_changed: + self.restart() + def _on_restart_action(self): self.restart(use_expected_version=True) @@ -839,37 +859,7 @@ class PypeTrayStarter(QtCore.QObject): def main(): - log = Logger.get_logger(__name__) - app = QtWidgets.QApplication.instance() - - high_dpi_scale_attr = None - if not app: - # 'AA_EnableHighDpiScaling' must be set before app instance creation - high_dpi_scale_attr = getattr( - QtCore.Qt, "AA_EnableHighDpiScaling", None - ) - if high_dpi_scale_attr is not None: - QtWidgets.QApplication.setAttribute(high_dpi_scale_attr) - - app = QtWidgets.QApplication([]) - - if high_dpi_scale_attr is None: - log.debug(( - "Attribute 'AA_EnableHighDpiScaling' was not set." - " UI quality may be affected." - )) - - for attr_name in ( - "AA_UseHighDpiPixmaps", - ): - attr = getattr(QtCore.Qt, attr_name, None) - if attr is None: - log.debug(( - "Missing QtCore.Qt attribute \"{}\"." - " UI quality may be affected." - ).format(attr_name)) - else: - app.setAttribute(attr) + app = get_openpype_qt_app() starter = PypeTrayStarter(app) diff --git a/openpype/tools/traypublisher/window.py b/openpype/tools/traypublisher/window.py index 3ac1b4c4ad..a1ed38dcc0 100644 --- a/openpype/tools/traypublisher/window.py +++ b/openpype/tools/traypublisher/window.py @@ -17,7 +17,7 @@ from openpype.pipeline import install_host from openpype.hosts.traypublisher.api import TrayPublisherHost from openpype.tools.publisher.control_qt import QtPublisherController from openpype.tools.publisher.window import PublisherWindow -from openpype.tools.utils import PlaceholderLineEdit +from openpype.tools.utils import PlaceholderLineEdit, get_openpype_qt_app from openpype.tools.utils.constants import PROJECT_NAME_ROLE from openpype.tools.utils.models import ( ProjectModel, @@ -263,9 +263,7 @@ def main(): host = TrayPublisherHost() install_host(host) - app_instance = QtWidgets.QApplication.instance() - if app_instance is None: - app_instance = QtWidgets.QApplication([]) + app_instance = get_openpype_qt_app() if platform.system().lower() == "windows": import ctypes diff --git a/openpype/tools/utils/__init__.py b/openpype/tools/utils/__init__.py index 10bd527692..f35bfaee70 100644 --- a/openpype/tools/utils/__init__.py +++ b/openpype/tools/utils/__init__.py @@ -25,6 +25,7 @@ from .lib import ( set_style_property, DynamicQThread, qt_app_context, + get_openpype_qt_app, get_asset_icon, get_asset_icon_by_name, get_asset_icon_name_from_doc, @@ -68,6 +69,7 @@ __all__ = ( "set_style_property", "DynamicQThread", "qt_app_context", + "get_openpype_qt_app", "get_asset_icon", "get_asset_icon_by_name", "get_asset_icon_name_from_doc", diff --git a/openpype/tools/utils/assets_widget.py b/openpype/tools/utils/assets_widget.py index ffbdd995d6..a45d762c73 100644 --- a/openpype/tools/utils/assets_widget.py +++ b/openpype/tools/utils/assets_widget.py @@ -5,6 +5,7 @@ import 
qtpy
 from qtpy import QtWidgets, QtCore, QtGui
 import qtawesome
+from openpype import AYON_SERVER_ENABLED
 from openpype.client import (
 get_project,
 get_assets,
@@ -607,7 +608,8 @@ class AssetsWidget(QtWidgets.QWidget):
 refresh_btn.setToolTip("Refresh items")
 filter_input = PlaceholderLineEdit(header_widget)
- filter_input.setPlaceholderText("Filter assets..")
+ filter_input.setPlaceholderText("Filter {}..".format(
+ "folders" if AYON_SERVER_ENABLED else "assets"))
 # Header
 header_layout = QtWidgets.QHBoxLayout(header_widget)
diff --git a/openpype/tools/utils/constants.py b/openpype/tools/utils/constants.py
index 99f2602ee3..77324762b3 100644
--- a/openpype/tools/utils/constants.py
+++ b/openpype/tools/utils/constants.py
@@ -5,6 +5,12 @@ UNCHECKED_INT = getattr(QtCore.Qt.Unchecked, "value", 0)
 PARTIALLY_CHECKED_INT = getattr(QtCore.Qt.PartiallyChecked, "value", 1)
 CHECKED_INT = getattr(QtCore.Qt.Checked, "value", 2)
+# Checkbox state
+try:
+ ITEM_IS_USER_TRISTATE = QtCore.Qt.ItemIsUserTristate
+except AttributeError:
+ ITEM_IS_USER_TRISTATE = QtCore.Qt.ItemIsTristate
+
 DEFAULT_PROJECT_LABEL = "< Default >"
 PROJECT_NAME_ROLE = QtCore.Qt.UserRole + 101
 PROJECT_IS_ACTIVE_ROLE = QtCore.Qt.UserRole + 102
diff --git a/openpype/tools/utils/host_tools.py b/openpype/tools/utils/host_tools.py
index ac242d24d2..bc4b7867c2 100644
--- a/openpype/tools/utils/host_tools.py
+++ b/openpype/tools/utils/host_tools.py
@@ -10,7 +10,7 @@ from openpype.host import IWorkfileHost, ILoadHost
 from openpype.lib import Logger
 from openpype.pipeline import (
 registered_host,
- legacy_io,
+ get_current_asset_name,
 )
 from .lib import qt_app_context
@@ -96,7 +96,7 @@ class HostToolsHelper:
 use_context = False
 if use_context:
- context = {"asset": legacy_io.Session["AVALON_ASSET"]}
+ context = {"asset": get_current_asset_name()}
 loader_tool.set_context(context, refresh=True)
 else:
 loader_tool.refresh()
diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py
index 58ece7c68f..2df46c1eae 100644
--- a/openpype/tools/utils/lib.py
+++ b/openpype/tools/utils/lib.py
@@ -14,11 +14,16 @@ from openpype.client import (
 from openpype.style import (
 get_default_entity_icon_color,
 get_objected_colors,
+ get_app_icon_path,
 )
 from openpype.resources import get_image_path
 from openpype.lib import filter_profiles, Logger
 from openpype.settings import get_project_settings
-from openpype.pipeline import registered_host
+from openpype.pipeline import (
+ registered_host,
+ get_current_context,
+ get_current_host_name,
+)
 from .constants import CHECKED_INT, UNCHECKED_INT
@@ -46,7 +51,6 @@ def checkstate_enum_to_int(state):
 return 2
-
 def center_window(window):
 """Move window to center of it's screen."""
@@ -149,6 +153,36 @@ def qt_app_context():
 yield app
+def get_openpype_qt_app():
+ """Main Qt application initialized for OpenPype processes.
+
+ This function should be used only inside an OpenPype process and never
+ inside other processes.
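A usage sketch of the pattern the tray, settings and traypublisher entry points in this diff now follow; the explicit window icon line is borrowed from the settings tool's `main()` and is optional, since the helper already applies a default icon:

```python
from qtpy import QtGui

from openpype import style
from openpype.tools.utils import get_openpype_qt_app

# Reuses an existing QApplication, or creates one with High-DPI
# scaling attributes and the pass-through rounding policy applied.
app = get_openpype_qt_app()

# Optional per-tool override of the default window icon.
app.setWindowIcon(QtGui.QIcon(style.app_icon_path()))

# ... construct and show the tool's main window here, then:
app.exec_()
```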
+ """ + + app = QtWidgets.QApplication.instance() + if app is None: + for attr_name in ( + "AA_EnableHighDpiScaling", + "AA_UseHighDpiPixmaps", + ): + attr = getattr(QtCore.Qt, attr_name, None) + if attr is not None: + QtWidgets.QApplication.setAttribute(attr) + + if hasattr( + QtWidgets.QApplication, "setHighDpiScaleFactorRoundingPolicy" + ): + QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy( + QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough + ) + + app = QtWidgets.QApplication(sys.argv) + + app.setWindowIcon(QtGui.QIcon(get_app_icon_path())) + return app + + class SharedObjects: jobs = {} icons = {} @@ -496,10 +530,11 @@ class FamilyConfigCache: return # Update the icons from the project configuration - project_name = os.environ.get("AVALON_PROJECT") - asset_name = os.environ.get("AVALON_ASSET") - task_name = os.environ.get("AVALON_TASK") - host_name = os.environ.get("AVALON_APP") + context = get_current_context() + project_name = context["project_name"] + asset_name = context["asset_name"] + task_name = context["task_name"] + host_name = get_current_host_name() if not all((project_name, asset_name, task_name)): return @@ -725,20 +760,23 @@ def create_qthread(func, *args, **kwargs): def get_repre_icons(): """Returns a dict {'provider_name': QIcon}""" + icons = {} try: from openpype_modules import sync_server except Exception: # Backwards compatibility - from openpype.modules import sync_server + try: + from openpype.modules import sync_server + except Exception: + return icons resource_path = os.path.join( os.path.dirname(sync_server.sync_server_module.__file__), "providers", "resources" ) - icons = {} if not os.path.exists(resource_path): print("No icons for Site Sync found") - return {} + return icons for file_name in os.listdir(resource_path): if file_name and not file_name.endswith("png"): @@ -752,61 +790,6 @@ def get_repre_icons(): return icons -def get_progress_for_repre(doc, active_site, remote_site): - """ - Calculates average progress for representation. - - If site has created_dt >> fully available >> progress == 1 - - Could be calculated in aggregate if it would be too slow - Args: - doc(dict): representation dict - Returns: - (dict) with active and remote sites progress - {'studio': 1.0, 'gdrive': -1} - gdrive site is not present - -1 is used to highlight the site should be added - {'studio': 1.0, 'gdrive': 0.0} - gdrive site is present, not - uploaded yet - """ - progress = {active_site: -1, - remote_site: -1} - if not doc: - return progress - - files = {active_site: 0, remote_site: 0} - doc_files = doc.get("files") or [] - for doc_file in doc_files: - if not isinstance(doc_file, dict): - continue - - sites = doc_file.get("sites") or [] - for site in sites: - if ( - # Pype 2 compatibility - not isinstance(site, dict) - # Check if site name is one of progress sites - or site["name"] not in progress - ): - continue - - files[site["name"]] += 1 - norm_progress = max(progress[site["name"]], 0) - if site.get("created_dt"): - progress[site["name"]] = norm_progress + 1 - elif site.get("progress"): - progress[site["name"]] = norm_progress + site["progress"] - else: # site exists, might be failed, do not add again - progress[site["name"]] = 0 - - # for example 13 fully avail. 
files out of 26 >> 13/26 = 0.5 - avg_progress = {} - avg_progress[active_site] = \ - progress[active_site] / max(files[active_site], 1) - avg_progress[remote_site] = \ - progress[remote_site] / max(files[remote_site], 1) - return avg_progress - - def is_sync_loader(loader): return is_remove_site_loader(loader) or is_add_site_loader(loader) diff --git a/openpype/tools/utils/widgets.py b/openpype/tools/utils/widgets.py index 5a8104611b..a70437cc65 100644 --- a/openpype/tools/utils/widgets.py +++ b/openpype/tools/utils/widgets.py @@ -410,6 +410,18 @@ class PixmapButtonPainter(QtWidgets.QWidget): self._pixmap = pixmap self._cached_pixmap = None + self._disabled = False + + def resizeEvent(self, event): + super(PixmapButtonPainter, self).resizeEvent(event) + self._cached_pixmap = None + self.repaint() + + def set_enabled(self, enabled): + if self._disabled != enabled: + return + self._disabled = not enabled + self.repaint() def set_pixmap(self, pixmap): self._pixmap = pixmap @@ -444,6 +456,8 @@ class PixmapButtonPainter(QtWidgets.QWidget): if self._cached_pixmap is None: self._cache_pixmap() + if self._disabled: + painter.setOpacity(0.5) painter.drawPixmap(0, 0, self._cached_pixmap) painter.end() @@ -464,6 +478,10 @@ class PixmapButton(ClickableFrame): layout.setContentsMargins(*args) self._update_painter_geo() + def setEnabled(self, enabled): + self._button_painter.set_enabled(enabled) + super(PixmapButton, self).setEnabled(enabled) + def set_pixmap(self, pixmap): self._button_painter.set_pixmap(pixmap) diff --git a/openpype/tools/workfiles/files_widget.py b/openpype/tools/workfiles/files_widget.py index 2f338cf516..e4715a0340 100644 --- a/openpype/tools/workfiles/files_widget.py +++ b/openpype/tools/workfiles/files_widget.py @@ -21,6 +21,7 @@ from openpype.pipeline import ( registered_host, legacy_io, Anatomy, + get_current_project_name, ) from openpype.pipeline.context_tools import ( compute_session_changes, @@ -99,7 +100,7 @@ class FilesWidget(QtWidgets.QWidget): self._task_type = None # Pype's anatomy object for current project - project_name = legacy_io.Session["AVALON_PROJECT"] + project_name = get_current_project_name() self.anatomy = Anatomy(project_name) self.project_name = project_name # Template key used to get work template from anatomy templates diff --git a/openpype/tools/workfiles/save_as_dialog.py b/openpype/tools/workfiles/save_as_dialog.py index 9f1d1060da..7052eaed06 100644 --- a/openpype/tools/workfiles/save_as_dialog.py +++ b/openpype/tools/workfiles/save_as_dialog.py @@ -12,6 +12,7 @@ from openpype.pipeline import ( from openpype.pipeline.workfile import get_last_workfile_with_version from openpype.pipeline.template_data import get_template_data_with_names from openpype.tools.utils import PlaceholderLineEdit +from openpype.pipeline import version_start, get_current_host_name log = logging.getLogger(__name__) @@ -218,7 +219,15 @@ class SaveAsDialog(QtWidgets.QDialog): # Version number input version_input = QtWidgets.QSpinBox(version_widget) - version_input.setMinimum(1) + version_input.setMinimum( + version_start.get_versioning_start( + self.data["project"]["name"], + get_current_host_name(), + task_name=self.data["task"]["name"], + task_type=self.data["task"]["type"], + family="workfile" + ) + ) version_input.setMaximum(9999) # Last version checkbox @@ -420,7 +429,13 @@ class SaveAsDialog(QtWidgets.QDialog): )[1] if version is None: - version = 1 + version = version_start.get_versioning_start( + data["project"]["name"], + get_current_host_name(), + 
task_name=self.data["task"]["name"], + task_type=self.data["task"]["type"], + family="workfile" + ) else: version += 1 diff --git a/openpype/tools/workfiles/window.py b/openpype/tools/workfiles/window.py index 53f8894665..50c39d4a40 100644 --- a/openpype/tools/workfiles/window.py +++ b/openpype/tools/workfiles/window.py @@ -15,7 +15,12 @@ from openpype.client.operations import ( ) from openpype import style from openpype import resources -from openpype.pipeline import Anatomy +from openpype.pipeline import ( + Anatomy, + get_current_project_name, + get_current_asset_name, + get_current_task_name, +) from openpype.pipeline import legacy_io from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget from openpype.tools.utils.tasks_widget import TasksWidget @@ -285,8 +290,8 @@ class Window(QtWidgets.QWidget): if use_context is None or use_context is True: context = { - "asset": legacy_io.Session["AVALON_ASSET"], - "task": legacy_io.Session["AVALON_TASK"] + "asset": get_current_asset_name(), + "task": get_current_task_name() } self.set_context(context) @@ -296,7 +301,7 @@ class Window(QtWidgets.QWidget): @property def project_name(self): - return legacy_io.Session["AVALON_PROJECT"] + return get_current_project_name() def showEvent(self, event): super(Window, self).showEvent(event) @@ -325,7 +330,7 @@ class Window(QtWidgets.QWidget): workfile_doc = None if asset_id and task_name and filepath: filename = os.path.split(filepath)[1] - project_name = legacy_io.active_project() + project_name = self.project_name workfile_doc = get_workfile_info( project_name, asset_id, task_name, filename ) @@ -356,7 +361,7 @@ class Window(QtWidgets.QWidget): if not update_data: return - project_name = legacy_io.active_project() + project_name = self.project_name session = OperationsSession() session.update_entity( @@ -373,7 +378,7 @@ class Window(QtWidgets.QWidget): return filename = os.path.split(filepath)[1] - project_name = legacy_io.active_project() + project_name = self.project_name return get_workfile_info( project_name, asset_id, task_name, filename ) @@ -385,7 +390,7 @@ class Window(QtWidgets.QWidget): workdir, filename = os.path.split(filepath) - project_name = legacy_io.active_project() + project_name = self.project_name asset_id = self.assets_widget.get_selected_asset_id() task_name = self.tasks_widget.get_selected_task_name() diff --git a/openpype/vendor/python/common/ayon_api/__init__.py b/openpype/vendor/python/common/ayon_api/__init__.py new file mode 100644 index 0000000000..027e7a3da2 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/__init__.py @@ -0,0 +1,392 @@ +from .version import __version__ +from .utils import ( + TransferProgress, + slugify_string, + create_dependency_package_basename, +) +from .server_api import ( + ServerAPI, +) + +from ._api import ( + GlobalServerAPI, + ServiceContext, + + init_service, + get_service_name, + get_service_addon_name, + get_service_addon_version, + get_service_addon_settings, + + is_connection_created, + create_connection, + close_connection, + change_token, + set_environments, + get_server_api_connection, + get_site_id, + set_site_id, + get_client_version, + set_client_version, + get_default_settings_variant, + set_default_settings_variant, + get_sender, + set_sender, + + get_base_url, + get_rest_url, + + raw_get, + raw_post, + raw_put, + raw_patch, + raw_delete, + + get, + post, + put, + patch, + delete, + + get_event, + get_events, + dispatch_event, + update_event, + enroll_event_job, + + download_file, + upload_file, + + 
query_graphql, + + get_addons_info, + get_addon_url, + download_addon_private_file, + + get_installers, + create_installer, + update_installer, + delete_installer, + download_installer, + upload_installer, + + get_dependencies_info, + update_dependency_info, + get_dependency_packages, + create_dependency_package, + update_dependency_package, + delete_dependency_package, + + download_dependency_package, + upload_dependency_package, + + upload_addon_zip, + + get_bundles, + create_bundle, + update_bundle, + delete_bundle, + + get_info, + get_server_version, + get_server_version_tuple, + get_user, + get_users, + + get_attributes_for_type, + get_attributes_fields_for_type, + get_default_fields_for_type, + + get_project_anatomy_preset, + get_project_anatomy_presets, + get_project_roots_by_site, + get_project_roots_for_site, + + get_addon_site_settings_schema, + get_addon_settings_schema, + + get_addon_studio_settings, + get_addon_project_settings, + get_addon_settings, + get_bundle_settings, + get_addons_studio_settings, + get_addons_project_settings, + get_addons_settings, + + get_secrets, + get_secret, + save_secret, + delete_secret, + + get_project_names, + get_projects, + get_project, + create_project, + update_project, + delete_project, + + get_folder_by_id, + get_folder_by_name, + get_folder_by_path, + get_folders, + get_folders_hierarchy, + + get_tasks, + get_task_by_id, + get_task_by_name, + + get_folder_ids_with_products, + get_product_by_id, + get_product_by_name, + get_products, + get_product_types, + get_project_product_types, + get_product_type_names, + + get_version_by_id, + get_version_by_name, + version_is_latest, + get_versions, + get_hero_version_by_product_id, + get_hero_version_by_id, + get_hero_versions, + get_last_versions, + get_last_version_by_product_id, + get_last_version_by_product_name, + get_representation_by_id, + get_representation_by_name, + get_representations, + get_representations_parents, + get_representation_parents, + get_repre_ids_by_context_filters, + + get_workfiles_info, + get_workfile_info, + get_workfile_info_by_id, + + get_thumbnail_by_id, + get_thumbnail, + get_folder_thumbnail, + get_version_thumbnail, + get_workfile_thumbnail, + create_thumbnail, + update_thumbnail, + + get_full_link_type_name, + get_link_types, + get_link_type, + create_link_type, + delete_link_type, + make_sure_link_type_exists, + + create_link, + delete_link, + get_entities_links, + get_folder_links, + get_folders_links, + get_task_links, + get_tasks_links, + get_product_links, + get_products_links, + get_version_links, + get_versions_links, + get_representations_links, + get_representation_links, + + send_batch_operations, +) + + +__all__ = ( + "__version__", + + "TransferProgress", + "slugify_string", + "create_dependency_package_basename", + + "ServerAPI", + + "GlobalServerAPI", + "ServiceContext", + + "init_service", + "get_service_name", + "get_service_addon_name", + "get_service_addon_version", + "get_service_addon_settings", + + "is_connection_created", + "create_connection", + "close_connection", + "change_token", + "set_environments", + "get_server_api_connection", + "get_site_id", + "set_site_id", + "get_client_version", + "set_client_version", + "get_default_settings_variant", + "set_default_settings_variant", + "get_sender", + "set_sender", + + "get_base_url", + "get_rest_url", + + "raw_get", + "raw_post", + "raw_put", + "raw_patch", + "raw_delete", + + "get", + "post", + "put", + "patch", + "delete", + + "get_event", + "get_events", + "dispatch_event", + 
"update_event", + "enroll_event_job", + + "download_file", + "upload_file", + + "query_graphql", + + "get_addons_info", + "get_addon_url", + "download_addon_private_file", + + "get_installers", + "create_installer", + "update_installer", + "delete_installer", + "download_installer", + "upload_installer", + + "get_dependencies_info", + "update_dependency_info", + "get_dependency_packages", + "create_dependency_package", + "update_dependency_package", + "delete_dependency_package", + + "download_dependency_package", + "upload_dependency_package", + + "upload_addon_zip", + + "get_bundles", + "create_bundle", + "update_bundle", + "delete_bundle", + + "get_info", + "get_server_version", + "get_server_version_tuple", + "get_user", + "get_users", + + "get_attributes_for_type", + "get_attributes_fields_for_type", + "get_default_fields_for_type", + + "get_project_anatomy_preset", + "get_project_anatomy_presets", + "get_project_roots_by_site", + "get_project_roots_for_site", + + "get_addon_site_settings_schema", + "get_addon_settings_schema", + "get_addon_studio_settings", + "get_addon_project_settings", + "get_addon_settings", + "get_bundle_settings", + "get_addons_studio_settings", + "get_addons_project_settings", + "get_addons_settings", + + "get_secrets", + "get_secret", + "save_secret", + "delete_secret", + + "get_project_names", + "get_projects", + "get_project", + "create_project", + "update_project", + "delete_project", + + "get_folder_by_id", + "get_folder_by_name", + "get_folder_by_path", + "get_folders", + + "get_tasks", + "get_task_by_id", + "get_task_by_name", + + "get_folder_ids_with_products", + "get_product_by_id", + "get_product_by_name", + "get_products", + "get_product_types", + "get_project_product_types", + "get_product_type_names", + + "get_version_by_id", + "get_version_by_name", + "version_is_latest", + "get_versions", + "get_hero_version_by_product_id", + "get_hero_version_by_id", + "get_hero_versions", + "get_last_versions", + "get_last_version_by_product_id", + "get_last_version_by_product_name", + "get_representation_by_id", + "get_representation_by_name", + "get_representations", + "get_representations_parents", + "get_representation_parents", + "get_repre_ids_by_context_filters", + + "get_workfiles_info", + "get_workfile_info", + "get_workfile_info_by_id", + + "get_thumbnail_by_id", + "get_thumbnail", + "get_folder_thumbnail", + "get_version_thumbnail", + "get_workfile_thumbnail", + "create_thumbnail", + "update_thumbnail", + + "get_full_link_type_name", + "get_link_types", + "get_link_type", + "create_link_type", + "delete_link_type", + "make_sure_link_type_exists", + + "create_link", + "delete_link", + "get_entities_links", + "get_folder_links", + "get_folders_links", + "get_task_links", + "get_tasks_links", + "get_product_links", + "get_products_links", + "get_version_links", + "get_versions_links", + "get_representations_links", + "get_representation_links", + + "send_batch_operations", +) diff --git a/openpype/vendor/python/common/ayon_api/_api.py b/openpype/vendor/python/common/ayon_api/_api.py new file mode 100644 index 0000000000..1d7b1837f1 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/_api.py @@ -0,0 +1,1244 @@ +"""Singleton based server api for direct access. + +This implementation will be probably the most used part of package. Gives +option to have singleton connection to Server URL based on environment variable +values. All public functions and classes are imported in '__init__.py' so +they're available directly in top module import. 
+""" + +import os +import socket + +from .constants import ( + SERVER_URL_ENV_KEY, + SERVER_API_ENV_KEY, +) +from .server_api import ServerAPI +from .exceptions import FailedServiceInit + + +class GlobalServerAPI(ServerAPI): + """Extended server api which also handles storing tokens and url. + + Created object expect to have set environment variables + 'AYON_SERVER_URL'. Also is expecting filled 'AYON_API_KEY' + but that can be filled afterwards with calling 'login' method. + """ + + def __init__( + self, + site_id=None, + client_version=None, + default_settings_variant=None, + ssl_verify=None, + cert=None, + ): + url = self.get_url() + token = self.get_token() + + super(GlobalServerAPI, self).__init__( + url, + token, + site_id, + client_version, + default_settings_variant, + ssl_verify, + cert, + # We want to make sure that server and api key validation + # happens all the time in 'GlobalServerAPI'. + create_session=False, + ) + self.validate_server_availability() + self.create_session() + + def login(self, username, password): + """Login to the server or change user. + + If user is the same as current user and token is available the + login is skipped. + """ + + previous_token = self._access_token + super(GlobalServerAPI, self).login(username, password) + if self.has_valid_token and previous_token != self._access_token: + os.environ[SERVER_API_ENV_KEY] = self._access_token + + @staticmethod + def get_url(): + return os.environ.get(SERVER_URL_ENV_KEY) + + @staticmethod + def get_token(): + return os.environ.get(SERVER_API_ENV_KEY) + + @staticmethod + def set_environments(url, token): + """Change url and token environemnts in currently running process. + + Args: + url (str): New server url. + token (str): User's token. + """ + + os.environ[SERVER_URL_ENV_KEY] = url or "" + os.environ[SERVER_API_ENV_KEY] = token or "" + + +class GlobalContext: + """Singleton connection holder. + + Goal is to avoid create connection on import which can be dangerous in + some cases. + """ + + _connection = None + + @classmethod + def is_connection_created(cls): + return cls._connection is not None + + @classmethod + def change_token(cls, url, token): + GlobalServerAPI.set_environments(url, token) + if cls._connection is None: + return + + if cls._connection.get_base_url() == url: + cls._connection.set_token(token) + else: + cls.close_connection() + + @classmethod + def close_connection(cls): + if cls._connection is not None: + cls._connection.close_session() + cls._connection = None + + @classmethod + def create_connection(cls, *args, **kwargs): + if cls._connection is not None: + cls.close_connection() + cls._connection = GlobalServerAPI(*args, **kwargs) + return cls._connection + + @classmethod + def get_server_api_connection(cls): + if cls._connection is None: + cls.create_connection() + return cls._connection + + +class ServiceContext: + """Helper for services running under server. + + When service is running from server the process receives information about + connection from environment variables. This class helps to initialize the + values without knowing environment variables (that may change over time). + + All what must be done is to call 'init_service' function/method. The + arguments are for cases when the service is running in specific environment + and their values are e.g. loaded from private file or for testing purposes. 
+ """ + + token = None + server_url = None + addon_name = None + addon_version = None + service_name = None + + @classmethod + def init_service( + cls, + token=None, + server_url=None, + addon_name=None, + addon_version=None, + service_name=None, + connect=True + ): + token = token or os.environ.get("AYON_API_KEY") + server_url = server_url or os.environ.get("AYON_SERVER_URL") + if not server_url: + raise FailedServiceInit("URL to server is not set") + + if not token: + raise FailedServiceInit( + "Token to server {} is not set".format(server_url) + ) + + addon_name = addon_name or os.environ.get("AYON_ADDON_NAME") + addon_version = addon_version or os.environ.get("AYON_ADDON_VERSION") + service_name = service_name or os.environ.get("AYON_SERVICE_NAME") + + cls.token = token + cls.server_url = server_url + cls.addon_name = addon_name + cls.addon_version = addon_version + cls.service_name = service_name or socket.gethostname() + + # Make sure required environments for GlobalServerAPI are set + GlobalServerAPI.set_environments(cls.server_url, cls.token) + + if connect: + print("Connecting to server \"{}\"".format(server_url)) + con = GlobalContext.get_server_api_connection() + user = con.get_user() + print("Logged in as user \"{}\"".format(user["name"])) + + +def init_service(*args, **kwargs): + """Initialize current connection from service. + + The service expect specific environment variables. The variables must all + be set to make the connection work as a service. + """ + + ServiceContext.init_service(*args, **kwargs) + + +def get_service_addon_name(): + """Name of addon which initialized service connection. + + Service context must be initialized to be able to use this function. Call + 'init_service' on you service start to do so. + + Returns: + Union[str, None]: Name of addon or None. + """ + + return ServiceContext.addon_name + + +def get_service_addon_version(): + """Version of addon which initialized service connection. + + Service context must be initialized to be able to use this function. Call + 'init_service' on you service start to do so. + + Returns: + Union[str, None]: Version of addon or None. + """ + + return ServiceContext.addon_version + + +def get_service_name(): + """Name of service. + + Service context must be initialized to be able to use this function. Call + 'init_service' on you service start to do so. + + Returns: + Union[str, None]: Name of service if service was registered. + """ + + return ServiceContext.service_name + + +def get_service_addon_settings(): + """Addon settings of service which initialized service. + + Service context must be initialized to be able to use this function. Call + 'init_service' on you service start to do so. + + Returns: + Dict[str, Any]: Addon settings. + + Raises: + ValueError: When service was not initialized. + """ + + addon_name = get_service_addon_name() + addon_version = get_service_addon_version() + if addon_name is None or addon_version is None: + raise ValueError("Service is not initialized") + return get_addon_settings(addon_name, addon_version) + + +def is_connection_created(): + """Is global connection created. + + Returns: + bool: True if connection was connected. + """ + + return GlobalContext.is_connection_created() + + +def create_connection(site_id=None, client_version=None): + """Create global connection. + + Args: + site_id (str): Machine site id/name. + client_version (str): Desktop app version. + + Returns: + GlobalServerAPI: Created connection. 
+ """ + + return GlobalContext.create_connection(site_id, client_version) + + +def close_connection(): + """Close global connection if is connected.""" + + GlobalContext.close_connection() + + +def change_token(url, token): + """Change connection token for url. + + This function can be also used to change url. + + Args: + url (str): Server url. + token (str): API key token. + """ + + GlobalContext.change_token(url, token) + + +def set_environments(url, token): + """Set global environments for global connection. + + Args: + url (Union[str, None]): Url to server or None to unset environments. + token (Union[str, None]): API key token to be used for connection. + """ + + GlobalServerAPI.set_environments(url, token) + + +def get_server_api_connection(): + """Access to global scope object of GlobalServerAPI. + + This access expect to have set environment variables 'AYON_SERVER_URL' + and 'AYON_API_KEY'. + + Returns: + GlobalServerAPI: Object of connection to server. + """ + + return GlobalContext.get_server_api_connection() + + +def get_site_id(): + con = get_server_api_connection() + return con.get_site_id() + + +def set_site_id(site_id): + """Set site id of already connected client connection. + + Site id is human-readable machine id used in AYON desktop application. + + Args: + site_id (Union[str, None]): Site id used in connection. + """ + + con = get_server_api_connection() + con.set_site_id(site_id) + + +def get_client_version(): + """Version of client used to connect to server. + + Client version is AYON client build desktop application. + + Returns: + str: Client version string used in connection. + """ + + con = get_server_api_connection() + return con.get_client_version() + + +def set_client_version(client_version): + """Set version of already connected client connection. + + Client version is version of AYON desktop application. + + Args: + client_version (Union[str, None]): Client version string. + """ + + con = get_server_api_connection() + con.set_client_version(client_version) + + +def get_default_settings_variant(): + """Default variant used for settings. + + Returns: + Union[str, None]: name of variant or None. + """ + + con = get_server_api_connection() + return con.get_client_version() + + +def set_default_settings_variant(variant): + """Change default variant for addon settings. + + Note: + It is recommended to set only 'production' or 'staging' variants + as default variant. + + Args: + variant (Union[str, None]): Settings variant name. + """ + + con = get_server_api_connection() + return con.set_default_settings_variant(variant) + + +def get_sender(): + """Sender used to send requests. + + Returns: + Union[str, None]: Sender name or None. + """ + + con = get_server_api_connection() + return con.get_sender() + + +def set_sender(sender): + """Change sender used for requests. + + Args: + sender (Union[str, None]): Sender name or None. 
+ """ + + con = get_server_api_connection() + return con.set_sender(sender) + + +def get_base_url(): + con = get_server_api_connection() + return con.get_base_url() + + +def get_rest_url(): + con = get_server_api_connection() + return con.get_rest_url() + + +def raw_get(*args, **kwargs): + con = get_server_api_connection() + return con.raw_get(*args, **kwargs) + + +def raw_post(*args, **kwargs): + con = get_server_api_connection() + return con.raw_post(*args, **kwargs) + + +def raw_put(*args, **kwargs): + con = get_server_api_connection() + return con.raw_put(*args, **kwargs) + + +def raw_patch(*args, **kwargs): + con = get_server_api_connection() + return con.raw_patch(*args, **kwargs) + + +def raw_delete(*args, **kwargs): + con = get_server_api_connection() + return con.raw_delete(*args, **kwargs) + + +def get(*args, **kwargs): + con = get_server_api_connection() + return con.get(*args, **kwargs) + + +def post(*args, **kwargs): + con = get_server_api_connection() + return con.post(*args, **kwargs) + + +def put(*args, **kwargs): + con = get_server_api_connection() + return con.put(*args, **kwargs) + + +def patch(*args, **kwargs): + con = get_server_api_connection() + return con.patch(*args, **kwargs) + + +def delete(*args, **kwargs): + con = get_server_api_connection() + return con.delete(*args, **kwargs) + + +def get_event(*args, **kwargs): + con = get_server_api_connection() + return con.get_event(*args, **kwargs) + + +def get_events(*args, **kwargs): + con = get_server_api_connection() + return con.get_events(*args, **kwargs) + + +def dispatch_event(*args, **kwargs): + con = get_server_api_connection() + return con.dispatch_event(*args, **kwargs) + + +def update_event(*args, **kwargs): + con = get_server_api_connection() + return con.update_event(*args, **kwargs) + + +def enroll_event_job(*args, **kwargs): + con = get_server_api_connection() + return con.enroll_event_job(*args, **kwargs) + + +def download_file(*args, **kwargs): + con = get_server_api_connection() + return con.download_file(*args, **kwargs) + + +def upload_file(*args, **kwargs): + con = get_server_api_connection() + return con.upload_file(*args, **kwargs) + + +def query_graphql(*args, **kwargs): + con = get_server_api_connection() + return con.query_graphql(*args, **kwargs) + + +def get_users(*args, **kwargs): + con = get_server_api_connection() + return con.get_users(*args, **kwargs) + + +def get_user(*args, **kwargs): + con = get_server_api_connection() + return con.get_user(*args, **kwargs) + + +def get_attributes_for_type(*args, **kwargs): + con = get_server_api_connection() + return con.get_attributes_for_type(*args, **kwargs) + + +def get_addons_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_addons_info(*args, **kwargs) + + +def get_addon_url(addon_name, addon_version, *subpaths): + con = get_server_api_connection() + return con.get_addon_url(addon_name, addon_version, *subpaths) + + +def download_addon_private_file(*args, **kwargs): + con = get_server_api_connection() + return con.download_addon_private_file(*args, **kwargs) + + +def get_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_info(*args, **kwargs) + + +def get_server_version(*args, **kwargs): + con = get_server_api_connection() + return con.get_server_version(*args, **kwargs) + + +def get_server_version_tuple(*args, **kwargs): + con = get_server_api_connection() + return con.get_server_version_tuple(*args, **kwargs) + + +# Installers +def get_installers(*args, **kwargs): + con = 
get_server_api_connection() + return con.get_installers(*args, **kwargs) + + +def create_installer(*args, **kwargs): + con = get_server_api_connection() + return con.create_installer(*args, **kwargs) + + +def update_installer(*args, **kwargs): + con = get_server_api_connection() + return con.update_installer(*args, **kwargs) + + +def delete_installer(*args, **kwargs): + con = get_server_api_connection() + return con.delete_installer(*args, **kwargs) + + +def download_installer(*args, **kwargs): + con = get_server_api_connection() + return con.download_installer(*args, **kwargs) + + +def upload_installer(*args, **kwargs): + con = get_server_api_connection() + return con.upload_installer(*args, **kwargs) + + +# Dependency packages +def get_dependencies_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_dependencies_info(*args, **kwargs) + + +def update_dependency_info(*args, **kwargs): + con = get_server_api_connection() + return con.update_dependency_info(*args, **kwargs) + + +def download_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.download_dependency_package(*args, **kwargs) + + +def upload_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.upload_dependency_package(*args, **kwargs) + + +def get_dependency_packages(*args, **kwargs): + con = get_server_api_connection() + return con.get_dependency_packages(*args, **kwargs) + + +def create_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.create_dependency_package(*args, **kwargs) + + +def update_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.update_dependency_package(*args, **kwargs) + + +def delete_dependency_package(*args, **kwargs): + con = get_server_api_connection() + return con.delete_dependency_package(*args, **kwargs) + + +def upload_addon_zip(*args, **kwargs): + con = get_server_api_connection() + return con.upload_addon_zip(*args, **kwargs) + + +def get_project_anatomy_presets(*args, **kwargs): + con = get_server_api_connection() + return con.get_project_anatomy_presets(*args, **kwargs) + + +def get_bundles(*args, **kwargs): + con = get_server_api_connection() + return con.get_bundles(*args, **kwargs) + + +def create_bundle(*args, **kwargs): + con = get_server_api_connection() + return con.create_bundle(*args, **kwargs) + + +def update_bundle(*args, **kwargs): + con = get_server_api_connection() + return con.update_bundle(*args, **kwargs) + + +def delete_bundle(*args, **kwargs): + con = get_server_api_connection() + return con.delete_bundle(*args, **kwargs) + + +def get_project_anatomy_preset(*args, **kwargs): + con = get_server_api_connection() + return con.get_project_anatomy_preset(*args, **kwargs) + + +def get_project_roots_by_site(*args, **kwargs): + con = get_server_api_connection() + return con.get_project_roots_by_site(*args, **kwargs) + + +def get_project_roots_for_site(*args, **kwargs): + con = get_server_api_connection() + return con.get_project_roots_for_site(*args, **kwargs) + + +def get_addon_settings_schema(*args, **kwargs): + con = get_server_api_connection() + return con.get_addon_settings_schema(*args, **kwargs) + + +def get_addon_site_settings_schema(*args, **kwargs): + con = get_server_api_connection() + return con.get_addon_site_settings_schema(*args, **kwargs) + + +def get_addon_studio_settings(*args, **kwargs): + con = get_server_api_connection() + return con.get_addon_studio_settings(*args, **kwargs) + + +def get_addon_project_settings(*args,
**kwargs): + con = get_server_api_connection() + return con.get_addon_project_settings(*args, **kwargs) + + +def get_addon_settings(*args, **kwargs): + con = get_server_api_connection() + return con.get_addon_settings(*args, **kwargs) + + +def get_addon_site_settings(*args, **kwargs): + con = get_server_api_connection() + return con.get_addon_site_settings(*args, **kwargs) + + +def get_bundle_settings(*args, **kwargs): + con = get_server_api_connection() + return con.get_bundle_settings(*args, **kwargs) + + +def get_addons_studio_settings(*args, **kwargs): + con = get_server_api_connection() + return con.get_addons_studio_settings(*args, **kwargs) + + +def get_addons_project_settings(*args, **kwargs): + con = get_server_api_connection() + return con.get_addons_project_settings(*args, **kwargs) + + +def get_addons_settings(*args, **kwargs): + con = get_server_api_connection() + return con.get_addons_settings(*args, **kwargs) + + +def get_secrets(*args, **kwargs): + con = get_server_api_connection() + return con.get_secrets(*args, **kwargs) + + +def get_secret(*args, **kwargs): + con = get_server_api_connection() + return con.get_secret(*args, **kwargs) + + +def save_secret(*args, **kwargs): + con = get_server_api_connection() + return con.save_secret(*args, **kwargs) + + +def delete_secret(*args, **kwargs): + con = get_server_api_connection() + return con.delete_secret(*args, **kwargs) + + +def get_project_names(*args, **kwargs): + con = get_server_api_connection() + return con.get_project_names(*args, **kwargs) + + +def get_project(*args, **kwargs): + con = get_server_api_connection() + return con.get_project(*args, **kwargs) + + +def get_projects(*args, **kwargs): + con = get_server_api_connection() + return con.get_projects(*args, **kwargs) + + +def get_folders(*args, **kwargs): + con = get_server_api_connection() + return con.get_folders(*args, **kwargs) + + +def get_folders_hierarchy(*args, **kwargs): + con = get_server_api_connection() + return con.get_folders_hierarchy(*args, **kwargs) + + +def get_tasks(*args, **kwargs): + con = get_server_api_connection() + return con.get_tasks(*args, **kwargs) + + +def get_task_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_task_by_id(*args, **kwargs) + + +def get_task_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_task_by_name(*args, **kwargs) + + +def get_folder_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_folder_by_id(*args, **kwargs) + + +def get_folder_by_path(*args, **kwargs): + con = get_server_api_connection() + return con.get_folder_by_path(*args, **kwargs) + + +def get_folder_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_folder_by_name(*args, **kwargs) + + +def get_folder_ids_with_products(*args, **kwargs): + con = get_server_api_connection() + return con.get_folder_ids_with_products(*args, **kwargs) + + +def get_product_types(*args, **kwargs): + con = get_server_api_connection() + return con.get_product_types(*args, **kwargs) + + +def get_project_product_types(*args, **kwargs): + con = get_server_api_connection() + return con.get_project_product_types(*args, **kwargs) + + +def get_product_type_names(*args, **kwargs): + con = get_server_api_connection() + return con.get_product_type_names(*args, **kwargs) + + +def get_products(*args, **kwargs): + con = get_server_api_connection() + return con.get_products(*args, **kwargs) + + +def get_product_by_id(*args, **kwargs): + con = get_server_api_connection() +
return con.get_product_by_id(*args, **kwargs) + + +def get_product_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_product_by_name(*args, **kwargs) + + +def get_versions(*args, **kwargs): + con = get_server_api_connection() + return con.get_versions(*args, **kwargs) + + +def get_version_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_version_by_id(*args, **kwargs) + + +def get_version_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_version_by_name(*args, **kwargs) + + +def get_hero_version_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_hero_version_by_id(*args, **kwargs) + + +def get_hero_version_by_product_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_hero_version_by_product_id(*args, **kwargs) + + +def get_hero_versions(*args, **kwargs): + con = get_server_api_connection() + return con.get_hero_versions(*args, **kwargs) + + +def get_last_versions(*args, **kwargs): + con = get_server_api_connection() + return con.get_last_versions(*args, **kwargs) + + +def get_last_version_by_product_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_last_version_by_product_id(*args, **kwargs) + + +def get_last_version_by_product_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_last_version_by_product_name(*args, **kwargs) + + +def version_is_latest(*args, **kwargs): + con = get_server_api_connection() + return con.version_is_latest(*args, **kwargs) + + +def get_representations(*args, **kwargs): + con = get_server_api_connection() + return con.get_representations(*args, **kwargs) + + +def get_representation_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_representation_by_id(*args, **kwargs) + + +def get_representation_by_name(*args, **kwargs): + con = get_server_api_connection() + return con.get_representation_by_name(*args, **kwargs) + + +def get_representation_parents(*args, **kwargs): + con = get_server_api_connection() + return con.get_representation_parents(*args, **kwargs) + + +def get_representations_parents(*args, **kwargs): + con = get_server_api_connection() + return con.get_representations_parents(*args, **kwargs) + + +def get_repre_ids_by_context_filters(*args, **kwargs): + con = get_server_api_connection() + return con.get_repre_ids_by_context_filters(*args, **kwargs) + + +def get_workfiles_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_workfiles_info(*args, **kwargs) + + +def get_workfile_info(*args, **kwargs): + con = get_server_api_connection() + return con.get_workfile_info(*args, **kwargs) + + +def get_workfile_info_by_id(*args, **kwargs): + con = get_server_api_connection() + return con.get_workfile_info_by_id(*args, **kwargs) + + +def create_project( + project_name, + project_code, + library_project=False, + preset_name=None +): + con = get_server_api_connection() + return con.create_project( + project_name, + project_code, + library_project, + preset_name + ) + + +def update_project(project_name, *args, **kwargs): + con = get_server_api_connection() + return con.update_project(project_name, *args, **kwargs) + + +def delete_project(project_name): + con = get_server_api_connection() + return con.delete_project(project_name) + + +def get_thumbnail_by_id(project_name, thumbnail_id): + con = get_server_api_connection() + return con.get_thumbnail_by_id(project_name, thumbnail_id) + + +def get_thumbnail(project_name, entity_type, entity_id,
thumbnail_id=None): + con = get_server_api_connection() + return con.get_thumbnail(project_name, entity_type, entity_id, thumbnail_id) + + +def get_folder_thumbnail(project_name, folder_id, thumbnail_id=None): + con = get_server_api_connection() + return con.get_folder_thumbnail(project_name, folder_id, thumbnail_id) + + +def get_version_thumbnail(project_name, version_id, thumbnail_id=None): + con = get_server_api_connection() + return con.get_version_thumbnail(project_name, version_id, thumbnail_id) + + +def get_workfile_thumbnail(project_name, workfile_id, thumbnail_id=None): + con = get_server_api_connection() + return con.get_workfile_thumbnail(project_name, workfile_id, thumbnail_id) + + +def create_thumbnail(project_name, src_filepath, thumbnail_id=None): + con = get_server_api_connection() + return con.create_thumbnail(project_name, src_filepath, thumbnail_id) + + +def update_thumbnail(project_name, thumbnail_id, src_filepath): + con = get_server_api_connection() + return con.update_thumbnail(project_name, thumbnail_id, src_filepath) + + +def get_attributes_fields_for_type(entity_type): + con = get_server_api_connection() + return con.get_attributes_fields_for_type(entity_type) + + +def get_default_fields_for_type(entity_type): + con = get_server_api_connection() + return con.get_default_fields_for_type(entity_type) + + +def get_full_link_type_name(link_type_name, input_type, output_type): + con = get_server_api_connection() + return con.get_full_link_type_name( + link_type_name, input_type, output_type) + + +def get_link_types(project_name): + con = get_server_api_connection() + return con.get_link_types(project_name) + + +def get_link_type(project_name, link_type_name, input_type, output_type): + con = get_server_api_connection() + return con.get_link_type( + project_name, link_type_name, input_type, output_type) + + +def create_link_type( + project_name, link_type_name, input_type, output_type, data=None): + con = get_server_api_connection() + return con.create_link_type( + project_name, link_type_name, input_type, output_type, data=data) + + +def delete_link_type(project_name, link_type_name, input_type, output_type): + con = get_server_api_connection() + return con.delete_link_type( + project_name, link_type_name, input_type, output_type) + + +def make_sure_link_type_exists( + project_name, link_type_name, input_type, output_type, data=None +): + con = get_server_api_connection() + return con.make_sure_link_type_exists( + project_name, link_type_name, input_type, output_type, data=data + ) + + +def create_link( + project_name, + link_type_name, + input_id, + input_type, + output_id, + output_type +): + con = get_server_api_connection() + return con.create_link( + project_name, + link_type_name, + input_id, input_type, + output_id, output_type + ) + + +def delete_link(project_name, link_id): + con = get_server_api_connection() + return con.delete_link(project_name, link_id) + + +def get_entities_links( + project_name, + entity_type, + entity_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_entities_links( + project_name, + entity_type, + entity_ids, + link_types, + link_direction + ) + + +def get_folders_links( + project_name, + folder_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_folders_links( + project_name, + folder_ids, + link_types, + link_direction + ) + + +def get_folder_links( + project_name, + folder_id, + link_types=None, + link_direction=None +): + con =
get_server_api_connection() + return con.get_folder_links( + project_name, + folder_id, + link_types, + link_direction + ) + + +def get_tasks_links( + project_name, + task_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_tasks_links( + project_name, + task_ids, + link_types, + link_direction + ) + + +def get_task_links( + project_name, + task_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_task_links( + project_name, + task_id, + link_types, + link_direction + ) + + +def get_products_links( + project_name, + product_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_products_links( + project_name, + product_ids, + link_types, + link_direction + ) + + +def get_product_links( + project_name, + product_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_product_links( + project_name, + product_id, + link_types, + link_direction + ) + + +def get_versions_links( + project_name, + version_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_versions_links( + project_name, + version_ids, + link_types, + link_direction + ) + + +def get_version_links( + project_name, + version_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_version_links( + project_name, + version_id, + link_types, + link_direction + ) + + +def get_representations_links( + project_name, + representation_ids=None, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_representations_links( + project_name, + representation_ids, + link_types, + link_direction + ) + + +def get_representation_links( + project_name, + representation_id, + link_types=None, + link_direction=None +): + con = get_server_api_connection() + return con.get_representation_links( + project_name, + representation_id, + link_types, + link_direction + ) + + +def send_batch_operations( + project_name, + operations, + can_fail=False, + raise_on_fail=True +): + con = get_server_api_connection() + return con.send_batch_operations( + project_name, + operations, + can_fail=can_fail, + raise_on_fail=raise_on_fail + ) diff --git a/openpype/vendor/python/common/ayon_api/constants.py b/openpype/vendor/python/common/ayon_api/constants.py new file mode 100644 index 0000000000..eb1ace0590 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/constants.py @@ -0,0 +1,134 @@ +# Environments where server url and api key are stored for global connection +SERVER_URL_ENV_KEY = "AYON_SERVER_URL" +SERVER_API_ENV_KEY = "AYON_API_KEY" +# Backwards compatibility +SERVER_TOKEN_ENV_KEY = SERVER_API_ENV_KEY + +# --- User --- +DEFAULT_USER_FIELDS = { + "roles", + "name", + "isService", + "isManager", + "isGuest", + "isAdmin", + "defaultRoles", + "createdAt", + "active", + "hasPassword", + "updatedAt", + "apiKeyPreview", + "attrib.avatarUrl", + "attrib.email", + "attrib.fullName", +} + +# --- Product types --- +DEFAULT_PRODUCT_TYPE_FIELDS = { + "name", + "icon", + "color", +} + +# --- Project --- +DEFAULT_PROJECT_FIELDS = { + "active", + "name", + "code", + "config", + "createdAt", +} + +# --- Folders --- +DEFAULT_FOLDER_FIELDS = { + "id", + "name", + "label", + "folderType", + "path", + "parentId", + "active", + "thumbnailId", +} + +# --- Tasks --- +DEFAULT_TASK_FIELDS = { + "id", + "name", + "label", + "taskType", + 
"folderId", + "active", + "assignees", +} + +# --- Products --- +DEFAULT_PRODUCT_FIELDS = { + "id", + "name", + "folderId", + "active", + "productType", +} + +# --- Versions --- +DEFAULT_VERSION_FIELDS = { + "id", + "name", + "version", + "productId", + "taskId", + "active", + "author", + "thumbnailId", + "createdAt", + "updatedAt", +} + +# --- Representations --- +DEFAULT_REPRESENTATION_FIELDS = { + "id", + "name", + "context", + "createdAt", + "active", + "versionId", +} + +REPRESENTATION_FILES_FIELDS = { + "files.name", + "files.hash", + "files.id", + "files.path", + "files.size", +} + +# --- Workfile info --- +DEFAULT_WORKFILE_INFO_FIELDS = { + "active", + "createdAt", + "createdBy", + "id", + "name", + "path", + "projectName", + "taskId", + "thumbnailId", + "updatedAt", + "updatedBy", +} + +DEFAULT_EVENT_FIELDS = { + "id", + "hash", + "createdAt", + "dependsOn", + "description", + "project", + "retries", + "sender", + "status", + "topic", + "updatedAt", + "user", +} diff --git a/openpype/vendor/python/common/ayon_api/entity_hub.py b/openpype/vendor/python/common/ayon_api/entity_hub.py new file mode 100644 index 0000000000..b9b017bac5 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/entity_hub.py @@ -0,0 +1,2647 @@ +import re +import copy +import collections +from abc import ABCMeta, abstractmethod + +import six +from ._api import get_server_api_connection +from .utils import create_entity_id, convert_entity_id, slugify_string + +UNKNOWN_VALUE = object() +PROJECT_PARENT_ID = object() +_NOT_SET = object() + + +class EntityHub(object): + """Helper to create, update or remove entities in project. + + The hub is a guide to operation with folder entities and update of project. + Project entity must already exist on server (can be only updated). + + Object is caching entities queried from server. They won't be required once + they were queried, so it is recommended to create new hub or clear cache + frequently. + + Todos: + Listen to server events about entity changes to be able update already + queried entities. + + Args: + project_name (str): Name of project where changes will happen. + connection (ServerAPI): Connection to server with logged user. + allow_data_changes (bool): This option gives ability to change 'data' + key on entities. This is not recommended as 'data' may be use for + secure information and would also slow down server queries. Content + of 'data' key can't be received only GraphQl. + """ + + def __init__( + self, project_name, connection=None, allow_data_changes=False + ): + if not connection: + connection = get_server_api_connection() + self._connection = connection + + self._project_name = project_name + self._entities_by_id = {} + self._entities_by_parent_id = collections.defaultdict(list) + self._project_entity = UNKNOWN_VALUE + + self._allow_data_changes = allow_data_changes + + self._path_reset_queue = None + + @property + def allow_data_changes(self): + """Entity hub allows changes of 'data' key on entities. + + Data are private and not all users may have access to them. Also to get + 'data' for entity is required to use REST api calls, which means to + query each entity on-by-one from server. + + Returns: + bool: Data changes are allowed. + """ + + return self._allow_data_changes + + @property + def project_name(self): + """Project name which is maintained by hub. + + Returns: + str: Name of project. + """ + + return self._project_name + + @property + def project_entity(self): + """Project entity. + + Returns: + ProjectEntity: Project entity. 
+ """ + + if self._project_entity is UNKNOWN_VALUE: + self.fill_project_from_server() + return self._project_entity + + def get_attributes_for_type(self, entity_type): + """Get attributes available for a type. + + Attributes are based on entity types. + + Todos: + Use attribute schema to validate values on entities. + + Args: + entity_type (Literal["project", "folder", "task"]): Entity type + for which should be attributes received. + + Returns: + Dict[str, Dict[str, Any]]: Attribute schemas that are available + for entered entity type. + """ + + return self._connection.get_attributes_for_type(entity_type) + + def get_entity_by_id(self, entity_id): + """Receive entity by its id without entity type. + + The entity must be already existing in cached objects. + + Args: + entity_id (str): Id of entity. + + Returns: + Union[BaseEntity, None]: Entity object or None. + """ + + return self._entities_by_id.get(entity_id) + + def get_folder_by_id(self, entity_id, allow_query=True): + """Get folder entity by id. + + Args: + entity_id (str): Id of folder entity. + allow_query (bool): Try to query entity from server if is not + available in cache. + + Returns: + Union[FolderEntity, None]: Object of folder or 'None'. + """ + + if allow_query: + return self.get_or_query_entity_by_id(entity_id, ["folder"]) + return self._entities_by_id.get(entity_id) + + def get_task_by_id(self, entity_id, allow_query=True): + """Get task entity by id. + + Args: + entity_id (str): Id of task entity. + allow_query (bool): Try to query entity from server if is not + available in cache. + + Returns: + Union[TaskEntity, None]: Object of folder or 'None'. + """ + + if allow_query: + return self.get_or_query_entity_by_id(entity_id, ["task"]) + return self._entities_by_id.get(entity_id) + + def get_or_query_entity_by_id(self, entity_id, entity_types): + """Get or query entity based on it's id and possible entity types. + + This is a helper function when entity id is known but entity type may + have multiple possible options. + + Args: + entity_id (str): Entity id. + entity_types (Iterable[str]): Possible entity types that can the id + represent. e.g. '["folder", "project"]' + """ + + existing_entity = self._entities_by_id.get(entity_id) + if existing_entity is not None: + return existing_entity + + if not entity_types: + return None + + entity_data = None + for entity_type in entity_types: + if entity_type == "folder": + entity_data = self._connection.get_folder_by_id( + self.project_name, + entity_id, + fields=self._get_folder_fields(), + own_attributes=True + ) + elif entity_type == "task": + entity_data = self._connection.get_task_by_id( + self.project_name, + entity_id, + own_attributes=True + ) + else: + raise ValueError( + "Unknonwn entity type \"{}\"".format(entity_type) + ) + + if entity_data: + break + + if not entity_data: + return None + + if entity_type == "folder": + return self.add_folder(entity_data) + elif entity_type == "task": + return self.add_task(entity_data) + + return None + + @property + def entities(self): + """Iterator over available entities. + + Returns: + Iterator[BaseEntity]: All queried/created entities cached in hub. + """ + + for entity in self._entities_by_id.values(): + yield entity + + def add_new_folder(self, *args, created=True, **kwargs): + """Create folder object and add it to entity hub. + + Args: + folder_type (str): Type of folder. Folder type must be available in + config of project folder types. + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. 
+ parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + label (Optional[str]): Folder label. + path (Optional[str]): Folder path. Path consists of all parent names + with slash('/') used as separator. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. + + Returns: + FolderEntity: Added folder entity. + """ + + folder_entity = FolderEntity( + *args, **kwargs, created=created, entity_hub=self + ) + self.add_entity(folder_entity) + return folder_entity + + def add_new_task(self, *args, created=True, **kwargs): + """Create task object and add it to entity hub. + + Args: + task_type (str): Type of task. Task type must be available in + config of project task types. + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + label (Optional[str]): Task label. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. + + Returns: + TaskEntity: Added task entity. + """ + + task_entity = TaskEntity( + *args, **kwargs, created=created, entity_hub=self + ) + self.add_entity(task_entity) + return task_entity + + def add_folder(self, folder): + """Create folder object and add it to entity hub. + + Args: + folder (Dict[str, Any]): Folder entity data. + + Returns: + FolderEntity: Added folder entity. + """ + + folder_entity = FolderEntity.from_entity_data(folder, entity_hub=self) + self.add_entity(folder_entity) + return folder_entity + + def add_task(self, task): + """Create task object and add it to entity hub. + + Args: + task (Dict[str, Any]): Task entity data. + + Returns: + TaskEntity: Added task entity. + """ + + task_entity = TaskEntity.from_entity_data(task, entity_hub=self) + self.add_entity(task_entity) + return task_entity + + def add_entity(self, entity): + """Add entity to hub cache. + + Args: + entity (BaseEntity): Entity that should be added to hub's cache. + """ + + self._entities_by_id[entity.id] = entity + parent_children = self._entities_by_parent_id[entity.parent_id] + if entity not in parent_children: + parent_children.append(entity) + + if entity.parent_id is PROJECT_PARENT_ID: + return + + parent = self._entities_by_id.get(entity.parent_id) + if parent is not None: + parent.add_child(entity.id) + + def folder_path_reseted(self, folder_id): + """Method called from 'FolderEntity' on path reset. + + This should reset cache of folder paths on all children entities. + + The path cache is always propagated from top to bottom, so if an entity + has no cached path it means that none of its children can have it cached.
+    def folder_path_reseted(self, folder_id):
+        """Method called from 'FolderEntity' on path reset.
+
+        This should reset cache of folder paths on all children entities.
+
+        The path cache is always propagated from top to bottom, so if an
+        entity does not have a cached path, none of its children can have
+        one cached either.
+        """
+
+        if self._path_reset_queue is not None:
+            self._path_reset_queue.append(folder_id)
+            return
+
+        self._path_reset_queue = collections.deque()
+        self._path_reset_queue.append(folder_id)
+        while self._path_reset_queue:
+            folder_id = self._path_reset_queue.popleft()
+            children = self._entities_by_parent_id[folder_id]
+            for child in children:
+                # Get child path but don't trigger cache
+                path = child.get_path(False)
+                if path is not None:
+                    # Reset its path cache if it is set
+                    child.reset_path()
+                else:
+                    self._path_reset_queue.append(child.id)
+
+        self._path_reset_queue = None
+
+    def unset_entity_parent(self, entity_id, parent_id):
+        entity = self._entities_by_id.get(entity_id)
+        parent = self._entities_by_id.get(parent_id)
+        children_ids = UNKNOWN_VALUE
+        if parent is not None:
+            children_ids = parent.get_children_ids(False)
+
+        has_set_parent = False
+        if entity is not None:
+            has_set_parent = entity.parent_id == parent_id
+
+        new_parent_id = None
+        if has_set_parent:
+            entity.parent_id = new_parent_id
+
+        if children_ids is not UNKNOWN_VALUE and entity_id in children_ids:
+            parent.remove_child(entity_id)
+
+        if entity is None or not has_set_parent:
+            self.reset_immutable_for_hierarchy_cache(parent_id)
+            return
+
+        orig_parent_children = self._entities_by_parent_id[parent_id]
+        if entity in orig_parent_children:
+            orig_parent_children.remove(entity)
+
+        new_parent_children = self._entities_by_parent_id[new_parent_id]
+        if entity not in new_parent_children:
+            new_parent_children.append(entity)
+        self.reset_immutable_for_hierarchy_cache(parent_id)
+
+    def set_entity_parent(self, entity_id, parent_id, orig_parent_id=_NOT_SET):
+        parent = self._entities_by_id.get(parent_id)
+        entity = self._entities_by_id.get(entity_id)
+        if entity is None:
+            if parent is not None:
+                children_ids = parent.get_children_ids(False)
+                if (
+                    children_ids is not UNKNOWN_VALUE
+                    and entity_id in children_ids
+                ):
+                    parent.remove_child(entity_id)
+                    self.reset_immutable_for_hierarchy_cache(parent.id)
+            return
+
+        if orig_parent_id is _NOT_SET:
+            orig_parent_id = entity.parent_id
+        if orig_parent_id == parent_id:
+            return
+
+        orig_parent_children = self._entities_by_parent_id[orig_parent_id]
+        if entity in orig_parent_children:
+            orig_parent_children.remove(entity)
+        self.reset_immutable_for_hierarchy_cache(orig_parent_id)
+
+        orig_parent = self._entities_by_id.get(orig_parent_id)
+        if orig_parent is not None:
+            orig_parent.remove_child(entity_id)
+
+        parent_children = self._entities_by_parent_id[parent_id]
+        if entity not in parent_children:
+            parent_children.append(entity)
+
+        entity.parent_id = parent_id
+        if parent is None or parent.get_children_ids(False) is UNKNOWN_VALUE:
+            return
+
+        parent.add_child(entity_id)
+        self.reset_immutable_for_hierarchy_cache(parent_id)
+
+    def _query_entity_children(self, entity):
+        folder_fields = self._get_folder_fields()
+        tasks = []
+        folders = []
+        if entity.entity_type == "project":
+            folders = list(self._connection.get_folders(
+                entity["name"],
+                parent_ids=[entity.id],
+                fields=folder_fields,
+                own_attributes=True
+            ))
+
+        elif entity.entity_type == "folder":
+            folders = list(self._connection.get_folders(
+                self.project_entity["name"],
+                parent_ids=[entity.id],
+                fields=folder_fields,
+                own_attributes=True
+            ))
+
+            tasks = list(self._connection.get_tasks(
+                self.project_entity["name"],
+                folder_ids=[entity.id],
+                own_attributes=True
+            ))
+
+        children_ids = {
+            child.id
+            for child in self._entities_by_parent_id[entity.id]
+        }
+        for folder in folders:
+            folder_entity = self._entities_by_id.get(folder["id"])
+            if 
folder_entity is not None: + if folder_entity.parent_id == entity.id: + children_ids.add(folder_entity.id) + continue + + folder_entity = self.add_folder(folder) + children_ids.add(folder_entity.id) + + for task in tasks: + task_entity = self._entities_by_id.get(task["id"]) + if task_entity is not None: + if task_entity.parent_id == entity.id: + children_ids.add(task_entity.id) + continue + + task_entity = self.add_task(task) + children_ids.add(task_entity.id) + + entity.fill_children_ids(children_ids) + + def get_entity_children(self, entity, allow_query=True): + children_ids = entity.get_children_ids(allow_query=False) + if children_ids is not UNKNOWN_VALUE: + return entity.get_children() + + if children_ids is UNKNOWN_VALUE and not allow_query: + return UNKNOWN_VALUE + + self._query_entity_children(entity) + + return entity.get_children() + + def delete_entity(self, entity): + parent_id = entity.parent_id + if parent_id is None: + return + + parent = self._entities_by_id.get(parent_id) + if parent is not None: + parent.remove_child(entity.id) + + def reset_immutable_for_hierarchy_cache( + self, entity_id, bottom_to_top=True + ): + if bottom_to_top is None or entity_id is None: + return + + reset_queue = collections.deque() + reset_queue.append(entity_id) + if bottom_to_top: + while reset_queue: + entity_id = reset_queue.popleft() + entity = self.get_entity_by_id(entity_id) + if entity is None: + continue + entity.reset_immutable_for_hierarchy_cache(None) + reset_queue.append(entity.parent_id) + else: + while reset_queue: + entity_id = reset_queue.popleft() + entity = self.get_entity_by_id(entity_id) + if entity is None: + continue + entity.reset_immutable_for_hierarchy_cache(None) + for child in self._entities_by_parent_id[entity.id]: + reset_queue.append(child.id) + + def fill_project_from_server(self): + """Query project data from server and create project entity. + + This method will invalidate previous object of Project entity. + + Returns: + ProjectEntity: Entity that was updated with server data. + + Raises: + ValueError: When project was not found on server. 
+ """ + + project_name = self.project_name + project = self._connection.get_project( + project_name, + own_attributes=True + ) + if not project: + raise ValueError( + "Project \"{}\" was not found.".format(project_name) + ) + + self._project_entity = ProjectEntity( + project["code"], + parent_id=PROJECT_PARENT_ID, + entity_id=project["name"], + library=project["library"], + folder_types=project["folderTypes"], + task_types=project["taskTypes"], + statuses=project["statuses"], + name=project["name"], + attribs=project["ownAttrib"], + data=project["data"], + active=project["active"], + entity_hub=self + ) + self.add_entity(self._project_entity) + return self._project_entity + + def _get_folder_fields(self): + folder_fields = set( + self._connection.get_default_fields_for_type("folder") + ) + folder_fields.add("hasProducts") + if self._allow_data_changes: + folder_fields.add("data") + return folder_fields + + def query_entities_from_server(self): + """Query whole project at once.""" + + project_entity = self.fill_project_from_server() + + folder_fields = self._get_folder_fields() + + folders = self._connection.get_folders( + project_entity.name, + fields=folder_fields, + own_attributes=True + ) + tasks = self._connection.get_tasks( + project_entity.name, + own_attributes=True + ) + folders_by_parent_id = collections.defaultdict(list) + for folder in folders: + parent_id = folder["parentId"] + folders_by_parent_id[parent_id].append(folder) + + tasks_by_parent_id = collections.defaultdict(list) + for task in tasks: + parent_id = task["folderId"] + tasks_by_parent_id[parent_id].append(task) + + lock_queue = collections.deque() + hierarchy_queue = collections.deque() + hierarchy_queue.append((None, project_entity)) + while hierarchy_queue: + item = hierarchy_queue.popleft() + parent_id, parent_entity = item + + lock_queue.append(parent_entity) + + children_ids = set() + for folder in folders_by_parent_id[parent_id]: + folder_entity = self.add_folder(folder) + children_ids.add(folder_entity.id) + folder_entity.has_published_content = folder["hasProducts"] + hierarchy_queue.append((folder_entity.id, folder_entity)) + + for task in tasks_by_parent_id[parent_id]: + task_entity = self.add_task(task) + lock_queue.append(task_entity) + children_ids.add(task_entity.id) + + parent_entity.fill_children_ids(children_ids) + + # Lock entities when all are added to hub + # - lock only entities added in this method + while lock_queue: + entity = lock_queue.popleft() + entity.lock() + + def lock(self): + if self._project_entity is None: + return + + for entity in self._entities_by_id.values(): + entity.lock() + + def _get_top_entities(self): + all_ids = set(self._entities_by_id.keys()) + return [ + entity + for entity in self._entities_by_id.values() + if entity.parent_id not in all_ids + ] + + def _split_entities(self): + top_entities = self._get_top_entities() + entities_queue = collections.deque(top_entities) + removed_entity_ids = [] + created_entity_ids = [] + other_entity_ids = [] + while entities_queue: + entity = entities_queue.popleft() + removed = entity.removed + if removed: + removed_entity_ids.append(entity.id) + elif entity.created: + created_entity_ids.append(entity.id) + else: + other_entity_ids.append(entity.id) + + for child in tuple(self._entities_by_parent_id[entity.id]): + if removed: + self.unset_entity_parent(child.id, entity.id) + entities_queue.append(child) + return created_entity_ids, other_entity_ids, removed_entity_ids + + def _get_update_body(self, entity, changes=None): + if 
changes is None:
+            changes = entity.changes
+
+        if not changes:
+            return None
+        return {
+            "type": "update",
+            "entityType": entity.entity_type,
+            "entityId": entity.id,
+            "data": changes
+        }
+
+    def _get_create_body(self, entity):
+        return {
+            "type": "create",
+            "entityType": entity.entity_type,
+            "entityId": entity.id,
+            "data": entity.to_create_body_data()
+        }
+
+    def _get_delete_body(self, entity):
+        return {
+            "type": "delete",
+            "entityType": entity.entity_type,
+            "entityId": entity.id
+        }
+
+    def _pre_commit_types_changes(
+        self, project_changes, orig_types, changes_key, post_changes
+    ):
+        """Compare changes of types on a project.
+
+        Compare old and new types. If some old types were removed, the
+        project changes are adjusted so that the final change of types
+        happens only after all other entities have been changed.
+
+        Args:
+            project_changes (dict[str, Any]): Project changes.
+            orig_types (list[dict[str, Any]]): Original types.
+            changes_key (Literal[folderTypes, taskTypes]): Key of type changes
+                in project changes.
+            post_changes (dict[str, Any]): An object where post changes will
+                be stored.
+        """
+
+        if changes_key not in project_changes:
+            return
+
+        new_types = project_changes[changes_key]
+
+        orig_types_by_name = {
+            type_info["name"]: type_info
+            for type_info in orig_types
+        }
+        new_names = {
+            type_info["name"]
+            for type_info in new_types
+        }
+        diff_names = set(orig_types_by_name) - new_names
+        if not diff_names:
+            return
+
+        # Create copy of folder type changes to post changes
+        # - post changes will be committed at the end
+        post_changes[changes_key] = copy.deepcopy(new_types)
+
+        for type_name in diff_names:
+            new_types.append(orig_types_by_name[type_name])
+
+    def _pre_commit_project(self):
+        """Some project changes cannot be committed before hierarchy changes.
+
+        It is not possible to change folder types or task types if there are
+        existing hierarchy items using the removed types. For that purpose,
+        a union of all old and new types is committed first, and the
+        remaining (post) changes are applied after all existing entities
+        have been updated.
+
+        Returns:
+            dict[str, Any]: Changes that will be committed after hierarchy
+                changes.
+        """
+
+        project_changes = self.project_entity.changes
+
+        post_changes = {}
+        if not project_changes:
+            return post_changes
+
+        self._pre_commit_types_changes(
+            project_changes,
+            self.project_entity.get_orig_folder_types(),
+            "folderTypes",
+            post_changes
+        )
+        self._pre_commit_types_changes(
+            project_changes,
+            self.project_entity.get_orig_task_types(),
+            "taskTypes",
+            post_changes
+        )
+        self._connection.update_project(self.project_name, **project_changes)
+        return post_changes
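The helpers above produce the operation bodies that `commit_changes` (below) sends to the server in one batch. Roughly, with illustrative values:

```python
# Illustrative shape of an operations batch; ids and values are examples.
operations_body = [
    {
        "type": "create",
        "entityType": "folder",
        "entityId": "f1a2b3c4d5e6",
        "data": {"name": "sq001", "folderType": "Sequence"},
    },
    {
        "type": "update",
        "entityType": "task",
        "entityId": "a9b8c7d6e5f4",
        "data": {"attrib": {"frameStart": 1001}},
    },
    {
        "type": "delete",
        "entityType": "folder",
        "entityId": "0f1e2d3c4b5a",
    },
]
```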
+    def commit_changes(self):
+        """Commit any changes that happened on entities.
+
+        Todos:
+            Use Operations Session instead of known operations body.
+        """
+
+        post_project_changes = self._pre_commit_project()
+        self.project_entity.lock()
+
+        operations_body = []
+
+        created_entity_ids, other_entity_ids, removed_entity_ids = (
+            self._split_entities()
+        )
+        processed_ids = set()
+        for entity_id in other_entity_ids:
+            if entity_id in processed_ids:
+                continue
+
+            entity = self._entities_by_id[entity_id]
+            changes = entity.changes
+            processed_ids.add(entity_id)
+            if not changes:
+                continue
+
+            bodies = [self._get_update_body(entity, changes)]
+            # Parent was created and was not yet added to operations body
+            parent_queue = collections.deque()
+            parent_queue.append(entity.parent_id)
+            while parent_queue:
+                # Make sure entity's parents are created
+                parent_id = parent_queue.popleft()
+                if (
+                    parent_id is UNKNOWN_VALUE
+                    or parent_id in processed_ids
+                    or parent_id not in created_entity_ids
+                ):
+                    continue
+
+                parent = self._entities_by_id.get(parent_id)
+                processed_ids.add(parent.id)
+                bodies.append(self._get_create_body(parent))
+                parent_queue.append(parent.parent_id)
+
+            operations_body.extend(reversed(bodies))
+
+        for entity_id in created_entity_ids:
+            if entity_id in processed_ids:
+                continue
+            entity = self._entities_by_id[entity_id]
+            processed_ids.add(entity_id)
+            operations_body.append(self._get_create_body(entity))
+
+        for entity_id in reversed(removed_entity_ids):
+            if entity_id in processed_ids:
+                continue
+
+            entity = self._entities_by_id.pop(entity_id)
+            parent_children = self._entities_by_parent_id[entity.parent_id]
+            if entity in parent_children:
+                parent_children.remove(entity)
+
+            if not entity.created:
+                operations_body.append(self._get_delete_body(entity))
+
+        self._connection.send_batch_operations(
+            self.project_name, operations_body
+        )
+        if post_project_changes:
+            self._connection.update_project(
+                self.project_name, **post_project_changes)
+
+        self.lock()
+
+
+class AttributeValue(object):
+    def __init__(self, value):
+        self._value = value
+        self._origin_value = copy.deepcopy(value)
+
+    def get_value(self):
+        return self._value
+
+    def set_value(self, value):
+        self._value = value
+
+    value = property(get_value, set_value)
+
+    @property
+    def changed(self):
+        return self._value != self._origin_value
+
+    def lock(self):
+        self._origin_value = copy.deepcopy(self._value)
+
+
+class Attributes(object):
+    """Object representing attribs of entity.
+
+    Todos:
+        This could be enhanced to know attribute schema and validate values
+        based on the schema.
+
+    Args:
+        attrib_keys (Iterable[str]): Keys that are available in attribs of the
+            entity.
+        values (Union[None, Dict[str, Any]]): Values of attributes. 
+    """
+
+    def __init__(self, attrib_keys, values=UNKNOWN_VALUE):
+        if values in (UNKNOWN_VALUE, None):
+            values = {}
+        self._attributes = {
+            key: AttributeValue(values.get(key))
+            for key in attrib_keys
+        }
+
+    def __contains__(self, key):
+        return key in self._attributes
+
+    def __getitem__(self, key):
+        return self._attributes[key].value
+
+    def __setitem__(self, key, value):
+        self._attributes[key].set_value(value)
+
+    def __iter__(self):
+        for key in self._attributes:
+            yield key
+
+    def keys(self):
+        return self._attributes.keys()
+
+    def values(self):
+        for attribute in self._attributes.values():
+            yield attribute.value
+
+    def items(self):
+        for key, attribute in self._attributes.items():
+            yield key, attribute.value
+
+    def get(self, key, default=None):
+        """Get value of attribute.
+
+        Args:
+            key (str): Attribute name.
+            default (Any): Default value to return when attribute is not
+                found.
+        """
+
+        attribute = self._attributes.get(key)
+        if attribute is None:
+            return default
+        return attribute.value
+
+    def set(self, key, value):
+        """Change value of attribute.
+
+        Args:
+            key (str): Attribute name.
+            value (Any): New value of the attribute.
+        """
+
+        self[key] = value
+
+    def get_attribute(self, key):
+        """Access to attribute object.
+
+        Args:
+            key (str): Name of attribute.
+
+        Returns:
+            AttributeValue: Object of attribute value.
+
+        Raises:
+            KeyError: When attribute is not available.
+        """
+
+        return self._attributes[key]
+
+    def lock(self):
+        for attribute in self._attributes.values():
+            attribute.lock()
+
+    @property
+    def changes(self):
+        """Attribute value changes.
+
+        Returns:
+            Dict[str, Any]: Key mapping with new values.
+        """
+
+        return {
+            attr_key: attribute.value
+            for attr_key, attribute in self._attributes.items()
+            if attribute.changed
+        }
+
+    def to_dict(self, ignore_none=True):
+        output = {}
+        for key, value in self.items():
+            if (
+                value is UNKNOWN_VALUE
+                or (ignore_none and value is None)
+            ):
+                continue
+
+            output[key] = value
+        return output
+
+
+@six.add_metaclass(ABCMeta)
+class BaseEntity(object):
+    """Object representation of entity from server which captures changes.
+
+    All data on a created object are expected as "current data" on the server
+    entity unless the entity has 'created' set to 'True'. So if new data
+    should be stored to a server entity, fill the entity with server data
+    first and then change them.
+
+    Calling 'lock' method will mark entity as "saved" and all changes made on
+    entity are set as "current data" on server.
+
+    Args:
+        entity_id (Union[str, None]): Id of the entity. New id is created if
+            not passed.
+        parent_id (Union[str, None]): Id of parent entity.
+        name (str): Name of entity.
+        attribs (Dict[str, Any]): Attribute values.
+        data (Dict[str, Any]): Entity data (custom data).
+        thumbnail_id (Union[str, None]): Id of entity's thumbnail.
+        active (bool): Is entity active.
+        entity_hub (EntityHub): Object of entity hub which created object of
+            the entity.
+        created (Optional[bool]): Entity is new. When 'None' is passed the
+            value is defined based on value of 'entity_id'. 
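The `Attributes` container above tracks per-key changes against the original values. A self-contained sketch:

```python
# Only keys whose value differs from the original end up in 'changes'.
attribs = Attributes(["frameStart", "frameEnd"], {"frameStart": 1001})
attribs["frameEnd"] = 1100

print(attribs.changes)  # {'frameEnd': 1100}
attribs.lock()          # mark current values as the new originals
print(attribs.changes)  # {}
```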
+ """ + + def __init__( + self, + entity_id=None, + parent_id=UNKNOWN_VALUE, + name=UNKNOWN_VALUE, + attribs=UNKNOWN_VALUE, + data=UNKNOWN_VALUE, + thumbnail_id=UNKNOWN_VALUE, + active=UNKNOWN_VALUE, + entity_hub=None, + created=None + ): + if entity_hub is None: + raise ValueError("Missing required kwarg 'entity_hub'") + + self._entity_hub = entity_hub + + if created is None: + created = entity_id is None + + entity_id = self._prepare_entity_id(entity_id) + + if data is None: + data = {} + + children_ids = UNKNOWN_VALUE + if created: + children_ids = set() + + if not created and parent_id is UNKNOWN_VALUE: + raise ValueError("Existing entity is missing parent id.") + + # These are public without any validation at this moment + # may change in future (e.g. name will have regex validation) + self._entity_id = entity_id + + self._parent_id = parent_id + self._name = name + self.active = active + self._created = created + self._thumbnail_id = thumbnail_id + self._attribs = Attributes( + self._get_attributes_for_type(self.entity_type), + attribs + ) + self._data = data + self._children_ids = children_ids + + self._orig_parent_id = parent_id + self._orig_name = name + self._orig_data = copy.deepcopy(data) + self._orig_thumbnail_id = thumbnail_id + self._orig_active = active + + self._immutable_for_hierarchy_cache = None + + def __repr__(self): + return "<{} - {}>".format(self.__class__.__name__, self.id) + + def __getitem__(self, item): + return getattr(self, item) + + def __setitem__(self, item, value): + return setattr(self, item, value) + + def _prepare_entity_id(self, entity_id): + entity_id = convert_entity_id(entity_id) + if entity_id is None: + entity_id = create_entity_id() + return entity_id + + @property + def id(self): + """Access to entity id under which is entity available on server. + + Returns: + str: Entity id. + """ + + return self._entity_id + + @property + def removed(self): + return self._parent_id is None + + @property + def orig_parent_id(self): + return self._orig_parent_id + + @property + def attribs(self): + """Entity attributes based on server configuration. + + Returns: + Attributes: Attributes object handling changes and values of + attributes on entity. + """ + + return self._attribs + + @property + def data(self): + """Entity custom data that are not stored by any deterministic model. + + Be aware that 'data' can't be queried using GraphQl and cannot be + updated partially. + + Returns: + Dict[str, Any]: Custom data on entity. + """ + + return self._data + + @property + def project_name(self): + """Quick access to project from entity hub. + + Returns: + str: Name of project under which entity lives. + """ + + return self._entity_hub.project_name + + @property + @abstractmethod + def entity_type(self): + """Entity type coresponding to server. + + Returns: + Literal[project, folder, task]: Entity type. + """ + + pass + + @property + @abstractmethod + def parent_entity_types(self): + """Entity type coresponding to server. + + Returns: + Iterable[str]: Possible entity types of parent. + """ + + pass + + @property + @abstractmethod + def changes(self): + """Receive entity changes. + + Returns: + Union[Dict[str, Any], None]: All values that have changed on + entity. New entity must return None. + """ + + pass + + @classmethod + @abstractmethod + def from_entity_data(cls, entity_data, entity_hub): + """Create entity based on queried data from server. + + Args: + entity_data (Dict[str, Any]): Entity data from server. + entity_hub (EntityHub): Hub which handle the entity. 
+ + Returns: + BaseEntity: Object of the class. + """ + + pass + + @abstractmethod + def to_create_body_data(self): + """Convert object of entity to data for server on creation. + + Returns: + Dict[str, Any]: Entity data. + """ + + pass + + @property + def immutable_for_hierarchy(self): + """Entity is immutable for hierarchy changes. + + Hierarchy changes can be considered as change of name or parents. + + Returns: + bool: Entity is immutable for hierarchy changes. + """ + + if self._immutable_for_hierarchy_cache is not None: + return self._immutable_for_hierarchy_cache + + immutable_for_hierarchy = self._immutable_for_hierarchy + if immutable_for_hierarchy is not None: + self._immutable_for_hierarchy_cache = immutable_for_hierarchy + return self._immutable_for_hierarchy_cache + + for child in self._entity_hub.get_entity_children(self): + if child.immutable_for_hierarchy: + self._immutable_for_hierarchy_cache = True + return self._immutable_for_hierarchy_cache + + self._immutable_for_hierarchy_cache = False + return self._immutable_for_hierarchy_cache + + @property + def _immutable_for_hierarchy(self): + """Override this method to define if entity object is immutable. + + This property was added to define immutable state of Folder entities + which is used in property 'immutable_for_hierarchy'. + + Returns: + Union[bool, None]: Bool to explicitly telling if is immutable or + not otherwise None. + """ + + return None + + @property + def has_cached_immutable_hierarchy(self): + return self._immutable_for_hierarchy_cache is not None + + def reset_immutable_for_hierarchy_cache(self, bottom_to_top=True): + """Clear cache of immutable hierarchy property. + + This is used when entity changed parent or a child was added. + + Args: + bottom_to_top (bool): Reset cache from top hierarchy to bottom or + from bottom hierarchy to top. + """ + + self._immutable_for_hierarchy_cache = None + self._entity_hub.reset_immutable_for_hierarchy_cache( + self.id, bottom_to_top + ) + + def _get_default_changes(self): + """Collect changes of common data on entity. + + Returns: + Dict[str, Any]: Changes on entity. Key and it's new value. + """ + + changes = {} + if self._orig_name != self._name: + changes["name"] = self._name + + if self._entity_hub.allow_data_changes: + if self._orig_data != self._data: + changes["data"] = self._data + + if self._orig_thumbnail_id != self._thumbnail_id: + changes["thumbnailId"] = self._thumbnail_id + + if self._orig_active != self.active: + changes["active"] = self.active + + attrib_changes = self.attribs.changes + if attrib_changes: + changes["attrib"] = attrib_changes + return changes + + def _get_attributes_for_type(self, entity_type): + return self._entity_hub.get_attributes_for_type(entity_type) + + def lock(self): + """Lock entity as 'saved' so all changes are discarded.""" + + self._orig_parent_id = self._parent_id + self._orig_name = self._name + self._orig_data = copy.deepcopy(self._data) + self._orig_thumbnail_id = self.thumbnail_id + self._attribs.lock() + + self._immutable_for_hierarchy_cache = None + self._created = False + + def _get_entity_by_id(self, entity_id): + return self._entity_hub.get_entity_by_id(entity_id) + + def get_name(self): + return self._name + + def set_name(self, name): + self._name = name + + name = property(get_name, set_name) + + def get_parent_id(self): + """Parent entity id. + + Returns: + Union[str, None]: Id of parent entity or none if is not set. + """ + + return self._parent_id + + def set_parent_id(self, parent_id): + """Change parent by id. 
+ + Args: + parent_id (Union[str, None]): Id of new parent for entity. + + Raises: + ValueError: If parent was not found by id. + TypeError: If validation of parent does not pass. + """ + + if parent_id != self._parent_id: + orig_parent_id = self._parent_id + self._parent_id = parent_id + self._entity_hub.set_entity_parent( + self.id, parent_id, orig_parent_id + ) + + parent_id = property(get_parent_id, set_parent_id) + + def get_parent(self, allow_query=True): + """Parent entity. + + Returns: + Union[BaseEntity, None]: Parent object. + """ + + parent = self._entity_hub.get_entity_by_id(self._parent_id) + if parent is not None: + return parent + + if not allow_query: + return self._parent_id + + if self._parent_id is UNKNOWN_VALUE: + return self._parent_id + + return self._entity_hub.get_or_query_entity_by_id( + self._parent_id, self.parent_entity_types + ) + + def set_parent(self, parent): + """Change parent object. + + Args: + parent (BaseEntity): New parent for entity. + + Raises: + TypeError: If validation of parent does not pass. + """ + + parent_id = None + if parent is not None: + parent_id = parent.id + self._entity_hub.set_entity_parent(self.id, parent_id) + + parent = property(get_parent, set_parent) + + def get_children_ids(self, allow_query=True): + """Access to children objects. + + Todos: + Children should be maybe handled by EntityHub instead of entities + themselves. That would simplify 'set_entity_parent', + 'unset_entity_parent' and other logic related to changing + hierarchy. + + Returns: + Union[List[str], Type[UNKNOWN_VALUE]]: Children iterator. + """ + + if self._children_ids is UNKNOWN_VALUE: + if not allow_query: + return self._children_ids + self._entity_hub.get_entity_children(self, True) + return set(self._children_ids) + + children_ids = property(get_children_ids) + + def get_children(self, allow_query=True): + """Access to children objects. + + Returns: + Union[List[BaseEntity], Type[UNKNOWN_VALUE]]: Children iterator. + """ + + if self._children_ids is UNKNOWN_VALUE: + if not allow_query: + return self._children_ids + return self._entity_hub.get_entity_children(self, True) + + return [ + self._entity_hub.get_entity_by_id(children_id) + for children_id in self._children_ids + ] + + children = property(get_children) + + def add_child(self, child): + """Add child entity. + + Args: + child (BaseEntity): Child object to add. + + Raises: + TypeError: When child object has invalid type to be children. + """ + + child_id = child + if isinstance(child_id, BaseEntity): + child_id = child.id + + if self._children_ids is not UNKNOWN_VALUE: + self._children_ids.add(child_id) + + self._entity_hub.set_entity_parent(child_id, self.id) + + def remove_child(self, child): + """Remove child entity. + + Is ignored if child is not in children. + + Args: + child (Union[str, BaseEntity]): Child object or child id to remove. + """ + + child_id = child + if isinstance(child_id, BaseEntity): + child_id = child.id + + if self._children_ids is not UNKNOWN_VALUE: + self._children_ids.discard(child_id) + self._entity_hub.unset_entity_parent(child_id, self.id) + + def get_thumbnail_id(self): + """Thumbnail id of entity. + + Returns: + Union[str, None]: Id of parent entity or none if is not set. + """ + + return self._thumbnail_id + + def set_thumbnail_id(self, thumbnail_id): + """Change thumbnail id. + + Args: + thumbnail_id (Union[str, None]): Id of thumbnail for entity. 
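Re-parenting through the properties above keeps the hub caches consistent. A sketch reusing the hypothetical entities from the earlier hierarchy example:

```python
# Both forms route through 'EntityHub.set_entity_parent' internally.
other_seq = hub.add_new_folder(
    folder_type="Sequence", parent_id=project.id, name="sq002"
)
shot.set_parent(other_seq)
print([child.name for child in other_seq.get_children()])
```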
+ """ + + self._thumbnail_id = thumbnail_id + + thumbnail_id = property(get_thumbnail_id, set_thumbnail_id) + + @property + def created(self): + """Entity is new. + + Returns: + bool: Entity is newly created. + """ + + return self._created + + def fill_children_ids(self, children_ids): + """Fill children ids on entity. + + Warning: + This is not an api call but is called from entity hub. + """ + + self._children_ids = set(children_ids) + + +class ProjectStatus: + """Project status class. + + Args: + name (str): Name of the status. e.g. 'In progress' + short_name (Optional[str]): Short name of the status. e.g. 'IP' + state (Optional[Literal[not_started, in_progress, done, blocked]]): A + state of the status. + icon (Optional[str]): Icon of the status. e.g. 'play_arrow'. + color (Optional[str]): Color of the status. e.g. '#eeeeee'. + index (Optional[int]): Index of the status. + project_statuses (Optional[_ProjectStatuses]): Project statuses + wrapper. + """ + + valid_states = ("not_started", "in_progress", "done", "blocked") + color_regex = re.compile(r"#([a-f0-9]{6})$") + default_state = "in_progress" + default_color = "#eeeeee" + + def __init__( + self, + name, + short_name=None, + state=None, + icon=None, + color=None, + index=None, + project_statuses=None, + is_new=None, + ): + short_name = short_name or "" + icon = icon or "" + state = state or self.default_state + color = color or self.default_color + self._name = name + self._short_name = short_name + self._icon = icon + self._slugified_name = None + self._state = None + self._color = None + self.set_state(state) + self.set_color(color) + + self._original_name = name + self._original_short_name = short_name + self._original_icon = icon + self._original_state = state + self._original_color = color + self._original_index = index + + self._index = index + self._project_statuses = project_statuses + if is_new is None: + is_new = index is None or project_statuses is None + self._is_new = is_new + + def __str__(self): + short_name = "" + if self.short_name: + short_name = "({})".format(self.short_name) + return "<{} {}{}>".format( + self.__class__.__name__, self.name, short_name + ) + + def __repr__(self): + return str(self) + + def __getitem__(self, key): + if key in { + "name", "short_name", "icon", "state", "color", "slugified_name" + }: + return getattr(self, key) + raise KeyError(key) + + def __setitem__(self, key, value): + if key in {"name", "short_name", "icon", "state", "color"}: + return setattr(self, key, value) + raise KeyError(key) + + def lock(self): + """Lock status. + + Changes were commited and current values are now the original values. + """ + + self._is_new = False + self._original_name = self.name + self._original_short_name = self.short_name + self._original_icon = self.icon + self._original_state = self.state + self._original_color = self.color + self._original_index = self.index + + @staticmethod + def slugify_name(name): + """Slugify status name for name comparison. + + Args: + name (str): Name of the status. + + Returns: + str: Slugified name. + """ + + return slugify_string(name.lower()) + + def get_project_statuses(self): + """Internal logic method. + + Returns: + _ProjectStatuses: Project statuses object. + """ + + return self._project_statuses + + def set_project_statuses(self, project_statuses): + """Internal logic method to change parent object. + + Args: + project_statuses (_ProjectStatuses): Project statuses object. 
+        """
+
+        self._project_statuses = project_statuses
+
+    def unset_project_statuses(self, project_statuses):
+        """Internal logic method to unset parent object.
+
+        Args:
+            project_statuses (_ProjectStatuses): Project statuses object.
+        """
+
+        if self._project_statuses is project_statuses:
+            self._project_statuses = None
+            self._index = None
+
+    @property
+    def changed(self):
+        """Status has changed.
+
+        Returns:
+            bool: Status has changed.
+        """
+
+        return (
+            self._is_new
+            or self._original_name != self._name
+            or self._original_short_name != self._short_name
+            or self._original_index != self._index
+            or self._original_state != self._state
+            or self._original_icon != self._icon
+            or self._original_color != self._color
+        )
+
+    def delete(self):
+        """Remove status from project statuses object."""
+
+        if self._project_statuses is not None:
+            self._project_statuses.remove(self)
+
+    def get_index(self):
+        """Get index of status.
+
+        Returns:
+            Union[int, None]: Index of status or None if status is not under
+                project.
+        """
+
+        return self._index
+
+    def set_index(self, index, **kwargs):
+        """Change status index.
+
+        Args:
+            index (int): New index of the status.
+        """
+
+        if kwargs.get("from_parent"):
+            self._index = index
+        else:
+            self._project_statuses.set_status_index(self, index)
+
+    def get_name(self):
+        """Status name.
+
+        Returns:
+            str: Status name.
+        """
+
+        return self._name
+
+    def set_name(self, name):
+        """Change status name.
+
+        Args:
+            name (str): New status name.
+        """
+
+        if not isinstance(name, six.string_types):
+            raise TypeError("Name must be a string.")
+        if name == self._name:
+            return
+        self._name = name
+        self._slugified_name = None
+
+    def get_short_name(self):
+        """Status short name, 3 letters tops.
+
+        Returns:
+            str: Status short name.
+        """
+
+        return self._short_name
+
+    def set_short_name(self, short_name):
+        """Change status short name.
+
+        Args:
+            short_name (str): New status short name. 3 letters tops.
+        """
+
+        if not isinstance(short_name, six.string_types):
+            raise TypeError("Short name must be a string.")
+        self._short_name = short_name
+
+    def get_icon(self):
+        """Name of icon to use for status.
+
+        Returns:
+            str: Name of the icon.
+        """
+
+        return self._icon
+
+    def set_icon(self, icon):
+        """Change status icon name.
+
+        Args:
+            icon (str): Name of the icon.
+        """
+
+        if icon is None:
+            icon = ""
+        if not isinstance(icon, six.string_types):
+            raise TypeError("Icon name must be a string.")
+        self._icon = icon
+
+    @property
+    def slugified_name(self):
+        """Slugified and lowercased status name.
+
+        Can be used for comparison of existing statuses. e.g. 'In Progress'
+        vs. 'in-progress'.
+
+        Returns:
+            str: Slugified and lowercased status name.
+        """
+
+        if self._slugified_name is None:
+            self._slugified_name = self.slugify_name(self.name)
+        return self._slugified_name
+
+    def get_state(self):
+        """Get state of project status.
+
+        Returns:
+            Literal[not_started, in_progress, done, blocked]: General
+                state of status.
+        """
+
+        return self._state
+
+    def set_state(self, state):
+        """Set state of project status.
+
+        Args:
+            state (Literal[not_started, in_progress, done, blocked]): General
+                state of status.
+        """
+
+        if state not in self.valid_states:
+            raise ValueError("Invalid state '{}'".format(str(state)))
+        self._state = state
+
+    def get_color(self):
+        """Get color of project status.
+
+        Returns:
+            str: Status color.
+        """
+
+        return self._color
+
+    def set_color(self, color):
+        """Set color of project status.
+
+        Args:
+            color (str): Color in hex format. Example: '#ff0000'.
+        """
+
+        if not isinstance(color, six.string_types):
+            raise TypeError(
+                "Color must be a string, got '{}'".format(type(color)))
+        color = color.lower()
+        if self.color_regex.fullmatch(color) is None:
+            raise ValueError("Invalid color value '{}'".format(color))
+        self._color = color
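The setters above validate their input. A small sketch (the status name and color are arbitrary examples):

```python
status = ProjectStatus("In progress", short_name="IP")
status.set_state("done")     # OK, one of ProjectStatus.valid_states
status.set_color("#3498DB")  # OK, lowercased and matched as '#rrggbb'
status.set_color("red")      # raises ValueError: Invalid color value 'red'
```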
+
+    name = property(get_name, set_name)
+    short_name = property(get_short_name, set_short_name)
+    project_statuses = property(get_project_statuses, set_project_statuses)
+    index = property(get_index, set_index)
+    state = property(get_state, set_state)
+    color = property(get_color, set_color)
+    icon = property(get_icon, set_icon)
+
+    def _validate_other_p_statuses(self, other):
+        """Validate if other status can be used for move.
+
+        To position two statuses in relation to each other, both must belong
+        to the same existing '_ProjectStatuses' object.
+
+        Args:
+            other (ProjectStatus): Other status to validate.
+        """
+
+        o_project_statuses = other.project_statuses
+        m_project_statuses = self.project_statuses
+        if o_project_statuses is None and m_project_statuses is None:
+            raise ValueError("Neither status is assigned to a project.")
+
+        missing_status = None
+        if o_project_statuses is None:
+            missing_status = other
+        elif m_project_statuses is None:
+            missing_status = self
+        if missing_status is not None:
+            raise ValueError(
+                "Status '{}' is not assigned to a project.".format(
+                    missing_status.name))
+        if m_project_statuses is not o_project_statuses:
+            raise ValueError(
+                "Statuses are assigned to different projects."
+                " Cannot execute move."
+            )
+
+    def move_before(self, other):
+        """Move status before other status.
+
+        Args:
+            other (ProjectStatus): Status to move before.
+        """
+
+        self._validate_other_p_statuses(other)
+        self._project_statuses.set_status_index(self, other.index)
+
+    def move_after(self, other):
+        """Move status after other status.
+
+        Args:
+            other (ProjectStatus): Status to move after.
+        """
+
+        self._validate_other_p_statuses(other)
+        self._project_statuses.set_status_index(self, other.index + 1)
+
+    def to_data(self):
+        """Convert status to data.
+
+        Returns:
+            dict[str, str]: Status data.
+        """
+
+        output = {
+            "name": self.name,
+            "shortName": self.short_name,
+            "state": self.state,
+            "icon": self.icon,
+            "color": self.color,
+        }
+        if (
+            not self._is_new
+            and self._original_name
+            and self.name != self._original_name
+        ):
+            output["original_name"] = self._original_name
+        return output
+
+    @classmethod
+    def from_data(cls, data, index=None, project_statuses=None):
+        """Create project status from data.
+
+        Args:
+            data (dict[str, str]): Status data.
+            index (Optional[int]): Status index.
+            project_statuses (Optional[_ProjectStatuses]): Project statuses
+                object which wraps the status for a project.
+
+        Returns:
+            ProjectStatus: Created status object.
+        """
+
+        return cls(
+            data["name"],
+            data.get("shortName", data.get("short_name")),
+            data.get("state"),
+            data.get("icon"),
+            data.get("color"),
+            index=index,
+            project_statuses=project_statuses
+        )
+
+
+class _ProjectStatuses:
+    """Wrapper for project statuses.
+
+    Supports basic methods to add, change or remove statuses from a project.
+
+    To add new statuses use 'create' or 'append' methods. To change
+    statuses receive them by one of the getter methods and change their
+    values.
+
+    Todos:
+        Validate if statuses are duplicated. 
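A hedged sketch of working with this wrapper through a project entity. The `statuses` property is defined on `ProjectEntity` further below; the status names here are examples:

```python
statuses = project.statuses
on_hold = statuses.create("On hold", short_name="OH", state="blocked")
done = statuses.get("Done")
if done is not None:
    on_hold.move_before(done)
statuses.remove_by_name("Omitted", ignore_missing=True)
```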
+ """ + + def __init__(self, statuses): + self._statuses = [ + ProjectStatus.from_data(status, idx, self) + for idx, status in enumerate(statuses) + ] + self._orig_status_length = len(self._statuses) + self._set_called = False + + def __len__(self): + return len(self._statuses) + + def __iter__(self): + """Iterate over statuses. + + Yields: + ProjectStatus: Project status. + """ + + for status in self._statuses: + yield status + + def create( + self, + name, + short_name=None, + state=None, + icon=None, + color=None, + ): + """Create project status. + + Args: + name (str): Name of the status. e.g. 'In progress' + short_name (Optional[str]): Short name of the status. e.g. 'IP' + state (Optional[Literal[not_started, in_progress, done, blocked]]): A + state of the status. + icon (Optional[str]): Icon of the status. e.g. 'play_arrow'. + color (Optional[str]): Color of the status. e.g. '#eeeeee'. + + Returns: + ProjectStatus: Created project status. + """ + + status = ProjectStatus( + name, short_name, state, icon, color, is_new=True + ) + self.append(status) + return status + + def lock(self): + """Lock statuses. + + Changes were commited and current values are now the original values. + """ + + self._orig_status_length = len(self._statuses) + self._set_called = False + for status in self._statuses: + status.lock() + + def to_data(self): + """Convert to project statuses data.""" + + return [ + status.to_data() + for status in self._statuses + ] + + def set(self, statuses): + """Explicitly override statuses. + + This method does not handle if statuses changed or not. + + Args: + statuses (list[dict[str, str]]): List of statuses data. + """ + + self._set_called = True + self._statuses = [ + ProjectStatus.from_data(status, idx, self) + for idx, status in enumerate(statuses) + ] + + @property + def changed(self): + """Statuses have changed. + + Returns: + bool: True if statuses changed, False otherwise. + """ + + if self._set_called: + return True + + # Check if status length changed + # - when all statuses are removed it is a changed + if self._orig_status_length != len(self._statuses): + return True + # Go through all statuses and check if any of them changed + for status in self._statuses: + if status.changed: + return True + return False + + def get(self, name, default=None): + """Get status by name. + + Args: + name (str): Status name. + default (Any): Default value of status is not found. + + Returns: + Union[ProjectStatus, Any]: Status or default value. + """ + + return next( + ( + status + for status in self._statuses + if status.name == name + ), + default + ) + + get_status_by_name = get + + def index(self, status, **kwargs): + """Get status index. + + Args: + status (ProjectStatus): Status to get index of. + default (Optional[Any]): Default value if status is not found. + + Returns: + Union[int, Any]: Status index. + + Raises: + ValueError: If status is not found and default value is not + defined. + """ + + output = next( + ( + idx + for idx, st in enumerate(self._statuses) + if st is status + ), + None + ) + if output is not None: + return output + + if "default" in kwargs: + return kwargs["default"] + raise ValueError("Status '{}' not found".format(status.name)) + + def get_status_by_slugified_name(self, name): + """Get status by slugified name. + + Args: + name (str): Status name. Is slugified before search. + + Returns: + Union[ProjectStatus, None]: Status or None if not found. 
+ """ + + slugified_name = ProjectStatus.slugify_name(name) + return next( + ( + status + for status in self._statuses + if status.slugified_name == slugified_name + ), + None + ) + + def remove_by_name(self, name, ignore_missing=False): + """Remove status by name. + + Args: + name (str): Status name. + ignore_missing (Optional[bool]): If True, no error is raised if + status is not found. + + Returns: + ProjectStatus: Removed status. + """ + + matching_status = self.get(name) + if matching_status is None: + if ignore_missing: + return + raise ValueError( + "Status '{}' not found in project".format(name)) + return self.remove(matching_status) + + def remove(self, status, ignore_missing=False): + """Remove status. + + Args: + status (ProjectStatus): Status to remove. + ignore_missing (Optional[bool]): If True, no error is raised if + status is not found. + + Returns: + Union[ProjectStatus, None]: Removed status. + """ + + index = self.index(status, default=None) + if index is None: + if ignore_missing: + return None + raise ValueError("Status '{}' not in project".format(status)) + + return self.pop(index) + + def pop(self, index): + """Remove status by index. + + Args: + index (int): Status index. + + Returns: + ProjectStatus: Removed status. + """ + + status = self._statuses.pop(index) + status.unset_project_statuses(self) + for st in self._statuses[index:]: + st.set_index(st.index - 1, from_parent=True) + return status + + def insert(self, index, status): + """Insert status at index. + + Args: + index (int): Status index. + status (Union[ProjectStatus, dict[str, str]]): Status to insert. + Can be either status object or status data. + + Returns: + ProjectStatus: Inserted status. + """ + + if not isinstance(status, ProjectStatus): + status = ProjectStatus.from_data(status) + + start_index = index + end_index = len(self._statuses) + 1 + matching_index = self.index(status, default=None) + if matching_index is not None: + if matching_index == index: + status.set_index(index, from_parent=True) + return + + self._statuses.pop(matching_index) + if matching_index < index: + start_index = matching_index + end_index = index + 1 + else: + end_index -= 1 + + status.set_project_statuses(self) + self._statuses.insert(index, status) + for idx, st in enumerate(self._statuses[start_index:end_index]): + st.set_index(start_index + idx, from_parent=True) + return status + + def append(self, status): + """Add new status to the end of the list. + + Args: + status (Union[ProjectStatus, dict[str, str]]): Status to insert. + Can be either status object or status data. + + Returns: + ProjectStatus: Inserted status. + """ + + return self.insert(len(self._statuses), status) + + def set_status_index(self, status, index): + """Set status index. + + Args: + status (ProjectStatus): Status to set index. + index (int): New status index. + """ + + return self.insert(index, status) + + +class ProjectEntity(BaseEntity): + """Entity representing project on AYON server. + + Args: + project_code (str): Project code. + library (bool): Is project library project. + folder_types (list[dict[str, Any]]): Folder types definition. + task_types (list[dict[str, Any]]): Task types definition. + entity_id (Optional[str]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. 
+        active (bool): Is entity active.
+        entity_hub (EntityHub): Object of entity hub which created object of
+            the entity.
+        created (Optional[bool]): Entity is new. When 'None' is passed the
+            value is defined based on value of 'entity_id'.
+    """
+
+    entity_type = "project"
+    parent_entity_types = []
+    # TODO These are hardcoded but maybe should be used from server???
+    default_folder_type_icon = "folder"
+    default_task_type_icon = "task_alt"
+
+    def __init__(
+        self,
+        project_code,
+        library,
+        folder_types,
+        task_types,
+        statuses,
+        *args,
+        **kwargs
+    ):
+        super(ProjectEntity, self).__init__(*args, **kwargs)
+
+        self._project_code = project_code
+        self._library_project = library
+        self._folder_types = folder_types
+        self._task_types = task_types
+        self._statuses_obj = _ProjectStatuses(statuses)
+
+        self._orig_project_code = project_code
+        self._orig_library_project = library
+        self._orig_folder_types = copy.deepcopy(folder_types)
+        self._orig_task_types = copy.deepcopy(task_types)
+        self._orig_statuses = copy.deepcopy(statuses)
+
+    def _prepare_entity_id(self, entity_id):
+        if entity_id != self.project_name:
+            raise ValueError(
+                "Unexpected entity id value \"{}\". Expected \"{}\"".format(
+                    entity_id, self.project_name))
+        return entity_id
+
+    def get_parent(self, *args, **kwargs):
+        return None
+
+    def set_parent(self, parent):
+        raise ValueError(
+            "Parent of project cannot be set to {}".format(parent)
+        )
+
+    parent = property(get_parent, set_parent)
+
+    def get_orig_folder_types(self):
+        return copy.deepcopy(self._orig_folder_types)
+
+    def get_folder_types(self):
+        return copy.deepcopy(self._folder_types)
+
+    def set_folder_types(self, folder_types):
+        new_folder_types = []
+        for folder_type in folder_types:
+            if "icon" not in folder_type:
+                folder_type["icon"] = self.default_folder_type_icon
+            new_folder_types.append(folder_type)
+        self._folder_types = new_folder_types
+
+    def get_orig_task_types(self):
+        return copy.deepcopy(self._orig_task_types)
+
+    def get_task_types(self):
+        return copy.deepcopy(self._task_types)
+
+    def set_task_types(self, task_types):
+        new_task_types = []
+        for task_type in task_types:
+            if "icon" not in task_type:
+                task_type["icon"] = self.default_task_type_icon
+            new_task_types.append(task_type)
+        self._task_types = new_task_types
+
+    def get_orig_statuses(self):
+        return copy.deepcopy(self._orig_statuses)
+
+    def get_statuses(self):
+        return self._statuses_obj
+
+    def set_statuses(self, statuses):
+        self._statuses_obj.set(statuses)
+
+    folder_types = property(get_folder_types, set_folder_types)
+    task_types = property(get_task_types, set_task_types)
+    statuses = property(get_statuses, set_statuses)
+
+    def lock(self):
+        super(ProjectEntity, self).lock()
+        self._orig_folder_types = copy.deepcopy(self._folder_types)
+        self._orig_task_types = copy.deepcopy(self._task_types)
+        self._statuses_obj.lock()
+
+    @property
+    def changes(self):
+        changes = self._get_default_changes()
+        if self._orig_folder_types != self._folder_types:
+            changes["folderTypes"] = self.get_folder_types()
+
+        if self._orig_task_types != self._task_types:
+            changes["taskTypes"] = self.get_task_types()
+
+        if self._statuses_obj.changed:
+            changes["statuses"] = self._statuses_obj.to_data()
+
+        return changes
+
+    @classmethod
+    def from_entity_data(cls, project, entity_hub):
+        return cls(
+            project["code"],
+            parent_id=PROJECT_PARENT_ID,
+            entity_id=project["name"],
+            library=project["library"],
+            folder_types=project["folderTypes"],
+            task_types=project["taskTypes"],
+            statuses=project["statuses"],
+            
name=project["name"], + attribs=project["ownAttrib"], + data=project["data"], + active=project["active"], + entity_hub=entity_hub + ) + + def to_create_body_data(self): + raise NotImplementedError( + "ProjectEntity does not support conversion to entity data" + ) + + +class FolderEntity(BaseEntity): + """Entity representing a folder on AYON server. + + Args: + folder_type (str): Type of folder. Folder type must be available in + config of project folder types. + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + label (Optional[str]): Folder label. + path (Optional[str]): Folder path. Path consist of all parent names + with slash('/') used as separator. + entity_hub (EntityHub): Object of entity hub which created object of + the entity. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. + """ + + entity_type = "folder" + parent_entity_types = ["folder", "project"] + + def __init__(self, folder_type, *args, label=None, path=None, **kwargs): + super(FolderEntity, self).__init__(*args, **kwargs) + # Autofill project as parent of folder if is not yet set + # - this can be guessed only if folder was just created + if self.created and self._parent_id is UNKNOWN_VALUE: + self._parent_id = self.project_name + + self._folder_type = folder_type + self._label = label + + self._orig_folder_type = folder_type + self._orig_label = label + # Know if folder has any products + # - is used to know if folder allows hierarchy changes + self._has_published_content = False + self._path = path + + def get_folder_type(self): + return self._folder_type + + def set_folder_type(self, folder_type): + self._folder_type = folder_type + + folder_type = property(get_folder_type, set_folder_type) + + def get_label(self): + return self._label + + def set_label(self, label): + self._label = label + + label = property(get_label, set_label) + + def get_path(self, dynamic_value=True): + if not dynamic_value: + return self._path + + if self._path is None: + parent = self.parent + path = self.name + if parent.entity_type == "folder": + parent_path = parent.path + path = "/".join([parent_path, path]) + self._path = path + return self._path + + def reset_path(self): + self._path = None + self._entity_hub.folder_path_reseted(self.id) + + path = property(get_path) + + def get_has_published_content(self): + return self._has_published_content + + def set_has_published_content(self, has_published_content): + if self._has_published_content is has_published_content: + return + + self._has_published_content = has_published_content + # Reset immutable cache of parents + self._entity_hub.reset_immutable_for_hierarchy_cache(self.id) + + has_published_content = property( + get_has_published_content, set_has_published_content + ) + + @property + def _immutable_for_hierarchy(self): + if self.has_published_content: + return True + return None + + def lock(self): + super(FolderEntity, self).lock() + self._orig_folder_type = self._folder_type + + @property + def changes(self): + changes = self._get_default_changes() + + if self._orig_parent_id != self._parent_id: + parent_id = self._parent_id + if parent_id == self.project_name: + parent_id = None + changes["parentId"] = 
parent_id + + if self._orig_folder_type != self._folder_type: + changes["folderType"] = self._folder_type + + label = self._label + if self._name == label: + label = None + + if label != self._orig_label: + changes["label"] = label + + return changes + + @classmethod + def from_entity_data(cls, folder, entity_hub): + parent_id = folder["parentId"] + if parent_id is None: + parent_id = entity_hub.project_entity.id + return cls( + folder["folderType"], + label=folder["label"], + path=folder["path"], + entity_id=folder["id"], + parent_id=parent_id, + name=folder["name"], + data=folder.get("data"), + attribs=folder["ownAttrib"], + active=folder["active"], + thumbnail_id=folder["thumbnailId"], + created=False, + entity_hub=entity_hub + ) + + def to_create_body_data(self): + parent_id = self._parent_id + if parent_id is UNKNOWN_VALUE: + raise ValueError("Folder does not have set 'parent_id'") + + if parent_id == self.project_name: + parent_id = None + + if not self.name or self.name is UNKNOWN_VALUE: + raise ValueError("Folder does not have set 'name'") + + output = { + "name": self.name, + "folderType": self.folder_type, + "parentId": parent_id, + } + attrib = self.attribs.to_dict() + if attrib: + output["attrib"] = attrib + + if self.active is not UNKNOWN_VALUE: + output["active"] = self.active + + if self.thumbnail_id is not UNKNOWN_VALUE: + output["thumbnailId"] = self.thumbnail_id + + if self._entity_hub.allow_data_changes: + output["data"] = self._data + return output + + +class TaskEntity(BaseEntity): + """Entity representing a task on AYON server. + + Args: + task_type (str): Type of task. Task type must be available in config + of project task types. + entity_id (Union[str, None]): Id of the entity. New id is created if + not passed. + parent_id (Union[str, None]): Id of parent entity. + name (str): Name of entity. + label (Optional[str]): Task label. + attribs (Dict[str, Any]): Attribute values. + data (Dict[str, Any]): Entity data (custom data). + thumbnail_id (Union[str, None]): Id of entity's thumbnail. + active (bool): Is entity active. + entity_hub (EntityHub): Object of entity hub which created object of + the entity. + created (Optional[bool]): Entity is new. When 'None' is passed the + value is defined based on value of 'entity_id'. 
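For reference, `TaskEntity.to_create_body_data` (below) produces a payload roughly like this, with illustrative values:

```python
{
    "name": "compositing",
    "taskType": "Compositing",
    "folderId": "f1a2b3c4d5e6",
    "attrib": {"frameStart": 1001},
    "active": True,
}
```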
+    """
+
+    entity_type = "task"
+    parent_entity_types = ["folder"]
+
+    def __init__(self, task_type, *args, label=None, **kwargs):
+        super(TaskEntity, self).__init__(*args, **kwargs)
+
+        self._task_type = task_type
+        self._label = label
+
+        self._orig_task_type = task_type
+        self._orig_label = label
+
+        self._children_ids = set()
+
+    def lock(self):
+        super(TaskEntity, self).lock()
+        self._orig_task_type = self._task_type
+
+    def get_task_type(self):
+        return self._task_type
+
+    def set_task_type(self, task_type):
+        self._task_type = task_type
+
+    task_type = property(get_task_type, set_task_type)
+
+    def get_label(self):
+        return self._label
+
+    def set_label(self, label):
+        self._label = label
+
+    label = property(get_label, set_label)
+
+    def add_child(self, child):
+        raise ValueError("Task does not support adding children")
+
+    @property
+    def changes(self):
+        changes = self._get_default_changes()
+
+        if self._orig_parent_id != self._parent_id:
+            changes["folderId"] = self._parent_id
+
+        if self._orig_task_type != self._task_type:
+            changes["taskType"] = self._task_type
+
+        label = self._label
+        if self._name == label:
+            label = None
+
+        if label != self._orig_label:
+            changes["label"] = label
+
+        return changes
+
+    @classmethod
+    def from_entity_data(cls, task, entity_hub):
+        return cls(
+            task["taskType"],
+            entity_id=task["id"],
+            label=task["label"],
+            parent_id=task["folderId"],
+            name=task["name"],
+            data=task.get("data"),
+            attribs=task["ownAttrib"],
+            active=task["active"],
+            created=False,
+            entity_hub=entity_hub
+        )
+
+    def to_create_body_data(self):
+        if self.parent_id is UNKNOWN_VALUE:
+            raise ValueError("Task does not have set 'parent_id'")
+
+        output = {
+            "name": self.name,
+            "taskType": self.task_type,
+            "folderId": self.parent_id,
+        }
+        attrib = self.attribs.to_dict()
+        if attrib:
+            output["attrib"] = attrib
+
+        if self.active is not UNKNOWN_VALUE:
+            output["active"] = self.active
+
+        if (
+            self._entity_hub.allow_data_changes
+            and self._data is not UNKNOWN_VALUE
+        ):
+            output["data"] = self._data
+        return output
diff --git a/openpype/vendor/python/common/ayon_api/events.py b/openpype/vendor/python/common/ayon_api/events.py
new file mode 100644
index 0000000000..aa256f6cfc
--- /dev/null
+++ b/openpype/vendor/python/common/ayon_api/events.py
@@ -0,0 +1,51 @@
+import copy
+
+
+class ServerEvent(object):
+    def __init__(
+        self,
+        topic,
+        sender=None,
+        event_hash=None,
+        project_name=None,
+        username=None,
+        dependencies=None,
+        description=None,
+        summary=None,
+        payload=None,
+        finished=True,
+        store=True,
+    ):
+        if dependencies is None:
+            dependencies = []
+        if payload is None:
+            payload = {}
+        if summary is None:
+            summary = {}
+
+        self.topic = topic
+        self.sender = sender
+        self.event_hash = event_hash
+        self.project_name = project_name
+        self.username = username
+        self.dependencies = dependencies
+        self.description = description
+        self.summary = summary
+        self.payload = payload
+        self.finished = finished
+        self.store = store
+
+    def to_data(self):
+        return {
+            "topic": self.topic,
+            "sender": self.sender,
+            "hash": self.event_hash,
+            "project": self.project_name,
+            "user": self.username,
+            "dependencies": copy.deepcopy(self.dependencies),
+            "description": self.description,
+            "summary": copy.deepcopy(self.summary),
+            "payload": self.payload,
+            "finished": self.finished,
+            "store": self.store
+        }
diff --git a/openpype/vendor/python/common/ayon_api/exceptions.py 
b/openpype/vendor/python/common/ayon_api/exceptions.py new file mode 100644 index 0000000000..db4917e90a --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/exceptions.py @@ -0,0 +1,107 @@ +import copy + + +class UrlError(Exception): + """Url cannot be parsed as url. + + Exception may contain hints of possible fixes of url that can be used in + UI if needed. + """ + + def __init__(self, message, title, hints=None): + if hints is None: + hints = [] + + self.title = title + self.hints = hints + super(UrlError, self).__init__(message) + + +class ServerError(Exception): + pass + + +class UnauthorizedError(ServerError): + pass + + +class AuthenticationError(ServerError): + pass + + +class ServerNotReached(ServerError): + pass + + +class RequestError(Exception): + def __init__(self, message, response): + self.response = response + super(RequestError, self).__init__(message) + + +class HTTPRequestError(RequestError): + pass + + +class GraphQlQueryFailed(Exception): + def __init__(self, errors, query, variables): + if variables is None: + variables = {} + + error_messages = [] + for error in errors: + msg = error["message"] + path = error.get("path") + if path: + msg += " on item '{}'".format("/".join(path)) + locations = error.get("locations") + if locations: + _locations = [ + "Line {} Column {}".format( + location["line"], location["column"] + ) + for location in locations + ] + + msg += " ({})".format(" and ".join(_locations)) + error_messages.append(msg) + + message = "GraphQl query Failed" + if error_messages: + message = "{}: {}".format(message, " | ".join(error_messages)) + + self.errors = errors + self.query = query + self.variables = copy.deepcopy(variables) + super(GraphQlQueryFailed, self).__init__(message) + + +class MissingEntityError(Exception): + pass + + +class ProjectNotFound(MissingEntityError): + def __init__(self, project_name, message=None): + if not message: + message = "Project \"{}\" was not found".format(project_name) + self.project_name = project_name + super(ProjectNotFound, self).__init__(message) + + +class FolderNotFound(MissingEntityError): + def __init__(self, project_name, folder_id, message=None): + self.project_name = project_name + self.folder_id = folder_id + if not message: + message = ( + "Folder with id \"{}\" was not found in project \"{}\"" + ).format(folder_id, project_name) + super(FolderNotFound, self).__init__(message) + + +class FailedOperations(Exception): + pass + + +class FailedServiceInit(Exception): + pass diff --git a/openpype/vendor/python/common/ayon_api/graphql.py b/openpype/vendor/python/common/ayon_api/graphql.py new file mode 100644 index 0000000000..854f207a00 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/graphql.py @@ -0,0 +1,983 @@ +import copy +import numbers +from abc import ABCMeta, abstractmethod + +import six + +from .exceptions import GraphQlQueryFailed + +FIELD_VALUE = object() + + +def fields_to_dict(fields): + if not fields: + return None + + output = {} + for field in fields: + hierarchy = field.split(".") + last = hierarchy.pop(-1) + value = output + for part in hierarchy: + if value is FIELD_VALUE: + break + + if part not in value: + value[part] = {} + value = value[part] + + if value is not FIELD_VALUE: + value[last] = FIELD_VALUE + return output + + +class QueryVariable(object): + """Object representing single varible used in GraphQlQuery. + + Variable definition is in GraphQl query header but it's value is used + in fields. + + Args: + variable_name (str): Name of variable in query. 
+ """ + + def __init__(self, variable_name): + self._variable_name = variable_name + self._name = "${}".format(variable_name) + + @property + def name(self): + """Name used in field filter.""" + + return self._name + + @property + def variable_name(self): + """Name of variable in query definition.""" + + return self._variable_name + + def __hash__(self): + return self._name.__hash__() + + def __str__(self): + return self._name + + def __format__(self, *args, **kwargs): + return self._name.__format__(*args, **kwargs) + + +class GraphQlQuery: + """GraphQl query which can have fields to query. + + Single use object which can be used only for one query. Object and children + objects keep track about paging and progress. + + Args: + name (str): Name of query. + """ + + offset = 2 + + def __init__(self, name): + self._name = name + self._variables = {} + self._children = [] + self._has_multiple_edge_fields = None + + @property + def indent(self): + """Indentation for preparation of query string. + + Returns: + int: Ident spaces. + """ + + return 0 + + @property + def child_indent(self): + """Indentation for preparation of query string used by children. + + Returns: + int: Ident spaces for children. + """ + + return self.indent + + @property + def need_query(self): + """Still need query from server. + + Needed for edges which use pagination. + + Returns: + bool: If still need query from server. + """ + + for child in self._children: + if child.need_query: + return True + return False + + @property + def has_multiple_edge_fields(self): + if self._has_multiple_edge_fields is None: + edge_counter = 0 + for child in self._children: + edge_counter += child.sum_edge_fields(2) + if edge_counter > 1: + break + self._has_multiple_edge_fields = edge_counter > 1 + + return self._has_multiple_edge_fields + + def add_variable(self, key, value_type, value=None): + """Add variable to query. + + Args: + key (str): Variable name. + value_type (str): Type of expected value in variables. This is + graphql type e.g. "[String!]", "Int", "Boolean", etc. + value (Any): Default value for variable. Can be changed later. + + Returns: + QueryVariable: Created variable object. + + Raises: + KeyError: If variable was already added before. + """ + + if key in self._variables: + raise KeyError( + "Variable \"{}\" was already set with type {}.".format( + key, value_type + ) + ) + + variable = QueryVariable(key) + self._variables[key] = { + "type": value_type, + "variable": variable, + "value": value + } + return variable + + def get_variable(self, key): + """Variable object. + + Args: + key (str): Variable name added to headers. + + Returns: + QueryVariable: Variable object used in query string. + """ + + return self._variables[key]["variable"] + + def get_variable_value(self, key, default=None): + """Get Current value of variable. + + Args: + key (str): Variable name. + default (Any): Default value if variable is available. + + Returns: + Any: Variable value. + """ + + variable_item = self._variables.get(key) + if variable_item: + return variable_item["value"] + return default + + def set_variable_value(self, key, value): + """Set value for variable. + + Args: + key (str): Variable name under which the value is stored. + value (Any): Variable value used in query. Variable is not used + if value is 'None'. + """ + + self._variables[key]["value"] = value + + def get_variables_values(self): + """Calculate variable values used that should be used in query. + + Variables with value set to 'None' are skipped. 
+ + Returns: + Dict[str, Any]: Variable values by their name. + """ + + output = {} + for key, item in self._variables.items(): + value = item["value"] + if value is not None: + output[key] = item["value"] + + return output + + def add_obj_field(self, field): + """Add field object to children. + + Args: + field (BaseGraphQlQueryField): Add field to query children. + """ + + if field in self._children: + return + + self._children.append(field) + field.set_parent(self) + + def add_field_with_edges(self, name): + """Add field with edges to query. + + Args: + name (str): Field name e.g. 'tasks'. + + Returns: + GraphQlQueryEdgeField: Created field object. + """ + + item = GraphQlQueryEdgeField(name, self) + self.add_obj_field(item) + return item + + def add_field(self, name): + """Add field to query. + + Args: + name (str): Field name e.g. 'id'. + + Returns: + GraphQlQueryField: Created field object. + """ + + item = GraphQlQueryField(name, self) + self.add_obj_field(item) + return item + + def calculate_query(self): + """Calculate query string which is sent to server. + + Returns: + str: GraphQl string with variables and headers. + + Raises: + ValueError: Query has no fiels. + """ + + if not self._children: + raise ValueError("Missing fields to query") + + variables = [] + for item in self._variables.values(): + if item["value"] is None: + continue + + variables.append( + "{}: {}".format(item["variable"], item["type"]) + ) + + variables_str = "" + if variables: + variables_str = "({})".format(",".join(variables)) + header = "query {}{}".format(self._name, variables_str) + + output = [] + output.append(header + " {") + for field in self._children: + output.append(field.calculate_query()) + output.append("}") + + return "\n".join(output) + + def parse_result(self, data, output, progress_data): + """Parse data from response for output. + + Output is stored to passed 'output' variable. That's because of paging + during which objects must have access to both new and previous values. + + Args: + data (Dict[str, Any]): Data received using calculated query. + output (Dict[str, Any]): Where parsed data are stored. + """ + + if not data: + return + + for child in self._children: + child.parse_result(data, output, progress_data) + + def query(self, con): + """Do a query from server. + + Args: + con (ServerAPI): Connection to server with 'query' method. + + Returns: + Dict[str, Any]: Parsed output from GraphQl query. + """ + + progress_data = {} + output = {} + while self.need_query: + query_str = self.calculate_query() + variables = self.get_variables_values() + response = con.query_graphql( + query_str, + self.get_variables_values() + ) + if response.errors: + raise GraphQlQueryFailed(response.errors, query_str, variables) + self.parse_result(response.data["data"], output, progress_data) + + return output + + def continuous_query(self, con): + """Do a query from server. + + Args: + con (ServerAPI): Connection to server with 'query' method. + + Returns: + Dict[str, Any]: Parsed output from GraphQl query. 
+ """ + + progress_data = {} + if self.has_multiple_edge_fields: + output = {} + while self.need_query: + query_str = self.calculate_query() + variables = self.get_variables_values() + response = con.query_graphql(query_str, variables) + if response.errors: + raise GraphQlQueryFailed( + response.errors, query_str, variables + ) + self.parse_result(response.data["data"], output, progress_data) + + yield output + + else: + while self.need_query: + output = {} + query_str = self.calculate_query() + variables = self.get_variables_values() + response = con.query_graphql(query_str, variables) + if response.errors: + raise GraphQlQueryFailed( + response.errors, query_str, variables + ) + + self.parse_result(response.data["data"], output, progress_data) + + yield output + + +@six.add_metaclass(ABCMeta) +class BaseGraphQlQueryField(object): + """Field in GraphQl query. + + Args: + name (str): Name of field. + parent (Union[BaseGraphQlQueryField, GraphQlQuery]): Parent object of a + field. + """ + + def __init__(self, name, parent): + if isinstance(parent, GraphQlQuery): + query_item = parent + else: + query_item = parent.query_item + + self._name = name + self._parent = parent + + self._filters = {} + + self._children = [] + # Value is changed on first parse of result + self._need_query = True + + self._query_item = query_item + + self._path = None + + def __repr__(self): + return "<{} {}>".format(self.__class__.__name__, self.path) + + def add_variable(self, key, value_type, value=None): + """Add variable to query. + + Args: + key (str): Variable name. + value_type (str): Type of expected value in variables. This is + graphql type e.g. "[String!]", "Int", "Boolean", etc. + value (Any): Default value for variable. Can be changed later. + + Returns: + QueryVariable: Created variable object. + + Raises: + KeyError: If variable was already added before. + """ + + return self._parent.add_variable(key, value_type, value) + + def get_variable(self, key): + """Variable object. + + Args: + key (str): Variable name added to headers. + + Returns: + QueryVariable: Variable object used in query string. + """ + + return self._parent.get_variable(key) + + @property + def need_query(self): + """Still need query from server. + + Needed for edges which use pagination. Look into children values too. + + Returns: + bool: If still need query from server. + """ + + if self._need_query: + return True + + for child in self._children_iter(): + if child.need_query: + return True + return False + + def _children_iter(self): + """Iterate over all children fields of object. + + Returns: + Iterator[BaseGraphQlQueryField]: Children fields. + """ + + for child in self._children: + yield child + + def sum_edge_fields(self, max_limit=None): + """Check how many edge fields query has. + + In case there are multiple edge fields or are nested the query can't + yield mid cursor results. + + Args: + max_limit (int): Skip rest of counting if counter is bigger then + entered number. 
+ + Returns: + int: Counter edge fields + """ + + counter = 0 + if isinstance(self, GraphQlQueryEdgeField): + counter = 1 + + for child in self._children_iter(): + counter += child.sum_edge_fields(max_limit) + if max_limit is not None and counter >= max_limit: + break + return counter + + @property + def offset(self): + return self._query_item.offset + + @property + def indent(self): + return self._parent.child_indent + self.offset + + @property + @abstractmethod + def child_indent(self): + pass + + @property + def query_item(self): + return self._query_item + + @property + @abstractmethod + def has_edges(self): + pass + + @property + def child_has_edges(self): + for child in self._children_iter(): + if child.has_edges or child.child_has_edges: + return True + return False + + @property + def path(self): + """Field path for debugging purposes. + + Returns: + str: Field path in query. + """ + + if self._path is None: + if isinstance(self._parent, GraphQlQuery): + path = self._name + else: + path = "/".join((self._parent.path, self._name)) + self._path = path + return self._path + + def reset_cursor(self): + for child in self._children_iter(): + child.reset_cursor() + + def get_variable_value(self, *args, **kwargs): + return self._query_item.get_variable_value(*args, **kwargs) + + def set_variable_value(self, *args, **kwargs): + return self._query_item.set_variable_value(*args, **kwargs) + + def set_filter(self, key, value): + self._filters[key] = value + + def has_filter(self, key): + return key in self._filters + + def remove_filter(self, key): + self._filters.pop(key, None) + + def set_parent(self, parent): + if self._parent is parent: + return + self._parent = parent + parent.add_obj_field(self) + + def add_obj_field(self, field): + if field in self._children: + return + + self._children.append(field) + field.set_parent(self) + + def add_field_with_edges(self, name): + item = GraphQlQueryEdgeField(name, self) + self.add_obj_field(item) + return item + + def add_field(self, name): + item = GraphQlQueryField(name, self) + self.add_obj_field(item) + return item + + def _filter_value_to_str(self, value): + if isinstance(value, QueryVariable): + if self.get_variable_value(value.variable_name) is None: + return None + return str(value) + + if isinstance(value, numbers.Number): + return str(value) + + if isinstance(value, six.string_types): + return '"{}"'.format(value) + + if isinstance(value, (list, set, tuple)): + return "[{}]".format( + ", ".join( + self._filter_value_to_str(item) + for item in iter(value) + ) + ) + raise TypeError( + "Unknown type to convert '{}'".format(str(type(value))) + ) + + def get_filters(self): + """Receive filters for item. + + By default just use copy of set filters. + + Returns: + Dict[str, Any]: Fields filters. 
+ """ + + return copy.deepcopy(self._filters) + + def _filters_to_string(self): + filters = self.get_filters() + if not filters: + return "" + + filter_items = [] + for key, value in filters.items(): + string_value = self._filter_value_to_str(value) + if string_value is None: + continue + + filter_items.append("{}: {}".format(key, string_value)) + + if not filter_items: + return "" + return "({})".format(", ".join(filter_items)) + + def _fake_children_parse(self): + """Mark children as they don't need query.""" + + for child in self._children_iter(): + child.parse_result({}, {}, {}) + + @abstractmethod + def calculate_query(self): + pass + + @abstractmethod + def parse_result(self, data, output, progress_data): + pass + + +class GraphQlQueryField(BaseGraphQlQueryField): + has_edges = False + + @property + def child_indent(self): + return self.indent + + def parse_result(self, data, output, progress_data): + if not isinstance(data, dict): + raise TypeError("{} Expected 'dict' type got '{}'".format( + self._name, str(type(data)) + )) + + self._need_query = False + value = data.get(self._name) + if value is None: + self._fake_children_parse() + if self._name in data: + output[self._name] = None + return + + if not self._children: + output[self._name] = value + return + + output_value = output.get(self._name) + if isinstance(value, dict): + if output_value is None: + output_value = {} + output[self._name] = output_value + + for child in self._children: + child.parse_result(value, output_value, progress_data) + return + + if output_value is None: + output_value = [] + output[self._name] = output_value + + if not value: + self._fake_children_parse() + return + + diff = len(value) - len(output_value) + if diff > 0: + for _ in range(diff): + output_value.append({}) + + for idx, item in enumerate(value): + item_value = output_value[idx] + for child in self._children: + child.parse_result(item, item_value, progress_data) + + def calculate_query(self): + offset = self.indent * " " + header = "{}{}{}".format( + offset, + self._name, + self._filters_to_string() + ) + if not self._children: + return header + + output = [] + output.append(header + " {") + + output.extend([ + field.calculate_query() + for field in self._children + ]) + output.append(offset + "}") + + return "\n".join(output) + + +class GraphQlQueryEdgeField(BaseGraphQlQueryField): + has_edges = True + + def __init__(self, *args, **kwargs): + super(GraphQlQueryEdgeField, self).__init__(*args, **kwargs) + self._cursor = None + self._edge_children = [] + + @property + def child_indent(self): + offset = self.offset * 2 + return self.indent + offset + + def _children_iter(self): + for child in super(GraphQlQueryEdgeField, self)._children_iter(): + yield child + + for child in self._edge_children: + yield child + + def add_obj_field(self, field): + if field in self._edge_children: + return + + super(GraphQlQueryEdgeField, self).add_obj_field(field) + + def add_obj_edge_field(self, field): + if field in self._edge_children or field in self._children: + return + + self._edge_children.append(field) + field.set_parent(self) + + def add_edge_field(self, name): + item = GraphQlQueryField(name, self) + self.add_obj_edge_field(item) + return item + + def reset_cursor(self): + # Reset cursor only for edges + self._cursor = None + self._need_query = True + + super(GraphQlQueryEdgeField, self).reset_cursor() + + def parse_result(self, data, output, progress_data): + if not isinstance(data, dict): + raise TypeError("{} Expected 'dict' type got 
'{}'".format( + self._name, str(type(data)) + )) + + value = data.get(self._name) + if value is None: + self._fake_children_parse() + self._need_query = False + return + + if self._name in output: + node_values = output[self._name] + else: + node_values = [] + output[self._name] = node_values + + handle_cursors = self.child_has_edges + if handle_cursors: + cursor_key = self._get_cursor_key() + if cursor_key in progress_data: + nodes_by_cursor = progress_data[cursor_key] + else: + nodes_by_cursor = {} + progress_data[cursor_key] = nodes_by_cursor + + page_info = value["pageInfo"] + new_cursor = page_info["endCursor"] + self._need_query = page_info["hasNextPage"] + edges = value["edges"] + # Fake result parse + if not edges: + self._fake_children_parse() + + for edge in edges: + if not handle_cursors: + edge_value = {} + node_values.append(edge_value) + else: + edge_cursor = edge["cursor"] + edge_value = nodes_by_cursor.get(edge_cursor) + if edge_value is None: + edge_value = {} + nodes_by_cursor[edge_cursor] = edge_value + node_values.append(edge_value) + + for child in self._edge_children: + child.parse_result(edge, edge_value, progress_data) + + for child in self._children: + child.parse_result(edge["node"], edge_value, progress_data) + + if not self._need_query: + return + + change_cursor = True + for child in self._children_iter(): + if child.need_query: + change_cursor = False + + if change_cursor: + for child in self._children_iter(): + child.reset_cursor() + self._cursor = new_cursor + + def _get_cursor_key(self): + return "{}/__cursor__".format(self.path) + + def get_filters(self): + filters = super(GraphQlQueryEdgeField, self).get_filters() + + filters["first"] = 300 + if self._cursor: + filters["after"] = self._cursor + return filters + + def calculate_query(self): + if not self._children and not self._edge_children: + raise ValueError("Missing child definitions for edges {}".format( + self.path + )) + + offset = self.indent * " " + header = "{}{}{}".format( + offset, + self._name, + self._filters_to_string() + ) + + output = [] + output.append(header + " {") + + edges_offset = offset + self.offset * " " + node_offset = edges_offset + self.offset * " " + output.append(edges_offset + "edges {") + for field in self._edge_children: + output.append(field.calculate_query()) + + if self._children: + output.append(node_offset + "node {") + + for field in self._children: + output.append( + field.calculate_query() + ) + + output.append(node_offset + "}") + if self.child_has_edges: + output.append(node_offset + "cursor") + + output.append(edges_offset + "}") + + # Add page information + output.append(edges_offset + "pageInfo {") + for page_key in ( + "endCursor", + "hasNextPage", + ): + output.append(node_offset + page_key) + output.append(edges_offset + "}") + output.append(offset + "}") + + return "\n".join(output) + + +INTROSPECTION_QUERY = """ + query IntrospectionQuery { + __schema { + queryType { name } + mutationType { name } + subscriptionType { name } + types { + ...FullType + } + directives { + name + description + locations + args { + ...InputValue + } + } + } + } + fragment FullType on __Type { + kind + name + description + fields(includeDeprecated: true) { + name + description + args { + ...InputValue + } + type { + ...TypeRef + } + isDeprecated + deprecationReason + } + inputFields { + ...InputValue + } + interfaces { + ...TypeRef + } + enumValues(includeDeprecated: true) { + name + description + isDeprecated + deprecationReason + } + possibleTypes { + ...TypeRef + } + } + 
fragment InputValue on __InputValue { + name + description + type { ...TypeRef } + defaultValue + } + fragment TypeRef on __Type { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + ofType { + kind + name + } + } + } + } + } + } + } + } +""" diff --git a/openpype/vendor/python/common/ayon_api/graphql_queries.py b/openpype/vendor/python/common/ayon_api/graphql_queries.py new file mode 100644 index 0000000000..f31134a04d --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/graphql_queries.py @@ -0,0 +1,489 @@ +import collections + +from .graphql import FIELD_VALUE, GraphQlQuery + + +def fields_to_dict(fields): + if not fields: + return None + + output = {} + for field in fields: + hierarchy = field.split(".") + last = hierarchy.pop(-1) + value = output + for part in hierarchy: + if value is FIELD_VALUE: + break + + if part not in value: + value[part] = {} + value = value[part] + + if value is not FIELD_VALUE: + value[last] = FIELD_VALUE + return output + + +def add_links_fields(entity_field, nested_fields): + if "links" not in nested_fields: + return + links_fields = nested_fields.pop("links") + + link_edge_fields = { + "id", + "linkType", + "projectName", + "entityType", + "entityId", + "direction", + "description", + "author", + } + if isinstance(links_fields, dict): + simple_fields = set(links_fields) + simple_variant = len(simple_fields - link_edge_fields) == 0 + else: + simple_variant = True + simple_fields = link_edge_fields + + link_field = entity_field.add_field_with_edges("links") + + link_type_var = link_field.add_variable("linkTypes", "[String!]") + link_dir_var = link_field.add_variable("linkDirection", "String!") + link_field.set_filter("linkTypes", link_type_var) + link_field.set_filter("direction", link_dir_var) + + if simple_variant: + for key in simple_fields: + link_field.add_edge_field(key) + return + + query_queue = collections.deque() + for key, value in links_fields.items(): + if key in link_edge_fields: + link_field.add_edge_field(key) + continue + query_queue.append((key, value, link_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + + +def project_graphql_query(fields): + query = GraphQlQuery("ProjectQuery") + project_name_var = query.add_variable("projectName", "String!") + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + nested_fields = fields_to_dict(fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, project_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def projects_graphql_query(fields): + query = GraphQlQuery("ProjectsQuery") + projects_field = query.add_field_with_edges("projects") + + nested_fields = fields_to_dict(fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, projects_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + 
query_queue.append((k, v, field))
+    return query
+
+
+def product_types_query(fields):
+    query = GraphQlQuery("ProductTypes")
+    product_types_field = query.add_field("productTypes")
+
+    nested_fields = fields_to_dict(fields)
+
+    query_queue = collections.deque()
+    for key, value in nested_fields.items():
+        query_queue.append((key, value, product_types_field))
+
+    while query_queue:
+        item = query_queue.popleft()
+        key, value, parent = item
+        field = parent.add_field(key)
+        if value is FIELD_VALUE:
+            continue
+
+        for k, v in value.items():
+            query_queue.append((k, v, field))
+    return query
+
+
+def project_product_types_query(fields):
+    query = GraphQlQuery("ProjectProductTypes")
+    project_query = query.add_field("project")
+    project_name_var = query.add_variable("projectName", "String!")
+    project_query.set_filter("name", project_name_var)
+    product_types_field = project_query.add_field("productTypes")
+    nested_fields = fields_to_dict(fields)
+
+    query_queue = collections.deque()
+    for key, value in nested_fields.items():
+        query_queue.append((key, value, product_types_field))
+
+    while query_queue:
+        item = query_queue.popleft()
+        key, value, parent = item
+        field = parent.add_field(key)
+        if value is FIELD_VALUE:
+            continue
+
+        for k, v in value.items():
+            query_queue.append((k, v, field))
+    return query
+
+
+def folders_graphql_query(fields):
+    query = GraphQlQuery("FoldersQuery")
+    project_name_var = query.add_variable("projectName", "String!")
+    folder_ids_var = query.add_variable("folderIds", "[String!]")
+    parent_folder_ids_var = query.add_variable("parentFolderIds", "[String!]")
+    folder_paths_var = query.add_variable("folderPaths", "[String!]")
+    folder_names_var = query.add_variable("folderNames", "[String!]")
+    has_products_var = query.add_variable("folderHasProducts", "Boolean!")
+
+    project_field = query.add_field("project")
+    project_field.set_filter("name", project_name_var)
+
+    folders_field = project_field.add_field_with_edges("folders")
+    folders_field.set_filter("ids", folder_ids_var)
+    folders_field.set_filter("parentIds", parent_folder_ids_var)
+    folders_field.set_filter("names", folder_names_var)
+    folders_field.set_filter("paths", folder_paths_var)
+    folders_field.set_filter("hasProducts", has_products_var)
+
+    nested_fields = fields_to_dict(fields)
+    add_links_fields(folders_field, nested_fields)
+
+    query_queue = collections.deque()
+    for key, value in nested_fields.items():
+        query_queue.append((key, value, folders_field))
+
+    while query_queue:
+        item = query_queue.popleft()
+        key, value, parent = item
+        field = parent.add_field(key)
+        if value is FIELD_VALUE:
+            continue
+
+        for k, v in value.items():
+            query_queue.append((k, v, field))
+    return query
+
+
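A minimal sketch of how these builder functions are used; the field names and project values below are illustrative assumptions:

```python
query = folders_graphql_query({"id", "name", "path"})
query.set_variable_value("projectName", "demo_Commercial")
query.set_variable_value("folderPaths", ["/shots/sh010"])

# Variables that are still 'None' are left out of the query header and of
# the field filters, so only the two filters set above are rendered.
print(query.calculate_query())

# With a 'ServerAPI' connection the same object pages through all results:
# result = query.query(con)
```

+def tasks_graphql_query(fields):
+    query = GraphQlQuery("TasksQuery")
+    project_name_var = query.add_variable("projectName", "String!")
+    task_ids_var = query.add_variable("taskIds", "[String!]")
+    task_names_var = query.add_variable("taskNames", "[String!]")
+    task_types_var = query.add_variable("taskTypes", "[String!]")
+    folder_ids_var = query.add_variable("folderIds", "[String!]")
+
+    project_field = query.add_field("project")
+    project_field.set_filter("name", project_name_var)
+
+    tasks_field = project_field.add_field_with_edges("tasks")
+    tasks_field.set_filter("ids", task_ids_var)
+    # WARNING: The 'names' filter was not supported on the server at the
+    # time this was written
+    tasks_field.set_filter("names", task_names_var)
+    tasks_field.set_filter("taskTypes", task_types_var)
+    tasks_field.set_filter("folderIds",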
folder_ids_var) + + nested_fields = fields_to_dict(fields) + add_links_fields(tasks_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, tasks_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def products_graphql_query(fields): + query = GraphQlQuery("ProductsQuery") + + project_name_var = query.add_variable("projectName", "String!") + folder_ids_var = query.add_variable("folderIds", "[String!]") + product_ids_var = query.add_variable("productIds", "[String!]") + product_names_var = query.add_variable("productNames", "[String!]") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + products_field = project_field.add_field_with_edges("products") + products_field.set_filter("ids", product_ids_var) + products_field.set_filter("names", product_names_var) + products_field.set_filter("folderIds", folder_ids_var) + + nested_fields = fields_to_dict(set(fields)) + add_links_fields(products_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, products_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def versions_graphql_query(fields): + query = GraphQlQuery("VersionsQuery") + + project_name_var = query.add_variable("projectName", "String!") + product_ids_var = query.add_variable("productIds", "[String!]") + version_ids_var = query.add_variable("versionIds", "[String!]") + versions_var = query.add_variable("versions", "[Int!]") + hero_only_var = query.add_variable("heroOnly", "Boolean") + latest_only_var = query.add_variable("latestOnly", "Boolean") + hero_or_latest_only_var = query.add_variable( + "heroOrLatestOnly", "Boolean" + ) + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + products_field = project_field.add_field_with_edges("versions") + products_field.set_filter("ids", version_ids_var) + products_field.set_filter("productIds", product_ids_var) + products_field.set_filter("versions", versions_var) + products_field.set_filter("heroOnly", hero_only_var) + products_field.set_filter("latestOnly", latest_only_var) + products_field.set_filter("heroOrLatestOnly", hero_or_latest_only_var) + + nested_fields = fields_to_dict(set(fields)) + add_links_fields(products_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, products_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def representations_graphql_query(fields): + query = GraphQlQuery("RepresentationsQuery") + + project_name_var = query.add_variable("projectName", "String!") + repre_ids_var = query.add_variable("representationIds", "[String!]") + repre_names_var = query.add_variable("representationNames", "[String!]") + version_ids_var = query.add_variable("versionIds", "[String!]") + + project_field = 
query.add_field("project") + project_field.set_filter("name", project_name_var) + + repres_field = project_field.add_field_with_edges("representations") + repres_field.set_filter("ids", repre_ids_var) + repres_field.set_filter("versionIds", version_ids_var) + repres_field.set_filter("names", repre_names_var) + + nested_fields = fields_to_dict(set(fields)) + add_links_fields(repres_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, repres_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def representations_parents_qraphql_query( + version_fields, product_fields, folder_fields +): + query = GraphQlQuery("RepresentationsParentsQuery") + + project_name_var = query.add_variable("projectName", "String!") + repre_ids_var = query.add_variable("representationIds", "[String!]") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + repres_field = project_field.add_field_with_edges("representations") + repres_field.add_field("id") + repres_field.set_filter("ids", repre_ids_var) + version_field = repres_field.add_field("version") + + fields_queue = collections.deque() + for key, value in fields_to_dict(version_fields).items(): + fields_queue.append((key, value, version_field)) + + product_field = version_field.add_field("product") + for key, value in fields_to_dict(product_fields).items(): + fields_queue.append((key, value, product_field)) + + folder_field = product_field.add_field("folder") + for key, value in fields_to_dict(folder_fields).items(): + fields_queue.append((key, value, folder_field)) + + while fields_queue: + item = fields_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + fields_queue.append((k, v, field)) + + return query + + +def workfiles_info_graphql_query(fields): + query = GraphQlQuery("WorkfilesInfo") + project_name_var = query.add_variable("projectName", "String!") + workfiles_info_ids = query.add_variable("workfileIds", "[String!]") + task_ids_var = query.add_variable("taskIds", "[String!]") + paths_var = query.add_variable("paths", "[String!]") + + project_field = query.add_field("project") + project_field.set_filter("name", project_name_var) + + workfiles_field = project_field.add_field_with_edges("workfiles") + workfiles_field.set_filter("ids", workfiles_info_ids) + workfiles_field.set_filter("taskIds", task_ids_var) + workfiles_field.set_filter("paths", paths_var) + + nested_fields = fields_to_dict(set(fields)) + add_links_fields(workfiles_field, nested_fields) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, workfiles_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def events_graphql_query(fields): + query = GraphQlQuery("Events") + topics_var = query.add_variable("eventTopics", "[String!]") + projects_var = query.add_variable("projectNames", "[String!]") + states_var = query.add_variable("eventStates", "[String!]") + users_var = query.add_variable("eventUsers", "[String!]") + 
include_logs_var = query.add_variable("includeLogsFilter", "Boolean!") + + events_field = query.add_field_with_edges("events") + events_field.set_filter("topics", topics_var) + events_field.set_filter("projects", projects_var) + events_field.set_filter("states", states_var) + events_field.set_filter("users", users_var) + events_field.set_filter("includeLogs", include_logs_var) + + nested_fields = fields_to_dict(set(fields)) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, events_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query + + +def users_graphql_query(fields): + query = GraphQlQuery("Users") + names_var = query.add_variable("userNames", "[String!]") + + users_field = query.add_field_with_edges("users") + users_field.set_filter("names", names_var) + + nested_fields = fields_to_dict(set(fields)) + + query_queue = collections.deque() + for key, value in nested_fields.items(): + query_queue.append((key, value, users_field)) + + while query_queue: + item = query_queue.popleft() + key, value, parent = item + field = parent.add_field(key) + if value is FIELD_VALUE: + continue + + for k, v in value.items(): + query_queue.append((k, v, field)) + return query diff --git a/openpype/vendor/python/common/ayon_api/operations.py b/openpype/vendor/python/common/ayon_api/operations.py new file mode 100644 index 0000000000..eb2ca8afe3 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/operations.py @@ -0,0 +1,775 @@ +import os +import copy +import collections +import uuid +from abc import ABCMeta, abstractmethod + +import six + +from ._api import get_server_api_connection +from .utils import create_entity_id, REMOVED_VALUE + + +def _create_or_convert_to_id(entity_id=None): + if entity_id is None: + return create_entity_id() + + # Validate if can be converted to uuid + uuid.UUID(entity_id) + return entity_id + + +def new_folder_entity( + name, + folder_type, + parent_id=None, + status=None, + tags=None, + attribs=None, + data=None, + thumbnail_id=None, + entity_id=None +): + """Create skeleton data of folder entity. + + Args: + name (str): Is considered as unique identifier of folder in project. + folder_type (str): Type of folder. + parent_id (Optional[str]): Parent folder id. + status (Optional[str]): Product status. + tags (Optional[List[str]]): List of tags. + attribs (Optional[Dict[str, Any]]): Explicitly set attributes + of folder. + data (Optional[Dict[str, Any]]): Custom folder data. Empty dictionary + is used if not passed. + thumbnail_id (Optional[str]): Thumbnail id related to folder. + entity_id (Optional[str]): Predefined id of entity. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of folder entity. 
+ """ + + if attribs is None: + attribs = {} + + if data is None: + data = {} + + if parent_id is not None: + parent_id = _create_or_convert_to_id(parent_id) + + output = { + "id": _create_or_convert_to_id(entity_id), + "name": name, + # This will be ignored + "folderType": folder_type, + "parentId": parent_id, + "data": data, + "attrib": attribs, + "thumbnailId": thumbnail_id + } + if status: + output["status"] = status + if tags: + output["tags"] = tags + return output + + +def new_product_entity( + name, + product_type, + folder_id, + status=None, + tags=None, + attribs=None, + data=None, + entity_id=None +): + """Create skeleton data of product entity. + + Args: + name (str): Is considered as unique identifier of + product under folder. + product_type (str): Product type. + folder_id (str): Parent folder id. + status (Optional[str]): Product status. + tags (Optional[List[str]]): List of tags. + attribs (Optional[Dict[str, Any]]): Explicitly set attributes + of product. + data (Optional[Dict[str, Any]]): product entity data. Empty dictionary + is used if not passed. + entity_id (Optional[str]): Predefined id of entity. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of product entity. + """ + + if attribs is None: + attribs = {} + + if data is None: + data = {} + + output = { + "id": _create_or_convert_to_id(entity_id), + "name": name, + "productType": product_type, + "attrib": attribs, + "data": data, + "folderId": _create_or_convert_to_id(folder_id), + } + if status: + output["status"] = status + if tags: + output["tags"] = tags + return output + + +def new_version_entity( + version, + product_id, + task_id=None, + thumbnail_id=None, + author=None, + status=None, + tags=None, + attribs=None, + data=None, + entity_id=None +): + """Create skeleton data of version entity. + + Args: + version (int): Is considered as unique identifier of version + under product. + product_id (str): Parent product id. + task_id (Optional[str]): Task id under which product was created. + thumbnail_id (Optional[str]): Thumbnail related to version. + author (Optional[str]): Name of version author. + status (Optional[str]): Version status. + tags (Optional[List[str]]): List of tags. + attribs (Optional[Dict[str, Any]]): Explicitly set attributes + of version. + data (Optional[Dict[str, Any]]): Version entity custom data. + entity_id (Optional[str]): Predefined id of entity. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of version entity. + """ + + if attribs is None: + attribs = {} + + if data is None: + data = {} + + if data is None: + data = {} + + output = { + "id": _create_or_convert_to_id(entity_id), + "version": int(version), + "productId": _create_or_convert_to_id(product_id), + "attrib": attribs, + "data": data + } + if task_id: + output["taskId"] = task_id + if thumbnail_id: + output["thumbnailId"] = thumbnail_id + if author: + output["author"] = author + if tags: + output["tags"] = tags + if status: + output["status"] = status + return output + + +def new_hero_version_entity( + version, + product_id, + task_id=None, + thumbnail_id=None, + author=None, + status=None, + tags=None, + attribs=None, + data=None, + entity_id=None +): + """Create skeleton data of hero version entity. + + Args: + version (int): Is considered as unique identifier of version + under product. Should be same as standard version if there is any. + product_id (str): Parent product id. + task_id (Optional[str]): Task id under which product was created. 
+        thumbnail_id (Optional[str]): Thumbnail related to version.
+        author (Optional[str]): Name of version author.
+        status (Optional[str]): Version status.
+        tags (Optional[List[str]]): List of tags.
+        attribs (Optional[Dict[str, Any]]): Explicitly set attributes
+            of version.
+        data (Optional[Dict[str, Any]]): Version entity data.
+        entity_id (Optional[str]): Predefined id of entity. New id is
+            created if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of version entity.
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if data is None:
+        data = {}
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "version": -abs(int(version)),
+        "productId": product_id,
+        "attrib": attribs,
+        "data": data
+    }
+    if task_id:
+        output["taskId"] = task_id
+    if thumbnail_id:
+        output["thumbnailId"] = thumbnail_id
+    if author:
+        output["author"] = author
+    if tags:
+        output["tags"] = tags
+    if status:
+        output["status"] = status
+    return output
+
+
+def new_representation_entity(
+    name,
+    version_id,
+    files,
+    status=None,
+    tags=None,
+    attribs=None,
+    data=None,
+    entity_id=None
+):
+    """Create skeleton data of representation entity.
+
+    Args:
+        name (str): Representation name considered as unique identifier
+            of representation under version.
+        version_id (str): Parent version id.
+        files (list[dict[str, str]]): List of files in representation.
+        status (Optional[str]): Representation status.
+        tags (Optional[List[str]]): List of tags.
+        attribs (Optional[Dict[str, Any]]): Explicitly set attributes
+            of representation.
+        data (Optional[Dict[str, Any]]): Representation entity data.
+        entity_id (Optional[str]): Predefined id of entity. New id is created
+            if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of representation entity.
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if data is None:
+        data = {}
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "versionId": _create_or_convert_to_id(version_id),
+        "files": files,
+        "name": name,
+        "data": data,
+        "attrib": attribs
+    }
+    if tags:
+        output["tags"] = tags
+    if status:
+        output["status"] = status
+    return output
+
+
+def new_workfile_info(
+    filepath,
+    task_id,
+    status=None,
+    tags=None,
+    attribs=None,
+    description=None,
+    data=None,
+    entity_id=None
+):
+    """Create skeleton data of workfile info entity.
+
+    Workfile entity is at this moment used primarily for artist notes.
+
+    Args:
+        filepath (str): Rootless workfile filepath.
+        task_id (str): Task under which the workfile was created.
+        status (Optional[str]): Workfile status.
+        tags (Optional[List[str]]): Workfile tags.
+        attribs (Optional[dict[str, Any]]): Explicitly set attributes.
+        description (Optional[str]): Workfile description.
+        data (Optional[Dict[str, Any]]): Additional metadata.
+        entity_id (Optional[str]): Predefined id of entity. New id is created
+            if not passed.
+
+    Returns:
+        Dict[str, Any]: Skeleton of workfile info entity.
+    """
+
+    if attribs is None:
+        attribs = {}
+
+    if "extension" not in attribs:
+        attribs["extension"] = os.path.splitext(filepath)[-1]
+
+    if description:
+        attribs["description"] = description
+
+    if not data:
+        data = {}
+
+    output = {
+        "id": _create_or_convert_to_id(entity_id),
+        "taskId": task_id,
+        "path": filepath,
+        "data": data,
+        "attrib": attribs
+    }
+    if status:
+        output["status"] = status
+
+    if tags:
+        output["tags"] = tags
+    return output
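
A minimal sketch chaining the skeleton helpers above into a publish-like hierarchy; all names and values are illustrative assumptions:

```python
folder = new_folder_entity("sh010", "Shot")
product = new_product_entity("renderMain", "render", folder["id"])
version = new_version_entity(1, product["id"], author="artist")
representation = new_representation_entity(
    "exr", version["id"], files=[], tags=["review"]
)
# These dictionaries are plain payloads; they are typically passed to the
# create operations defined below.
```

+
+
+@six.add_metaclass(ABCMeta)
+class AbstractOperation(object):
+    """Base operation class.
+
+    Operation represents a call into the database.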
+    The call can create, change or remove data.
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'folder', 'representation' etc.
+    """
+
+    def __init__(self, project_name, entity_type, session):
+        self._project_name = project_name
+        self._entity_type = entity_type
+        self._session = session
+        self._id = str(uuid.uuid4())
+
+    @property
+    def project_name(self):
+        return self._project_name
+
+    @property
+    def id(self):
+        """Identifier of operation."""
+
+        return self._id
+
+    @property
+    def entity_type(self):
+        return self._entity_type
+
+    @property
+    @abstractmethod
+    def operation_name(self):
+        """Stringified type of operation."""
+
+        pass
+
+    def to_data(self):
+        """Convert operation to data that can be converted to json or others.
+
+        Returns:
+            Dict[str, Any]: Description of operation.
+        """
+
+        return {
+            "id": self._id,
+            "entity_type": self.entity_type,
+            "project_name": self.project_name,
+            "operation": self.operation_name
+        }
+
+
+class CreateOperation(AbstractOperation):
+    """Operation to create an entity.
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'folder', 'representation' etc.
+        data (Dict[str, Any]): Data of entity that will be created.
+    """
+
+    operation_name = "create"
+
+    def __init__(self, project_name, entity_type, data, session):
+        if not data:
+            data = {}
+        else:
+            data = copy.deepcopy(dict(data))
+
+        if "id" not in data:
+            data["id"] = create_entity_id()
+
+        self._data = data
+        super(CreateOperation, self).__init__(
+            project_name, entity_type, session
+        )
+
+    def __setitem__(self, key, value):
+        self.set_value(key, value)
+
+    def __getitem__(self, key):
+        return self.data[key]
+
+    def set_value(self, key, value):
+        self.data[key] = value
+
+    def get(self, key, *args, **kwargs):
+        return self.data.get(key, *args, **kwargs)
+
+    @property
+    def con(self):
+        return self.session.con
+
+    @property
+    def session(self):
+        return self._session
+
+    @property
+    def entity_id(self):
+        return self._data["id"]
+
+    @property
+    def data(self):
+        return self._data
+
+    def to_data(self):
+        output = super(CreateOperation, self).to_data()
+        output["data"] = copy.deepcopy(self.data)
+        return output
+
+    def to_server_operation(self):
+        return {
+            "id": self.id,
+            "type": "create",
+            "entityType": self.entity_type,
+            "entityId": self.entity_id,
+            "data": self._data
+        }
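
A short sketch of the payloads these operations produce; the project and entity values are illustrative assumptions, and 'None' is passed as the session because only serialization is exercised here:

```python
op = CreateOperation("demo_Commercial", "folder", {"name": "sh010"}, None)
# Batch body understood by the server's operations endpoint.
print(op.to_server_operation())

# In update payloads, keys set to 'REMOVED_VALUE' (imported from .utils at
# the top of this module) are sent as 'None':
update = UpdateOperation(
    "demo_Commercial", "folder", op.entity_id, {"label": REMOVED_VALUE}, None
)
print(update.to_server_operation())
```

+
+
+class UpdateOperation(AbstractOperation):
+    """Operation to update an entity.
+
+    Args:
+        project_name (str): On which project operation will happen.
+        entity_type (str): Type of entity on which change happens.
+            e.g. 'folder', 'representation' etc.
+        entity_id (str): Identifier of an entity.
+        update_data (Dict[str, Any]): Key -> value changes that will be set in
+            database. If value is set to 'REMOVED_VALUE' the key will be
+            removed. Only first level of dictionary is checked (on purpose).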
+ """ + + operation_name = "update" + + def __init__( + self, project_name, entity_type, entity_id, update_data, session + ): + super(UpdateOperation, self).__init__( + project_name, entity_type, session + ) + + self._entity_id = entity_id + self._update_data = update_data + + @property + def entity_id(self): + return self._entity_id + + @property + def update_data(self): + return self._update_data + + @property + def con(self): + return self.session.con + + @property + def session(self): + return self._session + + def to_data(self): + changes = {} + for key, value in self._update_data.items(): + if value is REMOVED_VALUE: + value = None + changes[key] = value + + output = super(UpdateOperation, self).to_data() + output.update({ + "entity_id": self.entity_id, + "changes": changes + }) + return output + + def to_server_operation(self): + if not self._update_data: + return None + + update_data = {} + for key, value in self._update_data.items(): + if value is REMOVED_VALUE: + value = None + update_data[key] = value + + return { + "id": self.id, + "type": "update", + "entityType": self.entity_type, + "entityId": self.entity_id, + "data": update_data + } + + +class DeleteOperation(AbstractOperation): + """Opeartion to delete an entity. + + Args: + project_name (str): On which project operation will happen. + entity_type (str): Type of entity on which change happens. + e.g. 'folder', 'representation' etc. + entity_id (str): Entity id that will be removed. + """ + + operation_name = "delete" + + def __init__(self, project_name, entity_type, entity_id, session): + self._entity_id = entity_id + + super(DeleteOperation, self).__init__( + project_name, entity_type, session + ) + + @property + def entity_id(self): + return self._entity_id + + @property + def con(self): + return self.session.con + + @property + def session(self): + return self._session + + def to_data(self): + output = super(DeleteOperation, self).to_data() + output["entity_id"] = self.entity_id + return output + + def to_server_operation(self): + return { + "id": self.id, + "type": self.operation_name, + "entityId": self.entity_id, + "entityType": self.entity_type, + } + + +class OperationsSession(object): + """Session storing operations that should happen in an order. + + At this moment does not handle anything special can be sonsidered as + stupid list of operations that will happen after each other. If creation + of same entity is there multiple times it's handled in any way and entity + values are not validated. + + All operations must be related to single project. + + Args: + project_name (str): Project name to which are operations related. + """ + + def __init__(self, con=None): + if con is None: + con = get_server_api_connection() + self._con = con + self._project_cache = {} + self._operations = [] + self._nested_operations = collections.defaultdict(list) + + @property + def con(self): + return self._con + + def get_project(self, project_name): + if project_name not in self._project_cache: + self._project_cache[project_name] = self.con.get_project( + project_name) + return copy.deepcopy(self._project_cache[project_name]) + + def __len__(self): + return len(self._operations) + + def add(self, operation): + """Add operation to be processed. + + Args: + operation (BaseOperation): Operation that should be processed. 
+ """ + if not isinstance( + operation, + (CreateOperation, UpdateOperation, DeleteOperation) + ): + raise TypeError("Expected Operation object got {}".format( + str(type(operation)) + )) + + self._operations.append(operation) + + def append(self, operation): + """Add operation to be processed. + + Args: + operation (BaseOperation): Operation that should be processed. + """ + + self.add(operation) + + def extend(self, operations): + """Add operations to be processed. + + Args: + operations (List[BaseOperation]): Operations that should be + processed. + """ + + for operation in operations: + self.add(operation) + + def remove(self, operation): + """Remove operation.""" + + self._operations.remove(operation) + + def clear(self): + """Clear all registered operations.""" + + self._operations = [] + + def to_data(self): + return [ + operation.to_data() + for operation in self._operations + ] + + def commit(self): + """Commit session operations.""" + + operations, self._operations = self._operations, [] + if not operations: + return + + operations_by_project = collections.defaultdict(list) + for operation in operations: + operations_by_project[operation.project_name].append(operation) + + for project_name, operations in operations_by_project.items(): + operations_body = [] + for operation in operations: + body = operation.to_server_operation() + if body is not None: + operations_body.append(body) + + self._con.send_batch_operations( + project_name, operations_body, can_fail=False + ) + + def create_entity(self, project_name, entity_type, data, nested_id=None): + """Fast access to 'CreateOperation'. + + Args: + project_name (str): On which project the creation happens. + entity_type (str): Which entity type will be created. + data (Dicst[str, Any]): Entity data. + nested_id (str): Id of other operation from which is triggered + operation -> Operations can trigger suboperations but they + must be added to operations list after it's parent is added. + + Returns: + CreateOperation: Object of update operation. + """ + + operation = CreateOperation( + project_name, entity_type, data, self + ) + + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + + return operation + + def update_entity( + self, project_name, entity_type, entity_id, update_data, nested_id=None + ): + """Fast access to 'UpdateOperation'. + + Returns: + UpdateOperation: Object of update operation. + """ + + operation = UpdateOperation( + project_name, entity_type, entity_id, update_data, self + ) + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + return operation + + def delete_entity( + self, project_name, entity_type, entity_id, nested_id=None + ): + """Fast access to 'DeleteOperation'. + + Returns: + DeleteOperation: Object of delete operation. 
+ """ + + operation = DeleteOperation( + project_name, entity_type, entity_id, self + ) + if nested_id: + self._nested_operations[nested_id].append(operation) + else: + self.add(operation) + if operation.id in self._nested_operations: + self.extend(self._nested_operations.pop(operation.id)) + return operation diff --git a/openpype/vendor/python/common/ayon_api/server_api.py b/openpype/vendor/python/common/ayon_api/server_api.py new file mode 100644 index 0000000000..f2689e88dc --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/server_api.py @@ -0,0 +1,6098 @@ +import os +import re +import io +import json +import logging +import collections +import platform +import copy +import uuid +from contextlib import contextmanager +try: + from http import HTTPStatus +except ImportError: + HTTPStatus = None + +import requests +try: + # This should be used if 'requests' have it available + from requests.exceptions import JSONDecodeError as RequestsJSONDecodeError +except ImportError: + # Older versions of 'requests' don't have custom exception for json + # decode error + try: + from simplejson import JSONDecodeError as RequestsJSONDecodeError + except ImportError: + from json import JSONDecodeError as RequestsJSONDecodeError + +from .constants import ( + DEFAULT_PRODUCT_TYPE_FIELDS, + DEFAULT_PROJECT_FIELDS, + DEFAULT_FOLDER_FIELDS, + DEFAULT_TASK_FIELDS, + DEFAULT_PRODUCT_FIELDS, + DEFAULT_VERSION_FIELDS, + DEFAULT_REPRESENTATION_FIELDS, + REPRESENTATION_FILES_FIELDS, + DEFAULT_WORKFILE_INFO_FIELDS, + DEFAULT_EVENT_FIELDS, + DEFAULT_USER_FIELDS, +) +from .graphql import GraphQlQuery, INTROSPECTION_QUERY +from .graphql_queries import ( + project_graphql_query, + projects_graphql_query, + project_product_types_query, + product_types_query, + folders_graphql_query, + tasks_graphql_query, + products_graphql_query, + versions_graphql_query, + representations_graphql_query, + representations_parents_qraphql_query, + workfiles_info_graphql_query, + events_graphql_query, + users_graphql_query, +) +from .exceptions import ( + FailedOperations, + UnauthorizedError, + AuthenticationError, + ServerNotReached, + ServerError, + HTTPRequestError, +) +from .utils import ( + RepresentationParents, + prepare_query_string, + logout_from_server, + create_entity_id, + entity_data_json_default, + failed_json_default, + TransferProgress, + create_dependency_package_basename, + ThumbnailContent, +) + +PatternType = type(re.compile("")) +JSONDecodeError = getattr(json, "JSONDecodeError", ValueError) +# This should be collected from server schema +PROJECT_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_" +PROJECT_NAME_REGEX = re.compile( + "^[{}]+$".format(PROJECT_NAME_ALLOWED_SYMBOLS) +) + +VERSION_REGEX = re.compile( + r"(?P0|[1-9]\d*)" + r"\.(?P0|[1-9]\d*)" + r"\.(?P0|[1-9]\d*)" + r"(?:-(?P[a-zA-Z\d\-.]*))?" + r"(?:\+(?P[a-zA-Z\d\-.]*))?" 
+) + + +def _get_description(response): + if HTTPStatus is None: + return str(response.orig_response) + return HTTPStatus(response.status).description + + +class RequestType: + def __init__(self, name): + self.name = name + + def __hash__(self): + return self.name.__hash__() + + +class RequestTypes: + get = RequestType("GET") + post = RequestType("POST") + put = RequestType("PUT") + patch = RequestType("PATCH") + delete = RequestType("DELETE") + + +class RestApiResponse(object): + """API Response.""" + + def __init__(self, response, data=None): + if response is None: + status_code = 500 + else: + status_code = response.status_code + self._response = response + self.status = status_code + self._data = data + + @property + def text(self): + return self._response.text + + @property + def orig_response(self): + return self._response + + @property + def headers(self): + return self._response.headers + + @property + def data(self): + if self._data is None: + try: + self._data = self.orig_response.json() + except RequestsJSONDecodeError: + self._data = {} + return self._data + + @property + def content(self): + return self._response.content + + @property + def content_type(self): + return self.headers.get("Content-Type") + + @property + def detail(self): + detail = self.get("detail") + if detail: + return detail + return _get_description(self) + + @property + def status_code(self): + return self.status + + def raise_for_status(self, message=None): + try: + self._response.raise_for_status() + except requests.exceptions.HTTPError as exc: + if message is None: + message = str(exc) + raise HTTPRequestError(message, exc.response) + + def __enter__(self, *args, **kwargs): + return self._response.__enter__(*args, **kwargs) + + def __contains__(self, key): + return key in self.data + + def __repr__(self): + return "<{} [{}]>".format(self.__class__.__name__, self.status) + + def __len__(self): + return 200 <= self.status < 400 + + def __getitem__(self, key): + return self.data[key] + + def get(self, key, default=None): + data = self.data + if isinstance(data, dict): + return self.data.get(key, default) + return default + + +class GraphQlResponse: + def __init__(self, data): + self.data = data + self.errors = data.get("errors") + + def __len__(self): + if self.errors: + return 0 + return 1 + + def __repr__(self): + if self.errors: + return "<{} errors={}>".format( + self.__class__.__name__, self.errors[0]['message'] + ) + return "<{}>".format(self.__class__.__name__) + + +def fill_own_attribs(entity): + if not entity or not entity.get("attrib"): + return + + attributes = set(entity["ownAttrib"]) + + own_attrib = {} + entity["ownAttrib"] = own_attrib + + for key, value in entity["attrib"].items(): + if key not in attributes: + own_attrib[key] = None + else: + own_attrib[key] = copy.deepcopy(value) + + +class _AsUserStack: + """Handle stack of users used over server api connection in service mode. + + ServerAPI can behave as other users if it is using special API key. + + Examples: + >>> stack = _AsUserStack() + >>> stack.set_default_username("DefaultName") + >>> print(stack.username) + DefaultName + >>> with stack.as_user("Other1"): + ... print(stack.username) + ... with stack.as_user("Other2"): + ... print(stack.username) + ... print(stack.username) + ... stack.clear() + ... 
print(stack.username) + Other1 + Other2 + Other1 + None + >>> print(stack.username) + None + >>> stack.set_default_username("DefaultName") + >>> print(stack.username) + DefaultName + """ + + def __init__(self): + self._users_by_id = {} + self._user_ids = [] + self._last_user = None + self._default_user = None + + def clear(self): + self._users_by_id = {} + self._user_ids = [] + self._last_user = None + self._default_user = None + + @property + def username(self): + # Use '_user_ids' for boolean check to have ability "unset" + # default user + if self._user_ids: + return self._last_user + return self._default_user + + def get_default_username(self): + return self._default_user + + def set_default_username(self, username=None): + self._default_user = username + + default_username = property(get_default_username, set_default_username) + + @contextmanager + def as_user(self, username): + self._last_user = username + user_id = uuid.uuid4().hex + self._user_ids.append(user_id) + self._users_by_id[user_id] = username + try: + yield + finally: + self._users_by_id.pop(user_id, None) + if not self._user_ids: + return + + # First check if is the user id the last one + was_last = self._user_ids[-1] == user_id + # Remove id from variables + if user_id in self._user_ids: + self._user_ids.remove(user_id) + + if not was_last: + return + + new_last_user = None + if self._user_ids: + new_last_user = self._users_by_id.get(self._user_ids[-1]) + self._last_user = new_last_user + + +class ServerAPI(object): + """Base handler of connection to server. + + Requires url to server which is used as base for api and graphql calls. + + Login cause that a session is used + + Args: + base_url (str): Example: http://localhost:5000 + token (Optional[str]): Access token (api key) to server. + site_id (Optional[str]): Unique name of site. Should be the same when + connection is created from the same machine under same user. + client_version (Optional[str]): Version of client application (used in + desktop client application). + default_settings_variant (Optional[Literal["production", "staging"]]): + Settings variant used by default if a method for settings won't + get any (by default is 'production'). + sender (Optional[str]): Sender of requests. Used in server logs and + propagated into events. + ssl_verify (Union[bool, str, None]): Verify SSL certificate + Looks for env variable value 'AYON_CA_FILE' by default. If not + available then 'True' is used. + cert (Optional[str]): Path to certificate file. Looks for env + variable value 'AYON_CERT_FILE' by default. + create_session (Optional[bool]): Create session for connection if + token is available. Default is True. 
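+
+    Example:
+        >>> # Minimal sketch; the url and token are placeholders and a
+        >>> # reachable server is assumed.
+        >>> con = ServerAPI("http://localhost:5000", token="<api key>")
+        >>> con.get_server_version()  # doctest: +SKIP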
+ """ + + def __init__( + self, + base_url, + token=None, + site_id=None, + client_version=None, + default_settings_variant=None, + sender=None, + ssl_verify=None, + cert=None, + create_session=True, + ): + if not base_url: + raise ValueError("Invalid server URL {}".format(str(base_url))) + + base_url = base_url.rstrip("/") + self._base_url = base_url + self._rest_url = "{}/api".format(base_url) + self._graphql_url = "{}/graphql".format(base_url) + self._log = None + self._access_token = token + self._site_id = site_id + self._client_version = client_version + self._default_settings_variant = ( + default_settings_variant + or "production" + ) + self._sender = sender + + if ssl_verify is None: + # Custom AYON env variable for CA file or 'True' + # - that should cover most default behaviors in 'requests' + # with 'certifi' + ssl_verify = os.environ.get("AYON_CA_FILE") or True + + if cert is None: + cert = os.environ.get("AYON_CERT_FILE") + + self._ssl_verify = ssl_verify + self._cert = cert + + self._access_token_is_service = None + self._token_is_valid = None + self._token_validation_started = False + self._server_available = None + self._server_version = None + self._server_version_tuple = None + + self._session = None + + self._base_functions_mapping = { + RequestTypes.get: requests.get, + RequestTypes.post: requests.post, + RequestTypes.put: requests.put, + RequestTypes.patch: requests.patch, + RequestTypes.delete: requests.delete + } + self._session_functions_mapping = {} + + # Attributes cache + self._attributes_schema = None + self._entity_type_attributes_cache = {} + + self._as_user_stack = _AsUserStack() + + # Create session + if self._access_token and create_session: + self.validate_server_availability() + self.create_session() + + @property + def log(self): + if self._log is None: + self._log = logging.getLogger(self.__class__.__name__) + return self._log + + def get_base_url(self): + return self._base_url + + def get_rest_url(self): + return self._rest_url + + base_url = property(get_base_url) + rest_url = property(get_rest_url) + + def get_ssl_verify(self): + """Enable ssl verification. + + Returns: + bool: Current state of ssl verification. + """ + + return self._ssl_verify + + def set_ssl_verify(self, ssl_verify): + """Change ssl verification state. + + Args: + ssl_verify (Union[bool, str, None]): Enabled/disable + ssl verification, can be a path to file. + """ + + if self._ssl_verify == ssl_verify: + return + self._ssl_verify = ssl_verify + if self._session is not None: + self._session.verify = ssl_verify + + def get_cert(self): + """Current cert file used for connection to server. + + Returns: + Union[str, None]: Path to cert file. + """ + + return self._cert + + def set_cert(self, cert): + """Change cert file used for connection to server. + + Args: + cert (Union[str, None]): Path to cert file. + """ + + if cert == self._cert: + return + self._cert = cert + if self._session is not None: + self._session.cert = cert + + ssl_verify = property(get_ssl_verify, set_ssl_verify) + cert = property(get_cert, set_cert) + + @property + def access_token(self): + """Access token used for authorization to server. + + Returns: + Union[str, None]: Token string or None if not authorized yet. + """ + + return self._access_token + + def get_site_id(self): + """Site id used for connection. + + Site id tells server from which machine/site is connection created and + is used for default site overrides when settings are received. 
+ + Returns: + Union[str, None]: Site id value or None if not filled. + """ + + return self._site_id + + def set_site_id(self, site_id): + """Change site id of connection. + + Behave as specific site for server. It affects default behavior of + settings getter methods. + + Args: + site_id (Union[str, None]): Site id value, or 'None' to unset. + """ + + if self._site_id == site_id: + return + self._site_id = site_id + # Recreate session on machine id change + self._update_session_headers() + + site_id = property(get_site_id, set_site_id) + + def get_client_version(self): + """Version of client used to connect to server. + + Client version is AYON client build desktop application. + + Returns: + str: Client version string used in connection. + """ + + return self._client_version + + def set_client_version(self, client_version): + """Set version of client used to connect to server. + + Client version is AYON client build desktop application. + + Args: + client_version (Union[str, None]): Client version string. + """ + + if self._client_version == client_version: + return + + self._client_version = client_version + self._update_session_headers() + + client_version = property(get_client_version, set_client_version) + + def get_default_settings_variant(self): + """Default variant used for settings. + + Returns: + Union[str, None]: name of variant or None. + """ + + return self._default_settings_variant + + def set_default_settings_variant(self, variant): + """Change default variant for addon settings. + + Note: + It is recommended to set only 'production' or 'staging' variants + as default variant. + + Args: + variant (Literal['production', 'staging']): Settings variant name. + """ + + if variant not in ("production", "staging"): + raise ValueError(( + "Invalid variant name {}. Expected 'production' or 'staging'" + ).format(variant)) + self._default_settings_variant = variant + + default_settings_variant = property( + get_default_settings_variant, + set_default_settings_variant + ) + + def get_sender(self): + """Sender used to send requests. + + Returns: + Union[str, None]: Sender name or None. + """ + + return self._sender + + def set_sender(self, sender): + """Change sender used for requests. + + Args: + sender (Union[str, None]): Sender name or None. + """ + + if sender == self._sender: + return + self._sender = sender + self._update_session_headers() + + sender = property(get_sender, set_sender) + + def get_default_service_username(self): + """Default username used for callbacks when used with service API key. + + Returns: + Union[str, None]: Username if any was filled. + """ + + return self._as_user_stack.get_default_username() + + def set_default_service_username(self, username=None): + """Service API will work as other user. + + Service API keys can work as other user. It can be temporary using + context manager 'as_user' or it is possible to set default username if + 'as_user' context manager is not entered. + + Args: + username (Optional[str]): Username to work as when service. + + Raises: + ValueError: When connection is not yet authenticated or api key + is not service token. + """ + + current_username = self._as_user_stack.get_default_username() + if current_username == username: + return + + if not self.has_valid_token: + raise ValueError( + "Authentication of connection did not happen yet." + ) + + if not self._access_token_is_service: + raise ValueError( + "Can't set service username. API key is not a service token." 
+ ) + + self._as_user_stack.set_default_username(username) + if self._as_user_stack.username == username: + self._update_session_headers() + + @contextmanager + def as_username(self, username): + """Service API will temporarily work as other user. + + This method can be used only if service API key is logged in. + + Args: + username (Union[str, None]): Username to work as when service. + + Raises: + ValueError: When connection is not yet authenticated or api key + is not service token. + """ + + if not self.has_valid_token: + raise ValueError( + "Authentication of connection did not happen yet." + ) + + if not self._access_token_is_service: + raise ValueError( + "Can't set service username. API key is not a service token." + ) + + with self._as_user_stack.as_user(username) as o: + self._update_session_headers() + try: + yield o + finally: + self._update_session_headers() + + @property + def is_server_available(self): + if self._server_available is None: + response = requests.get( + self._base_url, + cert=self._cert, + verify=self._ssl_verify + ) + self._server_available = response.status_code == 200 + return self._server_available + + @property + def has_valid_token(self): + if self._access_token is None: + return False + + if self._token_is_valid is None: + self.validate_token() + return self._token_is_valid + + def validate_server_availability(self): + if not self.is_server_available: + raise ServerNotReached("Server \"{}\" can't be reached".format( + self._base_url + )) + + def validate_token(self): + try: + self._token_validation_started = True + # TODO add other possible validations + # - existence of 'user' key in info + # - validate that 'site_id' is in 'sites' in info + self.get_info() + self.get_user() + self._token_is_valid = True + + except UnauthorizedError: + self._token_is_valid = False + + finally: + self._token_validation_started = False + return self._token_is_valid + + def set_token(self, token): + self.reset_token() + self._access_token = token + self.get_user() + + def reset_token(self): + self._access_token = None + self._token_is_valid = None + self.close_session() + + def create_session(self, ignore_existing=True, force=False): + """Create a connection session. + + Session helps to keep connection with server without + need to reconnect on each call. + + Args: + ignore_existing (bool): If session already exists, + ignore creation. + force (bool): If session already exists, close it and + create new. 
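+
+        Example:
+            >>> # Sketch; assumes 'con' is an authenticated ServerAPI
+            >>> # object.
+            >>> con.create_session(force=True)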
+ """ + + if force and self._session is not None: + self.close_session() + + if self._session is not None: + if ignore_existing: + return + raise ValueError("Session is already created.") + + self._as_user_stack.clear() + # Validate token before session creation + self.validate_token() + + session = requests.Session() + session.cert = self._cert + session.verify = self._ssl_verify + session.headers.update(self.get_headers()) + + self._session_functions_mapping = { + RequestTypes.get: session.get, + RequestTypes.post: session.post, + RequestTypes.put: session.put, + RequestTypes.patch: session.patch, + RequestTypes.delete: session.delete + } + self._session = session + + def close_session(self): + if self._session is None: + return + + session = self._session + self._session = None + self._session_functions_mapping = {} + session.close() + + def _update_session_headers(self): + if self._session is None: + return + + # Header keys that may change over time + for key, value in ( + ("X-as-user", self._as_user_stack.username), + ("x-ayon-version", self._client_version), + ("x-ayon-site-id", self._site_id), + ("x-sender", self._sender), + ): + if value is not None: + self._session.headers[key] = value + elif key in self._session.headers: + self._session.headers.pop(key) + + def get_info(self): + """Get information about current used api key. + + By default, the 'info' contains only 'uptime' and 'version'. With + logged user info also contains information about user and machines on + which was logged in. + + Todos: + Use this method for validation of token instead of 'get_user'. + + Returns: + dict[str, Any]: Information from server. + """ + + response = self.get("info") + return response.data + + def get_server_version(self): + """Get server version. + + Version should match semantic version (https://semver.org/). + + Returns: + str: Server version. + """ + + if self._server_version is None: + self._server_version = self.get_info()["version"] + return self._server_version + + def get_server_version_tuple(self): + """Get server version as tuple. + + Version should match semantic version (https://semver.org/). + + This function only returns first three numbers of version. + + Returns: + Tuple[int, int, int, Union[str, None], Union[str, None]]: Server + version. + """ + + if self._server_version_tuple is None: + re_match = VERSION_REGEX.fullmatch( + self.get_server_version()) + self._server_version_tuple = ( + int(re_match.group("major")), + int(re_match.group("minor")), + int(re_match.group("patch")), + re_match.group("prerelease"), + re_match.group("buildmetadata") + ) + return self._server_version_tuple + + server_version = property(get_server_version) + server_version_tuple = property(get_server_version_tuple) + + def _get_user_info(self): + if self._access_token is None: + return None + + if self._access_token_is_service is not None: + response = self.get("users/me") + return response.data + + self._access_token_is_service = False + response = self.get("users/me") + if response.status == 200: + return response.data + + self._access_token_is_service = True + response = self.get("users/me") + if response.status == 200: + return response.data + + self._access_token_is_service = None + return None + + def get_users(self, usernames=None, fields=None): + """Get Users. + + Args: + usernames (Optional[Iterable[str]]): Filter by usernames. + fields (Optional[Iterable[str]]): fields to be queried + for users. + + Returns: + Generator[dict[str, Any]]: Queried users. 
+ """ + + filters = {} + if usernames is not None: + usernames = set(usernames) + if not usernames: + return + filters["userNames"] = list(usernames) + + if not fields: + fields = self.get_default_fields_for_type("user") + + query = users_graphql_query(set(fields)) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + for parsed_data in query.continuous_query(self): + for user in parsed_data["users"]: + user["roles"] = json.loads(user["roles"]) + yield user + + def get_user(self, username=None): + output = None + if username is None: + output = self._get_user_info() + else: + response = self.get("users/{}".format(username)) + if response.status == 200: + output = response.data + + if output is None: + raise UnauthorizedError("User is not authorized.") + return output + + def get_headers(self, content_type=None): + if content_type is None: + content_type = "application/json" + + headers = { + "Content-Type": content_type, + "x-ayon-platform": platform.system().lower(), + "x-ayon-hostname": platform.node(), + } + if self._site_id is not None: + headers["x-ayon-site-id"] = self._site_id + + if self._client_version is not None: + headers["x-ayon-version"] = self._client_version + + if self._sender is not None: + headers["x-sender"] = self._sender + + if self._access_token: + if self._access_token_is_service: + headers["X-Api-Key"] = self._access_token + username = self._as_user_stack.username + if username: + headers["X-as-user"] = username + else: + headers["Authorization"] = "Bearer {}".format( + self._access_token) + return headers + + def login(self, username, password, create_session=True): + """Login to server. + + Args: + username (str): Username. + password (str): Password. + create_session (Optional[bool]): Create session after login. + Default: True. + + Raises: + AuthenticationError: Login failed. 
+ """ + + if self.has_valid_token: + try: + user_info = self.get_user() + except UnauthorizedError: + user_info = {} + + current_username = user_info.get("name") + if current_username == username: + self.close_session() + if create_session: + self.create_session() + return + + self.reset_token() + + self.validate_server_availability() + + self._token_validation_started = True + + try: + response = self.post( + "auth/login", + name=username, + password=password + ) + if response.status_code != 200: + _detail = response.data.get("detail") + details = "" + if _detail: + details = " {}".format(_detail) + + raise AuthenticationError("Login failed {}".format(details)) + + finally: + self._token_validation_started = False + + self._access_token = response["token"] + + if not self.has_valid_token: + raise AuthenticationError("Invalid credentials") + + if create_session: + self.create_session() + + def logout(self, soft=False): + if self._access_token: + if not soft: + self._logout() + self.reset_token() + + def _logout(self): + logout_from_server(self._base_url, self._access_token) + + def _do_rest_request(self, function, url, **kwargs): + if self._session is None: + # Validate token if was not yet validated + # - ignore validation if we're in middle of + # validation + if ( + self._token_is_valid is None + and not self._token_validation_started + ): + self.validate_token() + + if "headers" not in kwargs: + kwargs["headers"] = self.get_headers() + + if isinstance(function, RequestType): + function = self._base_functions_mapping[function] + + elif isinstance(function, RequestType): + function = self._session_functions_mapping[function] + + try: + response = function(url, **kwargs) + + except ConnectionRefusedError: + new_response = RestApiResponse( + None, + {"detail": "Unable to connect the server. Connection refused"} + ) + except requests.exceptions.ConnectionError: + new_response = RestApiResponse( + None, + {"detail": "Unable to connect the server. 
Connection error"} + ) + else: + content_type = response.headers.get("Content-Type") + if content_type == "application/json": + try: + new_response = RestApiResponse(response) + except JSONDecodeError: + new_response = RestApiResponse( + None, + { + "detail": "The response is not a JSON: {}".format( + response.text) + } + ) + + elif content_type in ("image/jpeg", "image/png"): + new_response = RestApiResponse(response) + + else: + new_response = RestApiResponse(response) + + self.log.debug("Response {}".format(str(new_response))) + return new_response + + def raw_post(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [POST] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.post, + url, + **kwargs + ) + + def raw_put(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [PUT] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.put, + url, + **kwargs + ) + + def raw_patch(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [PATCH] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.patch, + url, + **kwargs + ) + + def raw_get(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [GET] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.get, + url, + **kwargs + ) + + def raw_delete(self, entrypoint, **kwargs): + entrypoint = entrypoint.lstrip("/").rstrip("/") + self.log.debug("Executing [DELETE] {}".format(entrypoint)) + url = "{}/{}".format(self._rest_url, entrypoint) + return self._do_rest_request( + RequestTypes.delete, + url, + **kwargs + ) + + def post(self, entrypoint, **kwargs): + return self.raw_post(entrypoint, json=kwargs) + + def put(self, entrypoint, **kwargs): + return self.raw_put(entrypoint, json=kwargs) + + def patch(self, entrypoint, **kwargs): + return self.raw_patch(entrypoint, json=kwargs) + + def get(self, entrypoint, **kwargs): + return self.raw_get(entrypoint, params=kwargs) + + def delete(self, entrypoint, **kwargs): + return self.raw_delete(entrypoint, params=kwargs) + + def get_event(self, event_id): + """Query full event data by id. + + Events received using event server do not contain full information. To + get the full event information is required to receive it explicitly. + + Args: + event_id (str): Id of event. + + Returns: + dict[str, Any]: Full event data. + """ + + response = self.get("events/{}".format(event_id)) + response.raise_for_status() + return response.data + + def get_events( + self, + topics=None, + project_names=None, + states=None, + users=None, + include_logs=None, + fields=None + ): + """Get events from server with filtering options. + + Notes: + Not all event happen on a project. + + Args: + topics (Optional[Iterable[str]]): Name of topics. + project_names (Optional[Iterable[str]]): Project on which + event happened. + states (Optional[Iterable[str]]): Filtering by states. + users (Optional[Iterable[str]]): Filtering by users + who created/triggered an event. + include_logs (Optional[bool]): Query also log events. + fields (Optional[Iterable[str]]): Fields that should be received + for each event. 
+
+        Returns:
+            Generator[dict[str, Any]]: Available events matching filters.
+        """
+
+        filters = {}
+        if topics is not None:
+            topics = set(topics)
+            if not topics:
+                return
+            filters["eventTopics"] = list(topics)
+
+        if project_names is not None:
+            project_names = set(project_names)
+            if not project_names:
+                return
+            filters["projectNames"] = list(project_names)
+
+        if states is not None:
+            states = set(states)
+            if not states:
+                return
+            filters["eventStates"] = list(states)
+
+        if users is not None:
+            users = set(users)
+            if not users:
+                return
+            filters["eventUsers"] = list(users)
+
+        if include_logs is None:
+            include_logs = False
+        filters["includeLogsFilter"] = include_logs
+
+        if not fields:
+            fields = self.get_default_fields_for_type("event")
+
+        query = events_graphql_query(set(fields))
+        for attr, filter_value in filters.items():
+            query.set_variable_value(attr, filter_value)
+
+        for parsed_data in query.continuous_query(self):
+            for event in parsed_data["events"]:
+                yield event
+
+    def update_event(
+        self,
+        event_id,
+        sender=None,
+        project_name=None,
+        status=None,
+        description=None,
+        summary=None,
+        payload=None
+    ):
+        kwargs = {
+            key: value
+            for key, value in (
+                ("sender", sender),
+                ("project", project_name),
+                ("status", status),
+                ("description", description),
+                ("summary", summary),
+                ("payload", payload),
+            )
+            if value is not None
+        }
+        response = self.patch(
+            "events/{}".format(event_id),
+            **kwargs
+        )
+        response.raise_for_status()
+
+    def dispatch_event(
+        self,
+        topic,
+        sender=None,
+        event_hash=None,
+        project_name=None,
+        username=None,
+        dependencies=None,
+        description=None,
+        summary=None,
+        payload=None,
+        finished=True,
+        store=True,
+    ):
+        """Dispatch event to server.
+
+        Args:
+            topic (str): Event topic used for filtering of listeners.
+            sender (Optional[str]): Sender of event.
+            event_hash (Optional[str]): Event hash.
+            project_name (Optional[str]): Project name.
+            username (Optional[str]): Username which triggered event.
+            dependencies (Optional[list[str]]): List of event id dependencies.
+            description (Optional[str]): Description of event.
+            summary (Optional[dict[str, Any]]): Summary of event that can be
+                used for simple filtering on listeners.
+            payload (Optional[dict[str, Any]]): Full payload of event data
+                with all details.
+            finished (Optional[bool]): Mark event as finished on dispatch.
+            store (Optional[bool]): Store event in event queue for possible
+                future processing, otherwise the event is sent only to
+                active listeners.
+        """
+
+        if summary is None:
+            summary = {}
+        if payload is None:
+            payload = {}
+        event_data = {
+            "topic": topic,
+            "sender": sender,
+            "hash": event_hash,
+            "project": project_name,
+            "user": username,
+            "dependencies": dependencies,
+            "description": description,
+            "summary": summary,
+            "payload": payload,
+            "finished": finished,
+            "store": store,
+        }
+        if self.post("events", **event_data):
+            self.log.debug("Dispatched event {}".format(topic))
+            return True
+        self.log.error("Unable to dispatch event {}".format(topic))
+        return False
+
+    def enroll_event_job(
+        self,
+        source_topic,
+        target_topic,
+        sender,
+        description=None,
+        sequential=None,
+        events_filter=None,
+    ):
+        """Enroll job based on events.
+
+        Enroll will find the first unprocessed event with 'source_topic',
+        create a new event with 'target_topic' for it and return the new
+        event data.
+
+        Use 'sequential' to ensure that only a single target event is
+        created at the same time.
Creation of new target events is blocked while there is + at least one unfinished event with target topic, when set to 'True'. + This helps when order of events matter and more than one process using + the same target is running at the same time. + - Make sure the new event has updated status to '"finished"' status + when you're done with logic + + Target topic should not clash with other processes/services. + + Created target event have 'dependsOn' key where is id of source topic. + + Use-case: + - Service 1 is creating events with topic 'my.leech' + - Service 2 process 'my.leech' and uses target topic 'my.process' + - this service can run on 1-n machines + - all events must be processed in a sequence by their creation + time and only one event can be processed at a time + - in this case 'sequential' should be set to 'True' so only + one machine is actually processing events, but if one goes + down there are other that can take place + - Service 3 process 'my.leech' and uses target topic 'my.discover' + - this service can run on 1-n machines + - order of events is not important + - 'sequential' should be 'False' + + Args: + source_topic (str): Source topic to enroll. + target_topic (str): Topic of dependent event. + sender (str): Identifier of sender (e.g. service name or username). + description (Optional[str]): Human readable text shown + in target event. + sequential (Optional[bool]): The source topic must be processed + in sequence. + events_filter (Optional[ayon_server.sqlfilter.Filter]): A dict-like + with conditions to filter the source event. + + Returns: + Union[None, dict[str, Any]]: None if there is no event matching + filters. Created event with 'target_topic'. + """ + + kwargs = { + "sourceTopic": source_topic, + "targetTopic": target_topic, + "sender": sender, + } + if sequential is not None: + kwargs["sequential"] = sequential + if description is not None: + kwargs["description"] = description + if events_filter is not None: + kwargs["filter"] = events_filter + response = self.post("enroll", **kwargs) + if response.status_code == 204: + return None + elif response.status_code >= 400: + self.log.error(response.text) + return None + + return response.data + + def _download_file(self, url, filepath, chunk_size, progress): + dst_directory = os.path.dirname(filepath) + if not os.path.exists(dst_directory): + os.makedirs(dst_directory) + + kwargs = {"stream": True} + if self._session is None: + kwargs["headers"] = self.get_headers() + get_func = self._base_functions_mapping[RequestTypes.get] + else: + get_func = self._session_functions_mapping[RequestTypes.get] + + with open(filepath, "wb") as f_stream: + with get_func(url, **kwargs) as response: + response.raise_for_status() + progress.set_content_size(response.headers["Content-length"]) + for chunk in response.iter_content(chunk_size=chunk_size): + f_stream.write(chunk) + progress.add_transferred_chunk(len(chunk)) + + def download_file( + self, endpoint, filepath, chunk_size=None, progress=None + ): + """Download file from AYON server. + + Endpoint can be full url (must start with 'base_url' of api object). + + Progress object can be used to track download. Can be used when + download happens in thread and other thread want to catch changes over + time. + + Args: + endpoint (str): Endpoint or URL to file that should be downloaded. + filepath (str): Path where file will be downloaded. + chunk_size (Optional[int]): Size of chunks that are received + in single loop. 
+ progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + """ + + if not chunk_size: + # 1 MB chunk by default + chunk_size = 1024 * 1024 + + if endpoint.startswith(self._base_url): + url = endpoint + else: + endpoint = endpoint.lstrip("/").rstrip("/") + url = "{}/{}".format(self._rest_url, endpoint) + + # Create dummy object so the function does not have to check + # 'progress' variable everywhere + if progress is None: + progress = TransferProgress() + + progress.set_source_url(url) + progress.set_destination_url(filepath) + progress.set_started() + try: + self._download_file(url, filepath, chunk_size, progress) + + except Exception as exc: + progress.set_failed(str(exc)) + raise + + finally: + progress.set_transfer_done() + return progress + + def _upload_file(self, url, filepath, progress, request_type=None): + if request_type is None: + request_type = RequestTypes.put + kwargs = {} + if self._session is None: + kwargs["headers"] = self.get_headers() + post_func = self._base_functions_mapping[request_type] + else: + post_func = self._session_functions_mapping[request_type] + + with open(filepath, "rb") as stream: + stream.seek(0, io.SEEK_END) + size = stream.tell() + stream.seek(0) + progress.set_content_size(size) + response = post_func(url, data=stream, **kwargs) + response.raise_for_status() + progress.set_transferred_size(size) + return response + + def upload_file( + self, endpoint, filepath, progress=None, request_type=None + ): + """Upload file to server. + + Todos: + Uploading with more detailed progress. + + Args: + endpoint (str): Endpoint or url where file will be uploaded. + filepath (str): Source filepath. + progress (Optional[TransferProgress]): Object that gives ability + to track upload progress. + request_type (Optional[RequestType]): Type of request that will + be used to upload file. + + Returns: + requests.Response: Response object. + """ + + if endpoint.startswith(self._base_url): + url = endpoint + else: + endpoint = endpoint.lstrip("/").rstrip("/") + url = "{}/{}".format(self._rest_url, endpoint) + + # Create dummy object so the function does not have to check + # 'progress' variable everywhere + if progress is None: + progress = TransferProgress() + + progress.set_source_url(filepath) + progress.set_destination_url(url) + progress.set_started() + + try: + return self._upload_file(url, filepath, progress, request_type) + + except Exception as exc: + progress.set_failed(str(exc)) + raise + + finally: + progress.set_transfer_done() + + def trigger_server_restart(self): + """Trigger server restart. + + Restart may be required when a change of specific value happened on + server. + """ + + result = self.post("system/restart") + if result.status_code != 204: + # TODO add better exception + raise ValueError("Failed to restart server") + + def query_graphql(self, query, variables=None): + """Execute GraphQl query. + + Args: + query (str): GraphQl query string. + variables (Optional[dict[str, Any]): Variables that can be + used in query. + + Returns: + GraphQlResponse: Response from server. + """ + + data = {"query": query, "variables": variables or {}} + response = self._do_rest_request( + RequestTypes.post, + self._graphql_url, + json=data + ) + response.raise_for_status() + return GraphQlResponse(response) + + def get_graphql_schema(self): + return self.query_graphql(INTROSPECTION_QUERY).data + + def get_server_schema(self): + """Get server schema with info, url paths, components etc. 
+ + Todos: + Cache schema - How to find out it is outdated? + + Returns: + dict[str, Any]: Full server schema. + """ + + url = "{}/openapi.json".format(self._base_url) + response = self._do_rest_request(RequestTypes.get, url) + if response: + return response.data + return None + + def get_schemas(self): + """Get components schema. + + Name of components does not match entity type names e.g. 'project' is + under 'ProjectModel'. We should find out some mapping. Also, there + are properties which don't have information about reference to object + e.g. 'config' has just object definition without reference schema. + + Returns: + dict[str, Any]: Component schemas. + """ + + server_schema = self.get_server_schema() + return server_schema["components"]["schemas"] + + def get_attributes_schema(self, use_cache=True): + if not use_cache: + self.reset_attributes_schema() + + if self._attributes_schema is None: + result = self.get("attributes") + if result.status_code != 200: + raise UnauthorizedError( + "User must be authorized to receive attributes" + ) + self._attributes_schema = result.data + return copy.deepcopy(self._attributes_schema) + + def reset_attributes_schema(self): + self._attributes_schema = None + self._entity_type_attributes_cache = {} + + def set_attribute_config( + self, attribute_name, data, scope, position=None, builtin=False + ): + if position is None: + attributes = self.get("attributes").data["attributes"] + origin_attr = next( + ( + attr for attr in attributes + if attr["name"] == attribute_name + ), + None + ) + if origin_attr: + position = origin_attr["position"] + else: + position = len(attributes) + + response = self.put( + "attributes/{}".format(attribute_name), + data=data, + scope=scope, + position=position, + builtin=builtin + ) + if response.status_code != 204: + # TODO raise different exception + raise ValueError( + "Attribute \"{}\" was not created/updated. {}".format( + attribute_name, response.detail + ) + ) + + self.reset_attributes_schema() + + def remove_attribute_config(self, attribute_name): + """Remove attribute from server. + + This can't be un-done, please use carefully. + + Args: + attribute_name (str): Name of attribute to remove. + """ + + response = self.delete("attributes/{}".format(attribute_name)) + response.raise_for_status( + "Attribute \"{}\" was not created/updated. {}".format( + attribute_name, response.detail + ) + ) + + self.reset_attributes_schema() + + def get_attributes_for_type(self, entity_type): + """Get attribute schemas available for an entity type. + + ``` + # Example attribute schema + { + # Common + "type": "integer", + "title": "Clip Out", + "description": null, + "example": 1, + "default": 1, + # These can be filled based on value of 'type' + "gt": null, + "ge": null, + "lt": null, + "le": null, + "minLength": null, + "maxLength": null, + "minItems": null, + "maxItems": null, + "regex": null, + "enum": null + } + ``` + + Args: + entity_type (str): Entity type for which should be attributes + received. + + Returns: + dict[str, dict[str, Any]]: Attribute schemas that are available + for entered entity type. 
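+
+        Example:
+            >>> # Illustrative; the available attributes depend on the
+            >>> # server configuration.
+            >>> attributes = con.get_attributes_for_type("folder")
+            >>> "fps" in attributes  # doctest: +SKIP
+            True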
+ """ + attributes = self._entity_type_attributes_cache.get(entity_type) + if attributes is None: + attributes_schema = self.get_attributes_schema() + attributes = {} + for attr in attributes_schema["attributes"]: + if entity_type not in attr["scope"]: + continue + attr_name = attr["name"] + attributes[attr_name] = attr["data"] + + self._entity_type_attributes_cache[entity_type] = attributes + + return copy.deepcopy(attributes) + + def get_attributes_fields_for_type(self, entity_type): + """Prepare attribute fields for entity type. + + Returns: + set[str]: Attributes fields for entity type. + """ + + attributes = self.get_attributes_for_type(entity_type) + return { + "attrib.{}".format(attr) + for attr in attributes + } + + def get_default_fields_for_type(self, entity_type): + """Default fields for entity type. + + Returns most of commonly used fields from server. + + Args: + entity_type (str): Name of entity type. + + Returns: + set[str]: Fields that should be queried from server. + """ + + # Event does not have attributes + if entity_type == "event": + return set(DEFAULT_EVENT_FIELDS) + + if entity_type == "project": + entity_type_defaults = DEFAULT_PROJECT_FIELDS + + elif entity_type == "folder": + entity_type_defaults = DEFAULT_FOLDER_FIELDS + + elif entity_type == "task": + entity_type_defaults = DEFAULT_TASK_FIELDS + + elif entity_type == "product": + entity_type_defaults = DEFAULT_PRODUCT_FIELDS + + elif entity_type == "version": + entity_type_defaults = DEFAULT_VERSION_FIELDS + + elif entity_type == "representation": + entity_type_defaults = ( + DEFAULT_REPRESENTATION_FIELDS + | REPRESENTATION_FILES_FIELDS + ) + + elif entity_type == "productType": + entity_type_defaults = DEFAULT_PRODUCT_TYPE_FIELDS + + elif entity_type == "workfile": + entity_type_defaults = DEFAULT_WORKFILE_INFO_FIELDS + + elif entity_type == "user": + entity_type_defaults = DEFAULT_USER_FIELDS + + else: + raise ValueError("Unknown entity type \"{}\"".format(entity_type)) + return ( + entity_type_defaults + | self.get_attributes_fields_for_type(entity_type) + ) + + def get_addons_info(self, details=True): + """Get information about addons available on server. + + Args: + details (Optional[bool]): Detailed data with information how + to get client code. + """ + + endpoint = "addons" + if details: + endpoint += "?details=1" + response = self.get(endpoint) + response.raise_for_status() + return response.data + + def get_addon_url(self, addon_name, addon_version, *subpaths): + """Calculate url to addon route. + + Example: + >>> api = ServerAPI("https://your.url.com") + >>> api.get_addon_url( + ... "example", "1.0.0", "private", "my.zip") + 'https://your.url.com/addons/example/1.0.0/private/my.zip' + + Args: + addon_name (str): Name of addon. + addon_version (str): Version of addon. + *subpaths (str): Any amount of subpaths that are added to + addon url. + + Returns: + str: Final url. + """ + + ending = "" + if subpaths: + ending = "/{}".format("/".join(subpaths)) + return "{}/addons/{}/{}{}".format( + self._base_url, + addon_name, + addon_version, + ending + ) + + def download_addon_private_file( + self, + addon_name, + addon_version, + filename, + destination_dir, + destination_filename=None, + chunk_size=None, + progress=None, + ): + """Download a file from addon private files. + + This method requires to have authorized token available. Private files + are not under '/api' restpoint. + + Args: + addon_name (str): Addon name. + addon_version (str): Addon version. 
+ filename (str): Filename in private folder on server. + destination_dir (str): Where the file should be downloaded. + destination_filename (Optional[str]): Name of destination + filename. Source filename is used if not passed. + chunk_size (Optional[int]): Download chunk size. + progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + + Returns: + str: Filepath to downloaded file. + """ + + if not destination_filename: + destination_filename = filename + dst_filepath = os.path.join(destination_dir, destination_filename) + # Filename can contain "subfolders" + dst_dirpath = os.path.dirname(dst_filepath) + if not os.path.exists(dst_dirpath): + os.makedirs(dst_dirpath) + + url = self.get_addon_url( + addon_name, + addon_version, + "private", + filename + ) + self.download_file( + url, dst_filepath, chunk_size=chunk_size, progress=progress + ) + return dst_filepath + + def get_installers(self, version=None, platform_name=None): + """Information about desktop application installers on server. + + Desktop application installers are helpers to download/update AYON + desktop application for artists. + + Args: + version (Optional[str]): Filter installers by version. + platform_name (Optional[str]): Filter installers by platform name. + + Returns: + list[dict[str, Any]]: + """ + + query_fields = [ + "{}={}".format(key, value) + for key, value in ( + ("version", version), + ("platform", platform_name), + ) + if value + ] + query = "" + if query_fields: + query = "?{}".format(",".join(query_fields)) + + response = self.get("desktop/installers{}".format(query)) + response.raise_for_status() + return response.data + + def create_installer( + self, + filename, + version, + python_version, + platform_name, + python_modules, + runtime_python_modules, + checksum, + checksum_algorithm, + file_size, + sources=None, + ): + """Create new installer information on server. + + This step will create only metadata. Make sure to upload installer + to the server using 'upload_installer' method. + + Runtime python modules are modules that are required to run AYON + desktop application, but are not added to PYTHONPATH for any + subprocess. + + Args: + filename (str): Installer filename. + version (str): Version of installer. + python_version (str): Version of Python. + platform_name (str): Name of platform. + python_modules (dict[str, str]): Python modules that are available + in installer. + runtime_python_modules (dict[str, str]): Runtime python modules + that are available in installer. + checksum (str): Installer file checksum. + checksum_algorithm (str): Type of checksum used to create checksum. + file_size (int): File size. + sources (Optional[list[dict[str, Any]]]): List of sources that + can be used to download file. + """ + + body = { + "filename": filename, + "version": version, + "pythonVersion": python_version, + "platform": platform_name, + "pythonModules": python_modules, + "runtimePythonModules": runtime_python_modules, + "checksum": checksum, + "checksumAlgorithm": checksum_algorithm, + "size": file_size, + } + if sources: + body["sources"] = sources + + response = self.post("desktop/installers", **body) + response.raise_for_status() + + def update_installer(self, filename, sources): + """Update installer information on server. + + Args: + filename (str): Installer filename. + sources (list[dict[str, Any]]): List of sources that + can be used to download file. Fully replaces existing sources. 
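+
+        Example:
+            >>> # Hypothetical values; the exact source item layout is
+            >>> # defined by the server schema.
+            >>> con.update_installer(
+            ...     "ayon-1.0.0-windows.zip",
+            ...     sources=[{
+            ...         "type": "http",
+            ...         "url": "https://example.com/ayon-1.0.0-windows.zip"
+            ...     }]
+            ... )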
+ """ + + response = self.patch( + "desktop/installers/{}".format(filename), + sources=sources + ) + response.raise_for_status() + + def delete_installer(self, filename): + """Delete installer from server. + + Args: + filename (str): Installer filename. + """ + + response = self.delete("desktop/installers/{}".format(filename)) + response.raise_for_status() + + def download_installer( + self, + filename, + dst_filepath, + chunk_size=None, + progress=None + ): + """Download installer file from server. + + Args: + filename (str): Installer filename. + dst_filepath (str): Destination filepath. + chunk_size (Optional[int]): Download chunk size. + progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + """ + + self.download_file( + "desktop/installers/{}".format(filename), + dst_filepath, + chunk_size=chunk_size, + progress=progress + ) + + def upload_installer(self, src_filepath, dst_filename, progress=None): + """Upload installer file to server. + + Args: + src_filepath (str): Source filepath. + dst_filename (str): Destination filename. + progress (Optional[TransferProgress]): Object that gives ability + to track download progress. + + Returns: + requests.Response: Response object. + """ + + return self.upload_file( + "desktop/installers/{}".format(dst_filename), + src_filepath, + progress=progress + ) + + def get_dependencies_info(self): + """Information about dependency packages on server. + + Example data structure: + { + "packages": [ + { + "name": str, + "platform": str, + "checksum": str, + "sources": list[dict[str, Any]], + "supportedAddons": dict[str, str], + "pythonModules": dict[str, str] + } + ], + "productionPackage": str + } + + Deprecated: + Deprecated since server version 0.2.1. Use + 'get_dependency_packages' instead. + + Returns: + dict[str, Any]: Information about dependency packages known for + server. + """ + + major, minor, patch, _, _ = self.server_version_tuple + if major == 0 and (minor < 2 or (minor == 2 and patch < 1)): + result = self.get("dependencies") + return result.data + packages = self.get_dependency_packages() + packages["productionPackage"] = None + return packages + + def update_dependency_info( + self, + name, + platform_name, + size, + checksum, + checksum_algorithm=None, + supported_addons=None, + python_modules=None, + sources=None + ): + """Update or create dependency package for identifiers. + + The endpoint can be used to create or update dependency package. + + + Deprecated: + Deprecated for server version 0.2.1. Use + 'create_dependency_pacakge' instead. + + Args: + name (str): Name of dependency package. + platform_name (Literal["windows", "linux", "darwin"]): Platform + for which is dependency package targeted. + size (int): Size of dependency package in bytes. + checksum (str): Checksum of archive file where dependencies are. + checksum_algorithm (Optional[str]): Algorithm used to calculate + checksum. By default, is used 'md5' (defined by server). + supported_addons (Optional[dict[str, str]]): Name of addons for + which was the package created. + '{"": "", ...}' + python_modules (Optional[dict[str, str]]): Python modules in + dependencies package. + '{"": "", ...}' + sources (Optional[list[dict[str, Any]]]): Information about + sources where dependency package is available. 
+ """ + + kwargs = { + key: value + for key, value in ( + ("checksumAlgorithm", checksum_algorithm), + ("supportedAddons", supported_addons), + ("pythonModules", python_modules), + ("sources", sources), + ) + if value + } + + response = self.put( + "dependencies", + name=name, + platform=platform_name, + size=size, + checksum=checksum, + **kwargs + ) + response.raise_for_status("Failed to create/update dependency") + return response.data + + def get_dependency_packages(self): + """Information about dependency packages on server. + + To download dependency package, use 'download_dependency_package' + method and pass in 'filename'. + + Example data structure: + { + "packages": [ + { + "filename": str, + "platform": str, + "checksum": str, + "checksumAlgorithm": str, + "size": int, + "sources": list[dict[str, Any]], + "supportedAddons": dict[str, str], + "pythonModules": dict[str, str] + } + ] + } + + Returns: + dict[str, Any]: Information about dependency packages known for + server. + """ + + result = self.get("desktop/dependency_packages") + result.raise_for_status() + return result.data + + def _get_dependency_package_route( + self, filename=None, platform_name=None + ): + major, minor, patch, _, _ = self.server_version_tuple + if major == 0 and (minor > 2 or (minor == 2 and patch >= 1)): + base = "desktop/dependency_packages" + if not filename: + return base + return "{}/{}".format(base, filename) + + # Backwards compatibility for AYON server 0.2.0 and lower + if platform_name is None: + platform_name = platform.system().lower() + base = "dependencies" + if not filename: + return base + return "{}/{}/{}".format(base, filename, platform_name) + + def create_dependency_package( + self, + filename, + python_modules, + source_addons, + installer_version, + checksum, + checksum_algorithm, + file_size, + sources=None, + platform_name=None, + ): + """Create dependency package on server. + + The package will be created on a server, it is also required to upload + the package archive file (using 'upload_dependency_package'). + + Args: + filename (str): Filename of dependency package. + python_modules (dict[str, str]): Python modules in dependency + package. + '{"": "", ...}' + source_addons (dict[str, str]): Name of addons for which is + dependency package created. + '{"": "", ...}' + installer_version (str): Version of installer for which was + package created. + checksum (str): Checksum of archive file where dependencies are. + checksum_algorithm (str): Algorithm used to calculate checksum. + file_size (Optional[int]): Size of file. + sources (Optional[list[dict[str, Any]]]): Information about + sources from where it is possible to get file. + platform_name (Optional[str]): Name of platform for which is + dependency package targeted. Default value is + current platform. + """ + + post_body = { + "filename": filename, + "pythonModules": python_modules, + "sourceAddons": source_addons, + "installerVersion": installer_version, + "checksum": checksum, + "checksumAlgorithm": checksum_algorithm, + "size": file_size, + "platform": platform_name or platform.system().lower(), + } + if sources: + post_body["sources"] = sources + + route = self._get_dependency_package_route() + response = self.post(route, **post_body) + response.raise_for_status() + + def update_dependency_package(self, filename, sources): + """Update dependency package metadata on server. + + Args: + filename (str): Filename of dependency package. 
+            sources (list[dict[str, Any]]): Information about
+                sources from where it is possible to get file. Fully
+                replaces existing sources.
+        """
+
+        response = self.patch(
+            self._get_dependency_package_route(filename),
+            sources=sources
+        )
+        response.raise_for_status()
+
+    def delete_dependency_package(self, filename, platform_name=None):
+        """Remove dependency package for specific platform.
+
+        Args:
+            filename (str): Filename of dependency package. Or name of
+                package for server version 0.2.0 or lower.
+            platform_name (Optional[str]): Which platform of the package
+                should be removed. Current platform is used if not passed.
+                Deprecated since server version 0.2.1.
+        """
+
+        route = self._get_dependency_package_route(filename, platform_name)
+        response = self.delete(route)
+        response.raise_for_status("Failed to delete dependency file")
+        return response.data
+
+    def download_dependency_package(
+        self,
+        src_filename,
+        dst_directory,
+        dst_filename,
+        platform_name=None,
+        chunk_size=None,
+        progress=None,
+    ):
+        """Download dependency package from server.
+
+        This method requires an authorized token. The package is only
+        downloaded.
+
+        Args:
+            src_filename (str): Filename of dependency package.
+                For server version 0.2.0 and lower it is name of package
+                to download.
+            dst_directory (str): Where the file should be downloaded.
+            dst_filename (str): Name of destination filename.
+            platform_name (Optional[str]): Name of platform for which the
+                dependency package is targeted. Default value is
+                current platform. Deprecated since server version 0.2.1.
+            chunk_size (Optional[int]): Download chunk size.
+            progress (Optional[TransferProgress]): Object that gives ability
+                to track download progress.
+
+        Returns:
+            str: Filepath to downloaded file.
+        """
+
+        route = self._get_dependency_package_route(src_filename, platform_name)
+        package_filepath = os.path.join(dst_directory, dst_filename)
+        self.download_file(
+            route,
+            package_filepath,
+            chunk_size=chunk_size,
+            progress=progress
+        )
+        return package_filepath
+
+    def upload_dependency_package(
+        self, src_filepath, dst_filename, platform_name=None, progress=None
+    ):
+        """Upload dependency package to server.
+
+        Args:
+            src_filepath (str): Path to a package file.
+            dst_filename (str): Dependency package filename or name of
+                package for server version 0.2.0 or lower. Must be unique.
+            platform_name (Optional[str]): For which platform is the
+                package targeted. Deprecated since server version 0.2.1.
+            progress (Optional[TransferProgress]): Object to keep track about
+                upload state.
+        """
+
+        route = self._get_dependency_package_route(dst_filename, platform_name)
+        self.upload_file(route, src_filepath, progress=progress)
+
+    def create_dependency_package_basename(self, platform_name=None):
+        """Create basename for dependency package file.
+
+        Deprecated:
+            Use 'create_dependency_package_basename' from `ayon_api` or
+            `ayon_api.utils` instead.
+
+        Args:
+            platform_name (Optional[str]): Name of platform for which the
+                bundle is targeted. Default value is current platform.
+
+        Returns:
+            str: Dependency package name with timestamp and platform.
+        """
+
+        return create_dependency_package_basename(platform_name)
+
+    def upload_addon_zip(self, src_filepath, progress=None):
+        """Upload addon zip file to server.
+
+        File is validated on server. If it is valid, it is installed. It will
+        create an event job which can be tracked (tracking part is not
+        implemented yet).
+
+ Example output:
+ {'eventId': 'a1bfbdee27c611eea7580242ac120003'}
+
+ Args:
+ src_filepath (str): Path to a zip file.
+ progress (Optional[TransferProgress]): Object to keep track about
+ upload state.
+
+ Returns:
+ dict[str, Any]: Response data from server.
+ """
+
+ response = self.upload_file(
+ "addons/install",
+ src_filepath,
+ progress=progress,
+ request_type=RequestTypes.post,
+ )
+ return response.json()
+
+ def _get_bundles_route(self):
+ major, minor, patch, _, _ = self.server_version_tuple
+ # Backwards compatibility for AYON server 0.3.0
+ # - first version where bundles were available
+ if major == 0 and minor == 3 and patch == 0:
+ return "desktop/bundles"
+ return "bundles"
+
+ def get_bundles(self):
+ """Server bundles with basic information.
+
+ Example output:
+ {
+ "bundles": [
+ {
+ "name": "my_bundle",
+ "createdAt": "2023-06-12T15:37:02.420260",
+ "installerVersion": "1.0.0",
+ "addons": {
+ "core": "1.2.3"
+ },
+ "dependencyPackages": {
+ "windows": "a_windows_package123.zip",
+ "linux": "a_linux_package123.zip",
+ "darwin": "a_mac_package123.zip"
+ },
+ "isProduction": False,
+ "isStaging": False
+ }
+ ],
+ "productionBundle": "my_bundle",
+ "stagingBundle": "test_bundle"
+ }
+
+ Returns:
+ dict[str, Any]: Server bundles with basic information.
+ """
+
+ response = self.get(self._get_bundles_route())
+ response.raise_for_status()
+ return response.data
+
+ def create_bundle(
+ self,
+ name,
+ addon_versions,
+ installer_version,
+ dependency_packages=None,
+ is_production=None,
+ is_staging=None
+ ):
+ """Create bundle on server.
+
+ Bundle cannot be changed once it is created. Only 'isProduction',
+ 'isStaging' and dependency packages can change after creation.
+
+ Args:
+ name (str): Name of bundle.
+ addon_versions (dict[str, str]): Addon versions.
+ installer_version (Union[str, None]): Installer version.
+ dependency_packages (Optional[dict[str, str]]): Dependency
+ package names. Keys are platform names and values are name of
+ packages.
+ is_production (Optional[bool]): Bundle will be marked as
+ production.
+ is_staging (Optional[bool]): Bundle will be marked as staging.
+ """
+
+ body = {
+ "name": name,
+ "installerVersion": installer_version,
+ "addons": addon_versions,
+ }
+ for key, value in (
+ ("dependencyPackages", dependency_packages),
+ ("isProduction", is_production),
+ ("isStaging", is_staging),
+ ):
+ if value is not None:
+ body[key] = value
+
+ response = self.post(self._get_bundles_route(), **body)
+ response.raise_for_status()
+
+ def update_bundle(
+ self,
+ bundle_name,
+ dependency_packages=None,
+ is_production=None,
+ is_staging=None
+ ):
+ """Update bundle on server.
+
+ Dependency packages can be updated only for a single platform. Others
+ will be left untouched. Use 'None' value to unset dependency package
+ from bundle.
+
+ Args:
+ bundle_name (str): Name of bundle.
+ dependency_packages (Optional[dict[str, str]]): Dependency package
+ names that should be used with the bundle.
+ is_production (Optional[bool]): Bundle will be marked as
+ production.
+ is_staging (Optional[bool]): Bundle will be marked as staging.
+ """
+
+ body = {
+ key: value
+ for key, value in (
+ ("dependencyPackages", dependency_packages),
+ ("isProduction", is_production),
+ ("isStaging", is_staging),
+ )
+ if value is not None
+ }
+ response = self.patch(
+ "{}/{}".format(self._get_bundles_route(), bundle_name),
+ **body
+ )
+ response.raise_for_status()
+
+ def delete_bundle(self, bundle_name):
+ """Delete bundle from server.
+
+ Args:
+ bundle_name (str): Name of bundle to delete.
+ """
+
+ response = self.delete(
+ "{}/{}".format(self._get_bundles_route(), bundle_name)
+ )
+ response.raise_for_status()
+
+ # Anatomy presets
+ def get_project_anatomy_presets(self):
+ """Anatomy presets available on server.
+
+ Content has basic information about presets. Example output:
+ [
+ {
+ "name": "netflix_VFX",
+ "primary": false,
+ "version": "1.0.0"
+ },
+ {
+ ...
+ },
+ ...
+ ]
+
+ Returns:
+ list[dict[str, str]]: Anatomy presets available on server.
+ """
+
+ result = self.get("anatomy/presets")
+ result.raise_for_status()
+ return result.data.get("presets") or []
+
+ def get_project_anatomy_preset(self, preset_name=None):
+ """Anatomy preset values by name.
+
+ Get anatomy preset values by preset name. Primary preset is returned
+ if preset name is set to 'None'.
+
+ Args:
+ preset_name (Optional[str]): Preset name.
+
+ Returns:
+ dict[str, Any]: Anatomy preset values.
+ """
+
+ if preset_name is None:
+ preset_name = "_"
+ result = self.get("anatomy/presets/{}".format(preset_name))
+ result.raise_for_status()
+ return result.data
+
+ def get_project_roots_by_site(self, project_name):
+ """Root overrides per site name.
+
+ Output is based on the logged-in user and cannot be received for any
+ other user on server.
+
+ Output will contain only roots per site id used by the logged-in user.
+
+ Args:
+ project_name (str): Name of project.
+
+ Returns:
+ dict[str, dict[str, str]]: Root values by root name by site id.
+ """
+
+ result = self.get("projects/{}/roots".format(project_name))
+ result.raise_for_status()
+ return result.data
+
+ def get_project_roots_for_site(self, project_name, site_id=None):
+ """Root overrides for site.
+
+ If site id is not passed the site set in current api object is used
+ instead.
+
+ Args:
+ project_name (str): Name of project.
+ site_id (Optional[str]): Id of site for which to receive
+ site overrides.
+
+ Returns:
+ dict[str, str]: Root values by root name. Empty dictionary is
+ returned if site does not have overrides.
+ """
+
+ if site_id is None:
+ site_id = self.site_id
+
+ if site_id is None:
+ return {}
+ roots = self.get_project_roots_by_site(project_name)
+ return roots.get(site_id, {})
+
+ def get_addon_settings_schema(
+ self, addon_name, addon_version, project_name=None
+ ):
+ """Studio/Project settings schema of an addon.
+
+ Project schema may differ because some enums are based on project
+ values.
+
+ Args:
+ addon_name (str): Name of addon.
+ addon_version (str): Version of addon.
+ project_name (Optional[str]): Schema for specific project or
+ default studio schemas.
+
+ Returns:
+ dict[str, Any]: Schema of studio/project settings.
+ """
+
+ args = tuple()
+ if project_name:
+ args = (project_name, )
+
+ endpoint = self.get_addon_url(
+ addon_name, addon_version, "schema", *args
+ )
+ result = self.get(endpoint)
+ result.raise_for_status()
+ return result.data
+
+ def get_addon_site_settings_schema(self, addon_name, addon_version):
+ """Site settings schema of an addon.
+
+ Args:
+ addon_name (str): Name of addon.
+ addon_version (str): Version of addon.
+
+ Returns:
+ dict[str, Any]: Schema of site settings.
+ """
+
+ result = self.get("addons/{}/{}/siteSettings/schema".format(
+ addon_name, addon_version
+ ))
+ result.raise_for_status()
+ return result.data
+
+ def get_addon_studio_settings(
+ self,
+ addon_name,
+ addon_version,
+ variant=None
+ ):
+ """Addon studio settings.
+
+ Receive studio settings for specific version of an addon.
+
+ Args:
+ addon_name (str): Name of addon.
+ addon_version (str): Version of addon.
+ variant (Optional[Literal['production', 'staging']]): Name of
+ settings variant. Used 'default_settings_variant' by default.
+
+ Returns:
+ dict[str, Any]: Addon settings.
+ """
+
+ if variant is None:
+ variant = self.default_settings_variant
+
+ query_items = {}
+ if variant:
+ query_items["variant"] = variant
+ query = prepare_query_string(query_items)
+
+ result = self.get(
+ "addons/{}/{}/settings{}".format(addon_name, addon_version, query)
+ )
+ result.raise_for_status()
+ return result.data
+
+ def get_addon_project_settings(
+ self,
+ addon_name,
+ addon_version,
+ project_name,
+ variant=None,
+ site_id=None,
+ use_site=True
+ ):
+ """Addon project settings.
+
+ Receive project settings for specific version of an addon. The
+ settings may include site overrides when enabled.
+
+ Site id is filled with current connection site id if not passed. To
+ make sure no site id is used set 'use_site' to 'False'.
+
+ Args:
+ addon_name (str): Name of addon.
+ addon_version (str): Version of addon.
+ project_name (str): Name of project for which the settings are
+ received.
+ variant (Optional[Literal['production', 'staging']]): Name of
+ settings variant. Used 'default_settings_variant' by default.
+ site_id (Optional[str]): Name of site which is used for site
+ overrides. Is filled with connection 'site_id' attribute
+ if not passed.
+ use_site (Optional[bool]): Set to 'False' to force disable site
+ overrides. In that case no site overrides are applied.
+
+ Returns:
+ dict[str, Any]: Addon settings.
+ """
+
+ if not use_site:
+ site_id = None
+ elif not site_id:
+ site_id = self.site_id
+
+ query_items = {}
+ if site_id:
+ query_items["site"] = site_id
+
+ if variant is None:
+ variant = self.default_settings_variant
+
+ if variant:
+ query_items["variant"] = variant
+
+ query = prepare_query_string(query_items)
+ result = self.get(
+ "addons/{}/{}/settings/{}{}".format(
+ addon_name, addon_version, project_name, query
+ )
+ )
+ result.raise_for_status()
+ return result.data
+
+ def get_addon_settings(
+ self,
+ addon_name,
+ addon_version,
+ project_name=None,
+ variant=None,
+ site_id=None,
+ use_site=True
+ ):
+ """Receive addon settings.
+
+ Receive addon settings based on project name value. Some arguments may
+ be ignored if 'project_name' is set to 'None'.
+
+ Args:
+ addon_name (str): Name of addon.
+ addon_version (str): Version of addon.
+ project_name (Optional[str]): Name of project for which the
+ settings are received. Studio settings values are received
+ if it is 'None'.
+ variant (Optional[Literal['production', 'staging']]): Name of
+ settings variant. Used 'default_settings_variant' by default.
+ site_id (Optional[str]): Name of site which is used for site
+ overrides. Is filled with connection 'site_id' attribute
+ if not passed.
+ use_site (Optional[bool]): Set to 'False' to force disable site
+ overrides. In that case no site overrides are applied.
+
+ Returns:
+ dict[str, Any]: Addon settings.
+ """
+
+ if project_name is None:
+ return self.get_addon_studio_settings(
+ addon_name, addon_version, variant
+ )
+ return self.get_addon_project_settings(
+ addon_name, addon_version, project_name, variant, site_id, use_site
+ )
+
+ def get_addon_site_settings(
+ self, addon_name, addon_version, site_id=None
+ ):
+ """Site settings of an addon.
+
+ If site id is not available an empty dictionary is returned.
+
+ Args:
+ addon_name (str): Name of addon.
+ addon_version (str): Version of addon.
+ site_id (Optional[str]): Name of site for which settings should
+ be returned. Connection 'site_id' attribute is used
+ if not passed.
+
+ Returns:
+ dict[str, Any]: Site settings.
+ """
+
+ if site_id is None:
+ site_id = self.site_id
+
+ if not site_id:
+ return {}
+
+ query = prepare_query_string({"site": site_id})
+ result = self.get("addons/{}/{}/siteSettings{}".format(
+ addon_name, addon_version, query
+ ))
+ result.raise_for_status()
+ return result.data
+
+ def get_bundle_settings(
+ self,
+ bundle_name=None,
+ project_name=None,
+ variant=None,
+ site_id=None,
+ use_site=True
+ ):
+ """Get complete set of settings for given data.
+
+ If project is not passed then studio settings are returned. If variant
+ is not passed 'default_settings_variant' is used. If bundle name is
+ not passed then current production/staging bundle is used, based on
+ variant value.
+
+ Output contains addon settings and site settings in single dictionary.
+
+ TODOs:
+ - test how it behaves if there is not any bundle.
+ - test how it behaves if there is not any production/staging
+ bundle.
+
+ Warnings:
+ For AYON server < 0.3.0 bundle name will be ignored.
+
+ Example output:
+ {
+ "addons": [
+ {
+ "name": "addon-name",
+ "version": "addon-version",
+ "settings": {...},
+ "siteSettings": {...}
+ }
+ ]
+ }
+
+ Returns:
+ dict[str, Any]: All settings for single bundle.
+ """
+
+ major, minor, _, _, _ = self.server_version_tuple
+ query_values = {
+ key: value
+ for key, value in (
+ ("project_name", project_name),
+ ("variant", variant or self.default_settings_variant),
+ ("bundle_name", bundle_name),
+ )
+ if value
+ }
+ if use_site:
+ if not site_id:
+ site_id = self.site_id
+ if site_id:
+ query_values["site_id"] = site_id
+
+ if major == 0 and minor >= 3:
+ url = "settings"
+ else:
+ # Backward compatibility for AYON server < 0.3.0
+ url = "settings/addons"
+ query_values.pop("bundle_name", None)
+ for new_key, old_key in (
+ ("project_name", "project"),
+ ("site_id", "site"),
+ ):
+ if new_key in query_values:
+ query_values[old_key] = query_values.pop(new_key)
+
+ query = prepare_query_string(query_values)
+ response = self.get("{}{}".format(url, query))
+ response.raise_for_status()
+ return response.data
+
+ def get_addons_studio_settings(
+ self,
+ bundle_name=None,
+ variant=None,
+ site_id=None,
+ use_site=True,
+ only_values=True
+ ):
+ """All addons settings in one bulk.
+
+ Warnings:
+ Behavior of this function changed with AYON server version 0.3.0.
+ Structure of output from server changed. If using
+ 'only_values=True' then output should be same as before.
+
+ Args:
+ bundle_name (Optional[str]): Name of bundle for which should be
+ settings received.
+ variant (Optional[Literal['production', 'staging']]): Name of
+ settings variant. Used 'default_settings_variant' by default.
+ site_id (Optional[str]): Id of site for which to receive
+ site overrides.
+ use_site (bool): Set to 'False' to force disable site overrides.
+ In that case no site overrides are applied.
+ only_values (Optional[bool]): Output will contain only settings
+ values without metadata about addons.
+
+ Returns:
+ dict[str, Any]: Settings of all addons on server.
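+
+ Example:
+ A minimal usage sketch, assuming 'api' is an initialized and
+ logged-in 'ServerAPI' connection and a "core" addon is part of
+ the used bundle (names are illustrative only):
+
+ >>> settings = api.get_addons_studio_settings()
+ >>> core_settings = settings["core"]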
+ """ + + output = self.get_bundle_settings( + bundle_name=bundle_name, + variant=variant, + site_id=site_id, + use_site=use_site + ) + if only_values: + major, minor, patch, _, _ = self.server_version_tuple + if major == 0 and minor >= 3: + output = { + addon["name"]: addon["settings"] + for addon in output["addons"] + } + else: + # Backward compatibility for AYON server < 0.3.0 + output = output["settings"] + return output + + def get_addons_project_settings( + self, + project_name, + bundle_name=None, + variant=None, + site_id=None, + use_site=True, + only_values=True + ): + """Project settings of all addons. + + Server returns information about used addon versions, so full output + looks like: + { + "settings": {...}, + "addons": {...} + } + + The output can be limited to only values. To do so is 'only_values' + argument which is by default set to 'True'. In that case output + contains only value of 'settings' key. + + Warnings: + Behavior of this function changed with AYON server version 0.3.0. + Structure of output from server changed. If using + 'only_values=True' then output should be same as before. + + Args: + project_name (str): Name of project for which are settings + received. + bundle_name (Optional[str]): Name of bundle for which should be + settings received. + variant (Optional[Literal['production', 'staging']]): Name of + settings variant. Used 'default_settings_variant' by default. + site_id (Optional[str]): Id of site for which want to receive + site overrides. + use_site (bool): To force disable option of using site overrides + set to 'False'. In that case won't be applied any site + overrides. + only_values (Optional[bool]): Output will contain only settings + values without metadata about addons. + + Returns: + dict[str, Any]: Settings of all addons on server for passed + project. + """ + + if not project_name: + raise ValueError("Project name must be passed.") + + output = self.get_bundle_settings( + project_name=project_name, + bundle_name=bundle_name, + variant=variant, + site_id=site_id, + use_site=use_site + ) + if only_values: + major, minor, patch, _, _ = self.server_version_tuple + if major == 0 and minor >= 3: + output = { + addon["name"]: addon["settings"] + for addon in output["addons"] + } + else: + # Backward compatibility for AYON server < 0.3.0 + output = output["settings"] + return output + + def get_addons_settings( + self, + bundle_name=None, + project_name=None, + variant=None, + site_id=None, + use_site=True, + only_values=True + ): + """Universal function to receive all addon settings. + + Based on 'project_name' will receive studio settings or project + settings. In case project is not passed is 'site_id' ignored. + + Warnings: + Behavior of this function changed with AYON server version 0.3.0. + Structure of output from server changed. If using + 'only_values=True' then output should be same as before. + + Args: + bundle_name (Optional[str]): Name of bundle for which should be + settings received. + project_name (Optional[str]): Name of project for which should be + settings received. + variant (Optional[Literal['production', 'staging']]): Name of + settings variant. Used 'default_settings_variant' by default. + site_id (Optional[str]): Id of site for which want to receive + site overrides. + use_site (Optional[bool]): To force disable option of using site + overrides set to 'False'. In that case won't be applied + any site overrides. + only_values (Optional[bool]): Only settings values will be + returned. By default, is set to 'True'. 
+ """ + + if project_name is None: + return self.get_addons_studio_settings( + bundle_name=bundle_name, + variant=variant, + site_id=site_id, + use_site=use_site, + only_values=only_values + ) + + return self.get_addons_project_settings( + project_name=project_name, + bundle_name=bundle_name, + variant=variant, + site_id=site_id, + use_site=use_site, + only_values=only_values + ) + + def get_secrets(self): + """Get all secrets. + + Example output: + [ + { + "name": "secret_1", + "value": "secret_value_1", + }, + { + "name": "secret_2", + "value": "secret_value_2", + } + ] + + Returns: + list[dict[str, str]]: List of secret entities. + """ + + response = self.get("secrets") + response.raise_for_status() + return response.data + + def get_secret(self, secret_name): + """Get secret by name. + + Example output: + { + "name": "secret_name", + "value": "secret_value", + } + + Args: + secret_name (str): Name of secret. + + Returns: + dict[str, str]: Secret entity data. + """ + + response = self.get("secrets/{}".format(secret_name)) + response.raise_for_status() + return response.data + + def save_secret(self, secret_name, secret_value): + """Save secret. + + This endpoint can create and update secret. + + Args: + secret_name (str): Name of secret. + secret_value (str): Value of secret. + """ + + response = self.put( + "secrets/{}".format(secret_name), + name=secret_name, + value=secret_value, + ) + response.raise_for_status() + return response.data + + + def delete_secret(self, secret_name): + """Delete secret by name. + + Args: + secret_name (str): Name of secret to delete. + """ + + response = self.delete("secrets/{}".format(secret_name)) + response.raise_for_status() + return response.data + + # Entity getters + def get_rest_project(self, project_name): + """Query project by name. + + This call returns project with anatomy data. + + Args: + project_name (str): Name of project. + + Returns: + Union[dict[str, Any], None]: Project entity data or 'None' if + project was not found. + """ + + if not project_name: + return None + + response = self.get("projects/{}".format(project_name)) + if response.status == 200: + return response.data + return None + + def get_rest_projects(self, active=True, library=None): + """Query available project entities. + + User must be logged in. + + Args: + active (Optional[bool]): Filter active/inactive projects. Both + are returned if 'None' is passed. + library (Optional[bool]): Filter standard/library projects. Both + are returned if 'None' is passed. + + Returns: + Generator[dict[str, Any]]: Available projects. + """ + + for project_name in self.get_project_names(active, library): + project = self.get_rest_project(project_name) + if project: + yield project + + def get_rest_entity_by_id(self, project_name, entity_type, entity_id): + """Get entity using REST on a project by its id. + + Args: + project_name (str): Name of project where entity is. + entity_type (Literal["folder", "task", "product", "version"]): The + entity type which should be received. + entity_id (str): Id of entity. + + Returns: + dict[str, Any]: Received entity data. 
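+
+ Example:
+ A minimal sketch; 'api' stands for a logged-in 'ServerAPI'
+ connection and the id value is illustrative only:
+
+ >>> folder = api.get_rest_entity_by_id(
+ ... "demo_project", "folder", "af3e4b2c6c1d11ee8e1e0242ac120002"
+ ... )
+ >>> # 'None' is returned when the entity does not exist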
+ """ + + if not all((project_name, entity_type, entity_id)): + return None + + entity_endpoint = "{}s".format(entity_type) + response = self.get("projects/{}/{}/{}".format( + project_name, entity_endpoint, entity_id + )) + if response.status == 200: + return response.data + return None + + def get_rest_folder(self, project_name, folder_id): + return self.get_rest_entity_by_id(project_name, "folder", folder_id) + + def get_rest_task(self, project_name, task_id): + return self.get_rest_entity_by_id(project_name, "task", task_id) + + def get_rest_product(self, project_name, product_id): + return self.get_rest_entity_by_id(project_name, "product", product_id) + + def get_rest_version(self, project_name, version_id): + return self.get_rest_entity_by_id(project_name, "version", version_id) + + def get_rest_representation(self, project_name, representation_id): + return self.get_rest_entity_by_id( + project_name, "representation", representation_id + ) + + def get_project_names(self, active=True, library=None): + """Receive available project names. + + User must be logged in. + + Args: + active (Optional[bool]): Filter active/inactive projects. Both + are returned if 'None' is passed. + library (Optional[bool]): Filter standard/library projects. Both + are returned if 'None' is passed. + + Returns: + list[str]: List of available project names. + """ + + query_keys = {} + if active is not None: + query_keys["active"] = "true" if active else "false" + + if library is not None: + query_keys["library"] = "true" if library else "false" + query = "" + if query_keys: + query = "?{}".format(",".join([ + "{}={}".format(key, value) + for key, value in query_keys.items() + ])) + + response = self.get("projects{}".format(query), **query_keys) + response.raise_for_status() + data = response.data + project_names = [] + if data: + for project in data["projects"]: + project_names.append(project["name"]) + return project_names + + def get_projects( + self, active=True, library=None, fields=None, own_attributes=False + ): + """Get projects. + + Args: + active (Optional[bool]): Filter active or inactive projects. + Filter is disabled when 'None' is passed. + library (Optional[bool]): Filter library projects. Filter is + disabled when 'None' is passed. + fields (Optional[Iterable[str]]): fields to be queried + for project. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Generator[dict[str, Any]]: Queried projects. + """ + + if fields is None: + use_rest = True + else: + use_rest = False + fields = set(fields) + for field in fields: + if field.startswith("config"): + use_rest = True + break + + if use_rest: + for project in self.get_rest_projects(active, library): + if own_attributes: + fill_own_attribs(project) + yield project + + else: + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("project") + + if own_attributes: + fields.add("ownAttrib") + + query = projects_graphql_query(fields) + for parsed_data in query.continuous_query(self): + for project in parsed_data["projects"]: + if own_attributes: + fill_own_attribs(project) + yield project + + def get_project(self, project_name, fields=None, own_attributes=False): + """Get project. + + Args: + project_name (str): Name of project. + fields (Optional[Iterable[str]]): fields to be queried + for project. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. 
+
+ Returns:
+ Union[dict[str, Any], None]: Project entity data or None
+ if project was not found.
+ """
+
+ use_rest = True
+ if fields is not None:
+ use_rest = False
+ _fields = set()
+ for field in fields:
+ if field.startswith("config") or field == "data":
+ use_rest = True
+ break
+ _fields.add(field)
+
+ fields = _fields
+
+ if use_rest:
+ project = self.get_rest_project(project_name)
+ if own_attributes:
+ fill_own_attribs(project)
+ return project
+
+ if "attrib" in fields:
+ fields.remove("attrib")
+ fields |= self.get_attributes_fields_for_type("project")
+
+ if own_attributes:
+ fields.add("ownAttrib")
+ query = project_graphql_query(fields)
+ query.set_variable_value("projectName", project_name)
+
+ parsed_data = query.query(self)
+
+ project = parsed_data["project"]
+ if project is not None:
+ project["name"] = project_name
+ if own_attributes:
+ fill_own_attribs(project)
+ return project
+
+ def get_folders_hierarchy(
+ self,
+ project_name,
+ search_string=None,
+ folder_types=None
+ ):
+ """Get project hierarchy.
+
+ All folders in project in hierarchy data structure.
+
+ Example output:
+ {
+ "hierarchy": [
+ {
+ "id": "...",
+ "name": "...",
+ "label": "...",
+ "status": "...",
+ "folderType": "...",
+ "hasTasks": False,
+ "taskNames": [],
+ "parents": [],
+ "parentId": None,
+ "children": [...children folders...]
+ },
+ ...
+ ]
+ }
+
+ Args:
+ project_name (str): Project where to look for folders.
+ search_string (Optional[str]): Search string to filter folders.
+ folder_types (Optional[Iterable[str]]): Folder types to filter.
+
+ Returns:
+ dict[str, Any]: Response data from server.
+ """
+
+ if folder_types:
+ folder_types = ",".join(folder_types)
+
+ query_fields = [
+ "{}={}".format(key, value)
+ for key, value in (
+ ("search", search_string),
+ ("types", folder_types),
+ )
+ if value
+ ]
+ query = ""
+ if query_fields:
+ # Query string parameters must be joined with '&'
+ query = "?{}".format("&".join(query_fields))
+
+ response = self.get(
+ "projects/{}/hierarchy{}".format(project_name, query)
+ )
+ response.raise_for_status()
+ return response.data
+
+ def get_folders(
+ self,
+ project_name,
+ folder_ids=None,
+ folder_paths=None,
+ folder_names=None,
+ parent_ids=None,
+ active=True,
+ fields=None,
+ own_attributes=False
+ ):
+ """Query folders from server.
+
+ Todos:
+ Folder name won't be unique identifier, so we should add folder
+ path filtering.
+
+ Notes:
+ Filter 'active' doesn't have a direct filter in GraphQl.
+
+ Args:
+ project_name (str): Name of project.
+ folder_ids (Optional[Iterable[str]]): Folder ids to filter.
+ folder_paths (Optional[Iterable[str]]): Folder paths used
+ for filtering.
+ folder_names (Optional[Iterable[str]]): Folder names used
+ for filtering.
+ parent_ids (Optional[Iterable[str]]): Ids of folder parents.
+ Use 'None' if folder is direct child of project.
+ active (Optional[bool]): Filter active/inactive folders.
+ Both are returned if set to None.
+ fields (Optional[Iterable[str]]): Fields to be queried for
+ folder. All possible folder fields are returned
+ if 'None' is passed.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ Generator[dict[str, Any]]: Queried folder entities.
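+
+ Example:
+ A minimal sketch; 'api' stands for a logged-in 'ServerAPI'
+ connection, project and path values are illustrative only:
+
+ >>> for folder in api.get_folders(
+ ... "demo_project", folder_paths=["/assets/characters/hero"]
+ ... ):
+ ... print(folder["name"])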
+ """ + + if not project_name: + return + + filters = { + "projectName": project_name + } + if folder_ids is not None: + folder_ids = set(folder_ids) + if not folder_ids: + return + filters["folderIds"] = list(folder_ids) + + if folder_paths is not None: + folder_paths = set(folder_paths) + if not folder_paths: + return + filters["folderPaths"] = list(folder_paths) + + if folder_names is not None: + folder_names = set(folder_names) + if not folder_names: + return + filters["folderNames"] = list(folder_names) + + if parent_ids is not None: + parent_ids = set(parent_ids) + if not parent_ids: + return + if None in parent_ids: + # Replace 'None' with '"root"' which is used during GraphQl + # query for parent ids filter for folders without folder + # parent + parent_ids.remove(None) + parent_ids.add("root") + + if project_name in parent_ids: + # Replace project name with '"root"' which is used during + # GraphQl query for parent ids filter for folders without + # folder parent + parent_ids.remove(project_name) + parent_ids.add("root") + + filters["parentFolderIds"] = list(parent_ids) + + if not fields: + fields = self.get_default_fields_for_type("folder") + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("folder") + + use_rest = False + if "data" in fields: + use_rest = True + fields = {"id"} + + if active is not None: + fields.add("active") + + if own_attributes and not use_rest: + fields.add("ownAttrib") + + query = folders_graphql_query(fields) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + for parsed_data in query.continuous_query(self): + for folder in parsed_data["project"]["folders"]: + if active is not None and active is not folder["active"]: + continue + + if use_rest: + folder = self.get_rest_folder(project_name, folder["id"]) + + if own_attributes: + fill_own_attribs(folder) + yield folder + + def get_folder_by_id( + self, + project_name, + folder_id, + fields=None, + own_attributes=False + ): + """Query folder entity by id. + + Args: + project_name (str): Name of project where to look for queried + entities. + folder_id (str): Folder id. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Folder entity data or None if was not found. + """ + + folders = self.get_folders( + project_name, + folder_ids=[folder_id], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for folder in folders: + return folder + return None + + def get_folder_by_path( + self, + project_name, + folder_path, + fields=None, + own_attributes=False + ): + """Query folder entity by path. + + Folder path is a path to folder with all parent names joined by slash. + + Args: + project_name (str): Name of project where to look for queried + entities. + folder_path (str): Folder path. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Folder entity data or None if was not found. 
+ """ + + folders = self.get_folders( + project_name, + folder_paths=[folder_path], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for folder in folders: + return folder + return None + + def get_folder_by_name( + self, + project_name, + folder_name, + fields=None, + own_attributes=False + ): + """Query folder entity by path. + + Warnings: + Folder name is not a unique identifier of a folder. Function is + kept for OpenPype 3 compatibility. + + Args: + project_name (str): Name of project where to look for queried + entities. + folder_name (str): Folder name. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Folder entity data or None if was not found. + """ + + folders = self.get_folders( + project_name, + folder_names=[folder_name], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for folder in folders: + return folder + return None + + def get_folder_ids_with_products(self, project_name, folder_ids=None): + """Find folders which have at least one product. + + Folders that have at least one product should be immutable, so they + should not change path -> change of name or name of any parent + is not possible. + + Args: + project_name (str): Name of project. + folder_ids (Optional[Iterable[str]]): Limit folder ids filtering + to a set of folders. If set to None all folders on project are + checked. + + Returns: + set[str]: Folder ids that have at least one product. + """ + + if folder_ids is not None: + folder_ids = set(folder_ids) + if not folder_ids: + return set() + + query = folders_graphql_query({"id"}) + query.set_variable_value("projectName", project_name) + query.set_variable_value("folderHasProducts", True) + if folder_ids: + query.set_variable_value("folderIds", list(folder_ids)) + + parsed_data = query.query(self) + folders = parsed_data["project"]["folders"] + return { + folder["id"] + for folder in folders + } + + def get_tasks( + self, + project_name, + task_ids=None, + task_names=None, + task_types=None, + folder_ids=None, + active=True, + fields=None, + own_attributes=False + ): + """Query task entities from server. + + Args: + project_name (str): Name of project. + task_ids (Iterable[str]): Task ids to filter. + task_names (Iterable[str]): Task names used for filtering. + task_types (Iterable[str]): Task types used for filtering. + folder_ids (Iterable[str]): Ids of task parents. Use 'None' + if folder is direct child of project. + active (Optional[bool]): Filter active/inactive tasks. + Both are returned if is set to None. + fields (Optional[Iterable[str]]): Fields to be queried for + folder. All possible folder fields are returned + if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Generator[dict[str, Any]]: Queried task entities. 
+ """ + + if not project_name: + return + + filters = { + "projectName": project_name + } + + if task_ids is not None: + task_ids = set(task_ids) + if not task_ids: + return + filters["taskIds"] = list(task_ids) + + if task_names is not None: + task_names = set(task_names) + if not task_names: + return + filters["taskNames"] = list(task_names) + + if task_types is not None: + task_types = set(task_types) + if not task_types: + return + filters["taskTypes"] = list(task_types) + + if folder_ids is not None: + folder_ids = set(folder_ids) + if not folder_ids: + return + filters["folderIds"] = list(folder_ids) + + if not fields: + fields = self.get_default_fields_for_type("task") + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("task") + + use_rest = False + if "data" in fields: + use_rest = True + fields = {"id"} + + if active is not None: + fields.add("active") + + if own_attributes: + fields.add("ownAttrib") + + query = tasks_graphql_query(fields) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + for parsed_data in query.continuous_query(self): + for task in parsed_data["project"]["tasks"]: + if active is not None and active is not task["active"]: + continue + + if use_rest: + task = self.get_rest_task(project_name, task["id"]) + + if own_attributes: + fill_own_attribs(task) + yield task + + def get_task_by_name( + self, + project_name, + folder_id, + task_name, + fields=None, + own_attributes=False + ): + """Query task entity by name and folder id. + + Args: + project_name (str): Name of project where to look for queried + entities. + folder_id (str): Folder id. + task_name (str): Task name + fields (Optional[Iterable[str]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Task entity data or None if was not found. + """ + + for task in self.get_tasks( + project_name, + folder_ids=[folder_id], + task_names=[task_name], + active=None, + fields=fields, + own_attributes=own_attributes + ): + return task + return None + + def get_task_by_id( + self, + project_name, + task_id, + fields=None, + own_attributes=False + ): + """Query task entity by id. + + Args: + project_name (str): Name of project where to look for queried + entities. + task_id (str): Task id. + fields (Optional[Iterable[str]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Task entity data or None if was not found. + """ + + for task in self.get_tasks( + project_name, + task_ids=[task_id], + active=None, + fields=fields, + own_attributes=own_attributes + ): + return task + return None + + def _filter_product( + self, project_name, product, active, own_attributes, use_rest + ): + if active is not None and product["active"] is not active: + return None + + if use_rest: + product = self.get_rest_product(project_name, product["id"]) + + if own_attributes: + fill_own_attribs(product) + + return product + + def get_products( + self, + project_name, + product_ids=None, + product_names=None, + folder_ids=None, + names_by_folder_ids=None, + active=True, + fields=None, + own_attributes=False + ): + """Query products from server. 
+
+ Todos:
+ Separate 'name_by_folder_ids' filtering into a separate method. It
+ cannot be combined with some other filters.
+
+ Args:
+ project_name (str): Name of project.
+ product_ids (Optional[Iterable[str]]): Product ids to filter.
+ product_names (Optional[Iterable[str]]): Product names used for
+ filtering.
+ folder_ids (Optional[Iterable[str]]): Ids of parent folders.
+ names_by_folder_ids (Optional[dict[str, Iterable[str]]]): Product
+ name filtering by folder id.
+ active (Optional[bool]): Filter active/inactive products.
+ Both are returned if set to None.
+ fields (Optional[Iterable[str]]): Fields to be queried for
+ product. All possible product fields are returned
+ if 'None' is passed.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ Generator[dict[str, Any]]: Queried product entities.
+ """
+
+ if not project_name:
+ return
+
+ if product_ids is not None:
+ product_ids = set(product_ids)
+ if not product_ids:
+ return
+
+ filter_product_names = None
+ if product_names is not None:
+ filter_product_names = set(product_names)
+ if not filter_product_names:
+ return
+
+ filter_folder_ids = None
+ if folder_ids is not None:
+ filter_folder_ids = set(folder_ids)
+ if not filter_folder_ids:
+ return
+
+ # This will disable 'folder_ids' and 'product_names' filters
+ # - maybe could be enhanced in future?
+ if names_by_folder_ids is not None:
+ filter_product_names = set()
+ filter_folder_ids = set()
+
+ for folder_id, names in names_by_folder_ids.items():
+ if folder_id and names:
+ filter_folder_ids.add(folder_id)
+ filter_product_names |= set(names)
+
+ if not filter_product_names or not filter_folder_ids:
+ return
+
+ # Convert fields and add minimum required fields
+ if fields:
+ fields = set(fields) | {"id"}
+ if "attrib" in fields:
+ fields.remove("attrib")
+ fields |= self.get_attributes_fields_for_type("product")
+ else:
+ fields = self.get_default_fields_for_type("product")
+
+ use_rest = False
+ if "data" in fields:
+ use_rest = True
+ fields = {"id"}
+
+ if active is not None:
+ fields.add("active")
+
+ if own_attributes:
+ fields.add("ownAttrib")
+
+ # Add 'name' and 'folderId' if 'names_by_folder_ids' filter is entered
+ if names_by_folder_ids:
+ fields.add("name")
+ fields.add("folderId")
+
+ # Prepare filters for query
+ filters = {
+ "projectName": project_name
+ }
+ if filter_folder_ids:
+ filters["folderIds"] = list(filter_folder_ids)
+
+ if product_ids:
+ filters["productIds"] = list(product_ids)
+
+ if filter_product_names:
+ filters["productNames"] = list(filter_product_names)
+
+ query = products_graphql_query(fields)
+ for attr, filter_value in filters.items():
+ query.set_variable_value(attr, filter_value)
+
+ parsed_data = query.query(self)
+
+ products = parsed_data.get("project", {}).get("products", [])
+ # Filter products by 'names_by_folder_ids'
+ if names_by_folder_ids:
+ products_by_folder_id = collections.defaultdict(list)
+ for product in products:
+ filtered_product = self._filter_product(
+ project_name, product, active, own_attributes, use_rest
+ )
+ if filtered_product is not None:
+ folder_id = filtered_product["folderId"]
+ products_by_folder_id[folder_id].append(filtered_product)
+
+ for folder_id, names in names_by_folder_ids.items():
+ for folder_product in products_by_folder_id[folder_id]:
+ if folder_product["name"] in names:
+ yield folder_product
+
+ else:
+ for product in products:
+ filtered_product = self._filter_product(
+ project_name, product, active, own_attributes, use_rest
+ )
+ if filtered_product is not None:
+ yield filtered_product
+
+ def get_product_by_id(
+ self,
+ project_name,
+ product_id,
+ fields=None,
+ own_attributes=False
+ ):
+ """Query product entity by id.
+
+ Args:
+ project_name (str): Name of project where to look for queried
+ entities.
+ product_id (str): Product id.
+ fields (Optional[Iterable[str]]): Fields that should be returned.
+ All fields are returned if 'None' is passed.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ Union[dict, None]: Product entity data or None if was not found.
+ """
+
+ products = self.get_products(
+ project_name,
+ product_ids=[product_id],
+ active=None,
+ fields=fields,
+ own_attributes=own_attributes
+ )
+ for product in products:
+ return product
+ return None
+
+ def get_product_by_name(
+ self,
+ project_name,
+ product_name,
+ folder_id,
+ fields=None,
+ own_attributes=False
+ ):
+ """Query product entity by name and folder id.
+
+ Args:
+ project_name (str): Name of project where to look for queried
+ entities.
+ product_name (str): Product name.
+ folder_id (str): Folder id (Folder is a parent of products).
+ fields (Optional[Iterable[str]]): Fields that should be returned.
+ All fields are returned if 'None' is passed.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ Union[dict, None]: Product entity data or None if was not found.
+ """
+
+ products = self.get_products(
+ project_name,
+ product_names=[product_name],
+ folder_ids=[folder_id],
+ active=None,
+ fields=fields,
+ own_attributes=own_attributes
+ )
+ for product in products:
+ return product
+ return None
+
+ def get_product_types(self, fields=None):
+ """Types of products.
+
+ This is server wide information. Product types have 'name', 'icon' and
+ 'color'.
+
+ Args:
+ fields (Optional[Iterable[str]]): Product types fields to query.
+
+ Returns:
+ list[dict[str, Any]]: Product types information.
+ """
+
+ if not fields:
+ fields = self.get_default_fields_for_type("productType")
+
+ query = product_types_query(fields)
+
+ parsed_data = query.query(self)
+
+ return parsed_data.get("productTypes", [])
+
+ def get_project_product_types(self, project_name, fields=None):
+ """Types of products available on a project.
+
+ Filter only product types available on project.
+
+ Args:
+ project_name (str): Name of project where to look for
+ product types.
+ fields (Optional[Iterable[str]]): Product types fields to query.
+
+ Returns:
+ list[dict[str, Any]]: Product types information.
+ """
+
+ if not fields:
+ fields = self.get_default_fields_for_type("productType")
+
+ query = project_product_types_query(fields)
+ query.set_variable_value("projectName", project_name)
+
+ parsed_data = query.query(self)
+
+ return parsed_data.get("project", {}).get("productTypes", [])
+
+ def get_product_type_names(self, project_name=None, product_ids=None):
+ """Product type names.
+
+ Warnings:
+ This function will probably be removed. It depends on whether the
+ 'product_ids' filter has a real use-case.
+
+ Args:
+ project_name (Optional[str]): Name of project where to look for
+ queried entities.
+ product_ids (Optional[Iterable[str]]): Product ids filter. Can be
+ used only with 'project_name'.
+
+ Returns:
+ set[str]: Product type names.
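+
+ Example:
+ A minimal sketch ('api' is a logged-in 'ServerAPI' connection;
+ project name and output values are illustrative only):
+
+ >>> api.get_product_type_names("demo_project")
+ {"model", "rig", "render"}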
+ """ + + if project_name and product_ids: + products = self.get_products( + project_name, + product_ids=product_ids, + fields=["productType"], + active=None, + ) + return { + product["productType"] + for product in products + } + + return { + product_info["name"] + for product_info in self.get_project_product_types( + project_name, fields=["name"] + ) + } + + def get_versions( + self, + project_name, + version_ids=None, + product_ids=None, + versions=None, + hero=True, + standard=True, + latest=None, + active=True, + fields=None, + own_attributes=False + ): + """Get version entities based on passed filters from server. + + Args: + project_name (str): Name of project where to look for versions. + version_ids (Optional[Iterable[str]]): Version ids used for + version filtering. + product_ids (Optional[Iterable[str]]): Product ids used for + version filtering. + versions (Optional[Iterable[int]]): Versions we're interested in. + hero (Optional[bool]): Receive also hero versions when set to true. + standard (Optional[bool]): Receive versions which are not hero when + set to true. + latest (Optional[bool]): Return only latest version of standard + versions. This can be combined only with 'standard' attribute + set to True. + active (Optional[bool]): Receive active/inactive entities. + Both are returned when 'None' is passed. + fields (Optional[Iterable[str]]): Fields to be queried + for version. All possible folder fields are returned + if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Generator[dict[str, Any]]: Queried version entities. + """ + + if not fields: + fields = self.get_default_fields_for_type("version") + else: + fields = set(fields) + if "attrib" in fields: + fields.remove("attrib") + fields |= self.get_attributes_fields_for_type("version") + + if active is not None: + fields.add("active") + + # Make sure fields have minimum required fields + fields |= {"id", "version"} + + use_rest = False + if "data" in fields: + use_rest = True + fields = {"id"} + + if own_attributes: + fields.add("ownAttrib") + + filters = { + "projectName": project_name + } + if version_ids is not None: + version_ids = set(version_ids) + if not version_ids: + return + filters["versionIds"] = list(version_ids) + + if product_ids is not None: + product_ids = set(product_ids) + if not product_ids: + return + filters["productIds"] = list(product_ids) + + # TODO versions can't be used as filter at this moment! 
+ if versions is not None: + versions = set(versions) + if not versions: + return + filters["versions"] = list(versions) + + if not hero and not standard: + return + + queries = [] + # Add filters based on 'hero' and 'standard' + # NOTE: There is not a filter to "ignore" hero versions or to get + # latest and hero version + # - if latest and hero versions should be returned it must be done in + # 2 graphql queries + if standard and not latest: + # This query all versions standard + hero + # - hero must be filtered out if is not enabled during loop + query = versions_graphql_query(fields) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + queries.append(query) + else: + if hero: + # Add hero query if hero is enabled + hero_query = versions_graphql_query(fields) + for attr, filter_value in filters.items(): + hero_query.set_variable_value(attr, filter_value) + + hero_query.set_variable_value("heroOnly", True) + queries.append(hero_query) + + if standard: + standard_query = versions_graphql_query(fields) + for attr, filter_value in filters.items(): + standard_query.set_variable_value(attr, filter_value) + + if latest: + standard_query.set_variable_value("latestOnly", True) + queries.append(standard_query) + + for query in queries: + for parsed_data in query.continuous_query(self): + for version in parsed_data["project"]["versions"]: + if active is not None and version["active"] is not active: + continue + + if not hero and version["version"] < 0: + continue + + if use_rest: + version = self.get_rest_version( + project_name, version["id"] + ) + + if own_attributes: + fill_own_attribs(version) + + yield version + + def get_version_by_id( + self, + project_name, + version_id, + fields=None, + own_attributes=False + ): + """Query version entity by id. + + Args: + project_name (str): Name of project where to look for queried + entities. + version_id (str): Version id. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Version entity data or None if was not found. + """ + + versions = self.get_versions( + project_name, + version_ids=[version_id], + active=None, + hero=True, + fields=fields, + own_attributes=own_attributes + ) + for version in versions: + return version + return None + + def get_version_by_name( + self, + project_name, + version, + product_id, + fields=None, + own_attributes=False + ): + """Query version entity by version and product id. + + Args: + project_name (str): Name of project where to look for queried + entities. + version (int): Version of version entity. + product_id (str): Product id. Product is a parent of version. + fields (Optional[Iterable[str]]): Fields that should be returned. + All fields are returned if 'None' is passed. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict, None]: Version entity data or None if was not found. + """ + + versions = self.get_versions( + project_name, + product_ids=[product_id], + versions=[version], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for version in versions: + return version + return None + + def get_hero_version_by_id( + self, + project_name, + version_id, + fields=None, + own_attributes=False + ): + """Query hero version entity by id. 
+
+ Args:
+ project_name (str): Name of project where to look for queried
+ entities.
+ version_id (str): Hero version id.
+ fields (Optional[Iterable[str]]): Fields that should be returned.
+ All fields are returned if 'None' is passed.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ Union[dict, None]: Version entity data or None if was not found.
+ """
+
+ versions = self.get_hero_versions(
+ project_name,
+ version_ids=[version_id],
+ fields=fields,
+ own_attributes=own_attributes
+ )
+ for version in versions:
+ return version
+ return None
+
+ def get_hero_version_by_product_id(
+ self,
+ project_name,
+ product_id,
+ fields=None,
+ own_attributes=False
+ ):
+ """Query hero version entity by product id.
+
+ Only one hero version is available on a product.
+
+ Args:
+ project_name (str): Name of project where to look for queried
+ entities.
+ product_id (str): Product id.
+ fields (Optional[Iterable[str]]): Fields that should be returned.
+ All fields are returned if 'None' is passed.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ Union[dict, None]: Version entity data or None if was not found.
+ """
+
+ versions = self.get_hero_versions(
+ project_name,
+ product_ids=[product_id],
+ fields=fields,
+ own_attributes=own_attributes
+ )
+ for version in versions:
+ return version
+ return None
+
+ def get_hero_versions(
+ self,
+ project_name,
+ product_ids=None,
+ version_ids=None,
+ active=True,
+ fields=None,
+ own_attributes=False
+ ):
+ """Query hero versions by multiple filters.
+
+ Only one hero version is available on a product.
+
+ Args:
+ project_name (str): Name of project where to look for queried
+ entities.
+ product_ids (Optional[Iterable[str]]): Product ids.
+ version_ids (Optional[Iterable[str]]): Version ids.
+ active (Optional[bool]): Receive active/inactive entities.
+ Both are returned when 'None' is passed.
+ fields (Optional[Iterable[str]]): Fields that should be returned.
+ All fields are returned if 'None' is passed.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ Generator[dict[str, Any]]: Queried hero version entities.
+ """
+
+ return self.get_versions(
+ project_name,
+ version_ids=version_ids,
+ product_ids=product_ids,
+ hero=True,
+ standard=False,
+ active=active,
+ fields=fields,
+ own_attributes=own_attributes
+ )
+
+ def get_last_versions(
+ self,
+ project_name,
+ product_ids,
+ active=True,
+ fields=None,
+ own_attributes=False
+ ):
+ """Query last version entities by product ids.
+
+ Args:
+ project_name (str): Project where to look for versions.
+ product_ids (Iterable[str]): Product ids.
+ active (Optional[bool]): Receive active/inactive entities.
+ Both are returned when 'None' is passed.
+ fields (Optional[Iterable[str]]): Fields to be queried
+ for versions.
+ own_attributes (Optional[bool]): Attribute values that are
+ not explicitly set on entity will have 'None' value.
+
+ Returns:
+ dict[str, dict[str, Any]]: Last versions by product id.
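+
+ Example:
+ A minimal sketch ('api' is a logged-in 'ServerAPI' connection,
+ the product id is illustrative only):
+
+ >>> last_versions = api.get_last_versions(
+ ... "demo_project",
+ ... product_ids=["af3e4b2c6c1d11ee8e1e0242ac120002"]
+ ... )
+ >>> # Mapping of product id to its last version entity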
+ """ + + versions = self.get_versions( + project_name, + product_ids=product_ids, + latest=True, + active=active, + fields=fields, + own_attributes=own_attributes + ) + return { + version["parent"]: version + for version in versions + } + + def get_last_version_by_product_id( + self, + project_name, + product_id, + active=True, + fields=None, + own_attributes=False + ): + """Query last version entity by product id. + + Args: + project_name (str): Project where to look for representation. + product_id (str): Product id. + active (Optional[bool]): Receive active/inactive entities. + Both are returned when 'None' is passed. + fields (Optional[Iterable[str]]): fields to be queried + for representations. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict[str, Any], None]: Queried version entity or None. + """ + + versions = self.get_versions( + project_name, + product_ids=[product_id], + latest=True, + active=active, + fields=fields, + own_attributes=own_attributes + ) + for version in versions: + return version + return None + + def get_last_version_by_product_name( + self, + project_name, + product_name, + folder_id, + active=True, + fields=None, + own_attributes=False + ): + """Query last version entity by product name and folder id. + + Args: + project_name (str): Project where to look for representation. + product_name (str): Product name. + folder_id (str): Folder id. + active (Optional[bool]): Receive active/inactive entities. + Both are returned when 'None' is passed. + fields (Optional[Iterable[str]): fields to be queried + for representations. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict[str, Any], None]: Queried version entity or None. + """ + + if not folder_id: + return None + + product = self.get_product_by_name( + project_name, product_name, folder_id, fields=["_id"] + ) + if not product: + return None + return self.get_last_version_by_product_id( + project_name, + product["id"], + active=active, + fields=fields, + own_attributes=own_attributes + ) + + def version_is_latest(self, project_name, version_id): + """Is version latest from a product. + + Args: + project_name (str): Project where to look for representation. + version_id (str): Version id. + + Returns: + bool: Version is latest or not. + """ + + query = GraphQlQuery("VersionIsLatest") + project_name_var = query.add_variable( + "projectName", "String!", project_name + ) + version_id_var = query.add_variable( + "versionId", "String!", version_id + ) + project_query = query.add_field("project") + project_query.set_filter("name", project_name_var) + version_query = project_query.add_field("version") + version_query.set_filter("id", version_id_var) + product_query = version_query.add_field("product") + latest_version_query = product_query.add_field("latestVersion") + latest_version_query.add_field("id") + + parsed_data = query.query(self) + latest_version = ( + parsed_data["project"]["version"]["product"]["latestVersion"] + ) + return latest_version["id"] == version_id + + def get_representations( + self, + project_name, + representation_ids=None, + representation_names=None, + version_ids=None, + names_by_version_ids=None, + active=True, + fields=None, + own_attributes=False + ): + """Get representation entities based on passed filters from server. + + Todos: + Add separated function for 'names_by_version_ids' filtering. 
+            because it cannot be combined with the other filters.
+
+        Args:
+            project_name (str): Name of project where to look for versions.
+            representation_ids (Optional[Iterable[str]]): Representation ids
+                used for representation filtering.
+            representation_names (Optional[Iterable[str]]): Representation
+                names used for representation filtering.
+            version_ids (Optional[Iterable[str]]): Version ids used for
+                representation filtering. Versions are parents of
+                representations.
+            names_by_version_ids (Optional[dict[str, Iterable[str]]]): Find
+                representations by names and version ids. This filter
+                discards all other filters.
+            active (Optional[bool]): Receive active/inactive entities.
+                Both are returned when 'None' is passed.
+            fields (Optional[Iterable[str]]): Fields to be queried for
+                representation. All possible fields are returned if 'None' is
+                passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Generator[dict[str, Any]]: Queried representation entities.
+        """
+
+        if not fields:
+            fields = self.get_default_fields_for_type("representation")
+        else:
+            fields = set(fields)
+            if "attrib" in fields:
+                fields.remove("attrib")
+                fields |= self.get_attributes_fields_for_type(
+                    "representation")
+
+        use_rest = False
+        if "data" in fields:
+            use_rest = True
+            fields = {"id"}
+
+        if active is not None:
+            fields.add("active")
+
+        if own_attributes:
+            fields.add("ownAttrib")
+
+        filters = {
+            "projectName": project_name
+        }
+
+        if representation_ids is not None:
+            representation_ids = set(representation_ids)
+            if not representation_ids:
+                return
+            filters["representationIds"] = list(representation_ids)
+
+        version_ids_filter = None
+        representation_names_filter = None
+        if names_by_version_ids is not None:
+            version_ids_filter = set()
+            representation_names_filter = set()
+            for version_id, names in names_by_version_ids.items():
+                version_ids_filter.add(version_id)
+                representation_names_filter |= set(names)
+
+            if not version_ids_filter or not representation_names_filter:
+                return
+
+        else:
+            if representation_names is not None:
+                representation_names_filter = set(representation_names)
+                if not representation_names_filter:
+                    return
+
+            if version_ids is not None:
+                version_ids_filter = set(version_ids)
+                if not version_ids_filter:
+                    return
+
+        if version_ids_filter:
+            filters["versionIds"] = list(version_ids_filter)
+
+        if representation_names_filter:
+            filters["representationNames"] = list(representation_names_filter)
+
+        query = representations_graphql_query(fields)
+
+        for attr, filter_value in filters.items():
+            query.set_variable_value(attr, filter_value)
+
+        for parsed_data in query.continuous_query(self):
+            for repre in parsed_data["project"]["representations"]:
+                if active is not None and active is not repre["active"]:
+                    continue
+
+                if use_rest:
+                    repre = self.get_rest_representation(
+                        project_name, repre["id"]
+                    )
+
+                if "context" in repre:
+                    orig_context = repre["context"]
+                    context = {}
+                    if orig_context and orig_context != "null":
+                        context = json.loads(orig_context)
+                    repre["context"] = context
+
+                if own_attributes:
+                    fill_own_attribs(repre)
+                yield repre
+
+    def get_representation_by_id(
+        self,
+        project_name,
+        representation_id,
+        fields=None,
+        own_attributes=False
+    ):
+        """Query representation entity from server based on id filter.
+
+        Args:
+            project_name (str): Project where to look for representation.
+            representation_id (str): Id of representation.
+            fields (Optional[Iterable[str]]): Fields to be queried
+                for representation.
+ own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict[str, Any], None]: Queried representation entity or None. + """ + + representations = self.get_representations( + project_name, + representation_ids=[representation_id], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for representation in representations: + return representation + return None + + def get_representation_by_name( + self, + project_name, + representation_name, + version_id, + fields=None, + own_attributes=False + ): + """Query representation entity by name and version id. + + Args: + project_name (str): Project where to look for representation. + representation_name (str): Representation name. + version_id (str): Version id. + fields (Optional[Iterable[str]]): fields to be queried + for representations. + own_attributes (Optional[bool]): Attribute values that are + not explicitly set on entity will have 'None' value. + + Returns: + Union[dict[str, Any], None]: Queried representation entity or None. + """ + + representations = self.get_representations( + project_name, + representation_names=[representation_name], + version_ids=[version_id], + active=None, + fields=fields, + own_attributes=own_attributes + ) + for representation in representations: + return representation + return None + + def get_representations_parents(self, project_name, representation_ids): + """Find representations parents by representation id. + + Representation parent entities up to project. + + Args: + project_name (str): Project where to look for entities. + representation_ids (Iterable[str]): Representation ids. + + Returns: + dict[str, RepresentationParents]: Parent entities by + representation id. + """ + + if not representation_ids: + return {} + + project = self.get_project(project_name) + repre_ids = set(representation_ids) + output = { + repre_id: RepresentationParents(None, None, None, None) + for repre_id in representation_ids + } + + version_fields = self.get_default_fields_for_type("version") + product_fields = self.get_default_fields_for_type("product") + folder_fields = self.get_default_fields_for_type("folder") + + query = representations_parents_qraphql_query( + version_fields, product_fields, folder_fields + ) + query.set_variable_value("projectName", project_name) + query.set_variable_value("representationIds", list(repre_ids)) + + parsed_data = query.query(self) + for repre in parsed_data["project"]["representations"]: + repre_id = repre["id"] + version = repre.pop("version") + product = version.pop("product") + folder = product.pop("folder") + output[repre_id] = RepresentationParents( + version, product, folder, project + ) + + return output + + def get_representation_parents(self, project_name, representation_id): + """Find representation parents by representation id. + + Representation parent entities up to project. + + Args: + project_name (str): Project where to look for entities. + representation_id (str): Representation id. + + Returns: + RepresentationParents: Representation parent entities. + """ + + if not representation_id: + return None + + parents_by_repre_id = self.get_representations_parents( + project_name, [representation_id] + ) + return parents_by_repre_id[representation_id] + + def get_repre_ids_by_context_filters( + self, + project_name, + context_filters, + representation_names=None, + version_ids=None + ): + """Find representation ids which match passed context filters. 
+
+        Each representation has context integrated on representation entity
+        in database. The context may contain project, folder, task name or
+        product name, product type and many more. This implementation gives
+        the option to quickly filter representations based on their context
+        data in the database.
+
+        Context filters have a defined structure. To define a filter of a
+        nested subfield use dot '.' as delimiter (for example 'task.name').
+        Filter values can be regex filters. String or 're.Pattern' can
+        be used.
+
+        Args:
+            project_name (str): Project where to look for representations.
+            context_filters (dict[str, list[str]]): Filters of context
+                fields.
+            representation_names (Optional[Iterable[str]]): Representation
+                names, can be used as additional filter for representations
+                by their names.
+            version_ids (Optional[Iterable[str]]): Version ids, can be used
+                as additional filter for representations by their parent ids.
+
+        Returns:
+            list[str]: Representation ids that match passed filters.
+
+        Example:
+            The function returns just representation ids so if entities are
+            required for functionality they must be queried afterwards by
+            their ids.
+            >>> project_name = "testProject"
+            >>> filters = {
+            ...     "task.name": ["[aA]nimation"],
+            ...     "product": [".*[Mm]ain"]
+            ... }
+            >>> repre_ids = get_repre_ids_by_context_filters(
+            ...     project_name, filters)
+            >>> repres = get_representations(project_name, repre_ids)
+        """
+
+        if not isinstance(context_filters, dict):
+            raise TypeError(
+                "Expected 'dict' got {}".format(str(type(context_filters)))
+            )
+
+        filter_body = {}
+        if representation_names is not None:
+            if not representation_names:
+                return []
+            filter_body["names"] = list(set(representation_names))
+
+        if version_ids is not None:
+            if not version_ids:
+                return []
+            filter_body["versionIds"] = list(set(version_ids))
+
+        body_context_filters = []
+        for key, filters in context_filters.items():
+            if not isinstance(filters, (set, list, tuple)):
+                raise TypeError(
+                    "Expected 'set', 'list', 'tuple' got {}".format(
+                        str(type(filters))))
+
+            new_filters = set()
+            for filter_value in filters:
+                if isinstance(filter_value, PatternType):
+                    filter_value = filter_value.pattern
+                new_filters.add(filter_value)
+
+            body_context_filters.append({
+                "key": key,
+                "values": list(new_filters)
+            })
+
+        response = self.post(
+            "projects/{}/repreContextFilter".format(project_name),
+            context=body_context_filters,
+            **filter_body
+        )
+        response.raise_for_status()
+        return response.data["ids"]
+
+    def get_workfiles_info(
+        self,
+        project_name,
+        workfile_ids=None,
+        task_ids=None,
+        paths=None,
+        fields=None,
+        own_attributes=False
+    ):
+        """Workfile info entities by passed filters.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            workfile_ids (Optional[Iterable[str]]): Workfile ids.
+            task_ids (Optional[Iterable[str]]): Task ids.
+            paths (Optional[Iterable[str]]): Rootless workfiles paths.
+            fields (Optional[Iterable[str]]): Fields to be queried for
+                workfile info. All possible fields are returned if 'None' is
+                passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Generator[dict[str, Any]]: Queried workfile info entities.
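+
+        Example:
+            Illustrative only; the task id and the resulting path are
+            hypothetical placeholders.
+
+            >>> for workfile_info in get_workfiles_info(
+            ...     "my_project", task_ids=["<task_id>"]
+            ... ):
+            ...     print(workfile_info["path"])
+            work/shot010_layout_v001.ma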
+        """
+
+        filters = {"projectName": project_name}
+        if task_ids is not None:
+            task_ids = set(task_ids)
+            if not task_ids:
+                return
+            filters["taskIds"] = list(task_ids)
+
+        if paths is not None:
+            paths = set(paths)
+            if not paths:
+                return
+            filters["paths"] = list(paths)
+
+        if workfile_ids is not None:
+            workfile_ids = set(workfile_ids)
+            if not workfile_ids:
+                return
+            filters["workfileIds"] = list(workfile_ids)
+
+        if not fields:
+            fields = self.get_default_fields_for_type("workfile")
+
+        fields = set(fields)
+        if "attrib" in fields:
+            fields.remove("attrib")
+            fields |= {
+                "attrib.{}".format(attr)
+                for attr in self.get_attributes_for_type("workfile")
+            }
+        if own_attributes:
+            fields.add("ownAttrib")
+
+        query = workfiles_info_graphql_query(fields)
+
+        for attr, filter_value in filters.items():
+            query.set_variable_value(attr, filter_value)
+
+        for parsed_data in query.continuous_query(self):
+            for workfile_info in parsed_data["project"]["workfiles"]:
+                if own_attributes:
+                    fill_own_attribs(workfile_info)
+                yield workfile_info
+
+    def get_workfile_info(
+        self, project_name, task_id, path, fields=None, own_attributes=False
+    ):
+        """Workfile info entity by task id and workfile path.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            task_id (str): Task id.
+            path (str): Rootless workfile path.
+            fields (Optional[Iterable[str]]): Fields to be queried for
+                workfile info. All possible fields are returned if 'None' is
+                passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Union[dict[str, Any], None]: Workfile info entity or None.
+        """
+
+        if not task_id or not path:
+            return None
+
+        for workfile_info in self.get_workfiles_info(
+            project_name,
+            task_ids=[task_id],
+            paths=[path],
+            fields=fields,
+            own_attributes=own_attributes
+        ):
+            return workfile_info
+        return None
+
+    def get_workfile_info_by_id(
+        self, project_name, workfile_id, fields=None, own_attributes=False
+    ):
+        """Workfile info entity by id.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            workfile_id (str): Workfile info id.
+            fields (Optional[Iterable[str]]): Fields to be queried for
+                workfile info. All possible fields are returned if 'None' is
+                passed.
+            own_attributes (Optional[bool]): Attribute values that are
+                not explicitly set on entity will have 'None' value.
+
+        Returns:
+            Union[dict[str, Any], None]: Workfile info entity or None.
+        """
+
+        if not workfile_id:
+            return None
+
+        for workfile_info in self.get_workfiles_info(
+            project_name,
+            workfile_ids=[workfile_id],
+            fields=fields,
+            own_attributes=own_attributes
+        ):
+            return workfile_info
+        return None
+
+    def _prepare_thumbnail_content(self, project_name, response):
+        content = None
+        content_type = response.content_type
+
+        # It is expected the response contains thumbnail id otherwise the
+        # content cannot be cached and filepath returned
+        thumbnail_id = response.headers.get("X-Thumbnail-Id")
+        if thumbnail_id is not None:
+            content = response.content
+
+        return ThumbnailContent(
+            project_name, thumbnail_id, content, content_type
+        )
+
+    def get_thumbnail_by_id(self, project_name, thumbnail_id):
+        """Get thumbnail from server by id.
+
+        Permissions of thumbnails are related to entities, so when the
+        thumbnail id is not known, prefer the entity based methods which
+        query the thumbnail by entity type and entity id.
+
+        Notes:
+            Each entity that allows thumbnails has a 'thumbnailId' field
+            which can be queried to get the id used by this method.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            thumbnail_id (str): Thumbnail id.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Does not have to be
+                valid.
+        """
+
+        response = self.raw_get(
+            "projects/{}/thumbnails/{}".format(
+                project_name,
+                thumbnail_id
+            )
+        )
+        return self._prepare_thumbnail_content(project_name, response)
+
+    def get_thumbnail(
+        self, project_name, entity_type, entity_id, thumbnail_id=None
+    ):
+        """Get thumbnail from server.
+
+        Permissions of thumbnails are related to entities so thumbnails must
+        be queried per entity. An entity type and entity id are required
+        to be passed.
+
+        Notes:
+            It is recommended to use one of the prepared entity type specific
+            methods 'get_folder_thumbnail', 'get_version_thumbnail' or
+            'get_workfile_thumbnail'.
+            We recommend passing the thumbnail id if you have access to it.
+            Each entity that allows thumbnails has 'thumbnailId' field, so it
+            can be queried.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            entity_type (str): Entity type which passed entity id represents.
+            entity_id (str): Entity id for which thumbnail should be returned.
+            thumbnail_id (Optional[str]): Prepared thumbnail id from entity.
+                When passed, 'get_thumbnail_by_id' is used directly.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Does not have to be
+                valid.
+        """
+
+        if thumbnail_id:
+            return self.get_thumbnail_by_id(project_name, thumbnail_id)
+
+        if entity_type in (
+            "folder",
+            "version",
+            "workfile",
+        ):
+            entity_type += "s"
+
+        response = self.raw_get("projects/{}/{}/{}/thumbnail".format(
+            project_name,
+            entity_type,
+            entity_id
+        ))
+        return self._prepare_thumbnail_content(project_name, response)
+
+    def get_folder_thumbnail(
+        self, project_name, folder_id, thumbnail_id=None
+    ):
+        """Prepared method to receive thumbnail for folder entity.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            folder_id (str): Folder id for which thumbnail should be returned.
+            thumbnail_id (Optional[str]): Prepared thumbnail id from entity.
+                Used only to check if thumbnail was already cached.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Does not have to be
+                valid.
+        """
+
+        return self.get_thumbnail(
+            project_name, "folder", folder_id, thumbnail_id
+        )
+
+    def get_version_thumbnail(
+        self, project_name, version_id, thumbnail_id=None
+    ):
+        """Prepared method to receive thumbnail for version entity.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            version_id (str): Version id for which thumbnail should be
+                returned.
+            thumbnail_id (Optional[str]): Prepared thumbnail id from entity.
+                Used only to check if thumbnail was already cached.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Does not have to be
+                valid.
+        """
+
+        return self.get_thumbnail(
+            project_name, "version", version_id, thumbnail_id
+        )
+
+    def get_workfile_thumbnail(
+        self, project_name, workfile_id, thumbnail_id=None
+    ):
+        """Prepared method to receive thumbnail for workfile entity.
+
+        Args:
+            project_name (str): Project under which the entity is located.
+            workfile_id (str): Workfile id for which thumbnail should be
+                returned.
+            thumbnail_id (Optional[str]): Prepared thumbnail id from entity.
+                Used only to check if thumbnail was already cached.
+
+        Returns:
+            ThumbnailContent: Thumbnail content wrapper. Does not have to be
+                valid.
+        """
+
+        return self.get_thumbnail(
+            project_name, "workfile", workfile_id, thumbnail_id
+        )
+
+    def _get_thumbnail_mime_type(self, thumbnail_path):
+        """Get thumbnail mime type on thumbnail creation based on source path.
+
+        Args:
+            thumbnail_path (str): Path to thumbnail source file.
+
+        Returns:
+            str: Mime type used for thumbnail creation.
+
+        Raises:
+            ValueError: Mime type cannot be determined.
+        """
+
+        ext = os.path.splitext(thumbnail_path)[-1].lower()
+        if ext == ".png":
+            return "image/png"
+
+        elif ext in (".jpeg", ".jpg"):
+            return "image/jpeg"
+
+        raise ValueError(
+            "Thumbnail source file has unknown extension {}".format(ext))
+
+    def create_thumbnail(self, project_name, src_filepath, thumbnail_id=None):
+        """Create new thumbnail on server from passed path.
+
+        Args:
+            project_name (str): Project where the thumbnail will be created
+                and can be used.
+            src_filepath (str): Filepath to thumbnail which should be uploaded.
+            thumbnail_id (Optional[str]): Prepared id of the thumbnail.
+
+        Returns:
+            str: Created thumbnail id.
+
+        Raises:
+            ValueError: When thumbnail source cannot be processed.
+        """
+
+        if not os.path.exists(src_filepath):
+            raise ValueError("Entered filepath does not exist.")
+
+        if thumbnail_id:
+            self.update_thumbnail(
+                project_name,
+                thumbnail_id,
+                src_filepath
+            )
+            return thumbnail_id
+
+        mime_type = self._get_thumbnail_mime_type(src_filepath)
+        with open(src_filepath, "rb") as stream:
+            content = stream.read()
+
+        response = self.raw_post(
+            "projects/{}/thumbnails".format(project_name),
+            headers={"Content-Type": mime_type},
+            data=content
+        )
+        response.raise_for_status()
+        return response.data["id"]
+
+    def update_thumbnail(self, project_name, thumbnail_id, src_filepath):
+        """Change thumbnail content by id.
+
+        Update can also be used to create a new thumbnail.
+
+        Args:
+            project_name (str): Project where the thumbnail will be created
+                and can be used.
+            thumbnail_id (str): Thumbnail id to update.
+            src_filepath (str): Filepath to thumbnail which should be uploaded.
+
+        Raises:
+            ValueError: When thumbnail source cannot be processed.
+        """
+
+        if not os.path.exists(src_filepath):
+            raise ValueError("Entered filepath does not exist.")
+
+        mime_type = self._get_thumbnail_mime_type(src_filepath)
+        with open(src_filepath, "rb") as stream:
+            content = stream.read()
+
+        response = self.raw_put(
+            "projects/{}/thumbnails/{}".format(project_name, thumbnail_id),
+            headers={"Content-Type": mime_type},
+            data=content
+        )
+        response.raise_for_status()
+
+    def create_project(
+        self,
+        project_name,
+        project_code,
+        library_project=False,
+        preset_name=None
+    ):
+        """Create project using Ayon settings.
+
+        This function does not validate the project entity on creation,
+        because the project is created blindly with only the minimum
+        required information: name and code.
+
+        Entered project name must be unique and project must not exist yet.
+
+        Note:
+            This function is here to be OP v4 ready, but in v3 it has more
+            logic to do. That's why inner imports are in the body.
+
+        Args:
+            project_name (str): New project name. Should be unique.
+            project_code (str): Project's code should be unique too.
+            library_project (Optional[bool]): Project is library project.
+            preset_name (Optional[str]): Name of anatomy preset. Default is
+                used if not passed.
+
+        Raises:
+            ValueError: When project name already exists.
+
+        Returns:
+            dict[str, Any]: Created project entity.
+        """
+
+        if self.get_project(project_name):
+            raise ValueError("Project with name \"{}\" already exists".format(
+                project_name
+            ))
+
+        if not PROJECT_NAME_REGEX.match(project_name):
+            raise ValueError((
+                "Project name \"{}\" contains invalid characters"
+            ).format(project_name))
+
+        preset = self.get_project_anatomy_preset(preset_name)
+
+        result = self.post(
+            "projects",
+            name=project_name,
+            code=project_code,
+            anatomy=preset,
+            library=library_project
+        )
+
+        if result.status != 201:
+            details = "Unknown details ({})".format(result.status)
+            if result.data:
+                details = result.data.get("detail") or details
+            raise ValueError("Failed to create project \"{}\": {}".format(
+                project_name, details
+            ))
+
+        return self.get_project(project_name)
+
+    def update_project(
+        self,
+        project_name,
+        library=None,
+        folder_types=None,
+        task_types=None,
+        link_types=None,
+        statuses=None,
+        tags=None,
+        config=None,
+        attrib=None,
+        data=None,
+        active=None,
+        project_code=None,
+        **changes
+    ):
+        """Update project entity on server.
+
+        Args:
+            project_name (str): Name of project.
+            library (Optional[bool]): Change library state.
+            folder_types (Optional[list[dict[str, Any]]]): Folder type
+                definitions.
+            task_types (Optional[list[dict[str, Any]]]): Task type
+                definitions.
+            link_types (Optional[list[dict[str, Any]]]): Link type
+                definitions.
+            statuses (Optional[list[dict[str, Any]]]): Status definitions.
+            tags (Optional[list[dict[str, Any]]]): List of tags available to
+                set on entities.
+            config (Optional[dict[dict[str, Any]]]): Project anatomy config
+                with templates and roots.
+            attrib (Optional[dict[str, Any]]): Project attributes to change.
+            data (Optional[dict[str, Any]]): Custom data of a project. This
+                value will completely override existing project data.
+            active (Optional[bool]): Change active state of a project.
+            project_code (Optional[str]): Change project code. Not recommended
+                during production.
+            **changes: Other changed keys based on Rest API documentation.
+        """
+
+        changes.update({
+            key: value
+            for key, value in (
+                ("library", library),
+                ("folderTypes", folder_types),
+                ("taskTypes", task_types),
+                ("linkTypes", link_types),
+                ("statuses", statuses),
+                ("tags", tags),
+                ("config", config),
+                ("attrib", attrib),
+                ("data", data),
+                ("active", active),
+                ("code", project_code),
+            )
+            if value is not None
+        })
+        response = self.patch(
+            "projects/{}".format(project_name),
+            **changes
+        )
+        response.raise_for_status()
+
+    def delete_project(self, project_name):
+        """Delete project from server.
+
+        This will completely remove project from server without any step back.
+
+        Args:
+            project_name (str): Project name that will be removed.
+        """
+
+        if not self.get_project(project_name):
+            raise ValueError("Project with name \"{}\" was not found".format(
+                project_name
+            ))
+
+        result = self.delete("projects/{}".format(project_name))
+        if result.status_code != 204:
+            raise ValueError(
+                "Failed to delete project \"{}\". {}".format(
+                    project_name, result.data["detail"]
+                )
+            )
+
+    # --- Links ---
+    def get_full_link_type_name(self, link_type_name, input_type, output_type):
+        """Calculate full link type name used for query from server.
+ + Args: + link_type_name (str): Type of link. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. + + Returns: + str: Full name of link type used for query from server. + """ + + return "|".join([link_type_name, input_type, output_type]) + + def get_link_types(self, project_name): + """All link types available on a project. + + Example output: + [ + { + "name": "reference|folder|folder", + "link_type": "reference", + "input_type": "folder", + "output_type": "folder", + "data": {} + } + ] + + Args: + project_name (str): Name of project where to look for link types. + + Returns: + list[dict[str, Any]]: Link types available on project. + """ + + response = self.get("projects/{}/links/types".format(project_name)) + response.raise_for_status() + return response.data["types"] + + def get_link_type( + self, project_name, link_type_name, input_type, output_type + ): + """Get link type data. + + There is not dedicated REST endpoint to get single link type, + so method 'get_link_types' is used. + + Example output: + { + "name": "reference|folder|folder", + "link_type": "reference", + "input_type": "folder", + "output_type": "folder", + "data": {} + } + + Args: + project_name (str): Project where link type is available. + link_type_name (str): Name of link type. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. + + Returns: + Union[None, dict[str, Any]]: Link type information. + """ + + full_type_name = self.get_full_link_type_name( + link_type_name, input_type, output_type + ) + for link_type in self.get_link_types(project_name): + if link_type["name"] == full_type_name: + return link_type + return None + + def create_link_type( + self, project_name, link_type_name, input_type, output_type, data=None + ): + """Create or update link type on server. + + Warning: + Because PUT is used for creation it is also used for update. + + Args: + project_name (str): Project where link type is created. + link_type_name (str): Name of link type. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. + data (Optional[dict[str, Any]]): Additional data related to link. + + Raises: + HTTPRequestError: Server error happened. + """ + + if data is None: + data = {} + full_type_name = self.get_full_link_type_name( + link_type_name, input_type, output_type + ) + response = self.put( + "projects/{}/links/types/{}".format(project_name, full_type_name), + **data + ) + response.raise_for_status() + + def delete_link_type( + self, project_name, link_type_name, input_type, output_type + ): + """Remove link type from project. + + Args: + project_name (str): Project where link type is created. + link_type_name (str): Name of link type. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. + + Raises: + HTTPRequestError: Server error happened. + """ + + full_type_name = self.get_full_link_type_name( + link_type_name, input_type, output_type + ) + response = self.delete( + "projects/{}/links/types/{}".format(project_name, full_type_name)) + response.raise_for_status() + + def make_sure_link_type_exists( + self, project_name, link_type_name, input_type, output_type, data=None + ): + """Make sure link type exists on a project. + + Args: + project_name (str): Name of project. + link_type_name (str): Name of link type. + input_type (str): Input entity type of link. + output_type (str): Output entity type of link. 
+            data (Optional[dict[str, Any]]): Link type related data.
+        """
+
+        link_type = self.get_link_type(
+            project_name, link_type_name, input_type, output_type)
+        if (
+            link_type
+            and (data is None or data == link_type["data"])
+        ):
+            return
+        self.create_link_type(
+            project_name, link_type_name, input_type, output_type, data
+        )
+
+    def create_link(
+        self,
+        project_name,
+        link_type_name,
+        input_id,
+        input_type,
+        output_id,
+        output_type
+    ):
+        """Create link between 2 entities.
+
+        Link has a type which must already exist on a project.
+
+        Example output:
+            {
+                "id": "59a212c0d2e211eda0e20242ac120002"
+            }
+
+        Args:
+            project_name (str): Project where the link is created.
+            link_type_name (str): Type of link.
+            input_id (str): Id of input entity.
+            input_type (str): Entity type of input entity.
+            output_id (str): Id of output entity.
+            output_type (str): Entity type of output entity.
+
+        Returns:
+            dict[str, str]: Information about link.
+
+        Raises:
+            HTTPRequestError: Server error happened.
+        """
+
+        full_link_type_name = self.get_full_link_type_name(
+            link_type_name, input_type, output_type)
+        response = self.post(
+            "projects/{}/links".format(project_name),
+            link=full_link_type_name,
+            input=input_id,
+            output=output_id
+        )
+        response.raise_for_status()
+        return response.data
+
+    def delete_link(self, project_name, link_id):
+        """Remove link by id.
+
+        Args:
+            project_name (str): Project where link exists.
+            link_id (str): Id of link.
+
+        Raises:
+            HTTPRequestError: Server error happened.
+        """
+
+        response = self.delete(
+            "projects/{}/links/{}".format(project_name, link_id)
+        )
+        response.raise_for_status()
+
+    def _prepare_link_filters(self, filters, link_types, link_direction):
+        """Add links filters for GraphQl queries.
+
+        Args:
+            filters (dict[str, Any]): Object where filters will be added.
+            link_types (Union[Iterable[str], None]): Link types filters.
+            link_direction (Union[Literal["in", "out"], None]): Direction of
+                link "in", "out" or 'None' for both.
+
+        Returns:
+            bool: Links are valid, and query from server can happen.
+        """
+
+        if link_types is not None:
+            link_types = set(link_types)
+            if not link_types:
+                return False
+            filters["linkTypes"] = list(link_types)
+
+        if link_direction is not None:
+            if link_direction not in ("in", "out"):
+                return False
+            filters["linkDirection"] = link_direction
+        return True
+
+    def get_entities_links(
+        self,
+        project_name,
+        entity_type,
+        entity_ids=None,
+        link_types=None,
+        link_direction=None
+    ):
+        """Helper method to get links from server for entity types.
+
+        Example output:
+            [
+                {
+                    "id": "59a212c0d2e211eda0e20242ac120002",
+                    "linkType": "reference",
+                    "description": "reference link between folders",
+                    "projectName": "my_project",
+                    "author": "frantadmin",
+                    "entityId": "b1df109676db11ed8e8c6c9466b19aa8",
+                    "entityType": "folder",
+                    "direction": "out"
+                },
+                ...
+            ]
+
+        Args:
+            project_name (str): Project where links are.
+            entity_type (Literal["folder", "task", "product",
+                "version", "representation"]): Entity type.
+            entity_ids (Optional[Iterable[str]]): Ids of entities for which
+                links should be received.
+            link_types (Optional[Iterable[str]]): Link type filters.
+            link_direction (Optional[Literal["in", "out"]]): Link direction
+                filter.
+
+        Returns:
+            dict[str, list[dict[str, Any]]]: Link info by entity ids.
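+
+        Example:
+            Illustrative usage only; the folder id is a hypothetical
+            placeholder.
+
+            >>> links_by_id = get_entities_links(
+            ...     "my_project",
+            ...     "folder",
+            ...     entity_ids=["<folder_id>"],
+            ...     link_direction="out"
+            ... )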
+ """ + + if entity_type == "folder": + query_func = folders_graphql_query + id_filter_key = "folderIds" + project_sub_key = "folders" + elif entity_type == "task": + query_func = tasks_graphql_query + id_filter_key = "taskIds" + project_sub_key = "tasks" + elif entity_type == "product": + query_func = products_graphql_query + id_filter_key = "productIds" + project_sub_key = "products" + elif entity_type == "version": + query_func = versions_graphql_query + id_filter_key = "versionIds" + project_sub_key = "versions" + elif entity_type == "representation": + query_func = representations_graphql_query + id_filter_key = "representationIds" + project_sub_key = "representations" + else: + raise ValueError("Unknown type \"{}\". Expected {}".format( + entity_type, + ", ".join( + ("folder", "task", "product", "version", "representation") + ) + )) + + output = collections.defaultdict(list) + filters = { + "projectName": project_name + } + if entity_ids is not None: + entity_ids = set(entity_ids) + if not entity_ids: + return output + filters[id_filter_key] = list(entity_ids) + + if not self._prepare_link_filters(filters, link_types, link_direction): + return output + + query = query_func({"id", "links"}) + for attr, filter_value in filters.items(): + query.set_variable_value(attr, filter_value) + + for parsed_data in query.continuous_query(self): + for entity in parsed_data["project"][project_sub_key]: + entity_id = entity["id"] + output[entity_id].extend(entity["links"]) + return output + + def get_folders_links( + self, + project_name, + folder_ids=None, + link_types=None, + link_direction=None + ): + """Query folders links from server. + + Args: + project_name (str): Project where links are. + folder_ids (Optional[Iterable[str]]): Ids of folders for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by folder ids. + """ + + return self.get_entities_links( + project_name, "folder", folder_ids, link_types, link_direction + ) + + def get_folder_links( + self, + project_name, + folder_id, + link_types=None, + link_direction=None + ): + """Query folder links from server. + + Args: + project_name (str): Project where links are. + folder_id (str): Folder id for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of folder. + """ + + return self.get_folders_links( + project_name, [folder_id], link_types, link_direction + )[folder_id] + + def get_tasks_links( + self, + project_name, + task_ids=None, + link_types=None, + link_direction=None + ): + """Query tasks links from server. + + Args: + project_name (str): Project where links are. + task_ids (Optional[Iterable[str]]): Ids of tasks for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by task ids. + """ + + return self.get_entities_links( + project_name, "task", task_ids, link_types, link_direction + ) + + def get_task_links( + self, + project_name, + task_id, + link_types=None, + link_direction=None + ): + """Query task links from server. + + Args: + project_name (str): Project where links are. 
+ task_id (str): Task id for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of task. + """ + + return self.get_tasks_links( + project_name, [task_id], link_types, link_direction + )[task_id] + + def get_products_links( + self, + project_name, + product_ids=None, + link_types=None, + link_direction=None + ): + """Query products links from server. + + Args: + project_name (str): Project where links are. + product_ids (Optional[Iterable[str]]): Ids of products for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by product ids. + """ + + return self.get_entities_links( + project_name, "product", product_ids, link_types, link_direction + ) + + def get_product_links( + self, + project_name, + product_id, + link_types=None, + link_direction=None + ): + """Query product links from server. + + Args: + project_name (str): Project where links are. + product_id (str): Product id for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of product. + """ + + return self.get_products_links( + project_name, [product_id], link_types, link_direction + )[product_id] + + def get_versions_links( + self, + project_name, + version_ids=None, + link_types=None, + link_direction=None + ): + """Query versions links from server. + + Args: + project_name (str): Project where links are. + version_ids (Optional[Iterable[str]]): Ids of versions for which + links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by version ids. + """ + + return self.get_entities_links( + project_name, "version", version_ids, link_types, link_direction + ) + + def get_version_links( + self, + project_name, + version_id, + link_types=None, + link_direction=None + ): + """Query version links from server. + + Args: + project_name (str): Project where links are. + version_id (str): Version id for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of version. + """ + + return self.get_versions_links( + project_name, [version_id], link_types, link_direction + )[version_id] + + def get_representations_links( + self, + project_name, + representation_ids=None, + link_types=None, + link_direction=None + ): + """Query representations links from server. + + Args: + project_name (str): Project where links are. + representation_ids (Optional[Iterable[str]]): Ids of + representations for which links should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + dict[str, list[dict[str, Any]]]: Link info by representation ids. 
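+
+        Example:
+            Illustrative usage only; the representation id is a hypothetical
+            placeholder. Output has the same structure as
+            'get_entities_links'.
+
+            >>> links_by_repre_id = get_representations_links(
+            ...     "my_project",
+            ...     representation_ids=["<representation_id>"],
+            ...     link_types=["reference"]
+            ... )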
+ """ + + return self.get_entities_links( + project_name, + "representation", + representation_ids, + link_types, + link_direction + ) + + def get_representation_links( + self, + project_name, + representation_id, + link_types=None, + link_direction=None + ): + """Query representation links from server. + + Args: + project_name (str): Project where links are. + representation_id (str): Representation id for which links + should be received. + link_types (Optional[Iterable[str]]): Link type filters. + link_direction (Optional[Literal["in", "out"]]): Link direction + filter. + + Returns: + list[dict[str, Any]]: Link info of representation. + """ + + return self.get_representations_links( + project_name, [representation_id], link_types, link_direction + )[representation_id] + + # --- Batch operations processing --- + def send_batch_operations( + self, + project_name, + operations, + can_fail=False, + raise_on_fail=True + ): + """Post multiple CRUD operations to server. + + When multiple changes should be made on server side this is the best + way to go. It is possible to pass multiple operations to process on a + server side and do the changes in a transaction. + + Args: + project_name (str): On which project should be operations + processed. + operations (list[dict[str, Any]]): Operations to be processed. + can_fail (Optional[bool]): Server will try to process all + operations even if one of them fails. + raise_on_fail (Optional[bool]): Raise exception if an operation + fails. You can handle failed operations on your own + when set to 'False'. + + Raises: + ValueError: Operations can't be converted to json string. + FailedOperations: When output does not contain server operations + or 'raise_on_fail' is enabled and any operation fails. + + Returns: + list[dict[str, Any]]: Operations result with process details. + """ + + if not operations: + return [] + + body_by_id = {} + operations_body = [] + for operation in operations: + if not operation: + continue + + op_id = operation.get("id") + if not op_id: + op_id = create_entity_id() + operation["id"] = op_id + + try: + body = json.loads( + json.dumps(operation, default=entity_data_json_default) + ) + except: + raise ValueError("Couldn't json parse body: {}".format( + json.dumps( + operation, indent=4, default=failed_json_default + ) + )) + + body_by_id[op_id] = body + operations_body.append(body) + + if not operations_body: + return [] + + result = self.post( + "projects/{}/operations".format(project_name), + operations=operations_body, + canFail=can_fail + ) + + op_results = result.get("operations") + if op_results is None: + raise FailedOperations( + "Operation failed. Content: {}".format(str(result)) + ) + + if result.get("success") or not raise_on_fail: + return op_results + + for op_result in op_results: + if not op_result["success"]: + operation_id = op_result["id"] + raise FailedOperations(( + "Operation \"{}\" failed with data:\n{}\nDetail: {}." 
+                ).format(
+                    operation_id,
+                    json.dumps(body_by_id[operation_id], indent=4),
+                    op_result["detail"],
+                ))
+        return op_results
diff --git a/openpype/vendor/python/common/ayon_api/utils.py b/openpype/vendor/python/common/ayon_api/utils.py
new file mode 100644
index 0000000000..314d13faec
--- /dev/null
+++ b/openpype/vendor/python/common/ayon_api/utils.py
@@ -0,0 +1,633 @@
+import re
+import datetime
+import uuid
+import string
+import platform
+import collections
+try:
+    # Python 3
+    from urllib.parse import urlparse, urlencode
+except ImportError:
+    # Python 2
+    from urlparse import urlparse
+    from urllib import urlencode
+
+import requests
+import unidecode
+
+from .exceptions import UrlError
+
+REMOVED_VALUE = object()
+SLUGIFY_WHITELIST = string.ascii_letters + string.digits
+SLUGIFY_SEP_WHITELIST = " ,./\\;:!|*^#@~+-_="
+
+RepresentationParents = collections.namedtuple(
+    "RepresentationParents",
+    ("version", "product", "folder", "project")
+)
+
+
+class ThumbnailContent:
+    """Wrapper for thumbnail content.
+
+    Args:
+        project_name (str): Project name.
+        thumbnail_id (Union[str, None]): Thumbnail id.
+        content_type (Union[str, None]): Content type e.g. 'image/png'.
+        content (Union[bytes, None]): Thumbnail content.
+    """
+
+    def __init__(self, project_name, thumbnail_id, content, content_type):
+        self.project_name = project_name
+        self.thumbnail_id = thumbnail_id
+        self.content_type = content_type
+        self.content = content or b""
+
+    @property
+    def id(self):
+        """Wrapper for thumbnail id.
+
+        Returns:
+            Union[str, None]: Thumbnail id.
+        """
+
+        return self.thumbnail_id
+
+    @property
+    def is_valid(self):
+        """Content of thumbnail is valid.
+
+        Returns:
+            bool: Content is valid and can be used.
+        """
+        return (
+            self.thumbnail_id is not None
+            and self.content_type is not None
+        )
+
+
+def prepare_query_string(key_values):
+    """Prepare data to query string.
+
+    If there are any values, a query starting with '?' is returned, otherwise
+    an empty string.
+
+    Args:
+        key_values (dict[str, Any]): Query values.
+
+    Returns:
+        str: Query string.
+    """
+
+    if not key_values:
+        return ""
+    return "?{}".format(urlencode(key_values))
+
+
+def create_entity_id():
+    return uuid.uuid1().hex
+
+
+def convert_entity_id(entity_id):
+    if not entity_id:
+        return None
+
+    if isinstance(entity_id, uuid.UUID):
+        return entity_id.hex
+
+    try:
+        return uuid.UUID(entity_id).hex
+
+    except (TypeError, ValueError, AttributeError):
+        pass
+    return None
+
+
+def convert_or_create_entity_id(entity_id=None):
+    output = convert_entity_id(entity_id)
+    if output is None:
+        output = create_entity_id()
+    return output
+
+
+def entity_data_json_default(value):
+    if isinstance(value, datetime.datetime):
+        return int(value.timestamp())
+
+    raise TypeError(
+        "Object of type {} is not JSON serializable".format(str(type(value)))
+    )
+
+
+def slugify_string(
+    input_string,
+    separator="_",
+    slug_whitelist=SLUGIFY_WHITELIST,
+    split_chars=SLUGIFY_SEP_WHITELIST,
+    min_length=1,
+    lower=False,
+    make_set=False,
+):
+    """Slugify a text string.
+
+    This function transliterates the input string to ASCII, removes
+    special characters and joins the resulting elements using the
+    specified separator.
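+
+    Example:
+        A behavior sketch based on the default arguments; the input string
+        is an arbitrary example:
+
+        >>> slugify_string("Shot 010 / Layout")
+        'Shot_010_Layout'
+        >>> slugify_string("Shot 010 / Layout", lower=True)
+        'shot_010_layout'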
+
+    Args:
+        input_string (str): Input string to slugify
+        separator (str): A string used to separate returned elements
+            (default: "_")
+        slug_whitelist (str): Characters allowed in the output
+            (default: ascii letters, digits and the separator)
+        split_chars (str): Set of characters used for word splitting
+            (there is a sane default)
+        lower (bool): Convert to lower-case (default: False)
+        make_set (bool): Return "set" object instead of string.
+        min_length (int): Minimal length of an element (word).
+
+    Returns:
+        Union[str, Set[str]]: Based on 'make_set' value returns a slugified
+            string or a set of slugified words.
+    """
+
+    tmp_string = unidecode.unidecode(input_string)
+    if lower:
+        tmp_string = tmp_string.lower()
+
+    parts = [
+        # Remove all characters that are not in whitelist
+        re.sub("[^{}]".format(re.escape(slug_whitelist)), "", part)
+        # Split text into parts by split characters
+        for part in re.split("[{}]".format(re.escape(split_chars)), tmp_string)
+    ]
+    # Filter text parts by length
+    filtered_parts = [
+        part
+        for part in parts
+        if len(part) >= min_length
+    ]
+    if make_set:
+        return set(filtered_parts)
+    return separator.join(filtered_parts)
+
+
+def failed_json_default(value):
+    return "< Failed value {} > {}".format(type(value), str(value))
+
+
+def prepare_attribute_changes(old_entity, new_entity, replace=False):
+    attrib_changes = {}
+    new_attrib = new_entity.get("attrib")
+    old_attrib = old_entity.get("attrib")
+    if new_attrib is None:
+        if not replace:
+            return attrib_changes
+        new_attrib = {}
+
+    if old_attrib is None:
+        return new_attrib
+
+    for attr, new_attr_value in new_attrib.items():
+        old_attr_value = old_attrib.get(attr)
+        if old_attr_value != new_attr_value:
+            attrib_changes[attr] = new_attr_value
+
+    if replace:
+        for attr in old_attrib:
+            if attr not in new_attrib:
+                attrib_changes[attr] = REMOVED_VALUE
+
+    return attrib_changes
+
+
+def prepare_entity_changes(old_entity, new_entity, replace=False):
+    """Prepare changes of entities."""
+
+    changes = {}
+    for key, new_value in new_entity.items():
+        if key == "attrib":
+            continue
+
+        old_value = old_entity.get(key)
+        if old_value != new_value:
+            changes[key] = new_value
+
+    if replace:
+        for key in old_entity:
+            if key not in new_entity:
+                changes[key] = REMOVED_VALUE
+
+    attr_changes = prepare_attribute_changes(old_entity, new_entity, replace)
+    if attr_changes:
+        changes["attrib"] = attr_changes
+    return changes
+
+
+def _try_parse_url(url):
+    try:
+        return urlparse(url)
+    except BaseException:
+        return None
+
+
+def _try_connect_to_server(url):
+    try:
+        # TODO add validation that the url leads to an AYON server
+        # - this won't fail when the url leads to e.g. 'google.com'
+        requests.get(url)
+
+    except BaseException:
+        return False
+    return True
+
+
+def login_to_server(url, username, password):
+    """Use login to the server to receive token.
+
+    Args:
+        url (str): Server url.
+        username (str): User's username.
+        password (str): User's password.
+
+    Returns:
+        Union[str, None]: User's token if login was successful.
+            Otherwise 'None'.
+    """
+
+    headers = {"Content-Type": "application/json"}
+    response = requests.post(
+        "{}/api/auth/login".format(url),
+        headers=headers,
+        json={
+            "name": username,
+            "password": password
+        }
+    )
+    token = None
+    # 200 - success
+    # 401 - invalid credentials
+    # * - other issues
+    if response.status_code == 200:
+        token = response.json()["token"]
+    return token
+
+
+def logout_from_server(url, token):
+    """Logout from server and throw token away.
+
+    Args:
+        url (str): Url of the server from which to log out.
+        token (str): Token which should be used to log out.
+    """
+
+    headers = {
+        "Content-Type": "application/json",
+        "Authorization": "Bearer {}".format(token)
+    }
+    requests.post(
+        url + "/api/auth/logout",
+        headers=headers
+    )
+
+
+def is_token_valid(url, token):
+    """Check if token is valid.
+
+    Args:
+        url (str): Server url.
+        token (str): User's token.
+
+    Returns:
+        bool: True if token is valid.
+    """
+
+    headers = {
+        "Content-Type": "application/json",
+        "Authorization": "Bearer {}".format(token)
+    }
+    response = requests.get(
+        "{}/api/users/me".format(url),
+        headers=headers
+    )
+    return response.status_code == 200
+
+
+def validate_url(url):
+    """Validate url and check if the server is available.
+
+    Validation checks that the value can be parsed as a url and contains
+    a scheme.
+
+    The function tries to autofix the url, so it may return a modified url
+    when the connection to the server works only with the fix applied.
+
+    ```python
+    my_url = "my.server.url"
+    try:
+        # Store new url
+        validated_url = validate_url(my_url)
+
+    except UrlError:
+        # Handle invalid url
+        ...
+    ```
+
+    Args:
+        url (str): Server url.
+
+    Returns:
+        str: Url which was used to connect to server.
+
+    Raises:
+        UrlError: Error with short description and hints for user.
+    """
+
+    stripped_url = url.strip()
+    if not stripped_url:
+        raise UrlError(
+            "Invalid url format. Url is empty.",
+            title="Invalid url format",
+            hints=["url seems to be empty"]
+        )
+
+    # Not sure if this is good idea?
+    modified_url = stripped_url.rstrip("/")
+    parsed_url = _try_parse_url(modified_url)
+    universal_hints = [
+        "does the url work in browser?"
+    ]
+    if parsed_url is None:
+        raise UrlError(
+            "Invalid url format. Url cannot be parsed as url \"{}\".".format(
+                modified_url
+            ),
+            title="Invalid url format",
+            hints=universal_hints
+        )
+
+    # Try to add 'https://' scheme if it is missing
+    # - this will trigger UrlError if both attempts fail
+    if not parsed_url.scheme:
+        new_url = "https://" + modified_url
+        if _try_connect_to_server(new_url):
+            return new_url
+
+    if _try_connect_to_server(modified_url):
+        return modified_url
+
+    hints = []
+    if "/" in parsed_url.path or not parsed_url.scheme:
+        new_path = parsed_url.path.split("/")[0]
+        if not parsed_url.scheme:
+            new_path = "https://" + new_path
+
+        hints.append(
+            "did you mean \"{}\"?".format(parsed_url.scheme + new_path)
+        )
+
+    raise UrlError(
+        "Couldn't connect to server on \"{}\"".format(url),
+        title="Couldn't connect to server",
+        hints=hints + universal_hints
+    )
+
+
+class TransferProgress:
+    """Object to store progress of download/upload from/to server."""
+
+    def __init__(self):
+        self._started = False
+        self._transfer_done = False
+        self._transferred = 0
+        self._content_size = None
+
+        self._failed = False
+        self._fail_reason = None
+
+        self._source_url = "N/A"
+        self._destination_url = "N/A"
+
+    def get_content_size(self):
+        """Content size in bytes.
+
+        Returns:
+            Union[int, None]: Content size in bytes or None
+                if it is unknown.
+        """
+
+        return self._content_size
+
+    def set_content_size(self, content_size):
+        """Set content size in bytes.
+
+        Args:
+            content_size (int): Content size in bytes.
+
+        Raises:
+            ValueError: If content size was already set.
+        """
+
+        if self._content_size is not None:
+            raise ValueError("Content size was set more than once")
+        self._content_size = content_size
+
+    def get_started(self):
+        """Transfer was started.
+
+        Returns:
+            bool: True if transfer started.
+        """
+
+        return self._started
+
+    def set_started(self):
+        """Mark that transfer started.
+
+        Raises:
+            ValueError: If transfer was already started.
+        """
+
+        if self._started:
+            raise ValueError("Progress already started")
+        self._started = True
+
+    def get_transfer_done(self):
+        """Transfer finished.
+
+        Returns:
+            bool: Transfer finished.
+        """
+
+        return self._transfer_done
+
+    def set_transfer_done(self):
+        """Mark progress as transfer finished.
+
+        Raises:
+            ValueError: If progress was already marked as done
+                or wasn't started yet.
+        """
+
+        if self._transfer_done:
+            raise ValueError("Progress was already marked as done")
+        if not self._started:
+            raise ValueError("Progress didn't start yet")
+        self._transfer_done = True
+
+    def get_failed(self):
+        """Transfer failed.
+
+        Returns:
+            bool: True if transfer failed.
+        """
+
+        return self._failed
+
+    def get_fail_reason(self):
+        """Get reason why transfer failed.
+
+        Returns:
+            Union[str, None]: Reason why transfer
+                failed or None.
+        """
+
+        return self._fail_reason
+
+    def set_failed(self, reason):
+        """Mark progress as failed.
+
+        Args:
+            reason (str): Reason why transfer failed.
+        """
+
+        self._fail_reason = reason
+        self._failed = True
+
+    def get_transferred_size(self):
+        """Already transferred size in bytes.
+
+        Returns:
+            int: Already transferred size in bytes.
+        """
+
+        return self._transferred
+
+    def set_transferred_size(self, transferred):
+        """Set already transferred size in bytes.
+
+        Args:
+            transferred (int): Already transferred size in bytes.
+        """
+
+        self._transferred = transferred
+
+    def add_transferred_chunk(self, chunk_size):
+        """Add transferred chunk size in bytes.
+
+        Args:
+            chunk_size (int): Size of the transferred chunk in bytes.
+        """
+
+        self._transferred += chunk_size
+
+    def get_source_url(self):
+        """Source url from where transfer happens.
+
+        Note:
+            Consider this as title. Must be set using
+            'set_source_url' or 'N/A' will be returned.
+
+        Returns:
+            str: Source url from where transfer happens.
+        """
+
+        return self._source_url
+
+    def set_source_url(self, url):
+        """Set source url from where transfer happens.
+
+        Args:
+            url (str): Source url from where transfer happens.
+        """
+
+        self._source_url = url
+
+    def get_destination_url(self):
+        """Destination url where transfer happens.
+
+        Note:
+            Consider this as title. Must be set using
+            'set_destination_url' or 'N/A' will be returned.
+
+        Returns:
+            str: Destination url where transfer happens.
+        """
+
+        return self._destination_url
+
+    def set_destination_url(self, url):
+        """Set destination url where transfer happens.
+
+        Args:
+            url (str): Destination url where transfer happens.
+        """
+
+        self._destination_url = url
+
+    @property
+    def is_running(self):
+        """Check if transfer is running.
+
+        Returns:
+            bool: True if transfer is running.
+        """
+
+        if (
+            not self.started
+            or self.transfer_done
+            or self.failed
+        ):
+            return False
+        return True
+
+    @property
+    def transfer_progress(self):
+        """Get transfer progress in percent.
+
+        Returns:
+            Union[float, None]: Transfer progress in percent or 'None'
+                if content size is unknown.
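+
+        Example:
+            Values are illustrative; behavior follows the setters above.
+
+            >>> progress = TransferProgress()
+            >>> progress.set_content_size(200)
+            >>> progress.set_transferred_size(50)
+            >>> progress.transfer_progress
+            25.0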
+ """ + + if self._content_size is None: + return None + return (self._transferred * 100.0) / float(self._content_size) + + content_size = property(get_content_size, set_content_size) + started = property(get_started) + transfer_done = property(get_transfer_done) + failed = property(get_failed) + fail_reason = property(get_fail_reason) + source_url = property(get_source_url, set_source_url) + destination_url = property(get_destination_url, set_destination_url) + transferred_size = property(get_transferred_size, set_transferred_size) + + +def create_dependency_package_basename(platform_name=None): + """Create basename for dependency package file. + + Args: + platform_name (Optional[str]): Name of platform for which the + bundle is targeted. Default value is current platform. + + Returns: + str: Dependency package name with timestamp and platform. + """ + + if platform_name is None: + platform_name = platform.system().lower() + + now_date = datetime.datetime.now() + time_stamp = now_date.strftime("%y%m%d%H%M") + return "ayon_{}_{}".format(time_stamp, platform_name) diff --git a/openpype/vendor/python/common/ayon_api/version.py b/openpype/vendor/python/common/ayon_api/version.py new file mode 100644 index 0000000000..df841e0829 --- /dev/null +++ b/openpype/vendor/python/common/ayon_api/version.py @@ -0,0 +1,2 @@ +"""Package declaring Python API for Ayon server.""" +__version__ = "0.3.5" diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/__init__.py b/openpype/vendor/python/python_2/arrow/__init__.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/__init__.py rename to openpype/vendor/python/python_2/arrow/__init__.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/_version.py b/openpype/vendor/python/python_2/arrow/_version.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/_version.py rename to openpype/vendor/python/python_2/arrow/_version.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/api.py b/openpype/vendor/python/python_2/arrow/api.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/api.py rename to openpype/vendor/python/python_2/arrow/api.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/arrow.py b/openpype/vendor/python/python_2/arrow/arrow.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/arrow.py rename to openpype/vendor/python/python_2/arrow/arrow.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/constants.py b/openpype/vendor/python/python_2/arrow/constants.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/constants.py rename to openpype/vendor/python/python_2/arrow/constants.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/factory.py b/openpype/vendor/python/python_2/arrow/factory.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/factory.py rename to openpype/vendor/python/python_2/arrow/factory.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/formatter.py b/openpype/vendor/python/python_2/arrow/formatter.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/formatter.py rename to openpype/vendor/python/python_2/arrow/formatter.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/locales.py b/openpype/vendor/python/python_2/arrow/locales.py similarity index 100% rename from 
openpype/modules/ftrack/python2_vendor/arrow/arrow/locales.py rename to openpype/vendor/python/python_2/arrow/locales.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/parser.py b/openpype/vendor/python/python_2/arrow/parser.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/parser.py rename to openpype/vendor/python/python_2/arrow/parser.py diff --git a/openpype/modules/ftrack/python2_vendor/arrow/arrow/util.py b/openpype/vendor/python/python_2/arrow/util.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/arrow/util.py rename to openpype/vendor/python/python_2/arrow/util.py diff --git a/openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/__init__.py b/openpype/vendor/python/python_2/backports/__init__.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/__init__.py rename to openpype/vendor/python/python_2/backports/__init__.py diff --git a/openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/configparser/__init__.py b/openpype/vendor/python/python_2/backports/configparser/__init__.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/configparser/__init__.py rename to openpype/vendor/python/python_2/backports/configparser/__init__.py diff --git a/openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/configparser/helpers.py b/openpype/vendor/python/python_2/backports/configparser/helpers.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/configparser/helpers.py rename to openpype/vendor/python/python_2/backports/configparser/helpers.py diff --git a/openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/functools_lru_cache.py b/openpype/vendor/python/python_2/backports/functools_lru_cache.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/backports.functools_lru_cache/backports/functools_lru_cache.py rename to openpype/vendor/python/python_2/backports/functools_lru_cache.py diff --git a/openpype/modules/ftrack/python2_vendor/builtins/builtins/__init__.py b/openpype/vendor/python/python_2/builtins/__init__.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/builtins/builtins/__init__.py rename to openpype/vendor/python/python_2/builtins/__init__.py diff --git a/openpype/vendor/python/python_2/click/__init__.py b/openpype/vendor/python/python_2/click/__init__.py new file mode 100644 index 0000000000..2b6008f2dd --- /dev/null +++ b/openpype/vendor/python/python_2/click/__init__.py @@ -0,0 +1,79 @@ +""" +Click is a simple Python module inspired by the stdlib optparse to make +writing command line scripts fun. Unlike other modules, it's based +around a simple API that does not come with too much magic and is +composable. 
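+
+Example (a minimal illustrative script using the standard click API):
+
+    import click
+
+    @click.command()
+    @click.option("--count", default=1, help="Number of greetings.")
+    def hello(count):
+        """Greet the user COUNT times."""
+        for _ in range(count):
+            click.echo("Hello!")
+
+    if __name__ == "__main__":
+        hello()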
+""" +from .core import Argument +from .core import BaseCommand +from .core import Command +from .core import CommandCollection +from .core import Context +from .core import Group +from .core import MultiCommand +from .core import Option +from .core import Parameter +from .decorators import argument +from .decorators import command +from .decorators import confirmation_option +from .decorators import group +from .decorators import help_option +from .decorators import make_pass_decorator +from .decorators import option +from .decorators import pass_context +from .decorators import pass_obj +from .decorators import password_option +from .decorators import version_option +from .exceptions import Abort +from .exceptions import BadArgumentUsage +from .exceptions import BadOptionUsage +from .exceptions import BadParameter +from .exceptions import ClickException +from .exceptions import FileError +from .exceptions import MissingParameter +from .exceptions import NoSuchOption +from .exceptions import UsageError +from .formatting import HelpFormatter +from .formatting import wrap_text +from .globals import get_current_context +from .parser import OptionParser +from .termui import clear +from .termui import confirm +from .termui import echo_via_pager +from .termui import edit +from .termui import get_terminal_size +from .termui import getchar +from .termui import launch +from .termui import pause +from .termui import progressbar +from .termui import prompt +from .termui import secho +from .termui import style +from .termui import unstyle +from .types import BOOL +from .types import Choice +from .types import DateTime +from .types import File +from .types import FLOAT +from .types import FloatRange +from .types import INT +from .types import IntRange +from .types import ParamType +from .types import Path +from .types import STRING +from .types import Tuple +from .types import UNPROCESSED +from .types import UUID +from .utils import echo +from .utils import format_filename +from .utils import get_app_dir +from .utils import get_binary_stream +from .utils import get_os_args +from .utils import get_text_stream +from .utils import open_file + +# Controls if click should emit the warning about the use of unicode +# literals. +disable_unicode_literals_warning = False + +__version__ = "7.1.2" diff --git a/openpype/vendor/python/python_2/click/_bashcomplete.py b/openpype/vendor/python/python_2/click/_bashcomplete.py new file mode 100644 index 0000000000..8bca24480f --- /dev/null +++ b/openpype/vendor/python/python_2/click/_bashcomplete.py @@ -0,0 +1,375 @@ +import copy +import os +import re + +from .core import Argument +from .core import MultiCommand +from .core import Option +from .parser import split_arg_string +from .types import Choice +from .utils import echo + +try: + from collections import abc +except ImportError: + import collections as abc + +WORDBREAK = "=" + +# Note, only BASH version 4.4 and later have the nosort option. +COMPLETION_SCRIPT_BASH = """ +%(complete_func)s() { + local IFS=$'\n' + COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\ + COMP_CWORD=$COMP_CWORD \\ + %(autocomplete_var)s=complete $1 ) ) + return 0 +} + +%(complete_func)setup() { + local COMPLETION_OPTIONS="" + local BASH_VERSION_ARR=(${BASH_VERSION//./ }) + # Only BASH version 4.4 and later have the nosort option. 
+    if [ ${BASH_VERSION_ARR[0]} -gt 4 ] || ([ ${BASH_VERSION_ARR[0]} -eq 4 ] \
+&& [ ${BASH_VERSION_ARR[1]} -ge 4 ]); then
+        COMPLETION_OPTIONS="-o nosort"
+    fi
+
+    complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s
+}
+
+%(complete_func)setup
+"""
+
+COMPLETION_SCRIPT_ZSH = """
+#compdef %(script_names)s
+
+%(complete_func)s() {
+    local -a completions
+    local -a completions_with_descriptions
+    local -a response
+    (( ! $+commands[%(script_names)s] )) && return 1
+
+    response=("${(@f)$( env COMP_WORDS=\"${words[*]}\" \\
+                        COMP_CWORD=$((CURRENT-1)) \\
+                        %(autocomplete_var)s=\"complete_zsh\" \\
+                        %(script_names)s )}")
+
+    for key descr in ${(kv)response}; do
+      if [[ "$descr" == "_" ]]; then
+          completions+=("$key")
+      else
+          completions_with_descriptions+=("$key":"$descr")
+      fi
+    done
+
+    if [ -n "$completions_with_descriptions" ]; then
+        _describe -V unsorted completions_with_descriptions -U
+    fi
+
+    if [ -n "$completions" ]; then
+        compadd -U -V unsorted -a completions
+    fi
+    compstate[insert]="automenu"
+}
+
+compdef %(complete_func)s %(script_names)s
+"""
+
+COMPLETION_SCRIPT_FISH = (
+    "complete --no-files --command %(script_names)s --arguments"
+    ' "(env %(autocomplete_var)s=complete_fish'
+    " COMP_WORDS=(commandline -cp) COMP_CWORD=(commandline -t)"
+    ' %(script_names)s)"'
+)
+
+_completion_scripts = {
+    "bash": COMPLETION_SCRIPT_BASH,
+    "zsh": COMPLETION_SCRIPT_ZSH,
+    "fish": COMPLETION_SCRIPT_FISH,
+}
+
+_invalid_ident_char_re = re.compile(r"[^a-zA-Z0-9_]")
+
+
+def get_completion_script(prog_name, complete_var, shell):
+    cf_name = _invalid_ident_char_re.sub("", prog_name.replace("-", "_"))
+    script = _completion_scripts.get(shell, COMPLETION_SCRIPT_BASH)
+    return (
+        script
+        % {
+            "complete_func": "_{}_completion".format(cf_name),
+            "script_names": prog_name,
+            "autocomplete_var": complete_var,
+        }
+    ).strip() + ";"
+
+
+def resolve_ctx(cli, prog_name, args):
+    """Parse into a hierarchy of contexts. Contexts are connected
+    through the parent variable.
+
+    :param cli: command definition
+    :param prog_name: the program that is running
+    :param args: full list of args
+    :return: the final context/command parsed
+    """
+    ctx = cli.make_context(prog_name, args, resilient_parsing=True)
+    args = ctx.protected_args + ctx.args
+    while args:
+        if isinstance(ctx.command, MultiCommand):
+            if not ctx.command.chain:
+                cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)
+                if cmd is None:
+                    return ctx
+                ctx = cmd.make_context(
+                    cmd_name, args, parent=ctx, resilient_parsing=True
+                )
+                args = ctx.protected_args + ctx.args
+            else:
+                # Walk chained subcommand contexts saving the last one.
+                while args:
+                    cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)
+                    if cmd is None:
+                        return ctx
+                    sub_ctx = cmd.make_context(
+                        cmd_name,
+                        args,
+                        parent=ctx,
+                        allow_extra_args=True,
+                        allow_interspersed_args=False,
+                        resilient_parsing=True,
+                    )
+                    args = sub_ctx.args
+                ctx = sub_ctx
+                args = sub_ctx.protected_args + sub_ctx.args
+        else:
+            break
+    return ctx
+
+
+def start_of_option(param_str):
+    """
+    :param param_str: param_str to check
+    :return: whether or not this is the start of an option declaration
+        (i.e. starts with "-" or "--")
+    """
+    return param_str and param_str[:1] == "-"
+
+
+def is_incomplete_option(all_args, cmd_param):
+    """
+    :param all_args: the full original list of args supplied
+    :param cmd_param: the current command parameter
+    :return: whether or not the last option declaration (i.e. starts
+        with "-" or "--") is incomplete and corresponds to this cmd_param.
+        In other words, whether this cmd_param option can still accept
+        values.
+    """
+    if not isinstance(cmd_param, Option):
+        return False
+    if cmd_param.is_flag:
+        return False
+    last_option = None
+    for index, arg_str in enumerate(
+        reversed([arg for arg in all_args if arg != WORDBREAK])
+    ):
+        if index + 1 > cmd_param.nargs:
+            break
+        if start_of_option(arg_str):
+            last_option = arg_str
+
+    return True if last_option and last_option in cmd_param.opts else False
+
+
+def is_incomplete_argument(current_params, cmd_param):
+    """
+    :param current_params: the current params and values for this
+        argument as already entered
+    :param cmd_param: the current command parameter
+    :return: whether or not the last argument is incomplete and
+        corresponds to this cmd_param. In other words, whether this
+        cmd_param argument can still accept values
+    """
+    if not isinstance(cmd_param, Argument):
+        return False
+    current_param_values = current_params[cmd_param.name]
+    if current_param_values is None:
+        return True
+    if cmd_param.nargs == -1:
+        return True
+    if (
+        isinstance(current_param_values, abc.Iterable)
+        and cmd_param.nargs > 1
+        and len(current_param_values) < cmd_param.nargs
+    ):
+        return True
+    return False
+
+
+def get_user_autocompletions(ctx, args, incomplete, cmd_param):
+    """
+    :param ctx: context associated with the parsed command
+    :param args: full list of args
+    :param incomplete: the incomplete text to autocomplete
+    :param cmd_param: command definition
+    :return: all the possible user-specified completions for the param
+    """
+    results = []
+    if isinstance(cmd_param.type, Choice):
+        # Choices don't support descriptions.
+        results = [
+            (c, None) for c in cmd_param.type.choices if str(c).startswith(incomplete)
+        ]
+    elif cmd_param.autocompletion is not None:
+        dynamic_completions = cmd_param.autocompletion(
+            ctx=ctx, args=args, incomplete=incomplete
+        )
+        results = [
+            c if isinstance(c, tuple) else (c, None) for c in dynamic_completions
+        ]
+    return results
+
+
+def get_visible_commands_starting_with(ctx, starts_with):
+    """
+    :param ctx: context associated with the parsed command
+    :param starts_with: string that visible commands must start with.
+    :return: all visible (not hidden) commands that start with starts_with.
+    """
+    for c in ctx.command.list_commands(ctx):
+        if c.startswith(starts_with):
+            command = ctx.command.get_command(ctx, c)
+            if not command.hidden:
+                yield command
+
+
+def add_subcommand_completions(ctx, incomplete, completions_out):
+    # Add subcommand completions.
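+    # (Illustrative note: each completion is appended as a
+    # (name, short_help) tuple so shells that support descriptions,
+    # such as zsh, can display them.)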
+ if isinstance(ctx.command, MultiCommand): + completions_out.extend( + [ + (c.name, c.get_short_help_str()) + for c in get_visible_commands_starting_with(ctx, incomplete) + ] + ) + + # Walk up the context list and add any other completion + # possibilities from chained commands + while ctx.parent is not None: + ctx = ctx.parent + if isinstance(ctx.command, MultiCommand) and ctx.command.chain: + remaining_commands = [ + c + for c in get_visible_commands_starting_with(ctx, incomplete) + if c.name not in ctx.protected_args + ] + completions_out.extend( + [(c.name, c.get_short_help_str()) for c in remaining_commands] + ) + + +def get_choices(cli, prog_name, args, incomplete): + """ + :param cli: command definition + :param prog_name: the program that is running + :param args: full list of args + :param incomplete: the incomplete text to autocomplete + :return: all the possible completions for the incomplete + """ + all_args = copy.deepcopy(args) + + ctx = resolve_ctx(cli, prog_name, args) + if ctx is None: + return [] + + has_double_dash = "--" in all_args + + # In newer versions of bash long opts with '='s are partitioned, but + # it's easier to parse without the '=' + if start_of_option(incomplete) and WORDBREAK in incomplete: + partition_incomplete = incomplete.partition(WORDBREAK) + all_args.append(partition_incomplete[0]) + incomplete = partition_incomplete[2] + elif incomplete == WORDBREAK: + incomplete = "" + + completions = [] + if not has_double_dash and start_of_option(incomplete): + # completions for partial options + for param in ctx.command.params: + if isinstance(param, Option) and not param.hidden: + param_opts = [ + param_opt + for param_opt in param.opts + param.secondary_opts + if param_opt not in all_args or param.multiple + ] + completions.extend( + [(o, param.help) for o in param_opts if o.startswith(incomplete)] + ) + return completions + # completion for option values from user supplied values + for param in ctx.command.params: + if is_incomplete_option(all_args, param): + return get_user_autocompletions(ctx, all_args, incomplete, param) + # completion for argument values from user supplied values + for param in ctx.command.params: + if is_incomplete_argument(ctx.params, param): + return get_user_autocompletions(ctx, all_args, incomplete, param) + + add_subcommand_completions(ctx, incomplete, completions) + # Sort before returning so that proper ordering can be enforced in custom types. + return sorted(completions) + + +def do_complete(cli, prog_name, include_descriptions): + cwords = split_arg_string(os.environ["COMP_WORDS"]) + cword = int(os.environ["COMP_CWORD"]) + args = cwords[1:cword] + try: + incomplete = cwords[cword] + except IndexError: + incomplete = "" + + for item in get_choices(cli, prog_name, args, incomplete): + echo(item[0]) + if include_descriptions: + # ZSH has trouble dealing with empty array parameters when + # returned from commands, use '_' to indicate no description + # is present. 
+            echo(item[1] if item[1] else "_")
+
+    return True
+
+
+def do_complete_fish(cli, prog_name):
+    cwords = split_arg_string(os.environ["COMP_WORDS"])
+    incomplete = os.environ["COMP_CWORD"]
+    args = cwords[1:]
+
+    for item in get_choices(cli, prog_name, args, incomplete):
+        if item[1]:
+            echo("{arg}\t{desc}".format(arg=item[0], desc=item[1]))
+        else:
+            echo(item[0])
+
+    return True
+
+
+def bashcomplete(cli, prog_name, complete_var, complete_instr):
+    if "_" in complete_instr:
+        command, shell = complete_instr.split("_", 1)
+    else:
+        command = complete_instr
+        shell = "bash"
+
+    if command == "source":
+        echo(get_completion_script(prog_name, complete_var, shell))
+        return True
+    elif command == "complete":
+        if shell == "fish":
+            return do_complete_fish(cli, prog_name)
+        elif shell in {"bash", "zsh"}:
+            return do_complete(cli, prog_name, shell == "zsh")
+
+    return False
diff --git a/openpype/vendor/python/python_2/click/_compat.py b/openpype/vendor/python/python_2/click/_compat.py
new file mode 100644
index 0000000000..60cb115bc5
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/_compat.py
@@ -0,0 +1,786 @@
+# flake8: noqa
+import codecs
+import io
+import os
+import re
+import sys
+from weakref import WeakKeyDictionary
+
+PY2 = sys.version_info[0] == 2
+CYGWIN = sys.platform.startswith("cygwin")
+MSYS2 = sys.platform.startswith("win") and ("GCC" in sys.version)
+# Determine local App Engine environment, per Google's own suggestion
+APP_ENGINE = "APPENGINE_RUNTIME" in os.environ and "Development/" in os.environ.get(
+    "SERVER_SOFTWARE", ""
+)
+WIN = sys.platform.startswith("win") and not APP_ENGINE and not MSYS2
+DEFAULT_COLUMNS = 80
+
+
+_ansi_re = re.compile(r"\033\[[;?0-9]*[a-zA-Z]")
+
+
+def get_filesystem_encoding():
+    return sys.getfilesystemencoding() or sys.getdefaultencoding()
+
+
+def _make_text_stream(
+    stream, encoding, errors, force_readable=False, force_writable=False
+):
+    if encoding is None:
+        encoding = get_best_encoding(stream)
+    if errors is None:
+        errors = "replace"
+    return _NonClosingTextIOWrapper(
+        stream,
+        encoding,
+        errors,
+        line_buffering=True,
+        force_readable=force_readable,
+        force_writable=force_writable,
+    )
+
+
+def is_ascii_encoding(encoding):
+    """Checks if a given encoding is ascii."""
+    try:
+        return codecs.lookup(encoding).name == "ascii"
+    except LookupError:
+        return False
+
+
+def get_best_encoding(stream):
+    """Returns the stream's encoding, falling back to the default
+    encoding; ASCII is upgraded to UTF-8."""
+    rv = getattr(stream, "encoding", None) or sys.getdefaultencoding()
+    if is_ascii_encoding(rv):
+        return "utf-8"
+    return rv
+
+
+class _NonClosingTextIOWrapper(io.TextIOWrapper):
+    def __init__(
+        self,
+        stream,
+        encoding,
+        errors,
+        force_readable=False,
+        force_writable=False,
+        **extra
+    ):
+        self._stream = stream = _FixupStream(stream, force_readable, force_writable)
+        io.TextIOWrapper.__init__(self, stream, encoding, errors, **extra)
+
+    # The io module is a place where the Python 3 text behavior
+    # was forced upon Python 2, so we need to unbreak
+    # it to look like Python 2.
+    if PY2:
+
+        def write(self, x):
+            if isinstance(x, str) or is_bytes(x):
+                try:
+                    self.flush()
+                except Exception:
+                    pass
+                return self.buffer.write(str(x))
+            return io.TextIOWrapper.write(self, x)
+
+        def writelines(self, lines):
+            for line in lines:
+                self.write(line)
+
+    def __del__(self):
+        try:
+            self.detach()
+        except Exception:
+            pass
+
+    def isatty(self):
+        # https://bitbucket.org/pypy/pypy/issue/1803
+        return self._stream.isatty()
+
+
+class _FixupStream(object):
+    """The new io interface needs more from streams than streams
+    traditionally implement. As such, this fix-up code is necessary in
+    some circumstances.
+
+    The forcing of the readable and writable flags is there because some
+    tools put badly patched objects on sys (one such offender is certain
+    versions of jupyter notebook).
+    """
+
+    def __init__(self, stream, force_readable=False, force_writable=False):
+        self._stream = stream
+        self._force_readable = force_readable
+        self._force_writable = force_writable
+
+    def __getattr__(self, name):
+        return getattr(self._stream, name)
+
+    def read1(self, size):
+        f = getattr(self._stream, "read1", None)
+        if f is not None:
+            return f(size)
+        # We only dispatch to readline instead of read in Python 2 as we
+        # do not want to cause problems with the different implementation
+        # of line buffering.
+        if PY2:
+            return self._stream.readline(size)
+        return self._stream.read(size)
+
+    def readable(self):
+        if self._force_readable:
+            return True
+        x = getattr(self._stream, "readable", None)
+        if x is not None:
+            return x()
+        try:
+            self._stream.read(0)
+        except Exception:
+            return False
+        return True
+
+    def writable(self):
+        if self._force_writable:
+            return True
+        x = getattr(self._stream, "writable", None)
+        if x is not None:
+            return x()
+        try:
+            self._stream.write("")
+        except Exception:
+            try:
+                self._stream.write(b"")
+            except Exception:
+                return False
+        return True
+
+    def seekable(self):
+        x = getattr(self._stream, "seekable", None)
+        if x is not None:
+            return x()
+        try:
+            self._stream.seek(self._stream.tell())
+        except Exception:
+            return False
+        return True
+
+
+if PY2:
+    text_type = unicode
+    raw_input = raw_input
+    string_types = (str, unicode)
+    int_types = (int, long)
+    iteritems = lambda x: x.iteritems()
+    range_type = xrange
+
+    def is_bytes(x):
+        return isinstance(x, (buffer, bytearray))
+
+    _identifier_re = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")
+
+    # For Windows, we need to force stdout/stdin/stderr to binary if it's
+    # fetched for that. This obviously is not the most correct way to do
+    # it as it changes global state. Unfortunately, there does not seem to
+    # be a clear better way to do it as just reopening the file in binary
+    # mode does not change anything.
+    #
+    # An option would be to do what Python 3 does and to open the file as
+    # binary only, patch it back to the system, and then use a wrapper
+    # stream that converts newlines. It's not quite clear what's the
+    # correct option here.
+    #
+    # This code also lives in _winconsole for the fallback to the console
+    # emulation stream.
+    #
+    # There are also Windows environments where the `msvcrt` module is not
+    # available (which is why we use try-catch instead of the WIN variable
+    # here), such as the Google App Engine development server on Windows. In
+    # those cases there is just nothing we can do.
+    def set_binary_mode(f):
+        return f
+
+    try:
+        import msvcrt
+    except ImportError:
+        pass
+    else:
+
+        def set_binary_mode(f):
+            try:
+                fileno = f.fileno()
+            except Exception:
+                pass
+            else:
+                msvcrt.setmode(fileno, os.O_BINARY)
+            return f
+
+    try:
+        import fcntl
+    except ImportError:
+        pass
+    else:
+
+        def set_binary_mode(f):
+            try:
+                fileno = f.fileno()
+            except Exception:
+                pass
+            else:
+                flags = fcntl.fcntl(fileno, fcntl.F_GETFL)
+                fcntl.fcntl(fileno, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)
+            return f
+
+    def isidentifier(x):
+        return _identifier_re.search(x) is not None
+
+    def get_binary_stdin():
+        return set_binary_mode(sys.stdin)
+
+    def get_binary_stdout():
+        _wrap_std_stream("stdout")
+        return set_binary_mode(sys.stdout)
+
+    def get_binary_stderr():
+        _wrap_std_stream("stderr")
+        return set_binary_mode(sys.stderr)
+
+    def get_text_stdin(encoding=None, errors=None):
+        rv = _get_windows_console_stream(sys.stdin, encoding, errors)
+        if rv is not None:
+            return rv
+        return _make_text_stream(sys.stdin, encoding, errors, force_readable=True)
+
+    def get_text_stdout(encoding=None, errors=None):
+        _wrap_std_stream("stdout")
+        rv = _get_windows_console_stream(sys.stdout, encoding, errors)
+        if rv is not None:
+            return rv
+        return _make_text_stream(sys.stdout, encoding, errors, force_writable=True)
+
+    def get_text_stderr(encoding=None, errors=None):
+        _wrap_std_stream("stderr")
+        rv = _get_windows_console_stream(sys.stderr, encoding, errors)
+        if rv is not None:
+            return rv
+        return _make_text_stream(sys.stderr, encoding, errors, force_writable=True)
+
+    def filename_to_ui(value):
+        if isinstance(value, bytes):
+            value = value.decode(get_filesystem_encoding(), "replace")
+        return value
+
+
+else:
+    import io
+
+    text_type = str
+    raw_input = input
+    string_types = (str,)
+    int_types = (int,)
+    range_type = range
+    isidentifier = lambda x: x.isidentifier()
+    iteritems = lambda x: iter(x.items())
+
+    def is_bytes(x):
+        return isinstance(x, (bytes, memoryview, bytearray))
+
+    def _is_binary_reader(stream, default=False):
+        try:
+            return isinstance(stream.read(0), bytes)
+        except Exception:
+            # This happens in some cases where the stream was already
+            # closed. In this case, we assume the default.
+            return default
+
+    def _is_binary_writer(stream, default=False):
+        try:
+            stream.write(b"")
+        except Exception:
+            try:
+                stream.write("")
+                return False
+            except Exception:
+                pass
+            return default
+        return True
+
+    def _find_binary_reader(stream):
+        # We need to figure out if the given stream is already binary.
+        # This can happen because the official docs recommend detaching
+        # the streams to get binary streams. Some code might do this, so
+        # we need to deal with this case explicitly.
+        if _is_binary_reader(stream, False):
+            return stream
+
+        buf = getattr(stream, "buffer", None)
+
+        # Same situation here; this time we assume that the buffer is
+        # actually binary in case it's closed.
+        if buf is not None and _is_binary_reader(buf, True):
+            return buf
+
+    def _find_binary_writer(stream):
+        # We need to figure out if the given stream is already binary.
+        # This can happen because the official docs recommend detaching
+        # the streams to get binary streams. Some code might do this, so
+        # we need to deal with this case explicitly.
+        if _is_binary_writer(stream, False):
+            return stream
+
+        buf = getattr(stream, "buffer", None)
+
+        # Same situation here; this time we assume that the buffer is
+        # actually binary in case it's closed.
+ if buf is not None and _is_binary_writer(buf, True): + return buf + + def _stream_is_misconfigured(stream): + """A stream is misconfigured if its encoding is ASCII.""" + # If the stream does not have an encoding set, we assume it's set + # to ASCII. This appears to happen in certain unittest + # environments. It's not quite clear what the correct behavior is + # but this at least will force Click to recover somehow. + return is_ascii_encoding(getattr(stream, "encoding", None) or "ascii") + + def _is_compat_stream_attr(stream, attr, value): + """A stream attribute is compatible if it is equal to the + desired value or the desired value is unset and the attribute + has a value. + """ + stream_value = getattr(stream, attr, None) + return stream_value == value or (value is None and stream_value is not None) + + def _is_compatible_text_stream(stream, encoding, errors): + """Check if a stream's encoding and errors attributes are + compatible with the desired values. + """ + return _is_compat_stream_attr( + stream, "encoding", encoding + ) and _is_compat_stream_attr(stream, "errors", errors) + + def _force_correct_text_stream( + text_stream, + encoding, + errors, + is_binary, + find_binary, + force_readable=False, + force_writable=False, + ): + if is_binary(text_stream, False): + binary_reader = text_stream + else: + # If the stream looks compatible, and won't default to a + # misconfigured ascii encoding, return it as-is. + if _is_compatible_text_stream(text_stream, encoding, errors) and not ( + encoding is None and _stream_is_misconfigured(text_stream) + ): + return text_stream + + # Otherwise, get the underlying binary reader. + binary_reader = find_binary(text_stream) + + # If that's not possible, silently use the original reader + # and get mojibake instead of exceptions. + if binary_reader is None: + return text_stream + + # Default errors to replace instead of strict in order to get + # something that works. + if errors is None: + errors = "replace" + + # Wrap the binary stream in a text stream with the correct + # encoding parameters. + return _make_text_stream( + binary_reader, + encoding, + errors, + force_readable=force_readable, + force_writable=force_writable, + ) + + def _force_correct_text_reader(text_reader, encoding, errors, force_readable=False): + return _force_correct_text_stream( + text_reader, + encoding, + errors, + _is_binary_reader, + _find_binary_reader, + force_readable=force_readable, + ) + + def _force_correct_text_writer(text_writer, encoding, errors, force_writable=False): + return _force_correct_text_stream( + text_writer, + encoding, + errors, + _is_binary_writer, + _find_binary_writer, + force_writable=force_writable, + ) + + def get_binary_stdin(): + reader = _find_binary_reader(sys.stdin) + if reader is None: + raise RuntimeError("Was not able to determine binary stream for sys.stdin.") + return reader + + def get_binary_stdout(): + writer = _find_binary_writer(sys.stdout) + if writer is None: + raise RuntimeError( + "Was not able to determine binary stream for sys.stdout." + ) + return writer + + def get_binary_stderr(): + writer = _find_binary_writer(sys.stderr) + if writer is None: + raise RuntimeError( + "Was not able to determine binary stream for sys.stderr." 
+ ) + return writer + + def get_text_stdin(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stdin, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_reader( + sys.stdin, encoding, errors, force_readable=True + ) + + def get_text_stdout(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stdout, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer( + sys.stdout, encoding, errors, force_writable=True + ) + + def get_text_stderr(encoding=None, errors=None): + rv = _get_windows_console_stream(sys.stderr, encoding, errors) + if rv is not None: + return rv + return _force_correct_text_writer( + sys.stderr, encoding, errors, force_writable=True + ) + + def filename_to_ui(value): + if isinstance(value, bytes): + value = value.decode(get_filesystem_encoding(), "replace") + else: + value = value.encode("utf-8", "surrogateescape").decode("utf-8", "replace") + return value + + +def get_streerror(e, default=None): + if hasattr(e, "strerror"): + msg = e.strerror + else: + if default is not None: + msg = default + else: + msg = str(e) + if isinstance(msg, bytes): + msg = msg.decode("utf-8", "replace") + return msg + + +def _wrap_io_open(file, mode, encoding, errors): + """On Python 2, :func:`io.open` returns a text file wrapper that + requires passing ``unicode`` to ``write``. Need to open the file in + binary mode then wrap it in a subclass that can write ``str`` and + ``unicode``. + + Also handles not passing ``encoding`` and ``errors`` in binary mode. + """ + binary = "b" in mode + + if binary: + kwargs = {} + else: + kwargs = {"encoding": encoding, "errors": errors} + + if not PY2 or binary: + return io.open(file, mode, **kwargs) + + f = io.open(file, "{}b".format(mode.replace("t", ""))) + return _make_text_stream(f, **kwargs) + + +def open_stream(filename, mode="r", encoding=None, errors="strict", atomic=False): + binary = "b" in mode + + # Standard streams first. These are simple because they don't need + # special handling for the atomic flag. It's entirely ignored. + if filename == "-": + if any(m in mode for m in ["w", "a", "x"]): + if binary: + return get_binary_stdout(), False + return get_text_stdout(encoding=encoding, errors=errors), False + if binary: + return get_binary_stdin(), False + return get_text_stdin(encoding=encoding, errors=errors), False + + # Non-atomic writes directly go out through the regular open functions. + if not atomic: + return _wrap_io_open(filename, mode, encoding, errors), True + + # Some usability stuff for atomic writes + if "a" in mode: + raise ValueError( + "Appending to an existing file is not supported, because that" + " would involve an expensive `copy`-operation to a temporary" + " file. Open the file in normal `w`-mode and copy explicitly" + " if that's what you're after." + ) + if "x" in mode: + raise ValueError("Use the `overwrite`-parameter instead.") + if "w" not in mode: + raise ValueError("Atomic writes only make sense with `w`-mode.") + + # Atomic writes are more complicated. They work by opening a file + # as a proxy in the same folder and then using the fdopen + # functionality to wrap it in a Python file. Then we wrap it in an + # atomic file that moves the file over on close. 
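+    #
+    # An illustrative sketch of what that means for callers:
+    # open_stream("out.txt", "w", atomic=True) returns (file_obj, True);
+    # writes go to a hidden ".__atomic-write..." file in the same
+    # directory, which is renamed over "out.txt" when the file is closed.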
+    import errno
+    import random
+
+    try:
+        perm = os.stat(filename).st_mode
+    except OSError:
+        perm = None
+
+    flags = os.O_RDWR | os.O_CREAT | os.O_EXCL
+
+    if binary:
+        flags |= getattr(os, "O_BINARY", 0)
+
+    while True:
+        tmp_filename = os.path.join(
+            os.path.dirname(filename),
+            ".__atomic-write{:08x}".format(random.randrange(1 << 32)),
+        )
+        try:
+            fd = os.open(tmp_filename, flags, 0o666 if perm is None else perm)
+            break
+        except OSError as e:
+            if e.errno == errno.EEXIST or (
+                os.name == "nt"
+                and e.errno == errno.EACCES
+                and os.path.isdir(e.filename)
+                and os.access(e.filename, os.W_OK)
+            ):
+                continue
+            raise
+
+    if perm is not None:
+        os.chmod(tmp_filename, perm)  # in case perm includes bits in umask
+
+    f = _wrap_io_open(fd, mode, encoding, errors)
+    return _AtomicFile(f, tmp_filename, os.path.realpath(filename)), True
+
+
+# Used in a destructor call, needs extra protection from interpreter cleanup.
+if hasattr(os, "replace"):
+    _replace = os.replace
+    _can_replace = True
+else:
+    _replace = os.rename
+    _can_replace = not WIN
+
+
+class _AtomicFile(object):
+    def __init__(self, f, tmp_filename, real_filename):
+        self._f = f
+        self._tmp_filename = tmp_filename
+        self._real_filename = real_filename
+        self.closed = False
+
+    @property
+    def name(self):
+        return self._real_filename
+
+    def close(self, delete=False):
+        if self.closed:
+            return
+        self._f.close()
+        if not _can_replace:
+            try:
+                os.remove(self._real_filename)
+            except OSError:
+                pass
+        _replace(self._tmp_filename, self._real_filename)
+        self.closed = True
+
+    def __getattr__(self, name):
+        return getattr(self._f, name)
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, exc_type, exc_value, tb):
+        self.close(delete=exc_type is not None)
+
+    def __repr__(self):
+        return repr(self._f)
+
+
+auto_wrap_for_ansi = None
+colorama = None
+get_winterm_size = None
+
+
+def strip_ansi(value):
+    return _ansi_re.sub("", value)
+
+
+def _is_jupyter_kernel_output(stream):
+    if WIN:
+        # TODO: Couldn't test on Windows, shouldn't try to support until
+        # someone tests the details wrt colorama.
+        return
+
+    while isinstance(stream, (_FixupStream, _NonClosingTextIOWrapper)):
+        stream = stream._stream
+
+    return stream.__class__.__module__.startswith("ipykernel.")
+
+
+def should_strip_ansi(stream=None, color=None):
+    if color is None:
+        if stream is None:
+            stream = sys.stdin
+        return not isatty(stream) and not _is_jupyter_kernel_output(stream)
+    return not color
+
+
+# If we're on Windows, we provide transparent integration through
+# colorama. This will make ANSI colors through the echo function
+# work automatically.
+if WIN:
+    # Windows has a smaller terminal
+    DEFAULT_COLUMNS = 79
+
+    from ._winconsole import _get_windows_console_stream, _wrap_std_stream
+
+    def _get_argv_encoding():
+        import locale
+
+        return locale.getpreferredencoding()
+
+    if PY2:
+
+        def raw_input(prompt=""):
+            sys.stderr.flush()
+            if prompt:
+                stdout = _default_text_stdout()
+                stdout.write(prompt)
+            stdin = _default_text_stdin()
+            return stdin.readline().rstrip("\r\n")
+
+    try:
+        import colorama
+    except ImportError:
+        pass
+    else:
+        _ansi_stream_wrappers = WeakKeyDictionary()
+
+        def auto_wrap_for_ansi(stream, color=None):
+            """This function wraps a stream so that calls through colorama
+            are issued to the win32 console API to recolor on demand. It
+            also ensures that the colors are reset if a write call is
+            interrupted, so the console is not left broken afterwards.
+ """ + try: + cached = _ansi_stream_wrappers.get(stream) + except Exception: + cached = None + if cached is not None: + return cached + strip = should_strip_ansi(stream, color) + ansi_wrapper = colorama.AnsiToWin32(stream, strip=strip) + rv = ansi_wrapper.stream + _write = rv.write + + def _safe_write(s): + try: + return _write(s) + except: + ansi_wrapper.reset_all() + raise + + rv.write = _safe_write + try: + _ansi_stream_wrappers[stream] = rv + except Exception: + pass + return rv + + def get_winterm_size(): + win = colorama.win32.GetConsoleScreenBufferInfo( + colorama.win32.STDOUT + ).srWindow + return win.Right - win.Left, win.Bottom - win.Top + + +else: + + def _get_argv_encoding(): + return getattr(sys.stdin, "encoding", None) or get_filesystem_encoding() + + _get_windows_console_stream = lambda *x: None + _wrap_std_stream = lambda *x: None + + +def term_len(x): + return len(strip_ansi(x)) + + +def isatty(stream): + try: + return stream.isatty() + except Exception: + return False + + +def _make_cached_stream_func(src_func, wrapper_func): + cache = WeakKeyDictionary() + + def func(): + stream = src_func() + try: + rv = cache.get(stream) + except Exception: + rv = None + if rv is not None: + return rv + rv = wrapper_func() + try: + stream = src_func() # In case wrapper_func() modified the stream + cache[stream] = rv + except Exception: + pass + return rv + + return func + + +_default_text_stdin = _make_cached_stream_func(lambda: sys.stdin, get_text_stdin) +_default_text_stdout = _make_cached_stream_func(lambda: sys.stdout, get_text_stdout) +_default_text_stderr = _make_cached_stream_func(lambda: sys.stderr, get_text_stderr) + + +binary_streams = { + "stdin": get_binary_stdin, + "stdout": get_binary_stdout, + "stderr": get_binary_stderr, +} + +text_streams = { + "stdin": get_text_stdin, + "stdout": get_text_stdout, + "stderr": get_text_stderr, +} diff --git a/openpype/vendor/python/python_2/click/_termui_impl.py b/openpype/vendor/python/python_2/click/_termui_impl.py new file mode 100644 index 0000000000..88bec37701 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_termui_impl.py @@ -0,0 +1,657 @@ +# -*- coding: utf-8 -*- +""" +This module contains implementations for the termui module. To keep the +import time of Click down, some infrequently used functionality is +placed in this module and only imported as needed. 
+""" +import contextlib +import math +import os +import sys +import time + +from ._compat import _default_text_stdout +from ._compat import CYGWIN +from ._compat import get_best_encoding +from ._compat import int_types +from ._compat import isatty +from ._compat import open_stream +from ._compat import range_type +from ._compat import strip_ansi +from ._compat import term_len +from ._compat import WIN +from .exceptions import ClickException +from .utils import echo + +if os.name == "nt": + BEFORE_BAR = "\r" + AFTER_BAR = "\n" +else: + BEFORE_BAR = "\r\033[?25l" + AFTER_BAR = "\033[?25h\n" + + +def _length_hint(obj): + """Returns the length hint of an object.""" + try: + return len(obj) + except (AttributeError, TypeError): + try: + get_hint = type(obj).__length_hint__ + except AttributeError: + return None + try: + hint = get_hint(obj) + except TypeError: + return None + if hint is NotImplemented or not isinstance(hint, int_types) or hint < 0: + return None + return hint + + +class ProgressBar(object): + def __init__( + self, + iterable, + length=None, + fill_char="#", + empty_char=" ", + bar_template="%(bar)s", + info_sep=" ", + show_eta=True, + show_percent=None, + show_pos=False, + item_show_func=None, + label=None, + file=None, + color=None, + width=30, + ): + self.fill_char = fill_char + self.empty_char = empty_char + self.bar_template = bar_template + self.info_sep = info_sep + self.show_eta = show_eta + self.show_percent = show_percent + self.show_pos = show_pos + self.item_show_func = item_show_func + self.label = label or "" + if file is None: + file = _default_text_stdout() + self.file = file + self.color = color + self.width = width + self.autowidth = width == 0 + + if length is None: + length = _length_hint(iterable) + if iterable is None: + if length is None: + raise TypeError("iterable or length is required") + iterable = range_type(length) + self.iter = iter(iterable) + self.length = length + self.length_known = length is not None + self.pos = 0 + self.avg = [] + self.start = self.last_eta = time.time() + self.eta_known = False + self.finished = False + self.max_width = None + self.entered = False + self.current_item = None + self.is_hidden = not isatty(self.file) + self._last_line = None + self.short_limit = 0.5 + + def __enter__(self): + self.entered = True + self.render_progress() + return self + + def __exit__(self, exc_type, exc_value, tb): + self.render_finish() + + def __iter__(self): + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + self.render_progress() + return self.generator() + + def __next__(self): + # Iteration is defined in terms of a generator function, + # returned by iter(self); use that to define next(). This works + # because `self.iter` is an iterable consumed by that generator, + # so it is re-entry safe. Calling `next(self.generator())` + # twice works and does "what you want". 
+ return next(iter(self)) + + # Python 2 compat + next = __next__ + + def is_fast(self): + return time.time() - self.start <= self.short_limit + + def render_finish(self): + if self.is_hidden or self.is_fast(): + return + self.file.write(AFTER_BAR) + self.file.flush() + + @property + def pct(self): + if self.finished: + return 1.0 + return min(self.pos / (float(self.length) or 1), 1.0) + + @property + def time_per_iteration(self): + if not self.avg: + return 0.0 + return sum(self.avg) / float(len(self.avg)) + + @property + def eta(self): + if self.length_known and not self.finished: + return self.time_per_iteration * (self.length - self.pos) + return 0.0 + + def format_eta(self): + if self.eta_known: + t = int(self.eta) + seconds = t % 60 + t //= 60 + minutes = t % 60 + t //= 60 + hours = t % 24 + t //= 24 + if t > 0: + return "{}d {:02}:{:02}:{:02}".format(t, hours, minutes, seconds) + else: + return "{:02}:{:02}:{:02}".format(hours, minutes, seconds) + return "" + + def format_pos(self): + pos = str(self.pos) + if self.length_known: + pos += "/{}".format(self.length) + return pos + + def format_pct(self): + return "{: 4}%".format(int(self.pct * 100))[1:] + + def format_bar(self): + if self.length_known: + bar_length = int(self.pct * self.width) + bar = self.fill_char * bar_length + bar += self.empty_char * (self.width - bar_length) + elif self.finished: + bar = self.fill_char * self.width + else: + bar = list(self.empty_char * (self.width or 1)) + if self.time_per_iteration != 0: + bar[ + int( + (math.cos(self.pos * self.time_per_iteration) / 2.0 + 0.5) + * self.width + ) + ] = self.fill_char + bar = "".join(bar) + return bar + + def format_progress_line(self): + show_percent = self.show_percent + + info_bits = [] + if self.length_known and show_percent is None: + show_percent = not self.show_pos + + if self.show_pos: + info_bits.append(self.format_pos()) + if show_percent: + info_bits.append(self.format_pct()) + if self.show_eta and self.eta_known and not self.finished: + info_bits.append(self.format_eta()) + if self.item_show_func is not None: + item_info = self.item_show_func(self.current_item) + if item_info is not None: + info_bits.append(item_info) + + return ( + self.bar_template + % { + "label": self.label, + "bar": self.format_bar(), + "info": self.info_sep.join(info_bits), + } + ).rstrip() + + def render_progress(self): + from .termui import get_terminal_size + + if self.is_hidden: + return + + buf = [] + # Update width in case the terminal has been resized + if self.autowidth: + old_width = self.width + self.width = 0 + clutter_length = term_len(self.format_progress_line()) + new_width = max(0, get_terminal_size()[0] - clutter_length) + if new_width < old_width: + buf.append(BEFORE_BAR) + buf.append(" " * self.max_width) + self.max_width = new_width + self.width = new_width + + clear_width = self.width + if self.max_width is not None: + clear_width = self.max_width + + buf.append(BEFORE_BAR) + line = self.format_progress_line() + line_len = term_len(line) + if self.max_width is None or self.max_width < line_len: + self.max_width = line_len + + buf.append(line) + buf.append(" " * (clear_width - line_len)) + line = "".join(buf) + # Render the line only if it changed. 
+ + if line != self._last_line and not self.is_fast(): + self._last_line = line + echo(line, file=self.file, color=self.color, nl=False) + self.file.flush() + + def make_step(self, n_steps): + self.pos += n_steps + if self.length_known and self.pos >= self.length: + self.finished = True + + if (time.time() - self.last_eta) < 1.0: + return + + self.last_eta = time.time() + + # self.avg is a rolling list of length <= 7 of steps where steps are + # defined as time elapsed divided by the total progress through + # self.length. + if self.pos: + step = (time.time() - self.start) / self.pos + else: + step = time.time() - self.start + + self.avg = self.avg[-6:] + [step] + + self.eta_known = self.length_known + + def update(self, n_steps): + self.make_step(n_steps) + self.render_progress() + + def finish(self): + self.eta_known = 0 + self.current_item = None + self.finished = True + + def generator(self): + """Return a generator which yields the items added to the bar + during construction, and updates the progress bar *after* the + yielded block returns. + """ + # WARNING: the iterator interface for `ProgressBar` relies on + # this and only works because this is a simple generator which + # doesn't create or manage additional state. If this function + # changes, the impact should be evaluated both against + # `iter(bar)` and `next(bar)`. `next()` in particular may call + # `self.generator()` repeatedly, and this must remain safe in + # order for that interface to work. + if not self.entered: + raise RuntimeError("You need to use progress bars in a with block.") + + if self.is_hidden: + for rv in self.iter: + yield rv + else: + for rv in self.iter: + self.current_item = rv + yield rv + self.update(1) + self.finish() + self.render_progress() + + +def pager(generator, color=None): + """Decide what method to use for paging through text.""" + stdout = _default_text_stdout() + if not isatty(sys.stdin) or not isatty(stdout): + return _nullpager(stdout, generator, color) + pager_cmd = (os.environ.get("PAGER", None) or "").strip() + if pager_cmd: + if WIN: + return _tempfilepager(generator, pager_cmd, color) + return _pipepager(generator, pager_cmd, color) + if os.environ.get("TERM") in ("dumb", "emacs"): + return _nullpager(stdout, generator, color) + if WIN or sys.platform.startswith("os2"): + return _tempfilepager(generator, "more <", color) + if hasattr(os, "system") and os.system("(less) 2>/dev/null") == 0: + return _pipepager(generator, "less", color) + + import tempfile + + fd, filename = tempfile.mkstemp() + os.close(fd) + try: + if hasattr(os, "system") and os.system('more "{}"'.format(filename)) == 0: + return _pipepager(generator, "more", color) + return _nullpager(stdout, generator, color) + finally: + os.unlink(filename) + + +def _pipepager(generator, cmd, color): + """Page through text by feeding it to another program. Invoking a + pager through this might support colors. 
+    """
+    import subprocess
+
+    env = dict(os.environ)
+
+    # If we're piping to less, we might support colors under the
+    # condition that the -r or -R flag is present in its options.
+    cmd_detail = cmd.rsplit("/", 1)[-1].split()
+    if color is None and cmd_detail[0] == "less":
+        less_flags = "{}{}".format(os.environ.get("LESS", ""), " ".join(cmd_detail[1:]))
+        if not less_flags:
+            env["LESS"] = "-R"
+            color = True
+        elif "r" in less_flags or "R" in less_flags:
+            color = True
+
+    c = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, env=env)
+    encoding = get_best_encoding(c.stdin)
+    try:
+        for text in generator:
+            if not color:
+                text = strip_ansi(text)
+
+            c.stdin.write(text.encode(encoding, "replace"))
+    except (IOError, KeyboardInterrupt):
+        pass
+    else:
+        c.stdin.close()
+
+    # Less doesn't respect ^C, but catches it for its own UI purposes (aborting
+    # search or other commands inside less).
+    #
+    # That means when the user hits ^C, the parent process (click) terminates,
+    # but less is still alive, paging the output and messing up the terminal.
+    #
+    # If the user wants to make the pager exit on ^C, they should set
+    # `LESS='-K'`. It's not our decision to make.
+    while True:
+        try:
+            c.wait()
+        except KeyboardInterrupt:
+            pass
+        else:
+            break
+
+
+def _tempfilepager(generator, cmd, color):
+    """Page through text by invoking a program on a temporary file."""
+    import tempfile
+
+    filename = tempfile.mktemp()
+    # TODO: This never terminates if the passed generator never terminates.
+    text = "".join(generator)
+    if not color:
+        text = strip_ansi(text)
+    encoding = get_best_encoding(sys.stdout)
+    with open_stream(filename, "wb")[0] as f:
+        f.write(text.encode(encoding))
+    try:
+        os.system('{} "{}"'.format(cmd, filename))
+    finally:
+        os.unlink(filename)
+
+
+def _nullpager(stream, generator, color):
+    """Simply print unformatted text.
+    This is the ultimate fallback."""
+    for text in generator:
+        if not color:
+            text = strip_ansi(text)
+        stream.write(text)
+
+
+class Editor(object):
+    def __init__(self, editor=None, env=None, require_save=True, extension=".txt"):
+        self.editor = editor
+        self.env = env
+        self.require_save = require_save
+        self.extension = extension
+
+    def get_editor(self):
+        if self.editor is not None:
+            return self.editor
+        for key in "VISUAL", "EDITOR":
+            rv = os.environ.get(key)
+            if rv:
+                return rv
+        if WIN:
+            return "notepad"
+        for editor in "sensible-editor", "vim", "nano":
+            if os.system("which {} >/dev/null 2>&1".format(editor)) == 0:
+                return editor
+        return "vi"
+
+    def edit_file(self, filename):
+        import subprocess
+
+        editor = self.get_editor()
+        if self.env:
+            environ = os.environ.copy()
+            environ.update(self.env)
+        else:
+            environ = None
+        try:
+            c = subprocess.Popen(
+                '{} "{}"'.format(editor, filename), env=environ, shell=True,
+            )
+            exit_code = c.wait()
+            if exit_code != 0:
+                raise ClickException("{}: Editing failed!".format(editor))
+        except OSError as e:
+            raise ClickException("{}: Editing failed: {}".format(editor, e))
+
+    def edit(self, text):
+        import tempfile
+
+        text = text or ""
+        if text and not text.endswith("\n"):
+            text += "\n"
+
+        fd, name = tempfile.mkstemp(prefix="editor-", suffix=self.extension)
+        try:
+            if WIN:
+                encoding = "utf-8-sig"
+                text = text.replace("\n", "\r\n")
+            else:
+                encoding = "utf-8"
+            text = text.encode(encoding)
+
+            f = os.fdopen(fd, "wb")
+            f.write(text)
+            f.close()
+            timestamp = os.path.getmtime(name)
+
+            self.edit_file(name)
+
+            if self.require_save and os.path.getmtime(name) == timestamp:
+                return None
+
+            f = open(name, "rb")
+            try:
+                rv = f.read()
+            finally:
+                f.close()
+            return rv.decode("utf-8-sig").replace("\r\n", "\n")
+        finally:
+            os.unlink(name)
+
+
+def open_url(url, wait=False, locate=False):
+    import subprocess
+
+    def _unquote_file(url):
+        # On Python 2 the flat `urllib` module provides `unquote`.
+        import urllib
+
+        if url.startswith("file://"):
+            url = urllib.unquote(url[7:])
+        return url
+
+    if sys.platform == "darwin":
+        args = ["open"]
+        if wait:
+            args.append("-W")
+        if locate:
+            args.append("-R")
+        args.append(_unquote_file(url))
+        null = open("/dev/null", "w")
+        try:
+            return subprocess.Popen(args, stderr=null).wait()
+        finally:
+            null.close()
+    elif WIN:
+        if locate:
+            url = _unquote_file(url)
+            args = 'explorer /select,"{}"'.format(_unquote_file(url.replace('"', "")))
+        else:
+            args = 'start {} "" "{}"'.format(
+                "/WAIT" if wait else "", url.replace('"', "")
+            )
+        return os.system(args)
+    elif CYGWIN:
+        if locate:
+            url = _unquote_file(url)
+            args = 'cygstart "{}"'.format(os.path.dirname(url).replace('"', ""))
+        else:
+            args = 'cygstart {} "{}"'.format("-w" if wait else "", url.replace('"', ""))
+        return os.system(args)
+
+    try:
+        if locate:
+            url = os.path.dirname(_unquote_file(url)) or "."
+ else: + url = _unquote_file(url) + c = subprocess.Popen(["xdg-open", url]) + if wait: + return c.wait() + return 0 + except OSError: + if url.startswith(("http://", "https://")) and not locate and not wait: + import webbrowser + + webbrowser.open(url) + return 0 + return 1 + + +def _translate_ch_to_exc(ch): + if ch == u"\x03": + raise KeyboardInterrupt() + if ch == u"\x04" and not WIN: # Unix-like, Ctrl+D + raise EOFError() + if ch == u"\x1a" and WIN: # Windows, Ctrl+Z + raise EOFError() + + +if WIN: + import msvcrt + + @contextlib.contextmanager + def raw_terminal(): + yield + + def getchar(echo): + # The function `getch` will return a bytes object corresponding to + # the pressed character. Since Windows 10 build 1803, it will also + # return \x00 when called a second time after pressing a regular key. + # + # `getwch` does not share this probably-bugged behavior. Moreover, it + # returns a Unicode object by default, which is what we want. + # + # Either of these functions will return \x00 or \xe0 to indicate + # a special key, and you need to call the same function again to get + # the "rest" of the code. The fun part is that \u00e0 is + # "latin small letter a with grave", so if you type that on a French + # keyboard, you _also_ get a \xe0. + # E.g., consider the Up arrow. This returns \xe0 and then \x48. The + # resulting Unicode string reads as "a with grave" + "capital H". + # This is indistinguishable from when the user actually types + # "a with grave" and then "capital H". + # + # When \xe0 is returned, we assume it's part of a special-key sequence + # and call `getwch` again, but that means that when the user types + # the \u00e0 character, `getchar` doesn't return until a second + # character is typed. + # The alternative is returning immediately, but that would mess up + # cross-platform handling of arrow keys and others that start with + # \xe0. Another option is using `getch`, but then we can't reliably + # read non-ASCII characters, because return values of `getch` are + # limited to the current 8-bit codepage. + # + # Anyway, Click doesn't claim to do this Right(tm), and using `getwch` + # is doing the right thing in more situations than with `getch`. + if echo: + func = msvcrt.getwche + else: + func = msvcrt.getwch + + rv = func() + if rv in (u"\x00", u"\xe0"): + # \x00 and \xe0 are control characters that indicate special key, + # see above. 
+ rv += func() + _translate_ch_to_exc(rv) + return rv + + +else: + import tty + import termios + + @contextlib.contextmanager + def raw_terminal(): + if not isatty(sys.stdin): + f = open("/dev/tty") + fd = f.fileno() + else: + fd = sys.stdin.fileno() + f = None + try: + old_settings = termios.tcgetattr(fd) + try: + tty.setraw(fd) + yield fd + finally: + termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) + sys.stdout.flush() + if f is not None: + f.close() + except termios.error: + pass + + def getchar(echo): + with raw_terminal() as fd: + ch = os.read(fd, 32) + ch = ch.decode(get_best_encoding(sys.stdin), "replace") + if echo and isatty(sys.stdout): + sys.stdout.write(ch) + _translate_ch_to_exc(ch) + return ch diff --git a/openpype/vendor/python/python_2/click/_textwrap.py b/openpype/vendor/python/python_2/click/_textwrap.py new file mode 100644 index 0000000000..6959087b7f --- /dev/null +++ b/openpype/vendor/python/python_2/click/_textwrap.py @@ -0,0 +1,37 @@ +import textwrap +from contextlib import contextmanager + + +class TextWrapper(textwrap.TextWrapper): + def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width): + space_left = max(width - cur_len, 1) + + if self.break_long_words: + last = reversed_chunks[-1] + cut = last[:space_left] + res = last[space_left:] + cur_line.append(cut) + reversed_chunks[-1] = res + elif not cur_line: + cur_line.append(reversed_chunks.pop()) + + @contextmanager + def extra_indent(self, indent): + old_initial_indent = self.initial_indent + old_subsequent_indent = self.subsequent_indent + self.initial_indent += indent + self.subsequent_indent += indent + try: + yield + finally: + self.initial_indent = old_initial_indent + self.subsequent_indent = old_subsequent_indent + + def indent_only(self, text): + rv = [] + for idx, line in enumerate(text.splitlines()): + indent = self.initial_indent + if idx > 0: + indent = self.subsequent_indent + rv.append(indent + line) + return "\n".join(rv) diff --git a/openpype/vendor/python/python_2/click/_unicodefun.py b/openpype/vendor/python/python_2/click/_unicodefun.py new file mode 100644 index 0000000000..781c365227 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_unicodefun.py @@ -0,0 +1,131 @@ +import codecs +import os +import sys + +from ._compat import PY2 + + +def _find_unicode_literals_frame(): + import __future__ + + if not hasattr(sys, "_getframe"): # not all Python implementations have it + return 0 + frm = sys._getframe(1) + idx = 1 + while frm is not None: + if frm.f_globals.get("__name__", "").startswith("click."): + frm = frm.f_back + idx += 1 + elif frm.f_code.co_flags & __future__.unicode_literals.compiler_flag: + return idx + else: + break + return 0 + + +def _check_for_unicode_literals(): + if not __debug__: + return + + from . import disable_unicode_literals_warning + + if not PY2 or disable_unicode_literals_warning: + return + bad_frame = _find_unicode_literals_frame() + if bad_frame <= 0: + return + from warnings import warn + + warn( + Warning( + "Click detected the use of the unicode_literals __future__" + " import. This is heavily discouraged because it can" + " introduce subtle bugs in your code. You should instead" + ' use explicit u"" literals for your unicode strings. 
For' + " more information see" + " https://click.palletsprojects.com/python3/" + ), + stacklevel=bad_frame, + ) + + +def _verify_python3_env(): + """Ensures that the environment is good for unicode on Python 3.""" + if PY2: + return + try: + import locale + + fs_enc = codecs.lookup(locale.getpreferredencoding()).name + except Exception: + fs_enc = "ascii" + if fs_enc != "ascii": + return + + extra = "" + if os.name == "posix": + import subprocess + + try: + rv = subprocess.Popen( + ["locale", "-a"], stdout=subprocess.PIPE, stderr=subprocess.PIPE + ).communicate()[0] + except OSError: + rv = b"" + good_locales = set() + has_c_utf8 = False + + # Make sure we're operating on text here. + if isinstance(rv, bytes): + rv = rv.decode("ascii", "replace") + + for line in rv.splitlines(): + locale = line.strip() + if locale.lower().endswith((".utf-8", ".utf8")): + good_locales.add(locale) + if locale.lower() in ("c.utf8", "c.utf-8"): + has_c_utf8 = True + + extra += "\n\n" + if not good_locales: + extra += ( + "Additional information: on this system no suitable" + " UTF-8 locales were discovered. This most likely" + " requires resolving by reconfiguring the locale" + " system." + ) + elif has_c_utf8: + extra += ( + "This system supports the C.UTF-8 locale which is" + " recommended. You might be able to resolve your issue" + " by exporting the following environment variables:\n\n" + " export LC_ALL=C.UTF-8\n" + " export LANG=C.UTF-8" + ) + else: + extra += ( + "This system lists a couple of UTF-8 supporting locales" + " that you can pick from. The following suitable" + " locales were discovered: {}".format(", ".join(sorted(good_locales))) + ) + + bad_locale = None + for locale in os.environ.get("LC_ALL"), os.environ.get("LANG"): + if locale and locale.lower().endswith((".utf-8", ".utf8")): + bad_locale = locale + if locale is not None: + break + if bad_locale is not None: + extra += ( + "\n\nClick discovered that you exported a UTF-8 locale" + " but the locale system could not pick up from it" + " because it does not exist. The exported locale is" + " '{}' but it is not supported".format(bad_locale) + ) + + raise RuntimeError( + "Click will abort further execution because Python 3 was" + " configured to use ASCII as encoding for the environment." + " Consult https://click.palletsprojects.com/python3/ for" + " mitigation steps.{}".format(extra) + ) diff --git a/openpype/vendor/python/python_2/click/_winconsole.py b/openpype/vendor/python/python_2/click/_winconsole.py new file mode 100644 index 0000000000..b6c4274af0 --- /dev/null +++ b/openpype/vendor/python/python_2/click/_winconsole.py @@ -0,0 +1,370 @@ +# -*- coding: utf-8 -*- +# This module is based on the excellent work by Adam Bartoš who +# provided a lot of what went into the implementation here in +# the discussion to issue1602 in the Python bug tracker. +# +# There are some general differences in regards to how this works +# compared to the original patches as we do not need to patch +# the entire interpreter but just work in our little world of +# echo and prmopt. 
+import ctypes +import io +import os +import sys +import time +import zlib +from ctypes import byref +from ctypes import c_char +from ctypes import c_char_p +from ctypes import c_int +from ctypes import c_ssize_t +from ctypes import c_ulong +from ctypes import c_void_p +from ctypes import POINTER +from ctypes import py_object +from ctypes import windll +from ctypes import WinError +from ctypes import WINFUNCTYPE +from ctypes.wintypes import DWORD +from ctypes.wintypes import HANDLE +from ctypes.wintypes import LPCWSTR +from ctypes.wintypes import LPWSTR + +import msvcrt + +from ._compat import _NonClosingTextIOWrapper +from ._compat import PY2 +from ._compat import text_type + +try: + from ctypes import pythonapi + + PyObject_GetBuffer = pythonapi.PyObject_GetBuffer + PyBuffer_Release = pythonapi.PyBuffer_Release +except ImportError: + pythonapi = None + + +c_ssize_p = POINTER(c_ssize_t) + +kernel32 = windll.kernel32 +GetStdHandle = kernel32.GetStdHandle +ReadConsoleW = kernel32.ReadConsoleW +WriteConsoleW = kernel32.WriteConsoleW +GetConsoleMode = kernel32.GetConsoleMode +GetLastError = kernel32.GetLastError +GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32)) +CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))( + ("CommandLineToArgvW", windll.shell32) +) +LocalFree = WINFUNCTYPE(ctypes.c_void_p, ctypes.c_void_p)( + ("LocalFree", windll.kernel32) +) + + +STDIN_HANDLE = GetStdHandle(-10) +STDOUT_HANDLE = GetStdHandle(-11) +STDERR_HANDLE = GetStdHandle(-12) + + +PyBUF_SIMPLE = 0 +PyBUF_WRITABLE = 1 + +ERROR_SUCCESS = 0 +ERROR_NOT_ENOUGH_MEMORY = 8 +ERROR_OPERATION_ABORTED = 995 + +STDIN_FILENO = 0 +STDOUT_FILENO = 1 +STDERR_FILENO = 2 + +EOF = b"\x1a" +MAX_BYTES_WRITTEN = 32767 + + +class Py_buffer(ctypes.Structure): + _fields_ = [ + ("buf", c_void_p), + ("obj", py_object), + ("len", c_ssize_t), + ("itemsize", c_ssize_t), + ("readonly", c_int), + ("ndim", c_int), + ("format", c_char_p), + ("shape", c_ssize_p), + ("strides", c_ssize_p), + ("suboffsets", c_ssize_p), + ("internal", c_void_p), + ] + + if PY2: + _fields_.insert(-1, ("smalltable", c_ssize_t * 2)) + + +# On PyPy we cannot get buffers so our ability to operate here is +# serverly limited. 
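+# Illustrative sketch (added comment, not upstream text): `get_buffer`,
+# defined below, wraps any object exposing the buffer protocol in a
+# ctypes char array so the console APIs can fill it in place, e.g.:
+#
+# b = bytearray(1024)
+# buf = get_buffer(b, writable=True) # ctypes view over b's memory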
+if pythonapi is None:
+ get_buffer = None
+else:
+
+ def get_buffer(obj, writable=False):
+ buf = Py_buffer()
+ flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE
+ PyObject_GetBuffer(py_object(obj), byref(buf), flags)
+ try:
+ buffer_type = c_char * buf.len
+ return buffer_type.from_address(buf.buf)
+ finally:
+ PyBuffer_Release(byref(buf))
+
+
+class _WindowsConsoleRawIOBase(io.RawIOBase):
+ def __init__(self, handle):
+ self.handle = handle
+
+ def isatty(self):
+ io.RawIOBase.isatty(self)
+ return True
+
+
+class _WindowsConsoleReader(_WindowsConsoleRawIOBase):
+ def readable(self):
+ return True
+
+ def readinto(self, b):
+ bytes_to_be_read = len(b)
+ if not bytes_to_be_read:
+ return 0
+ elif bytes_to_be_read % 2:
+ raise ValueError(
+ "cannot read odd number of bytes from UTF-16-LE encoded console"
+ )
+
+ buffer = get_buffer(b, writable=True)
+ code_units_to_be_read = bytes_to_be_read // 2
+ code_units_read = c_ulong()
+
+ rv = ReadConsoleW(
+ HANDLE(self.handle),
+ buffer,
+ code_units_to_be_read,
+ byref(code_units_read),
+ None,
+ )
+ if GetLastError() == ERROR_OPERATION_ABORTED:
+ # wait for KeyboardInterrupt
+ time.sleep(0.1)
+ if not rv:
+ raise OSError("Windows error: {}".format(GetLastError()))
+
+ if buffer[0] == EOF:
+ return 0
+ return 2 * code_units_read.value
+
+
+class _WindowsConsoleWriter(_WindowsConsoleRawIOBase):
+ def writable(self):
+ return True
+
+ @staticmethod
+ def _get_error_message(errno):
+ if errno == ERROR_SUCCESS:
+ return "ERROR_SUCCESS"
+ elif errno == ERROR_NOT_ENOUGH_MEMORY:
+ return "ERROR_NOT_ENOUGH_MEMORY"
+ return "Windows error {}".format(errno)
+
+ def write(self, b):
+ bytes_to_be_written = len(b)
+ buf = get_buffer(b)
+ code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2
+ code_units_written = c_ulong()
+
+ WriteConsoleW(
+ HANDLE(self.handle),
+ buf,
+ code_units_to_be_written,
+ byref(code_units_written),
+ None,
+ )
+ bytes_written = 2 * code_units_written.value
+
+ if bytes_written == 0 and bytes_to_be_written > 0:
+ raise OSError(self._get_error_message(GetLastError()))
+ return bytes_written
+
+
+class ConsoleStream(object):
+ def __init__(self, text_stream, byte_stream):
+ self._text_stream = text_stream
+ self.buffer = byte_stream
+
+ @property
+ def name(self):
+ return self.buffer.name
+
+ def write(self, x):
+ if isinstance(x, text_type):
+ return self._text_stream.write(x)
+ try:
+ self.flush()
+ except Exception:
+ pass
+ return self.buffer.write(x)
+
+ def writelines(self, lines):
+ for line in lines:
+ self.write(line)
+
+ def __getattr__(self, name):
+ return getattr(self._text_stream, name)
+
+ def isatty(self):
+ return self.buffer.isatty()
+
+ def __repr__(self):
+ return "<ConsoleStream name={!r} encoding={!r}>".format(
+ self.name, self.encoding
+ )
+
+
+class WindowsChunkedWriter(object):
+ """
+ Wraps a stream (such as stdout), acting as a transparent proxy for all
+ attribute access apart from method 'write()' which we wrap to write in
+ limited chunks due to a Windows limitation on binary console streams.
+ """
+
+ def __init__(self, wrapped):
+ # double-underscore everything to prevent clashes with names of
+ # attributes on the wrapped stream object.
+ self.__wrapped = wrapped + + def __getattr__(self, name): + return getattr(self.__wrapped, name) + + def write(self, text): + total_to_write = len(text) + written = 0 + + while written < total_to_write: + to_write = min(total_to_write - written, MAX_BYTES_WRITTEN) + self.__wrapped.write(text[written : written + to_write]) + written += to_write + + +_wrapped_std_streams = set() + + +def _wrap_std_stream(name): + # Python 2 & Windows 7 and below + if ( + PY2 + and sys.getwindowsversion()[:2] <= (6, 1) + and name not in _wrapped_std_streams + ): + setattr(sys, name, WindowsChunkedWriter(getattr(sys, name))) + _wrapped_std_streams.add(name) + + +def _get_text_stdin(buffer_stream): + text_stream = _NonClosingTextIOWrapper( + io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return ConsoleStream(text_stream, buffer_stream) + + +def _get_text_stdout(buffer_stream): + text_stream = _NonClosingTextIOWrapper( + io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return ConsoleStream(text_stream, buffer_stream) + + +def _get_text_stderr(buffer_stream): + text_stream = _NonClosingTextIOWrapper( + io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)), + "utf-16-le", + "strict", + line_buffering=True, + ) + return ConsoleStream(text_stream, buffer_stream) + + +if PY2: + + def _hash_py_argv(): + return zlib.crc32("\x00".join(sys.argv[1:])) + + _initial_argv_hash = _hash_py_argv() + + def _get_windows_argv(): + argc = c_int(0) + argv_unicode = CommandLineToArgvW(GetCommandLineW(), byref(argc)) + if not argv_unicode: + raise WinError() + try: + argv = [argv_unicode[i] for i in range(0, argc.value)] + finally: + LocalFree(argv_unicode) + del argv_unicode + + if not hasattr(sys, "frozen"): + argv = argv[1:] + while len(argv) > 0: + arg = argv[0] + if not arg.startswith("-") or arg == "-": + break + argv = argv[1:] + if arg.startswith(("-c", "-m")): + break + + return argv[1:] + + +_stream_factories = { + 0: _get_text_stdin, + 1: _get_text_stdout, + 2: _get_text_stderr, +} + + +def _is_console(f): + if not hasattr(f, "fileno"): + return False + + try: + fileno = f.fileno() + except OSError: + return False + + handle = msvcrt.get_osfhandle(fileno) + return bool(GetConsoleMode(handle, byref(DWORD()))) + + +def _get_windows_console_stream(f, encoding, errors): + if ( + get_buffer is not None + and encoding in ("utf-16-le", None) + and errors in ("strict", None) + and _is_console(f) + ): + func = _stream_factories.get(f.fileno()) + if func is not None: + if not PY2: + f = getattr(f, "buffer", None) + if f is None: + return None + else: + # If we are on Python 2 we need to set the stream that we + # deal with to binary mode as otherwise the exercise if a + # bit moot. The same problems apply as for + # get_binary_stdin and friends from _compat. 
+ msvcrt.setmode(f.fileno(), os.O_BINARY) + return func(f) diff --git a/openpype/vendor/python/python_2/click/core.py b/openpype/vendor/python/python_2/click/core.py new file mode 100644 index 0000000000..f58bf26d2f --- /dev/null +++ b/openpype/vendor/python/python_2/click/core.py @@ -0,0 +1,2030 @@ +import errno +import inspect +import os +import sys +from contextlib import contextmanager +from functools import update_wrapper +from itertools import repeat + +from ._compat import isidentifier +from ._compat import iteritems +from ._compat import PY2 +from ._compat import string_types +from ._unicodefun import _check_for_unicode_literals +from ._unicodefun import _verify_python3_env +from .exceptions import Abort +from .exceptions import BadParameter +from .exceptions import ClickException +from .exceptions import Exit +from .exceptions import MissingParameter +from .exceptions import UsageError +from .formatting import HelpFormatter +from .formatting import join_options +from .globals import pop_context +from .globals import push_context +from .parser import OptionParser +from .parser import split_opt +from .termui import confirm +from .termui import prompt +from .termui import style +from .types import BOOL +from .types import convert_type +from .types import IntRange +from .utils import echo +from .utils import get_os_args +from .utils import make_default_short_help +from .utils import make_str +from .utils import PacifyFlushWrapper + +_missing = object() + +SUBCOMMAND_METAVAR = "COMMAND [ARGS]..." +SUBCOMMANDS_METAVAR = "COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]..." + +DEPRECATED_HELP_NOTICE = " (DEPRECATED)" +DEPRECATED_INVOKE_NOTICE = "DeprecationWarning: The command %(name)s is deprecated." + + +def _maybe_show_deprecated_notice(cmd): + if cmd.deprecated: + echo(style(DEPRECATED_INVOKE_NOTICE % {"name": cmd.name}, fg="red"), err=True) + + +def fast_exit(code): + """Exit without garbage collection, this speeds up exit by about 10ms for + things like bash completion. + """ + sys.stdout.flush() + sys.stderr.flush() + os._exit(code) + + +def _bashcomplete(cmd, prog_name, complete_var=None): + """Internal handler for the bash completion support.""" + if complete_var is None: + complete_var = "_{}_COMPLETE".format(prog_name.replace("-", "_").upper()) + complete_instr = os.environ.get(complete_var) + if not complete_instr: + return + + from ._bashcomplete import bashcomplete + + if bashcomplete(cmd, prog_name, complete_var, complete_instr): + fast_exit(1) + + +def _check_multicommand(base_command, cmd_name, cmd, register=False): + if not base_command.chain or not isinstance(cmd, MultiCommand): + return + if register: + hint = ( + "It is not possible to add multi commands as children to" + " another multi command that is in chain mode." + ) + else: + hint = ( + "Found a multi command as subcommand to a multi command" + " that is in chain mode. This is not supported." + ) + raise RuntimeError( + "{}. Command '{}' is set to chain and '{}' was added as" + " subcommand but it in itself is a multi command. 
('{}' is a {}" + " within a chained {} named '{}').".format( + hint, + base_command.name, + cmd_name, + cmd_name, + cmd.__class__.__name__, + base_command.__class__.__name__, + base_command.name, + ) + ) + + +def batch(iterable, batch_size): + return list(zip(*repeat(iter(iterable), batch_size))) + + +def invoke_param_callback(callback, ctx, param, value): + code = getattr(callback, "__code__", None) + args = getattr(code, "co_argcount", 3) + + if args < 3: + from warnings import warn + + warn( + "Parameter callbacks take 3 args, (ctx, param, value). The" + " 2-arg style is deprecated and will be removed in 8.0.".format(callback), + DeprecationWarning, + stacklevel=3, + ) + return callback(ctx, value) + + return callback(ctx, param, value) + + +@contextmanager +def augment_usage_errors(ctx, param=None): + """Context manager that attaches extra information to exceptions.""" + try: + yield + except BadParameter as e: + if e.ctx is None: + e.ctx = ctx + if param is not None and e.param is None: + e.param = param + raise + except UsageError as e: + if e.ctx is None: + e.ctx = ctx + raise + + +def iter_params_for_processing(invocation_order, declaration_order): + """Given a sequence of parameters in the order as should be considered + for processing and an iterable of parameters that exist, this returns + a list in the correct order as they should be processed. + """ + + def sort_key(item): + try: + idx = invocation_order.index(item) + except ValueError: + idx = float("inf") + return (not item.is_eager, idx) + + return sorted(declaration_order, key=sort_key) + + +class Context(object): + """The context is a special internal object that holds state relevant + for the script execution at every single level. It's normally invisible + to commands unless they opt-in to getting access to it. + + The context is useful as it can pass internal objects around and can + control special execution features such as reading data from + environment variables. + + A context can be used as context manager in which case it will call + :meth:`close` on teardown. + + .. versionadded:: 2.0 + Added the `resilient_parsing`, `help_option_names`, + `token_normalize_func` parameters. + + .. versionadded:: 3.0 + Added the `allow_extra_args` and `allow_interspersed_args` + parameters. + + .. versionadded:: 4.0 + Added the `color`, `ignore_unknown_options`, and + `max_content_width` parameters. + + .. versionadded:: 7.1 + Added the `show_default` parameter. + + :param command: the command class for this context. + :param parent: the parent context. + :param info_name: the info name for this invocation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it is usually + the name of the script, for commands below it it's + the name of the script. + :param obj: an arbitrary object of user data. + :param auto_envvar_prefix: the prefix to use for automatic environment + variables. If this is `None` then reading + from environment variables is disabled. This + does not affect manually set environment + variables which are always read. + :param default_map: a dictionary (like object) with default values + for parameters. + :param terminal_width: the width of the terminal. The default is + inherit from parent context. If no context + defines the terminal width then auto + detection will be applied. + :param max_content_width: the maximum width for content rendered by + Click (this currently only affects help + pages). This defaults to 80 characters if + not overridden. 
In other words: even if the + terminal is larger than that, Click will not + format things wider than 80 characters by + default. In addition to that, formatters might + add some safety mapping on the right. + :param resilient_parsing: if this flag is enabled then Click will + parse without any interactivity or callback + invocation. Default values will also be + ignored. This is useful for implementing + things such as completion support. + :param allow_extra_args: if this is set to `True` then extra arguments + at the end will not raise an error and will be + kept on the context. The default is to inherit + from the command. + :param allow_interspersed_args: if this is set to `False` then options + and arguments cannot be mixed. The + default is to inherit from the command. + :param ignore_unknown_options: instructs click to ignore options it does + not know and keeps them for later + processing. + :param help_option_names: optionally a list of strings that define how + the default help parameter is named. The + default is ``['--help']``. + :param token_normalize_func: an optional function that is used to + normalize tokens (options, choices, + etc.). This for instance can be used to + implement case insensitive behavior. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are used in texts that Click prints which is by + default not the case. This for instance would affect + help output. + :param show_default: if True, shows defaults for all options. + Even if an option is later created with show_default=False, + this command-level setting overrides it. + """ + + def __init__( + self, + command, + parent=None, + info_name=None, + obj=None, + auto_envvar_prefix=None, + default_map=None, + terminal_width=None, + max_content_width=None, + resilient_parsing=False, + allow_extra_args=None, + allow_interspersed_args=None, + ignore_unknown_options=None, + help_option_names=None, + token_normalize_func=None, + color=None, + show_default=None, + ): + #: the parent context or `None` if none exists. + self.parent = parent + #: the :class:`Command` for this context. + self.command = command + #: the descriptive information name + self.info_name = info_name + #: the parsed parameters except if the value is hidden in which + #: case it's not remembered. + self.params = {} + #: the leftover arguments. + self.args = [] + #: protected arguments. These are arguments that are prepended + #: to `args` when certain parsing scenarios are encountered but + #: must be never propagated to another arguments. This is used + #: to implement nested parsing. + self.protected_args = [] + if obj is None and parent is not None: + obj = parent.obj + #: the user object stored. + self.obj = obj + self._meta = getattr(parent, "meta", {}) + + #: A dictionary (-like object) with defaults for parameters. + if ( + default_map is None + and parent is not None + and parent.default_map is not None + ): + default_map = parent.default_map.get(info_name) + self.default_map = default_map + + #: This flag indicates if a subcommand is going to be executed. A + #: group callback can use this information to figure out if it's + #: being executed directly or because the execution flow passes + #: onwards to a subcommand. By default it's None, but it can be + #: the name of the subcommand to execute. + #: + #: If chaining is enabled this will be set to ``'*'`` in case + #: any commands are executed. 
It is however not possible to + #: figure out which ones. If you require this knowledge you + #: should use a :func:`resultcallback`. + self.invoked_subcommand = None + + if terminal_width is None and parent is not None: + terminal_width = parent.terminal_width + #: The width of the terminal (None is autodetection). + self.terminal_width = terminal_width + + if max_content_width is None and parent is not None: + max_content_width = parent.max_content_width + #: The maximum width of formatted content (None implies a sensible + #: default which is 80 for most things). + self.max_content_width = max_content_width + + if allow_extra_args is None: + allow_extra_args = command.allow_extra_args + #: Indicates if the context allows extra args or if it should + #: fail on parsing. + #: + #: .. versionadded:: 3.0 + self.allow_extra_args = allow_extra_args + + if allow_interspersed_args is None: + allow_interspersed_args = command.allow_interspersed_args + #: Indicates if the context allows mixing of arguments and + #: options or not. + #: + #: .. versionadded:: 3.0 + self.allow_interspersed_args = allow_interspersed_args + + if ignore_unknown_options is None: + ignore_unknown_options = command.ignore_unknown_options + #: Instructs click to ignore options that a command does not + #: understand and will store it on the context for later + #: processing. This is primarily useful for situations where you + #: want to call into external programs. Generally this pattern is + #: strongly discouraged because it's not possibly to losslessly + #: forward all arguments. + #: + #: .. versionadded:: 4.0 + self.ignore_unknown_options = ignore_unknown_options + + if help_option_names is None: + if parent is not None: + help_option_names = parent.help_option_names + else: + help_option_names = ["--help"] + + #: The names for the help options. + self.help_option_names = help_option_names + + if token_normalize_func is None and parent is not None: + token_normalize_func = parent.token_normalize_func + + #: An optional normalization function for tokens. This is + #: options, choices, commands etc. + self.token_normalize_func = token_normalize_func + + #: Indicates if resilient parsing is enabled. In that case Click + #: will do its best to not cause any failures and default values + #: will be ignored. Useful for completion. + self.resilient_parsing = resilient_parsing + + # If there is no envvar prefix yet, but the parent has one and + # the command on this level has a name, we can expand the envvar + # prefix automatically. + if auto_envvar_prefix is None: + if ( + parent is not None + and parent.auto_envvar_prefix is not None + and self.info_name is not None + ): + auto_envvar_prefix = "{}_{}".format( + parent.auto_envvar_prefix, self.info_name.upper() + ) + else: + auto_envvar_prefix = auto_envvar_prefix.upper() + if auto_envvar_prefix is not None: + auto_envvar_prefix = auto_envvar_prefix.replace("-", "_") + self.auto_envvar_prefix = auto_envvar_prefix + + if color is None and parent is not None: + color = parent.color + + #: Controls if styling output is wanted or not. 
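+ #: Illustrative (added comment): a hypothetical caller can force
+ #: styling on or off for a whole context tree, e.g.
+ #: ``Context(cmd, color=True)``; leaving this ``None`` keeps click's
+ #: autodetection.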
+ self.color = color + + self.show_default = show_default + + self._close_callbacks = [] + self._depth = 0 + + def __enter__(self): + self._depth += 1 + push_context(self) + return self + + def __exit__(self, exc_type, exc_value, tb): + self._depth -= 1 + if self._depth == 0: + self.close() + pop_context() + + @contextmanager + def scope(self, cleanup=True): + """This helper method can be used with the context object to promote + it to the current thread local (see :func:`get_current_context`). + The default behavior of this is to invoke the cleanup functions which + can be disabled by setting `cleanup` to `False`. The cleanup + functions are typically used for things such as closing file handles. + + If the cleanup is intended the context object can also be directly + used as a context manager. + + Example usage:: + + with ctx.scope(): + assert get_current_context() is ctx + + This is equivalent:: + + with ctx: + assert get_current_context() is ctx + + .. versionadded:: 5.0 + + :param cleanup: controls if the cleanup functions should be run or + not. The default is to run these functions. In + some situations the context only wants to be + temporarily pushed in which case this can be disabled. + Nested pushes automatically defer the cleanup. + """ + if not cleanup: + self._depth += 1 + try: + with self as rv: + yield rv + finally: + if not cleanup: + self._depth -= 1 + + @property + def meta(self): + """This is a dictionary which is shared with all the contexts + that are nested. It exists so that click utilities can store some + state here if they need to. It is however the responsibility of + that code to manage this dictionary well. + + The keys are supposed to be unique dotted strings. For instance + module paths are a good choice for it. What is stored in there is + irrelevant for the operation of click. However what is important is + that code that places data here adheres to the general semantics of + the system. + + Example usage:: + + LANG_KEY = f'{__name__}.lang' + + def set_language(value): + ctx = get_current_context() + ctx.meta[LANG_KEY] = value + + def get_language(): + return get_current_context().meta.get(LANG_KEY, 'en_US') + + .. versionadded:: 5.0 + """ + return self._meta + + def make_formatter(self): + """Creates the formatter for the help and usage output.""" + return HelpFormatter( + width=self.terminal_width, max_width=self.max_content_width + ) + + def call_on_close(self, f): + """This decorator remembers a function as callback that should be + executed when the context tears down. This is most useful to bind + resource handling to the script execution. For instance, file objects + opened by the :class:`File` type will register their close callbacks + here. + + :param f: the function to execute on teardown. + """ + self._close_callbacks.append(f) + return f + + def close(self): + """Invokes all close callbacks.""" + for cb in self._close_callbacks: + cb() + self._close_callbacks = [] + + @property + def command_path(self): + """The computed command path. This is used for the ``usage`` + information on the help page. It's automatically created by + combining the info names of the chain of contexts to the root. 
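+
+ Example (illustrative; assumes a hypothetical ``db upgrade``
+ subcommand of a script named ``cli``)::
+
+ ctx.command_path # -> 'cli db upgrade'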
+ """ + rv = "" + if self.info_name is not None: + rv = self.info_name + if self.parent is not None: + rv = "{} {}".format(self.parent.command_path, rv) + return rv.lstrip() + + def find_root(self): + """Finds the outermost context.""" + node = self + while node.parent is not None: + node = node.parent + return node + + def find_object(self, object_type): + """Finds the closest object of a given type.""" + node = self + while node is not None: + if isinstance(node.obj, object_type): + return node.obj + node = node.parent + + def ensure_object(self, object_type): + """Like :meth:`find_object` but sets the innermost object to a + new instance of `object_type` if it does not exist. + """ + rv = self.find_object(object_type) + if rv is None: + self.obj = rv = object_type() + return rv + + def lookup_default(self, name): + """Looks up the default for a parameter name. This by default + looks into the :attr:`default_map` if available. + """ + if self.default_map is not None: + rv = self.default_map.get(name) + if callable(rv): + rv = rv() + return rv + + def fail(self, message): + """Aborts the execution of the program with a specific error + message. + + :param message: the error message to fail with. + """ + raise UsageError(message, self) + + def abort(self): + """Aborts the script.""" + raise Abort() + + def exit(self, code=0): + """Exits the application with a given exit code.""" + raise Exit(code) + + def get_usage(self): + """Helper method to get formatted usage string for the current + context and command. + """ + return self.command.get_usage(self) + + def get_help(self): + """Helper method to get formatted help page for the current + context and command. + """ + return self.command.get_help(self) + + def invoke(*args, **kwargs): # noqa: B902 + """Invokes a command callback in exactly the way it expects. There + are two ways to invoke this method: + + 1. the first argument can be a callback and all other arguments and + keyword arguments are forwarded directly to the function. + 2. the first argument is a click command object. In that case all + arguments are forwarded as well but proper click parameters + (options and click arguments) must be keyword arguments and Click + will fill in defaults. + + Note that before Click 3.2 keyword arguments were not properly filled + in against the intention of this code and no context was created. For + more information about this change and why it was done in a bugfix + release see :ref:`upgrade-to-3.2`. + """ + self, callback = args[:2] + ctx = self + + # It's also possible to invoke another command which might or + # might not have a callback. In that case we also fill + # in defaults and make a new context for this command. + if isinstance(callback, Command): + other_cmd = callback + callback = other_cmd.callback + ctx = Context(other_cmd, info_name=other_cmd.name, parent=self) + if callback is None: + raise TypeError( + "The given command does not have a callback that can be invoked." + ) + + for param in other_cmd.params: + if param.name not in kwargs and param.expose_value: + kwargs[param.name] = param.get_default(ctx) + + args = args[2:] + with augment_usage_errors(self): + with ctx: + return callback(*args, **kwargs) + + def forward(*args, **kwargs): # noqa: B902 + """Similar to :meth:`invoke` but fills in default keyword + arguments from the current context if the other command expects + it. This cannot invoke callbacks directly, only other commands. 
+ """ + self, cmd = args[:2] + + # It's also possible to invoke another command which might or + # might not have a callback. + if not isinstance(cmd, Command): + raise TypeError("Callback is not a command.") + + for param in self.params: + if param not in kwargs: + kwargs[param] = self.params[param] + + return self.invoke(cmd, **kwargs) + + +class BaseCommand(object): + """The base command implements the minimal API contract of commands. + Most code will never use this as it does not implement a lot of useful + functionality but it can act as the direct subclass of alternative + parsing methods that do not depend on the Click parser. + + For instance, this can be used to bridge Click and other systems like + argparse or docopt. + + Because base commands do not implement a lot of the API that other + parts of Click take for granted, they are not supported for all + operations. For instance, they cannot be used with the decorators + usually and they have no built-in callback system. + + .. versionchanged:: 2.0 + Added the `context_settings` parameter. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + """ + + #: the default for the :attr:`Context.allow_extra_args` flag. + allow_extra_args = False + #: the default for the :attr:`Context.allow_interspersed_args` flag. + allow_interspersed_args = True + #: the default for the :attr:`Context.ignore_unknown_options` flag. + ignore_unknown_options = False + + def __init__(self, name, context_settings=None): + #: the name the command thinks it has. Upon registering a command + #: on a :class:`Group` the group will default the command name + #: with this information. You should instead use the + #: :class:`Context`\'s :attr:`~Context.info_name` attribute. + self.name = name + if context_settings is None: + context_settings = {} + #: an optional dictionary with defaults passed to the context. + self.context_settings = context_settings + + def __repr__(self): + return "<{} {}>".format(self.__class__.__name__, self.name) + + def get_usage(self, ctx): + raise NotImplementedError("Base commands cannot get usage") + + def get_help(self, ctx): + raise NotImplementedError("Base commands cannot get help") + + def make_context(self, info_name, args, parent=None, **extra): + """This function when given an info name and arguments will kick + off the parsing and create a new :class:`Context`. It does not + invoke the actual command callback though. + + :param info_name: the info name for this invokation. Generally this + is the most descriptive name for the script or + command. For the toplevel script it's usually + the name of the script, for commands below it it's + the name of the script. + :param args: the arguments to parse as list of strings. + :param parent: the parent context if available. + :param extra: extra keyword arguments forwarded to the context + constructor. + """ + for key, value in iteritems(self.context_settings): + if key not in extra: + extra[key] = value + ctx = Context(self, info_name=info_name, parent=parent, **extra) + with ctx.scope(cleanup=False): + self.parse_args(ctx, args) + return ctx + + def parse_args(self, ctx, args): + """Given a context and a list of arguments this creates the parser + and parses the arguments, then modifies the context as necessary. + This is automatically invoked by :meth:`make_context`. 
+ """ + raise NotImplementedError("Base commands do not know how to parse arguments.") + + def invoke(self, ctx): + """Given a context, this invokes the command. The default + implementation is raising a not implemented error. + """ + raise NotImplementedError("Base commands are not invokable by default") + + def main( + self, + args=None, + prog_name=None, + complete_var=None, + standalone_mode=True, + **extra + ): + """This is the way to invoke a script with all the bells and + whistles as a command line application. This will always terminate + the application after a call. If this is not wanted, ``SystemExit`` + needs to be caught. + + This method is also available by directly calling the instance of + a :class:`Command`. + + .. versionadded:: 3.0 + Added the `standalone_mode` flag to control the standalone mode. + + :param args: the arguments that should be used for parsing. If not + provided, ``sys.argv[1:]`` is used. + :param prog_name: the program name that should be used. By default + the program name is constructed by taking the file + name from ``sys.argv[0]``. + :param complete_var: the environment variable that controls the + bash completion support. The default is + ``"__COMPLETE"`` with prog_name in + uppercase. + :param standalone_mode: the default behavior is to invoke the script + in standalone mode. Click will then + handle exceptions and convert them into + error messages and the function will never + return but shut down the interpreter. If + this is set to `False` they will be + propagated to the caller and the return + value of this function is the return value + of :meth:`invoke`. + :param extra: extra keyword arguments are forwarded to the context + constructor. See :class:`Context` for more information. + """ + # If we are in Python 3, we will verify that the environment is + # sane at this point or reject further execution to avoid a + # broken script. + if not PY2: + _verify_python3_env() + else: + _check_for_unicode_literals() + + if args is None: + args = get_os_args() + else: + args = list(args) + + if prog_name is None: + prog_name = make_str( + os.path.basename(sys.argv[0] if sys.argv else __file__) + ) + + # Hook for the Bash completion. This only activates if the Bash + # completion is actually enabled, otherwise this is quite a fast + # noop. + _bashcomplete(self, prog_name, complete_var) + + try: + try: + with self.make_context(prog_name, args, **extra) as ctx: + rv = self.invoke(ctx) + if not standalone_mode: + return rv + # it's not safe to `ctx.exit(rv)` here! 
+ # note that `rv` may actually contain data like "1" which + # has obvious effects + # more subtle case: `rv=[None, None]` can come out of + # chained commands which all returned `None` -- so it's not + # even always obvious that `rv` indicates success/failure + # by its truthiness/falsiness + ctx.exit() + except (EOFError, KeyboardInterrupt): + echo(file=sys.stderr) + raise Abort() + except ClickException as e: + if not standalone_mode: + raise + e.show() + sys.exit(e.exit_code) + except IOError as e: + if e.errno == errno.EPIPE: + sys.stdout = PacifyFlushWrapper(sys.stdout) + sys.stderr = PacifyFlushWrapper(sys.stderr) + sys.exit(1) + else: + raise + except Exit as e: + if standalone_mode: + sys.exit(e.exit_code) + else: + # in non-standalone mode, return the exit code + # note that this is only reached if `self.invoke` above raises + # an Exit explicitly -- thus bypassing the check there which + # would return its result + # the results of non-standalone execution may therefore be + # somewhat ambiguous: if there are codepaths which lead to + # `ctx.exit(1)` and to `return 1`, the caller won't be able to + # tell the difference between the two + return e.exit_code + except Abort: + if not standalone_mode: + raise + echo("Aborted!", file=sys.stderr) + sys.exit(1) + + def __call__(self, *args, **kwargs): + """Alias for :meth:`main`.""" + return self.main(*args, **kwargs) + + +class Command(BaseCommand): + """Commands are the basic building block of command line interfaces in + Click. A basic command handles command line parsing and might dispatch + more parsing to commands nested below it. + + .. versionchanged:: 2.0 + Added the `context_settings` parameter. + .. versionchanged:: 7.1 + Added the `no_args_is_help` parameter. + + :param name: the name of the command to use unless a group overrides it. + :param context_settings: an optional dictionary with defaults that are + passed to the context object. + :param callback: the callback to invoke. This is optional. + :param params: the parameters to register with this command. This can + be either :class:`Option` or :class:`Argument` objects. + :param help: the help string to use for this command. + :param epilog: like the help string but it's printed at the end of the + help page after everything else. + :param short_help: the short help to use for this command. This is + shown on the command listing of the parent command. + :param add_help_option: by default each command registers a ``--help`` + option. This can be disabled by this parameter. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is disabled by default. + If enabled this will add ``--help`` as argument + if no arguments are passed + :param hidden: hide this command from help outputs. + + :param deprecated: issues a message indicating that + the command is deprecated. + """ + + def __init__( + self, + name, + context_settings=None, + callback=None, + params=None, + help=None, + epilog=None, + short_help=None, + options_metavar="[OPTIONS]", + add_help_option=True, + no_args_is_help=False, + hidden=False, + deprecated=False, + ): + BaseCommand.__init__(self, name, context_settings) + #: the callback to execute when the command fires. This might be + #: `None` in which case nothing happens. + self.callback = callback + #: the list of parameters for this command in the order they + #: should show up in the help page and execute. Eager parameters + #: will automatically be handled before non eager ones. 
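+ #: Illustrative (added comment): a flag such as
+ #: ``Option(["--version"], is_flag=True, is_eager=True,
+ #: expose_value=False, callback=print_version)`` (with a
+ #: hypothetical ``print_version`` callback) is processed before all
+ #: non-eager parameters.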
+ self.params = params or [] + # if a form feed (page break) is found in the help text, truncate help + # text to the content preceding the first form feed + if help and "\f" in help: + help = help.split("\f", 1)[0] + self.help = help + self.epilog = epilog + self.options_metavar = options_metavar + self.short_help = short_help + self.add_help_option = add_help_option + self.no_args_is_help = no_args_is_help + self.hidden = hidden + self.deprecated = deprecated + + def get_usage(self, ctx): + """Formats the usage line into a string and returns it. + + Calls :meth:`format_usage` internally. + """ + formatter = ctx.make_formatter() + self.format_usage(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_params(self, ctx): + rv = self.params + help_option = self.get_help_option(ctx) + if help_option is not None: + rv = rv + [help_option] + return rv + + def format_usage(self, ctx, formatter): + """Writes the usage line into the formatter. + + This is a low-level method called by :meth:`get_usage`. + """ + pieces = self.collect_usage_pieces(ctx) + formatter.write_usage(ctx.command_path, " ".join(pieces)) + + def collect_usage_pieces(self, ctx): + """Returns all the pieces that go into the usage line and returns + it as a list of strings. + """ + rv = [self.options_metavar] + for param in self.get_params(ctx): + rv.extend(param.get_usage_pieces(ctx)) + return rv + + def get_help_option_names(self, ctx): + """Returns the names for the help option.""" + all_names = set(ctx.help_option_names) + for param in self.params: + all_names.difference_update(param.opts) + all_names.difference_update(param.secondary_opts) + return all_names + + def get_help_option(self, ctx): + """Returns the help option object.""" + help_options = self.get_help_option_names(ctx) + if not help_options or not self.add_help_option: + return + + def show_help(ctx, param, value): + if value and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + return Option( + help_options, + is_flag=True, + is_eager=True, + expose_value=False, + callback=show_help, + help="Show this message and exit.", + ) + + def make_parser(self, ctx): + """Creates the underlying option parser for this command.""" + parser = OptionParser(ctx) + for param in self.get_params(ctx): + param.add_to_parser(parser, ctx) + return parser + + def get_help(self, ctx): + """Formats the help into a string and returns it. + + Calls :meth:`format_help` internally. + """ + formatter = ctx.make_formatter() + self.format_help(ctx, formatter) + return formatter.getvalue().rstrip("\n") + + def get_short_help_str(self, limit=45): + """Gets short help for the command or makes it by shortening the + long help string. + """ + return ( + self.short_help + or self.help + and make_default_short_help(self.help, limit) + or "" + ) + + def format_help(self, ctx, formatter): + """Writes the help into the formatter if it exists. + + This is a low-level method called by :meth:`get_help`. 
+ + This calls the following methods: + + - :meth:`format_usage` + - :meth:`format_help_text` + - :meth:`format_options` + - :meth:`format_epilog` + """ + self.format_usage(ctx, formatter) + self.format_help_text(ctx, formatter) + self.format_options(ctx, formatter) + self.format_epilog(ctx, formatter) + + def format_help_text(self, ctx, formatter): + """Writes the help text to the formatter if it exists.""" + if self.help: + formatter.write_paragraph() + with formatter.indentation(): + help_text = self.help + if self.deprecated: + help_text += DEPRECATED_HELP_NOTICE + formatter.write_text(help_text) + elif self.deprecated: + formatter.write_paragraph() + with formatter.indentation(): + formatter.write_text(DEPRECATED_HELP_NOTICE) + + def format_options(self, ctx, formatter): + """Writes all the options into the formatter if they exist.""" + opts = [] + for param in self.get_params(ctx): + rv = param.get_help_record(ctx) + if rv is not None: + opts.append(rv) + + if opts: + with formatter.section("Options"): + formatter.write_dl(opts) + + def format_epilog(self, ctx, formatter): + """Writes the epilog into the formatter if it exists.""" + if self.epilog: + formatter.write_paragraph() + with formatter.indentation(): + formatter.write_text(self.epilog) + + def parse_args(self, ctx, args): + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + parser = self.make_parser(ctx) + opts, args, param_order = parser.parse_args(args=args) + + for param in iter_params_for_processing(param_order, self.get_params(ctx)): + value, args = param.handle_parse_result(ctx, opts, args) + + if args and not ctx.allow_extra_args and not ctx.resilient_parsing: + ctx.fail( + "Got unexpected extra argument{} ({})".format( + "s" if len(args) != 1 else "", " ".join(map(make_str, args)) + ) + ) + + ctx.args = args + return args + + def invoke(self, ctx): + """Given a context, this invokes the attached callback (if it exists) + in the right way. + """ + _maybe_show_deprecated_notice(self) + if self.callback is not None: + return ctx.invoke(self.callback, **ctx.params) + + +class MultiCommand(Command): + """A multi command is the basic implementation of a command that + dispatches to subcommands. The most common version is the + :class:`Group`. + + :param invoke_without_command: this controls how the multi command itself + is invoked. By default it's only invoked + if a subcommand is provided. + :param no_args_is_help: this controls what happens if no arguments are + provided. This option is enabled by default if + `invoke_without_command` is disabled or disabled + if it's enabled. If enabled this will add + ``--help`` as argument if no arguments are + passed. + :param subcommand_metavar: the string that is used in the documentation + to indicate the subcommand place. + :param chain: if this is set to `True` chaining of multiple subcommands + is enabled. This restricts the form of commands in that + they cannot have optional arguments but it allows + multiple commands to be chained together. + :param result_callback: the result callback to attach to this multi + command. 
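+
+ Example (illustrative; ``build`` and ``test`` are hypothetical
+ subcommands)::
+
+ @click.group(chain=True)
+ def cli():
+ ...
+
+ # enables invocations such as: cli build test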
+ """ + + allow_extra_args = True + allow_interspersed_args = False + + def __init__( + self, + name=None, + invoke_without_command=False, + no_args_is_help=None, + subcommand_metavar=None, + chain=False, + result_callback=None, + **attrs + ): + Command.__init__(self, name, **attrs) + if no_args_is_help is None: + no_args_is_help = not invoke_without_command + self.no_args_is_help = no_args_is_help + self.invoke_without_command = invoke_without_command + if subcommand_metavar is None: + if chain: + subcommand_metavar = SUBCOMMANDS_METAVAR + else: + subcommand_metavar = SUBCOMMAND_METAVAR + self.subcommand_metavar = subcommand_metavar + self.chain = chain + #: The result callback that is stored. This can be set or + #: overridden with the :func:`resultcallback` decorator. + self.result_callback = result_callback + + if self.chain: + for param in self.params: + if isinstance(param, Argument) and not param.required: + raise RuntimeError( + "Multi commands in chain mode cannot have" + " optional arguments." + ) + + def collect_usage_pieces(self, ctx): + rv = Command.collect_usage_pieces(self, ctx) + rv.append(self.subcommand_metavar) + return rv + + def format_options(self, ctx, formatter): + Command.format_options(self, ctx, formatter) + self.format_commands(ctx, formatter) + + def resultcallback(self, replace=False): + """Adds a result callback to the chain command. By default if a + result callback is already registered this will chain them but + this can be disabled with the `replace` parameter. The result + callback is invoked with the return value of the subcommand + (or the list of return values from all subcommands if chaining + is enabled) as well as the parameters as they would be passed + to the main callback. + + Example:: + + @click.group() + @click.option('-i', '--input', default=23) + def cli(input): + return 42 + + @cli.resultcallback() + def process_result(result, input): + return result + input + + .. versionadded:: 3.0 + + :param replace: if set to `True` an already existing result + callback will be removed. + """ + + def decorator(f): + old_callback = self.result_callback + if old_callback is None or replace: + self.result_callback = f + return f + + def function(__value, *args, **kwargs): + return f(old_callback(__value, *args, **kwargs), *args, **kwargs) + + self.result_callback = rv = update_wrapper(function, f) + return rv + + return decorator + + def format_commands(self, ctx, formatter): + """Extra format methods for multi methods that adds all the commands + after the options. + """ + commands = [] + for subcommand in self.list_commands(ctx): + cmd = self.get_command(ctx, subcommand) + # What is this, the tool lied about a command. 
Ignore it + if cmd is None: + continue + if cmd.hidden: + continue + + commands.append((subcommand, cmd)) + + # allow for 3 times the default spacing + if len(commands): + limit = formatter.width - 6 - max(len(cmd[0]) for cmd in commands) + + rows = [] + for subcommand, cmd in commands: + help = cmd.get_short_help_str(limit) + rows.append((subcommand, help)) + + if rows: + with formatter.section("Commands"): + formatter.write_dl(rows) + + def parse_args(self, ctx, args): + if not args and self.no_args_is_help and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + rest = Command.parse_args(self, ctx, args) + if self.chain: + ctx.protected_args = rest + ctx.args = [] + elif rest: + ctx.protected_args, ctx.args = rest[:1], rest[1:] + + return ctx.args + + def invoke(self, ctx): + def _process_result(value): + if self.result_callback is not None: + value = ctx.invoke(self.result_callback, value, **ctx.params) + return value + + if not ctx.protected_args: + # If we are invoked without command the chain flag controls + # how this happens. If we are not in chain mode, the return + # value here is the return value of the command. + # If however we are in chain mode, the return value is the + # return value of the result processor invoked with an empty + # list (which means that no subcommand actually was executed). + if self.invoke_without_command: + if not self.chain: + return Command.invoke(self, ctx) + with ctx: + Command.invoke(self, ctx) + return _process_result([]) + ctx.fail("Missing command.") + + # Fetch args back out + args = ctx.protected_args + ctx.args + ctx.args = [] + ctx.protected_args = [] + + # If we're not in chain mode, we only allow the invocation of a + # single command but we also inform the current context about the + # name of the command to invoke. + if not self.chain: + # Make sure the context is entered so we do not clean up + # resources until the result processor has worked. + with ctx: + cmd_name, cmd, args = self.resolve_command(ctx, args) + ctx.invoked_subcommand = cmd_name + Command.invoke(self, ctx) + sub_ctx = cmd.make_context(cmd_name, args, parent=ctx) + with sub_ctx: + return _process_result(sub_ctx.command.invoke(sub_ctx)) + + # In chain mode we create the contexts step by step, but after the + # base command has been invoked. Because at that point we do not + # know the subcommands yet, the invoked subcommand attribute is + # set to ``*`` to inform the command that subcommands are executed + # but nothing else. + with ctx: + ctx.invoked_subcommand = "*" if args else None + Command.invoke(self, ctx) + + # Otherwise we make every single context and invoke them in a + # chain. In that case the return value to the result processor + # is the list of all invoked subcommand's results. + contexts = [] + while args: + cmd_name, cmd, args = self.resolve_command(ctx, args) + sub_ctx = cmd.make_context( + cmd_name, + args, + parent=ctx, + allow_extra_args=True, + allow_interspersed_args=False, + ) + contexts.append(sub_ctx) + args, sub_ctx.args = sub_ctx.args, [] + + rv = [] + for sub_ctx in contexts: + with sub_ctx: + rv.append(sub_ctx.command.invoke(sub_ctx)) + return _process_result(rv) + + def resolve_command(self, ctx, args): + cmd_name = make_str(args[0]) + original_cmd_name = cmd_name + + # Get the command + cmd = self.get_command(ctx, cmd_name) + + # If we can't find the command but there is a normalization + # function available, we try with that one. 
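+ # Illustrative (added comment): a context created with
+ # ``token_normalize_func=lambda s: s.lower()`` lets the retry below
+ # resolve "SubCmd" to a command registered as "subcmd".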
+ if cmd is None and ctx.token_normalize_func is not None: + cmd_name = ctx.token_normalize_func(cmd_name) + cmd = self.get_command(ctx, cmd_name) + + # If we don't find the command we want to show an error message + # to the user that it was not provided. However, there is + # something else we should do: if the first argument looks like + # an option we want to kick off parsing again for arguments to + # resolve things like --help which now should go to the main + # place. + if cmd is None and not ctx.resilient_parsing: + if split_opt(cmd_name)[0]: + self.parse_args(ctx, ctx.args) + ctx.fail("No such command '{}'.".format(original_cmd_name)) + + return cmd_name, cmd, args[1:] + + def get_command(self, ctx, cmd_name): + """Given a context and a command name, this returns a + :class:`Command` object if it exists or returns `None`. + """ + raise NotImplementedError() + + def list_commands(self, ctx): + """Returns a list of subcommand names in the order they should + appear. + """ + return [] + + +class Group(MultiCommand): + """A group allows a command to have subcommands attached. This is the + most common way to implement nesting in Click. + + :param commands: a dictionary of commands. + """ + + def __init__(self, name=None, commands=None, **attrs): + MultiCommand.__init__(self, name, **attrs) + #: the registered subcommands by their exported names. + self.commands = commands or {} + + def add_command(self, cmd, name=None): + """Registers another :class:`Command` with this group. If the name + is not provided, the name of the command is used. + """ + name = name or cmd.name + if name is None: + raise TypeError("Command has no name.") + _check_multicommand(self, name, cmd, register=True) + self.commands[name] = cmd + + def command(self, *args, **kwargs): + """A shortcut decorator for declaring and attaching a command to + the group. This takes the same arguments as :func:`command` but + immediately registers the created command with this instance by + calling into :meth:`add_command`. + """ + from .decorators import command + + def decorator(f): + cmd = command(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + return decorator + + def group(self, *args, **kwargs): + """A shortcut decorator for declaring and attaching a group to + the group. This takes the same arguments as :func:`group` but + immediately registers the created command with this instance by + calling into :meth:`add_command`. + """ + from .decorators import group + + def decorator(f): + cmd = group(*args, **kwargs)(f) + self.add_command(cmd) + return cmd + + return decorator + + def get_command(self, ctx, cmd_name): + return self.commands.get(cmd_name) + + def list_commands(self, ctx): + return sorted(self.commands) + + +class CommandCollection(MultiCommand): + """A command collection is a multi command that merges multiple multi + commands together into one. This is a straightforward implementation + that accepts a list of different multi commands as sources and + provides all the commands for each of them. + """ + + def __init__(self, name=None, sources=None, **attrs): + MultiCommand.__init__(self, name, **attrs) + #: The list of registered multi commands. 
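+ #: Illustrative (added comment): merging two hypothetical groups
+ #: into one CLI: ``cli = CommandCollection(sources=[group_a, group_b])``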
+ self.sources = sources or [] + + def add_source(self, multi_cmd): + """Adds a new multi command to the chain dispatcher.""" + self.sources.append(multi_cmd) + + def get_command(self, ctx, cmd_name): + for source in self.sources: + rv = source.get_command(ctx, cmd_name) + if rv is not None: + if self.chain: + _check_multicommand(self, cmd_name, rv) + return rv + + def list_commands(self, ctx): + rv = set() + for source in self.sources: + rv.update(source.list_commands(ctx)) + return sorted(rv) + + +class Parameter(object): + r"""A parameter to a command comes in two versions: they are either + :class:`Option`\s or :class:`Argument`\s. Other subclasses are currently + not supported by design as some of the internals for parsing are + intentionally not finalized. + + Some settings are supported by both options and arguments. + + :param param_decls: the parameter declarations for this option or + argument. This is a list of flags or argument + names. + :param type: the type that should be used. Either a :class:`ParamType` + or a Python type. The later is converted into the former + automatically if supported. + :param required: controls if this is optional or not. + :param default: the default value if omitted. This can also be a callable, + in which case it's invoked when the default is needed + without any arguments. + :param callback: a callback that should be executed after the parameter + was matched. This is called as ``fn(ctx, param, + value)`` and needs to return the value. + :param nargs: the number of arguments to match. If not ``1`` the return + value is a tuple instead of single value. The default for + nargs is ``1`` (except if the type is a tuple, then it's + the arity of the tuple). + :param metavar: how the value is represented in the help page. + :param expose_value: if this is `True` then the value is passed onwards + to the command callback and stored on the context, + otherwise it's skipped. + :param is_eager: eager values are processed before non eager ones. This + should not be set for arguments or it will inverse the + order of processing. + :param envvar: a string or list of strings that are environment variables + that should be checked. + + .. versionchanged:: 7.1 + Empty environment variables are ignored rather than taking the + empty string value. This makes it possible for scripts to clear + variables if they can't unset them. + + .. versionchanged:: 2.0 + Changed signature for parameter callback to also be passed the + parameter. The old callback format will still work, but it will + raise a warning to give you a chance to migrate the code easier. + """ + param_type_name = "parameter" + + def __init__( + self, + param_decls=None, + type=None, + required=False, + default=None, + callback=None, + nargs=None, + metavar=None, + expose_value=True, + is_eager=False, + envvar=None, + autocompletion=None, + ): + self.name, self.opts, self.secondary_opts = self._parse_decls( + param_decls or (), expose_value + ) + + self.type = convert_type(type, default) + + # Default nargs to what the type tells us if we have that + # information available. 
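+ # Illustrative (added comment): a parameter declared with
+ # ``type=(str, int)`` is converted to a composite Tuple type of
+ # arity 2, so ``nargs`` defaults to 2 below.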
+ if nargs is None: + if self.type.is_composite: + nargs = self.type.arity + else: + nargs = 1 + + self.required = required + self.callback = callback + self.nargs = nargs + self.multiple = False + self.expose_value = expose_value + self.default = default + self.is_eager = is_eager + self.metavar = metavar + self.envvar = envvar + self.autocompletion = autocompletion + + def __repr__(self): + return "<{} {}>".format(self.__class__.__name__, self.name) + + @property + def human_readable_name(self): + """Returns the human readable name of this parameter. This is the + same as the name for options, but the metavar for arguments. + """ + return self.name + + def make_metavar(self): + if self.metavar is not None: + return self.metavar + metavar = self.type.get_metavar(self) + if metavar is None: + metavar = self.type.name.upper() + if self.nargs != 1: + metavar += "..." + return metavar + + def get_default(self, ctx): + """Given a context variable this calculates the default value.""" + # Otherwise go with the regular default. + if callable(self.default): + rv = self.default() + else: + rv = self.default + return self.type_cast_value(ctx, rv) + + def add_to_parser(self, parser, ctx): + pass + + def consume_value(self, ctx, opts): + value = opts.get(self.name) + if value is None: + value = self.value_from_envvar(ctx) + if value is None: + value = ctx.lookup_default(self.name) + return value + + def type_cast_value(self, ctx, value): + """Given a value this runs it properly through the type system. + This automatically handles things like `nargs` and `multiple` as + well as composite types. + """ + if self.type.is_composite: + if self.nargs <= 1: + raise TypeError( + "Attempted to invoke composite type but nargs has" + " been set to {}. This is not supported; nargs" + " needs to be set to a fixed value > 1.".format(self.nargs) + ) + if self.multiple: + return tuple(self.type(x or (), self, ctx) for x in value or ()) + return self.type(value or (), self, ctx) + + def _convert(value, level): + if level == 0: + return self.type(value, self, ctx) + return tuple(_convert(x, level - 1) for x in value or ()) + + return _convert(value, (self.nargs != 1) + bool(self.multiple)) + + def process_value(self, ctx, value): + """Given a value and context this runs the logic to convert the + value as necessary. + """ + # If the value we were given is None we do nothing. This way + # code that calls this can easily figure out if something was + # not provided. Otherwise it would be converted into an empty + # tuple for multiple invocations which is inconvenient. 
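+        # (A missing value stays None here; the default, if any, is
+        # applied later by full_process_value.)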
+ if value is not None: + return self.type_cast_value(ctx, value) + + def value_is_missing(self, value): + if value is None: + return True + if (self.nargs != 1 or self.multiple) and value == (): + return True + return False + + def full_process_value(self, ctx, value): + value = self.process_value(ctx, value) + + if value is None and not ctx.resilient_parsing: + value = self.get_default(ctx) + + if self.required and self.value_is_missing(value): + raise MissingParameter(ctx=ctx, param=self) + + return value + + def resolve_envvar_value(self, ctx): + if self.envvar is None: + return + if isinstance(self.envvar, (tuple, list)): + for envvar in self.envvar: + rv = os.environ.get(envvar) + if rv is not None: + return rv + else: + rv = os.environ.get(self.envvar) + + if rv != "": + return rv + + def value_from_envvar(self, ctx): + rv = self.resolve_envvar_value(ctx) + if rv is not None and self.nargs != 1: + rv = self.type.split_envvar_value(rv) + return rv + + def handle_parse_result(self, ctx, opts, args): + with augment_usage_errors(ctx, param=self): + value = self.consume_value(ctx, opts) + try: + value = self.full_process_value(ctx, value) + except Exception: + if not ctx.resilient_parsing: + raise + value = None + if self.callback is not None: + try: + value = invoke_param_callback(self.callback, ctx, self, value) + except Exception: + if not ctx.resilient_parsing: + raise + + if self.expose_value: + ctx.params[self.name] = value + return value, args + + def get_help_record(self, ctx): + pass + + def get_usage_pieces(self, ctx): + return [] + + def get_error_hint(self, ctx): + """Get a stringified version of the param for use in error messages to + indicate which param caused the error. + """ + hint_list = self.opts or [self.human_readable_name] + return " / ".join(repr(x) for x in hint_list) + + +class Option(Parameter): + """Options are usually optional values on the command line and + have some extra features that arguments don't have. + + All other parameters are passed onwards to the parameter constructor. + + :param show_default: controls if the default value should be shown on the + help page. Normally, defaults are not shown. If this + value is a string, it shows the string instead of the + value. This is particularly useful for dynamic options. + :param show_envvar: controls if an environment variable should be shown on + the help page. Normally, environment variables + are not shown. + :param prompt: if set to `True` or a non empty string then the user will be + prompted for input. If set to `True` the prompt will be the + option name capitalized. + :param confirmation_prompt: if set then the value will need to be confirmed + if it was prompted for. + :param hide_input: if this is `True` then the input on the prompt will be + hidden from the user. This is useful for password + input. + :param is_flag: forces this option to act as a flag. The default is + auto detection. + :param flag_value: which value should be used for this flag if it's + enabled. This is set to a boolean automatically if + the option string contains a slash to mark two options. + :param multiple: if this is set to `True` then the argument is accepted + multiple times and recorded. This is similar to ``nargs`` + in how it works but supports arbitrary number of + arguments. + :param count: this flag makes an option increment an integer. + :param allow_from_autoenv: if this is enabled then the value of this + parameter will be pulled from an environment + variable in case a prefix is defined on the + context. 
+ :param help: the help string. + :param hidden: hide this option from help outputs. + """ + + param_type_name = "option" + + def __init__( + self, + param_decls=None, + show_default=False, + prompt=False, + confirmation_prompt=False, + hide_input=False, + is_flag=None, + flag_value=None, + multiple=False, + count=False, + allow_from_autoenv=True, + type=None, + help=None, + hidden=False, + show_choices=True, + show_envvar=False, + **attrs + ): + default_is_missing = attrs.get("default", _missing) is _missing + Parameter.__init__(self, param_decls, type=type, **attrs) + + if prompt is True: + prompt_text = self.name.replace("_", " ").capitalize() + elif prompt is False: + prompt_text = None + else: + prompt_text = prompt + self.prompt = prompt_text + self.confirmation_prompt = confirmation_prompt + self.hide_input = hide_input + self.hidden = hidden + + # Flags + if is_flag is None: + if flag_value is not None: + is_flag = True + else: + is_flag = bool(self.secondary_opts) + if is_flag and default_is_missing: + self.default = False + if flag_value is None: + flag_value = not self.default + self.is_flag = is_flag + self.flag_value = flag_value + if self.is_flag and isinstance(self.flag_value, bool) and type in [None, bool]: + self.type = BOOL + self.is_bool_flag = True + else: + self.is_bool_flag = False + + # Counting + self.count = count + if count: + if type is None: + self.type = IntRange(min=0) + if default_is_missing: + self.default = 0 + + self.multiple = multiple + self.allow_from_autoenv = allow_from_autoenv + self.help = help + self.show_default = show_default + self.show_choices = show_choices + self.show_envvar = show_envvar + + # Sanity check for stuff we don't support + if __debug__: + if self.nargs < 0: + raise TypeError("Options cannot have nargs < 0") + if self.prompt and self.is_flag and not self.is_bool_flag: + raise TypeError("Cannot prompt for flags that are not bools.") + if not self.is_bool_flag and self.secondary_opts: + raise TypeError("Got secondary option for non boolean flag.") + if self.is_bool_flag and self.hide_input and self.prompt is not None: + raise TypeError("Hidden input does not work with boolean flag prompts.") + if self.count: + if self.multiple: + raise TypeError( + "Options cannot be multiple and count at the same time." + ) + elif self.is_flag: + raise TypeError( + "Options cannot be count and flags at the same time." + ) + + def _parse_decls(self, decls, expose_value): + opts = [] + secondary_opts = [] + name = None + possible_names = [] + + for decl in decls: + if isidentifier(decl): + if name is not None: + raise TypeError("Name defined twice") + name = decl + else: + split_char = ";" if decl[:1] == "/" else "/" + if split_char in decl: + first, second = decl.split(split_char, 1) + first = first.rstrip() + if first: + possible_names.append(split_opt(first)) + opts.append(first) + second = second.lstrip() + if second: + secondary_opts.append(second.lstrip()) + else: + possible_names.append(split_opt(decl)) + opts.append(decl) + + if name is None and possible_names: + possible_names.sort(key=lambda x: -len(x[0])) # group long options first + name = possible_names[0][1].replace("-", "_").lower() + if not isidentifier(name): + name = None + + if name is None: + if not expose_value: + return None, opts, secondary_opts + raise TypeError("Could not determine name for option") + + if not opts and not secondary_opts: + raise TypeError( + "No options defined but a name was passed ({}). 
Did you"
+                " mean to declare an argument instead of an option?".format(name)
+            )
+
+        return name, opts, secondary_opts
+
+    def add_to_parser(self, parser, ctx):
+        kwargs = {
+            "dest": self.name,
+            "nargs": self.nargs,
+            "obj": self,
+        }
+
+        if self.multiple:
+            action = "append"
+        elif self.count:
+            action = "count"
+        else:
+            action = "store"
+
+        if self.is_flag:
+            kwargs.pop("nargs", None)
+            action_const = "{}_const".format(action)
+            if self.is_bool_flag and self.secondary_opts:
+                parser.add_option(self.opts, action=action_const, const=True, **kwargs)
+                parser.add_option(
+                    self.secondary_opts, action=action_const, const=False, **kwargs
+                )
+            else:
+                parser.add_option(
+                    self.opts, action=action_const, const=self.flag_value, **kwargs
+                )
+        else:
+            kwargs["action"] = action
+            parser.add_option(self.opts, **kwargs)
+
+    def get_help_record(self, ctx):
+        if self.hidden:
+            return
+        any_prefix_is_slash = []
+
+        def _write_opts(opts):
+            rv, any_slashes = join_options(opts)
+            if any_slashes:
+                any_prefix_is_slash[:] = [True]
+            if not self.is_flag and not self.count:
+                rv += " {}".format(self.make_metavar())
+            return rv
+
+        rv = [_write_opts(self.opts)]
+        if self.secondary_opts:
+            rv.append(_write_opts(self.secondary_opts))
+
+        help = self.help or ""
+        extra = []
+        if self.show_envvar:
+            envvar = self.envvar
+            if envvar is None:
+                if self.allow_from_autoenv and ctx.auto_envvar_prefix is not None:
+                    envvar = "{}_{}".format(ctx.auto_envvar_prefix, self.name.upper())
+            if envvar is not None:
+                extra.append(
+                    "env var: {}".format(
+                        ", ".join(str(d) for d in envvar)
+                        if isinstance(envvar, (list, tuple))
+                        else envvar
+                    )
+                )
+        if self.default is not None and (self.show_default or ctx.show_default):
+            if isinstance(self.show_default, string_types):
+                default_string = "({})".format(self.show_default)
+            elif isinstance(self.default, (list, tuple)):
+                default_string = ", ".join(str(d) for d in self.default)
+            elif inspect.isfunction(self.default):
+                default_string = "(dynamic)"
+            else:
+                default_string = self.default
+            extra.append("default: {}".format(default_string))
+
+        if self.required:
+            extra.append("required")
+        if extra:
+            help = "{}[{}]".format(
+                "{}  ".format(help) if help else "", "; ".join(extra)
+            )
+
+        return ("; " if any_prefix_is_slash else " / ").join(rv), help
+
+    def get_default(self, ctx):
+        # If we're a non boolean flag our default is more complex because
+        # we need to look at all flags in the same group to figure out
+        # if we're the default one in which case we return the flag
+        # value as default.
+        if self.is_flag and not self.is_bool_flag:
+            for param in ctx.command.params:
+                if param.name == self.name and param.default:
+                    return param.flag_value
+            return None
+        return Parameter.get_default(self, ctx)
+
+    def prompt_for_value(self, ctx):
+        """This is an alternative flow that can be activated in the full
+        value processing if a value does not exist. It will prompt the
+        user until a valid value exists and then returns the processed
+        value as result.
+        """
+        # Calculate the default before prompting anything to be stable.
+        default = self.get_default(ctx)
+
+        # If this is a prompt for a flag we need to handle this
+        # differently.
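+        # Boolean flags turn into a yes/no confirmation; everything else
+        # goes through the regular typed prompt below.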
+ if self.is_bool_flag: + return confirm(self.prompt, default) + + return prompt( + self.prompt, + default=default, + type=self.type, + hide_input=self.hide_input, + show_choices=self.show_choices, + confirmation_prompt=self.confirmation_prompt, + value_proc=lambda x: self.process_value(ctx, x), + ) + + def resolve_envvar_value(self, ctx): + rv = Parameter.resolve_envvar_value(self, ctx) + if rv is not None: + return rv + if self.allow_from_autoenv and ctx.auto_envvar_prefix is not None: + envvar = "{}_{}".format(ctx.auto_envvar_prefix, self.name.upper()) + return os.environ.get(envvar) + + def value_from_envvar(self, ctx): + rv = self.resolve_envvar_value(ctx) + if rv is None: + return None + value_depth = (self.nargs != 1) + bool(self.multiple) + if value_depth > 0 and rv is not None: + rv = self.type.split_envvar_value(rv) + if self.multiple and self.nargs != 1: + rv = batch(rv, self.nargs) + return rv + + def full_process_value(self, ctx, value): + if value is None and self.prompt is not None and not ctx.resilient_parsing: + return self.prompt_for_value(ctx) + return Parameter.full_process_value(self, ctx, value) + + +class Argument(Parameter): + """Arguments are positional parameters to a command. They generally + provide fewer features than options but can have infinite ``nargs`` + and are required by default. + + All parameters are passed onwards to the parameter constructor. + """ + + param_type_name = "argument" + + def __init__(self, param_decls, required=None, **attrs): + if required is None: + if attrs.get("default") is not None: + required = False + else: + required = attrs.get("nargs", 1) > 0 + Parameter.__init__(self, param_decls, required=required, **attrs) + if self.default is not None and self.nargs < 0: + raise TypeError( + "nargs=-1 in combination with a default value is not supported." + ) + + @property + def human_readable_name(self): + if self.metavar is not None: + return self.metavar + return self.name.upper() + + def make_metavar(self): + if self.metavar is not None: + return self.metavar + var = self.type.get_metavar(self) + if not var: + var = self.name.upper() + if not self.required: + var = "[{}]".format(var) + if self.nargs != 1: + var += "..." 
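+        # e.g. an optional variadic argument "files" renders as
+        # "[FILES]...", a required single one simply as "FILES".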
+ return var + + def _parse_decls(self, decls, expose_value): + if not decls: + if not expose_value: + return None, [], [] + raise TypeError("Could not determine name for argument") + if len(decls) == 1: + name = arg = decls[0] + name = name.replace("-", "_").lower() + else: + raise TypeError( + "Arguments take exactly one parameter declaration, got" + " {}".format(len(decls)) + ) + return name, [arg], [] + + def get_usage_pieces(self, ctx): + return [self.make_metavar()] + + def get_error_hint(self, ctx): + return repr(self.make_metavar()) + + def add_to_parser(self, parser, ctx): + parser.add_argument(dest=self.name, nargs=self.nargs, obj=self) diff --git a/openpype/vendor/python/python_2/click/decorators.py b/openpype/vendor/python/python_2/click/decorators.py new file mode 100644 index 0000000000..c7b5af6cc5 --- /dev/null +++ b/openpype/vendor/python/python_2/click/decorators.py @@ -0,0 +1,333 @@ +import inspect +import sys +from functools import update_wrapper + +from ._compat import iteritems +from ._unicodefun import _check_for_unicode_literals +from .core import Argument +from .core import Command +from .core import Group +from .core import Option +from .globals import get_current_context +from .utils import echo + + +def pass_context(f): + """Marks a callback as wanting to receive the current context + object as first argument. + """ + + def new_func(*args, **kwargs): + return f(get_current_context(), *args, **kwargs) + + return update_wrapper(new_func, f) + + +def pass_obj(f): + """Similar to :func:`pass_context`, but only pass the object on the + context onwards (:attr:`Context.obj`). This is useful if that object + represents the state of a nested system. + """ + + def new_func(*args, **kwargs): + return f(get_current_context().obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + +def make_pass_decorator(object_type, ensure=False): + """Given an object type this creates a decorator that will work + similar to :func:`pass_obj` but instead of passing the object of the + current context, it will find the innermost context of type + :func:`object_type`. + + This generates a decorator that works roughly like this:: + + from functools import update_wrapper + + def decorator(f): + @pass_context + def new_func(ctx, *args, **kwargs): + obj = ctx.find_object(object_type) + return ctx.invoke(f, obj, *args, **kwargs) + return update_wrapper(new_func, f) + return decorator + + :param object_type: the type of the object to pass. + :param ensure: if set to `True`, a new object will be created and + remembered on the context if it's not there yet. 
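+
+    For instance, given some application state class ``Repo``, the
+    generated decorator could be used like this::
+
+        pass_repo = make_pass_decorator(Repo, ensure=True)
+
+        @click.command()
+        @pass_repo
+        def show(repo):
+            click.echo(repo)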
+ """ + + def decorator(f): + def new_func(*args, **kwargs): + ctx = get_current_context() + if ensure: + obj = ctx.ensure_object(object_type) + else: + obj = ctx.find_object(object_type) + if obj is None: + raise RuntimeError( + "Managed to invoke callback without a context" + " object of type '{}' existing".format(object_type.__name__) + ) + return ctx.invoke(f, obj, *args, **kwargs) + + return update_wrapper(new_func, f) + + return decorator + + +def _make_command(f, name, attrs, cls): + if isinstance(f, Command): + raise TypeError("Attempted to convert a callback into a command twice.") + try: + params = f.__click_params__ + params.reverse() + del f.__click_params__ + except AttributeError: + params = [] + help = attrs.get("help") + if help is None: + help = inspect.getdoc(f) + if isinstance(help, bytes): + help = help.decode("utf-8") + else: + help = inspect.cleandoc(help) + attrs["help"] = help + _check_for_unicode_literals() + return cls( + name=name or f.__name__.lower().replace("_", "-"), + callback=f, + params=params, + **attrs + ) + + +def command(name=None, cls=None, **attrs): + r"""Creates a new :class:`Command` and uses the decorated function as + callback. This will also automatically attach all decorated + :func:`option`\s and :func:`argument`\s as parameters to the command. + + The name of the command defaults to the name of the function with + underscores replaced by dashes. If you want to change that, you can + pass the intended name as the first argument. + + All keyword arguments are forwarded to the underlying command class. + + Once decorated the function turns into a :class:`Command` instance + that can be invoked as a command line utility or be attached to a + command :class:`Group`. + + :param name: the name of the command. This defaults to the function + name with underscores replaced by dashes. + :param cls: the command class to instantiate. This defaults to + :class:`Command`. + """ + if cls is None: + cls = Command + + def decorator(f): + cmd = _make_command(f, name, attrs, cls) + cmd.__doc__ = f.__doc__ + return cmd + + return decorator + + +def group(name=None, **attrs): + """Creates a new :class:`Group` with a function as callback. This + works otherwise the same as :func:`command` just that the `cls` + parameter is set to :class:`Group`. + """ + attrs.setdefault("cls", Group) + return command(name, **attrs) + + +def _param_memo(f, param): + if isinstance(f, Command): + f.params.append(param) + else: + if not hasattr(f, "__click_params__"): + f.__click_params__ = [] + f.__click_params__.append(param) + + +def argument(*param_decls, **attrs): + """Attaches an argument to the command. All positional arguments are + passed as parameter declarations to :class:`Argument`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Argument` instance manually + and attaching it to the :attr:`Command.params` list. + + :param cls: the argument class to instantiate. This defaults to + :class:`Argument`. + """ + + def decorator(f): + ArgumentClass = attrs.pop("cls", Argument) + _param_memo(f, ArgumentClass(param_decls, **attrs)) + return f + + return decorator + + +def option(*param_decls, **attrs): + """Attaches an option to the command. All positional arguments are + passed as parameter declarations to :class:`Option`; all keyword + arguments are forwarded unchanged (except ``cls``). + This is equivalent to creating an :class:`Option` instance manually + and attaching it to the :attr:`Command.params` list. 
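+
+    For example::
+
+        @click.command()
+        @click.option('--count', default=1, help='Number of repetitions.')
+        def repeat(count):
+            click.echo(count)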
+ + :param cls: the option class to instantiate. This defaults to + :class:`Option`. + """ + + def decorator(f): + # Issue 926, copy attrs, so pre-defined options can re-use the same cls= + option_attrs = attrs.copy() + + if "help" in option_attrs: + option_attrs["help"] = inspect.cleandoc(option_attrs["help"]) + OptionClass = option_attrs.pop("cls", Option) + _param_memo(f, OptionClass(param_decls, **option_attrs)) + return f + + return decorator + + +def confirmation_option(*param_decls, **attrs): + """Shortcut for confirmation prompts that can be ignored by passing + ``--yes`` as parameter. + + This is equivalent to decorating a function with :func:`option` with + the following parameters:: + + def callback(ctx, param, value): + if not value: + ctx.abort() + + @click.command() + @click.option('--yes', is_flag=True, callback=callback, + expose_value=False, prompt='Do you want to continue?') + def dropdb(): + pass + """ + + def decorator(f): + def callback(ctx, param, value): + if not value: + ctx.abort() + + attrs.setdefault("is_flag", True) + attrs.setdefault("callback", callback) + attrs.setdefault("expose_value", False) + attrs.setdefault("prompt", "Do you want to continue?") + attrs.setdefault("help", "Confirm the action without prompting.") + return option(*(param_decls or ("--yes",)), **attrs)(f) + + return decorator + + +def password_option(*param_decls, **attrs): + """Shortcut for password prompts. + + This is equivalent to decorating a function with :func:`option` with + the following parameters:: + + @click.command() + @click.option('--password', prompt=True, confirmation_prompt=True, + hide_input=True) + def changeadmin(password): + pass + """ + + def decorator(f): + attrs.setdefault("prompt", True) + attrs.setdefault("confirmation_prompt", True) + attrs.setdefault("hide_input", True) + return option(*(param_decls or ("--password",)), **attrs)(f) + + return decorator + + +def version_option(version=None, *param_decls, **attrs): + """Adds a ``--version`` option which immediately ends the program + printing out the version number. This is implemented as an eager + option that prints the version and exits the program in the callback. + + :param version: the version number to show. If not provided Click + attempts an auto discovery via setuptools. + :param prog_name: the name of the program (defaults to autodetection) + :param message: custom message to show instead of the default + (``'%(prog)s, version %(version)s'``) + :param others: everything else is forwarded to :func:`option`. 
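+
+    For example::
+
+        @click.command()
+        @click.version_option(version='1.0.0')
+        def cli():
+            pass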
+ """ + if version is None: + if hasattr(sys, "_getframe"): + module = sys._getframe(1).f_globals.get("__name__") + else: + module = "" + + def decorator(f): + prog_name = attrs.pop("prog_name", None) + message = attrs.pop("message", "%(prog)s, version %(version)s") + + def callback(ctx, param, value): + if not value or ctx.resilient_parsing: + return + prog = prog_name + if prog is None: + prog = ctx.find_root().info_name + ver = version + if ver is None: + try: + import pkg_resources + except ImportError: + pass + else: + for dist in pkg_resources.working_set: + scripts = dist.get_entry_map().get("console_scripts") or {} + for _, entry_point in iteritems(scripts): + if entry_point.module_name == module: + ver = dist.version + break + if ver is None: + raise RuntimeError("Could not determine version") + echo(message % {"prog": prog, "version": ver}, color=ctx.color) + ctx.exit() + + attrs.setdefault("is_flag", True) + attrs.setdefault("expose_value", False) + attrs.setdefault("is_eager", True) + attrs.setdefault("help", "Show the version and exit.") + attrs["callback"] = callback + return option(*(param_decls or ("--version",)), **attrs)(f) + + return decorator + + +def help_option(*param_decls, **attrs): + """Adds a ``--help`` option which immediately ends the program + printing out the help page. This is usually unnecessary to add as + this is added by default to all commands unless suppressed. + + Like :func:`version_option`, this is implemented as eager option that + prints in the callback and exits. + + All arguments are forwarded to :func:`option`. + """ + + def decorator(f): + def callback(ctx, param, value): + if value and not ctx.resilient_parsing: + echo(ctx.get_help(), color=ctx.color) + ctx.exit() + + attrs.setdefault("is_flag", True) + attrs.setdefault("expose_value", False) + attrs.setdefault("help", "Show this message and exit.") + attrs.setdefault("is_eager", True) + attrs["callback"] = callback + return option(*(param_decls or ("--help",)), **attrs)(f) + + return decorator diff --git a/openpype/vendor/python/python_2/click/exceptions.py b/openpype/vendor/python/python_2/click/exceptions.py new file mode 100644 index 0000000000..592ee38f0d --- /dev/null +++ b/openpype/vendor/python/python_2/click/exceptions.py @@ -0,0 +1,253 @@ +from ._compat import filename_to_ui +from ._compat import get_text_stderr +from ._compat import PY2 +from .utils import echo + + +def _join_param_hints(param_hint): + if isinstance(param_hint, (tuple, list)): + return " / ".join(repr(x) for x in param_hint) + return param_hint + + +class ClickException(Exception): + """An exception that Click can handle and show to the user.""" + + #: The exit code for this exception + exit_code = 1 + + def __init__(self, message): + ctor_msg = message + if PY2: + if ctor_msg is not None: + ctor_msg = ctor_msg.encode("utf-8") + Exception.__init__(self, ctor_msg) + self.message = message + + def format_message(self): + return self.message + + def __str__(self): + return self.message + + if PY2: + __unicode__ = __str__ + + def __str__(self): + return self.message.encode("utf-8") + + def show(self, file=None): + if file is None: + file = get_text_stderr() + echo("Error: {}".format(self.format_message()), file=file) + + +class UsageError(ClickException): + """An internal exception that signals a usage error. This typically + aborts any further handling. + + :param message: the error message to display. + :param ctx: optionally the context that caused this error. 
Click will + fill in the context automatically in some situations. + """ + + exit_code = 2 + + def __init__(self, message, ctx=None): + ClickException.__init__(self, message) + self.ctx = ctx + self.cmd = self.ctx.command if self.ctx else None + + def show(self, file=None): + if file is None: + file = get_text_stderr() + color = None + hint = "" + if self.cmd is not None and self.cmd.get_help_option(self.ctx) is not None: + hint = "Try '{} {}' for help.\n".format( + self.ctx.command_path, self.ctx.help_option_names[0] + ) + if self.ctx is not None: + color = self.ctx.color + echo("{}\n{}".format(self.ctx.get_usage(), hint), file=file, color=color) + echo("Error: {}".format(self.format_message()), file=file, color=color) + + +class BadParameter(UsageError): + """An exception that formats out a standardized error message for a + bad parameter. This is useful when thrown from a callback or type as + Click will attach contextual information to it (for instance, which + parameter it is). + + .. versionadded:: 2.0 + + :param param: the parameter object that caused this error. This can + be left out, and Click will attach this info itself + if possible. + :param param_hint: a string that shows up as parameter name. This + can be used as alternative to `param` in cases + where custom validation should happen. If it is + a string it's used as such, if it's a list then + each item is quoted and separated. + """ + + def __init__(self, message, ctx=None, param=None, param_hint=None): + UsageError.__init__(self, message, ctx) + self.param = param + self.param_hint = param_hint + + def format_message(self): + if self.param_hint is not None: + param_hint = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) + else: + return "Invalid value: {}".format(self.message) + param_hint = _join_param_hints(param_hint) + + return "Invalid value for {}: {}".format(param_hint, self.message) + + +class MissingParameter(BadParameter): + """Raised if click required an option or argument but it was not + provided when invoking the script. + + .. versionadded:: 4.0 + + :param param_type: a string that indicates the type of the parameter. + The default is to inherit the parameter type from + the given `param`. Valid values are ``'parameter'``, + ``'option'`` or ``'argument'``. + """ + + def __init__( + self, message=None, ctx=None, param=None, param_hint=None, param_type=None + ): + BadParameter.__init__(self, message, ctx, param, param_hint) + self.param_type = param_type + + def format_message(self): + if self.param_hint is not None: + param_hint = self.param_hint + elif self.param is not None: + param_hint = self.param.get_error_hint(self.ctx) + else: + param_hint = None + param_hint = _join_param_hints(param_hint) + + param_type = self.param_type + if param_type is None and self.param is not None: + param_type = self.param.param_type_name + + msg = self.message + if self.param is not None: + msg_extra = self.param.type.get_missing_message(self.param) + if msg_extra: + if msg: + msg += ". {}".format(msg_extra) + else: + msg = msg_extra + + return "Missing {}{}{}{}".format( + param_type, + " {}".format(param_hint) if param_hint else "", + ". 
" if msg else ".", + msg or "", + ) + + def __str__(self): + if self.message is None: + param_name = self.param.name if self.param else None + return "missing parameter: {}".format(param_name) + else: + return self.message + + if PY2: + __unicode__ = __str__ + + def __str__(self): + return self.__unicode__().encode("utf-8") + + +class NoSuchOption(UsageError): + """Raised if click attempted to handle an option that does not + exist. + + .. versionadded:: 4.0 + """ + + def __init__(self, option_name, message=None, possibilities=None, ctx=None): + if message is None: + message = "no such option: {}".format(option_name) + UsageError.__init__(self, message, ctx) + self.option_name = option_name + self.possibilities = possibilities + + def format_message(self): + bits = [self.message] + if self.possibilities: + if len(self.possibilities) == 1: + bits.append("Did you mean {}?".format(self.possibilities[0])) + else: + possibilities = sorted(self.possibilities) + bits.append("(Possible options: {})".format(", ".join(possibilities))) + return " ".join(bits) + + +class BadOptionUsage(UsageError): + """Raised if an option is generally supplied but the use of the option + was incorrect. This is for instance raised if the number of arguments + for an option is not correct. + + .. versionadded:: 4.0 + + :param option_name: the name of the option being used incorrectly. + """ + + def __init__(self, option_name, message, ctx=None): + UsageError.__init__(self, message, ctx) + self.option_name = option_name + + +class BadArgumentUsage(UsageError): + """Raised if an argument is generally supplied but the use of the argument + was incorrect. This is for instance raised if the number of values + for an argument is not correct. + + .. versionadded:: 6.0 + """ + + def __init__(self, message, ctx=None): + UsageError.__init__(self, message, ctx) + + +class FileError(ClickException): + """Raised if a file cannot be opened.""" + + def __init__(self, filename, hint=None): + ui_filename = filename_to_ui(filename) + if hint is None: + hint = "unknown error" + ClickException.__init__(self, hint) + self.ui_filename = ui_filename + self.filename = filename + + def format_message(self): + return "Could not open file {}: {}".format(self.ui_filename, self.message) + + +class Abort(RuntimeError): + """An internal signalling exception that signals Click to abort.""" + + +class Exit(RuntimeError): + """An exception that indicates that the application should exit with some + status code. + + :param code: the status code to exit with. + """ + + __slots__ = ("exit_code",) + + def __init__(self, code=0): + self.exit_code = code diff --git a/openpype/vendor/python/python_2/click/formatting.py b/openpype/vendor/python/python_2/click/formatting.py new file mode 100644 index 0000000000..319c7f6163 --- /dev/null +++ b/openpype/vendor/python/python_2/click/formatting.py @@ -0,0 +1,283 @@ +from contextlib import contextmanager + +from ._compat import term_len +from .parser import split_opt +from .termui import get_terminal_size + +# Can force a width. 
This is used by the test system +FORCED_WIDTH = None + + +def measure_table(rows): + widths = {} + for row in rows: + for idx, col in enumerate(row): + widths[idx] = max(widths.get(idx, 0), term_len(col)) + return tuple(y for x, y in sorted(widths.items())) + + +def iter_rows(rows, col_count): + for row in rows: + row = tuple(row) + yield row + ("",) * (col_count - len(row)) + + +def wrap_text( + text, width=78, initial_indent="", subsequent_indent="", preserve_paragraphs=False +): + """A helper function that intelligently wraps text. By default, it + assumes that it operates on a single paragraph of text but if the + `preserve_paragraphs` parameter is provided it will intelligently + handle paragraphs (defined by two empty lines). + + If paragraphs are handled, a paragraph can be prefixed with an empty + line containing the ``\\b`` character (``\\x08``) to indicate that + no rewrapping should happen in that block. + + :param text: the text that should be rewrapped. + :param width: the maximum width for the text. + :param initial_indent: the initial indent that should be placed on the + first line as a string. + :param subsequent_indent: the indent string that should be placed on + each consecutive line. + :param preserve_paragraphs: if this flag is set then the wrapping will + intelligently handle paragraphs. + """ + from ._textwrap import TextWrapper + + text = text.expandtabs() + wrapper = TextWrapper( + width, + initial_indent=initial_indent, + subsequent_indent=subsequent_indent, + replace_whitespace=False, + ) + if not preserve_paragraphs: + return wrapper.fill(text) + + p = [] + buf = [] + indent = None + + def _flush_par(): + if not buf: + return + if buf[0].strip() == "\b": + p.append((indent or 0, True, "\n".join(buf[1:]))) + else: + p.append((indent or 0, False, " ".join(buf))) + del buf[:] + + for line in text.splitlines(): + if not line: + _flush_par() + indent = None + else: + if indent is None: + orig_len = term_len(line) + line = line.lstrip() + indent = orig_len - term_len(line) + buf.append(line) + _flush_par() + + rv = [] + for indent, raw, text in p: + with wrapper.extra_indent(" " * indent): + if raw: + rv.append(wrapper.indent_only(text)) + else: + rv.append(wrapper.fill(text)) + + return "\n\n".join(rv) + + +class HelpFormatter(object): + """This class helps with formatting text-based help pages. It's + usually just needed for very special internal cases, but it's also + exposed so that developers can write their own fancy outputs. + + At present, it always writes into memory. + + :param indent_increment: the additional increment for each level. + :param width: the width for the text. This defaults to the terminal + width clamped to a maximum of 78. + """ + + def __init__(self, indent_increment=2, width=None, max_width=None): + self.indent_increment = indent_increment + if max_width is None: + max_width = 80 + if width is None: + width = FORCED_WIDTH + if width is None: + width = max(min(get_terminal_size()[0], max_width) - 2, 50) + self.width = width + self.current_indent = 0 + self.buffer = [] + + def write(self, string): + """Writes a unicode string into the internal buffer.""" + self.buffer.append(string) + + def indent(self): + """Increases the indentation.""" + self.current_indent += self.indent_increment + + def dedent(self): + """Decreases the indentation.""" + self.current_indent -= self.indent_increment + + def write_usage(self, prog, args="", prefix="Usage: "): + """Writes a usage line into the buffer. + + :param prog: the program name. 
+ :param args: whitespace separated list of arguments. + :param prefix: the prefix for the first line. + """ + usage_prefix = "{:>{w}}{} ".format(prefix, prog, w=self.current_indent) + text_width = self.width - self.current_indent + + if text_width >= (term_len(usage_prefix) + 20): + # The arguments will fit to the right of the prefix. + indent = " " * term_len(usage_prefix) + self.write( + wrap_text( + args, + text_width, + initial_indent=usage_prefix, + subsequent_indent=indent, + ) + ) + else: + # The prefix is too long, put the arguments on the next line. + self.write(usage_prefix) + self.write("\n") + indent = " " * (max(self.current_indent, term_len(prefix)) + 4) + self.write( + wrap_text( + args, text_width, initial_indent=indent, subsequent_indent=indent + ) + ) + + self.write("\n") + + def write_heading(self, heading): + """Writes a heading into the buffer.""" + self.write("{:>{w}}{}:\n".format("", heading, w=self.current_indent)) + + def write_paragraph(self): + """Writes a paragraph into the buffer.""" + if self.buffer: + self.write("\n") + + def write_text(self, text): + """Writes re-indented text into the buffer. This rewraps and + preserves paragraphs. + """ + text_width = max(self.width - self.current_indent, 11) + indent = " " * self.current_indent + self.write( + wrap_text( + text, + text_width, + initial_indent=indent, + subsequent_indent=indent, + preserve_paragraphs=True, + ) + ) + self.write("\n") + + def write_dl(self, rows, col_max=30, col_spacing=2): + """Writes a definition list into the buffer. This is how options + and commands are usually formatted. + + :param rows: a list of two item tuples for the terms and values. + :param col_max: the maximum width of the first column. + :param col_spacing: the number of spaces between the first and + second column. + """ + rows = list(rows) + widths = measure_table(rows) + if len(widths) != 2: + raise TypeError("Expected two columns for definition list") + + first_col = min(widths[0], col_max) + col_spacing + + for first, second in iter_rows(rows, len(widths)): + self.write("{:>{w}}{}".format("", first, w=self.current_indent)) + if not second: + self.write("\n") + continue + if term_len(first) <= first_col - col_spacing: + self.write(" " * (first_col - term_len(first))) + else: + self.write("\n") + self.write(" " * (first_col + self.current_indent)) + + text_width = max(self.width - first_col - 2, 10) + wrapped_text = wrap_text(second, text_width, preserve_paragraphs=True) + lines = wrapped_text.splitlines() + + if lines: + self.write("{}\n".format(lines[0])) + + for line in lines[1:]: + self.write( + "{:>{w}}{}\n".format( + "", line, w=first_col + self.current_indent + ) + ) + + if len(lines) > 1: + # separate long help from next option + self.write("\n") + else: + self.write("\n") + + @contextmanager + def section(self, name): + """Helpful context manager that writes a paragraph, a heading, + and the indents. + + :param name: the section name that is written as heading. 
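+
+        For example, with ``rows`` being a list of two-item tuples::
+
+            with formatter.section('Options'):
+                formatter.write_dl(rows)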
+        """
+        self.write_paragraph()
+        self.write_heading(name)
+        self.indent()
+        try:
+            yield
+        finally:
+            self.dedent()
+
+    @contextmanager
+    def indentation(self):
+        """A context manager that increases the indentation."""
+        self.indent()
+        try:
+            yield
+        finally:
+            self.dedent()
+
+    def getvalue(self):
+        """Returns the buffer contents."""
+        return "".join(self.buffer)
+
+
+def join_options(options):
+    """Given a list of option strings this joins them in the most appropriate
+    way and returns them in the form ``(formatted_string,
+    any_prefix_is_slash)`` where the second item in the tuple is a flag that
+    indicates if any of the option prefixes was a slash.
+    """
+    rv = []
+    any_prefix_is_slash = False
+    for opt in options:
+        prefix = split_opt(opt)[0]
+        if prefix == "/":
+            any_prefix_is_slash = True
+        rv.append((len(prefix), opt))
+
+    rv.sort(key=lambda x: x[0])
+
+    rv = ", ".join(x[1] for x in rv)
+    return rv, any_prefix_is_slash
diff --git a/openpype/vendor/python/python_2/click/globals.py b/openpype/vendor/python/python_2/click/globals.py
new file mode 100644
index 0000000000..1649f9a0bf
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/globals.py
@@ -0,0 +1,47 @@
+from threading import local
+
+_local = local()
+
+
+def get_current_context(silent=False):
+    """Returns the current click context. This can be used as a way to
+    access the current context object from anywhere. This is a more implicit
+    alternative to the :func:`pass_context` decorator. This function is
+    primarily useful for helpers such as :func:`echo` which might be
+    interested in changing its behavior based on the current context.
+
+    To push the current context, :meth:`Context.scope` can be used.
+
+    .. versionadded:: 5.0
+
+    :param silent: if set to `True` the return value is `None` if no context
+                   is available. The default behavior is to raise a
+                   :exc:`RuntimeError`.
+    """
+    try:
+        return _local.stack[-1]
+    except (AttributeError, IndexError):
+        if not silent:
+            raise RuntimeError("There is no active click context.")
+
+
+def push_context(ctx):
+    """Pushes a new context to the current stack."""
+    _local.__dict__.setdefault("stack", []).append(ctx)
+
+
+def pop_context():
+    """Removes the top level from the stack."""
+    _local.stack.pop()
+
+
+def resolve_color_default(color=None):
+    """Internal helper to get the default value of the color flag. If a
+    value is passed it's returned unchanged, otherwise it's looked up from
+    the current context.
+    """
+    if color is not None:
+        return color
+    ctx = get_current_context(silent=True)
+    if ctx is not None:
+        return ctx.color
diff --git a/openpype/vendor/python/python_2/click/parser.py b/openpype/vendor/python/python_2/click/parser.py
new file mode 100644
index 0000000000..f43ebfe9fc
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/parser.py
@@ -0,0 +1,428 @@
+# -*- coding: utf-8 -*-
+"""
+This module started out as largely a copy paste from the stdlib's
+optparse module with the features removed that we do not need from
+optparse because we implement them in Click on a higher level (for
+instance type handling, help formatting and a lot more).
+
+The plan is to remove more and more from here over time.
+
+The reason this is a different module and not optparse from the stdlib
+is that there are differences in 2.x and 3.x about the error messages
+generated and optparse in the stdlib uses gettext for no good reason
+and might cause us issues.
+
+Click uses parts of optparse written by Gregory P. Ward and maintained
+by the Python Software Foundation.
This is limited to code in parser.py. + +Copyright 2001-2006 Gregory P. Ward. All rights reserved. +Copyright 2002-2006 Python Software Foundation. All rights reserved. +""" +import re +from collections import deque + +from .exceptions import BadArgumentUsage +from .exceptions import BadOptionUsage +from .exceptions import NoSuchOption +from .exceptions import UsageError + + +def _unpack_args(args, nargs_spec): + """Given an iterable of arguments and an iterable of nargs specifications, + it returns a tuple with all the unpacked arguments at the first index + and all remaining arguments as the second. + + The nargs specification is the number of arguments that should be consumed + or `-1` to indicate that this position should eat up all the remainders. + + Missing items are filled with `None`. + """ + args = deque(args) + nargs_spec = deque(nargs_spec) + rv = [] + spos = None + + def _fetch(c): + try: + if spos is None: + return c.popleft() + else: + return c.pop() + except IndexError: + return None + + while nargs_spec: + nargs = _fetch(nargs_spec) + if nargs == 1: + rv.append(_fetch(args)) + elif nargs > 1: + x = [_fetch(args) for _ in range(nargs)] + # If we're reversed, we're pulling in the arguments in reverse, + # so we need to turn them around. + if spos is not None: + x.reverse() + rv.append(tuple(x)) + elif nargs < 0: + if spos is not None: + raise TypeError("Cannot have two nargs < 0") + spos = len(rv) + rv.append(None) + + # spos is the position of the wildcard (star). If it's not `None`, + # we fill it with the remainder. + if spos is not None: + rv[spos] = tuple(args) + args = [] + rv[spos + 1 :] = reversed(rv[spos + 1 :]) + + return tuple(rv), list(args) + + +def _error_opt_args(nargs, opt): + if nargs == 1: + raise BadOptionUsage(opt, "{} option requires an argument".format(opt)) + raise BadOptionUsage(opt, "{} option requires {} arguments".format(opt, nargs)) + + +def split_opt(opt): + first = opt[:1] + if first.isalnum(): + return "", opt + if opt[1:2] == first: + return opt[:2], opt[2:] + return first, opt[1:] + + +def normalize_opt(opt, ctx): + if ctx is None or ctx.token_normalize_func is None: + return opt + prefix, opt = split_opt(opt) + return prefix + ctx.token_normalize_func(opt) + + +def split_arg_string(string): + """Given an argument string this attempts to split it into small parts.""" + rv = [] + for match in re.finditer( + r"('([^'\\]*(?:\\.[^'\\]*)*)'|\"([^\"\\]*(?:\\.[^\"\\]*)*)\"|\S+)\s*", + string, + re.S, + ): + arg = match.group().strip() + if arg[:1] == arg[-1:] and arg[:1] in "\"'": + arg = arg[1:-1].encode("ascii", "backslashreplace").decode("unicode-escape") + try: + arg = type(string)(arg) + except UnicodeError: + pass + rv.append(arg) + return rv + + +class Option(object): + def __init__(self, opts, dest, action=None, nargs=1, const=None, obj=None): + self._short_opts = [] + self._long_opts = [] + self.prefixes = set() + + for opt in opts: + prefix, value = split_opt(opt) + if not prefix: + raise ValueError("Invalid start character for option ({})".format(opt)) + self.prefixes.add(prefix[0]) + if len(prefix) == 1 and len(value) == 1: + self._short_opts.append(opt) + else: + self._long_opts.append(opt) + self.prefixes.add(prefix) + + if action is None: + action = "store" + + self.dest = dest + self.action = action + self.nargs = nargs + self.const = const + self.obj = obj + + @property + def takes_value(self): + return self.action in ("store", "append") + + def process(self, value, state): + if self.action == "store": + state.opts[self.dest] = 
value
+        elif self.action == "store_const":
+            state.opts[self.dest] = self.const
+        elif self.action == "append":
+            state.opts.setdefault(self.dest, []).append(value)
+        elif self.action == "append_const":
+            state.opts.setdefault(self.dest, []).append(self.const)
+        elif self.action == "count":
+            state.opts[self.dest] = state.opts.get(self.dest, 0) + 1
+        else:
+            raise ValueError("unknown action '{}'".format(self.action))
+        state.order.append(self.obj)
+
+
+class Argument(object):
+    def __init__(self, dest, nargs=1, obj=None):
+        self.dest = dest
+        self.nargs = nargs
+        self.obj = obj
+
+    def process(self, value, state):
+        if self.nargs > 1:
+            holes = sum(1 for x in value if x is None)
+            if holes == len(value):
+                value = None
+            elif holes != 0:
+                raise BadArgumentUsage(
+                    "argument {} takes {} values".format(self.dest, self.nargs)
+                )
+        state.opts[self.dest] = value
+        state.order.append(self.obj)
+
+
+class ParsingState(object):
+    def __init__(self, rargs):
+        self.opts = {}
+        self.largs = []
+        self.rargs = rargs
+        self.order = []
+
+
+class OptionParser(object):
+    """The option parser is an internal class that is ultimately used to
+    parse options and arguments. It's modelled after optparse and brings
+    a similar but vastly simplified API. It should generally not be used
+    directly as the high level Click classes wrap it for you.
+
+    It's not nearly as extensible as optparse or argparse as it does not
+    implement features that are implemented on a higher level (such as
+    types or defaults).
+
+    :param ctx: optionally the :class:`~click.Context` that this
+                parser belongs to.
+    """
+
+    def __init__(self, ctx=None):
+        #: The :class:`~click.Context` for this parser. This might be
+        #: `None` for some advanced use cases.
+        self.ctx = ctx
+        #: This controls how the parser deals with interspersed arguments.
+        #: If this is set to `False`, the parser will stop on the first
+        #: non-option. Click uses this to implement nested subcommands
+        #: safely.
+        self.allow_interspersed_args = True
+        #: This tells the parser how to deal with unknown options. By
+        #: default it will error out (which is sensible), but there is a
+        #: second mode where it will ignore it and continue processing
+        #: after shifting all the unknown options into the resulting args.
+        self.ignore_unknown_options = False
+        if ctx is not None:
+            self.allow_interspersed_args = ctx.allow_interspersed_args
+            self.ignore_unknown_options = ctx.ignore_unknown_options
+        self._short_opt = {}
+        self._long_opt = {}
+        self._opt_prefixes = {"-", "--"}
+        self._args = []
+
+    def add_option(self, opts, dest, action=None, nargs=1, const=None, obj=None):
+        """Adds a new option named `dest` to the parser. The destination
+        is not inferred (unlike with optparse) and needs to be explicitly
+        provided. Action can be any of ``store``, ``store_const``,
+        ``append``, ``append_const`` or ``count``.
+
+        The `obj` can be used to identify the option in the order list
+        that is returned from the parser.
+        """
+        if obj is None:
+            obj = dest
+        opts = [normalize_opt(opt, self.ctx) for opt in opts]
+        option = Option(opts, dest, action=action, nargs=nargs, const=const, obj=obj)
+        self._opt_prefixes.update(option.prefixes)
+        for opt in option._short_opts:
+            self._short_opt[opt] = option
+        for opt in option._long_opts:
+            self._long_opt[opt] = option
+
+    def add_argument(self, dest, nargs=1, obj=None):
+        """Adds a positional argument named `dest` to the parser.
+
+        The `obj` can be used to identify the option in the order list
+        that is returned from the parser.
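+
+        Only one argument with ``nargs=-1`` (consume the remainder) can
+        be added per parser; ``_unpack_args`` raises a ``TypeError`` if
+        two are given.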
+ """ + if obj is None: + obj = dest + self._args.append(Argument(dest=dest, nargs=nargs, obj=obj)) + + def parse_args(self, args): + """Parses positional arguments and returns ``(values, args, order)`` + for the parsed options and arguments as well as the leftover + arguments if there are any. The order is a list of objects as they + appear on the command line. If arguments appear multiple times they + will be memorized multiple times as well. + """ + state = ParsingState(args) + try: + self._process_args_for_options(state) + self._process_args_for_args(state) + except UsageError: + if self.ctx is None or not self.ctx.resilient_parsing: + raise + return state.opts, state.largs, state.order + + def _process_args_for_args(self, state): + pargs, args = _unpack_args( + state.largs + state.rargs, [x.nargs for x in self._args] + ) + + for idx, arg in enumerate(self._args): + arg.process(pargs[idx], state) + + state.largs = args + state.rargs = [] + + def _process_args_for_options(self, state): + while state.rargs: + arg = state.rargs.pop(0) + arglen = len(arg) + # Double dashes always handled explicitly regardless of what + # prefixes are valid. + if arg == "--": + return + elif arg[:1] in self._opt_prefixes and arglen > 1: + self._process_opts(arg, state) + elif self.allow_interspersed_args: + state.largs.append(arg) + else: + state.rargs.insert(0, arg) + return + + # Say this is the original argument list: + # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)] + # ^ + # (we are about to process arg(i)). + # + # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of + # [arg0, ..., arg(i-1)] (any options and their arguments will have + # been removed from largs). + # + # The while loop will usually consume 1 or more arguments per pass. + # If it consumes 1 (eg. arg is an option that takes no arguments), + # then after _process_arg() is done the situation is: + # + # largs = subset of [arg0, ..., arg(i)] + # rargs = [arg(i+1), ..., arg(N-1)] + # + # If allow_interspersed_args is false, largs will always be + # *empty* -- still a subset of [arg0, ..., arg(i-1)], but + # not a very interesting subset! + + def _match_long_opt(self, opt, explicit_value, state): + if opt not in self._long_opt: + possibilities = [word for word in self._long_opt if word.startswith(opt)] + raise NoSuchOption(opt, possibilities=possibilities, ctx=self.ctx) + + option = self._long_opt[opt] + if option.takes_value: + # At this point it's safe to modify rargs by injecting the + # explicit value, because no exception is raised in this + # branch. This means that the inserted value will be fully + # consumed. + if explicit_value is not None: + state.rargs.insert(0, explicit_value) + + nargs = option.nargs + if len(state.rargs) < nargs: + _error_opt_args(nargs, opt) + elif nargs == 1: + value = state.rargs.pop(0) + else: + value = tuple(state.rargs[:nargs]) + del state.rargs[:nargs] + + elif explicit_value is not None: + raise BadOptionUsage(opt, "{} option does not take a value".format(opt)) + + else: + value = None + + option.process(value, state) + + def _match_short_opt(self, arg, state): + stop = False + i = 1 + prefix = arg[0] + unknown_options = [] + + for ch in arg[1:]: + opt = normalize_opt(prefix + ch, self.ctx) + option = self._short_opt.get(opt) + i += 1 + + if not option: + if self.ignore_unknown_options: + unknown_options.append(ch) + continue + raise NoSuchOption(opt, ctx=self.ctx) + if option.takes_value: + # Any characters left in arg? 
Pretend they're the
+                # next arg, and stop consuming characters of arg.
+                if i < len(arg):
+                    state.rargs.insert(0, arg[i:])
+                    stop = True
+
+                nargs = option.nargs
+                if len(state.rargs) < nargs:
+                    _error_opt_args(nargs, opt)
+                elif nargs == 1:
+                    value = state.rargs.pop(0)
+                else:
+                    value = tuple(state.rargs[:nargs])
+                    del state.rargs[:nargs]
+
+            else:
+                value = None
+
+            option.process(value, state)
+
+            if stop:
+                break
+
+        # If we got any unknown options we recombine the string of the
+        # remaining options and re-attach the prefix, then report that
+        # to the state as a new larg. This way there is basic combinatorics
+        # that can be achieved while still ignoring unknown arguments.
+        if self.ignore_unknown_options and unknown_options:
+            state.largs.append("{}{}".format(prefix, "".join(unknown_options)))
+
+    def _process_opts(self, arg, state):
+        explicit_value = None
+        # Long option handling happens in two parts. The first part is
+        # supporting explicitly attached values. In any case, we will try
+        # to long match the option first.
+        if "=" in arg:
+            long_opt, explicit_value = arg.split("=", 1)
+        else:
+            long_opt = arg
+        norm_long_opt = normalize_opt(long_opt, self.ctx)
+
+        # At this point we will match the (assumed) long option through
+        # the long option matching code. Note that this allows options
+        # like "-foo" to be matched as long options.
+        try:
+            self._match_long_opt(norm_long_opt, explicit_value, state)
+        except NoSuchOption:
+            # At this point the long option matching failed, and we need
+            # to try with short options. However there is a special rule
+            # which says, that if we have a two character options prefix
+            # (applies to "--foo" for instance), we do not dispatch to the
+            # short option code and will instead raise the no option
+            # error.
+            if arg[:2] not in self._opt_prefixes:
+                return self._match_short_opt(arg, state)
+            if not self.ignore_unknown_options:
+                raise
+            state.largs.append(arg)
diff --git a/openpype/vendor/python/python_2/click/termui.py b/openpype/vendor/python/python_2/click/termui.py
new file mode 100644
index 0000000000..02ef9e9f04
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/termui.py
@@ -0,0 +1,681 @@
+import inspect
+import io
+import itertools
+import os
+import struct
+import sys
+
+from ._compat import DEFAULT_COLUMNS
+from ._compat import get_winterm_size
+from ._compat import isatty
+from ._compat import raw_input
+from ._compat import string_types
+from ._compat import strip_ansi
+from ._compat import text_type
+from ._compat import WIN
+from .exceptions import Abort
+from .exceptions import UsageError
+from .globals import resolve_color_default
+from .types import Choice
+from .types import convert_type
+from .types import Path
+from .utils import echo
+from .utils import LazyFile
+
+# The prompt functions to use. The doc tools currently override these
+# functions to customize how they work.
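+# ``visible_prompt_func`` echoes what the user types; ``hidden_prompt_func``
+# below defers to ``getpass.getpass`` so the input (e.g. a password) stays
+# hidden.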
+visible_prompt_func = raw_input + +_ansi_colors = { + "black": 30, + "red": 31, + "green": 32, + "yellow": 33, + "blue": 34, + "magenta": 35, + "cyan": 36, + "white": 37, + "reset": 39, + "bright_black": 90, + "bright_red": 91, + "bright_green": 92, + "bright_yellow": 93, + "bright_blue": 94, + "bright_magenta": 95, + "bright_cyan": 96, + "bright_white": 97, +} +_ansi_reset_all = "\033[0m" + + +def hidden_prompt_func(prompt): + import getpass + + return getpass.getpass(prompt) + + +def _build_prompt( + text, suffix, show_default=False, default=None, show_choices=True, type=None +): + prompt = text + if type is not None and show_choices and isinstance(type, Choice): + prompt += " ({})".format(", ".join(map(str, type.choices))) + if default is not None and show_default: + prompt = "{} [{}]".format(prompt, _format_default(default)) + return prompt + suffix + + +def _format_default(default): + if isinstance(default, (io.IOBase, LazyFile)) and hasattr(default, "name"): + return default.name + + return default + + +def prompt( + text, + default=None, + hide_input=False, + confirmation_prompt=False, + type=None, + value_proc=None, + prompt_suffix=": ", + show_default=True, + err=False, + show_choices=True, +): + """Prompts a user for input. This is a convenience function that can + be used to prompt a user for input later. + + If the user aborts the input by sending a interrupt signal, this + function will catch it and raise a :exc:`Abort` exception. + + .. versionadded:: 7.0 + Added the show_choices parameter. + + .. versionadded:: 6.0 + Added unicode support for cmd.exe on Windows. + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param text: the text to show for the prompt. + :param default: the default value to use if no input happens. If this + is not given it will prompt until it's aborted. + :param hide_input: if this is set to true then the input value will + be hidden. + :param confirmation_prompt: asks for confirmation for the value. + :param type: the type to use to check the value against. + :param value_proc: if this parameter is provided it's a function that + is invoked instead of the type conversion to + convert a value. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + :param show_choices: Show or hide choices if the passed type is a Choice. + For example if type is a Choice of either day or week, + show_choices is true and text is "Group by" then the + prompt will be "Group by (day, week): ". + """ + result = None + + def prompt_func(text): + f = hidden_prompt_func if hide_input else visible_prompt_func + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(text, nl=False, err=err) + return f("") + except (KeyboardInterrupt, EOFError): + # getpass doesn't print a newline if the user aborts input with ^C. + # Allegedly this behavior is inherited from getpass(3). 
+ # A doc bug has been filed at https://bugs.python.org/issue24711 + if hide_input: + echo(None, err=err) + raise Abort() + + if value_proc is None: + value_proc = convert_type(type, default) + + prompt = _build_prompt( + text, prompt_suffix, show_default, default, show_choices, type + ) + + while 1: + while 1: + value = prompt_func(prompt) + if value: + break + elif default is not None: + if isinstance(value_proc, Path): + # validate Path default value(exists, dir_okay etc.) + value = default + break + return default + try: + result = value_proc(value) + except UsageError as e: + echo("Error: {}".format(e.message), err=err) # noqa: B306 + continue + if not confirmation_prompt: + return result + while 1: + value2 = prompt_func("Repeat for confirmation: ") + if value2: + break + if value == value2: + return result + echo("Error: the two entered values do not match", err=err) + + +def confirm( + text, default=False, abort=False, prompt_suffix=": ", show_default=True, err=False +): + """Prompts for confirmation (yes/no question). + + If the user aborts the input by sending a interrupt signal this + function will catch it and raise a :exc:`Abort` exception. + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param text: the question to ask. + :param default: the default for the prompt. + :param abort: if this is set to `True` a negative answer aborts the + exception by raising :exc:`Abort`. + :param prompt_suffix: a suffix that should be added to the prompt. + :param show_default: shows or hides the default value in the prompt. + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``, the same as with echo. + """ + prompt = _build_prompt( + text, prompt_suffix, show_default, "Y/n" if default else "y/N" + ) + while 1: + try: + # Write the prompt separately so that we get nice + # coloring through colorama on Windows + echo(prompt, nl=False, err=err) + value = visible_prompt_func("").lower().strip() + except (KeyboardInterrupt, EOFError): + raise Abort() + if value in ("y", "yes"): + rv = True + elif value in ("n", "no"): + rv = False + elif value == "": + rv = default + else: + echo("Error: invalid input", err=err) + continue + break + if abort and not rv: + raise Abort() + return rv + + +def get_terminal_size(): + """Returns the current size of the terminal as tuple in the form + ``(width, height)`` in columns and rows. + """ + # If shutil has get_terminal_size() (Python 3.3 and later) use that + if sys.version_info >= (3, 3): + import shutil + + shutil_get_terminal_size = getattr(shutil, "get_terminal_size", None) + if shutil_get_terminal_size: + sz = shutil_get_terminal_size() + return sz.columns, sz.lines + + # We provide a sensible default for get_winterm_size() when being invoked + # inside a subprocess. Without this, it would not provide a useful input. 
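+    # A (0, 0) result means no real console is attached (for example inside
+    # a subprocess), so a classic 80x25-style default is used instead; the
+    # 79 presumably leaves one column free for the cursor.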
+ if get_winterm_size is not None: + size = get_winterm_size() + if size == (0, 0): + return (79, 24) + else: + return size + + def ioctl_gwinsz(fd): + try: + import fcntl + import termios + + cr = struct.unpack("hh", fcntl.ioctl(fd, termios.TIOCGWINSZ, "1234")) + except Exception: + return + return cr + + cr = ioctl_gwinsz(0) or ioctl_gwinsz(1) or ioctl_gwinsz(2) + if not cr: + try: + fd = os.open(os.ctermid(), os.O_RDONLY) + try: + cr = ioctl_gwinsz(fd) + finally: + os.close(fd) + except Exception: + pass + if not cr or not cr[0] or not cr[1]: + cr = (os.environ.get("LINES", 25), os.environ.get("COLUMNS", DEFAULT_COLUMNS)) + return int(cr[1]), int(cr[0]) + + +def echo_via_pager(text_or_generator, color=None): + """This function takes a text and shows it via an environment specific + pager on stdout. + + .. versionchanged:: 3.0 + Added the `color` flag. + + :param text_or_generator: the text to page, or alternatively, a + generator emitting the text to page. + :param color: controls if the pager supports ANSI colors or not. The + default is autodetection. + """ + color = resolve_color_default(color) + + if inspect.isgeneratorfunction(text_or_generator): + i = text_or_generator() + elif isinstance(text_or_generator, string_types): + i = [text_or_generator] + else: + i = iter(text_or_generator) + + # convert every element of i to a text type if necessary + text_generator = (el if isinstance(el, string_types) else text_type(el) for el in i) + + from ._termui_impl import pager + + return pager(itertools.chain(text_generator, "\n"), color) + + +def progressbar( + iterable=None, + length=None, + label=None, + show_eta=True, + show_percent=None, + show_pos=False, + item_show_func=None, + fill_char="#", + empty_char="-", + bar_template="%(label)s [%(bar)s] %(info)s", + info_sep=" ", + width=36, + file=None, + color=None, +): + """This function creates an iterable context manager that can be used + to iterate over something while showing a progress bar. It will + either iterate over the `iterable` or `length` items (that are counted + up). While iteration happens, this function will print a rendered + progress bar to the given `file` (defaults to stdout) and will attempt + to calculate remaining time and more. By default, this progress bar + will not be rendered if the file is not a terminal. + + The context manager creates the progress bar. When the context + manager is entered the progress bar is already created. With every + iteration over the progress bar, the iterable passed to the bar is + advanced and the bar is updated. When the context manager exits, + a newline is printed and the progress bar is finalized on screen. + + Note: The progress bar is currently designed for use cases where the + total progress can be expected to take at least several seconds. + Because of this, the ProgressBar class object won't display + progress that is considered too fast, and progress where the time + between steps is less than a second. + + No printing must happen or the progress bar will be unintentionally + destroyed. + + Example usage:: + + with progressbar(items) as bar: + for item in bar: + do_something_with(item) + + Alternatively, if no iterable is specified, one can manually update the + progress bar through the `update()` method instead of directly + iterating over the progress bar. The update method accepts the number + of steps to increment the bar with:: + + with progressbar(length=chunks.total_bytes) as bar: + for chunk in chunks: + process_chunk(chunk) + bar.update(chunks.bytes) + + .. 
versionadded:: 2.0 + + .. versionadded:: 4.0 + Added the `color` parameter. Added a `update` method to the + progressbar object. + + :param iterable: an iterable to iterate over. If not provided the length + is required. + :param length: the number of items to iterate over. By default the + progressbar will attempt to ask the iterator about its + length, which might or might not work. If an iterable is + also provided this parameter can be used to override the + length. If an iterable is not provided the progress bar + will iterate over a range of that length. + :param label: the label to show next to the progress bar. + :param show_eta: enables or disables the estimated time display. This is + automatically disabled if the length cannot be + determined. + :param show_percent: enables or disables the percentage display. The + default is `True` if the iterable has a length or + `False` if not. + :param show_pos: enables or disables the absolute position display. The + default is `False`. + :param item_show_func: a function called with the current item which + can return a string to show the current item + next to the progress bar. Note that the current + item can be `None`! + :param fill_char: the character to use to show the filled part of the + progress bar. + :param empty_char: the character to use to show the non-filled part of + the progress bar. + :param bar_template: the format string to use as template for the bar. + The parameters in it are ``label`` for the label, + ``bar`` for the progress bar and ``info`` for the + info section. + :param info_sep: the separator between multiple info items (eta etc.) + :param width: the width of the progress bar in characters, 0 means full + terminal width + :param file: the file to write to. If this is not a terminal then + only the label is printed. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. This is only needed if ANSI + codes are included anywhere in the progress bar output + which is not the case by default. + """ + from ._termui_impl import ProgressBar + + color = resolve_color_default(color) + return ProgressBar( + iterable=iterable, + length=length, + show_eta=show_eta, + show_percent=show_percent, + show_pos=show_pos, + item_show_func=item_show_func, + fill_char=fill_char, + empty_char=empty_char, + bar_template=bar_template, + info_sep=info_sep, + file=file, + label=label, + width=width, + color=color, + ) + + +def clear(): + """Clears the terminal screen. This will have the effect of clearing + the whole visible space of the terminal and moving the cursor to the + top left. This does not do anything if not connected to a terminal. + + .. versionadded:: 2.0 + """ + if not isatty(sys.stdout): + return + # If we're on Windows and we don't have colorama available, then we + # clear the screen by shelling out. Otherwise we can use an escape + # sequence. + if WIN: + os.system("cls") + else: + sys.stdout.write("\033[2J\033[1;1H") + + +def style( + text, + fg=None, + bg=None, + bold=None, + dim=None, + underline=None, + blink=None, + reverse=None, + reset=True, +): + """Styles a text with ANSI styles and returns the new string. By + default the styling is self contained which means that at the end + of the string a reset code is issued. This can be prevented by + passing ``reset=False``. 
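+    For instance, two calls made with ``reset=False`` can be concatenated
+    and closed with one final reset code to compose styles (an
+    illustrative aside).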
+ + Examples:: + + click.echo(click.style('Hello World!', fg='green')) + click.echo(click.style('ATTENTION!', blink=True)) + click.echo(click.style('Some things', reverse=True, fg='cyan')) + + Supported color names: + + * ``black`` (might be a gray) + * ``red`` + * ``green`` + * ``yellow`` (might be an orange) + * ``blue`` + * ``magenta`` + * ``cyan`` + * ``white`` (might be light gray) + * ``bright_black`` + * ``bright_red`` + * ``bright_green`` + * ``bright_yellow`` + * ``bright_blue`` + * ``bright_magenta`` + * ``bright_cyan`` + * ``bright_white`` + * ``reset`` (reset the color code only) + + .. versionadded:: 2.0 + + .. versionadded:: 7.0 + Added support for bright colors. + + :param text: the string to style with ansi codes. + :param fg: if provided this will become the foreground color. + :param bg: if provided this will become the background color. + :param bold: if provided this will enable or disable bold mode. + :param dim: if provided this will enable or disable dim mode. This is + badly supported. + :param underline: if provided this will enable or disable underline. + :param blink: if provided this will enable or disable blinking. + :param reverse: if provided this will enable or disable inverse + rendering (foreground becomes background and the + other way round). + :param reset: by default a reset-all code is added at the end of the + string which means that styles do not carry over. This + can be disabled to compose styles. + """ + bits = [] + if fg: + try: + bits.append("\033[{}m".format(_ansi_colors[fg])) + except KeyError: + raise TypeError("Unknown color '{}'".format(fg)) + if bg: + try: + bits.append("\033[{}m".format(_ansi_colors[bg] + 10)) + except KeyError: + raise TypeError("Unknown color '{}'".format(bg)) + if bold is not None: + bits.append("\033[{}m".format(1 if bold else 22)) + if dim is not None: + bits.append("\033[{}m".format(2 if dim else 22)) + if underline is not None: + bits.append("\033[{}m".format(4 if underline else 24)) + if blink is not None: + bits.append("\033[{}m".format(5 if blink else 25)) + if reverse is not None: + bits.append("\033[{}m".format(7 if reverse else 27)) + bits.append(text) + if reset: + bits.append(_ansi_reset_all) + return "".join(bits) + + +def unstyle(text): + """Removes ANSI styling information from a string. Usually it's not + necessary to use this function as Click's echo function will + automatically remove styling if necessary. + + .. versionadded:: 2.0 + + :param text: the text to remove style information from. + """ + return strip_ansi(text) + + +def secho(message=None, file=None, nl=True, err=False, color=None, **styles): + """This function combines :func:`echo` and :func:`style` into one + call. As such the following two calls are the same:: + + click.secho('Hello World!', fg='green') + click.echo(click.style('Hello World!', fg='green')) + + All keyword arguments are forwarded to the underlying functions + depending on which one they go with. + + .. versionadded:: 2.0 + """ + if message is not None: + message = style(message, **styles) + return echo(message, file=file, nl=nl, err=err, color=color) + + +def edit( + text=None, editor=None, env=None, require_save=True, extension=".txt", filename=None +): + r"""Edits the given text in the defined editor. If an editor is given + (should be the full path to the executable but the regular operating + system search path is used for finding the executable) it overrides + the detected editor. Optionally, some environment variables can be + used. 
If the editor is closed without changes, `None` is returned. In + case a file is edited directly the return value is always `None` and + `require_save` and `extension` are ignored. + + If the editor cannot be opened a :exc:`UsageError` is raised. + + Note for Windows: to simplify cross-platform usage, the newlines are + automatically converted from POSIX to Windows and vice versa. As such, + the message here will have ``\n`` as newline markers. + + :param text: the text to edit. + :param editor: optionally the editor to use. Defaults to automatic + detection. + :param env: environment variables to forward to the editor. + :param require_save: if this is true, then not saving in the editor + will make the return value become `None`. + :param extension: the extension to tell the editor about. This defaults + to `.txt` but changing this might change syntax + highlighting. + :param filename: if provided it will edit this file instead of the + provided text contents. It will not use a temporary + file as an indirection in that case. + """ + from ._termui_impl import Editor + + editor = Editor( + editor=editor, env=env, require_save=require_save, extension=extension + ) + if filename is None: + return editor.edit(text) + editor.edit_file(filename) + + +def launch(url, wait=False, locate=False): + """This function launches the given URL (or filename) in the default + viewer application for this file type. If this is an executable, it + might launch the executable in a new session. The return value is + the exit code of the launched application. Usually, ``0`` indicates + success. + + Examples:: + + click.launch('https://click.palletsprojects.com/') + click.launch('/my/downloaded/file', locate=True) + + .. versionadded:: 2.0 + + :param url: URL or filename of the thing to launch. + :param wait: waits for the program to stop. + :param locate: if this is set to `True` then instead of launching the + application associated with the URL it will attempt to + launch a file manager with the file located. This + might have weird effects if the URL does not point to + the filesystem. + """ + from ._termui_impl import open_url + + return open_url(url, wait=wait, locate=locate) + + +# If this is provided, getchar() calls into this instead. This is used +# for unittesting purposes. +_getchar = None + + +def getchar(echo=False): + """Fetches a single character from the terminal and returns it. This + will always return a unicode character and under certain rare + circumstances this might return more than one character. The + situations which more than one character is returned is when for + whatever reason multiple characters end up in the terminal buffer or + standard input was not actually a terminal. + + Note that this will always read from the terminal, even if something + is piped into the standard input. + + Note for Windows: in rare cases when typing non-ASCII characters, this + function might wait for a second character and then return both at once. + This is because certain Unicode characters look like special-key markers. + + .. versionadded:: 2.0 + + :param echo: if set to `True`, the character read will also show up on + the terminal. The default is to not show it. + """ + f = _getchar + if f is None: + from ._termui_impl import getchar as f + return f(echo) + + +def raw_terminal(): + from ._termui_impl import raw_terminal as f + + return f() + + +def pause(info="Press any key to continue ...", err=False): + """This command stops execution and waits for the user to press any + key to continue. 
This is similar to the Windows batch "pause" + command. If the program is not run through a terminal, this command + will instead do nothing. + + .. versionadded:: 2.0 + + .. versionadded:: 4.0 + Added the `err` parameter. + + :param info: the info string to print before pausing. + :param err: if set to message goes to ``stderr`` instead of + ``stdout``, the same as with echo. + """ + if not isatty(sys.stdin) or not isatty(sys.stdout): + return + try: + if info: + echo(info, nl=False, err=err) + try: + getchar() + except (KeyboardInterrupt, EOFError): + pass + finally: + if info: + echo(err=err) diff --git a/openpype/vendor/python/python_2/click/testing.py b/openpype/vendor/python/python_2/click/testing.py new file mode 100644 index 0000000000..a3dba3b301 --- /dev/null +++ b/openpype/vendor/python/python_2/click/testing.py @@ -0,0 +1,382 @@ +import contextlib +import os +import shlex +import shutil +import sys +import tempfile + +from . import formatting +from . import termui +from . import utils +from ._compat import iteritems +from ._compat import PY2 +from ._compat import string_types + + +if PY2: + from cStringIO import StringIO +else: + import io + from ._compat import _find_binary_reader + + +class EchoingStdin(object): + def __init__(self, input, output): + self._input = input + self._output = output + + def __getattr__(self, x): + return getattr(self._input, x) + + def _echo(self, rv): + self._output.write(rv) + return rv + + def read(self, n=-1): + return self._echo(self._input.read(n)) + + def readline(self, n=-1): + return self._echo(self._input.readline(n)) + + def readlines(self): + return [self._echo(x) for x in self._input.readlines()] + + def __iter__(self): + return iter(self._echo(x) for x in self._input) + + def __repr__(self): + return repr(self._input) + + +def make_input_stream(input, charset): + # Is already an input stream. + if hasattr(input, "read"): + if PY2: + return input + rv = _find_binary_reader(input) + if rv is not None: + return rv + raise TypeError("Could not find binary reader for input stream.") + + if input is None: + input = b"" + elif not isinstance(input, bytes): + input = input.encode(charset) + if PY2: + return StringIO(input) + return io.BytesIO(input) + + +class Result(object): + """Holds the captured result of an invoked CLI script.""" + + def __init__( + self, runner, stdout_bytes, stderr_bytes, exit_code, exception, exc_info=None + ): + #: The runner that created the result + self.runner = runner + #: The standard output as bytes. + self.stdout_bytes = stdout_bytes + #: The standard error as bytes, or None if not available + self.stderr_bytes = stderr_bytes + #: The exit code as integer. + self.exit_code = exit_code + #: The exception that happened if one did. 
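+        #: (``None`` when the invocation finished without raising; see
+        #: ``CliRunner.invoke`` below.)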
+ self.exception = exception + #: The traceback + self.exc_info = exc_info + + @property + def output(self): + """The (standard) output as unicode string.""" + return self.stdout + + @property + def stdout(self): + """The standard output as unicode string.""" + return self.stdout_bytes.decode(self.runner.charset, "replace").replace( + "\r\n", "\n" + ) + + @property + def stderr(self): + """The standard error as unicode string.""" + if self.stderr_bytes is None: + raise ValueError("stderr not separately captured") + return self.stderr_bytes.decode(self.runner.charset, "replace").replace( + "\r\n", "\n" + ) + + def __repr__(self): + return "<{} {}>".format( + type(self).__name__, repr(self.exception) if self.exception else "okay" + ) + + +class CliRunner(object): + """The CLI runner provides functionality to invoke a Click command line + script for unittesting purposes in a isolated environment. This only + works in single-threaded systems without any concurrency as it changes the + global interpreter state. + + :param charset: the character set for the input and output data. This is + UTF-8 by default and should not be changed currently as + the reporting to Click only works in Python 2 properly. + :param env: a dictionary with environment variables for overriding. + :param echo_stdin: if this is set to `True`, then reading from stdin writes + to stdout. This is useful for showing examples in + some circumstances. Note that regular prompts + will automatically echo the input. + :param mix_stderr: if this is set to `False`, then stdout and stderr are + preserved as independent streams. This is useful for + Unix-philosophy apps that have predictable stdout and + noisy stderr, such that each may be measured + independently + """ + + def __init__(self, charset=None, env=None, echo_stdin=False, mix_stderr=True): + if charset is None: + charset = "utf-8" + self.charset = charset + self.env = env or {} + self.echo_stdin = echo_stdin + self.mix_stderr = mix_stderr + + def get_default_prog_name(self, cli): + """Given a command object it will return the default program name + for it. The default is the `name` attribute or ``"root"`` if not + set. + """ + return cli.name or "root" + + def make_env(self, overrides=None): + """Returns the environment overrides for invoking a script.""" + rv = dict(self.env) + if overrides: + rv.update(overrides) + return rv + + @contextlib.contextmanager + def isolation(self, input=None, env=None, color=False): + """A context manager that sets up the isolation for invoking of a + command line tool. This sets up stdin with the given input data + and `os.environ` with the overrides from the given dictionary. + This also rebinds some internals in Click to be mocked (like the + prompt functionality). + + This is automatically done in the :meth:`invoke` method. + + .. versionadded:: 4.0 + The ``color`` parameter was added. + + :param input: the input stream to put into sys.stdin. + :param env: the environment overrides as dictionary. + :param color: whether the output should contain color codes. The + application can still override this explicitly. 
+ """ + input = make_input_stream(input, self.charset) + + old_stdin = sys.stdin + old_stdout = sys.stdout + old_stderr = sys.stderr + old_forced_width = formatting.FORCED_WIDTH + formatting.FORCED_WIDTH = 80 + + env = self.make_env(env) + + if PY2: + bytes_output = StringIO() + if self.echo_stdin: + input = EchoingStdin(input, bytes_output) + sys.stdout = bytes_output + if not self.mix_stderr: + bytes_error = StringIO() + sys.stderr = bytes_error + else: + bytes_output = io.BytesIO() + if self.echo_stdin: + input = EchoingStdin(input, bytes_output) + input = io.TextIOWrapper(input, encoding=self.charset) + sys.stdout = io.TextIOWrapper(bytes_output, encoding=self.charset) + if not self.mix_stderr: + bytes_error = io.BytesIO() + sys.stderr = io.TextIOWrapper(bytes_error, encoding=self.charset) + + if self.mix_stderr: + sys.stderr = sys.stdout + + sys.stdin = input + + def visible_input(prompt=None): + sys.stdout.write(prompt or "") + val = input.readline().rstrip("\r\n") + sys.stdout.write("{}\n".format(val)) + sys.stdout.flush() + return val + + def hidden_input(prompt=None): + sys.stdout.write("{}\n".format(prompt or "")) + sys.stdout.flush() + return input.readline().rstrip("\r\n") + + def _getchar(echo): + char = sys.stdin.read(1) + if echo: + sys.stdout.write(char) + sys.stdout.flush() + return char + + default_color = color + + def should_strip_ansi(stream=None, color=None): + if color is None: + return not default_color + return not color + + old_visible_prompt_func = termui.visible_prompt_func + old_hidden_prompt_func = termui.hidden_prompt_func + old__getchar_func = termui._getchar + old_should_strip_ansi = utils.should_strip_ansi + termui.visible_prompt_func = visible_input + termui.hidden_prompt_func = hidden_input + termui._getchar = _getchar + utils.should_strip_ansi = should_strip_ansi + + old_env = {} + try: + for key, value in iteritems(env): + old_env[key] = os.environ.get(key) + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + yield (bytes_output, not self.mix_stderr and bytes_error) + finally: + for key, value in iteritems(old_env): + if value is None: + try: + del os.environ[key] + except Exception: + pass + else: + os.environ[key] = value + sys.stdout = old_stdout + sys.stderr = old_stderr + sys.stdin = old_stdin + termui.visible_prompt_func = old_visible_prompt_func + termui.hidden_prompt_func = old_hidden_prompt_func + termui._getchar = old__getchar_func + utils.should_strip_ansi = old_should_strip_ansi + formatting.FORCED_WIDTH = old_forced_width + + def invoke( + self, + cli, + args=None, + input=None, + env=None, + catch_exceptions=True, + color=False, + **extra + ): + """Invokes a command in an isolated environment. The arguments are + forwarded directly to the command line script, the `extra` keyword + arguments are passed to the :meth:`~clickpkg.Command.main` function of + the command. + + This returns a :class:`Result` object. + + .. versionadded:: 3.0 + The ``catch_exceptions`` parameter was added. + + .. versionchanged:: 3.0 + The result object now has an `exc_info` attribute with the + traceback if available. + + .. versionadded:: 4.0 + The ``color`` parameter was added. + + :param cli: the command to invoke + :param args: the arguments to invoke. It may be given as an iterable + or a string. When given as string it will be interpreted + as a Unix shell command. More details at + :func:`shlex.split`. + :param input: the input data for `sys.stdin`. + :param env: the environment overrides. 
+ :param catch_exceptions: Whether to catch any other exceptions than + ``SystemExit``. + :param extra: the keyword arguments to pass to :meth:`main`. + :param color: whether the output should contain color codes. The + application can still override this explicitly. + """ + exc_info = None + with self.isolation(input=input, env=env, color=color) as outstreams: + exception = None + exit_code = 0 + + if isinstance(args, string_types): + args = shlex.split(args) + + try: + prog_name = extra.pop("prog_name") + except KeyError: + prog_name = self.get_default_prog_name(cli) + + try: + cli.main(args=args or (), prog_name=prog_name, **extra) + except SystemExit as e: + exc_info = sys.exc_info() + exit_code = e.code + if exit_code is None: + exit_code = 0 + + if exit_code != 0: + exception = e + + if not isinstance(exit_code, int): + sys.stdout.write(str(exit_code)) + sys.stdout.write("\n") + exit_code = 1 + + except Exception as e: + if not catch_exceptions: + raise + exception = e + exit_code = 1 + exc_info = sys.exc_info() + finally: + sys.stdout.flush() + stdout = outstreams[0].getvalue() + if self.mix_stderr: + stderr = None + else: + stderr = outstreams[1].getvalue() + + return Result( + runner=self, + stdout_bytes=stdout, + stderr_bytes=stderr, + exit_code=exit_code, + exception=exception, + exc_info=exc_info, + ) + + @contextlib.contextmanager + def isolated_filesystem(self): + """A context manager that creates a temporary folder and changes + the current working directory to it for isolated filesystem tests. + """ + cwd = os.getcwd() + t = tempfile.mkdtemp() + os.chdir(t) + try: + yield t + finally: + os.chdir(cwd) + try: + shutil.rmtree(t) + except (OSError, IOError): # noqa: B014 + pass diff --git a/openpype/vendor/python/python_2/click/types.py b/openpype/vendor/python/python_2/click/types.py new file mode 100644 index 0000000000..505c39f850 --- /dev/null +++ b/openpype/vendor/python/python_2/click/types.py @@ -0,0 +1,762 @@ +import os +import stat +from datetime import datetime + +from ._compat import _get_argv_encoding +from ._compat import filename_to_ui +from ._compat import get_filesystem_encoding +from ._compat import get_streerror +from ._compat import open_stream +from ._compat import PY2 +from ._compat import text_type +from .exceptions import BadParameter +from .utils import LazyFile +from .utils import safecall + + +class ParamType(object): + """Helper for converting values through types. The following is + necessary for a valid type: + + * it needs a name + * it needs to pass through None unchanged + * it needs to convert from a string + * it needs to convert its result type through unchanged + (eg: needs to be idempotent) + * it needs to be able to deal with param and context being `None`. + This can be the case when the object is used with prompt + inputs. + """ + + is_composite = False + + #: the descriptive name of this type + name = None + + #: if a list of this type is expected and the value is pulled from a + #: string environment variable, this is what splits it up. `None` + #: means any whitespace. For all parameters the general rule is that + #: whitespace splits them up. The exception are paths and files which + #: are split by ``os.path.pathsep`` by default (":" on Unix and ";" on + #: Windows). 
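+    #: For instance, the :class:`File` and :class:`Path` types below set
+    #: this to ``os.path.pathsep``, so an environment value such as
+    #: ``"/a:/b"`` splits into two items on Unix.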
+ envvar_list_splitter = None + + def __call__(self, value, param=None, ctx=None): + if value is not None: + return self.convert(value, param, ctx) + + def get_metavar(self, param): + """Returns the metavar default for this param if it provides one.""" + + def get_missing_message(self, param): + """Optionally might return extra information about a missing + parameter. + + .. versionadded:: 2.0 + """ + + def convert(self, value, param, ctx): + """Converts the value. This is not invoked for values that are + `None` (the missing value). + """ + return value + + def split_envvar_value(self, rv): + """Given a value from an environment variable this splits it up + into small chunks depending on the defined envvar list splitter. + + If the splitter is set to `None`, which means that whitespace splits, + then leading and trailing whitespace is ignored. Otherwise, leading + and trailing splitters usually lead to empty items being included. + """ + return (rv or "").split(self.envvar_list_splitter) + + def fail(self, message, param=None, ctx=None): + """Helper method to fail with an invalid value message.""" + raise BadParameter(message, ctx=ctx, param=param) + + +class CompositeParamType(ParamType): + is_composite = True + + @property + def arity(self): + raise NotImplementedError() + + +class FuncParamType(ParamType): + def __init__(self, func): + self.name = func.__name__ + self.func = func + + def convert(self, value, param, ctx): + try: + return self.func(value) + except ValueError: + try: + value = text_type(value) + except UnicodeError: + value = str(value).decode("utf-8", "replace") + self.fail(value, param, ctx) + + +class UnprocessedParamType(ParamType): + name = "text" + + def convert(self, value, param, ctx): + return value + + def __repr__(self): + return "UNPROCESSED" + + +class StringParamType(ParamType): + name = "text" + + def convert(self, value, param, ctx): + if isinstance(value, bytes): + enc = _get_argv_encoding() + try: + value = value.decode(enc) + except UnicodeError: + fs_enc = get_filesystem_encoding() + if fs_enc != enc: + try: + value = value.decode(fs_enc) + except UnicodeError: + value = value.decode("utf-8", "replace") + else: + value = value.decode("utf-8", "replace") + return value + return value + + def __repr__(self): + return "STRING" + + +class Choice(ParamType): + """The choice type allows a value to be checked against a fixed set + of supported values. All of these values have to be strings. + + You should only pass a list or tuple of choices. Other iterables + (like generators) may lead to surprising results. + + The resulting value will always be one of the originally passed choices + regardless of ``case_sensitive`` or any ``ctx.token_normalize_func`` + being specified. + + See :ref:`choice-opts` for an example. + + :param case_sensitive: Set to false to make choices case + insensitive. Defaults to true. 
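+
+    A minimal usage sketch (illustrative, with a made-up option name)::
+
+        @click.option("--fmt", type=click.Choice(["json", "yaml"]))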
+ """ + + name = "choice" + + def __init__(self, choices, case_sensitive=True): + self.choices = choices + self.case_sensitive = case_sensitive + + def get_metavar(self, param): + return "[{}]".format("|".join(self.choices)) + + def get_missing_message(self, param): + return "Choose from:\n\t{}.".format(",\n\t".join(self.choices)) + + def convert(self, value, param, ctx): + # Match through normalization and case sensitivity + # first do token_normalize_func, then lowercase + # preserve original `value` to produce an accurate message in + # `self.fail` + normed_value = value + normed_choices = {choice: choice for choice in self.choices} + + if ctx is not None and ctx.token_normalize_func is not None: + normed_value = ctx.token_normalize_func(value) + normed_choices = { + ctx.token_normalize_func(normed_choice): original + for normed_choice, original in normed_choices.items() + } + + if not self.case_sensitive: + if PY2: + lower = str.lower + else: + lower = str.casefold + + normed_value = lower(normed_value) + normed_choices = { + lower(normed_choice): original + for normed_choice, original in normed_choices.items() + } + + if normed_value in normed_choices: + return normed_choices[normed_value] + + self.fail( + "invalid choice: {}. (choose from {})".format( + value, ", ".join(self.choices) + ), + param, + ctx, + ) + + def __repr__(self): + return "Choice('{}')".format(list(self.choices)) + + +class DateTime(ParamType): + """The DateTime type converts date strings into `datetime` objects. + + The format strings which are checked are configurable, but default to some + common (non-timezone aware) ISO 8601 formats. + + When specifying *DateTime* formats, you should only pass a list or a tuple. + Other iterables, like generators, may lead to surprising results. + + The format strings are processed using ``datetime.strptime``, and this + consequently defines the format strings which are allowed. + + Parsing is tried using each format, in order, and the first format which + parses successfully is used. + + :param formats: A list or tuple of date format strings, in the order in + which they should be tried. Defaults to + ``'%Y-%m-%d'``, ``'%Y-%m-%dT%H:%M:%S'``, + ``'%Y-%m-%d %H:%M:%S'``. + """ + + name = "datetime" + + def __init__(self, formats=None): + self.formats = formats or ["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S"] + + def get_metavar(self, param): + return "[{}]".format("|".join(self.formats)) + + def _try_to_convert_date(self, value, format): + try: + return datetime.strptime(value, format) + except ValueError: + return None + + def convert(self, value, param, ctx): + # Exact match + for format in self.formats: + dtime = self._try_to_convert_date(value, format) + if dtime: + return dtime + + self.fail( + "invalid datetime format: {}. (choose from {})".format( + value, ", ".join(self.formats) + ) + ) + + def __repr__(self): + return "DateTime" + + +class IntParamType(ParamType): + name = "integer" + + def convert(self, value, param, ctx): + try: + return int(value) + except ValueError: + self.fail("{} is not a valid integer".format(value), param, ctx) + + def __repr__(self): + return "INT" + + +class IntRange(IntParamType): + """A parameter that works similar to :data:`click.INT` but restricts + the value to fit into a range. The default behavior is to fail if the + value falls outside the range, but it can also be silently clamped + between the two edges. + + See :ref:`ranges` for an example. 
+ """ + + name = "integer range" + + def __init__(self, min=None, max=None, clamp=False): + self.min = min + self.max = max + self.clamp = clamp + + def convert(self, value, param, ctx): + rv = IntParamType.convert(self, value, param, ctx) + if self.clamp: + if self.min is not None and rv < self.min: + return self.min + if self.max is not None and rv > self.max: + return self.max + if ( + self.min is not None + and rv < self.min + or self.max is not None + and rv > self.max + ): + if self.min is None: + self.fail( + "{} is bigger than the maximum valid value {}.".format( + rv, self.max + ), + param, + ctx, + ) + elif self.max is None: + self.fail( + "{} is smaller than the minimum valid value {}.".format( + rv, self.min + ), + param, + ctx, + ) + else: + self.fail( + "{} is not in the valid range of {} to {}.".format( + rv, self.min, self.max + ), + param, + ctx, + ) + return rv + + def __repr__(self): + return "IntRange({}, {})".format(self.min, self.max) + + +class FloatParamType(ParamType): + name = "float" + + def convert(self, value, param, ctx): + try: + return float(value) + except ValueError: + self.fail( + "{} is not a valid floating point value".format(value), param, ctx + ) + + def __repr__(self): + return "FLOAT" + + +class FloatRange(FloatParamType): + """A parameter that works similar to :data:`click.FLOAT` but restricts + the value to fit into a range. The default behavior is to fail if the + value falls outside the range, but it can also be silently clamped + between the two edges. + + See :ref:`ranges` for an example. + """ + + name = "float range" + + def __init__(self, min=None, max=None, clamp=False): + self.min = min + self.max = max + self.clamp = clamp + + def convert(self, value, param, ctx): + rv = FloatParamType.convert(self, value, param, ctx) + if self.clamp: + if self.min is not None and rv < self.min: + return self.min + if self.max is not None and rv > self.max: + return self.max + if ( + self.min is not None + and rv < self.min + or self.max is not None + and rv > self.max + ): + if self.min is None: + self.fail( + "{} is bigger than the maximum valid value {}.".format( + rv, self.max + ), + param, + ctx, + ) + elif self.max is None: + self.fail( + "{} is smaller than the minimum valid value {}.".format( + rv, self.min + ), + param, + ctx, + ) + else: + self.fail( + "{} is not in the valid range of {} to {}.".format( + rv, self.min, self.max + ), + param, + ctx, + ) + return rv + + def __repr__(self): + return "FloatRange({}, {})".format(self.min, self.max) + + +class BoolParamType(ParamType): + name = "boolean" + + def convert(self, value, param, ctx): + if isinstance(value, bool): + return bool(value) + value = value.lower() + if value in ("true", "t", "1", "yes", "y"): + return True + elif value in ("false", "f", "0", "no", "n"): + return False + self.fail("{} is not a valid boolean".format(value), param, ctx) + + def __repr__(self): + return "BOOL" + + +class UUIDParameterType(ParamType): + name = "uuid" + + def convert(self, value, param, ctx): + import uuid + + try: + if PY2 and isinstance(value, text_type): + value = value.encode("ascii") + return uuid.UUID(value) + except ValueError: + self.fail("{} is not a valid UUID value".format(value), param, ctx) + + def __repr__(self): + return "UUID" + + +class File(ParamType): + """Declares a parameter to be a file for reading or writing. The file + is automatically closed once the context tears down (after the command + finished working). + + Files can be opened for reading or writing. 
The special value ``-`` + indicates stdin or stdout depending on the mode. + + By default, the file is opened for reading text data, but it can also be + opened in binary mode or for writing. The encoding parameter can be used + to force a specific encoding. + + The `lazy` flag controls if the file should be opened immediately or upon + first IO. The default is to be non-lazy for standard input and output + streams as well as files opened for reading, `lazy` otherwise. When opening a + file lazily for reading, it is still opened temporarily for validation, but + will not be held open until first IO. lazy is mainly useful when opening + for writing to avoid creating the file until it is needed. + + Starting with Click 2.0, files can also be opened atomically in which + case all writes go into a separate file in the same folder and upon + completion the file will be moved over to the original location. This + is useful if a file regularly read by other users is modified. + + See :ref:`file-args` for more information. + """ + + name = "filename" + envvar_list_splitter = os.path.pathsep + + def __init__( + self, mode="r", encoding=None, errors="strict", lazy=None, atomic=False + ): + self.mode = mode + self.encoding = encoding + self.errors = errors + self.lazy = lazy + self.atomic = atomic + + def resolve_lazy_flag(self, value): + if self.lazy is not None: + return self.lazy + if value == "-": + return False + elif "w" in self.mode: + return True + return False + + def convert(self, value, param, ctx): + try: + if hasattr(value, "read") or hasattr(value, "write"): + return value + + lazy = self.resolve_lazy_flag(value) + + if lazy: + f = LazyFile( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + if ctx is not None: + ctx.call_on_close(f.close_intelligently) + return f + + f, should_close = open_stream( + value, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + # If a context is provided, we automatically close the file + # at the end of the context execution (or flush out). If a + # context does not exist, it's the caller's responsibility to + # properly close the file. This for instance happens when the + # type is used with prompts. + if ctx is not None: + if should_close: + ctx.call_on_close(safecall(f.close)) + else: + ctx.call_on_close(safecall(f.flush)) + return f + except (IOError, OSError) as e: # noqa: B014 + self.fail( + "Could not open file: {}: {}".format( + filename_to_ui(value), get_streerror(e) + ), + param, + ctx, + ) + + +class Path(ParamType): + """The path type is similar to the :class:`File` type but it performs + different checks. First of all, instead of returning an open file + handle it returns just the filename. Secondly, it can perform various + basic checks about what the file or directory should be. + + .. versionchanged:: 6.0 + `allow_dash` was added. + + :param exists: if set to true, the file or directory needs to exist for + this value to be valid. If this is not required and a + file does indeed not exist, then all further checks are + silently skipped. + :param file_okay: controls if a file is a possible value. + :param dir_okay: controls if a directory is a possible value. + :param writable: if true, a writable check is performed. + :param readable: if true, a readable check is performed. + :param resolve_path: if this is true, then the path is fully resolved + before the value is passed onwards. This means + that it's absolute and symlinks are resolved. 
It + will not expand a tilde-prefix, as this is + supposed to be done by the shell only. + :param allow_dash: If this is set to `True`, a single dash to indicate + standard streams is permitted. + :param path_type: optionally a string type that should be used to + represent the path. The default is `None` which + means the return value will be either bytes or + unicode depending on what makes most sense given the + input data Click deals with. + """ + + envvar_list_splitter = os.path.pathsep + + def __init__( + self, + exists=False, + file_okay=True, + dir_okay=True, + writable=False, + readable=True, + resolve_path=False, + allow_dash=False, + path_type=None, + ): + self.exists = exists + self.file_okay = file_okay + self.dir_okay = dir_okay + self.writable = writable + self.readable = readable + self.resolve_path = resolve_path + self.allow_dash = allow_dash + self.type = path_type + + if self.file_okay and not self.dir_okay: + self.name = "file" + self.path_type = "File" + elif self.dir_okay and not self.file_okay: + self.name = "directory" + self.path_type = "Directory" + else: + self.name = "path" + self.path_type = "Path" + + def coerce_path_result(self, rv): + if self.type is not None and not isinstance(rv, self.type): + if self.type is text_type: + rv = rv.decode(get_filesystem_encoding()) + else: + rv = rv.encode(get_filesystem_encoding()) + return rv + + def convert(self, value, param, ctx): + rv = value + + is_dash = self.file_okay and self.allow_dash and rv in (b"-", "-") + + if not is_dash: + if self.resolve_path: + rv = os.path.realpath(rv) + + try: + st = os.stat(rv) + except OSError: + if not self.exists: + return self.coerce_path_result(rv) + self.fail( + "{} '{}' does not exist.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + + if not self.file_okay and stat.S_ISREG(st.st_mode): + self.fail( + "{} '{}' is a file.".format(self.path_type, filename_to_ui(value)), + param, + ctx, + ) + if not self.dir_okay and stat.S_ISDIR(st.st_mode): + self.fail( + "{} '{}' is a directory.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + if self.writable and not os.access(value, os.W_OK): + self.fail( + "{} '{}' is not writable.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + if self.readable and not os.access(value, os.R_OK): + self.fail( + "{} '{}' is not readable.".format( + self.path_type, filename_to_ui(value) + ), + param, + ctx, + ) + + return self.coerce_path_result(rv) + + +class Tuple(CompositeParamType): + """The default behavior of Click is to apply a type on a value directly. + This works well in most cases, except for when `nargs` is set to a fixed + count and different types should be used for different items. In this + case the :class:`Tuple` type can be used. This type can only be used + if `nargs` is set to a fixed number. + + For more information see :ref:`tuple-type`. + + This can be selected by using a Python tuple literal as a type. + + :param types: a list of types that should be used for the tuple items. + """ + + def __init__(self, types): + self.types = [convert_type(ty) for ty in types] + + @property + def name(self): + return "<{}>".format(" ".join(ty.name for ty in self.types)) + + @property + def arity(self): + return len(self.types) + + def convert(self, value, param, ctx): + if len(value) != len(self.types): + raise TypeError( + "It would appear that nargs is set to conflict with the" + " composite type arity." 
+ ) + return tuple(ty(x, param, ctx) for ty, x in zip(self.types, value)) + + +def convert_type(ty, default=None): + """Converts a callable or python type into the most appropriate + param type. + """ + guessed_type = False + if ty is None and default is not None: + if isinstance(default, tuple): + ty = tuple(map(type, default)) + else: + ty = type(default) + guessed_type = True + + if isinstance(ty, tuple): + return Tuple(ty) + if isinstance(ty, ParamType): + return ty + if ty is text_type or ty is str or ty is None: + return STRING + if ty is int: + return INT + # Booleans are only okay if not guessed. This is done because for + # flags the default value is actually a bit of a lie in that it + # indicates which of the flags is the one we want. See get_default() + # for more information. + if ty is bool and not guessed_type: + return BOOL + if ty is float: + return FLOAT + if guessed_type: + return STRING + + # Catch a common mistake + if __debug__: + try: + if issubclass(ty, ParamType): + raise AssertionError( + "Attempted to use an uninstantiated parameter type ({}).".format(ty) + ) + except TypeError: + pass + return FuncParamType(ty) + + +#: A dummy parameter type that just does nothing. From a user's +#: perspective this appears to just be the same as `STRING` but internally +#: no string conversion takes place. This is necessary to achieve the +#: same bytes/unicode behavior on Python 2/3 in situations where you want +#: to not convert argument types. This is usually useful when working +#: with file paths as they can appear in bytes and unicode. +#: +#: For path related uses the :class:`Path` type is a better choice but +#: there are situations where an unprocessed type is useful which is why +#: it is is provided. +#: +#: .. versionadded:: 4.0 +UNPROCESSED = UnprocessedParamType() + +#: A unicode string parameter type which is the implicit default. This +#: can also be selected by using ``str`` as type. +STRING = StringParamType() + +#: An integer parameter. This can also be selected by using ``int`` as +#: type. +INT = IntParamType() + +#: A floating point value parameter. This can also be selected by using +#: ``float`` as type. +FLOAT = FloatParamType() + +#: A boolean parameter. This is the default for boolean flags. This can +#: also be selected by using ``bool`` as a type. +BOOL = BoolParamType() + +#: A UUID parameter. 
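+#: An illustrative use with a hypothetical option name:
+#: ``@click.option("--job-id", type=click.UUID)`` converts the raw string
+#: via ``uuid.UUID`` and raises ``BadParameter`` on invalid input.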
+UUID = UUIDParameterType()
diff --git a/openpype/vendor/python/python_2/click/utils.py b/openpype/vendor/python/python_2/click/utils.py
new file mode 100644
index 0000000000..79265e732d
--- /dev/null
+++ b/openpype/vendor/python/python_2/click/utils.py
@@ -0,0 +1,455 @@
+import os
+import sys
+
+from ._compat import _default_text_stderr
+from ._compat import _default_text_stdout
+from ._compat import auto_wrap_for_ansi
+from ._compat import binary_streams
+from ._compat import filename_to_ui
+from ._compat import get_filesystem_encoding
+from ._compat import get_streerror
+from ._compat import is_bytes
+from ._compat import open_stream
+from ._compat import PY2
+from ._compat import should_strip_ansi
+from ._compat import string_types
+from ._compat import strip_ansi
+from ._compat import text_streams
+from ._compat import text_type
+from ._compat import WIN
+from .globals import resolve_color_default
+
+if not PY2:
+    from ._compat import _find_binary_writer
+elif WIN:
+    from ._winconsole import _get_windows_argv
+    from ._winconsole import _hash_py_argv
+    from ._winconsole import _initial_argv_hash
+
+echo_native_types = string_types + (bytes, bytearray)
+
+
+def _posixify(name):
+    return "-".join(name.split()).lower()
+
+
+def safecall(func):
+    """Wraps a function so that it swallows exceptions."""
+
+    def wrapper(*args, **kwargs):
+        try:
+            return func(*args, **kwargs)
+        except Exception:
+            pass
+
+    return wrapper
+
+
+def make_str(value):
+    """Converts a value into a valid string."""
+    if isinstance(value, bytes):
+        try:
+            return value.decode(get_filesystem_encoding())
+        except UnicodeError:
+            return value.decode("utf-8", "replace")
+    return text_type(value)
+
+
+def make_default_short_help(help, max_length=45):
+    """Return a condensed version of help string."""
+    words = help.split()
+    total_length = 0
+    result = []
+    done = False
+
+    for word in words:
+        if word[-1:] == ".":
+            done = True
+        new_length = 1 + len(word) if result else len(word)
+        if total_length + new_length > max_length:
+            result.append("...")
+            done = True
+        else:
+            if result:
+                result.append(" ")
+            result.append(word)
+        if done:
+            break
+        total_length += new_length
+
+    return "".join(result)
+
+
+class LazyFile(object):
+    """A lazy file works like a regular file, but it does not fully open
+    the file; it only performs some basic checks early to see if the
+    filename parameter makes sense. This is useful for safely opening
+    files for writing.
+    """
+
+    def __init__(
+        self, filename, mode="r", encoding=None, errors="strict", atomic=False
+    ):
+        self.name = filename
+        self.mode = mode
+        self.encoding = encoding
+        self.errors = errors
+        self.atomic = atomic
+
+        if filename == "-":
+            self._f, self.should_close = open_stream(filename, mode, encoding, errors)
+        else:
+            if "r" in mode:
+                # Open and close the file in case we're opening it for
+                # reading so that we can catch at least some errors in
+                # some cases early.
+                open(filename, mode).close()
+            self._f = None
+            self.should_close = True
+
+    def __getattr__(self, name):
+        return getattr(self.open(), name)
+
+    def __repr__(self):
+        if self._f is not None:
+            return repr(self._f)
+        return "<unopened file '{}' {}>".format(self.name, self.mode)
+
+    def open(self):
+        """Opens the file if it's not yet open. This call might fail with
+        a :exc:`FileError`. Not handling this error will produce an error
+        that Click shows.
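+        (Illustrative: attribute access such as
+        ``LazyFile("out.txt", "w").write`` triggers this open through
+        ``__getattr__`` above.)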
+ """ + if self._f is not None: + return self._f + try: + rv, self.should_close = open_stream( + self.name, self.mode, self.encoding, self.errors, atomic=self.atomic + ) + except (IOError, OSError) as e: # noqa: E402 + from .exceptions import FileError + + raise FileError(self.name, hint=get_streerror(e)) + self._f = rv + return rv + + def close(self): + """Closes the underlying file, no matter what.""" + if self._f is not None: + self._f.close() + + def close_intelligently(self): + """This function only closes the file if it was opened by the lazy + file wrapper. For instance this will never close stdin. + """ + if self.should_close: + self.close() + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, tb): + self.close_intelligently() + + def __iter__(self): + self.open() + return iter(self._f) + + +class KeepOpenFile(object): + def __init__(self, file): + self._file = file + + def __getattr__(self, name): + return getattr(self._file, name) + + def __enter__(self): + return self + + def __exit__(self, exc_type, exc_value, tb): + pass + + def __repr__(self): + return repr(self._file) + + def __iter__(self): + return iter(self._file) + + +def echo(message=None, file=None, nl=True, err=False, color=None): + """Prints a message plus a newline to the given file or stdout. On + first sight, this looks like the print function, but it has improved + support for handling Unicode and binary data that does not fail no + matter how badly configured the system is. + + Primarily it means that you can print binary data as well as Unicode + data on both 2.x and 3.x to the given file in the most appropriate way + possible. This is a very carefree function in that it will try its + best to not fail. As of Click 6.0 this includes support for unicode + output on the Windows console. + + In addition to that, if `colorama`_ is installed, the echo function will + also support clever handling of ANSI codes. Essentially it will then + do the following: + + - add transparent handling of ANSI color codes on Windows. + - hide ANSI codes automatically if the destination file is not a + terminal. + + .. _colorama: https://pypi.org/project/colorama/ + + .. versionchanged:: 6.0 + As of Click 6.0 the echo function will properly support unicode + output on the windows console. Not that click does not modify + the interpreter in any way which means that `sys.stdout` or the + print statement or function will still not provide unicode support. + + .. versionchanged:: 2.0 + Starting with version 2.0 of Click, the echo function will work + with colorama if it's installed. + + .. versionadded:: 3.0 + The `err` parameter was added. + + .. versionchanged:: 4.0 + Added the `color` flag. + + :param message: the message to print + :param file: the file to write to (defaults to ``stdout``) + :param err: if set to true the file defaults to ``stderr`` instead of + ``stdout``. This is faster and easier than calling + :func:`get_text_stderr` yourself. + :param nl: if set to `True` (the default) a newline is printed afterwards. + :param color: controls if the terminal supports ANSI colors or not. The + default is autodetection. + """ + if file is None: + if err: + file = _default_text_stderr() + else: + file = _default_text_stdout() + + # Convert non bytes/text into the native string type. 
+
+
+def get_binary_stream(name):
+    """Returns a system stream for byte processing. This essentially
+    returns the stream from the sys module with the given name but it
+    solves some compatibility issues between different Python versions.
+    Primarily this function is necessary for getting binary streams on
+    Python 3.
+
+    :param name: the name of the stream to open. Valid names are ``'stdin'``,
+                 ``'stdout'`` and ``'stderr'``
+    """
+    opener = binary_streams.get(name)
+    if opener is None:
+        raise TypeError("Unknown standard stream '{}'".format(name))
+    return opener()
+
+
+def get_text_stream(name, encoding=None, errors="strict"):
+    """Returns a system stream for text processing. This usually returns
+    a wrapped stream around a binary stream returned from
+    :func:`get_binary_stream` but it also can take shortcuts on Python 3
+    for already correctly configured streams.
+
+    :param name: the name of the stream to open. Valid names are ``'stdin'``,
+                 ``'stdout'`` and ``'stderr'``
+    :param encoding: overrides the detected default encoding.
+    :param errors: overrides the default error mode.
+    """
+    opener = text_streams.get(name)
+    if opener is None:
+        raise TypeError("Unknown standard stream '{}'".format(name))
+    return opener(encoding, errors)
+
+
+def open_file(
+    filename, mode="r", encoding=None, errors="strict", lazy=False, atomic=False
+):
+    """This is similar to how the :class:`File` works but for manual
+    usage. Files are opened non-lazily by default. This can open regular
+    files as well as stdin/stdout if ``'-'`` is passed.
+
+    If stdin/stdout is returned the stream is wrapped so that the context
+    manager will not close the stream accidentally. This makes it possible
+    to always use the function like this without having to worry about
+    accidentally closing a standard stream::
+
+        with open_file(filename) as f:
+            ...
+
+    .. versionadded:: 3.0
+
+    :param filename: the name of the file to open (or ``'-'`` for stdin/stdout).
+    :param mode: the mode in which to open the file.
+    :param encoding: the encoding to use.
+    :param errors: the error handling for this file.
+    :param lazy: can be flipped to true to open the file lazily.
+    :param atomic: in atomic mode writes go into a temporary file and it's
+                   moved on close.
+    """
+    if lazy:
+        return LazyFile(filename, mode, encoding, errors, atomic=atomic)
+    f, should_close = open_stream(filename, mode, encoding, errors, atomic=atomic)
+    if not should_close:
+        f = KeepOpenFile(f)
+    return f
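+
+# --- Editor's note: illustrative sketch, not part of the vendored click
+# source. It contrasts a regular path with the "-" stdout alias handled
+# by open_file() above; "report.txt" is a hypothetical placeholder.
+#
+#     with open_file("report.txt", mode="w") as f:
+#         f.write(u"closed normally on exit\n")
+#     with open_file("-", mode="w") as f:
+#         f.write(u"goes to stdout; KeepOpenFile leaves the stream open\n")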
+
+
+def get_os_args():
+    """This returns the argument part of sys.argv in the most appropriate
+    form for processing. What this means is that this return value is in
+    a format that works for Click to process but does not necessarily
+    correspond well to what's actually standard for the interpreter.
+
+    On most environments the return value is ``sys.argv[1:]`` unchanged.
+    However if you are on Windows and running Python 2 the return value
+    will actually be a list of unicode strings instead because the
+    default behavior on that platform otherwise will not be able to
+    carry all possible values that sys.argv can have.
+
+    .. versionadded:: 6.0
+    """
+    # We can only extract the unicode argv if sys.argv has not been
+    # changed since the startup of the application.
+    if PY2 and WIN and _initial_argv_hash == _hash_py_argv():
+        return _get_windows_argv()
+    return sys.argv[1:]
+
+
+def format_filename(filename, shorten=False):
+    """Formats a filename for user display. The main purpose of this
+    function is to ensure that the filename can be displayed at all. This
+    will decode the filename to unicode if necessary in a way that it will
+    not fail. Optionally, it can shorten the filename to not include the
+    full path to the filename.
+
+    :param filename: formats a filename for UI display. This will also convert
+                     the filename into unicode without failing.
+    :param shorten: this optionally shortens the filename to strip off the
+                    path that leads up to it.
+    """
+    if shorten:
+        filename = os.path.basename(filename)
+    return filename_to_ui(filename)
+
+
+def get_app_dir(app_name, roaming=True, force_posix=False):
+    r"""Returns the config folder for the application. The default behavior
+    is to return whatever is most appropriate for the operating system.
+
+    To give you an idea, for an app called ``"Foo Bar"``, something like
+    the following folders could be returned:
+
+    Mac OS X:
+      ``~/Library/Application Support/Foo Bar``
+    Mac OS X (POSIX):
+      ``~/.foo-bar``
+    Unix:
+      ``~/.config/foo-bar``
+    Unix (POSIX):
+      ``~/.foo-bar``
+    Win XP (roaming):
+      ``C:\Documents and Settings\<user>\Local Settings\Application Data\Foo Bar``
+    Win XP (not roaming):
+      ``C:\Documents and Settings\<user>\Application Data\Foo Bar``
+    Win 7 (roaming):
+      ``C:\Users\<user>\AppData\Roaming\Foo Bar``
+    Win 7 (not roaming):
+      ``C:\Users\<user>\AppData\Local\Foo Bar``
+
+    .. versionadded:: 2.0
+
+    :param app_name: the application name. This should be properly capitalized
+                     and can contain whitespace.
+    :param roaming: controls if the folder should be roaming or not on Windows.
+                    Has no effect otherwise.
+    :param force_posix: if this is set to `True` then on any POSIX system the
+                        folder will be stored in the home folder with a leading
+                        dot instead of the XDG config home or darwin's
+                        application support folder.
+    """
+    if WIN:
+        key = "APPDATA" if roaming else "LOCALAPPDATA"
+        folder = os.environ.get(key)
+        if folder is None:
+            folder = os.path.expanduser("~")
+        return os.path.join(folder, app_name)
+    if force_posix:
+        return os.path.join(os.path.expanduser("~/.{}".format(_posixify(app_name))))
+    if sys.platform == "darwin":
+        return os.path.join(
+            os.path.expanduser("~/Library/Application Support"), app_name
+        )
+    return os.path.join(
+        os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")),
+        _posixify(app_name),
+    )
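+
+# --- Editor's note: illustrative sketch, not part of the vendored click
+# source. Rough per-platform results of the lookup implemented above:
+#
+#     get_app_dir("Foo Bar")                    # ~/.config/foo-bar on Linux
+#     get_app_dir("Foo Bar", roaming=False)     # %LOCALAPPDATA%\Foo Bar on Windows
+#     get_app_dir("Foo Bar", force_posix=True)  # ~/.foo-bar on any POSIX system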
+ """ + if WIN: + key = "APPDATA" if roaming else "LOCALAPPDATA" + folder = os.environ.get(key) + if folder is None: + folder = os.path.expanduser("~") + return os.path.join(folder, app_name) + if force_posix: + return os.path.join(os.path.expanduser("~/.{}".format(_posixify(app_name)))) + if sys.platform == "darwin": + return os.path.join( + os.path.expanduser("~/Library/Application Support"), app_name + ) + return os.path.join( + os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config")), + _posixify(app_name), + ) + + +class PacifyFlushWrapper(object): + """This wrapper is used to catch and suppress BrokenPipeErrors resulting + from ``.flush()`` being called on broken pipe during the shutdown/final-GC + of the Python interpreter. Notably ``.flush()`` is always called on + ``sys.stdout`` and ``sys.stderr``. So as to have minimal impact on any + other cleanup code, and the case where the underlying file is not a broken + pipe, all calls and attributes are proxied. + """ + + def __init__(self, wrapped): + self.wrapped = wrapped + + def flush(self): + try: + self.wrapped.flush() + except IOError as e: + import errno + + if e.errno != errno.EPIPE: + raise + + def __getattr__(self, attr): + return getattr(self.wrapped, attr) diff --git a/openpype/version.py b/openpype/version.py index cdd546c4a0..6d89e1eeae 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.15.12-nightly.4" +__version__ = "3.16.5-nightly.2" diff --git a/poetry.lock b/poetry.lock index f71611cb6f..5621d39988 100644 --- a/poetry.lock +++ b/poetry.lock @@ -18,106 +18,106 @@ resolved_reference = "126f7a188cfe36718f707f42ebbc597e86aa86c3" [[package]] name = "aiohttp" -version = "3.8.3" +version = "3.8.4" description = "Async http client/server framework (asyncio)" category = "main" optional = false python-versions = ">=3.6" files = [ - {file = "aiohttp-3.8.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ba71c9b4dcbb16212f334126cc3d8beb6af377f6703d9dc2d9fb3874fd667ee9"}, - {file = "aiohttp-3.8.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d24b8bb40d5c61ef2d9b6a8f4528c2f17f1c5d2d31fed62ec860f6006142e83e"}, - {file = "aiohttp-3.8.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f88df3a83cf9df566f171adba39d5bd52814ac0b94778d2448652fc77f9eb491"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b97decbb3372d4b69e4d4c8117f44632551c692bb1361b356a02b97b69e18a62"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:309aa21c1d54b8ef0723181d430347d7452daaff93e8e2363db8e75c72c2fb2d"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad5383a67514e8e76906a06741febd9126fc7c7ff0f599d6fcce3e82b80d026f"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:20acae4f268317bb975671e375493dbdbc67cddb5f6c71eebdb85b34444ac46b"}, - {file = "aiohttp-3.8.3-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:05a3c31c6d7cd08c149e50dc7aa2568317f5844acd745621983380597f027a18"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d6f76310355e9fae637c3162936e9504b4767d5c52ca268331e2756e54fd4ca5"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:256deb4b29fe5e47893fa32e1de2d73c3afe7407738bd3c63829874661d4822d"}, - {file = 
"aiohttp-3.8.3-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:5c59fcd80b9049b49acd29bd3598cada4afc8d8d69bd4160cd613246912535d7"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:059a91e88f2c00fe40aed9031b3606c3f311414f86a90d696dd982e7aec48142"}, - {file = "aiohttp-3.8.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:2feebbb6074cdbd1ac276dbd737b40e890a1361b3cc30b74ac2f5e24aab41f7b"}, - {file = "aiohttp-3.8.3-cp310-cp310-win32.whl", hash = "sha256:5bf651afd22d5f0c4be16cf39d0482ea494f5c88f03e75e5fef3a85177fecdeb"}, - {file = "aiohttp-3.8.3-cp310-cp310-win_amd64.whl", hash = "sha256:653acc3880459f82a65e27bd6526e47ddf19e643457d36a2250b85b41a564715"}, - {file = "aiohttp-3.8.3-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:86fc24e58ecb32aee09f864cb11bb91bc4c1086615001647dbfc4dc8c32f4008"}, - {file = "aiohttp-3.8.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:75e14eac916f024305db517e00a9252714fce0abcb10ad327fb6dcdc0d060f1d"}, - {file = "aiohttp-3.8.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d1fde0f44029e02d02d3993ad55ce93ead9bb9b15c6b7ccd580f90bd7e3de476"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ab94426ddb1ecc6a0b601d832d5d9d421820989b8caa929114811369673235c"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:89d2e02167fa95172c017732ed7725bc8523c598757f08d13c5acca308e1a061"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:02f9a2c72fc95d59b881cf38a4b2be9381b9527f9d328771e90f72ac76f31ad8"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c7149272fb5834fc186328e2c1fa01dda3e1fa940ce18fded6d412e8f2cf76d"}, - {file = "aiohttp-3.8.3-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:512bd5ab136b8dc0ffe3fdf2dfb0c4b4f49c8577f6cae55dca862cd37a4564e2"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:7018ecc5fe97027214556afbc7c502fbd718d0740e87eb1217b17efd05b3d276"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:88c70ed9da9963d5496d38320160e8eb7e5f1886f9290475a881db12f351ab5d"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:da22885266bbfb3f78218dc40205fed2671909fbd0720aedba39b4515c038091"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:e65bc19919c910127c06759a63747ebe14f386cda573d95bcc62b427ca1afc73"}, - {file = "aiohttp-3.8.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:08c78317e950e0762c2983f4dd58dc5e6c9ff75c8a0efeae299d363d439c8e34"}, - {file = "aiohttp-3.8.3-cp311-cp311-win32.whl", hash = "sha256:45d88b016c849d74ebc6f2b6e8bc17cabf26e7e40c0661ddd8fae4c00f015697"}, - {file = "aiohttp-3.8.3-cp311-cp311-win_amd64.whl", hash = "sha256:96372fc29471646b9b106ee918c8eeb4cca423fcbf9a34daa1b93767a88a2290"}, - {file = "aiohttp-3.8.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c971bf3786b5fad82ce5ad570dc6ee420f5b12527157929e830f51c55dc8af77"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ff25f48fc8e623d95eca0670b8cc1469a83783c924a602e0fbd47363bb54aaca"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e381581b37db1db7597b62a2e6b8b57c3deec95d93b6d6407c5b61ddc98aca6d"}, - {file = 
"aiohttp-3.8.3-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:db19d60d846283ee275d0416e2a23493f4e6b6028825b51290ac05afc87a6f97"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25892c92bee6d9449ffac82c2fe257f3a6f297792cdb18ad784737d61e7a9a85"}, - {file = "aiohttp-3.8.3-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:398701865e7a9565d49189f6c90868efaca21be65c725fc87fc305906be915da"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:4a4fbc769ea9b6bd97f4ad0b430a6807f92f0e5eb020f1e42ece59f3ecfc4585"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:b29bfd650ed8e148f9c515474a6ef0ba1090b7a8faeee26b74a8ff3b33617502"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_ppc64le.whl", hash = "sha256:1e56b9cafcd6531bab5d9b2e890bb4937f4165109fe98e2b98ef0dcfcb06ee9d"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_s390x.whl", hash = "sha256:ec40170327d4a404b0d91855d41bfe1fe4b699222b2b93e3d833a27330a87a6d"}, - {file = "aiohttp-3.8.3-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:2df5f139233060578d8c2c975128fb231a89ca0a462b35d4b5fcf7c501ebdbe1"}, - {file = "aiohttp-3.8.3-cp36-cp36m-win32.whl", hash = "sha256:f973157ffeab5459eefe7b97a804987876dd0a55570b8fa56b4e1954bf11329b"}, - {file = "aiohttp-3.8.3-cp36-cp36m-win_amd64.whl", hash = "sha256:437399385f2abcd634865705bdc180c8314124b98299d54fe1d4c8990f2f9494"}, - {file = "aiohttp-3.8.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:09e28f572b21642128ef31f4e8372adb6888846f32fecb288c8b0457597ba61a"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6f3553510abdbec67c043ca85727396ceed1272eef029b050677046d3387be8d"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e168a7560b7c61342ae0412997b069753f27ac4862ec7867eff74f0fe4ea2ad9"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:db4c979b0b3e0fa7e9e69ecd11b2b3174c6963cebadeecfb7ad24532ffcdd11a"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e164e0a98e92d06da343d17d4e9c4da4654f4a4588a20d6c73548a29f176abe2"}, - {file = "aiohttp-3.8.3-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e8a78079d9a39ca9ca99a8b0ac2fdc0c4d25fc80c8a8a82e5c8211509c523363"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:21b30885a63c3f4ff5b77a5d6caf008b037cb521a5f33eab445dc566f6d092cc"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:4b0f30372cef3fdc262f33d06e7b411cd59058ce9174ef159ad938c4a34a89da"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:8135fa153a20d82ffb64f70a1b5c2738684afa197839b34cc3e3c72fa88d302c"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:ad61a9639792fd790523ba072c0555cd6be5a0baf03a49a5dd8cfcf20d56df48"}, - {file = "aiohttp-3.8.3-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:978b046ca728073070e9abc074b6299ebf3501e8dee5e26efacb13cec2b2dea0"}, - {file = "aiohttp-3.8.3-cp37-cp37m-win32.whl", hash = "sha256:0d2c6d8c6872df4a6ec37d2ede71eff62395b9e337b4e18efd2177de883a5033"}, - {file = "aiohttp-3.8.3-cp37-cp37m-win_amd64.whl", hash = "sha256:21d69797eb951f155026651f7e9362877334508d39c2fc37bd04ff55b2007091"}, - {file = 
"aiohttp-3.8.3-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:2ca9af5f8f5812d475c5259393f52d712f6d5f0d7fdad9acdb1107dd9e3cb7eb"}, - {file = "aiohttp-3.8.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1d90043c1882067f1bd26196d5d2db9aa6d268def3293ed5fb317e13c9413ea4"}, - {file = "aiohttp-3.8.3-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d737fc67b9a970f3234754974531dc9afeea11c70791dcb7db53b0cf81b79784"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ebf909ea0a3fc9596e40d55d8000702a85e27fd578ff41a5500f68f20fd32e6c"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5835f258ca9f7c455493a57ee707b76d2d9634d84d5d7f62e77be984ea80b849"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:da37dcfbf4b7f45d80ee386a5f81122501ec75672f475da34784196690762f4b"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:87f44875f2804bc0511a69ce44a9595d5944837a62caecc8490bbdb0e18b1342"}, - {file = "aiohttp-3.8.3-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:527b3b87b24844ea7865284aabfab08eb0faf599b385b03c2aa91fc6edd6e4b6"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:d5ba88df9aa5e2f806650fcbeedbe4f6e8736e92fc0e73b0400538fd25a4dd96"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:e7b8813be97cab8cb52b1375f41f8e6804f6507fe4660152e8ca5c48f0436017"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:2dea10edfa1a54098703cb7acaa665c07b4e7568472a47f4e64e6319d3821ccf"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:713d22cd9643ba9025d33c4af43943c7a1eb8547729228de18d3e02e278472b6"}, - {file = "aiohttp-3.8.3-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2d252771fc85e0cf8da0b823157962d70639e63cb9b578b1dec9868dd1f4f937"}, - {file = "aiohttp-3.8.3-cp38-cp38-win32.whl", hash = "sha256:66bd5f950344fb2b3dbdd421aaa4e84f4411a1a13fca3aeb2bcbe667f80c9f76"}, - {file = "aiohttp-3.8.3-cp38-cp38-win_amd64.whl", hash = "sha256:84b14f36e85295fe69c6b9789b51a0903b774046d5f7df538176516c3e422446"}, - {file = "aiohttp-3.8.3-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:16c121ba0b1ec2b44b73e3a8a171c4f999b33929cd2397124a8c7fcfc8cd9e06"}, - {file = "aiohttp-3.8.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8d6aaa4e7155afaf994d7924eb290abbe81a6905b303d8cb61310a2aba1c68ba"}, - {file = "aiohttp-3.8.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:43046a319664a04b146f81b40e1545d4c8ac7b7dd04c47e40bf09f65f2437346"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:599418aaaf88a6d02a8c515e656f6faf3d10618d3dd95866eb4436520096c84b"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:92a2964319d359f494f16011e23434f6f8ef0434acd3cf154a6b7bec511e2fb7"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:73a4131962e6d91109bca6536416aa067cf6c4efb871975df734f8d2fd821b37"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:598adde339d2cf7d67beaccda3f2ce7c57b3b412702f29c946708f69cf8222aa"}, - {file = "aiohttp-3.8.3-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:75880ed07be39beff1881d81e4a907cafb802f306efd6d2d15f2b3c69935f6fb"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a0239da9fbafd9ff82fd67c16704a7d1bccf0d107a300e790587ad05547681c8"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:4e3a23ec214e95c9fe85a58470b660efe6534b83e6cbe38b3ed52b053d7cb6ad"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:47841407cc89a4b80b0c52276f3cc8138bbbfba4b179ee3acbd7d77ae33f7ac4"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:54d107c89a3ebcd13228278d68f1436d3f33f2dd2af5415e3feaeb1156e1a62c"}, - {file = "aiohttp-3.8.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c37c5cce780349d4d51739ae682dec63573847a2a8dcb44381b174c3d9c8d403"}, - {file = "aiohttp-3.8.3-cp39-cp39-win32.whl", hash = "sha256:f178d2aadf0166be4df834c4953da2d7eef24719e8aec9a65289483eeea9d618"}, - {file = "aiohttp-3.8.3-cp39-cp39-win_amd64.whl", hash = "sha256:88e5be56c231981428f4f506c68b6a46fa25c4123a2e86d156c58a8369d31ab7"}, - {file = "aiohttp-3.8.3.tar.gz", hash = "sha256:3828fb41b7203176b82fe5d699e0d845435f2374750a44b480ea6b930f6be269"}, + {file = "aiohttp-3.8.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:5ce45967538fb747370308d3145aa68a074bdecb4f3a300869590f725ced69c1"}, + {file = "aiohttp-3.8.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b744c33b6f14ca26b7544e8d8aadff6b765a80ad6164fb1a430bbadd593dfb1a"}, + {file = "aiohttp-3.8.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:1a45865451439eb320784918617ba54b7a377e3501fb70402ab84d38c2cd891b"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a86d42d7cba1cec432d47ab13b6637bee393a10f664c425ea7b305d1301ca1a3"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ee3c36df21b5714d49fc4580247947aa64bcbe2939d1b77b4c8dcb8f6c9faecc"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:176a64b24c0935869d5bbc4c96e82f89f643bcdf08ec947701b9dbb3c956b7dd"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c844fd628851c0bc309f3c801b3a3d58ce430b2ce5b359cd918a5a76d0b20cb5"}, + {file = "aiohttp-3.8.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5393fb786a9e23e4799fec788e7e735de18052f83682ce2dfcabaf1c00c2c08e"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e4b09863aae0dc965c3ef36500d891a3ff495a2ea9ae9171e4519963c12ceefd"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:adfbc22e87365a6e564c804c58fc44ff7727deea782d175c33602737b7feadb6"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:147ae376f14b55f4f3c2b118b95be50a369b89b38a971e80a17c3fd623f280c9"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:eafb3e874816ebe2a92f5e155f17260034c8c341dad1df25672fb710627c6949"}, + {file = "aiohttp-3.8.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c6cc15d58053c76eacac5fa9152d7d84b8d67b3fde92709195cb984cfb3475ea"}, + {file = "aiohttp-3.8.4-cp310-cp310-win32.whl", hash = "sha256:59f029a5f6e2d679296db7bee982bb3d20c088e52a2977e3175faf31d6fb75d1"}, + {file = "aiohttp-3.8.4-cp310-cp310-win_amd64.whl", hash = "sha256:fe7ba4a51f33ab275515f66b0a236bcde4fb5561498fe8f898d4e549b2e4509f"}, + {file = 
"aiohttp-3.8.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:3d8ef1a630519a26d6760bc695842579cb09e373c5f227a21b67dc3eb16cfea4"}, + {file = "aiohttp-3.8.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5b3f2e06a512e94722886c0827bee9807c86a9f698fac6b3aee841fab49bbfb4"}, + {file = "aiohttp-3.8.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3a80464982d41b1fbfe3154e440ba4904b71c1a53e9cd584098cd41efdb188ef"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8b631e26df63e52f7cce0cce6507b7a7f1bc9b0c501fcde69742130b32e8782f"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3f43255086fe25e36fd5ed8f2ee47477408a73ef00e804cb2b5cba4bf2ac7f5e"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4d347a172f866cd1d93126d9b239fcbe682acb39b48ee0873c73c933dd23bd0f"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a3fec6a4cb5551721cdd70473eb009d90935b4063acc5f40905d40ecfea23e05"}, + {file = "aiohttp-3.8.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:80a37fe8f7c1e6ce8f2d9c411676e4bc633a8462844e38f46156d07a7d401654"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:d1e6a862b76f34395a985b3cd39a0d949ca80a70b6ebdea37d3ab39ceea6698a"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:cd468460eefef601ece4428d3cf4562459157c0f6523db89365202c31b6daebb"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:618c901dd3aad4ace71dfa0f5e82e88b46ef57e3239fc7027773cb6d4ed53531"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:652b1bff4f15f6287550b4670546a2947f2a4575b6c6dff7760eafb22eacbf0b"}, + {file = "aiohttp-3.8.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:80575ba9377c5171407a06d0196b2310b679dc752d02a1fcaa2bc20b235dbf24"}, + {file = "aiohttp-3.8.4-cp311-cp311-win32.whl", hash = "sha256:bbcf1a76cf6f6dacf2c7f4d2ebd411438c275faa1dc0c68e46eb84eebd05dd7d"}, + {file = "aiohttp-3.8.4-cp311-cp311-win_amd64.whl", hash = "sha256:6e74dd54f7239fcffe07913ff8b964e28b712f09846e20de78676ce2a3dc0bfc"}, + {file = "aiohttp-3.8.4-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:880e15bb6dad90549b43f796b391cfffd7af373f4646784795e20d92606b7a51"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb96fa6b56bb536c42d6a4a87dfca570ff8e52de2d63cabebfd6fb67049c34b6"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4a6cadebe132e90cefa77e45f2d2f1a4b2ce5c6b1bfc1656c1ddafcfe4ba8131"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f352b62b45dff37b55ddd7b9c0c8672c4dd2eb9c0f9c11d395075a84e2c40f75"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7ab43061a0c81198d88f39aaf90dae9a7744620978f7ef3e3708339b8ed2ef01"}, + {file = "aiohttp-3.8.4-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c9cb1565a7ad52e096a6988e2ee0397f72fe056dadf75d17fa6b5aebaea05622"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:1b3ea7edd2d24538959c1c1abf97c744d879d4e541d38305f9bd7d9b10c9ec41"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_i686.whl", hash = 
"sha256:7c7837fe8037e96b6dd5cfcf47263c1620a9d332a87ec06a6ca4564e56bd0f36"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_ppc64le.whl", hash = "sha256:3b90467ebc3d9fa5b0f9b6489dfb2c304a1db7b9946fa92aa76a831b9d587e99"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_s390x.whl", hash = "sha256:cab9401de3ea52b4b4c6971db5fb5c999bd4260898af972bf23de1c6b5dd9d71"}, + {file = "aiohttp-3.8.4-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d1f9282c5f2b5e241034a009779e7b2a1aa045f667ff521e7948ea9b56e0c5ff"}, + {file = "aiohttp-3.8.4-cp36-cp36m-win32.whl", hash = "sha256:5e14f25765a578a0a634d5f0cd1e2c3f53964553a00347998dfdf96b8137f777"}, + {file = "aiohttp-3.8.4-cp36-cp36m-win_amd64.whl", hash = "sha256:4c745b109057e7e5f1848c689ee4fb3a016c8d4d92da52b312f8a509f83aa05e"}, + {file = "aiohttp-3.8.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:aede4df4eeb926c8fa70de46c340a1bc2c6079e1c40ccf7b0eae1313ffd33519"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4ddaae3f3d32fc2cb4c53fab020b69a05c8ab1f02e0e59665c6f7a0d3a5be54f"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c4eb3b82ca349cf6fadcdc7abcc8b3a50ab74a62e9113ab7a8ebc268aad35bb9"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9bcb89336efa095ea21b30f9e686763f2be4478f1b0a616969551982c4ee4c3b"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c08e8ed6fa3d477e501ec9db169bfac8140e830aa372d77e4a43084d8dd91ab"}, + {file = "aiohttp-3.8.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c6cd05ea06daca6ad6a4ca3ba7fe7dc5b5de063ff4daec6170ec0f9979f6c332"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b7a00a9ed8d6e725b55ef98b1b35c88013245f35f68b1b12c5cd4100dddac333"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:de04b491d0e5007ee1b63a309956eaed959a49f5bb4e84b26c8f5d49de140fa9"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:40653609b3bf50611356e6b6554e3a331f6879fa7116f3959b20e3528783e699"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:dbf3a08a06b3f433013c143ebd72c15cac33d2914b8ea4bea7ac2c23578815d6"}, + {file = "aiohttp-3.8.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:854f422ac44af92bfe172d8e73229c270dc09b96535e8a548f99c84f82dde241"}, + {file = "aiohttp-3.8.4-cp37-cp37m-win32.whl", hash = "sha256:aeb29c84bb53a84b1a81c6c09d24cf33bb8432cc5c39979021cc0f98c1292a1a"}, + {file = "aiohttp-3.8.4-cp37-cp37m-win_amd64.whl", hash = "sha256:db3fc6120bce9f446d13b1b834ea5b15341ca9ff3f335e4a951a6ead31105480"}, + {file = "aiohttp-3.8.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:fabb87dd8850ef0f7fe2b366d44b77d7e6fa2ea87861ab3844da99291e81e60f"}, + {file = "aiohttp-3.8.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:91f6d540163f90bbaef9387e65f18f73ffd7c79f5225ac3d3f61df7b0d01ad15"}, + {file = "aiohttp-3.8.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d265f09a75a79a788237d7f9054f929ced2e69eb0bb79de3798c468d8a90f945"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d89efa095ca7d442a6d0cbc755f9e08190ba40069b235c9886a8763b03785da"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = 
"sha256:4dac314662f4e2aa5009977b652d9b8db7121b46c38f2073bfeed9f4049732cd"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fe11310ae1e4cd560035598c3f29d86cef39a83d244c7466f95c27ae04850f10"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6ddb2a2026c3f6a68c3998a6c47ab6795e4127315d2e35a09997da21865757f8"}, + {file = "aiohttp-3.8.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e75b89ac3bd27d2d043b234aa7b734c38ba1b0e43f07787130a0ecac1e12228a"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6e601588f2b502c93c30cd5a45bfc665faaf37bbe835b7cfd461753068232074"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:a5d794d1ae64e7753e405ba58e08fcfa73e3fad93ef9b7e31112ef3c9a0efb52"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:a1f4689c9a1462f3df0a1f7e797791cd6b124ddbee2b570d34e7f38ade0e2c71"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:3032dcb1c35bc330134a5b8a5d4f68c1a87252dfc6e1262c65a7e30e62298275"}, + {file = "aiohttp-3.8.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:8189c56eb0ddbb95bfadb8f60ea1b22fcfa659396ea36f6adcc521213cd7b44d"}, + {file = "aiohttp-3.8.4-cp38-cp38-win32.whl", hash = "sha256:33587f26dcee66efb2fff3c177547bd0449ab7edf1b73a7f5dea1e38609a0c54"}, + {file = "aiohttp-3.8.4-cp38-cp38-win_amd64.whl", hash = "sha256:e595432ac259af2d4630008bf638873d69346372d38255774c0e286951e8b79f"}, + {file = "aiohttp-3.8.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:5a7bdf9e57126dc345b683c3632e8ba317c31d2a41acd5800c10640387d193ed"}, + {file = "aiohttp-3.8.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:22f6eab15b6db242499a16de87939a342f5a950ad0abaf1532038e2ce7d31567"}, + {file = "aiohttp-3.8.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:7235604476a76ef249bd64cb8274ed24ccf6995c4a8b51a237005ee7a57e8643"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ea9eb976ffdd79d0e893869cfe179a8f60f152d42cb64622fca418cd9b18dc2a"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:92c0cea74a2a81c4c76b62ea1cac163ecb20fb3ba3a75c909b9fa71b4ad493cf"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:493f5bc2f8307286b7799c6d899d388bbaa7dfa6c4caf4f97ef7521b9cb13719"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0a63f03189a6fa7c900226e3ef5ba4d3bd047e18f445e69adbd65af433add5a2"}, + {file = "aiohttp-3.8.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:10c8cefcff98fd9168cdd86c4da8b84baaa90bf2da2269c6161984e6737bf23e"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:bca5f24726e2919de94f047739d0a4fc01372801a3672708260546aa2601bf57"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:03baa76b730e4e15a45f81dfe29a8d910314143414e528737f8589ec60cf7391"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:8c29c77cc57e40f84acef9bfb904373a4e89a4e8b74e71aa8075c021ec9078c2"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:03543dcf98a6619254b409be2d22b51f21ec66272be4ebda7b04e6412e4b2e14"}, + {file = "aiohttp-3.8.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = 
"sha256:17b79c2963db82086229012cff93ea55196ed31f6493bb1ccd2c62f1724324e4"}, + {file = "aiohttp-3.8.4-cp39-cp39-win32.whl", hash = "sha256:34ce9f93a4a68d1272d26030655dd1b58ff727b3ed2a33d80ec433561b03d67a"}, + {file = "aiohttp-3.8.4-cp39-cp39-win_amd64.whl", hash = "sha256:41a86a69bb63bb2fc3dc9ad5ea9f10f1c9c8e282b471931be0268ddd09430b04"}, + {file = "aiohttp-3.8.4.tar.gz", hash = "sha256:bf2e1a9162c1e441bf805a1fd166e249d574ca04e03b34f97e2928769e91ab5c"}, ] [package.dependencies] aiosignal = ">=1.1.2" async-timeout = ">=4.0.0a3,<5.0" attrs = ">=17.3.0" -charset-normalizer = ">=2.0,<3.0" +charset-normalizer = ">=2.0,<4.0" frozenlist = ">=1.1.1" multidict = ">=4.5,<7.0" yarl = ">=1.0,<2.0" @@ -210,7 +210,7 @@ develop = false type = "git" url = "https://github.com/ActiveState/appdirs.git" reference = "master" -resolved_reference = "211708144ddcbba1f02e26a43efec9aef57bc9fc" +resolved_reference = "8734277956c1df3b85385e6b308e954910533884" [[package]] name = "arrow" @@ -229,19 +229,19 @@ python-dateutil = ">=2.7.0" [[package]] name = "astroid" -version = "2.13.2" +version = "2.15.5" description = "An abstract syntax tree for Python with inference support." category = "dev" optional = false python-versions = ">=3.7.2" files = [ - {file = "astroid-2.13.2-py3-none-any.whl", hash = "sha256:8f6a8d40c4ad161d6fc419545ae4b2f275ed86d1c989c97825772120842ee0d2"}, - {file = "astroid-2.13.2.tar.gz", hash = "sha256:3bc7834720e1a24ca797fd785d77efb14f7a28ee8e635ef040b6e2d80ccb3303"}, + {file = "astroid-2.15.5-py3-none-any.whl", hash = "sha256:078e5212f9885fa85fbb0cf0101978a336190aadea6e13305409d099f71b2324"}, + {file = "astroid-2.15.5.tar.gz", hash = "sha256:1039262575027b441137ab4a62a793a9b43defb42c32d5670f38686207cd780f"}, ] [package.dependencies] lazy-object-proxy = ">=1.4.0" -typing-extensions = ">=4.0.0" +typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.11\""} wrapt = {version = ">=1.11,<2", markers = "python_version < \"3.11\""} [[package]] @@ -269,33 +269,33 @@ files = [ [[package]] name = "attrs" -version = "22.2.0" +version = "23.1.0" description = "Classes Without Boilerplate" category = "main" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "attrs-22.2.0-py3-none-any.whl", hash = "sha256:29e95c7f6778868dbd49170f98f8818f78f3dc5e0e37c0b1f474e3561b240836"}, - {file = "attrs-22.2.0.tar.gz", hash = "sha256:c9227bfc2f01993c03f68db37d1d15c9690188323c067c641f1a35ca58185f99"}, + {file = "attrs-23.1.0-py3-none-any.whl", hash = "sha256:1f28b4522cdc2fb4256ac1a020c78acf9cba2c6b461ccd2c126f3aa8e8335d04"}, + {file = "attrs-23.1.0.tar.gz", hash = "sha256:6279836d581513a26f1bf235f9acd333bc9115683f14f7e8fae46c98fc50e015"}, ] [package.extras] -cov = ["attrs[tests]", "coverage-enable-subprocess", "coverage[toml] (>=5.3)"] -dev = ["attrs[docs,tests]"] -docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope.interface"] -tests = ["attrs[tests-no-zope]", "zope.interface"] -tests-no-zope = ["cloudpickle", "cloudpickle", "hypothesis", "hypothesis", "mypy (>=0.971,<0.990)", "mypy (>=0.971,<0.990)", "pympler", "pympler", "pytest (>=4.3.0)", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-mypy-plugins", "pytest-xdist[psutil]", "pytest-xdist[psutil]"] +cov = ["attrs[tests]", "coverage[toml] (>=5.3)"] +dev = ["attrs[docs,tests]", "pre-commit"] +docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"] +tests = ["attrs[tests-no-zope]", 
"zope-interface"] +tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "pytest-xdist[psutil]"] [[package]] name = "autopep8" -version = "2.0.1" +version = "2.0.2" description = "A tool that automatically formats Python code to conform to the PEP 8 style guide" category = "dev" optional = false python-versions = ">=3.6" files = [ - {file = "autopep8-2.0.1-py2.py3-none-any.whl", hash = "sha256:be5bc98c33515b67475420b7b1feafc8d32c1a69862498eda4983b45bffd2687"}, - {file = "autopep8-2.0.1.tar.gz", hash = "sha256:d27a8929d8dcd21c0f4b3859d2d07c6c25273727b98afc984c039df0f0d86566"}, + {file = "autopep8-2.0.2-py2.py3-none-any.whl", hash = "sha256:86e9303b5e5c8160872b2f5ef611161b2893e9bfe8ccc7e2f76385947d57a2f1"}, + {file = "autopep8-2.0.2.tar.gz", hash = "sha256:f9849cdd62108cb739dbcdbfb7fdcc9a30d1b63c4cc3e1c1f893b5360941b61c"}, ] [package.dependencies] @@ -304,19 +304,16 @@ tomli = {version = "*", markers = "python_version < \"3.11\""} [[package]] name = "babel" -version = "2.11.0" +version = "2.12.1" description = "Internationalization utilities" category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "Babel-2.11.0-py3-none-any.whl", hash = "sha256:1ad3eca1c885218f6dce2ab67291178944f810a10a9b5f3cb8382a5a232b64fe"}, - {file = "Babel-2.11.0.tar.gz", hash = "sha256:5ef4b3226b0180dedded4229651c8b0e1a3a6a2837d45a073272f313e4cf97f6"}, + {file = "Babel-2.12.1-py3-none-any.whl", hash = "sha256:b4246fb7677d3b98f501a39d43396d3cafdc8eadb045f4a31be01863f655c610"}, + {file = "Babel-2.12.1.tar.gz", hash = "sha256:cc2d99999cd01d44420ae725a21c9e3711b3aadc7976d6147f622d8581963455"}, ] -[package.dependencies] -pytz = ">=2015.7" - [[package]] name = "bcrypt" version = "4.0.1" @@ -352,16 +349,33 @@ files = [ tests = ["pytest (>=3.2.1,!=3.3.0)"] typecheck = ["mypy"] +[[package]] +name = "bidict" +version = "0.22.1" +description = "The bidirectional mapping library for Python." +category = "main" +optional = false +python-versions = ">=3.7" +files = [ + {file = "bidict-0.22.1-py3-none-any.whl", hash = "sha256:6ef212238eb884b664f28da76f33f1d28b260f665fc737b413b287d5487d1e7b"}, + {file = "bidict-0.22.1.tar.gz", hash = "sha256:1e0f7f74e4860e6d0943a05d4134c63a2fad86f3d4732fb265bd79e4e856d81d"}, +] + +[package.extras] +docs = ["furo", "sphinx", "sphinx-copybutton"] +lint = ["pre-commit"] +test = ["hypothesis", "pytest", "pytest-benchmark[histogram]", "pytest-cov", "pytest-xdist", "sortedcollections", "sortedcontainers", "sphinx"] + [[package]] name = "blessed" -version = "1.19.1" +version = "1.20.0" description = "Easy, practical library for making terminal apps, by providing an elegant, well-documented interface to Colors, Keyboard input, and screen Positioning capabilities." 
category = "main" optional = false python-versions = ">=2.7" files = [ - {file = "blessed-1.19.1-py2.py3-none-any.whl", hash = "sha256:63b8554ae2e0e7f43749b6715c734cc8f3883010a809bf16790102563e6cf25b"}, - {file = "blessed-1.19.1.tar.gz", hash = "sha256:9a0d099695bf621d4680dd6c73f6ad547f6a3442fbdbe80c4b1daa1edbc492fc"}, + {file = "blessed-1.20.0-py2.py3-none-any.whl", hash = "sha256:0c542922586a265e699188e52d5f5ac5ec0dd517e5a1041d90d2bbf23f906058"}, + {file = "blessed-1.20.0.tar.gz", hash = "sha256:2cdd67f8746e048f00df47a2880f4d6acbcdb399031b604e34ba8f71d5787680"}, ] [package.dependencies] @@ -371,26 +385,26 @@ wcwidth = ">=0.1.4" [[package]] name = "cachetools" -version = "5.2.1" +version = "5.3.1" description = "Extensible memoizing collections and decorators" category = "main" optional = false -python-versions = "~=3.7" +python-versions = ">=3.7" files = [ - {file = "cachetools-5.2.1-py3-none-any.whl", hash = "sha256:8462eebf3a6c15d25430a8c27c56ac61340b2ecf60c9ce57afc2b97e450e47da"}, - {file = "cachetools-5.2.1.tar.gz", hash = "sha256:5991bc0e08a1319bb618d3195ca5b6bc76646a49c21d55962977197b301cc1fe"}, + {file = "cachetools-5.3.1-py3-none-any.whl", hash = "sha256:95ef631eeaea14ba2e36f06437f36463aac3a096799e876ee55e5cdccb102590"}, + {file = "cachetools-5.3.1.tar.gz", hash = "sha256:dce83f2d9b4e1f732a8cd44af8e8fab2dbe46201467fc98b3ef8f269092bf62b"}, ] [[package]] name = "certifi" -version = "2022.12.7" +version = "2023.5.7" description = "Python package for providing Mozilla's CA Bundle." category = "main" optional = false python-versions = ">=3.6" files = [ - {file = "certifi-2022.12.7-py3-none-any.whl", hash = "sha256:4ad3232f5e926d6718ec31cfc1fcadfde020920e278684144551c91769c7bc18"}, - {file = "certifi-2022.12.7.tar.gz", hash = "sha256:35824b4c3a97115964b408844d64aa14db1cc518f6562e8d7261699d1350a9e3"}, + {file = "certifi-2023.5.7-py3-none-any.whl", hash = "sha256:c6c2e98f5c7869efca1f8916fed228dd91539f9f1b444c314c06eef02980c716"}, + {file = "certifi-2023.5.7.tar.gz", hash = "sha256:0f0d56dc5a6ad56fd4ba36484d6cc34451e1c6548c61daad8c320169f91eddc7"}, ] [[package]] @@ -484,31 +498,104 @@ files = [ [[package]] name = "charset-normalizer" -version = "2.1.1" +version = "3.1.0" description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
category = "main" optional = false -python-versions = ">=3.6.0" +python-versions = ">=3.7.0" files = [ - {file = "charset-normalizer-2.1.1.tar.gz", hash = "sha256:5a3d016c7c547f69d6f81fb0db9449ce888b418b5b9952cc5e6e66843e9dd845"}, - {file = "charset_normalizer-2.1.1-py3-none-any.whl", hash = "sha256:83e9a75d1911279afd89352c68b45348559d1fc0506b054b346651b5e7fee29f"}, + {file = "charset-normalizer-3.1.0.tar.gz", hash = "sha256:34e0a2f9c370eb95597aae63bf85eb5e96826d81e3dcf88b8886012906f509b5"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e0ac8959c929593fee38da1c2b64ee9778733cdf03c482c9ff1d508b6b593b2b"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d7fc3fca01da18fbabe4625d64bb612b533533ed10045a2ac3dd194bfa656b60"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04eefcee095f58eaabe6dc3cc2262f3bcd776d2c67005880894f447b3f2cb9c1"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:20064ead0717cf9a73a6d1e779b23d149b53daf971169289ed2ed43a71e8d3b0"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1435ae15108b1cb6fffbcea2af3d468683b7afed0169ad718451f8db5d1aff6f"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c84132a54c750fda57729d1e2599bb598f5fa0344085dbde5003ba429a4798c0"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:75f2568b4189dda1c567339b48cba4ac7384accb9c2a7ed655cd86b04055c795"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11d3bcb7be35e7b1bba2c23beedac81ee893ac9871d0ba79effc7fc01167db6c"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:891cf9b48776b5c61c700b55a598621fdb7b1e301a550365571e9624f270c203"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:5f008525e02908b20e04707a4f704cd286d94718f48bb33edddc7d7b584dddc1"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:b06f0d3bf045158d2fb8837c5785fe9ff9b8c93358be64461a1089f5da983137"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:49919f8400b5e49e961f320c735388ee686a62327e773fa5b3ce6721f7e785ce"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:22908891a380d50738e1f978667536f6c6b526a2064156203d418f4856d6e86a"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-win32.whl", hash = "sha256:12d1a39aa6b8c6f6248bb54550efcc1c38ce0d8096a146638fd4738e42284448"}, + {file = "charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:65ed923f84a6844de5fd29726b888e58c62820e0769b76565480e1fdc3d062f8"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9a3267620866c9d17b959a84dd0bd2d45719b817245e49371ead79ed4f710d19"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6734e606355834f13445b6adc38b53c0fd45f1a56a9ba06c2058f86893ae8017"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f8303414c7b03f794347ad062c0516cee0e15f7a612abd0ce1e25caf6ceb47df"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash 
= "sha256:aaf53a6cebad0eae578f062c7d462155eada9c172bd8c4d250b8c1d8eb7f916a"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3dc5b6a8ecfdc5748a7e429782598e4f17ef378e3e272eeb1340ea57c9109f41"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e1b25e3ad6c909f398df8921780d6a3d120d8c09466720226fc621605b6f92b1"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ca564606d2caafb0abe6d1b5311c2649e8071eb241b2d64e75a0d0065107e62"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b82fab78e0b1329e183a65260581de4375f619167478dddab510c6c6fb04d9b6"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:bd7163182133c0c7701b25e604cf1611c0d87712e56e88e7ee5d72deab3e76b5"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:11d117e6c63e8f495412d37e7dc2e2fff09c34b2d09dbe2bee3c6229577818be"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:cf6511efa4801b9b38dc5546d7547d5b5c6ef4b081c60b23e4d941d0eba9cbeb"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:abc1185d79f47c0a7aaf7e2412a0eb2c03b724581139193d2d82b3ad8cbb00ac"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cb7b2ab0188829593b9de646545175547a70d9a6e2b63bf2cd87a0a391599324"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-win32.whl", hash = "sha256:c36bcbc0d5174a80d6cccf43a0ecaca44e81d25be4b7f90f0ed7bcfbb5a00909"}, + {file = "charset_normalizer-3.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:cca4def576f47a09a943666b8f829606bcb17e2bc2d5911a46c8f8da45f56755"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0c95f12b74681e9ae127728f7e5409cbbef9cd914d5896ef238cc779b8152373"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fca62a8301b605b954ad2e9c3666f9d97f63872aa4efcae5492baca2056b74ab"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ac0aa6cd53ab9a31d397f8303f92c42f534693528fafbdb997c82bae6e477ad9"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c3af8e0f07399d3176b179f2e2634c3ce9c1301379a6b8c9c9aeecd481da494f"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a5fc78f9e3f501a1614a98f7c54d3969f3ad9bba8ba3d9b438c3bc5d047dd28"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:628c985afb2c7d27a4800bfb609e03985aaecb42f955049957814e0491d4006d"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:74db0052d985cf37fa111828d0dd230776ac99c740e1a758ad99094be4f1803d"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:1e8fcdd8f672a1c4fc8d0bd3a2b576b152d2a349782d1eb0f6b8e52e9954731d"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:04afa6387e2b282cf78ff3dbce20f0cc071c12dc8f685bd40960cc68644cfea6"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_s390x.whl", hash = 
"sha256:dd5653e67b149503c68c4018bf07e42eeed6b4e956b24c00ccdf93ac79cdff84"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d2686f91611f9e17f4548dbf050e75b079bbc2a82be565832bc8ea9047b61c8c"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-win32.whl", hash = "sha256:4155b51ae05ed47199dc5b2a4e62abccb274cee6b01da5b895099b61b1982974"}, + {file = "charset_normalizer-3.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322102cdf1ab682ecc7d9b1c5eed4ec59657a65e1c146a0da342b78f4112db23"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e633940f28c1e913615fd624fcdd72fdba807bf53ea6925d6a588e84e1151531"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3a06f32c9634a8705f4ca9946d667609f52cf130d5548881401f1eb2c39b1e2c"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7381c66e0561c5757ffe616af869b916c8b4e42b367ab29fedc98481d1e74e14"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3573d376454d956553c356df45bb824262c397c6e26ce43e8203c4c540ee0acb"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e89df2958e5159b811af9ff0f92614dabf4ff617c03a4c1c6ff53bf1c399e0e1"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:78cacd03e79d009d95635e7d6ff12c21eb89b894c354bd2b2ed0b4763373693b"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:de5695a6f1d8340b12a5d6d4484290ee74d61e467c39ff03b39e30df62cf83a0"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1c60b9c202d00052183c9be85e5eaf18a4ada0a47d188a83c8f5c5b23252f649"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f645caaf0008bacf349875a974220f1f1da349c5dbe7c4ec93048cdc785a3326"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea9f9c6034ea2d93d9147818f17c2a0860d41b71c38b9ce4d55f21b6f9165a11"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:80d1543d58bd3d6c271b66abf454d437a438dff01c3e62fdbcd68f2a11310d4b"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:73dc03a6a7e30b7edc5b01b601e53e7fc924b04e1835e8e407c12c037e81adbd"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6f5c2e7bc8a4bf7c426599765b1bd33217ec84023033672c1e9a8b35eaeaaaf8"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-win32.whl", hash = "sha256:12a2b561af122e3d94cdb97fe6fb2bb2b82cef0cdca131646fdb940a1eda04f0"}, + {file = "charset_normalizer-3.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:3160a0fd9754aab7d47f95a6b63ab355388d890163eb03b2d2b87ab0a30cfa59"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:38e812a197bf8e71a59fe55b757a84c1f946d0ac114acafaafaf21667a7e169e"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6baf0baf0d5d265fa7944feb9f7451cc316bfe30e8df1a61b1bb08577c554f31"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8f25e17ab3039b05f762b0a55ae0b3632b2e073d9c8fc88e89aca31a6198e88f"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = 
"sha256:3747443b6a904001473370d7810aa19c3a180ccd52a7157aacc264a5ac79265e"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b116502087ce8a6b7a5f1814568ccbd0e9f6cfd99948aa59b0e241dc57cf739f"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d16fd5252f883eb074ca55cb622bc0bee49b979ae4e8639fff6ca3ff44f9f854"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21fa558996782fc226b529fdd2ed7866c2c6ec91cee82735c98a197fae39f706"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f6c7a8a57e9405cad7485f4c9d3172ae486cfef1344b5ddd8e5239582d7355e"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ac3775e3311661d4adace3697a52ac0bab17edd166087d493b52d4f4f553f9f0"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:10c93628d7497c81686e8e5e557aafa78f230cd9e77dd0c40032ef90c18f2230"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:6f4f4668e1831850ebcc2fd0b1cd11721947b6dc7c00bf1c6bd3c929ae14f2c7"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:0be65ccf618c1e7ac9b849c315cc2e8a8751d9cfdaa43027d4f6624bd587ab7e"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:53d0a3fa5f8af98a1e261de6a3943ca631c526635eb5817a87a59d9a57ebf48f"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-win32.whl", hash = "sha256:a04f86f41a8916fe45ac5024ec477f41f886b3c435da2d4e3d2709b22ab02af1"}, + {file = "charset_normalizer-3.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:830d2948a5ec37c386d3170c483063798d7879037492540f10a475e3fd6f244b"}, + {file = "charset_normalizer-3.1.0-py3-none-any.whl", hash = "sha256:3d9098b479e78c85080c98e1e35ff40b4a31d8953102bb0fd7d1b6f8a2111a3d"}, ] -[package.extras] -unicode-backport = ["unicodedata2"] - [[package]] name = "click" -version = "7.1.2" +version = "8.1.3" description = "Composable command line interface toolkit" category = "main" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +python-versions = ">=3.7" files = [ - {file = "click-7.1.2-py2.py3-none-any.whl", hash = "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc"}, - {file = "click-7.1.2.tar.gz", hash = "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a"}, + {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"}, + {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"}, ] +[package.dependencies] +colorama = {version = "*", markers = "platform_system == \"Windows\""} + [[package]] name = "clique" version = "1.6.1" @@ -530,7 +617,7 @@ test = ["pytest (>=2.3.5,<5)", "pytest-cov (>=2,<3)", "pytest-runner (>=2.7,<3)" name = "colorama" version = "0.4.6" description = "Cross-platform colored terminal text." 
-category = "dev" +category = "main" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" files = [ @@ -567,63 +654,72 @@ files = [ [[package]] name = "coverage" -version = "7.0.5" +version = "7.2.7" description = "Code coverage measurement for Python" category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "coverage-7.0.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2a7f23bbaeb2a87f90f607730b45564076d870f1fb07b9318d0c21f36871932b"}, - {file = "coverage-7.0.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c18d47f314b950dbf24a41787ced1474e01ca816011925976d90a88b27c22b89"}, - {file = "coverage-7.0.5-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ef14d75d86f104f03dea66c13188487151760ef25dd6b2dbd541885185f05f40"}, - {file = "coverage-7.0.5-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66e50680e888840c0995f2ad766e726ce71ca682e3c5f4eee82272c7671d38a2"}, - {file = "coverage-7.0.5-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9fed35ca8c6e946e877893bbac022e8563b94404a605af1d1e6accc7eb73289"}, - {file = "coverage-7.0.5-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d8d04e755934195bdc1db45ba9e040b8d20d046d04d6d77e71b3b34a8cc002d0"}, - {file = "coverage-7.0.5-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7e109f1c9a3ece676597831874126555997c48f62bddbcace6ed17be3e372de8"}, - {file = "coverage-7.0.5-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:0a1890fca2962c4f1ad16551d660b46ea77291fba2cc21c024cd527b9d9c8809"}, - {file = "coverage-7.0.5-cp310-cp310-win32.whl", hash = "sha256:be9fcf32c010da0ba40bf4ee01889d6c737658f4ddff160bd7eb9cac8f094b21"}, - {file = "coverage-7.0.5-cp310-cp310-win_amd64.whl", hash = "sha256:cbfcba14a3225b055a28b3199c3d81cd0ab37d2353ffd7f6fd64844cebab31ad"}, - {file = "coverage-7.0.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:30b5fec1d34cc932c1bc04017b538ce16bf84e239378b8f75220478645d11fca"}, - {file = "coverage-7.0.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1caed2367b32cc80a2b7f58a9f46658218a19c6cfe5bc234021966dc3daa01f0"}, - {file = "coverage-7.0.5-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d254666d29540a72d17cc0175746cfb03d5123db33e67d1020e42dae611dc196"}, - {file = "coverage-7.0.5-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:19245c249aa711d954623d94f23cc94c0fd65865661f20b7781210cb97c471c0"}, - {file = "coverage-7.0.5-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b05ed4b35bf6ee790832f68932baf1f00caa32283d66cc4d455c9e9d115aafc"}, - {file = "coverage-7.0.5-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:29de916ba1099ba2aab76aca101580006adfac5646de9b7c010a0f13867cba45"}, - {file = "coverage-7.0.5-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:e057e74e53db78122a3979f908973e171909a58ac20df05c33998d52e6d35757"}, - {file = "coverage-7.0.5-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:411d4ff9d041be08fdfc02adf62e89c735b9468f6d8f6427f8a14b6bb0a85095"}, - {file = "coverage-7.0.5-cp311-cp311-win32.whl", hash = "sha256:52ab14b9e09ce052237dfe12d6892dd39b0401690856bcfe75d5baba4bfe2831"}, - {file = "coverage-7.0.5-cp311-cp311-win_amd64.whl", hash = "sha256:1f66862d3a41674ebd8d1a7b6f5387fe5ce353f8719040a986551a545d7d83ea"}, - {file = 
"coverage-7.0.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b69522b168a6b64edf0c33ba53eac491c0a8f5cc94fa4337f9c6f4c8f2f5296c"}, - {file = "coverage-7.0.5-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:436e103950d05b7d7f55e39beeb4d5be298ca3e119e0589c0227e6d0b01ee8c7"}, - {file = "coverage-7.0.5-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b8c56bec53d6e3154eaff6ea941226e7bd7cc0d99f9b3756c2520fc7a94e6d96"}, - {file = "coverage-7.0.5-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a38362528a9115a4e276e65eeabf67dcfaf57698e17ae388599568a78dcb029"}, - {file = "coverage-7.0.5-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:f67472c09a0c7486e27f3275f617c964d25e35727af952869dd496b9b5b7f6a3"}, - {file = "coverage-7.0.5-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:220e3fa77d14c8a507b2d951e463b57a1f7810a6443a26f9b7591ef39047b1b2"}, - {file = "coverage-7.0.5-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ecb0f73954892f98611e183f50acdc9e21a4653f294dfbe079da73c6378a6f47"}, - {file = "coverage-7.0.5-cp37-cp37m-win32.whl", hash = "sha256:d8f3e2e0a1d6777e58e834fd5a04657f66affa615dae61dd67c35d1568c38882"}, - {file = "coverage-7.0.5-cp37-cp37m-win_amd64.whl", hash = "sha256:9e662e6fc4f513b79da5d10a23edd2b87685815b337b1a30cd11307a6679148d"}, - {file = "coverage-7.0.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:790e4433962c9f454e213b21b0fd4b42310ade9c077e8edcb5113db0818450cb"}, - {file = "coverage-7.0.5-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:49640bda9bda35b057b0e65b7c43ba706fa2335c9a9896652aebe0fa399e80e6"}, - {file = "coverage-7.0.5-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d66187792bfe56f8c18ba986a0e4ae44856b1c645336bd2c776e3386da91e1dd"}, - {file = "coverage-7.0.5-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:276f4cd0001cd83b00817c8db76730938b1ee40f4993b6a905f40a7278103b3a"}, - {file = "coverage-7.0.5-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95304068686545aa368b35dfda1cdfbbdbe2f6fe43de4a2e9baa8ebd71be46e2"}, - {file = "coverage-7.0.5-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:17e01dd8666c445025c29684d4aabf5a90dc6ef1ab25328aa52bedaa95b65ad7"}, - {file = "coverage-7.0.5-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:ea76dbcad0b7b0deb265d8c36e0801abcddf6cc1395940a24e3595288b405ca0"}, - {file = "coverage-7.0.5-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:50a6adc2be8edd7ee67d1abc3cd20678987c7b9d79cd265de55941e3d0d56499"}, - {file = "coverage-7.0.5-cp38-cp38-win32.whl", hash = "sha256:e4ce984133b888cc3a46867c8b4372c7dee9cee300335e2925e197bcd45b9e16"}, - {file = "coverage-7.0.5-cp38-cp38-win_amd64.whl", hash = "sha256:4a950f83fd3f9bca23b77442f3a2b2ea4ac900944d8af9993743774c4fdc57af"}, - {file = "coverage-7.0.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3c2155943896ac78b9b0fd910fb381186d0c345911f5333ee46ac44c8f0e43ab"}, - {file = "coverage-7.0.5-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:54f7e9705e14b2c9f6abdeb127c390f679f6dbe64ba732788d3015f7f76ef637"}, - {file = "coverage-7.0.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0ee30375b409d9a7ea0f30c50645d436b6f5dfee254edffd27e45a980ad2c7f4"}, - {file = 
"coverage-7.0.5-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b78729038abea6a5df0d2708dce21e82073463b2d79d10884d7d591e0f385ded"}, - {file = "coverage-7.0.5-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:13250b1f0bd023e0c9f11838bdeb60214dd5b6aaf8e8d2f110c7e232a1bff83b"}, - {file = "coverage-7.0.5-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2c407b1950b2d2ffa091f4e225ca19a66a9bd81222f27c56bd12658fc5ca1209"}, - {file = "coverage-7.0.5-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c76a3075e96b9c9ff00df8b5f7f560f5634dffd1658bafb79eb2682867e94f78"}, - {file = "coverage-7.0.5-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:f26648e1b3b03b6022b48a9b910d0ae209e2d51f50441db5dce5b530fad6d9b1"}, - {file = "coverage-7.0.5-cp39-cp39-win32.whl", hash = "sha256:ba3027deb7abf02859aca49c865ece538aee56dcb4871b4cced23ba4d5088904"}, - {file = "coverage-7.0.5-cp39-cp39-win_amd64.whl", hash = "sha256:949844af60ee96a376aac1ded2a27e134b8c8d35cc006a52903fc06c24a3296f"}, - {file = "coverage-7.0.5-pp37.pp38.pp39-none-any.whl", hash = "sha256:b9727ac4f5cf2cbf87880a63870b5b9730a8ae3a4a360241a0fdaa2f71240ff0"}, - {file = "coverage-7.0.5.tar.gz", hash = "sha256:051afcbd6d2ac39298d62d340f94dbb6a1f31de06dfaf6fcef7b759dd3860c45"}, + {file = "coverage-7.2.7-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d39b5b4f2a66ccae8b7263ac3c8170994b65266797fb96cbbfd3fb5b23921db8"}, + {file = "coverage-7.2.7-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6d040ef7c9859bb11dfeb056ff5b3872436e3b5e401817d87a31e1750b9ae2fb"}, + {file = "coverage-7.2.7-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba90a9563ba44a72fda2e85302c3abc71c5589cea608ca16c22b9804262aaeb6"}, + {file = "coverage-7.2.7-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e7d9405291c6928619403db1d10bd07888888ec1abcbd9748fdaa971d7d661b2"}, + {file = "coverage-7.2.7-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31563e97dae5598556600466ad9beea39fb04e0229e61c12eaa206e0aa202063"}, + {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ebba1cd308ef115925421d3e6a586e655ca5a77b5bf41e02eb0e4562a111f2d1"}, + {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:cb017fd1b2603ef59e374ba2063f593abe0fc45f2ad9abdde5b4d83bd922a353"}, + {file = "coverage-7.2.7-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:d62a5c7dad11015c66fbb9d881bc4caa5b12f16292f857842d9d1871595f4495"}, + {file = "coverage-7.2.7-cp310-cp310-win32.whl", hash = "sha256:ee57190f24fba796e36bb6d3aa8a8783c643d8fa9760c89f7a98ab5455fbf818"}, + {file = "coverage-7.2.7-cp310-cp310-win_amd64.whl", hash = "sha256:f75f7168ab25dd93110c8a8117a22450c19976afbc44234cbf71481094c1b850"}, + {file = "coverage-7.2.7-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:06a9a2be0b5b576c3f18f1a241f0473575c4a26021b52b2a85263a00f034d51f"}, + {file = "coverage-7.2.7-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5baa06420f837184130752b7c5ea0808762083bf3487b5038d68b012e5937dbe"}, + {file = "coverage-7.2.7-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fdec9e8cbf13a5bf63290fc6013d216a4c7232efb51548594ca3631a7f13c3a3"}, + {file = "coverage-7.2.7-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:52edc1a60c0d34afa421c9c37078817b2e67a392cab17d97283b64c5833f427f"}, + {file = "coverage-7.2.7-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63426706118b7f5cf6bb6c895dc215d8a418d5952544042c8a2d9fe87fcf09cb"}, + {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:afb17f84d56068a7c29f5fa37bfd38d5aba69e3304af08ee94da8ed5b0865833"}, + {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:48c19d2159d433ccc99e729ceae7d5293fbffa0bdb94952d3579983d1c8c9d97"}, + {file = "coverage-7.2.7-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0e1f928eaf5469c11e886fe0885ad2bf1ec606434e79842a879277895a50942a"}, + {file = "coverage-7.2.7-cp311-cp311-win32.whl", hash = "sha256:33d6d3ea29d5b3a1a632b3c4e4f4ecae24ef170b0b9ee493883f2df10039959a"}, + {file = "coverage-7.2.7-cp311-cp311-win_amd64.whl", hash = "sha256:5b7540161790b2f28143191f5f8ec02fb132660ff175b7747b95dcb77ac26562"}, + {file = "coverage-7.2.7-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:f2f67fe12b22cd130d34d0ef79206061bfb5eda52feb6ce0dba0644e20a03cf4"}, + {file = "coverage-7.2.7-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a342242fe22407f3c17f4b499276a02b01e80f861f1682ad1d95b04018e0c0d4"}, + {file = "coverage-7.2.7-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:171717c7cb6b453aebac9a2ef603699da237f341b38eebfee9be75d27dc38e01"}, + {file = "coverage-7.2.7-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:49969a9f7ffa086d973d91cec8d2e31080436ef0fb4a359cae927e742abfaaa6"}, + {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b46517c02ccd08092f4fa99f24c3b83d8f92f739b4657b0f146246a0ca6a831d"}, + {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:a3d33a6b3eae87ceaefa91ffdc130b5e8536182cd6dfdbfc1aa56b46ff8c86de"}, + {file = "coverage-7.2.7-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:976b9c42fb2a43ebf304fa7d4a310e5f16cc99992f33eced91ef6f908bd8f33d"}, + {file = "coverage-7.2.7-cp312-cp312-win32.whl", hash = "sha256:8de8bb0e5ad103888d65abef8bca41ab93721647590a3f740100cd65c3b00511"}, + {file = "coverage-7.2.7-cp312-cp312-win_amd64.whl", hash = "sha256:9e31cb64d7de6b6f09702bb27c02d1904b3aebfca610c12772452c4e6c21a0d3"}, + {file = "coverage-7.2.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:58c2ccc2f00ecb51253cbe5d8d7122a34590fac9646a960d1430d5b15321d95f"}, + {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d22656368f0e6189e24722214ed8d66b8022db19d182927b9a248a2a8a2f67eb"}, + {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a895fcc7b15c3fc72beb43cdcbdf0ddb7d2ebc959edac9cef390b0d14f39f8a9"}, + {file = "coverage-7.2.7-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e84606b74eb7de6ff581a7915e2dab7a28a0517fbe1c9239eb227e1354064dcd"}, + {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:0a5f9e1dbd7fbe30196578ca36f3fba75376fb99888c395c5880b355e2875f8a"}, + {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:419bfd2caae268623dd469eff96d510a920c90928b60f2073d79f8fe2bbc5959"}, + {file = "coverage-7.2.7-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = 
"sha256:2aee274c46590717f38ae5e4650988d1af340fe06167546cc32fe2f58ed05b02"}, + {file = "coverage-7.2.7-cp37-cp37m-win32.whl", hash = "sha256:61b9a528fb348373c433e8966535074b802c7a5d7f23c4f421e6c6e2f1697a6f"}, + {file = "coverage-7.2.7-cp37-cp37m-win_amd64.whl", hash = "sha256:b1c546aca0ca4d028901d825015dc8e4d56aac4b541877690eb76490f1dc8ed0"}, + {file = "coverage-7.2.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:54b896376ab563bd38453cecb813c295cf347cf5906e8b41d340b0321a5433e5"}, + {file = "coverage-7.2.7-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:3d376df58cc111dc8e21e3b6e24606b5bb5dee6024f46a5abca99124b2229ef5"}, + {file = "coverage-7.2.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e330fc79bd7207e46c7d7fd2bb4af2963f5f635703925543a70b99574b0fea9"}, + {file = "coverage-7.2.7-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e9d683426464e4a252bf70c3498756055016f99ddaec3774bf368e76bbe02b6"}, + {file = "coverage-7.2.7-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d13c64ee2d33eccf7437961b6ea7ad8673e2be040b4f7fd4fd4d4d28d9ccb1e"}, + {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b7aa5f8a41217360e600da646004f878250a0d6738bcdc11a0a39928d7dc2050"}, + {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8fa03bce9bfbeeef9f3b160a8bed39a221d82308b4152b27d82d8daa7041fee5"}, + {file = "coverage-7.2.7-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:245167dd26180ab4c91d5e1496a30be4cd721a5cf2abf52974f965f10f11419f"}, + {file = "coverage-7.2.7-cp38-cp38-win32.whl", hash = "sha256:d2c2db7fd82e9b72937969bceac4d6ca89660db0a0967614ce2481e81a0b771e"}, + {file = "coverage-7.2.7-cp38-cp38-win_amd64.whl", hash = "sha256:2e07b54284e381531c87f785f613b833569c14ecacdcb85d56b25c4622c16c3c"}, + {file = "coverage-7.2.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:537891ae8ce59ef63d0123f7ac9e2ae0fc8b72c7ccbe5296fec45fd68967b6c9"}, + {file = "coverage-7.2.7-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:06fb182e69f33f6cd1d39a6c597294cff3143554b64b9825d1dc69d18cc2fff2"}, + {file = "coverage-7.2.7-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:201e7389591af40950a6480bd9edfa8ed04346ff80002cec1a66cac4549c1ad7"}, + {file = "coverage-7.2.7-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f6951407391b639504e3b3be51b7ba5f3528adbf1a8ac3302b687ecababf929e"}, + {file = "coverage-7.2.7-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6f48351d66575f535669306aa7d6d6f71bc43372473b54a832222803eb956fd1"}, + {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b29019c76039dc3c0fd815c41392a044ce555d9bcdd38b0fb60fb4cd8e475ba9"}, + {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:81c13a1fc7468c40f13420732805a4c38a105d89848b7c10af65a90beff25250"}, + {file = "coverage-7.2.7-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:975d70ab7e3c80a3fe86001d8751f6778905ec723f5b110aed1e450da9d4b7f2"}, + {file = "coverage-7.2.7-cp39-cp39-win32.whl", hash = "sha256:7ee7d9d4822c8acc74a5e26c50604dff824710bc8de424904c0982e25c39c6cb"}, + {file = "coverage-7.2.7-cp39-cp39-win_amd64.whl", hash = "sha256:eb393e5ebc85245347950143969b241d08b52b88a3dc39479822e073a1a8eb27"}, + {file = "coverage-7.2.7-pp37.pp38.pp39-none-any.whl", hash = 
"sha256:b7b4c971f05e6ae490fef852c218b0e79d4e52f79ef0c8475566584a8fb3e01d"}, + {file = "coverage-7.2.7.tar.gz", hash = "sha256:924d94291ca674905fe9481f12294eb11f2d3d3fd1adb20314ba89e94f44ed59"}, ] [package.dependencies] @@ -765,24 +861,6 @@ files = [ {file = "cx_Logging-3.1.0.tar.gz", hash = "sha256:8a06834d8527aa904a68b25c9c1a5fa09f0dfdc94dbd9f86b81cd8d2f7a0e487"}, ] -[[package]] -name = "deprecated" -version = "1.2.13" -description = "Python @deprecated decorator to deprecate old python classes, functions or methods." -category = "main" -optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" -files = [ - {file = "Deprecated-1.2.13-py2.py3-none-any.whl", hash = "sha256:64756e3e14c8c5eea9795d93c524551432a0be75629f8f29e67ab8caf076c76d"}, - {file = "Deprecated-1.2.13.tar.gz", hash = "sha256:43ac5335da90c31c24ba028af536a91d41d53f9e6901ddb021bcc572ce44e38d"}, -] - -[package.dependencies] -wrapt = ">=1.10,<2" - -[package.extras] -dev = ["PyTest", "PyTest (<5)", "PyTest-Cov", "PyTest-Cov (<2.6)", "bump2version (<1)", "configparser (<5)", "importlib-metadata (<3)", "importlib-resources (<4)", "sphinx (<2)", "sphinxcontrib-websupport (<2)", "tox", "zipp (<2)"] - [[package]] name = "dill" version = "0.3.6" @@ -863,14 +941,14 @@ stone = ">=2" [[package]] name = "enlighten" -version = "1.11.1" +version = "1.11.2" description = "Enlighten Progress Bar" category = "main" optional = false python-versions = "*" files = [ - {file = "enlighten-1.11.1-py2.py3-none-any.whl", hash = "sha256:e825eb534ca80778bb7d46e5581527b2a6fae559b6cf09e290a7952c6e11961e"}, - {file = "enlighten-1.11.1.tar.gz", hash = "sha256:57abd98a3d3f83484ef9f91f9255f4d23c8b3097ecdb647c7b9b0049d600b7f8"}, + {file = "enlighten-1.11.2-py2.py3-none-any.whl", hash = "sha256:98c9eb20e022b6a57f1c8d4f17e16760780b6881e6d658c40f52d21255ea45f3"}, + {file = "enlighten-1.11.2.tar.gz", hash = "sha256:9284861dee5a272e0e1a3758cd3f3b7180b1bd1754875da76876f2a7f46ccb61"}, ] [package.dependencies] @@ -879,30 +957,30 @@ prefixed = ">=0.3.2" [[package]] name = "evdev" -version = "1.6.0" +version = "1.6.1" description = "Bindings to the Linux input handling subsystem" category = "main" optional = false python-versions = "*" files = [ - {file = "evdev-1.6.0.tar.gz", hash = "sha256:ecfa01b5c84f7e8c6ced3367ac95288f43cd84efbfd7dd7d0cdbfc0d18c87a6a"}, + {file = "evdev-1.6.1.tar.gz", hash = "sha256:299db8628cc73b237fc1cc57d3c2948faa0756e2a58b6194b5bf81dc2081f1e3"}, ] [[package]] name = "filelock" -version = "3.9.0" +version = "3.12.0" description = "A platform independent file lock." 
category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "filelock-3.9.0-py3-none-any.whl", hash = "sha256:f58d535af89bb9ad5cd4df046f741f8553a418c01a7856bf0d173bbc9f6bd16d"}, - {file = "filelock-3.9.0.tar.gz", hash = "sha256:7b319f24340b51f55a2bf7a12ac0755a9b03e718311dac567a0f4f7fabd2f5de"}, + {file = "filelock-3.12.0-py3-none-any.whl", hash = "sha256:ad98852315c2ab702aeb628412cbf7e95b7ce8c3bf9565670b4eaecf1db370a9"}, + {file = "filelock-3.12.0.tar.gz", hash = "sha256:fc03ae43288c013d2ea83c8597001b1129db351aad9c57fe2409327916b8e718"}, ] [package.extras] -docs = ["furo (>=2022.12.7)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"] -testing = ["covdefaults (>=2.2.2)", "coverage (>=7.0.1)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-timeout (>=2.1)"] +docs = ["furo (>=2023.3.27)", "sphinx (>=6.1.3)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"] +testing = ["covdefaults (>=2.3)", "coverage (>=7.2.3)", "diff-cover (>=7.5)", "pytest (>=7.3.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)", "pytest-timeout (>=2.1)"] [[package]] name = "flake8" @@ -1007,14 +1085,14 @@ files = [ [[package]] name = "ftrack-python-api" -version = "2.3.3" +version = "2.5.0" description = "Python API for ftrack." category = "main" optional = false -python-versions = ">=2.7.9, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, < 3.10" +python-versions = ">=2.7.9, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" files = [ - {file = "ftrack-python-api-2.3.3.tar.gz", hash = "sha256:358f37e5b1c5635eab107c19e27a0c890d512877f78af35b1ac416e90c037295"}, - {file = "ftrack_python_api-2.3.3-py2.py3-none-any.whl", hash = "sha256:82834c4d5def5557a2ea547a7e6f6ba84d3129e8f90457d8bbd85b287a2c39f6"}, + {file = "ftrack-python-api-2.5.0.tar.gz", hash = "sha256:95205022552b1abadec5e9dcb225762b8e8b9f16ebeadba374e56c25e69e6954"}, + {file = "ftrack_python_api-2.5.0-py2.py3-none-any.whl", hash = "sha256:59ef3f1d47e5c1df8c3f7ebcc937bbc9a5613b147f9ed083f10cff6370f0750d"}, ] [package.dependencies] @@ -1041,23 +1119,23 @@ files = [ [[package]] name = "gazu" -version = "0.8.34" +version = "0.9.3" description = "Gazu is a client for Zou, the API to store the data of your CG production." 
category = "main" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*" +python-versions = ">= 2.7, != 3.0.*, != 3.1.*, != 3.2.*, != 3.3.*, != 3.4.*, != 3.5.*, != 3.6.1, != 3.6.2" files = [ - {file = "gazu-0.8.34-py2.py3-none-any.whl", hash = "sha256:a78a8c5e61108aeaab6185646af78b0402dbdb29097e8ba5882bd55410f38c4b"}, + {file = "gazu-0.9.3-py2.py3-none-any.whl", hash = "sha256:daa6e4bdaa364b68a048ad97837aec011a0060d12edc3a5ac6ae34c13a05cb2b"}, ] [package.dependencies] -deprecated = "1.2.13" -python-socketio = {version = "4.6.1", extras = ["client"], markers = "python_version >= \"3.5\""} -requests = ">=2.25.1,<=2.28.1" +python-socketio = {version = "5.8.0", extras = ["client"], markers = "python_version != \"2.7\""} +requests = ">=2.25.1" [package.extras] dev = ["wheel"] -test = ["black (<=22.8.0)", "pre-commit (<=2.20.0)", "pytest (<=7.1.3)", "pytest-cov (<=3.0.0)", "requests-mock (==1.10.0)"] +lint = ["black (==23.3.0)", "pre-commit (==3.2.2)"] +test = ["pytest", "pytest-cov", "requests-mock"] [[package]] name = "gitdb" @@ -1076,14 +1154,14 @@ smmap = ">=3.0.1,<6" [[package]] name = "gitpython" -version = "3.1.30" -description = "GitPython is a python library used to interact with Git repositories" +version = "3.1.31" +description = "GitPython is a Python library used to interact with Git repositories" category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "GitPython-3.1.30-py3-none-any.whl", hash = "sha256:cd455b0000615c60e286208ba540271af9fe531fa6a87cc590a7298785ab2882"}, - {file = "GitPython-3.1.30.tar.gz", hash = "sha256:769c2d83e13f5d938b7688479da374c4e3d49f71549aaf462b646db9602ea6f8"}, + {file = "GitPython-3.1.31-py3-none-any.whl", hash = "sha256:f04893614f6aa713a60cbbe1e6a97403ef633103cdd0ef5eb6efe0deb98dbe8d"}, + {file = "GitPython-3.1.31.tar.gz", hash = "sha256:8ce3bcf69adfdf7c7d503e78fd3b1c492af782d58893b650adb2ac8912ddd573"}, ] [package.dependencies] @@ -1134,14 +1212,14 @@ uritemplate = ">=3.0.0,<4dev" [[package]] name = "google-auth" -version = "2.16.0" +version = "2.17.3" description = "Google Authentication Library" category = "main" optional = false python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*" files = [ - {file = "google-auth-2.16.0.tar.gz", hash = "sha256:ed7057a101af1146f0554a769930ac9de506aeca4fd5af6543ebe791851a9fbd"}, - {file = "google_auth-2.16.0-py2.py3-none-any.whl", hash = "sha256:5045648c821fb72384cdc0e82cc326df195f113a33049d9b62b74589243d2acc"}, + {file = "google-auth-2.17.3.tar.gz", hash = "sha256:ce311e2bc58b130fddf316df57c9b3943c2a7b4f6ec31de9663a9333e4064efc"}, + {file = "google_auth-2.17.3-py2.py3-none-any.whl", hash = "sha256:f586b274d3eb7bd932ea424b1c702a30e0393a2e2bc4ca3eae8263ffd8be229f"}, ] [package.dependencies] @@ -1176,14 +1254,14 @@ six = "*" [[package]] name = "googleapis-common-protos" -version = "1.58.0" +version = "1.59.0" description = "Common protobufs used in Google APIs" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "googleapis-common-protos-1.58.0.tar.gz", hash = "sha256:c727251ec025947d545184ba17e3578840fc3a24a0516a020479edab660457df"}, - {file = "googleapis_common_protos-1.58.0-py2.py3-none-any.whl", hash = "sha256:ca3befcd4580dab6ad49356b46bf165bb68ff4b32389f028f1abd7c10ab9519a"}, + {file = "googleapis-common-protos-1.59.0.tar.gz", hash = "sha256:4168fcb568a826a52f23510412da405abd93f4d23ba544bb68d943b14ba3cb44"}, + {file = "googleapis_common_protos-1.59.0-py2.py3-none-any.whl", hash = 
"sha256:b287dc48449d1d41af0c69f4ea26242b5ae4c3d7249a38b0984c86a4caffff1f"}, ] [package.dependencies] @@ -1194,14 +1272,14 @@ grpc = ["grpcio (>=1.44.0,<2.0.0dev)"] [[package]] name = "httplib2" -version = "0.21.0" +version = "0.22.0" description = "A comprehensive HTTP client library." category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ - {file = "httplib2-0.21.0-py3-none-any.whl", hash = "sha256:987c8bb3eb82d3fa60c68699510a692aa2ad9c4bd4f123e51dfb1488c14cdd01"}, - {file = "httplib2-0.21.0.tar.gz", hash = "sha256:fc144f091c7286b82bec71bdbd9b27323ba709cc612568d3000893bfd9cb4b34"}, + {file = "httplib2-0.22.0-py3-none-any.whl", hash = "sha256:14ae0a53c1ba8f3d37e9e27cf37eabb0fb9980f435ba405d546948b009dd64dc"}, + {file = "httplib2-0.22.0.tar.gz", hash = "sha256:d7a10bc5ef5ab08322488bde8c726eeee5c8618723fdb399597ec58f3d82df81"}, ] [package.dependencies] @@ -1209,14 +1287,14 @@ pyparsing = {version = ">=2.4.2,<3.0.0 || >3.0.0,<3.0.1 || >3.0.1,<3.0.2 || >3.0 [[package]] name = "identify" -version = "2.5.13" +version = "2.5.24" description = "File identification library for Python" category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "identify-2.5.13-py2.py3-none-any.whl", hash = "sha256:8aa48ce56e38c28b6faa9f261075dea0a942dfbb42b341b4e711896cbb40f3f7"}, - {file = "identify-2.5.13.tar.gz", hash = "sha256:abb546bca6f470228785338a01b539de8a85bbf46491250ae03363956d8ebb10"}, + {file = "identify-2.5.24-py2.py3-none-any.whl", hash = "sha256:986dbfb38b1140e763e413e6feb44cd731faf72d1909543178aa79b0e258265d"}, + {file = "identify-2.5.24.tar.gz", hash = "sha256:0aac67d5b4812498056d28a9a512a483f5085cc28640b02b258a59dac34301d4"}, ] [package.extras] @@ -1248,14 +1326,14 @@ files = [ [[package]] name = "importlib-metadata" -version = "6.0.0" +version = "6.6.0" description = "Read metadata from Python packages" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "importlib_metadata-6.0.0-py3-none-any.whl", hash = "sha256:7efb448ec9a5e313a57655d35aa54cd3e01b7e1fbcf72dce1bf06119420f5bad"}, - {file = "importlib_metadata-6.0.0.tar.gz", hash = "sha256:e354bedeb60efa6affdcc8ae121b73544a7aa74156d047311948f6d711cd378d"}, + {file = "importlib_metadata-6.6.0-py3-none-any.whl", hash = "sha256:43dd286a2cd8995d5eaef7fee2066340423b818ed3fd70adf0bad5f1fac53fed"}, + {file = "importlib_metadata-6.6.0.tar.gz", hash = "sha256:92501cdf9cc66ebd3e612f1b4f0c0765dfa42f0fa38ffb319b6bd84dd675d705"}, ] [package.dependencies] @@ -1280,19 +1358,19 @@ files = [ [[package]] name = "isort" -version = "5.11.4" +version = "5.12.0" description = "A Python utility / library to sort Python imports." 
category = "dev" optional = false -python-versions = ">=3.7.0" +python-versions = ">=3.8.0" files = [ - {file = "isort-5.11.4-py3-none-any.whl", hash = "sha256:c033fd0edb91000a7f09527fe5c75321878f98322a77ddcc81adbd83724afb7b"}, - {file = "isort-5.11.4.tar.gz", hash = "sha256:6db30c5ded9815d813932c04c2f85a360bcdd35fed496f4d8f35495ef0a261b6"}, + {file = "isort-5.12.0-py3-none-any.whl", hash = "sha256:f84c2818376e66cf843d497486ea8fed8700b340f308f076c6fb1229dff318b6"}, + {file = "isort-5.12.0.tar.gz", hash = "sha256:8bef7dde241278824a6d83f44a544709b065191b95b6e50894bdc722fcba0504"}, ] [package.extras] -colors = ["colorama (>=0.4.3,<0.5.0)"] -pipfile-deprecated-finder = ["pipreqs", "requirementslib"] +colors = ["colorama (>=0.4.3)"] +pipfile-deprecated-finder = ["pip-shims (>=0.5.2)", "pipreqs", "requirementslib"] plugins = ["setuptools"] requirements-deprecated-finder = ["pip-api", "pipreqs"] @@ -1448,45 +1526,58 @@ files = [ [[package]] name = "lief" -version = "0.12.3" +version = "0.13.1" description = "Library to instrument executable formats" category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.8" files = [ - {file = "lief-0.12.3-cp310-cp310-macosx_10_14_arm64.whl", hash = "sha256:66724f337e6a36cea1a9380f13b59923f276c49ca837becae2e7be93a2e245d9"}, - {file = "lief-0.12.3-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:6d18aafa2028587c98f6d4387bec94346e92f2b5a8a5002f70b1cf35b1c045cc"}, - {file = "lief-0.12.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c078d6230279ffd3bca717c79664fb8368666f610b577deb24b374607936e9c1"}, - {file = "lief-0.12.3-cp310-cp310-win32.whl", hash = "sha256:e3a6af926532d0aac9e7501946134513d63217bacba666e6f7f5a0b7e15ba236"}, - {file = "lief-0.12.3-cp310-cp310-win_amd64.whl", hash = "sha256:0750b72e3aa161e1fb0e2e7f571121ae05d2428aafd742ff05a7656ad2288447"}, - {file = "lief-0.12.3-cp311-cp311-macosx_10_14_arm64.whl", hash = "sha256:b5c123cb99a7879d754c059e299198b34e7e30e3b64cf22e8962013db0099f47"}, - {file = "lief-0.12.3-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:8bc58fa26a830df6178e36f112cb2bbdd65deff593f066d2d51434ff78386ba5"}, - {file = "lief-0.12.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:04eb6b70d646fb5bd6183575928ee23715550f161f2832cbcd8c6ff2071fb408"}, - {file = "lief-0.12.3-cp311-cp311-win32.whl", hash = "sha256:7e2d0a53c403769b04adcf8df92e83c5e25f9103a052aa7f17b0a9cf057735fb"}, - {file = "lief-0.12.3-cp311-cp311-win_amd64.whl", hash = "sha256:7f6395c12ee1bc4a5162f567cba96d0c72dfb660e7902e84d4f3029daf14fe33"}, - {file = "lief-0.12.3-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:71327fdc764fd2b1f3cd371d8ac5e0b801bde32b71cfcf7dccee506d46768539"}, - {file = "lief-0.12.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d320fb80ed5b42b354b8e4f251ab05a51929c162c57c377b5e95ad4b1c1b415d"}, - {file = "lief-0.12.3-cp36-cp36m-win32.whl", hash = "sha256:176fa6c342dd480195cda34a20f62ac76dfae103b22ca7583b762e0b434ee1f3"}, - {file = "lief-0.12.3-cp36-cp36m-win_amd64.whl", hash = "sha256:3a18fe108fb82a2640864deef933731afe77413b1226551796ef2c373a1b3a2a"}, - {file = "lief-0.12.3-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:c73e990cd2737d1060b8c1e8edcc128832806995b69d1d6bf191409e2cea7bde"}, - {file = "lief-0.12.3-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:5fa2b1c8ffe47ee66b2507c2bb4e3fd628965532b7888c0627d10e690b5ef20c"}, - {file = "lief-0.12.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:5f224e9a261e88099f86160f121d088d30894c2946e3e551cf11c678daadcf2b"}, - {file = "lief-0.12.3-cp37-cp37m-win32.whl", hash = "sha256:3481d7c9fb3d3a1acff53851f40efd1a5a05d354312d367294bc2e310b736826"}, - {file = "lief-0.12.3-cp37-cp37m-win_amd64.whl", hash = "sha256:4e5173e1be5ebf43594f4eb187cbcb04758761942bc0a1e685ea1cb9047dc0d9"}, - {file = "lief-0.12.3-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:54d6a45e01260b9c8bf1c99f58257cff5338aee5c02eacfeee789f9d15cf38c6"}, - {file = "lief-0.12.3-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:4501dc399fb15dc7a3c8df4a76264a86be6d581d99098dafc3a67626149d8ff1"}, - {file = "lief-0.12.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c848aadac0816268aeb9dde7cefdb54bf24f78e664a19e97e74c92d3be1bb147"}, - {file = "lief-0.12.3-cp38-cp38-win32.whl", hash = "sha256:d7e35f9ee9dd6e79add3b343f83659b71c05189e5cb224e02a1902ddc7654e96"}, - {file = "lief-0.12.3-cp38-cp38-win_amd64.whl", hash = "sha256:b00667257b43e93d94166c959055b6147d46d302598f3ee55c194b40414c89cc"}, - {file = "lief-0.12.3-cp39-cp39-macosx_10_14_arm64.whl", hash = "sha256:e6a1b5b389090d524621c2455795e1262f62dc9381bedd96f0cd72b878c4066d"}, - {file = "lief-0.12.3-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:ae773196df814202c0c51056163a1478941b299512b09660a3c37be3c7fac81e"}, - {file = "lief-0.12.3-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:4a47f410032c63ac3be051d963d0337d6b47f0e94bfe8e946ab4b6c428f4d0f8"}, - {file = "lief-0.12.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dbd11367c2259bd1131a6c8755dcde33314324de5ea029227bfbc7d3755871e6"}, - {file = "lief-0.12.3-cp39-cp39-win32.whl", hash = "sha256:2ce53e311918c3e5b54c815ef420a747208d2a88200c41cd476f3dd1eb876bcf"}, - {file = "lief-0.12.3-cp39-cp39-win_amd64.whl", hash = "sha256:446e53ccf0ebd1616c5d573470662ff71ca6df3cd62ec1764e303764f3f03cca"}, - {file = "lief-0.12.3.zip", hash = "sha256:62e81d2f1a827d43152aed12446a604627e8833493a51dca027026eed0ce7128"}, + {file = "lief-0.13.1-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:b53317d78f8b7528e3f2f358b3f9334a1a84fae88c5aec1a3b7717ed31bfb066"}, + {file = "lief-0.13.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bb8b285a6c670df590c36fc0c19b9d2e32b99f17e57afa29bb3052f1d55aa50f"}, + {file = "lief-0.13.1-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:be871116faa698b6d9da76b0caec2ec5b7e7b8781cfb3a4ac0c4e348fb37ab49"}, + {file = "lief-0.13.1-cp310-cp310-manylinux_2_24_x86_64.whl", hash = "sha256:c6839df875e912edd3fc553ab5d1b916527adee9c57ba85c69314a93f7ba2e15"}, + {file = "lief-0.13.1-cp310-cp310-win32.whl", hash = "sha256:b1f295dbb57094443926ac6051bee9a1945d92344f470da1cb506060eb2f91ac"}, + {file = "lief-0.13.1-cp310-cp310-win_amd64.whl", hash = "sha256:8439805a389cc67b6d4ea7d757a3211f22298edce53c5b064fdf8bf05fabba54"}, + {file = "lief-0.13.1-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:3cfbc6c50f9e3a8015cd5ee88dfe83f423562c025439143bbd5c086a3f9fe599"}, + {file = "lief-0.13.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:661abaa48bc032b9a7529e0b73d2ced3e4a1f13381592f6b9e940750b07a5ac2"}, + {file = "lief-0.13.1-cp311-cp311-manylinux2014_aarch64.whl", hash = "sha256:23617d96d162081f8bf315d9b0494845891f8d0f04ad60991b83367ee9e261aa"}, + {file = "lief-0.13.1-cp311-cp311-manylinux_2_24_x86_64.whl", hash = "sha256:aa7f45c5125be80a513624d3a5f6bd50751c2edc6de5357fde218580111c8535"}, + {file = "lief-0.13.1-cp311-cp311-win32.whl", hash = 
"sha256:018b542f09fe2305e1585a3e63a7e5132927b835062b456e5c8c571db7784d1e"}, + {file = "lief-0.13.1-cp311-cp311-win_amd64.whl", hash = "sha256:bfbf8885a3643ea9aaf663d039f50ca58b228886c3fe412725b22851aeda3b77"}, + {file = "lief-0.13.1-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:a0472636ab15b9afecf8b5d55966912af8cb4de2f05b98fc05c87d51880d0208"}, + {file = "lief-0.13.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:ccfba33c02f21d4ede26ab85eb6539a00e74e236569c13dcbab2e157b73673c4"}, + {file = "lief-0.13.1-cp38-cp38-manylinux_2_24_x86_64.whl", hash = "sha256:e414d6c23f26053f4824d080885ab1b75482122796cba7d09cbf157900646289"}, + {file = "lief-0.13.1-cp38-cp38-win32.whl", hash = "sha256:a18fee5cf69adf9d5ee977778ccd46c39c450960f806231b26b69011f81bc712"}, + {file = "lief-0.13.1-cp38-cp38-win_amd64.whl", hash = "sha256:04c87039d1e68ebc467f83136179626403547dd1ce851541345f8ca0b1fe6c5b"}, + {file = "lief-0.13.1-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:0283a4c749afe58be8e21cdd9be79c657c51ca9b8346f75f4b97349b1f022851"}, + {file = "lief-0.13.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:95a4b6d1f8dba9360aecf7542e54ce5eb02c0e88f2d827b5445594d5d51109f5"}, + {file = "lief-0.13.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:16753bd72b1e3932d94d088a93b64e08c1f6c8bce1b064b47fe66ed73d9562b2"}, + {file = "lief-0.13.1-cp39-cp39-manylinux_2_24_x86_64.whl", hash = "sha256:965fadb1301d1a81f16067e4fa743d2be3f6aa71391a83b752ff811ec74b0766"}, + {file = "lief-0.13.1-cp39-cp39-win32.whl", hash = "sha256:57bdb0471760c4ff520f5e5d005e503cc7ea3ebe22df307bb579a1a561b8c4e9"}, + {file = "lief-0.13.1-cp39-cp39-win_amd64.whl", hash = "sha256:a3c900f49c3d3135c728faeb386d13310bb3511eb2d4e1c9b109b48ae2658361"}, ] +[[package]] +name = "linkify-it-py" +version = "2.0.2" +description = "Links recognition library with FULL unicode support." +category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "linkify-it-py-2.0.2.tar.gz", hash = "sha256:19f3060727842c254c808e99d465c80c49d2c7306788140987a1a7a29b0d6ad2"}, + {file = "linkify_it_py-2.0.2-py3-none-any.whl", hash = "sha256:a3a24428f6c96f27370d7fe61d2ac0be09017be5190d68d8658233171f1b6541"}, +] + +[package.dependencies] +uc-micro-py = "*" + +[package.extras] +benchmark = ["pytest", "pytest-benchmark"] +dev = ["black", "flake8", "isort", "pre-commit", "pyproject-flake8"] +doc = ["myst-parser", "sphinx", "sphinx-book-theme"] +test = ["coverage", "pytest", "pytest-cov"] + [[package]] name = "log4mongo" version = "1.7.0" @@ -1501,6 +1592,47 @@ files = [ [package.dependencies] pymongo = "*" +[[package]] +name = "m2r2" +version = "0.3.3.post2" +description = "Markdown and reStructuredText in a single file." +category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "m2r2-0.3.3.post2-py3-none-any.whl", hash = "sha256:86157721eb6eabcd54d4eea7195890cc58fa6188b8d0abea633383cfbb5e11e3"}, + {file = "m2r2-0.3.3.post2.tar.gz", hash = "sha256:e62bcb0e74b3ce19cda0737a0556b04cf4a43b785072fcef474558f2c1482ca8"}, +] + +[package.dependencies] +docutils = ">=0.19" +mistune = "0.8.4" + +[[package]] +name = "markdown-it-py" +version = "2.2.0" +description = "Python port of markdown-it. Markdown parsing, done right!" 
+category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "markdown-it-py-2.2.0.tar.gz", hash = "sha256:7c9a5e412688bc771c67432cbfebcdd686c93ce6484913dccf06cb5a0bea35a1"}, + {file = "markdown_it_py-2.2.0-py3-none-any.whl", hash = "sha256:5a35f8d1870171d9acc47b99612dc146129b631baf04970128b568f190d0cc30"}, +] + +[package.dependencies] +mdurl = ">=0.1,<1.0" + +[package.extras] +benchmarking = ["psutil", "pytest", "pytest-benchmark"] +code-style = ["pre-commit (>=3.0,<4.0)"] +compare = ["commonmark (>=0.9,<1.0)", "markdown (>=3.4,<4.0)", "mistletoe (>=1.0,<2.0)", "mistune (>=2.0,<3.0)", "panflute (>=2.3,<3.0)"] +linkify = ["linkify-it-py (>=1,<3)"] +plugins = ["mdit-py-plugins"] +profiling = ["gprof2dot"] +rtd = ["attrs", "myst-parser", "pyyaml", "sphinx", "sphinx-copybutton", "sphinx-design", "sphinx_book_theme"] +testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions"] + [[package]] name = "markupsafe" version = "2.0.1" @@ -1592,6 +1724,50 @@ files = [ {file = "mccabe-0.7.0.tar.gz", hash = "sha256:348e0240c33b60bbdf4e523192ef919f28cb2c3d7d5c7794f74009290f236325"}, ] +[[package]] +name = "mdit-py-plugins" +version = "0.3.5" +description = "Collection of plugins for markdown-it-py" +category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "mdit-py-plugins-0.3.5.tar.gz", hash = "sha256:eee0adc7195e5827e17e02d2a258a2ba159944a0748f59c5099a4a27f78fcf6a"}, + {file = "mdit_py_plugins-0.3.5-py3-none-any.whl", hash = "sha256:ca9a0714ea59a24b2b044a1831f48d817dd0c817e84339f20e7889f392d77c4e"}, +] + +[package.dependencies] +markdown-it-py = ">=1.0.0,<3.0.0" + +[package.extras] +code-style = ["pre-commit"] +rtd = ["attrs", "myst-parser (>=0.16.1,<0.17.0)", "sphinx-book-theme (>=0.1.0,<0.2.0)"] +testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions"] + +[[package]] +name = "mdurl" +version = "0.1.2" +description = "Markdown URL utilities" +category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8"}, + {file = "mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba"}, +] + +[[package]] +name = "mistune" +version = "0.8.4" +description = "The fastest markdown parser in pure Python" +category = "dev" +optional = false +python-versions = "*" +files = [ + {file = "mistune-0.8.4-py2.py3-none-any.whl", hash = "sha256:88a1051873018da288eee8538d476dffe1262495144b33ecb586c4ab266bb8d4"}, + {file = "mistune-0.8.4.tar.gz", hash = "sha256:59a3429db53c50b5c6bcc8a07f8848cb00d7dc8bdb431a4ab41920d201d4756e"}, +] + [[package]] name = "multidict" version = "6.0.4" @@ -1676,131 +1852,81 @@ files = [ {file = "multidict-6.0.4.tar.gz", hash = "sha256:3666906492efb76453c0e7b97f2cf459b0682e7402c0489a95484965dbc1da49"}, ] +[[package]] +name = "myst-parser" +version = "0.18.1" +description = "An extended commonmark compliant parser, with bridges to docutils & sphinx." 
+category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "myst-parser-0.18.1.tar.gz", hash = "sha256:79317f4bb2c13053dd6e64f9da1ba1da6cd9c40c8a430c447a7b146a594c246d"}, + {file = "myst_parser-0.18.1-py3-none-any.whl", hash = "sha256:61b275b85d9f58aa327f370913ae1bec26ebad372cc99f3ab85c8ec3ee8d9fb8"}, +] + +[package.dependencies] +docutils = ">=0.15,<0.20" +jinja2 = "*" +markdown-it-py = ">=1.0.0,<3.0.0" +mdit-py-plugins = ">=0.3.1,<0.4.0" +pyyaml = "*" +sphinx = ">=4,<6" +typing-extensions = "*" + +[package.extras] +code-style = ["pre-commit (>=2.12,<3.0)"] +linkify = ["linkify-it-py (>=1.0,<2.0)"] +rtd = ["ipython", "sphinx-book-theme", "sphinx-design", "sphinxcontrib.mermaid (>=0.7.1,<0.8.0)", "sphinxext-opengraph (>=0.6.3,<0.7.0)", "sphinxext-rediraffe (>=0.2.7,<0.3.0)"] +testing = ["beautifulsoup4", "coverage[toml]", "pytest (>=6,<7)", "pytest-cov", "pytest-param-files (>=0.3.4,<0.4.0)", "pytest-regressions", "sphinx (<5.2)", "sphinx-pytest"] + [[package]] name = "nodeenv" -version = "1.7.0" +version = "1.8.0" description = "Node.js virtual environment builder" category = "dev" optional = false python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*" files = [ - {file = "nodeenv-1.7.0-py2.py3-none-any.whl", hash = "sha256:27083a7b96a25f2f5e1d8cb4b6317ee8aeda3bdd121394e5ac54e498028a042e"}, - {file = "nodeenv-1.7.0.tar.gz", hash = "sha256:e0e7f7dfb85fc5394c6fe1e8fa98131a2473e04311a45afb6508f7cf1836fa2b"}, + {file = "nodeenv-1.8.0-py2.py3-none-any.whl", hash = "sha256:df865724bb3c3adc86b3876fa209771517b0cfe596beff01a92700e0e8be4cec"}, + {file = "nodeenv-1.8.0.tar.gz", hash = "sha256:d51e0c37e64fbf47d017feac3145cdbb58836d7eee8c6f6d3b6880c5456227d2"}, ] [package.dependencies] setuptools = "*" -[[package]] -name = "opencolorio" -version = "2.2.1" -description = "OpenColorIO (OCIO) is a complete color management solution geared towards motion picture production with an emphasis on visual effects and computer animation." 
-category = "main" -optional = false -python-versions = "*" -files = [ - {file = "opencolorio-2.2.1-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:a9feec76e450325f12203264194d905a938d5e7944772b806886f9531e406d42"}, - {file = "opencolorio-2.2.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7eeae01328b359408940a1f29d53b15b034755413d95d08781b76084ee14cbb1"}, - {file = "opencolorio-2.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85b63a9162e99f0f29ef4074017d1b6e8caf59096043fb91cbacfc5bc01fa0b9"}, - {file = "opencolorio-2.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:67d19ea54daff2b209b91981da415aa41ea8e3a60fecd5dd843ae13272d38dcf"}, - {file = "opencolorio-2.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:da0043a1007d269b5da3c8ca1de8c63926b38bf5e08cfade6cb8f2f5f6b663b9"}, - {file = "opencolorio-2.2.1-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:62180cec075cae8dff56eeb977132eb9755d7fe312d8d34236cba838cb9314b3"}, - {file = "opencolorio-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:24b7bfc4b77c04845de847373e58232c48838042d5e45e027b8bf64bada988e3"}, - {file = "opencolorio-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41cadab13b18dbedd992df2056c787cf38bf89a5b0903b90f701d5228ac496f9"}, - {file = "opencolorio-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa278dd4414791a5605e685b562b6ad1c729a4a44c1c906151f5bca10c0ff10e"}, - {file = "opencolorio-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:7b44858c26b662ec42b089f8f85ea3aa63aa04e0e58e902a4cbf8cae0fbd4c6c"}, - {file = "opencolorio-2.2.1-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:07fce0d36a6041b524b2122b9f55fbd03e029def5a22e93822041b652b60590a"}, - {file = "opencolorio-2.2.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ae043bc588d9ee98f54fe9524481eba5634d6dd70d0c70e1bd242b60a3a81731"}, - {file = "opencolorio-2.2.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3a4ad1a4ed5742a7dda41f0548274e8328b2774ce04dfc31fd5dfbacabc4c166"}, - {file = "opencolorio-2.2.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9bd885e34898c204db19a9e6926c551a74bda6d8e7d3ef27596630e3422b99b1"}, - {file = "opencolorio-2.2.1-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:86ed205bec96fd84e882d431c181560df0cf6f0f73150976303b6f3ff1d9d5ed"}, - {file = "opencolorio-2.2.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c1bf1c19baa86203e2329194ea837161520dae5c94e4f04b7659e9bfe4f1a6a9"}, - {file = "opencolorio-2.2.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:639f7052da7086d999c0d84e424453fb44abc8f2d22ec8601d20d8ee9d90384b"}, - {file = "opencolorio-2.2.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f7e3208c5c1ac63a6e921398db661fdd9309b17253b285f227818713f3faec92"}, - {file = "opencolorio-2.2.1-cp38-cp38-win_amd64.whl", hash = "sha256:68814600c0d8c07b552e1f1e4e32d45bffba4cb49b41481e5d4dd0bc56a206ea"}, - {file = "opencolorio-2.2.1-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:cb5337ac2804dbb90c856b423d2799d3dc35f9c948da25d8e6506d1dd8200df7"}, - {file = "opencolorio-2.2.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:425593a96de7927aa7cda065dc3729e881de1d0b72c43e704e02962adb63b4ad"}, - {file = "opencolorio-2.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8be9f6af01e4c710de4cc03c9b6de04ef0844bf611e9100abf045ec62a4c685a"}, - {file = 
"opencolorio-2.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e84788002aa28151409f2367a040e9d39ffea0a9129777451bd0c55ac87d9d47"}, - {file = "opencolorio-2.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:d92802922bc4e2ff3e9a06d44b6055efd1863abb1aaf0243849d35b077b72253"}, - {file = "opencolorio-2.2.1.tar.gz", hash = "sha256:283abb8db5bc18ab9686e08255a9245deaba3d7837be5363b7a69b0902b73a78"}, -] - -[[package]] -name = "opentimelineio" -version = "0.14.1" -description = "Editorial interchange format and API" -category = "main" -optional = false -python-versions = ">2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*, !=3.6.*, !=3.9.0" -files = [ - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:d5466742d1de323e922965e64ca7099f6dd756774d5f8b404a11d6ec6e7c5fe0"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:3f5187eb0cd8f607bfcc5c1d58ce878734975a0a6a91360a2605ad831198ed89"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:a2b64bf817d3065f7302c748bcc1d5938971e157c42e67fcb4e5e3612358813b"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-win32.whl", hash = "sha256:4cde33ea83ba041332bae55474fc155219871396b82031dd54d3e857973805b6"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27m-win_amd64.whl", hash = "sha256:d5dc153867c688ad4f39cbac78eda069cfe4f17376d9444d202f8073efa6cbd4"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:e07390dd1e0f82e5a5880ef2d498cbcbf482b4e5bfb4b9026342578a2fad358d"}, - {file = "OpenTimelineIO-0.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:4c1c522df397536c7620d44e32302165a9ef9bbbf0de83a5a0621f0a75047cc9"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e368a1d64366e3fdf1eadd10077a135833fdc893ff65f8dc43a91254cb7ee6fa"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:cf2cd94d11d0ae0fc78418cc0d17f2fe3bf85598b9b109f98b2301272a87bff5"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:7af41f43ef72fbf3c0ae2e47cabd7715eb348726c9e5e430ab36ce2357181cf4"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-win32.whl", hash = "sha256:55dbb859d16535ba5dab8a66a78aef8db55f030d771b6e5b91e94241b6db65bd"}, - {file = "OpenTimelineIO-0.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:08eaef8fbc423c25e94e189eb788c92c16916ae74d16ebcab34ba889e980c6ad"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:10b34a6997d6d6edb9b8a1c93718a1e90e8202d930559cdce2ad369e0473327f"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:c6b44986da8c7a64f8f549795279f0af05ec875a425d11600585dab0b3269ec2"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:45e1774d9f7215190a7c1e5b70dfc237f4a03b79b0539902d9ec8074707450f9"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-win32.whl", hash = "sha256:1ee0e72320309b8dedf0e2f40fc2b8d3dd2c854db0aba28a84a038d7177a1208"}, - {file = "OpenTimelineIO-0.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:bd58e9fdc765623e160ab3ec32e9199bcb3906a6f3c06cca7564fbb7c18d2d28"}, - {file = "OpenTimelineIO-0.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f8d6e15f793577de59cc01e49600898ab12dbdc260dbcba83936c00965f0090a"}, - {file = "OpenTimelineIO-0.14.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:50644c5e43076a3717b77645657545d0be19376ecb4c6f2e4103670052d726d4"}, - {file = 
"OpenTimelineIO-0.14.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:a44f77fb5dbfd60d992ac2acc6782a7b0a26452db3a069425b8bd73b2f3bb336"}, - {file = "OpenTimelineIO-0.14.1-cp39-cp39-win32.whl", hash = "sha256:63fb0d1258f490bcebf6325067db64a0f0dc405b8b905ee2bb625f04d04a8082"}, - {file = "OpenTimelineIO-0.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:8a303b2f3dfba542f588b227575f1967f7a9da854b34f620504e1ecb8d551f5f"}, - {file = "OpenTimelineIO-0.14.1.tar.gz", hash = "sha256:0b9adc0fd303b978af120259d6b1d23e0623800615b4a3e2eb9f9fb2c70d5d13"}, -] - -[package.dependencies] -pyaaf2 = ">=1.4.0,<1.5.0" - -[package.extras] -dev = ["check-manifest", "coverage (>=4.5)", "flake8 (>=3.5)", "urllib3 (>=1.24.3)"] -view = ["PySide2 (>=5.11,<6.0)"] - [[package]] name = "packaging" -version = "23.0" +version = "23.1" description = "Core utilities for Python packages" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "packaging-23.0-py3-none-any.whl", hash = "sha256:714ac14496c3e68c99c29b00845f7a2b85f3bb6f1078fd9f72fd20f0570002b2"}, - {file = "packaging-23.0.tar.gz", hash = "sha256:b6ad297f8907de0fa2fe1ccbd26fdaf387f5f47c7275fedf8cce89f99446cf97"}, + {file = "packaging-23.1-py3-none-any.whl", hash = "sha256:994793af429502c4ea2ebf6bf664629d07c1a9fe974af92966e4b8d2df7edc61"}, + {file = "packaging-23.1.tar.gz", hash = "sha256:a392980d2b6cffa644431898be54b0045151319d1e7ec34f0cfed48767dd334f"}, ] [[package]] name = "paramiko" -version = "2.12.0" +version = "3.2.0" description = "SSH2 protocol library" category = "main" optional = false -python-versions = "*" +python-versions = ">=3.6" files = [ - {file = "paramiko-2.12.0-py2.py3-none-any.whl", hash = "sha256:b2df1a6325f6996ef55a8789d0462f5b502ea83b3c990cbb5bbe57345c6812c4"}, - {file = "paramiko-2.12.0.tar.gz", hash = "sha256:376885c05c5d6aa6e1f4608aac2a6b5b0548b1add40274477324605903d9cd49"}, + {file = "paramiko-3.2.0-py3-none-any.whl", hash = "sha256:df0f9dd8903bc50f2e10580af687f3015bf592a377cd438d2ec9546467a14eb8"}, + {file = "paramiko-3.2.0.tar.gz", hash = "sha256:93cdce625a8a1dc12204439d45033f3261bdb2c201648cfcdc06f9fd0f94ec29"}, ] [package.dependencies] -bcrypt = ">=3.1.3" -cryptography = ">=2.5" -pynacl = ">=1.0.1" -six = "*" +bcrypt = ">=3.2" +cryptography = ">=3.3" +pynacl = ">=1.5" [package.extras] -all = ["bcrypt (>=3.1.3)", "gssapi (>=1.4.1)", "invoke (>=1.3)", "pyasn1 (>=0.1.7)", "pynacl (>=1.0.1)", "pywin32 (>=2.1.8)"] -ed25519 = ["bcrypt (>=3.1.3)", "pynacl (>=1.0.1)"] +all = ["gssapi (>=1.4.1)", "invoke (>=2.0)", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8)"] gssapi = ["gssapi (>=1.4.1)", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8)"] -invoke = ["invoke (>=1.3)"] +invoke = ["invoke (>=2.0)"] [[package]] name = "parso" @@ -1820,18 +1946,19 @@ testing = ["docopt", "pytest (<6.0.0)"] [[package]] name = "patchelf" -version = "0.17.2.0" +version = "0.17.2.1" description = "A small utility to modify the dynamic linker and RPATH of ELF executables." 
category = "dev" optional = false python-versions = "*" files = [ - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:b8d86f32e1414d6964d5d166ddd2cf829d156fba0d28d32a3bd0192f987f4529"}, - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.musllinux_1_1_ppc64le.whl", hash = "sha256:9233a0f2fc73820c5bd468f27507bdf0c9ac543f07c7f9888bb7cf910b1be22f"}, - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_17_s390x.manylinux2014_s390x.musllinux_1_1_s390x.whl", hash = "sha256:6601d7d831508bcdd3d8ebfa6435c2379bf11e41af2409ae7b88de572926841c"}, - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_5_i686.manylinux1_i686.musllinux_1_1_i686.whl", hash = "sha256:c62a34f0c25e6c2d6ae44389f819a00ccdf3f292ad1b814fbe1cc23cb27023ce"}, - {file = "patchelf-0.17.2.0-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.musllinux_1_1_x86_64.whl", hash = "sha256:1b9fd14f300341dc020ae05c49274dd1fa6727eabb4e61dd7fb6fb3600acd26e"}, - {file = "patchelf-0.17.2.0.tar.gz", hash = "sha256:dedf987a83d7f6d6f5512269e57f5feeec36719bd59567173b6d9beabe019efe"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.musllinux_1_1_aarch64.whl", hash = "sha256:fc329da0e8f628bd836dfb8eaf523547e342351fa8f739bf2b3fe4a6db5a297c"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.musllinux_1_1_armv7l.whl", hash = "sha256:ccb266a94edf016efe80151172c26cff8c2ec120a57a1665d257b0442784195d"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.musllinux_1_1_ppc64le.whl", hash = "sha256:f47b5bdd6885cfb20abdd14c707d26eb6f499a7f52e911865548d4aa43385502"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_17_s390x.manylinux2014_s390x.musllinux_1_1_s390x.whl", hash = "sha256:a9e6ebb0874a11f7ed56d2380bfaa95f00612b23b15f896583da30c2059fcfa8"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_5_i686.manylinux1_i686.musllinux_1_1_i686.whl", hash = "sha256:3c8d58f0e4c1929b1c7c45ba8da5a84a8f1aa6a82a46e1cfb2e44a4d40f350e5"}, + {file = "patchelf-0.17.2.1-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.musllinux_1_1_x86_64.whl", hash = "sha256:d1a9bc0d4fd80c038523ebdc451a1cce75237cfcc52dbd1aca224578001d5927"}, + {file = "patchelf-0.17.2.1.tar.gz", hash = "sha256:a6eb0dd452ce4127d0d5e1eb26515e39186fa609364274bc1b0b77539cfa7031"}, ] [package.extras] @@ -1854,110 +1981,99 @@ six = "*" [[package]] name = "pillow" -version = "9.4.0" +version = "9.5.0" description = "Python Imaging Library (Fork)" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "Pillow-9.4.0-1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b4b4e9dda4f4e4c4e6896f93e84a8f0bcca3b059de9ddf67dac3c334b1195e1"}, - {file = "Pillow-9.4.0-1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:fb5c1ad6bad98c57482236a21bf985ab0ef42bd51f7ad4e4538e89a997624e12"}, - {file = "Pillow-9.4.0-1-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:f0caf4a5dcf610d96c3bd32932bfac8aee61c96e60481c2a0ea58da435e25acd"}, - {file = "Pillow-9.4.0-1-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:3f4cc516e0b264c8d4ccd6b6cbc69a07c6d582d8337df79be1e15a5056b258c9"}, - {file = "Pillow-9.4.0-1-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:b8c2f6eb0df979ee99433d8b3f6d193d9590f735cf12274c108bd954e30ca858"}, - {file = "Pillow-9.4.0-1-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b70756ec9417c34e097f987b4d8c510975216ad26ba6e57ccb53bc758f490dab"}, - 
{file = "Pillow-9.4.0-1-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:43521ce2c4b865d385e78579a082b6ad1166ebed2b1a2293c3be1d68dd7ca3b9"}, - {file = "Pillow-9.4.0-2-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:9d9a62576b68cd90f7075876f4e8444487db5eeea0e4df3ba298ee38a8d067b0"}, - {file = "Pillow-9.4.0-2-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:87708d78a14d56a990fbf4f9cb350b7d89ee8988705e58e39bdf4d82c149210f"}, - {file = "Pillow-9.4.0-2-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:8a2b5874d17e72dfb80d917213abd55d7e1ed2479f38f001f264f7ce7bae757c"}, - {file = "Pillow-9.4.0-2-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:83125753a60cfc8c412de5896d10a0a405e0bd88d0470ad82e0869ddf0cb3848"}, - {file = "Pillow-9.4.0-2-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:9e5f94742033898bfe84c93c831a6f552bb629448d4072dd312306bab3bd96f1"}, - {file = "Pillow-9.4.0-2-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:013016af6b3a12a2f40b704677f8b51f72cb007dac785a9933d5c86a72a7fe33"}, - {file = "Pillow-9.4.0-2-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:99d92d148dd03fd19d16175b6d355cc1b01faf80dae93c6c3eb4163709edc0a9"}, - {file = "Pillow-9.4.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:2968c58feca624bb6c8502f9564dd187d0e1389964898f5e9e1fbc8533169157"}, - {file = "Pillow-9.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c5c1362c14aee73f50143d74389b2c158707b4abce2cb055b7ad37ce60738d47"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd752c5ff1b4a870b7661234694f24b1d2b9076b8bf337321a814c612665f343"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9a3049a10261d7f2b6514d35bbb7a4dfc3ece4c4de14ef5876c4b7a23a0e566d"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16a8df99701f9095bea8a6c4b3197da105df6f74e6176c5b410bc2df2fd29a57"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:94cdff45173b1919350601f82d61365e792895e3c3a3443cf99819e6fbf717a5"}, - {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:ed3e4b4e1e6de75fdc16d3259098de7c6571b1a6cc863b1a49e7d3d53e036070"}, - {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d5b2f8a31bd43e0f18172d8ac82347c8f37ef3e0b414431157718aa234991b28"}, - {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:09b89ddc95c248ee788328528e6a2996e09eaccddeeb82a5356e92645733be35"}, - {file = "Pillow-9.4.0-cp310-cp310-win32.whl", hash = "sha256:f09598b416ba39a8f489c124447b007fe865f786a89dbfa48bb5cf395693132a"}, - {file = "Pillow-9.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6e78171be3fb7941f9910ea15b4b14ec27725865a73c15277bc39f5ca4f8391"}, - {file = "Pillow-9.4.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3fa1284762aacca6dc97474ee9c16f83990b8eeb6697f2ba17140d54b453e133"}, - {file = "Pillow-9.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:eaef5d2de3c7e9b21f1e762f289d17b726c2239a42b11e25446abf82b26ac132"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4dfdae195335abb4e89cc9762b2edc524f3c6e80d647a9a81bf81e17e3fb6f0"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6abfb51a82e919e3933eb137e17c4ae9c0475a25508ea88993bb59faf82f3b35"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:451f10ef963918e65b8869e17d67db5e2f4ab40e716ee6ce7129b0cde2876eab"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:6663977496d616b618b6cfa43ec86e479ee62b942e1da76a2c3daa1c75933ef4"}, - {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:60e7da3a3ad1812c128750fc1bc14a7ceeb8d29f77e0a2356a8fb2aa8925287d"}, - {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:19005a8e58b7c1796bc0167862b1f54a64d3b44ee5d48152b06bb861458bc0f8"}, - {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f715c32e774a60a337b2bb8ad9839b4abf75b267a0f18806f6f4f5f1688c4b5a"}, - {file = "Pillow-9.4.0-cp311-cp311-win32.whl", hash = "sha256:b222090c455d6d1a64e6b7bb5f4035c4dff479e22455c9eaa1bdd4c75b52c80c"}, - {file = "Pillow-9.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:ba6612b6548220ff5e9df85261bddc811a057b0b465a1226b39bfb8550616aee"}, - {file = "Pillow-9.4.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:5f532a2ad4d174eb73494e7397988e22bf427f91acc8e6ebf5bb10597b49c493"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5dd5a9c3091a0f414a963d427f920368e2b6a4c2f7527fdd82cde8ef0bc7a327"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ef21af928e807f10bf4141cad4746eee692a0dd3ff56cfb25fce076ec3cc8abe"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:847b114580c5cc9ebaf216dd8c8dbc6b00a3b7ab0131e173d7120e6deade1f57"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:653d7fb2df65efefbcbf81ef5fe5e5be931f1ee4332c2893ca638c9b11a409c4"}, - {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:46f39cab8bbf4a384ba7cb0bc8bae7b7062b6a11cfac1ca4bc144dea90d4a9f5"}, - {file = "Pillow-9.4.0-cp37-cp37m-win32.whl", hash = "sha256:7ac7594397698f77bce84382929747130765f66406dc2cd8b4ab4da68ade4c6e"}, - {file = "Pillow-9.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:46c259e87199041583658457372a183636ae8cd56dbf3f0755e0f376a7f9d0e6"}, - {file = "Pillow-9.4.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:0e51f608da093e5d9038c592b5b575cadc12fd748af1479b5e858045fff955a9"}, - {file = "Pillow-9.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:765cb54c0b8724a7c12c55146ae4647e0274a839fb6de7bcba841e04298e1011"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:519e14e2c49fcf7616d6d2cfc5c70adae95682ae20f0395e9280db85e8d6c4df"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d197df5489004db87d90b918033edbeee0bd6df3848a204bca3ff0a903bef837"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0845adc64fe9886db00f5ab68c4a8cd933ab749a87747555cec1c95acea64b0b"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:e1339790c083c5a4de48f688b4841f18df839eb3c9584a770cbd818b33e26d5d"}, - {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:a96e6e23f2b79433390273eaf8cc94fec9c6370842e577ab10dabdcc7ea0a66b"}, - {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:7cfc287da09f9d2a7ec146ee4d72d6ea1342e770d975e49a8621bf54eaa8f30f"}, - {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d7081c084ceb58278dd3cf81f836bc818978c0ccc770cbbb202125ddabec6628"}, - {file = "Pillow-9.4.0-cp38-cp38-win32.whl", hash = 
"sha256:df41112ccce5d47770a0c13651479fbcd8793f34232a2dd9faeccb75eb5d0d0d"}, - {file = "Pillow-9.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:7a21222644ab69ddd9967cfe6f2bb420b460dae4289c9d40ff9a4896e7c35c9a"}, - {file = "Pillow-9.4.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0f3269304c1a7ce82f1759c12ce731ef9b6e95b6df829dccd9fe42912cc48569"}, - {file = "Pillow-9.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cb362e3b0976dc994857391b776ddaa8c13c28a16f80ac6522c23d5257156bed"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2e0f87144fcbbe54297cae708c5e7f9da21a4646523456b00cc956bd4c65815"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:28676836c7796805914b76b1837a40f76827ee0d5398f72f7dcc634bae7c6264"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0884ba7b515163a1a05440a138adeb722b8a6ae2c2b33aea93ea3118dd3a899e"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:53dcb50fbdc3fb2c55431a9b30caeb2f7027fcd2aeb501459464f0214200a503"}, - {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:e8c5cf126889a4de385c02a2c3d3aba4b00f70234bfddae82a5eaa3ee6d5e3e6"}, - {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:6c6b1389ed66cdd174d040105123a5a1bc91d0aa7059c7261d20e583b6d8cbd2"}, - {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0dd4c681b82214b36273c18ca7ee87065a50e013112eea7d78c7a1b89a739153"}, - {file = "Pillow-9.4.0-cp39-cp39-win32.whl", hash = "sha256:6d9dfb9959a3b0039ee06c1a1a90dc23bac3b430842dcb97908ddde05870601c"}, - {file = "Pillow-9.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:54614444887e0d3043557d9dbc697dbb16cfb5a35d672b7a0fcc1ed0cf1c600b"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b9b752ab91e78234941e44abdecc07f1f0d8f51fb62941d32995b8161f68cfe5"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d3b56206244dc8711f7e8b7d6cad4663917cd5b2d950799425076681e8766286"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aabdab8ec1e7ca7f1434d042bf8b1e92056245fb179790dc97ed040361f16bfd"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:db74f5562c09953b2c5f8ec4b7dfd3f5421f31811e97d1dbc0a7c93d6e3a24df"}, - {file = "Pillow-9.4.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e9d7747847c53a16a729b6ee5e737cf170f7a16611c143d95aa60a109a59c336"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b52ff4f4e002f828ea6483faf4c4e8deea8d743cf801b74910243c58acc6eda3"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:575d8912dca808edd9acd6f7795199332696d3469665ef26163cd090fa1f8bfa"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c3c4ed2ff6760e98d262e0cc9c9a7f7b8a9f61aa4d47c58835cdaf7b0b8811bb"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e621b0246192d3b9cb1dc62c78cfa4c6f6d2ddc0ec207d43c0dedecb914f152a"}, - {file = "Pillow-9.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:8f127e7b028900421cad64f51f75c051b628db17fb00e099eb148761eed598c9"}, - {file = "Pillow-9.4.0.tar.gz", hash = "sha256:a1c2d7780448eb93fbcc3789bf3916aa5720d942e37945f4056680317f1cd23e"}, + {file = 
"Pillow-9.5.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:ace6ca218308447b9077c14ea4ef381ba0b67ee78d64046b3f19cf4e1139ad16"}, + {file = "Pillow-9.5.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d3d403753c9d5adc04d4694d35cf0391f0f3d57c8e0030aac09d7678fa8030aa"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5ba1b81ee69573fe7124881762bb4cd2e4b6ed9dd28c9c60a632902fe8db8b38"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fe7e1c262d3392afcf5071df9afa574544f28eac825284596ac6db56e6d11062"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f36397bf3f7d7c6a3abdea815ecf6fd14e7fcd4418ab24bae01008d8d8ca15e"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:252a03f1bdddce077eff2354c3861bf437c892fb1832f75ce813ee94347aa9b5"}, + {file = "Pillow-9.5.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:85ec677246533e27770b0de5cf0f9d6e4ec0c212a1f89dfc941b64b21226009d"}, + {file = "Pillow-9.5.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:b416f03d37d27290cb93597335a2f85ed446731200705b22bb927405320de903"}, + {file = "Pillow-9.5.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:1781a624c229cb35a2ac31cc4a77e28cafc8900733a864870c49bfeedacd106a"}, + {file = "Pillow-9.5.0-cp310-cp310-win32.whl", hash = "sha256:8507eda3cd0608a1f94f58c64817e83ec12fa93a9436938b191b80d9e4c0fc44"}, + {file = "Pillow-9.5.0-cp310-cp310-win_amd64.whl", hash = "sha256:d3c6b54e304c60c4181da1c9dadf83e4a54fd266a99c70ba646a9baa626819eb"}, + {file = "Pillow-9.5.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:7ec6f6ce99dab90b52da21cf0dc519e21095e332ff3b399a357c187b1a5eee32"}, + {file = "Pillow-9.5.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:560737e70cb9c6255d6dcba3de6578a9e2ec4b573659943a5e7e4af13f298f5c"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:96e88745a55b88a7c64fa49bceff363a1a27d9a64e04019c2281049444a571e3"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d9c206c29b46cfd343ea7cdfe1232443072bbb270d6a46f59c259460db76779a"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cfcc2c53c06f2ccb8976fb5c71d448bdd0a07d26d8e07e321c103416444c7ad1"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:a0f9bb6c80e6efcde93ffc51256d5cfb2155ff8f78292f074f60f9e70b942d99"}, + {file = "Pillow-9.5.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8d935f924bbab8f0a9a28404422da8af4904e36d5c33fc6f677e4c4485515625"}, + {file = "Pillow-9.5.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fed1e1cf6a42577953abbe8e6cf2fe2f566daebde7c34724ec8803c4c0cda579"}, + {file = "Pillow-9.5.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c1170d6b195555644f0616fd6ed929dfcf6333b8675fcca044ae5ab110ded296"}, + {file = "Pillow-9.5.0-cp311-cp311-win32.whl", hash = "sha256:54f7102ad31a3de5666827526e248c3530b3a33539dbda27c6843d19d72644ec"}, + {file = "Pillow-9.5.0-cp311-cp311-win_amd64.whl", hash = "sha256:cfa4561277f677ecf651e2b22dc43e8f5368b74a25a8f7d1d4a3a243e573f2d4"}, + {file = "Pillow-9.5.0-cp311-cp311-win_arm64.whl", hash = "sha256:965e4a05ef364e7b973dd17fc765f42233415974d773e82144c9bbaaaea5d089"}, + {file = "Pillow-9.5.0-cp312-cp312-win32.whl", hash = "sha256:22baf0c3cf0c7f26e82d6e1adf118027afb325e703922c8dfc1d5d0156bb2eeb"}, + {file = 
"Pillow-9.5.0-cp312-cp312-win_amd64.whl", hash = "sha256:432b975c009cf649420615388561c0ce7cc31ce9b2e374db659ee4f7d57a1f8b"}, + {file = "Pillow-9.5.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:5d4ebf8e1db4441a55c509c4baa7a0587a0210f7cd25fcfe74dbbce7a4bd1906"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:375f6e5ee9620a271acb6820b3d1e94ffa8e741c0601db4c0c4d3cb0a9c224bf"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:99eb6cafb6ba90e436684e08dad8be1637efb71c4f2180ee6b8f940739406e78"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2dfaaf10b6172697b9bceb9a3bd7b951819d1ca339a5ef294d1f1ac6d7f63270"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:763782b2e03e45e2c77d7779875f4432e25121ef002a41829d8868700d119392"}, + {file = "Pillow-9.5.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:35f6e77122a0c0762268216315bf239cf52b88865bba522999dc38f1c52b9b47"}, + {file = "Pillow-9.5.0-cp37-cp37m-win32.whl", hash = "sha256:aca1c196f407ec7cf04dcbb15d19a43c507a81f7ffc45b690899d6a76ac9fda7"}, + {file = "Pillow-9.5.0-cp37-cp37m-win_amd64.whl", hash = "sha256:322724c0032af6692456cd6ed554bb85f8149214d97398bb80613b04e33769f6"}, + {file = "Pillow-9.5.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:a0aa9417994d91301056f3d0038af1199eb7adc86e646a36b9e050b06f526597"}, + {file = "Pillow-9.5.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:f8286396b351785801a976b1e85ea88e937712ee2c3ac653710a4a57a8da5d9c"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c830a02caeb789633863b466b9de10c015bded434deb3ec87c768e53752ad22a"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:fbd359831c1657d69bb81f0db962905ee05e5e9451913b18b831febfe0519082"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f8fc330c3370a81bbf3f88557097d1ea26cd8b019d6433aa59f71195f5ddebbf"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:7002d0797a3e4193c7cdee3198d7c14f92c0836d6b4a3f3046a64bd1ce8df2bf"}, + {file = "Pillow-9.5.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:229e2c79c00e85989a34b5981a2b67aa079fd08c903f0aaead522a1d68d79e51"}, + {file = "Pillow-9.5.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:9adf58f5d64e474bed00d69bcd86ec4bcaa4123bfa70a65ce72e424bfb88ed96"}, + {file = "Pillow-9.5.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:662da1f3f89a302cc22faa9f14a262c2e3951f9dbc9617609a47521c69dd9f8f"}, + {file = "Pillow-9.5.0-cp38-cp38-win32.whl", hash = "sha256:6608ff3bf781eee0cd14d0901a2b9cc3d3834516532e3bd673a0a204dc8615fc"}, + {file = "Pillow-9.5.0-cp38-cp38-win_amd64.whl", hash = "sha256:e49eb4e95ff6fd7c0c402508894b1ef0e01b99a44320ba7d8ecbabefddcc5569"}, + {file = "Pillow-9.5.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:482877592e927fd263028c105b36272398e3e1be3269efda09f6ba21fd83ec66"}, + {file = "Pillow-9.5.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3ded42b9ad70e5f1754fb7c2e2d6465a9c842e41d178f262e08b8c85ed8a1d8e"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c446d2245ba29820d405315083d55299a796695d747efceb5717a8b450324115"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:8aca1152d93dcc27dc55395604dcfc55bed5f25ef4c98716a928bacba90d33a3"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:608488bdcbdb4ba7837461442b90ea6f3079397ddc968c31265c1e056964f1ef"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:60037a8db8750e474af7ffc9faa9b5859e6c6d0a50e55c45576bf28be7419705"}, + {file = "Pillow-9.5.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:07999f5834bdc404c442146942a2ecadd1cb6292f5229f4ed3b31e0a108746b1"}, + {file = "Pillow-9.5.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:a127ae76092974abfbfa38ca2d12cbeddcdeac0fb71f9627cc1135bedaf9d51a"}, + {file = "Pillow-9.5.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:489f8389261e5ed43ac8ff7b453162af39c3e8abd730af8363587ba64bb2e865"}, + {file = "Pillow-9.5.0-cp39-cp39-win32.whl", hash = "sha256:9b1af95c3a967bf1da94f253e56b6286b50af23392a886720f563c547e48e964"}, + {file = "Pillow-9.5.0-cp39-cp39-win_amd64.whl", hash = "sha256:77165c4a5e7d5a284f10a6efaa39a0ae8ba839da344f20b111d62cc932fa4e5d"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:833b86a98e0ede388fa29363159c9b1a294b0905b5128baf01db683672f230f5"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aaf305d6d40bd9632198c766fb64f0c1a83ca5b667f16c1e79e1661ab5060140"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0852ddb76d85f127c135b6dd1f0bb88dbb9ee990d2cd9aa9e28526c93e794fba"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:91ec6fe47b5eb5a9968c79ad9ed78c342b1f97a091677ba0e012701add857829"}, + {file = "Pillow-9.5.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:cb841572862f629b99725ebaec3287fc6d275be9b14443ea746c1dd325053cbd"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:c380b27d041209b849ed246b111b7c166ba36d7933ec6e41175fd15ab9eb1572"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7c9af5a3b406a50e313467e3565fc99929717f780164fe6fbb7704edba0cebbe"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5671583eab84af046a397d6d0ba25343c00cd50bce03787948e0fff01d4fd9b1"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:84a6f19ce086c1bf894644b43cd129702f781ba5751ca8572f08aa40ef0ab7b7"}, + {file = "Pillow-9.5.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:1e7723bd90ef94eda669a3c2c19d549874dd5badaeefabefd26053304abe5799"}, + {file = "Pillow-9.5.0.tar.gz", hash = "sha256:bf548479d336726d7a0eceb6e767e179fbde37833ae42794602631a070d630f1"}, ] [package.extras] -docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"] +docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-removed-in", "sphinxext-opengraph"] tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"] [[package]] name = "platformdirs" -version = "2.6.2" +version = "3.5.1" description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"." 
category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "platformdirs-2.6.2-py3-none-any.whl", hash = "sha256:83c8f6d04389165de7c9b6f0c682439697887bca0aa2f1c87ef1826be3584490"}, - {file = "platformdirs-2.6.2.tar.gz", hash = "sha256:e1fea1fe471b9ff8332e229df3cb7de4f53eeea4998d3b6bfff542115e998bd2"}, + {file = "platformdirs-3.5.1-py3-none-any.whl", hash = "sha256:e2378146f1964972c03c085bb5662ae80b2b8c06226c54b2ff4aa9483e8a13a5"}, + {file = "platformdirs-3.5.1.tar.gz", hash = "sha256:412dae91f52a6f84830f39a8078cecd0e866cb72294a5c66808e74d5e88d251f"}, ] [package.extras] -docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"] -test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"] +docs = ["furo (>=2023.3.27)", "proselint (>=0.13)", "sphinx (>=6.2.1)", "sphinx-autodoc-typehints (>=1.23,!=1.23.4)"] +test = ["appdirs (==1.4.4)", "covdefaults (>=2.3)", "pytest (>=7.3.1)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"] [[package]] name = "pluggy" @@ -1987,16 +2103,31 @@ files = [ {file = "ply-3.11.tar.gz", hash = "sha256:00c7c1aaa88358b9c765b6d3000c6eec0ba42abca5351b095321aef446081da3"}, ] +[[package]] +name = "pockets" +version = "0.9.1" +description = "A collection of helpful Python tools!" +category = "dev" +optional = false +python-versions = "*" +files = [ + {file = "pockets-0.9.1-py2.py3-none-any.whl", hash = "sha256:68597934193c08a08eb2bf6a1d85593f627c22f9b065cc727a4f03f669d96d86"}, + {file = "pockets-0.9.1.tar.gz", hash = "sha256:9320f1a3c6f7a9133fe3b571f283bcf3353cd70249025ae8d618e40e9f7e92b3"}, +] + +[package.dependencies] +six = ">=1.5.2" + [[package]] name = "pre-commit" -version = "2.21.0" +version = "3.3.2" description = "A framework for managing and maintaining multi-language pre-commit hooks." 
category = "dev" optional = false -python-versions = ">=3.7" +python-versions = ">=3.8" files = [ - {file = "pre_commit-2.21.0-py2.py3-none-any.whl", hash = "sha256:e2f91727039fc39a92f58a588a25b87f936de6567eed4f0e673e0507edc75bad"}, - {file = "pre_commit-2.21.0.tar.gz", hash = "sha256:31ef31af7e474a8d8995027fefdfcf509b5c913ff31f2015b4ec4beb26a6f658"}, + {file = "pre_commit-3.3.2-py2.py3-none-any.whl", hash = "sha256:8056bc52181efadf4aac792b1f4f255dfd2fb5a350ded7335d251a68561e8cb6"}, + {file = "pre_commit-3.3.2.tar.gz", hash = "sha256:66e37bec2d882de1f17f88075047ef8962581f83c234ac08da21a0c58953d1f0"}, ] [package.dependencies] @@ -2008,38 +2139,37 @@ virtualenv = ">=20.10.0" [[package]] name = "prefixed" -version = "0.6.0" +version = "0.7.0" description = "Prefixed alternative numeric library" category = "main" optional = false python-versions = "*" files = [ - {file = "prefixed-0.6.0-py2.py3-none-any.whl", hash = "sha256:5ab094773dc71df68cc78151c81510b9521dcc6b58a4acb78442b127d4e400fa"}, - {file = "prefixed-0.6.0.tar.gz", hash = "sha256:b39fbfac72618fa1eeb5b3fd9ed1341f10dd90df75499cb4c38a6c3ef47cdd94"}, + {file = "prefixed-0.7.0-py2.py3-none-any.whl", hash = "sha256:537b0e4ff4516c4578f277a41d7104f769d6935ae9cdb0f88fed82ec7b3c0ca5"}, + {file = "prefixed-0.7.0.tar.gz", hash = "sha256:0b54d15e602eb8af4ac31b1db21a37ea95ce5890e0741bb0dd9ded493cefbbe9"}, ] [[package]] name = "protobuf" -version = "4.21.12" +version = "4.23.2" description = "" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "protobuf-4.21.12-cp310-abi3-win32.whl", hash = "sha256:b135410244ebe777db80298297a97fbb4c862c881b4403b71bac9d4107d61fd1"}, - {file = "protobuf-4.21.12-cp310-abi3-win_amd64.whl", hash = "sha256:89f9149e4a0169cddfc44c74f230d7743002e3aa0b9472d8c28f0388102fc4c2"}, - {file = "protobuf-4.21.12-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:299ea899484ee6f44604deb71f424234f654606b983cb496ea2a53e3c63ab791"}, - {file = "protobuf-4.21.12-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:d1736130bce8cf131ac7957fa26880ca19227d4ad68b4888b3be0dea1f95df97"}, - {file = "protobuf-4.21.12-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:78a28c9fa223998472886c77042e9b9afb6fe4242bd2a2a5aced88e3f4422aa7"}, - {file = "protobuf-4.21.12-cp37-cp37m-win32.whl", hash = "sha256:3d164928ff0727d97022957c2b849250ca0e64777ee31efd7d6de2e07c494717"}, - {file = "protobuf-4.21.12-cp37-cp37m-win_amd64.whl", hash = "sha256:f45460f9ee70a0ec1b6694c6e4e348ad2019275680bd68a1d9314b8c7e01e574"}, - {file = "protobuf-4.21.12-cp38-cp38-win32.whl", hash = "sha256:6ab80df09e3208f742c98443b6166bcb70d65f52cfeb67357d52032ea1ae9bec"}, - {file = "protobuf-4.21.12-cp38-cp38-win_amd64.whl", hash = "sha256:1f22ac0ca65bb70a876060d96d914dae09ac98d114294f77584b0d2644fa9c30"}, - {file = "protobuf-4.21.12-cp39-cp39-win32.whl", hash = "sha256:27f4d15021da6d2b706ddc3860fac0a5ddaba34ab679dc182b60a8bb4e1121cc"}, - {file = "protobuf-4.21.12-cp39-cp39-win_amd64.whl", hash = "sha256:237216c3326d46808a9f7c26fd1bd4b20015fb6867dc5d263a493ef9a539293b"}, - {file = "protobuf-4.21.12-py2.py3-none-any.whl", hash = "sha256:a53fd3f03e578553623272dc46ac2f189de23862e68565e83dde203d41b76fc5"}, - {file = "protobuf-4.21.12-py3-none-any.whl", hash = "sha256:b98d0148f84e3a3c569e19f52103ca1feacdac0d2df8d6533cf983d1fda28462"}, - {file = "protobuf-4.21.12.tar.gz", hash = "sha256:7cd532c4566d0e6feafecc1059d04c7915aec8e182d1cf7adee8b24ef1e2e6ab"}, + {file = "protobuf-4.23.2-cp310-abi3-win32.whl", hash = 
"sha256:384dd44cb4c43f2ccddd3645389a23ae61aeb8cfa15ca3a0f60e7c3ea09b28b3"}, + {file = "protobuf-4.23.2-cp310-abi3-win_amd64.whl", hash = "sha256:09310bce43353b46d73ba7e3bca78273b9bc50349509b9698e64d288c6372c2a"}, + {file = "protobuf-4.23.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:b2cfab63a230b39ae603834718db74ac11e52bccaaf19bf20f5cce1a84cf76df"}, + {file = "protobuf-4.23.2-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:c52cfcbfba8eb791255edd675c1fe6056f723bf832fa67f0442218f8817c076e"}, + {file = "protobuf-4.23.2-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:86df87016d290143c7ce3be3ad52d055714ebaebb57cc659c387e76cfacd81aa"}, + {file = "protobuf-4.23.2-cp37-cp37m-win32.whl", hash = "sha256:281342ea5eb631c86697e1e048cb7e73b8a4e85f3299a128c116f05f5c668f8f"}, + {file = "protobuf-4.23.2-cp37-cp37m-win_amd64.whl", hash = "sha256:ce744938406de1e64b91410f473736e815f28c3b71201302612a68bf01517fea"}, + {file = "protobuf-4.23.2-cp38-cp38-win32.whl", hash = "sha256:6c081863c379bb1741be8f8193e893511312b1d7329b4a75445d1ea9955be69e"}, + {file = "protobuf-4.23.2-cp38-cp38-win_amd64.whl", hash = "sha256:25e3370eda26469b58b602e29dff069cfaae8eaa0ef4550039cc5ef8dc004511"}, + {file = "protobuf-4.23.2-cp39-cp39-win32.whl", hash = "sha256:efabbbbac1ab519a514579ba9ec52f006c28ae19d97915951f69fa70da2c9e91"}, + {file = "protobuf-4.23.2-cp39-cp39-win_amd64.whl", hash = "sha256:54a533b971288af3b9926e53850c7eb186886c0c84e61daa8444385a4720297f"}, + {file = "protobuf-4.23.2-py3-none-any.whl", hash = "sha256:8da6070310d634c99c0db7df48f10da495cc283fd9e9234877f0cd182d43ab7f"}, + {file = "protobuf-4.23.2.tar.gz", hash = "sha256:20874e7ca4436f683b64ebdbee2129a5a2c301579a67d1a7dda2cdf62fb7f5f7"}, ] [[package]] @@ -2054,54 +2184,43 @@ files = [ {file = "py-1.11.0.tar.gz", hash = "sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719"}, ] -[[package]] -name = "pyaaf2" -version = "1.4.0" -description = "A python module for reading and writing advanced authoring format files" -category = "main" -optional = false -python-versions = "*" -files = [ - {file = "pyaaf2-1.4.0.tar.gz", hash = "sha256:160d3c26c7cfef7176d0bdb0e55772156570435982c3abfa415e89639f76e71b"}, -] - [[package]] name = "pyasn1" -version = "0.4.8" -description = "ASN.1 types and codecs" +version = "0.5.0" +description = "Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208)" category = "main" optional = false -python-versions = "*" +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7" files = [ - {file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"}, - {file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"}, + {file = "pyasn1-0.5.0-py2.py3-none-any.whl", hash = "sha256:87a2121042a1ac9358cabcaf1d07680ff97ee6404333bacca15f76aa8ad01a57"}, + {file = "pyasn1-0.5.0.tar.gz", hash = "sha256:97b7290ca68e62a832558ec3976f15cbf911bf5d7c7039d8b861c2a0ece69fde"}, ] [[package]] name = "pyasn1-modules" -version = "0.2.8" -description = "A collection of ASN.1-based protocols modules." 
+version = "0.3.0" +description = "A collection of ASN.1-based protocols modules" category = "main" optional = false -python-versions = "*" +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7" files = [ - {file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"}, - {file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"}, + {file = "pyasn1_modules-0.3.0-py2.py3-none-any.whl", hash = "sha256:d3ccd6ed470d9ffbc716be08bd90efbd44d0734bc9303818f7336070984a162d"}, + {file = "pyasn1_modules-0.3.0.tar.gz", hash = "sha256:5bd01446b736eb9d31512a30d46c1ac3395d676c6f3cafa4c03eb54b9925631c"}, ] [package.dependencies] -pyasn1 = ">=0.4.6,<0.5.0" +pyasn1 = ">=0.4.6,<0.6.0" [[package]] name = "pyblish-base" -version = "1.8.8" +version = "1.8.11" description = "Plug-in driven automation framework for content" category = "main" optional = false python-versions = "*" files = [ - {file = "pyblish-base-1.8.8.tar.gz", hash = "sha256:85a2c034dbb86345bf95018f5b7b3c36c7dda29ea4d93c10d167f147b69a7b22"}, - {file = "pyblish_base-1.8.8-py2.py3-none-any.whl", hash = "sha256:67ea253a05d007ab4a175e44e778928ea7bdb0e9707573e1100417bbf0451a53"}, + {file = "pyblish-base-1.8.11.tar.gz", hash = "sha256:86dfeec0567430eb7eb25f89a18312054147a729ec66f6ac8c7e421fd15b66e1"}, + {file = "pyblish_base-1.8.11-py2.py3-none-any.whl", hash = "sha256:c321be7020c946fe9dfa11941241bd985a572c5009198b4f9810e5afad1f0b4b"}, ] [[package]] @@ -2130,20 +2249,21 @@ files = [ [[package]] name = "pydocstyle" -version = "3.0.0" +version = "6.3.0" description = "Python docstring style checker" category = "dev" optional = false -python-versions = "*" +python-versions = ">=3.6" files = [ - {file = "pydocstyle-3.0.0-py2-none-any.whl", hash = "sha256:2258f9b0df68b97bf3a6c29003edc5238ff8879f1efb6f1999988d934e432bd8"}, - {file = "pydocstyle-3.0.0-py3-none-any.whl", hash = "sha256:ed79d4ec5e92655eccc21eb0c6cf512e69512b4a97d215ace46d17e4990f2039"}, - {file = "pydocstyle-3.0.0.tar.gz", hash = "sha256:5741c85e408f9e0ddf873611085e819b809fca90b619f5fd7f34bd4959da3dd4"}, + {file = "pydocstyle-6.3.0-py3-none-any.whl", hash = "sha256:118762d452a49d6b05e194ef344a55822987a462831ade91ec5c06fd2169d019"}, + {file = "pydocstyle-6.3.0.tar.gz", hash = "sha256:7ce43f0c0ac87b07494eb9c0b462c0b73e6ff276807f204d6b53edc72b7e44e1"}, ] [package.dependencies] -six = "*" -snowballstemmer = "*" +snowballstemmer = ">=2.2.0" + +[package.extras] +toml = ["tomli (>=1.2.3)"] [[package]] name = "pyflakes" @@ -2159,14 +2279,14 @@ files = [ [[package]] name = "pygments" -version = "2.14.0" +version = "2.15.1" description = "Pygments is a syntax highlighting package written in Python." 
category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "Pygments-2.14.0-py3-none-any.whl", hash = "sha256:fa7bd7bd2771287c0de303af8bfdfc731f51bd2c6a47ab69d117138893b82717"}, - {file = "Pygments-2.14.0.tar.gz", hash = "sha256:b3ed06a9e8ac9a9aae5a6f5dbe78a8a58655d17b43b93c078f094ddc476ae297"}, + {file = "Pygments-2.15.1-py3-none-any.whl", hash = "sha256:db2db3deb4b4179f399a09054b023b6a586b76499d36965813c71aa8ed7b5fd1"}, + {file = "Pygments-2.15.1.tar.gz", hash = "sha256:8ace4d3c1dd481894b2005f560ead0f9f19ee64fe983366be1a21e171d12775c"}, ] [package.extras] @@ -2174,18 +2294,18 @@ plugins = ["importlib-metadata"] [[package]] name = "pylint" -version = "2.15.10" +version = "2.17.4" description = "python code static checker" category = "dev" optional = false python-versions = ">=3.7.2" files = [ - {file = "pylint-2.15.10-py3-none-any.whl", hash = "sha256:9df0d07e8948a1c3ffa3b6e2d7e6e63d9fb457c5da5b961ed63106594780cc7e"}, - {file = "pylint-2.15.10.tar.gz", hash = "sha256:b3dc5ef7d33858f297ac0d06cc73862f01e4f2e74025ec3eff347ce0bc60baf5"}, + {file = "pylint-2.17.4-py3-none-any.whl", hash = "sha256:7a1145fb08c251bdb5cca11739722ce64a63db479283d10ce718b2460e54123c"}, + {file = "pylint-2.17.4.tar.gz", hash = "sha256:5dcf1d9e19f41f38e4e85d10f511e5b9c35e1aa74251bf95cdd8cb23584e2db1"}, ] [package.dependencies] -astroid = ">=2.12.13,<=2.14.0-dev0" +astroid = ">=2.15.4,<=2.17.0-dev0" colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""} dill = {version = ">=0.2", markers = "python_version < \"3.11\""} isort = ">=4.2.5,<6" @@ -2352,7 +2472,7 @@ files = [ cffi = ">=1.4.1" [package.extras] -docs = ["sphinx (>=1.6.5)", "sphinx_rtd_theme"] +docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"] tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"] [[package]] @@ -2376,83 +2496,83 @@ six = "*" [[package]] name = "pyobjc-core" -version = "9.0.1" +version = "9.1.1" description = "Python<->ObjC Interoperability Module" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "pyobjc-core-9.0.1.tar.gz", hash = "sha256:5ce1510bb0bdff527c597079a42b2e13a19b7592e76850be7960a2775b59c929"}, - {file = "pyobjc_core-9.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:b614406d46175b1438a9596b664bf61952323116704d19bc1dea68052a0aad98"}, - {file = "pyobjc_core-9.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:bd397e729f6271c694fb70df8f5d3d3c9b2f2b8ac02fbbdd1757ca96027b94bb"}, - {file = "pyobjc_core-9.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:d919934eaa6d1cf1505ff447a5c2312be4c5651efcb694eb9f59e86f5bd25e6b"}, - {file = "pyobjc_core-9.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:67d67ca8b164f38ceacce28a18025845c3ec69613f3301935d4d2c4ceb22e3fd"}, - {file = "pyobjc_core-9.0.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:39d11d71f6161ac0bd93cffc8ea210bb0178b56d16a7408bf74283d6ecfa7430"}, - {file = "pyobjc_core-9.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:25be1c4d530e473ed98b15063b8d6844f0733c98914de6f09fe1f7652b772bbc"}, + {file = "pyobjc-core-9.1.1.tar.gz", hash = "sha256:4b6cb9053b5fcd3c0e76b8c8105a8110786b20f3403c5643a688c5ec51c55c6b"}, + {file = "pyobjc_core-9.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4bd07049fd9fe5b40e4b7c468af9cf942508387faf383a5acb043d20627bad2c"}, + {file = "pyobjc_core-9.1.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:1a8307527621729ff2ab67860e7ed84f76ad0da881b248c2ef31e0da0088e4ba"}, + {file = 
"pyobjc_core-9.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:083004d28b92ccb483a41195c600728854843b0486566aba2d6e63eef51f80e6"}, + {file = "pyobjc_core-9.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:d61e9517d451bc062a7fae8b3648f4deba4fa54a24926fa1cf581b90ef4ced5a"}, + {file = "pyobjc_core-9.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:1626909916603a3b04c07c721cf1af0e0b892cec85bb3db98d05ba024f1786fc"}, + {file = "pyobjc_core-9.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:2dde96462b52e952515d142e2afbb6913624a02c13582047e06211e6c3993728"}, ] [[package]] name = "pyobjc-framework-applicationservices" -version = "9.0.1" +version = "9.1.1" description = "Wrappers for the framework ApplicationServices on macOS" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "pyobjc-framework-ApplicationServices-9.0.1.tar.gz", hash = "sha256:e3a350781fdcab6c1da4343dfc54ae3c0523e59e61147432f61dcfb365752fde"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:c4214febf3cc2e417ae15d45b6502e5c20f1097cd042b025760d019fe69b07b6"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:c62693e01ba272fbadcd66677881311d2d63fda84b9662533fcc883c54be76d7"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6829df4dc4cf012bdc221d4e0296d6699b33ca89741569df153989a0c18aa40e"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:5af5d12871499c429dd68c5ec4be56c631ec8439aa953c266eed9afdffb5ec2b"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:724da9dfae6ab0505b90340231a685720288caecfcca335b08903102e97a93dc"}, - {file = "pyobjc_framework_ApplicationServices-9.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:8e1dbfc8f482c433ce642724d4bed0c527c7f2f2f8b9ba1ac3f778a68cf1538d"}, + {file = "pyobjc-framework-ApplicationServices-9.1.1.tar.gz", hash = "sha256:50c613bee364150bbd6cd992ca32b0848a780922cb57d112f6a4a56e29802e19"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f9286c05d80a6aafc7388a4c2a35801db9ea6bab960acf2df079110debb659cb"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:3db1c79d7420052320529432e8562cd339a7ef0841df83a85bbf3648abb55b6b"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:baf5a0d72c9e2d2a3b402823a2ea53eccdc27b8b9319d61cee7d753a30cb9411"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7cc5aad93bb6b178f838fe9b78cdcf1217c7baab157b1f3525e0acf696cc3490"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:a03434605873b9f83255a0b16bbc539d06afd77f5969a3b11a1fc293dfd56680"}, + {file = "pyobjc_framework_ApplicationServices-9.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:9f18e9b92674be0e503a2bd451a328693450e6f80ee510bbc375238b14117e24"}, ] [package.dependencies] -pyobjc-core = ">=9.0.1" -pyobjc-framework-Cocoa = ">=9.0.1" -pyobjc-framework-Quartz = ">=9.0.1" +pyobjc-core = ">=9.1.1" +pyobjc-framework-Cocoa = ">=9.1.1" +pyobjc-framework-Quartz = ">=9.1.1" [[package]] name = "pyobjc-framework-cocoa" -version = "9.0.1" +version = "9.1.1" description = "Wrappers 
for the Cocoa frameworks on macOS" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "pyobjc-framework-Cocoa-9.0.1.tar.gz", hash = "sha256:a8b53b3426f94307a58e2f8214dc1094c19afa9dcb96f21be12f937d968b2df3"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:5f94b0f92a62b781e633e58f09bcaded63d612f9b1e15202f5f372ea59e4aebd"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f062c3bb5cc89902e6d164aa9a66ffc03638645dd5f0468b6f525ac997c86e51"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0b374c0a9d32ba4fc5610ab2741cb05a005f1dfb82a47dbf2dbb2b3a34b73ce5"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8928080cebbce91ac139e460d3dfc94c7cb6935be032dcae9c0a51b247f9c2d9"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:9d2bd86a0a98d906f762f5dc59f2fc67cce32ae9633b02ff59ac8c8a33dd862d"}, - {file = "pyobjc_framework_Cocoa-9.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:2a41053cbcee30e1e8914efa749c50b70bf782527d5938f2bc2a6393740969ce"}, + {file = "pyobjc-framework-Cocoa-9.1.1.tar.gz", hash = "sha256:345c32b6d1f3db45f635e400f2d0d6c0f0f7349d45ec823f76fc1df43d13caeb"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:9176a4276f3b4b4758e9b9ca10698be5341ceffaeaa4fa055133417179e6bc37"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:5e1e96fb3461f46ff951413515f2029e21be268b0e033db6abee7b64ec8e93d3"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:083b195c496d30c6b9dd86126a6093c4b95e0138e9b052b13e54103fcc0b4872"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a1b3333b1aa045608848bd68bbab4c31171f36aeeaa2fabeb4527c6f6f1e33cd"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:54c017354671f0d955432986c42218e452ca69906a101c8e7acde8510432303a"}, + {file = "pyobjc_framework_Cocoa-9.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:10c0075688ce95b92caf59e368585fffdcc98c919bc345067af070222f5d01d2"}, ] [package.dependencies] -pyobjc-core = ">=9.0.1" +pyobjc-core = ">=9.1.1" [[package]] name = "pyobjc-framework-quartz" -version = "9.0.1" +version = "9.1.1" description = "Wrappers for the Quartz frameworks on macOS" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "pyobjc-framework-Quartz-9.0.1.tar.gz", hash = "sha256:7e2e37fc5c01bbdc37c1355d886e6184d1977043d5a05d1d956573fa8503dac3"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:13a546a2af7c1c5c2bbf88cce6891896a449e92466415ad14d9a5ee93fba6ef3"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:93ee6e339ab6928115a92188a0162ec80bf62cd0bd908d54695c1b9f9381ea45"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:066ffbe26de1456f79a6d9467dabd6a3b9ef228318a0ba3f3fedbdbc0e2d3444"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0c9b553be6ef672e0886b0d2c77d1841b1a942c7b1dc9a67f6e1376dc5493513"}, - {file = "pyobjc_framework_Quartz-9.0.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:7b39f85d0b747b0a13a11d0d538001b757c82d05e656eab437167b5b118307df"}, - {file = 
"pyobjc_framework_Quartz-9.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:0bedb6e1b7789d5b24fd5c790f0d53e4c62930313c97a891068bfa0e966ccc0b"}, + {file = "pyobjc-framework-Quartz-9.1.1.tar.gz", hash = "sha256:8d03bc52bd6d90f00f274fd709b82e53dc5dfca19f3fc744997634e03faaa159"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:32602f46353a5eadb0843a0940635c8ec103f47d5b1ce84284604e01c6393fa8"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:7b3a56f52f9bb7fbd45c5a5f0de312ee9c104dfce6e1731015048d9e65a95e43"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b3138773dfb269e6e3894e20dcfaf90102bad84ba44aa2bba8683b8426a69cdd"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:3b583e6953e9c65525db908c33c1c97cead3ac8aa0cf2759fcc568666a1b7373"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp38-cp38-macosx_11_0_universal2.whl", hash = "sha256:c3efcbba62e9c5351c2a9469faabb7f400f214cd8cf98f57798d6b6c93c76efb"}, + {file = "pyobjc_framework_Quartz-9.1.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:a82d43c6c5fe0f5d350cfc97212bef7c572e345aa9c6e23909d23dace6448c99"}, ] [package.dependencies] -pyobjc-core = ">=9.0.1" -pyobjc-framework-Cocoa = ">=9.0.1" +pyobjc-core = ">=9.1.1" +pyobjc-framework-Cocoa = ">=9.1.1" [[package]] name = "pyparsing" @@ -2507,14 +2627,14 @@ testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xm [[package]] name = "pytest-cov" -version = "4.0.0" +version = "4.1.0" description = "Pytest plugin for measuring coverage." category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "pytest-cov-4.0.0.tar.gz", hash = "sha256:996b79efde6433cdbd0088872dbc5fb3ed7fe1578b68cdbba634f14bb8dd0470"}, - {file = "pytest_cov-4.0.0-py3-none-any.whl", hash = "sha256:2feb1b751d66a8bd934e5edfa2e961d11309dc37b73b0eabe73b5945fee20f6b"}, + {file = "pytest-cov-4.1.0.tar.gz", hash = "sha256:3904b13dfbfec47f003b8e77fd5b589cd11904a21ddf1ab38a64f204d6a10ef6"}, + {file = "pytest_cov-4.1.0-py3-none-any.whl", hash = "sha256:6ba70b9e97e69fcc3fb45bfeab2d0a138fb65c4d0d6a41ef33983ad114be8c3a"}, ] [package.dependencies] @@ -2559,43 +2679,40 @@ six = ">=1.5" [[package]] name = "python-engineio" -version = "3.14.2" -description = "Engine.IO server" +version = "4.4.1" +description = "Engine.IO server and client for Python" category = "main" optional = false -python-versions = "*" +python-versions = ">=3.6" files = [ - {file = "python-engineio-3.14.2.tar.gz", hash = "sha256:eab4553f2804c1ce97054c8b22cf0d5a9ab23128075248b97e1a5b2f29553085"}, - {file = "python_engineio-3.14.2-py2.py3-none-any.whl", hash = "sha256:5a9e6086d192463b04a1428ff1f85b6ba631bbb19d453b144ffc04f530542b84"}, + {file = "python-engineio-4.4.1.tar.gz", hash = "sha256:eb3663ecb300195926b526386f712dff84cd092c818fb7b62eeeda9160120c29"}, + {file = "python_engineio-4.4.1-py3-none-any.whl", hash = "sha256:28ab67f94cba2e5f598cbb04428138fd6bb8b06d3478c939412da445f24f0773"}, ] -[package.dependencies] -six = ">=1.9.0" - [package.extras] asyncio-client = ["aiohttp (>=3.4)"] client = ["requests (>=2.21.0)", "websocket-client (>=0.54.0)"] [[package]] name = "python-socketio" -version = "4.6.1" -description = "Socket.IO server" +version = "5.8.0" +description = "Socket.IO server and client for Python" category = "main" optional = false -python-versions = "*" +python-versions = ">=3.6" files = [ - {file = 
"python-socketio-4.6.1.tar.gz", hash = "sha256:cd1f5aa492c1eb2be77838e837a495f117e17f686029ebc03d62c09e33f4fa10"}, - {file = "python_socketio-4.6.1-py2.py3-none-any.whl", hash = "sha256:5a21da53fdbdc6bb6c8071f40e13d100e0b279ad997681c2492478e06f370523"}, + {file = "python-socketio-5.8.0.tar.gz", hash = "sha256:e714f4dddfaaa0cb0e37a1e2deef2bb60590a5b9fea9c343dd8ca5e688416fd9"}, + {file = "python_socketio-5.8.0-py3-none-any.whl", hash = "sha256:7adb8867aac1c2929b9c1429f1c02e12ca4c36b67c807967393e367dfbb01441"}, ] [package.dependencies] -python-engineio = ">=3.13.0,<4" +bidict = ">=0.21.0" +python-engineio = ">=4.3.0" requests = {version = ">=2.21.0", optional = true, markers = "extra == \"client\""} -six = ">=1.9.0" websocket-client = {version = ">=0.54.0", optional = true, markers = "extra == \"client\""} [package.extras] -asyncio-client = ["aiohttp (>=3.4)", "websockets (>=7.0)"] +asyncio-client = ["aiohttp (>=3.4)"] client = ["requests (>=2.21.0)", "websocket-client (>=0.54.0)"] [[package]] @@ -2624,18 +2741,6 @@ files = [ {file = "python3-xlib-0.15.tar.gz", hash = "sha256:dc4245f3ae4aa5949c1d112ee4723901ade37a96721ba9645f2bfa56e5b383f8"}, ] -[[package]] -name = "pytz" -version = "2022.7.1" -description = "World timezone definitions, modern and historical" -category = "dev" -optional = false -python-versions = "*" -files = [ - {file = "pytz-2022.7.1-py2.py3-none-any.whl", hash = "sha256:78f4f37d8198e0627c5f1143240bb0206b8691d8d7ac6d78fee88b78733f8c4a"}, - {file = "pytz-2022.7.1.tar.gz", hash = "sha256:01a0681c4b9684a28304615eba55d1ab31ae00bf68ec157ec3708a8182dbbcd0"}, -] - [[package]] name = "pywin32" version = "301" @@ -2720,16 +2825,19 @@ files = [ [[package]] name = "qt-py" -version = "1.3.7" +version = "1.3.8" description = "Python 2 & 3 compatibility wrapper around all Qt bindings - PySide, PySide2, PyQt4 and PyQt5." category = "main" optional = false python-versions = "*" files = [ - {file = "Qt.py-1.3.7-py2.py3-none-any.whl", hash = "sha256:150099d1c6f64c9621a2c9d79d45102ec781c30ee30ee69fc082c6e9be7324fe"}, - {file = "Qt.py-1.3.7.tar.gz", hash = "sha256:803c7bdf4d6230f9a466be19d55934a173eabb61406d21cb91e80c2a3f773b1f"}, + {file = "Qt.py-1.3.8-py2.py3-none-any.whl", hash = "sha256:665b9d4cfefaff2d697876d5027e145a0e0b1ba62dda9652ea114db134bc9911"}, + {file = "Qt.py-1.3.8.tar.gz", hash = "sha256:6d330928f7ec8db8e329b19116c52482b6abfaccfa5edef0248e57d012300895"}, ] +[package.dependencies] +types-PySide2 = "*" + [[package]] name = "qtawesome" version = "0.7.3" @@ -2748,14 +2856,14 @@ six = "*" [[package]] name = "qtpy" -version = "2.3.0" +version = "2.3.1" description = "Provides an abstraction layer on top of the various Qt bindings (PyQt5/6 and PySide2/6)." category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "QtPy-2.3.0-py3-none-any.whl", hash = "sha256:8d6d544fc20facd27360ea189592e6135c614785f0dec0b4f083289de6beb408"}, - {file = "QtPy-2.3.0.tar.gz", hash = "sha256:0603c9c83ccc035a4717a12908bf6bc6cb22509827ea2ec0e94c2da7c9ed57c5"}, + {file = "QtPy-2.3.1-py3-none-any.whl", hash = "sha256:5193d20e0b16e4d9d3bc2c642d04d9f4e2c892590bd1b9c92bfe38a95d5a2e12"}, + {file = "QtPy-2.3.1.tar.gz", hash = "sha256:a8c74982d6d172ce124d80cafd39653df78989683f760f2281ba91a6e7b9de8b"}, ] [package.dependencies] @@ -2783,26 +2891,48 @@ sphinx = ">=1.3.1" [[package]] name = "requests" -version = "2.28.1" +version = "2.31.0" description = "Python HTTP for Humans." 
category = "main" optional = false -python-versions = ">=3.7, <4" +python-versions = ">=3.7" files = [ - {file = "requests-2.28.1-py3-none-any.whl", hash = "sha256:8fefa2a1a1365bf5520aac41836fbee479da67864514bdb821f31ce07ce65349"}, - {file = "requests-2.28.1.tar.gz", hash = "sha256:7c5599b102feddaa661c826c56ab4fee28bfd17f5abca1ebbe3e7f19d7c97983"}, + {file = "requests-2.31.0-py3-none-any.whl", hash = "sha256:58cd2187c01e70e6e26505bca751777aa9f2ee0b7f4300988b709f44e013003f"}, + {file = "requests-2.31.0.tar.gz", hash = "sha256:942c5a758f98d790eaed1a29cb6eefc7ffb0d1cf7af05c3d2791656dbd6ad1e1"}, ] [package.dependencies] certifi = ">=2017.4.17" -charset-normalizer = ">=2,<3" +charset-normalizer = ">=2,<4" idna = ">=2.5,<4" -urllib3 = ">=1.21.1,<1.27" +urllib3 = ">=1.21.1,<3" [package.extras] socks = ["PySocks (>=1.5.6,!=1.5.7)"] use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] +[[package]] +name = "revitron-sphinx-theme" +version = "0.7.2" +description = "Revitron theme for Sphinx" +category = "dev" +optional = false +python-versions = "*" +files = [] +develop = false + +[package.dependencies] +sphinx = "*" + +[package.extras] +dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"] + +[package.source] +type = "git" +url = "https://github.com/revitron/revitron-sphinx-theme.git" +reference = "master" +resolved_reference = "c0779c66365d9d258d93575ebaff7db9d3aee282" + [[package]] name = "rsa" version = "4.9" @@ -2893,19 +3023,19 @@ files = [ [[package]] name = "slack-sdk" -version = "3.19.5" +version = "3.21.3" description = "The Slack API Platform SDK for Python" category = "main" optional = false python-versions = ">=3.6.0" files = [ - {file = "slack_sdk-3.19.5-py2.py3-none-any.whl", hash = "sha256:0b52bb32a87c71f638b9eb47e228dffeebf89de5e762684ef848276f9f186c84"}, - {file = "slack_sdk-3.19.5.tar.gz", hash = "sha256:47fb4af596243fe6585a92f3034de21eb2104a55cc9fd59a92ef3af17cf9ddd8"}, + {file = "slack_sdk-3.21.3-py2.py3-none-any.whl", hash = "sha256:de3c07b92479940b61cd68c566f49fbc9974c8f38f661d26244078f3903bb9cc"}, + {file = "slack_sdk-3.21.3.tar.gz", hash = "sha256:20829bdc1a423ec93dac903470975ebf3bc76fd3fd91a4dadc0eeffc940ecb0c"}, ] [package.extras] -optional = ["SQLAlchemy (>=1,<2)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=10,<11)"] -testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "codecov (>=2,<3)", "databases (>=0.5)", "flake8 (>=5,<6)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=6.2.5,<7)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"] +optional = ["SQLAlchemy (>=1.4,<3)", "aiodns (>1.0)", "aiohttp (>=3.7.3,<4)", "boto3 (<=2)", "websocket-client (>=1,<2)", "websockets (>=10,<11)"] +testing = ["Flask (>=1,<2)", "Flask-Sockets (>=0.2,<1)", "Jinja2 (==3.0.3)", "Werkzeug (<2)", "black (==22.8.0)", "boto3 (<=2)", "click (==8.0.4)", "databases (>=0.5)", "flake8 (>=5,<6)", "itsdangerous (==1.1.0)", "moto (>=3,<4)", "psutil (>=5,<6)", "pytest (>=6.2.5,<7)", "pytest-asyncio (<1)", "pytest-cov (>=2,<3)"] [[package]] name = "smmap" @@ -2945,27 +3075,27 @@ files = [ [[package]] name = "sphinx" -version = "6.1.3" +version = "5.3.0" description = "Python documentation generator" category = "dev" optional = false -python-versions = ">=3.8" +python-versions = ">=3.6" files = [ - {file = "Sphinx-6.1.3.tar.gz", hash = "sha256:0dac3b698538ffef41716cf97ba26c1c7788dba73ce6f150c1ff5b4720786dd2"}, - {file = 
"sphinx-6.1.3-py3-none-any.whl", hash = "sha256:807d1cb3d6be87eb78a381c3e70ebd8d346b9a25f3753e9947e866b2786865fc"}, + {file = "Sphinx-5.3.0.tar.gz", hash = "sha256:51026de0a9ff9fc13c05d74913ad66047e104f56a129ff73e174eb5c3ee794b5"}, + {file = "sphinx-5.3.0-py3-none-any.whl", hash = "sha256:060ca5c9f7ba57a08a1219e547b269fadf125ae25b06b9fa7f66768efb652d6d"}, ] [package.dependencies] alabaster = ">=0.7,<0.8" babel = ">=2.9" colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""} -docutils = ">=0.18,<0.20" +docutils = ">=0.14,<0.20" imagesize = ">=1.3" importlib-metadata = {version = ">=4.8", markers = "python_version < \"3.10\""} Jinja2 = ">=3.0" packaging = ">=21.0" -Pygments = ">=2.13" -requests = ">=2.25.0" +Pygments = ">=2.12" +requests = ">=2.5.0" snowballstemmer = ">=2.0" sphinxcontrib-applehelp = "*" sphinxcontrib-devhelp = "*" @@ -2976,37 +3106,43 @@ sphinxcontrib-serializinghtml = ">=1.1.5" [package.extras] docs = ["sphinxcontrib-websupport"] -lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-simplify", "isort", "mypy (>=0.990)", "ruff", "sphinx-lint", "types-requests"] -test = ["cython", "html5lib", "pytest (>=4.6)"] +lint = ["docutils-stubs", "flake8 (>=3.5.0)", "flake8-bugbear", "flake8-comprehensions", "flake8-simplify", "isort", "mypy (>=0.981)", "sphinx-lint", "types-requests", "types-typed-ast"] +test = ["cython", "html5lib", "pytest (>=4.6)", "typed_ast"] [[package]] -name = "sphinx-rtd-theme" -version = "0.5.1" -description = "Read the Docs theme for Sphinx" +name = "sphinx-autoapi" +version = "2.1.0" +description = "Sphinx API documentation generator" category = "dev" optional = false -python-versions = "*" +python-versions = ">=3.7" files = [ - {file = "sphinx_rtd_theme-0.5.1-py2.py3-none-any.whl", hash = "sha256:fa6bebd5ab9a73da8e102509a86f3fcc36dec04a0b52ea80e5a033b2aba00113"}, - {file = "sphinx_rtd_theme-0.5.1.tar.gz", hash = "sha256:eda689eda0c7301a80cf122dad28b1861e5605cbf455558f3775e1e8200e83a5"}, + {file = "sphinx-autoapi-2.1.0.tar.gz", hash = "sha256:5b5c58064214d5a846c9c81d23f00990a64654b9bca10213231db54a241bc50f"}, + {file = "sphinx_autoapi-2.1.0-py2.py3-none-any.whl", hash = "sha256:b25c7b2cda379447b8c36b6a0e3bdf76e02fd64f7ca99d41c6cbdf130a01768f"}, ] [package.dependencies] -sphinx = "*" +astroid = ">=2.7" +Jinja2 = "*" +PyYAML = "*" +sphinx = ">=5.2.0" +unidecode = "*" [package.extras] -dev = ["bump2version", "sphinxcontrib-httpdomain", "transifex-client"] +docs = ["sphinx", "sphinx-rtd-theme"] +dotnet = ["sphinxcontrib-dotnetdomain"] +go = ["sphinxcontrib-golangdomain"] [[package]] name = "sphinxcontrib-applehelp" -version = "1.0.3" +version = "1.0.4" description = "sphinxcontrib-applehelp is a Sphinx extension which outputs Apple help books" category = "dev" optional = false python-versions = ">=3.8" files = [ - {file = "sphinxcontrib.applehelp-1.0.3-py3-none-any.whl", hash = "sha256:ba0f2a22e6eeada8da6428d0d520215ee8864253f32facf958cca81e426f661d"}, - {file = "sphinxcontrib.applehelp-1.0.3.tar.gz", hash = "sha256:83749f09f6ac843b8cb685277dbc818a8bf2d76cc19602699094fe9a74db529e"}, + {file = "sphinxcontrib-applehelp-1.0.4.tar.gz", hash = "sha256:828f867945bbe39817c210a1abfd1bc4895c8b73fcaade56d45357a348a07d7e"}, + {file = "sphinxcontrib_applehelp-1.0.4-py3-none-any.whl", hash = "sha256:29d341f67fb0f6f586b23ad80e072c8e6ad0b48417db2bde114a4c9746feb228"}, ] [package.extras] @@ -3031,14 +3167,14 @@ test = ["pytest"] [[package]] name = "sphinxcontrib-htmlhelp" -version = "2.0.0" +version = "2.0.1" description = "sphinxcontrib-htmlhelp is a sphinx 
extension which renders HTML help files" category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.8" files = [ - {file = "sphinxcontrib-htmlhelp-2.0.0.tar.gz", hash = "sha256:f5f8bb2d0d629f398bf47d0d69c07bc13b65f75a81ad9e2f71a63d4b7a2f6db2"}, - {file = "sphinxcontrib_htmlhelp-2.0.0-py2.py3-none-any.whl", hash = "sha256:d412243dfb797ae3ec2b59eca0e52dac12e75a241bf0e4eb861e450d06c6ed07"}, + {file = "sphinxcontrib-htmlhelp-2.0.1.tar.gz", hash = "sha256:0cbdd302815330058422b98a113195c9249825d681e18f11e8b1f78a2f11efff"}, + {file = "sphinxcontrib_htmlhelp-2.0.1-py3-none-any.whl", hash = "sha256:c38cb46dccf316c79de6e5515e1770414b797162b23cd3d06e67020e1d2a6903"}, ] [package.extras] @@ -3060,6 +3196,22 @@ files = [ [package.extras] test = ["flake8", "mypy", "pytest"] +[[package]] +name = "sphinxcontrib-napoleon" +version = "0.7" +description = "Sphinx \"napoleon\" extension." +category = "dev" +optional = false +python-versions = "*" +files = [ + {file = "sphinxcontrib-napoleon-0.7.tar.gz", hash = "sha256:407382beed396e9f2d7f3043fad6afda95719204a1e1a231ac865f40abcbfcf8"}, + {file = "sphinxcontrib_napoleon-0.7-py2.py3-none-any.whl", hash = "sha256:711e41a3974bdf110a484aec4c1a556799eb0b3f3b897521a018ad7e2db13fef"}, +] + +[package.dependencies] +pockets = ">=0.3" +six = ">=1.5.2" + [[package]] name = "sphinxcontrib-qthelp" version = "1.0.3" @@ -3146,26 +3298,64 @@ files = [ [[package]] name = "tomlkit" -version = "0.11.6" +version = "0.11.8" description = "Style preserving TOML library" category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "tomlkit-0.11.6-py3-none-any.whl", hash = "sha256:07de26b0d8cfc18f871aec595fda24d95b08fef89d147caa861939f37230bf4b"}, - {file = "tomlkit-0.11.6.tar.gz", hash = "sha256:71b952e5721688937fb02cf9d354dbcf0785066149d2855e44531ebdd2b65d73"}, + {file = "tomlkit-0.11.8-py3-none-any.whl", hash = "sha256:8c726c4c202bdb148667835f68d68780b9a003a9ec34167b6c673b38eff2a171"}, + {file = "tomlkit-0.11.8.tar.gz", hash = "sha256:9330fc7faa1db67b541b28e62018c17d20be733177d290a13b24c62d1614e0c3"}, +] + +[[package]] +name = "types-pyside2" +version = "5.15.2.1.5" +description = "The most accurate stubs for PySide2" +category = "main" +optional = false +python-versions = "*" +files = [ + {file = "types_PySide2-5.15.2.1.5-py2.py3-none-any.whl", hash = "sha256:4bbee2c8f09961101013d05bb5c506b7351b3020494fc8b5c3b73c95014fa1b0"}, ] [[package]] name = "typing-extensions" -version = "4.4.0" +version = "4.6.2" description = "Backported and Experimental Type Hints for Python 3.7+" category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "typing_extensions-4.4.0-py3-none-any.whl", hash = "sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e"}, - {file = "typing_extensions-4.4.0.tar.gz", hash = "sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa"}, + {file = "typing_extensions-4.6.2-py3-none-any.whl", hash = "sha256:3a8b36f13dd5fdc5d1b16fe317f5668545de77fa0b8e02006381fd49d731ab98"}, + {file = "typing_extensions-4.6.2.tar.gz", hash = "sha256:06006244c70ac8ee83fa8282cb188f697b8db25bc8b4df07be1873c43897060c"}, +] + +[[package]] +name = "uc-micro-py" +version = "1.0.2" +description = "Micro subset of unicode data files for linkify-it-py projects." 
+category = "dev" +optional = false +python-versions = ">=3.7" +files = [ + {file = "uc-micro-py-1.0.2.tar.gz", hash = "sha256:30ae2ac9c49f39ac6dce743bd187fcd2b574b16ca095fa74cd9396795c954c54"}, + {file = "uc_micro_py-1.0.2-py3-none-any.whl", hash = "sha256:8c9110c309db9d9e87302e2f4ad2c3152770930d88ab385cd544e7a7e75f3de0"}, +] + +[package.extras] +test = ["coverage", "pytest", "pytest-cov"] + +[[package]] +name = "unidecode" +version = "1.2.0" +description = "ASCII transliterations of Unicode text" +category = "main" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "Unidecode-1.2.0-py2.py3-none-any.whl", hash = "sha256:12435ef2fc4cdfd9cf1035a1db7e98b6b047fe591892e81f34e94959591fad00"}, + {file = "Unidecode-1.2.0.tar.gz", hash = "sha256:8d73a97d387a956922344f6b74243c2c6771594659778744b2dbdaad8f6b727d"}, ] [[package]] @@ -3182,41 +3372,42 @@ files = [ [[package]] name = "urllib3" -version = "1.26.14" +version = "2.0.2" description = "HTTP library with thread-safe connection pooling, file post, and more." category = "main" optional = false -python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" +python-versions = ">=3.7" files = [ - {file = "urllib3-1.26.14-py2.py3-none-any.whl", hash = "sha256:75edcdc2f7d85b137124a6c3c9fc3933cdeaa12ecb9a6a959f22797a0feca7e1"}, - {file = "urllib3-1.26.14.tar.gz", hash = "sha256:076907bf8fd355cde77728471316625a4d2f7e713c125f51953bb5b3eecf4f72"}, + {file = "urllib3-2.0.2-py3-none-any.whl", hash = "sha256:d055c2f9d38dc53c808f6fdc8eab7360b6fdbbde02340ed25cfbcd817c62469e"}, + {file = "urllib3-2.0.2.tar.gz", hash = "sha256:61717a1095d7e155cdb737ac7bb2f4324a858a1e2e6466f6d03ff630ca68d3cc"}, ] [package.extras] -brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)", "brotlipy (>=0.6.0)"] -secure = ["certifi", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "ipaddress", "pyOpenSSL (>=0.14)", "urllib3-secure-extra"] -socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] +brotli = ["brotli (>=1.0.9)", "brotlicffi (>=0.8.0)"] +secure = ["certifi", "cryptography (>=1.9)", "idna (>=2.0.0)", "pyopenssl (>=17.1.0)", "urllib3-secure-extra"] +socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"] +zstd = ["zstandard (>=0.18.0)"] [[package]] name = "virtualenv" -version = "20.17.1" +version = "20.23.0" description = "Virtual Python Environment builder" category = "dev" optional = false -python-versions = ">=3.6" +python-versions = ">=3.7" files = [ - {file = "virtualenv-20.17.1-py3-none-any.whl", hash = "sha256:ce3b1684d6e1a20a3e5ed36795a97dfc6af29bc3970ca8dab93e11ac6094b3c4"}, - {file = "virtualenv-20.17.1.tar.gz", hash = "sha256:f8b927684efc6f1cc206c9db297a570ab9ad0e51c16fa9e45487d36d1905c058"}, + {file = "virtualenv-20.23.0-py3-none-any.whl", hash = "sha256:6abec7670e5802a528357fdc75b26b9f57d5d92f29c5462ba0fbe45feacc685e"}, + {file = "virtualenv-20.23.0.tar.gz", hash = "sha256:a85caa554ced0c0afbd0d638e7e2d7b5f92d23478d05d17a76daeac8f279f924"}, ] [package.dependencies] distlib = ">=0.3.6,<1" -filelock = ">=3.4.1,<4" -platformdirs = ">=2.4,<3" +filelock = ">=3.11,<4" +platformdirs = ">=3.2,<4" [package.extras] -docs = ["proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-argparse (>=0.3.2)", "sphinx-rtd-theme (>=1)", "towncrier (>=22.8)"] -testing = ["coverage (>=6.2)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=21.3)", "pytest (>=7.0.1)", "pytest-env (>=0.6.2)", "pytest-freezegun (>=0.4.2)", "pytest-mock (>=3.6.1)", "pytest-randomly (>=3.10.3)", "pytest-timeout (>=2.1)"] +docs = ["furo (>=2023.3.27)", 
"proselint (>=0.13)", "sphinx (>=6.1.3)", "sphinx-argparse (>=0.4)", "sphinxcontrib-towncrier (>=0.2.1a0)", "towncrier (>=22.12)"] +test = ["covdefaults (>=2.3)", "coverage (>=7.2.3)", "coverage-enable-subprocess (>=1)", "flaky (>=3.7)", "packaging (>=23.1)", "pytest (>=7.3.1)", "pytest-env (>=0.8.1)", "pytest-freezegun (>=0.4.2)", "pytest-mock (>=3.10)", "pytest-randomly (>=3.12)", "pytest-timeout (>=2.1)", "setuptools (>=67.7.1)", "time-machine (>=2.9)"] [[package]] name = "wcwidth" @@ -3247,91 +3438,102 @@ six = "*" [[package]] name = "wheel" -version = "0.38.4" +version = "0.40.0" description = "A built-package format for Python" category = "dev" optional = false python-versions = ">=3.7" files = [ - {file = "wheel-0.38.4-py3-none-any.whl", hash = "sha256:b60533f3f5d530e971d6737ca6d58681ee434818fab630c83a734bb10c083ce8"}, - {file = "wheel-0.38.4.tar.gz", hash = "sha256:965f5259b566725405b05e7cf774052044b1ed30119b5d586b2703aafe8719ac"}, + {file = "wheel-0.40.0-py3-none-any.whl", hash = "sha256:d236b20e7cb522daf2390fa84c55eea81c5c30190f90f29ae2ca1ad8355bf247"}, + {file = "wheel-0.40.0.tar.gz", hash = "sha256:cd1196f3faee2b31968d626e1731c94f99cbdb67cf5a46e4f5656cbee7738873"}, ] [package.extras] -test = ["pytest (>=3.0.0)"] +test = ["pytest (>=6.0.0)"] [[package]] name = "wrapt" -version = "1.14.1" +version = "1.15.0" description = "Module for decorators, wrappers and monkey patching." -category = "main" +category = "dev" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" files = [ - {file = "wrapt-1.14.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:1b376b3f4896e7930f1f772ac4b064ac12598d1c38d04907e696cc4d794b43d3"}, - {file = "wrapt-1.14.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:903500616422a40a98a5a3c4ff4ed9d0066f3b4c951fa286018ecdf0750194ef"}, - {file = "wrapt-1.14.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5a9a0d155deafd9448baff28c08e150d9b24ff010e899311ddd63c45c2445e28"}, - {file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:ddaea91abf8b0d13443f6dac52e89051a5063c7d014710dcb4d4abb2ff811a59"}, - {file = "wrapt-1.14.1-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:36f582d0c6bc99d5f39cd3ac2a9062e57f3cf606ade29a0a0d6b323462f4dd87"}, - {file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:7ef58fb89674095bfc57c4069e95d7a31cfdc0939e2a579882ac7d55aadfd2a1"}, - {file = "wrapt-1.14.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:e2f83e18fe2f4c9e7db597e988f72712c0c3676d337d8b101f6758107c42425b"}, - {file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:ee2b1b1769f6707a8a445162ea16dddf74285c3964f605877a20e38545c3c462"}, - {file = "wrapt-1.14.1-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:833b58d5d0b7e5b9832869f039203389ac7cbf01765639c7309fd50ef619e0b1"}, - {file = "wrapt-1.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:80bb5c256f1415f747011dc3604b59bc1f91c6e7150bd7db03b19170ee06b320"}, - {file = "wrapt-1.14.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:07f7a7d0f388028b2df1d916e94bbb40624c59b48ecc6cbc232546706fac74c2"}, - {file = "wrapt-1.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02b41b633c6261feff8ddd8d11c711df6842aba629fdd3da10249a53211a72c4"}, - {file = "wrapt-1.14.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2fe803deacd09a233e4762a1adcea5db5d31e6be577a43352936179d14d90069"}, - {file = 
"wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:257fd78c513e0fb5cdbe058c27a0624c9884e735bbd131935fd49e9fe719d310"}, - {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4fcc4649dc762cddacd193e6b55bc02edca674067f5f98166d7713b193932b7f"}, - {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:11871514607b15cfeb87c547a49bca19fde402f32e2b1c24a632506c0a756656"}, - {file = "wrapt-1.14.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8ad85f7f4e20964db4daadcab70b47ab05c7c1cf2a7c1e51087bfaa83831854c"}, - {file = "wrapt-1.14.1-cp310-cp310-win32.whl", hash = "sha256:a9a52172be0b5aae932bef82a79ec0a0ce87288c7d132946d645eba03f0ad8a8"}, - {file = "wrapt-1.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:6d323e1554b3d22cfc03cd3243b5bb815a51f5249fdcbb86fda4bf62bab9e164"}, - {file = "wrapt-1.14.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:43ca3bbbe97af00f49efb06e352eae40434ca9d915906f77def219b88e85d907"}, - {file = "wrapt-1.14.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:6b1a564e6cb69922c7fe3a678b9f9a3c54e72b469875aa8018f18b4d1dd1adf3"}, - {file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:00b6d4ea20a906c0ca56d84f93065b398ab74b927a7a3dbd470f6fc503f95dc3"}, - {file = "wrapt-1.14.1-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:a85d2b46be66a71bedde836d9e41859879cc54a2a04fad1191eb50c2066f6e9d"}, - {file = "wrapt-1.14.1-cp35-cp35m-win32.whl", hash = "sha256:dbcda74c67263139358f4d188ae5faae95c30929281bc6866d00573783c422b7"}, - {file = "wrapt-1.14.1-cp35-cp35m-win_amd64.whl", hash = "sha256:b21bb4c09ffabfa0e85e3a6b623e19b80e7acd709b9f91452b8297ace2a8ab00"}, - {file = "wrapt-1.14.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9e0fd32e0148dd5dea6af5fee42beb949098564cc23211a88d799e434255a1f4"}, - {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9736af4641846491aedb3c3f56b9bc5568d92b0692303b5a305301a95dfd38b1"}, - {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b02d65b9ccf0ef6c34cba6cf5bf2aab1bb2f49c6090bafeecc9cd81ad4ea1c1"}, - {file = "wrapt-1.14.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21ac0156c4b089b330b7666db40feee30a5d52634cc4560e1905d6529a3897ff"}, - {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:9f3e6f9e05148ff90002b884fbc2a86bd303ae847e472f44ecc06c2cd2fcdb2d"}, - {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:6e743de5e9c3d1b7185870f480587b75b1cb604832e380d64f9504a0535912d1"}, - {file = "wrapt-1.14.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:d79d7d5dc8a32b7093e81e97dad755127ff77bcc899e845f41bf71747af0c569"}, - {file = "wrapt-1.14.1-cp36-cp36m-win32.whl", hash = "sha256:81b19725065dcb43df02b37e03278c011a09e49757287dca60c5aecdd5a0b8ed"}, - {file = "wrapt-1.14.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b014c23646a467558be7da3d6b9fa409b2c567d2110599b7cf9a0c5992b3b471"}, - {file = "wrapt-1.14.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:88bd7b6bd70a5b6803c1abf6bca012f7ed963e58c68d76ee20b9d751c74a3248"}, - {file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5901a312f4d14c59918c221323068fad0540e34324925c8475263841dbdfe68"}, - {file = 
"wrapt-1.14.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d77c85fedff92cf788face9bfa3ebaa364448ebb1d765302e9af11bf449ca36d"}, - {file = "wrapt-1.14.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d649d616e5c6a678b26d15ece345354f7c2286acd6db868e65fcc5ff7c24a77"}, - {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:7d2872609603cb35ca513d7404a94d6d608fc13211563571117046c9d2bcc3d7"}, - {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:ee6acae74a2b91865910eef5e7de37dc6895ad96fa23603d1d27ea69df545015"}, - {file = "wrapt-1.14.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:2b39d38039a1fdad98c87279b48bc5dce2c0ca0d73483b12cb72aa9609278e8a"}, - {file = "wrapt-1.14.1-cp37-cp37m-win32.whl", hash = "sha256:60db23fa423575eeb65ea430cee741acb7c26a1365d103f7b0f6ec412b893853"}, - {file = "wrapt-1.14.1-cp37-cp37m-win_amd64.whl", hash = "sha256:709fe01086a55cf79d20f741f39325018f4df051ef39fe921b1ebe780a66184c"}, - {file = "wrapt-1.14.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8c0ce1e99116d5ab21355d8ebe53d9460366704ea38ae4d9f6933188f327b456"}, - {file = "wrapt-1.14.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e3fb1677c720409d5f671e39bac6c9e0e422584e5f518bfd50aa4cbbea02433f"}, - {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:642c2e7a804fcf18c222e1060df25fc210b9c58db7c91416fb055897fc27e8cc"}, - {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7b7c050ae976e286906dd3f26009e117eb000fb2cf3533398c5ad9ccc86867b1"}, - {file = "wrapt-1.14.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef3f72c9666bba2bab70d2a8b79f2c6d2c1a42a7f7e2b0ec83bb2f9e383950af"}, - {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:01c205616a89d09827986bc4e859bcabd64f5a0662a7fe95e0d359424e0e071b"}, - {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:5a0f54ce2c092aaf439813735584b9537cad479575a09892b8352fea5e988dc0"}, - {file = "wrapt-1.14.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2cf71233a0ed05ccdabe209c606fe0bac7379fdcf687f39b944420d2a09fdb57"}, - {file = "wrapt-1.14.1-cp38-cp38-win32.whl", hash = "sha256:aa31fdcc33fef9eb2552cbcbfee7773d5a6792c137b359e82879c101e98584c5"}, - {file = "wrapt-1.14.1-cp38-cp38-win_amd64.whl", hash = "sha256:d1967f46ea8f2db647c786e78d8cc7e4313dbd1b0aca360592d8027b8508e24d"}, - {file = "wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3232822c7d98d23895ccc443bbdf57c7412c5a65996c30442ebe6ed3df335383"}, - {file = "wrapt-1.14.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:988635d122aaf2bdcef9e795435662bcd65b02f4f4c1ae37fbee7401c440b3a7"}, - {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9cca3c2cdadb362116235fdbd411735de4328c61425b0aa9f872fd76d02c4e86"}, - {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d52a25136894c63de15a35bc0bdc5adb4b0e173b9c0d07a2be9d3ca64a332735"}, - {file = "wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40e7bc81c9e2b2734ea4bc1aceb8a8f0ceaac7c5299bc5d69e37c44d9081d43b"}, - {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_aarch64.whl", 
hash = "sha256:b9b7a708dd92306328117d8c4b62e2194d00c365f18eff11a9b53c6f923b01e3"}, - {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:6a9a25751acb379b466ff6be78a315e2b439d4c94c1e99cb7266d40a537995d3"}, - {file = "wrapt-1.14.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:34aa51c45f28ba7f12accd624225e2b1e5a3a45206aa191f6f9aac931d9d56fe"}, - {file = "wrapt-1.14.1-cp39-cp39-win32.whl", hash = "sha256:dee0ce50c6a2dd9056c20db781e9c1cfd33e77d2d569f5d1d9321c641bb903d5"}, - {file = "wrapt-1.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:dee60e1de1898bde3b238f18340eec6148986da0455d8ba7848d50470a7a32fb"}, - {file = "wrapt-1.14.1.tar.gz", hash = "sha256:380a85cf89e0e69b7cfbe2ea9f765f004ff419f34194018a6827ac0e3edfed4d"}, + {file = "wrapt-1.15.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:ca1cccf838cd28d5a0883b342474c630ac48cac5df0ee6eacc9c7290f76b11c1"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e826aadda3cae59295b95343db8f3d965fb31059da7de01ee8d1c40a60398b29"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5fc8e02f5984a55d2c653f5fea93531e9836abbd84342c1d1e17abc4a15084c2"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:96e25c8603a155559231c19c0349245eeb4ac0096fe3c1d0be5c47e075bd4f46"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:40737a081d7497efea35ab9304b829b857f21558acfc7b3272f908d33b0d9d4c"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:f87ec75864c37c4c6cb908d282e1969e79763e0d9becdfe9fe5473b7bb1e5f09"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:1286eb30261894e4c70d124d44b7fd07825340869945c79d05bda53a40caa079"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:493d389a2b63c88ad56cdc35d0fa5752daac56ca755805b1b0c530f785767d5e"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:58d7a75d731e8c63614222bcb21dd992b4ab01a399f1f09dd82af17bbfc2368a"}, + {file = "wrapt-1.15.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:21f6d9a0d5b3a207cdf7acf8e58d7d13d463e639f0c7e01d82cdb671e6cb7923"}, + {file = "wrapt-1.15.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ce42618f67741d4697684e501ef02f29e758a123aa2d669e2d964ff734ee00ee"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41d07d029dd4157ae27beab04d22b8e261eddfc6ecd64ff7000b10dc8b3a5727"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54accd4b8bc202966bafafd16e69da9d5640ff92389d33d28555c5fd4f25ccb7"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2fbfbca668dd15b744418265a9607baa970c347eefd0db6a518aaf0cfbd153c0"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:76e9c727a874b4856d11a32fb0b389afc61ce8aaf281ada613713ddeadd1cfec"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e20076a211cd6f9b44a6be58f7eeafa7ab5720eb796975d0c03f05b47d89eb90"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a74d56552ddbde46c246b5b89199cb3fd182f9c346c784e1a93e4dc3f5ec9975"}, + {file = "wrapt-1.15.0-cp310-cp310-win32.whl", hash = "sha256:26458da5653aa5b3d8dc8b24192f574a58984c749401f98fff994d41d3f08da1"}, + {file = "wrapt-1.15.0-cp310-cp310-win_amd64.whl", hash = 
"sha256:75760a47c06b5974aa5e01949bf7e66d2af4d08cb8c1d6516af5e39595397f5e"}, + {file = "wrapt-1.15.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ba1711cda2d30634a7e452fc79eabcadaffedf241ff206db2ee93dd2c89a60e7"}, + {file = "wrapt-1.15.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:56374914b132c702aa9aa9959c550004b8847148f95e1b824772d453ac204a72"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a89ce3fd220ff144bd9d54da333ec0de0399b52c9ac3d2ce34b569cf1a5748fb"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3bbe623731d03b186b3d6b0d6f51865bf598587c38d6f7b0be2e27414f7f214e"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3abbe948c3cbde2689370a262a8d04e32ec2dd4f27103669a45c6929bcdbfe7c"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b67b819628e3b748fd3c2192c15fb951f549d0f47c0449af0764d7647302fda3"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:7eebcdbe3677e58dd4c0e03b4f2cfa346ed4049687d839adad68cc38bb559c92"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:74934ebd71950e3db69960a7da29204f89624dde411afbfb3b4858c1409b1e98"}, + {file = "wrapt-1.15.0-cp311-cp311-win32.whl", hash = "sha256:bd84395aab8e4d36263cd1b9308cd504f6cf713b7d6d3ce25ea55670baec5416"}, + {file = "wrapt-1.15.0-cp311-cp311-win_amd64.whl", hash = "sha256:a487f72a25904e2b4bbc0817ce7a8de94363bd7e79890510174da9d901c38705"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:4ff0d20f2e670800d3ed2b220d40984162089a6e2c9646fdb09b85e6f9a8fc29"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:9ed6aa0726b9b60911f4aed8ec5b8dd7bf3491476015819f56473ffaef8959bd"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:896689fddba4f23ef7c718279e42f8834041a21342d95e56922e1c10c0cc7afb"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:75669d77bb2c071333417617a235324a1618dba66f82a750362eccbe5b61d248"}, + {file = "wrapt-1.15.0-cp35-cp35m-win32.whl", hash = "sha256:fbec11614dba0424ca72f4e8ba3c420dba07b4a7c206c8c8e4e73f2e98f4c559"}, + {file = "wrapt-1.15.0-cp35-cp35m-win_amd64.whl", hash = "sha256:fd69666217b62fa5d7c6aa88e507493a34dec4fa20c5bd925e4bc12fce586639"}, + {file = "wrapt-1.15.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b0724f05c396b0a4c36a3226c31648385deb6a65d8992644c12a4963c70326ba"}, + {file = "wrapt-1.15.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bbeccb1aa40ab88cd29e6c7d8585582c99548f55f9b2581dfc5ba68c59a85752"}, + {file = "wrapt-1.15.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38adf7198f8f154502883242f9fe7333ab05a5b02de7d83aa2d88ea621f13364"}, + {file = "wrapt-1.15.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:578383d740457fa790fdf85e6d346fda1416a40549fe8db08e5e9bd281c6a475"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:a4cbb9ff5795cd66f0066bdf5947f170f5d63a9274f99bdbca02fd973adcf2a8"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:af5bd9ccb188f6a5fdda9f1f09d9f4c86cc8a539bd48a0bfdc97723970348418"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = 
"sha256:b56d5519e470d3f2fe4aa7585f0632b060d532d0696c5bdfb5e8319e1d0f69a2"}, + {file = "wrapt-1.15.0-cp36-cp36m-win32.whl", hash = "sha256:77d4c1b881076c3ba173484dfa53d3582c1c8ff1f914c6461ab70c8428b796c1"}, + {file = "wrapt-1.15.0-cp36-cp36m-win_amd64.whl", hash = "sha256:077ff0d1f9d9e4ce6476c1a924a3332452c1406e59d90a2cf24aeb29eeac9420"}, + {file = "wrapt-1.15.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5c5aa28df055697d7c37d2099a7bc09f559d5053c3349b1ad0c39000e611d317"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3a8564f283394634a7a7054b7983e47dbf39c07712d7b177b37e03f2467a024e"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:780c82a41dc493b62fc5884fb1d3a3b81106642c5c5c78d6a0d4cbe96d62ba7e"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e169e957c33576f47e21864cf3fc9ff47c223a4ebca8960079b8bd36cb014fd0"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b02f21c1e2074943312d03d243ac4388319f2456576b2c6023041c4d57cd7019"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:f2e69b3ed24544b0d3dbe2c5c0ba5153ce50dcebb576fdc4696d52aa22db6034"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d787272ed958a05b2c86311d3a4135d3c2aeea4fc655705f074130aa57d71653"}, + {file = "wrapt-1.15.0-cp37-cp37m-win32.whl", hash = "sha256:02fce1852f755f44f95af51f69d22e45080102e9d00258053b79367d07af39c0"}, + {file = "wrapt-1.15.0-cp37-cp37m-win_amd64.whl", hash = "sha256:abd52a09d03adf9c763d706df707c343293d5d106aea53483e0ec8d9e310ad5e"}, + {file = "wrapt-1.15.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:cdb4f085756c96a3af04e6eca7f08b1345e94b53af8921b25c72f096e704e145"}, + {file = "wrapt-1.15.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:230ae493696a371f1dbffaad3dafbb742a4d27a0afd2b1aecebe52b740167e7f"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:63424c681923b9f3bfbc5e3205aafe790904053d42ddcc08542181a30a7a51bd"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d6bcbfc99f55655c3d93feb7ef3800bd5bbe963a755687cbf1f490a71fb7794b"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c99f4309f5145b93eca6e35ac1a988f0dc0a7ccf9ccdcd78d3c0adf57224e62f"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b130fe77361d6771ecf5a219d8e0817d61b236b7d8b37cc045172e574ed219e6"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:96177eb5645b1c6985f5c11d03fc2dbda9ad24ec0f3a46dcce91445747e15094"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d5fe3e099cf07d0fb5a1e23d399e5d4d1ca3e6dfcbe5c8570ccff3e9208274f7"}, + {file = "wrapt-1.15.0-cp38-cp38-win32.whl", hash = "sha256:abd8f36c99512755b8456047b7be10372fca271bf1467a1caa88db991e7c421b"}, + {file = "wrapt-1.15.0-cp38-cp38-win_amd64.whl", hash = "sha256:b06fa97478a5f478fb05e1980980a7cdf2712015493b44d0c87606c1513ed5b1"}, + {file = "wrapt-1.15.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2e51de54d4fb8fb50d6ee8327f9828306a959ae394d3e01a1ba8b2f937747d86"}, + {file = "wrapt-1.15.0-cp39-cp39-macosx_11_0_arm64.whl", hash = 
"sha256:0970ddb69bba00670e58955f8019bec4a42d1785db3faa043c33d81de2bf843c"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76407ab327158c510f44ded207e2f76b657303e17cb7a572ffe2f5a8a48aa04d"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cd525e0e52a5ff16653a3fc9e3dd827981917d34996600bbc34c05d048ca35cc"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d37ac69edc5614b90516807de32d08cb8e7b12260a285ee330955604ed9dd29"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:078e2a1a86544e644a68422f881c48b84fef6d18f8c7a957ffd3f2e0a74a0d4a"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:2cf56d0e237280baed46f0b5316661da892565ff58309d4d2ed7dba763d984b8"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:7dc0713bf81287a00516ef43137273b23ee414fe41a3c14be10dd95ed98a2df9"}, + {file = "wrapt-1.15.0-cp39-cp39-win32.whl", hash = "sha256:46ed616d5fb42f98630ed70c3529541408166c22cdfd4540b88d5f21006b0eff"}, + {file = "wrapt-1.15.0-cp39-cp39-win_amd64.whl", hash = "sha256:eef4d64c650f33347c1f9266fa5ae001440b232ad9b98f1f43dfe7a79435c0a6"}, + {file = "wrapt-1.15.0-py3-none-any.whl", hash = "sha256:64b1df0f83706b4ef4cfb4fb0e4c2669100fd7ecacfb59e091fad300d4e04640"}, + {file = "wrapt-1.15.0.tar.gz", hash = "sha256:d06730c6aed78cee4126234cf2d071e01b44b915e725a6cb439a879ec9754a3a"}, ] [[package]] @@ -3357,86 +3559,86 @@ ujson = ["ujson"] [[package]] name = "yarl" -version = "1.8.2" +version = "1.9.2" description = "Yet another URL library" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "yarl-1.8.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:bb81f753c815f6b8e2ddd2eef3c855cf7da193b82396ac013c661aaa6cc6b0a5"}, - {file = "yarl-1.8.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:47d49ac96156f0928f002e2424299b2c91d9db73e08c4cd6742923a086f1c863"}, - {file = "yarl-1.8.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:3fc056e35fa6fba63248d93ff6e672c096f95f7836938241ebc8260e062832fe"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:58a3c13d1c3005dbbac5c9f0d3210b60220a65a999b1833aa46bd6677c69b08e"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:10b08293cda921157f1e7c2790999d903b3fd28cd5c208cf8826b3b508026996"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:de986979bbd87272fe557e0a8fcb66fd40ae2ddfe28a8b1ce4eae22681728fef"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c4fcfa71e2c6a3cb568cf81aadc12768b9995323186a10827beccf5fa23d4f8"}, - {file = "yarl-1.8.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ae4d7ff1049f36accde9e1ef7301912a751e5bae0a9d142459646114c70ecba6"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:bf071f797aec5b96abfc735ab97da9fd8f8768b43ce2abd85356a3127909d146"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:74dece2bfc60f0f70907c34b857ee98f2c6dd0f75185db133770cd67300d505f"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:df60a94d332158b444301c7f569659c926168e4d4aad2cfbf4bce0e8fb8be826"}, - {file = 
"yarl-1.8.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:63243b21c6e28ec2375f932a10ce7eda65139b5b854c0f6b82ed945ba526bff3"}, - {file = "yarl-1.8.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cfa2bbca929aa742b5084fd4663dd4b87c191c844326fcb21c3afd2d11497f80"}, - {file = "yarl-1.8.2-cp310-cp310-win32.whl", hash = "sha256:b05df9ea7496df11b710081bd90ecc3a3db6adb4fee36f6a411e7bc91a18aa42"}, - {file = "yarl-1.8.2-cp310-cp310-win_amd64.whl", hash = "sha256:24ad1d10c9db1953291f56b5fe76203977f1ed05f82d09ec97acb623a7976574"}, - {file = "yarl-1.8.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2a1fca9588f360036242f379bfea2b8b44cae2721859b1c56d033adfd5893634"}, - {file = "yarl-1.8.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f37db05c6051eff17bc832914fe46869f8849de5b92dc4a3466cd63095d23dfd"}, - {file = "yarl-1.8.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:77e913b846a6b9c5f767b14dc1e759e5aff05502fe73079f6f4176359d832581"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0978f29222e649c351b173da2b9b4665ad1feb8d1daa9d971eb90df08702668a"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:388a45dc77198b2460eac0aca1efd6a7c09e976ee768b0d5109173e521a19daf"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2305517e332a862ef75be8fad3606ea10108662bc6fe08509d5ca99503ac2aee"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:42430ff511571940d51e75cf42f1e4dbdded477e71c1b7a17f4da76c1da8ea76"}, - {file = "yarl-1.8.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3150078118f62371375e1e69b13b48288e44f6691c1069340081c3fd12c94d5b"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:c15163b6125db87c8f53c98baa5e785782078fbd2dbeaa04c6141935eb6dab7a"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4d04acba75c72e6eb90745447d69f84e6c9056390f7a9724605ca9c56b4afcc6"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:e7fd20d6576c10306dea2d6a5765f46f0ac5d6f53436217913e952d19237efc4"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:75c16b2a900b3536dfc7014905a128a2bea8fb01f9ee26d2d7d8db0a08e7cb2c"}, - {file = "yarl-1.8.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:6d88056a04860a98341a0cf53e950e3ac9f4e51d1b6f61a53b0609df342cc8b2"}, - {file = "yarl-1.8.2-cp311-cp311-win32.whl", hash = "sha256:fb742dcdd5eec9f26b61224c23baea46c9055cf16f62475e11b9b15dfd5c117b"}, - {file = "yarl-1.8.2-cp311-cp311-win_amd64.whl", hash = "sha256:8c46d3d89902c393a1d1e243ac847e0442d0196bbd81aecc94fcebbc2fd5857c"}, - {file = "yarl-1.8.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ceff9722e0df2e0a9e8a79c610842004fa54e5b309fe6d218e47cd52f791d7ef"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3f6b4aca43b602ba0f1459de647af954769919c4714706be36af670a5f44c9c1"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1684a9bd9077e922300ecd48003ddae7a7474e0412bea38d4631443a91d61077"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ebb78745273e51b9832ef90c0898501006670d6e059f2cdb0e999494eb1450c2"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = 
"sha256:3adeef150d528ded2a8e734ebf9ae2e658f4c49bf413f5f157a470e17a4a2e89"}, - {file = "yarl-1.8.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:57a7c87927a468e5a1dc60c17caf9597161d66457a34273ab1760219953f7f4c"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:efff27bd8cbe1f9bd127e7894942ccc20c857aa8b5a0327874f30201e5ce83d0"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:a783cd344113cb88c5ff7ca32f1f16532a6f2142185147822187913eb989f739"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:705227dccbe96ab02c7cb2c43e1228e2826e7ead880bb19ec94ef279e9555b5b"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:34c09b43bd538bf6c4b891ecce94b6fa4f1f10663a8d4ca589a079a5018f6ed7"}, - {file = "yarl-1.8.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:a48f4f7fea9a51098b02209d90297ac324241bf37ff6be6d2b0149ab2bd51b37"}, - {file = "yarl-1.8.2-cp37-cp37m-win32.whl", hash = "sha256:0414fd91ce0b763d4eadb4456795b307a71524dbacd015c657bb2a39db2eab89"}, - {file = "yarl-1.8.2-cp37-cp37m-win_amd64.whl", hash = "sha256:d881d152ae0007809c2c02e22aa534e702f12071e6b285e90945aa3c376463c5"}, - {file = "yarl-1.8.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5df5e3d04101c1e5c3b1d69710b0574171cc02fddc4b23d1b2813e75f35a30b1"}, - {file = "yarl-1.8.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7a66c506ec67eb3159eea5096acd05f5e788ceec7b96087d30c7d2865a243918"}, - {file = "yarl-1.8.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2b4fa2606adf392051d990c3b3877d768771adc3faf2e117b9de7eb977741229"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1e21fb44e1eff06dd6ef971d4bdc611807d6bd3691223d9c01a18cec3677939e"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:93202666046d9edadfe9f2e7bf5e0782ea0d497b6d63da322e541665d65a044e"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fc77086ce244453e074e445104f0ecb27530d6fd3a46698e33f6c38951d5a0f1"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64dd68a92cab699a233641f5929a40f02a4ede8c009068ca8aa1fe87b8c20ae3"}, - {file = "yarl-1.8.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1b372aad2b5f81db66ee7ec085cbad72c4da660d994e8e590c997e9b01e44901"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e6f3515aafe0209dd17fb9bdd3b4e892963370b3de781f53e1746a521fb39fc0"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:dfef7350ee369197106805e193d420b75467b6cceac646ea5ed3049fcc950a05"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:728be34f70a190566d20aa13dc1f01dc44b6aa74580e10a3fb159691bc76909d"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:ff205b58dc2929191f68162633d5e10e8044398d7a45265f90a0f1d51f85f72c"}, - {file = "yarl-1.8.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:baf211dcad448a87a0d9047dc8282d7de59473ade7d7fdf22150b1d23859f946"}, - {file = "yarl-1.8.2-cp38-cp38-win32.whl", hash = "sha256:272b4f1599f1b621bf2aabe4e5b54f39a933971f4e7c9aa311d6d7dc06965165"}, - {file = "yarl-1.8.2-cp38-cp38-win_amd64.whl", hash = "sha256:326dd1d3caf910cd26a26ccbfb84c03b608ba32499b5d6eeb09252c920bcbe4f"}, - {file = "yarl-1.8.2-cp39-cp39-macosx_10_9_universal2.whl", hash 
= "sha256:f8ca8ad414c85bbc50f49c0a106f951613dfa5f948ab69c10ce9b128d368baf8"}, - {file = "yarl-1.8.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:418857f837347e8aaef682679f41e36c24250097f9e2f315d39bae3a99a34cbf"}, - {file = "yarl-1.8.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ae0eec05ab49e91a78700761777f284c2df119376e391db42c38ab46fd662b77"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:009a028127e0a1755c38b03244c0bea9d5565630db9c4cf9572496e947137a87"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3edac5d74bb3209c418805bda77f973117836e1de7c000e9755e572c1f7850d0"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:da65c3f263729e47351261351b8679c6429151ef9649bba08ef2528ff2c423b2"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ef8fb25e52663a1c85d608f6dd72e19bd390e2ecaf29c17fb08f730226e3a08"}, - {file = "yarl-1.8.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bcd7bb1e5c45274af9a1dd7494d3c52b2be5e6bd8d7e49c612705fd45420b12d"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:44ceac0450e648de86da8e42674f9b7077d763ea80c8ceb9d1c3e41f0f0a9951"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:97209cc91189b48e7cfe777237c04af8e7cc51eb369004e061809bcdf4e55220"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:48dd18adcf98ea9cd721a25313aef49d70d413a999d7d89df44f469edfb38a06"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:e59399dda559688461762800d7fb34d9e8a6a7444fd76ec33220a926c8be1516"}, - {file = "yarl-1.8.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:d617c241c8c3ad5c4e78a08429fa49e4b04bedfc507b34b4d8dceb83b4af3588"}, - {file = "yarl-1.8.2-cp39-cp39-win32.whl", hash = "sha256:cb6d48d80a41f68de41212f3dfd1a9d9898d7841c8f7ce6696cf2fd9cb57ef83"}, - {file = "yarl-1.8.2-cp39-cp39-win_amd64.whl", hash = "sha256:6604711362f2dbf7160df21c416f81fac0de6dbcf0b5445a2ef25478ecc4c778"}, - {file = "yarl-1.8.2.tar.gz", hash = "sha256:49d43402c6e3013ad0978602bf6bf5328535c48d192304b91b97a3c6790b1562"}, + {file = "yarl-1.9.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8c2ad583743d16ddbdf6bb14b5cd76bf43b0d0006e918809d5d4ddf7bde8dd82"}, + {file = "yarl-1.9.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:82aa6264b36c50acfb2424ad5ca537a2060ab6de158a5bd2a72a032cc75b9eb8"}, + {file = "yarl-1.9.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c0c77533b5ed4bcc38e943178ccae29b9bcf48ffd1063f5821192f23a1bd27b9"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ee4afac41415d52d53a9833ebae7e32b344be72835bbb589018c9e938045a560"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9bf345c3a4f5ba7f766430f97f9cc1320786f19584acc7086491f45524a551ac"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2a96c19c52ff442a808c105901d0bdfd2e28575b3d5f82e2f5fd67e20dc5f4ea"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:891c0e3ec5ec881541f6c5113d8df0315ce5440e244a716b95f2525b7b9f3608"}, + {file = "yarl-1.9.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = 
"sha256:c3a53ba34a636a256d767c086ceb111358876e1fb6b50dfc4d3f4951d40133d5"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:566185e8ebc0898b11f8026447eacd02e46226716229cea8db37496c8cdd26e0"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:2b0738fb871812722a0ac2154be1f049c6223b9f6f22eec352996b69775b36d4"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:32f1d071b3f362c80f1a7d322bfd7b2d11e33d2adf395cc1dd4df36c9c243095"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:e9fdc7ac0d42bc3ea78818557fab03af6181e076a2944f43c38684b4b6bed8e3"}, + {file = "yarl-1.9.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:56ff08ab5df8429901ebdc5d15941b59f6253393cb5da07b4170beefcf1b2528"}, + {file = "yarl-1.9.2-cp310-cp310-win32.whl", hash = "sha256:8ea48e0a2f931064469bdabca50c2f578b565fc446f302a79ba6cc0ee7f384d3"}, + {file = "yarl-1.9.2-cp310-cp310-win_amd64.whl", hash = "sha256:50f33040f3836e912ed16d212f6cc1efb3231a8a60526a407aeb66c1c1956dde"}, + {file = "yarl-1.9.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:646d663eb2232d7909e6601f1a9107e66f9791f290a1b3dc7057818fe44fc2b6"}, + {file = "yarl-1.9.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:aff634b15beff8902d1f918012fc2a42e0dbae6f469fce134c8a0dc51ca423bb"}, + {file = "yarl-1.9.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a83503934c6273806aed765035716216cc9ab4e0364f7f066227e1aaea90b8d0"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b25322201585c69abc7b0e89e72790469f7dad90d26754717f3310bfe30331c2"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:22a94666751778629f1ec4280b08eb11815783c63f52092a5953faf73be24191"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8ec53a0ea2a80c5cd1ab397925f94bff59222aa3cf9c6da938ce05c9ec20428d"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:159d81f22d7a43e6eabc36d7194cb53f2f15f498dbbfa8edc8a3239350f59fe7"}, + {file = "yarl-1.9.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:832b7e711027c114d79dffb92576acd1bd2decc467dec60e1cac96912602d0e6"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:95d2ecefbcf4e744ea952d073c6922e72ee650ffc79028eb1e320e732898d7e8"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:d4e2c6d555e77b37288eaf45b8f60f0737c9efa3452c6c44626a5455aeb250b9"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:783185c75c12a017cc345015ea359cc801c3b29a2966c2655cd12b233bf5a2be"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:b8cc1863402472f16c600e3e93d542b7e7542a540f95c30afd472e8e549fc3f7"}, + {file = "yarl-1.9.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:822b30a0f22e588b32d3120f6d41e4ed021806418b4c9f0bc3048b8c8cb3f92a"}, + {file = "yarl-1.9.2-cp311-cp311-win32.whl", hash = "sha256:a60347f234c2212a9f0361955007fcf4033a75bf600a33c88a0a8e91af77c0e8"}, + {file = "yarl-1.9.2-cp311-cp311-win_amd64.whl", hash = "sha256:be6b3fdec5c62f2a67cb3f8c6dbf56bbf3f61c0f046f84645cd1ca73532ea051"}, + {file = "yarl-1.9.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:38a3928ae37558bc1b559f67410df446d1fbfa87318b124bf5032c31e3447b74"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", 
hash = "sha256:ac9bb4c5ce3975aeac288cfcb5061ce60e0d14d92209e780c93954076c7c4367"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3da8a678ca8b96c8606bbb8bfacd99a12ad5dd288bc6f7979baddd62f71c63ef"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:13414591ff516e04fcdee8dc051c13fd3db13b673c7a4cb1350e6b2ad9639ad3"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bf74d08542c3a9ea97bb8f343d4fcbd4d8f91bba5ec9d5d7f792dbe727f88938"}, + {file = "yarl-1.9.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6e7221580dc1db478464cfeef9b03b95c5852cc22894e418562997df0d074ccc"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:494053246b119b041960ddcd20fd76224149cfea8ed8777b687358727911dd33"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:52a25809fcbecfc63ac9ba0c0fb586f90837f5425edfd1ec9f3372b119585e45"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:e65610c5792870d45d7b68c677681376fcf9cc1c289f23e8e8b39c1485384185"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:1b1bba902cba32cdec51fca038fd53f8beee88b77efc373968d1ed021024cc04"}, + {file = "yarl-1.9.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:662e6016409828ee910f5d9602a2729a8a57d74b163c89a837de3fea050c7582"}, + {file = "yarl-1.9.2-cp37-cp37m-win32.whl", hash = "sha256:f364d3480bffd3aa566e886587eaca7c8c04d74f6e8933f3f2c996b7f09bee1b"}, + {file = "yarl-1.9.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6a5883464143ab3ae9ba68daae8e7c5c95b969462bbe42e2464d60e7e2698368"}, + {file = "yarl-1.9.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5610f80cf43b6202e2c33ba3ec2ee0a2884f8f423c8f4f62906731d876ef4fac"}, + {file = "yarl-1.9.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b9a4e67ad7b646cd6f0938c7ebfd60e481b7410f574c560e455e938d2da8e0f4"}, + {file = "yarl-1.9.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:83fcc480d7549ccebe9415d96d9263e2d4226798c37ebd18c930fce43dfb9574"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5fcd436ea16fee7d4207c045b1e340020e58a2597301cfbcfdbe5abd2356c2fb"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:84e0b1599334b1e1478db01b756e55937d4614f8654311eb26012091be109d59"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3458a24e4ea3fd8930e934c129b676c27452e4ebda80fbe47b56d8c6c7a63a9e"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:838162460b3a08987546e881a2bfa573960bb559dfa739e7800ceeec92e64417"}, + {file = "yarl-1.9.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f4e2d08f07a3d7d3e12549052eb5ad3eab1c349c53ac51c209a0e5991bbada78"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:de119f56f3c5f0e2fb4dee508531a32b069a5f2c6e827b272d1e0ff5ac040333"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:149ddea5abf329752ea5051b61bd6c1d979e13fbf122d3a1f9f0c8be6cb6f63c"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:674ca19cbee4a82c9f54e0d1eee28116e63bc6fd1e96c43031d11cbab8b2afd5"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_s390x.whl", hash = 
"sha256:9b3152f2f5677b997ae6c804b73da05a39daa6a9e85a512e0e6823d81cdad7cc"}, + {file = "yarl-1.9.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:5415d5a4b080dc9612b1b63cba008db84e908b95848369aa1da3686ae27b6d2b"}, + {file = "yarl-1.9.2-cp38-cp38-win32.whl", hash = "sha256:f7a3d8146575e08c29ed1cd287068e6d02f1c7bdff8970db96683b9591b86ee7"}, + {file = "yarl-1.9.2-cp38-cp38-win_amd64.whl", hash = "sha256:63c48f6cef34e6319a74c727376e95626f84ea091f92c0250a98e53e62c77c72"}, + {file = "yarl-1.9.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:75df5ef94c3fdc393c6b19d80e6ef1ecc9ae2f4263c09cacb178d871c02a5ba9"}, + {file = "yarl-1.9.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:c027a6e96ef77d401d8d5a5c8d6bc478e8042f1e448272e8d9752cb0aff8b5c8"}, + {file = "yarl-1.9.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:f3b078dbe227f79be488ffcfc7a9edb3409d018e0952cf13f15fd6512847f3f7"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:59723a029760079b7d991a401386390c4be5bfec1e7dd83e25a6a0881859e716"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b03917871bf859a81ccb180c9a2e6c1e04d2f6a51d953e6a5cdd70c93d4e5a2a"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c1012fa63eb6c032f3ce5d2171c267992ae0c00b9e164efe4d73db818465fac3"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a74dcbfe780e62f4b5a062714576f16c2f3493a0394e555ab141bf0d746bb955"}, + {file = "yarl-1.9.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8c56986609b057b4839968ba901944af91b8e92f1725d1a2d77cbac6972b9ed1"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2c315df3293cd521033533d242d15eab26583360b58f7ee5d9565f15fee1bef4"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:b7232f8dfbd225d57340e441d8caf8652a6acd06b389ea2d3222b8bc89cbfca6"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:53338749febd28935d55b41bf0bcc79d634881195a39f6b2f767870b72514caf"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:066c163aec9d3d073dc9ffe5dd3ad05069bcb03fcaab8d221290ba99f9f69ee3"}, + {file = "yarl-1.9.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8288d7cd28f8119b07dd49b7230d6b4562f9b61ee9a4ab02221060d21136be80"}, + {file = "yarl-1.9.2-cp39-cp39-win32.whl", hash = "sha256:b124e2a6d223b65ba8768d5706d103280914d61f5cae3afbc50fc3dfcc016623"}, + {file = "yarl-1.9.2-cp39-cp39-win_amd64.whl", hash = "sha256:61016e7d582bc46a5378ffdd02cd0314fb8ba52f40f9cf4d9a5e7dbef88dee18"}, + {file = "yarl-1.9.2.tar.gz", hash = "sha256:04ab9d4b9f587c06d801c2abfe9317b77cdf996c65a90d5e84ecc45010823571"}, ] [package.dependencies] @@ -3445,21 +3647,24 @@ multidict = ">=4.0" [[package]] name = "zipp" -version = "3.11.0" +version = "3.15.0" description = "Backport of pathlib-compatible object wrapper for zip files" category = "main" optional = false python-versions = ">=3.7" files = [ - {file = "zipp-3.11.0-py3-none-any.whl", hash = "sha256:83a28fcb75844b5c0cdaf5aa4003c2d728c77e05f5aeabe8e95e56727005fbaa"}, - {file = "zipp-3.11.0.tar.gz", hash = "sha256:a7a22e05929290a67401440b39690ae6563279bced5f314609d9d03798f56766"}, + {file = "zipp-3.15.0-py3-none-any.whl", hash = "sha256:48904fc76a60e542af151aded95726c1a5c34ed43ab4134b597665c86d7ad556"}, + {file = "zipp-3.15.0.tar.gz", hash = 
"sha256:112929ad649da941c23de50f356a2b5570c954b65150642bccdd66bf194d224b"}, ] [package.extras] -docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)"] -testing = ["flake8 (<5)", "func-timeout", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"] +docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"] +testing = ["big-O", "flake8 (<5)", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"] + +[extras] +docs = [] [metadata] lock-version = "2.0" python-versions = ">=3.9.1,<3.10" -content-hash = "02daca205796a0f29a0d9f50707544e6804f32027eba493cd2aa7f175a00dcea" +content-hash = "d2b8da22dcd11e0b03f19b9b79e51f205156c5ce75e41cc0225392e9afd8803b" diff --git a/pyproject.toml b/pyproject.toml index 06a74d9126..a07c547123 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.15.11" # OpenPype +version = "3.16.4" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" @@ -32,23 +32,23 @@ python = ">=3.9.1,<3.10" aiohttp = "^3.7" aiohttp_json_rpc = "*" # TVPaint server acre = { git = "https://github.com/pypeclub/acre.git" } -opentimelineio = "^0.14" appdirs = { git = "https://github.com/ActiveState/appdirs.git", branch = "master" } blessed = "^1.17" # openpype terminal formatting coolname = "*" clique = "1.6.*" -Click = "^7" +Click = "^8" dnspython = "^2.1.0" ftrack-python-api = "^2.3.3" +arrow = "^0.17" shotgun_api3 = {git = "https://github.com/shotgunsoftware/python-api.git", rev = "v3.3.3"} -gazu = "^0.8.34" +gazu = "^0.9.3" google-api-python-client = "^1.12.8" # sync server google support (should be separate?) jsonschema = "^2.6.0" keyring = "^22.0.1" log4mongo = "^1.7" pathlib2= "^2.3.5" # deadline submit publish job only (single place, maybe not needed?) 
Pillow = "^9.0" # used in TVPaint and for slates -pyblish-base = "^1.8.8" +pyblish-base = "^1.8.11" pynput = "^1.7.2" # idle manager in tray pymongo = "^3.11.2" "Qt.py" = "^1.3.3" @@ -70,7 +70,8 @@ requests = "^2.25.1" pysftp = "^0.2.9" dropbox = "^11.20.0" aiohttp-middlewares = "^2.0.0" -opencolorio = "^2.2.0" +Unidecode = "1.2.0" +cryptography = "39.0.0" [tool.poetry.dev-dependencies] flake8 = "^6.0" @@ -81,14 +82,19 @@ GitPython = "^3.1.17" jedi = "^0.13" Jinja2 = "^3" markupsafe = "2.0.1" -pycodestyle = "^2.5.0" -pydocstyle = "^3.0.0" +pycodestyle = "*" +pydocstyle = "*" +linkify-it-py = "^2.0.0" +myst-parser = "^0.18.1" pylint = "^2.4.4" pytest = "^6.1" pytest-cov = "*" pytest-print = "*" -Sphinx = "^6.1" -sphinx-rtd-theme = "*" +Sphinx = "^5.3" +m2r2 = "^0.3.3.post2" +sphinx-autoapi = "^2.0.1" +sphinxcontrib-napoleon = "^0.7" +revitron-sphinx-theme = { git = "https://github.com/revitron/revitron-sphinx-theme.git", branch = "master" } recommonmark = "*" wheel = "*" enlighten = "*" # cool terminal progress bars @@ -118,12 +124,18 @@ version = "5.15.2" [openpype.qtbinding.darwin] package = "PySide6" -version = "6.4.1" +version = "6.4.3" [openpype.qtbinding.linux] package = "PySide2" version = "5.15.2" +# Python dependencies that will be available only in runtime of +# OpenPype process - do not interfere with DCCs dependencies +[openpype.runtime-deps] +opencolorio = "2.2.1" +opentimelineio = "0.14.1" + # TODO: we will need to handle different linux flavours here and # also different macos versions too. [openpype.thirdparty.ffmpeg.windows] @@ -165,3 +177,6 @@ ignore = ["website", "docs", ".git"] reportMissingImports = true reportMissingTypeStubs = false + +[tool.poetry.extras] +docs = ["Sphinx", "furo", "sphinxcontrib-napoleon"] diff --git a/server_addon/README.md b/server_addon/README.md new file mode 100644 index 0000000000..c6d467adaa --- /dev/null +++ b/server_addon/README.md @@ -0,0 +1,34 @@ +# Addons for AYON server +Preparation of AYON addons based on OpenPype codebase. The output is a bunch of zip files in `./packages` directory that can be uploaded to AYON server. One of the packages is `openpype` which is OpenPype code converted to AYON addon. The addon is must have requirement to be able to use `ayon-launcher`. The versioning of `openpype` addon is following versioning of OpenPype. The other addons contain only settings models. + +## Intro +OpenPype is transitioning to AYON, a dedicated server with its own database, moving away from MongoDB. During this transition period, OpenPype will remain compatible with both MongoDB and AYON. However, we will gradually update the codebase to align with AYON's data structure and separate individual components into addons. + +Currently, OpenPype has an AYON mode, which means it utilizes the AYON server instead of MongoDB through conversion utilities. Initially, we added the AYON executable alongside the OpenPype executables to enable AYON mode. While this approach worked, updating to new code versions would require a complete reinstallation. To address this, we have decided to create a new repository specifically for the base desktop application logic, which we currently refer to as the AYON Launcher. This Launcher will replace the executables generated by the OpenPype build and convert the OpenPype code into a server addon, resulting in smaller updates. + +Since the implementation of the AYON Launcher is not yet fully completed, we will maintain both methods of starting AYON mode for now. 
Once the AYON Launcher is finished, we will remove the AYON executables from the OpenPype codebase entirely. + +During this transitional period, the AYON Launcher addon will be required as the entry point for using the AYON Launcher.
+ +## How to start +The `create_ayon_addons.py` Python file contains the logic for creating server addons from the OpenPype codebase. Just run it: +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py +``` + +It will create zip files in the `./packages` directory for the AYON server. You can then upload the zip files to the AYON server. Restart the server to update the addon information, add the addon version to a server bundle, and set the bundle for production or staging use.
+ +Once the addon is on the server and enabled, you can simply run the AYON launcher. The content will be downloaded and used automatically.
+ +### Additional arguments +Additional arguments are useful for development purposes. + +To skip zip creation and keep only the server-ready folder structure, pass the `--skip-zip` argument. +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py --skip-zip +``` + +To create the zips and also keep the folder structure, pass the `--keep-sources` argument. +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py --keep-sources +```
diff --git a/server_addon/aftereffects/LICENSE b/server_addon/aftereffects/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/aftereffects/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship.
For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/aftereffects/README.md b/server_addon/aftereffects/README.md new file mode 100644 index 0000000000..b2f34f3407 --- /dev/null +++ b/server_addon/aftereffects/README.md @@ -0,0 +1,4 @@ +AfterEffects Addon +=============== + +Integration with Adobe AfterEffects. 
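For orientation, each of the settings-only addons added below follows the same on-disk layout. The sketch here is reconstructed from the AfterEffects files in this diff (the names and annotations come from those files, not from any separate packaging spec), and it is presumably what `create_ayon_addons.py` bundles into each zip package:

```
server_addon/aftereffects/
├── LICENSE                 # Apache License 2.0
├── README.md
└── server/
    ├── __init__.py         # BaseServerAddon subclass: name, title, version, settings_model
    ├── version.py          # __version__ = "0.1.1"
    └── settings/           # pydantic settings models (imageio, creator/publish plugins, ...)
```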
diff --git a/server_addon/aftereffects/server/__init__.py b/server_addon/aftereffects/server/__init__.py new file mode 100644 index 0000000000..e14e76e9db --- /dev/null +++ b/server_addon/aftereffects/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import AfterEffectsSettings, DEFAULT_AFTEREFFECTS_SETTING +from .version import __version__ + + +class AfterEffects(BaseServerAddon): + name = "aftereffects" + title = "AfterEffects" + version = __version__ + + settings_model = AfterEffectsSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_AFTEREFFECTS_SETTING) diff --git a/server_addon/aftereffects/server/settings/__init__.py b/server_addon/aftereffects/server/settings/__init__.py new file mode 100644 index 0000000000..4e96804b4a --- /dev/null +++ b/server_addon/aftereffects/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + AfterEffectsSettings, + DEFAULT_AFTEREFFECTS_SETTING, +) + + +__all__ = ( + "AfterEffectsSettings", + "DEFAULT_AFTEREFFECTS_SETTING", +) diff --git a/server_addon/aftereffects/server/settings/creator_plugins.py b/server_addon/aftereffects/server/settings/creator_plugins.py new file mode 100644 index 0000000000..ee52fadd40 --- /dev/null +++ b/server_addon/aftereffects/server/settings/creator_plugins.py @@ -0,0 +1,18 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateRenderPlugin(BaseSettingsModel): + mark_for_review: bool = Field(True, title="Review") + defaults: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + + +class AfterEffectsCreatorPlugins(BaseSettingsModel): + RenderCreator: CreateRenderPlugin = Field( + title="Create Render", + default_factory=CreateRenderPlugin, + ) diff --git a/server_addon/aftereffects/server/settings/imageio.py b/server_addon/aftereffects/server/settings/imageio.py new file mode 100644 index 0000000000..55160ffd11 --- /dev/null +++ b/server_addon/aftereffects/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class AfterEffectsImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/aftereffects/server/settings/main.py b/server_addon/aftereffects/server/settings/main.py new file mode 100644 index 
0000000000..04d2e51cc9 --- /dev/null +++ b/server_addon/aftereffects/server/settings/main.py @@ -0,0 +1,56 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import AfterEffectsImageIOModel +from .creator_plugins import AfterEffectsCreatorPlugins +from .publish_plugins import ( + AfterEffectsPublishPlugins, + AE_PUBLISH_PLUGINS_DEFAULTS, +) +from .workfile_builder import WorkfileBuilderPlugin +from .templated_workfile_build import TemplatedWorkfileBuildModel + + +class AfterEffectsSettings(BaseSettingsModel): + """AfterEffects Project Settings.""" + + imageio: AfterEffectsImageIOModel = Field( + default_factory=AfterEffectsImageIOModel, + title="OCIO config" + ) + create: AfterEffectsCreatorPlugins = Field( + default_factory=AfterEffectsCreatorPlugins, + title="Creator plugins" + ) + publish: AfterEffectsPublishPlugins = Field( + default_factory=AfterEffectsPublishPlugins, + title="Publish plugins" + ) + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + templated_workfile_build: TemplatedWorkfileBuildModel = Field( + default_factory=TemplatedWorkfileBuildModel, + title="Templated Workfile Build Settings" + ) + + +DEFAULT_AFTEREFFECTS_SETTING = { + "create": { + "RenderCreator": { + "mark_for_review": True, + "defaults": [ + "Main" + ] + } + }, + "publish": AE_PUBLISH_PLUGINS_DEFAULTS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + }, + "templated_workfile_build": { + "profiles": [] + }, +} diff --git a/server_addon/aftereffects/server/settings/publish_plugins.py b/server_addon/aftereffects/server/settings/publish_plugins.py new file mode 100644 index 0000000000..78445d3223 --- /dev/null +++ b/server_addon/aftereffects/server/settings/publish_plugins.py @@ -0,0 +1,68 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CollectReviewPluginModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + + +class ValidateSceneSettingsModel(BaseSettingsModel): + """Validate naming of products and layers""" + + # _isGroup = True + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + skip_resolution_check: list[str] = Field( + default_factory=list, + title="Skip Resolution Check for Tasks", + ) + skip_timelines_check: list[str] = Field( + default_factory=list, + title="Skip Timeline Check for Tasks", + ) + + +class ValidateContainersModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class AfterEffectsPublishPlugins(BaseSettingsModel): + CollectReview: CollectReviewPluginModel = Field( + default_factory=CollectReviewPluginModel, + title="Collect Review", + ) + ValidateSceneSettings: ValidateSceneSettingsModel = Field( + default_factory=ValidateSceneSettingsModel, + title="Validate Scene Settings", + ) + ValidateContainers: ValidateContainersModel = Field( + default_factory=ValidateContainersModel, + title="Validate Containers", + ) + + +AE_PUBLISH_PLUGINS_DEFAULTS = { + "CollectReview": { + "enabled": True + }, + "ValidateSceneSettings": { + "enabled": True, + "optional": True, + "active": True, + "skip_resolution_check": [ + ".*" + ], + "skip_timelines_check": [ + ".*" + ] + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True, + } +} diff --git 
a/server_addon/aftereffects/server/settings/templated_workfile_build.py b/server_addon/aftereffects/server/settings/templated_workfile_build.py new file mode 100644 index 0000000000..e0245c8d06 --- /dev/null +++ b/server_addon/aftereffects/server/settings/templated_workfile_build.py @@ -0,0 +1,33 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + task_types_enum, +) + + +class TemplatedWorkfileProfileModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + path: str = Field( + title="Path to template" + ) + keep_placeholder: bool = Field( + False, + title="Keep placeholders") + create_first_version: bool = Field( + True, + title="Create first version" + ) + + +class TemplatedWorkfileBuildModel(BaseSettingsModel): + profiles: list[TemplatedWorkfileProfileModel] = Field( + default_factory=list + ) diff --git a/server_addon/aftereffects/server/settings/workfile_builder.py b/server_addon/aftereffects/server/settings/workfile_builder.py new file mode 100644 index 0000000000..d45d3f7f24 --- /dev/null +++ b/server_addon/aftereffects/server/settings/workfile_builder.py @@ -0,0 +1,25 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, MultiplatformPathModel + + +class CustomBuilderTemplate(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + ) + template_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel + ) + + +class WorkfileBuilderPlugin(BaseSettingsModel): + _title = "Workfile Builder" + create_first_version: bool = Field( + False, + title="Create first workfile" + ) + + custom_templates: list[CustomBuilderTemplate] = Field( + default_factory=list + ) diff --git a/server_addon/aftereffects/server/version.py b/server_addon/aftereffects/server/version.py new file mode 100644 index 0000000000..a242f0e757 --- /dev/null +++ b/server_addon/aftereffects/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.1" diff --git a/server_addon/applications/server/__init__.py b/server_addon/applications/server/__init__.py new file mode 100644 index 0000000000..e782e8a591 --- /dev/null +++ b/server_addon/applications/server/__init__.py @@ -0,0 +1,233 @@ +import os +import json +import copy + +from ayon_server.addons import BaseServerAddon, AddonLibrary +from ayon_server.lib.postgres import Postgres + +from .version import __version__ +from .settings import ApplicationsAddonSettings, DEFAULT_VALUES + +try: + import semver +except ImportError: + semver = None + + +def sort_versions(addon_versions, reverse=False): + if semver is None: + for addon_version in sorted(addon_versions, reverse=reverse): + yield addon_version + return + + version_objs = [] + invalid_versions = [] + for addon_version in addon_versions: + try: + version_objs.append( + (addon_version, semver.VersionInfo.parse(addon_version)) + ) + except ValueError: + invalid_versions.append(addon_version) + + valid_versions = [ + addon_version + for addon_version, _ in sorted(version_objs, key=lambda x: x[1]) + ] + sorted_versions = list(sorted(invalid_versions)) + valid_versions + if reverse: + sorted_versions = reversed(sorted_versions) + for addon_version in sorted_versions: + yield addon_version + + +def merge_groups(output, new_groups): + groups_by_name = { + o_group["name"]: 
o_group + for o_group in output + } + extend_groups = [] + for new_group in new_groups: + group_name = new_group["name"] + if group_name not in groups_by_name: + extend_groups.append(new_group) + continue + existing_group = groups_by_name[group_name] + existing_variants = existing_group["variants"] + existing_variants_by_name = { + variant["name"]: variant + for variant in existing_variants + } + for new_variant in new_group["variants"]: + if new_variant["name"] not in existing_variants_by_name: + existing_variants.append(new_variant) + + output.extend(extend_groups) + + +def get_enum_items_from_groups(groups): + label_by_name = {} + for group in groups: + group_name = group["name"] + group_label = group["label"] or group_name + for variant in group["variants"]: + variant_name = variant["name"] + if not variant_name: + continue + variant_label = variant["label"] or variant_name + full_name = f"{group_name}/{variant_name}" + full_label = f"{group_label} {variant_label}" + label_by_name[full_name] = full_label + + return [ + {"value": full_name, "label": label_by_name[full_name]} + for full_name in sorted(label_by_name) + ] + + +class ApplicationsAddon(BaseServerAddon): + name = "applications" + title = "Applications" + version = __version__ + settings_model = ApplicationsAddonSettings + + async def get_default_settings(self): + applications_path = os.path.join(self.addon_dir, "applications.json") + tools_path = os.path.join(self.addon_dir, "tools.json") + default_values = copy.deepcopy(DEFAULT_VALUES) + with open(applications_path, "r") as stream: + default_values.update(json.load(stream)) + + with open(tools_path, "r") as stream: + default_values.update(json.load(stream)) + + return self.get_settings_model()(**default_values) + + async def pre_setup(self): + """Make sure older version of addon use the new way of attributes.""" + + instance = AddonLibrary.getinstance() + app_defs = instance.data.get(self.name) + old_addon = app_defs.versions.get("0.1.0") + if old_addon is not None: + # Override 'create_applications_attribute' for older versions + # - avoid infinite server restart loop + old_addon.create_applications_attribute = ( + self.create_applications_attribute + ) + + async def setup(self): + need_restart = await self.create_applications_attribute() + if need_restart: + self.request_server_restart() + + async def create_applications_attribute(self) -> bool: + """Make sure there are required attributes which ftrack addon needs. + + Returns: + bool: 'True' if an attribute was created or updated. 
+ """ + + instance = AddonLibrary.getinstance() + app_defs = instance.data.get(self.name) + all_applications = [] + all_tools = [] + for addon_version in sort_versions( + app_defs.versions.keys(), reverse=True + ): + addon = app_defs.versions[addon_version] + for variant in ("production", "staging"): + settings_model = await addon.get_studio_settings(variant) + studio_settings = settings_model.dict() + application_settings = studio_settings["applications"] + app_groups = application_settings.pop("additional_apps") + for group_name, value in application_settings.items(): + value["name"] = group_name + app_groups.append(value) + merge_groups(all_applications, app_groups) + merge_groups(all_tools, studio_settings["tool_groups"]) + + query = "SELECT name, position, scope, data from public.attributes" + + apps_attrib_name = "applications" + tools_attrib_name = "tools" + + apps_enum = get_enum_items_from_groups(all_applications) + tools_enum = get_enum_items_from_groups(all_tools) + apps_attribute_data = { + "type": "list_of_strings", + "title": "Applications", + "enum": apps_enum + } + tools_attribute_data = { + "type": "list_of_strings", + "title": "Tools", + "enum": tools_enum + } + apps_scope = ["project"] + tools_scope = ["project", "folder", "task"] + + apps_match_position = None + apps_matches = False + tools_match_position = None + tools_matches = False + position = 1 + async for row in Postgres.iterate(query): + position += 1 + if row["name"] == apps_attrib_name: + # Check if scope is matching ftrack addon requirements + if ( + set(row["scope"]) == set(apps_scope) + and row["data"].get("enum") == apps_enum + ): + apps_matches = True + apps_match_position = row["position"] + + elif row["name"] == tools_attrib_name: + if ( + set(row["scope"]) == set(tools_scope) + and row["data"].get("enum") == tools_enum + ): + tools_matches = True + tools_match_position = row["position"] + + if apps_matches and tools_matches: + return False + + postgre_query = "\n".join(( + "INSERT INTO public.attributes", + " (name, position, scope, data)", + "VALUES", + " ($1, $2, $3, $4)", + "ON CONFLICT (name)", + "DO UPDATE SET", + " scope = $3,", + " data = $4", + )) + if not apps_matches: + # Reuse position from found attribute + if apps_match_position is None: + apps_match_position = position + position += 1 + + await Postgres.execute( + postgre_query, + apps_attrib_name, + apps_match_position, + apps_scope, + apps_attribute_data, + ) + + if not tools_matches: + if tools_match_position is None: + tools_match_position = position + position += 1 + + await Postgres.execute( + postgre_query, + tools_attrib_name, + tools_match_position, + tools_scope, + tools_attribute_data, + ) + return True diff --git a/server_addon/applications/server/applications.json b/server_addon/applications/server/applications.json new file mode 100644 index 0000000000..8e5b28623e --- /dev/null +++ b/server_addon/applications/server/applications.json @@ -0,0 +1,1123 @@ +{ + "applications": { + "maya": { + "enabled": true, + "label": "Maya", + "icon": "{}/app_icons/maya.png", + "host_name": "maya", + "environment": "{\n \"MAYA_DISABLE_CLIC_IPM\": \"Yes\",\n \"MAYA_DISABLE_CIP\": \"Yes\",\n \"MAYA_DISABLE_CER\": \"Yes\",\n \"PYMEL_SKIP_MEL_INIT\": \"Yes\",\n \"LC_ALL\": \"C\"\n}\n", + "variants": [ + { + "name": "2023", + "label": "2023", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2023\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2023/bin/maya" + ] + }, + "arguments": { + "windows": [], + 
"darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2023\"\n}", + "use_python_2": false + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2022\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2022/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2022\"\n}", + "use_python_2": false + }, + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2020\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2020/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2020\"\n}", + "use_python_2": true + }, + { + "name": "2019", + "label": "2019", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2019\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2019/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2019\"\n}", + "use_python_2": true + }, + { + "name": "2018", + "label": "2018", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2018\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2018/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2018\"\n}", + "use_python_2": true + } + ] + }, + "adsk_3dsmax": { + "enabled": true, + "label": "3ds Max", + "icon": "{}/app_icons/3dsmax.png", + "host_name": "max", + "environment": "{\n \"ADSK_3DSMAX_STARTUPSCRIPTS_ADDON_DIR\": \"{OPENPYPE_ROOT}/openpype/hosts/max/startup\"\n}", + "variants": [ + { + "name": "2023", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\3ds Max 2023\\3dsmax.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"3DSMAX_VERSION\": \"2023\"\n}" + } + ] + }, + "flame": { + "enabled": true, + "label": "Flame", + "icon": "{}/app_icons/flame.png", + "host_name": "flame", + "environment": "{\n \"FLAME_SCRIPT_DIRS\": {\n \"windows\": \"\",\n \"darwin\": \"\",\n \"linux\": \"\"\n },\n \"FLAME_WIRETAP_HOSTNAME\": \"\",\n \"FLAME_WIRETAP_VOLUME\": \"stonefs\",\n \"FLAME_WIRETAP_GROUP\": \"staff\"\n}", + "variants": [ + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [], + "darwin": [ + "/opt/Autodesk/flame_2021/bin/flame.app/Contents/MacOS/startApp" + ], + "linux": [ + "/opt/Autodesk/flame_2021/bin/startApplication" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"OPENPYPE_FLAME_PYTHON_EXEC\": \"/opt/Autodesk/python/2021/bin/python2.7\",\n \"OPENPYPE_FLAME_PYTHONPATH\": \"/opt/Autodesk/flame_2021/python\",\n \"OPENPYPE_WIRETAP_TOOLS\": \"/opt/Autodesk/wiretap/tools/2021\"\n}", + "use_python_2": true + }, + { + "name": "2021_1", + "label": "2021.1", + "executables": { + "windows": [], + "darwin": [ + "/opt/Autodesk/flame_2021.1/bin/flame.app/Contents/MacOS/startApp" + ], + "linux": [ + "/opt/Autodesk/flame_2021.1/bin/startApplication" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"OPENPYPE_FLAME_PYTHON_EXEC\": \"/opt/Autodesk/python/2021.1/bin/python2.7\",\n \"OPENPYPE_FLAME_PYTHONPATH\": 
\"/opt/Autodesk/flame_2021.1/python\",\n \"OPENPYPE_WIRETAP_TOOLS\": \"/opt/Autodesk/wiretap/tools/2021.1\"\n}", + "use_python_2": true + } + ] + }, + "nuke": { + "enabled": true, + "label": "Nuke", + "icon": "{}/app_icons/nuke.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/Nuke14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/Nuke13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/Nuke13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "nukeassist": { + "enabled": true, + "label": "Nuke Assist", + "icon": "{}/app_icons/nuke.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeAssist14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeAssist13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeAssist13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}" + } + ] + }, + "nukex": { + "enabled": true, + "label": "Nuke X", + "icon": "{}/app_icons/nukex.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeX14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}", + 
"use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeX13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeX13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}" + } + ] + }, + "nukestudio": { + "enabled": true, + "label": "Nuke Studio", + "icon": "{}/app_icons/nukestudio.png", + "host_name": "hiero", + "environment": "{\n \"WORKFILES_STARTUP\": \"0\",\n \"TAG_ASSETBUILD_STARTUP\": \"0\"\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeStudio14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeStudio13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeStudio13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}" + } + ] + }, + "hiero": { + "enabled": true, + "label": "Hiero", + "icon": "{}/app_icons/hiero.png", + "host_name": "hiero", + "environment": "{\n \"WORKFILES_STARTUP\": \"0\",\n \"TAG_ASSETBUILD_STARTUP\": \"0\"\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/Hiero14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/Hiero13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/Hiero13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": 
{ + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}" + } + ] + }, + "fusion": { + "enabled": true, + "label": "Fusion", + "icon": "{}/app_icons/fusion.png", + "host_name": "fusion", + "environment": "{\n \"FUSION_PYTHON3_HOME\": {\n \"windows\": \"{LOCALAPPDATA}/Programs/Python/Python36\",\n \"darwin\": \"~/Library/Python/3.6/bin\",\n \"linux\": \"/opt/Python/3.6/bin\"\n }\n}", + "variants": [ + { + "name": "17", + "label": "17", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 17\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "16", + "label": "16", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 16\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "9", + "label": "9", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 9\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "resolve": { + "enabled": true, + "label": "Resolve", + "icon": "{}/app_icons/resolve.png", + "host_name": "resolve", + "environment": "{\n \"RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR\": [],\n \"RESOLVE_PYTHON3_HOME\": {\n \"windows\": \"{LOCALAPPDATA}/Programs/Python/Python36\",\n \"darwin\": \"~/Library/Python/3.6/bin\",\n \"linux\": \"/opt/Python/3.6/bin\"\n }\n}", + "variants": [ + { + "name": "stable", + "label": "stable", + "executables": { + "windows": [ + "C:/Program Files/Blackmagic Design/DaVinci Resolve/Resolve.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "houdini": { + "enabled": true, + "label": "Houdini", + "icon": "{}/app_icons/houdini.png", + "host_name": "houdini", + "environment": "{}", + "variants": [ + { + "name": "18-5", + "label": "18.5", + "executables": { + "windows": [ + "C:\\Program Files\\Side Effects Software\\Houdini 18.5.499\\bin\\houdini.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + }, + { + "name": "18", + "label": "18", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + }, + { + "name": "17", + "label": "17", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + } + ] + }, + "blender": { + "enabled": true, + "label": "Blender", + "icon": "{}/app_icons/blender.png", + "host_name": "blender", + "environment": "{}", + "variants": [ + { + "name": "2-83", + "label": "2.83", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.83\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + }, + { + "name": "2-90", + "label": "2.90", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.90\\blender.exe" 
+ ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + }, + { + "name": "2-91", + "label": "2.91", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.91\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + } + ] + }, + "harmony": { + "enabled": true, + "label": "Harmony", + "icon": "{}/app_icons/harmony.png", + "host_name": "harmony", + "environment": "{\n \"AVALON_HARMONY_WORKFILES_ON_LAUNCH\": \"1\"\n}", + "variants": [ + { + "name": "21", + "label": "21", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 21 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 21 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "20", + "label": "20", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 20 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "17", + "label": "17", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 17 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 17 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "tvpaint": { + "enabled": true, + "label": "TVPaint", + "icon": "{}/app_icons/tvpaint.png", + "host_name": "tvpaint", + "environment": "{}", + "variants": [ + { + "name": "animation_11-64bits", + "label": "11 (64bits)", + "executables": { + "windows": [ + "C:\\Program Files\\TVPaint Developpement\\TVPaint Animation 11 (64bits)\\TVPaint Animation 11 (64bits).exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "animation_11-32bits", + "label": "11 (32bits)", + "executables": { + "windows": [ + "C:\\Program Files (x86)\\TVPaint Developpement\\TVPaint Animation 11 (32bits)\\TVPaint Animation 11 (32bits).exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "photoshop": { + "enabled": true, + "label": "Photoshop", + "icon": "{}/app_icons/photoshop.png", + "host_name": "photoshop", + "environment": "{\n \"AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH\": \"1\",\n \"WORKFILES_SAVE_AS\": \"Yes\"\n}", + "variants": [ + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2020\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [ + 
"C:\\Program Files\\Adobe\\Adobe Photoshop 2021\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2022\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "aftereffects": { + "enabled": true, + "label": "AfterEffects", + "icon": "{}/app_icons/aftereffects.png", + "host_name": "aftereffects", + "environment": "{\n \"AVALON_AFTEREFFECTS_WORKFILES_ON_LAUNCH\": \"1\",\n \"WORKFILES_SAVE_AS\": \"Yes\"\n}", + "variants": [ + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2020\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2021\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2022\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MULTIPROCESS\": \"No\"\n}" + } + ] + }, + "celaction": { + "enabled": true, + "label": "CelAction 2D", + "icon": "app_icons/celaction.png", + "host_name": "celaction", + "environment": "{\n \"CELACTION_TEMPLATE\": \"{OPENPYPE_REPOS_ROOT}/openpype/hosts/celaction/celaction_template_scene.scn\"\n}", + "variants": [ + { + "name": "local", + "label": "local", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "unreal": { + "enabled": true, + "label": "Unreal Editor", + "icon": "{}/app_icons/ue4.png", + "host_name": "unreal", + "environment": "{}", + "variants": [ + { + "name": "4-26", + "label": "4.26", + "executables": {}, + "arguments": {}, + "environment": "{}" + } + ] + }, + "djvview": { + "enabled": true, + "label": "DJV View", + "icon": "{}/app_icons/djvView.png", + "host_name": "", + "environment": "{}", + "variants": [ + { + "name": "1-1", + "label": "1.1", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "additional_apps": [] + } +} diff --git a/server_addon/applications/server/settings.py b/server_addon/applications/server/settings.py new file mode 100644 index 0000000000..fd481b6ce8 --- /dev/null +++ b/server_addon/applications/server/settings.py @@ -0,0 +1,201 @@ +import json +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names +from ayon_server.exceptions import BadRequestException + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError as exc: + print(exc) + success = False + + if not success: + raise BadRequestException( + 
"Environment's can't be parsed as json object" + ) + return value + + +class MultiplatformStrList(BaseSettingsModel): + windows: list[str] = Field(default_factory=list, title="Windows") + linux: list[str] = Field(default_factory=list, title="Linux") + darwin: list[str] = Field(default_factory=list, title="MacOS") + + +class AppVariant(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + executables: MultiplatformStrList = Field( + default_factory=MultiplatformStrList, title="Executables" + ) + arguments: MultiplatformStrList = Field( + default_factory=MultiplatformStrList, title="Arguments" + ) + environment: str = Field("{}", title="Environment", widget="textarea") + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + +class AppVariantWithPython(AppVariant): + use_python_2: bool = Field(False, title="Use Python 2") + + +class AppGroup(BaseSettingsModel): + enabled: bool = Field(True) + label: str = Field("", title="Label") + host_name: str = Field("", title="Host name") + icon: str = Field("", title="Icon") + environment: str = Field("{}", title="Environment", widget="textarea") + + variants: list[AppVariant] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class AppGroupWithPython(AppGroup): + variants: list[AppVariantWithPython] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + +class AdditionalAppGroup(BaseSettingsModel): + enabled: bool = Field(True) + name: str = Field("", title="Name") + label: str = Field("", title="Label") + host_name: str = Field("", title="Host name") + icon: str = Field("", title="Icon") + environment: str = Field("{}", title="Environment", widget="textarea") + + variants: list[AppVariantWithPython] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ToolVariantModel(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + host_names: list[str] = Field(default_factory=list, title="Hosts") + # TODO use applications enum if possible + app_variants: list[str] = Field(default_factory=list, title="Applications") + environment: str = Field("{}", title="Environments", widget="textarea") + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + +class ToolGroupModel(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + environment: str = Field("{}", title="Environments", widget="textarea") + variants: list[ToolVariantModel] = Field( + default_factory=ToolVariantModel + ) + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ApplicationsSettings(BaseSettingsModel): + """Applications settings""" + + maya: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk Maya") + adsk_3dsmax: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk 3ds Max") 
+ flame: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk Flame") + nuke: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke") + nukeassist: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke Assist") + nukex: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke X") + nukestudio: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke Studio") + hiero: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Hiero") + fusion: AppGroup = Field( + default_factory=AppGroupWithPython, title="Fusion") + resolve: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Resolve") + houdini: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Houdini") + blender: AppGroup = Field( + default_factory=AppGroupWithPython, title="Blender") + harmony: AppGroup = Field( + default_factory=AppGroupWithPython, title="Harmony") + tvpaint: AppGroup = Field( + default_factory=AppGroupWithPython, title="TVPaint") + photoshop: AppGroup = Field( + default_factory=AppGroupWithPython, title="Adobe Photoshop") + aftereffects: AppGroup = Field( + default_factory=AppGroupWithPython, title="Adobe After Effects") + celaction: AppGroup = Field( + default_factory=AppGroupWithPython, title="Celaction 2D") + unreal: AppGroup = Field( + default_factory=AppGroupWithPython, title="Unreal Editor") + additional_apps: list[AdditionalAppGroup] = Field( + default_factory=list, title="Additional Applications") + + @validator("additional_apps") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ApplicationsAddonSettings(BaseSettingsModel): + applications: ApplicationsSettings = Field( + default_factory=ApplicationsSettings, + title="Applications", + scope=["studio"] + ) + tool_groups: list[ToolGroupModel] = Field( + default_factory=list, + scope=["studio"] + ) + only_available: bool = Field( + True, title="Show only available applications") + + @validator("tool_groups") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "only_available": False +} diff --git a/server_addon/applications/server/tools.json b/server_addon/applications/server/tools.json new file mode 100644 index 0000000000..54bee11cf7 --- /dev/null +++ b/server_addon/applications/server/tools.json @@ -0,0 +1,55 @@ +{ + "tool_groups": [ + { + "environment": "{\n \"MTOA\": \"{STUDIO_SOFTWARE}/arnold/mtoa_{MAYA_VERSION}_{MTOA_VERSION}\",\n \"MAYA_RENDER_DESC_PATH\": \"{MTOA}\",\n \"MAYA_MODULE_PATH\": \"{MTOA}\",\n \"ARNOLD_PLUGIN_PATH\": \"{MTOA}/shaders\",\n \"MTOA_EXTENSIONS_PATH\": {\n \"darwin\": \"{MTOA}/extensions\",\n \"linux\": \"{MTOA}/extensions\",\n \"windows\": \"{MTOA}/extensions\"\n },\n \"MTOA_EXTENSIONS\": {\n \"darwin\": \"{MTOA}/extensions\",\n \"linux\": \"{MTOA}/extensions\",\n \"windows\": \"{MTOA}/extensions\"\n },\n \"DYLD_LIBRARY_PATH\": {\n \"darwin\": \"{MTOA}/bin\"\n },\n \"PATH\": {\n \"windows\": \"{PATH};{MTOA}/bin\"\n }\n}", + "name": "mtoa", + "label": "Autodesk Arnold", + "variants": [ + { + "host_names": [], + "app_variants": [], + "environment": "{\n \"MTOA_VERSION\": \"3.2\"\n}", + "name": "3-2", + "label": "3.2" + }, + { + "host_names": [], + "app_variants": [], + "environment": "{\n \"MTOA_VERSION\": \"3.1\"\n}", + "name": "3-1", + "label": "3.1" + } + ] + }, + { + "environment": "{}", + "name": "vray", + "label": "Chaos Group Vray", + "variants": [] + 
}, + { + "environment": "{}", + "name": "yeti", + "label": "Peregrine Labs Yeti", + "variants": [] + }, + { + "environment": "{}", + "name": "renderman", + "label": "Pixar Renderman", + "variants": [ + { + "host_names": [ + "maya" + ], + "app_variants": [ + "maya/2022" + ], + "environment": "{\n \"RFMTREE\": {\n \"windows\": \"C:\\\\Program Files\\\\Pixar\\\\RenderManForMaya-24.3\",\n \"darwin\": \"/Applications/Pixar/RenderManForMaya-24.3\",\n \"linux\": \"/opt/pixar/RenderManForMaya-24.3\"\n },\n \"RMANTREE\": {\n \"windows\": \"C:\\\\Program Files\\\\Pixar\\\\RenderManProServer-24.3\",\n \"darwin\": \"/Applications/Pixar/RenderManProServer-24.3\",\n \"linux\": \"/opt/pixar/RenderManProServer-24.3\"\n }\n}", + "name": "24-3-maya", + "label": "24.3 RFM" + } + ] + } + ] +} diff --git a/server_addon/applications/server/version.py b/server_addon/applications/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/applications/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/blender/server/__init__.py b/server_addon/blender/server/__init__.py new file mode 100644 index 0000000000..a7d6cb4400 --- /dev/null +++ b/server_addon/blender/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import BlenderSettings, DEFAULT_VALUES + + +class BlenderAddon(BaseServerAddon): + name = "blender" + title = "Blender" + version = __version__ + settings_model: Type[BlenderSettings] = BlenderSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/blender/server/settings/__init__.py b/server_addon/blender/server/settings/__init__.py new file mode 100644 index 0000000000..3d51e5c3e1 --- /dev/null +++ b/server_addon/blender/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + BlenderSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "BlenderSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/blender/server/settings/imageio.py b/server_addon/blender/server/settings/imageio.py new file mode 100644 index 0000000000..a6d3c5ff64 --- /dev/null +++ b/server_addon/blender/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class BlenderImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + 
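+    # (editorial sketch) File rules map rendered files to colorspaces by
+    # regex. Assuming the standard `re` module, a rule with
+    # pattern=".*(beauty).*" and ext="exr" would match like this:
+    #
+    #     import re
+    #     bool(re.search(".*(beauty).*", "sh010_beauty.0001.exr"))  # True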
file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/blender/server/settings/main.py b/server_addon/blender/server/settings/main.py new file mode 100644 index 0000000000..f6118d39cd --- /dev/null +++ b/server_addon/blender/server/settings/main.py @@ -0,0 +1,63 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + TemplateWorkfileBaseOptions, +) + +from .imageio import BlenderImageIOModel +from .publish_plugins import ( + PublishPuginsModel, + DEFAULT_BLENDER_PUBLISH_SETTINGS +) + + +class UnitScaleSettingsModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + apply_on_opening: bool = Field( + False, title="Apply on Opening Existing Files") + base_file_unit_scale: float = Field( + 1.0, title="Base File Unit Scale" + ) + + +class BlenderSettings(BaseSettingsModel): + unit_scale_settings: UnitScaleSettingsModel = Field( + default_factory=UnitScaleSettingsModel, + title="Set Unit Scale" + ) + set_resolution_startup: bool = Field( + True, + title="Set Resolution on Startup" + ) + set_frames_startup: bool = Field( + True, + title="Set Start/End Frames and FPS on Startup" + ) + imageio: BlenderImageIOModel = Field( + default_factory=BlenderImageIOModel, + title="Color Management (ImageIO)" + ) + workfile_builder: TemplateWorkfileBaseOptions = Field( + default_factory=TemplateWorkfileBaseOptions, + title="Workfile Builder" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish Plugins" + ) + + +DEFAULT_VALUES = { + "unit_scale_settings": { + "enabled": True, + "apply_on_opening": False, + "base_file_unit_scale": 0.01 + }, + "set_frames_startup": True, + "set_resolution_startup": True, + "publish": DEFAULT_BLENDER_PUBLISH_SETTINGS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + } +} diff --git a/server_addon/blender/server/settings/publish_plugins.py b/server_addon/blender/server/settings/publish_plugins.py new file mode 100644 index 0000000000..65dda78411 --- /dev/null +++ b/server_addon/blender/server/settings/publish_plugins.py @@ -0,0 +1,283 @@ +import json +from pydantic import Field, validator +from ayon_server.exceptions import BadRequestException +from ayon_server.settings import BaseSettingsModel + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class ValidatePluginModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ExtractBlendModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + families: list[str] = Field( + default_factory=list, + title="Families" + ) + + +class ExtractPlayblastModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + presets: str = Field("", title="Presets", widget="textarea") + + @validator("presets") + def validate_json(cls, value): + return validate_json_dict(value) + + +class PublishPuginsModel(BaseSettingsModel): + ValidateCameraZeroKeyframe: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, 
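+        # (editorial note) Every validator/extractor below reuses the same
+        # enabled/optional/active toggle pattern. A minimal sketch of the
+        # stored shape, assuming pydantic v1 `.dict()`:
+        #
+        #     ValidatePluginModel(optional=True, active=True).dict()
+        #     # -> {"enabled": True, "optional": True, "active": True}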
+ title="Validate Camera Zero Keyframe", + section="Validators" + ) + ValidateMeshHasUvs: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Mesh Has Uvs" + ) + ValidateMeshNoNegativeScale: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Mesh No Negative Scale" + ) + ValidateTransformZero: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Transform Zero" + ) + ValidateNoColonsInName: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate No Colons In Name" + ) + ExtractBlend: ExtractBlendModel = Field( + default_factory=ExtractBlendModel, + title="Extract Blend", + section="Extractors" + ) + ExtractFBX: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract FBX" + ) + ExtractABC: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract ABC" + ) + ExtractBlendAnimation: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Blend Animation" + ) + ExtractAnimationFBX: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Animation FBX" + ) + ExtractCamera: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Camera" + ) + ExtractCameraABC: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Camera as ABC" + ) + ExtractLayout: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Layout" + ) + ExtractThumbnail: ExtractPlayblastModel = Field( + default_factory=ExtractPlayblastModel, + title="Extract Thumbnail" + ) + ExtractPlayblast: ExtractPlayblastModel = Field( + default_factory=ExtractPlayblastModel, + title="Extract Playblast" + ) + + +DEFAULT_BLENDER_PUBLISH_SETTINGS = { + "ValidateCameraZeroKeyframe": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshHasUvs": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshNoNegativeScale": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateTransformZero": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoColonsInName": { + "enabled": True, + "optional": False, + "active": True + }, + "ExtractBlend": { + "enabled": True, + "optional": True, + "active": True, + "families": [ + "model", + "camera", + "rig", + "action", + "layout", + "blendScene" + ] + }, + "ExtractFBX": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractABC": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractBlendAnimation": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractAnimationFBX": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractCamera": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractCameraABC": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractLayout": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractThumbnail": { + "enabled": True, + "optional": True, + "active": True, + "presets": json.dumps( + { + "model": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": False, + "show_shadows": False, + "show_cavity": True + }, + "overlay": { + "show_overlays": False + } + } + }, + 
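+                # (editorial note) The presets are dumped to JSON text because
+                # ExtractPlayblastModel.presets is a plain textarea validated
+                # by validate_json_dict above; json.dumps only pre-fills it.
+                # The "model" preset above and the "rig" preset below differ
+                # only in shading and overlay toggles.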
"rig": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": True, + "show_shadows": False, + "show_cavity": False + }, + "overlay": { + "show_overlays": True, + "show_ortho_grid": False, + "show_floor": False, + "show_axis_x": False, + "show_axis_y": False, + "show_axis_z": False, + "show_text": False, + "show_stats": False, + "show_cursor": False, + "show_annotation": False, + "show_extras": False, + "show_relationship_lines": False, + "show_outline_selected": False, + "show_motion_paths": False, + "show_object_origins": False, + "show_bones": True + } + } + } + }, + indent=4, + ) + }, + "ExtractPlayblast": { + "enabled": True, + "optional": True, + "active": True, + "presets": json.dumps( + { + "default": { + "image_settings": { + "file_format": "PNG", + "color_mode": "RGB", + "color_depth": "8", + "compression": 15 + }, + "display_options": { + "shading": { + "type": "MATERIAL", + "render_pass": "COMBINED" + }, + "overlay": { + "show_overlays": False + } + } + } + }, + indent=4 + ) + } +} diff --git a/server_addon/blender/server/version.py b/server_addon/blender/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/blender/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/celaction/server/__init__.py b/server_addon/celaction/server/__init__.py new file mode 100644 index 0000000000..90d3dbaa01 --- /dev/null +++ b/server_addon/celaction/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import CelActionSettings, DEFAULT_VALUES + + +class CelActionAddon(BaseServerAddon): + name = "celaction" + title = "CelAction" + version = __version__ + settings_model: Type[CelActionSettings] = CelActionSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/celaction/server/imageio.py b/server_addon/celaction/server/imageio.py new file mode 100644 index 0000000000..72da441528 --- /dev/null +++ b/server_addon/celaction/server/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class CelActionImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + 
title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/celaction/server/settings.py b/server_addon/celaction/server/settings.py new file mode 100644 index 0000000000..68d1d2dc31 --- /dev/null +++ b/server_addon/celaction/server/settings.py @@ -0,0 +1,92 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from .imageio import CelActionImageIOModel + + +class CollectRenderPathModel(BaseSettingsModel): + output_extension: str = Field( + "", + title="Output render file extension" + ) + anatomy_template_key_render_files: str = Field( + "", + title="Anatomy template key: render files" + ) + anatomy_template_key_metadata: str = Field( + "", + title="Anatomy template key: metadata job file" + ) + + +def _workfile_submit_overrides(): + return [ + { + "value": "render_chunk", + "label": "Pass chunk size" + }, + { + "value": "frame_range", + "label": "Pass frame range" + }, + { + "value": "resolution", + "label": "Pass resolution" + } + ] + + +class WorkfileModel(BaseSettingsModel): + submission_overrides: list[str] = Field( + default_factory=list, + title="Submission workfile overrides", + enum_resolver=_workfile_submit_overrides + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectRenderPath: CollectRenderPathModel = Field( + default_factory=CollectRenderPathModel, + title="Collect Render Path" + ) + + +class CelActionSettings(BaseSettingsModel): + imageio: CelActionImageIOModel = Field( + default_factory=CelActionImageIOModel, + title="Color Management (ImageIO)" + ) + workfile: WorkfileModel = Field( + title="Workfile" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "imageio": { + "ocio_config": { + "enabled": False, + "filepath": [] + }, + "file_rules": { + "enabled": False, + "rules": [] + } + }, + "workfile": { + "submission_overrides": [ + "render_chunk", + "frame_range", + "resolution" + ] + }, + "publish": { + "CollectRenderPath": { + "output_extension": "png", + "anatomy_template_key_render_files": "render", + "anatomy_template_key_metadata": "render" + } + } +} diff --git a/server_addon/celaction/server/version.py b/server_addon/celaction/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/celaction/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/clockify/server/__init__.py b/server_addon/clockify/server/__init__.py new file mode 100644 index 0000000000..0fa453fdf4 --- /dev/null +++ b/server_addon/clockify/server/__init__.py @@ -0,0 +1,15 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import ClockifySettings + + +class ClockifyAddon(BaseServerAddon): + name = "clockify" + title = "Clockify" + version = __version__ + settings_model: Type[ClockifySettings] = ClockifySettings + frontend_scopes = {} + services = {} diff --git a/server_addon/clockify/server/settings.py b/server_addon/clockify/server/settings.py new file mode 100644 index 0000000000..9067cd4243 --- /dev/null +++ b/server_addon/clockify/server/settings.py @@ -0,0 +1,10 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ClockifySettings(BaseSettingsModel): + workspace_name: str = Field( + "", + title="Workspace name", + scope=["studio"] + ) diff --git a/server_addon/clockify/server/version.py 
b/server_addon/clockify/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/clockify/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/core/server/__init__.py b/server_addon/core/server/__init__.py new file mode 100644 index 0000000000..4de2b038a5 --- /dev/null +++ b/server_addon/core/server/__init__.py @@ -0,0 +1,15 @@ +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import CoreSettings, DEFAULT_VALUES + + +class CoreAddon(BaseServerAddon): + name = "core" + title = "Core" + version = __version__ + settings_model = CoreSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/core/server/settings/__init__.py b/server_addon/core/server/settings/__init__.py new file mode 100644 index 0000000000..527a2bdc0c --- /dev/null +++ b/server_addon/core/server/settings/__init__.py @@ -0,0 +1,7 @@ +from .main import CoreSettings, DEFAULT_VALUES + + +__all__ = ( + "CoreSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/core/server/settings/main.py b/server_addon/core/server/settings/main.py new file mode 100644 index 0000000000..d19d732e71 --- /dev/null +++ b/server_addon/core/server/settings/main.py @@ -0,0 +1,160 @@ +import json +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathListModel, + ensure_unique_names, +) +from ayon_server.exceptions import BadRequestException + +from .publish_plugins import PublishPuginsModel, DEFAULT_PUBLISH_VALUES +from .tools import GlobalToolsModel, DEFAULT_TOOLS_VALUES + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class CoreImageIOFileRulesModel(BaseSettingsModel): + activate_global_file_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class CoreImageIOConfigModel(BaseSettingsModel): + filepath: list[str] = Field(default_factory=list, title="Config path") + + +class CoreImageIOBaseModel(BaseSettingsModel): + activate_global_color_management: bool = Field( + False, + title="Override global OCIO config" + ) + ocio_config: CoreImageIOConfigModel = Field( + default_factory=CoreImageIOConfigModel, title="OCIO config" + ) + file_rules: CoreImageIOFileRulesModel = Field( + default_factory=CoreImageIOFileRulesModel, title="File Rules" + ) + + +class CoreSettings(BaseSettingsModel): + studio_name: str = Field("", title="Studio name", scope=["studio"]) + studio_code: str = Field("", title="Studio code", scope=["studio"]) + environments: str = Field( + "{}", + title="Global environment variables", + widget="textarea", + scope=["studio"], + ) + tools: GlobalToolsModel = Field( + default_factory=GlobalToolsModel, + title="Tools" + ) + imageio: CoreImageIOBaseModel = Field( + default_factory=CoreImageIOBaseModel, + title="Color Management (ImageIO)" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins" + ) + project_plugins: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Additional Project 
Plugin Paths", + ) + project_folder_structure: str = Field( + "{}", + widget="textarea", + title="Project folder structure", + section="---" + ) + project_environments: str = Field( + "{}", + widget="textarea", + title="Project environments", + section="---" + ) + + @validator( + "environments", + "project_folder_structure", + "project_environments") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +DEFAULT_VALUES = { + "imageio": { + "activate_global_color_management": False, + "ocio_config": { + "filepath": [ + "{BUILTIN_OCIO_ROOT}/aces_1.2/config.ocio", + "{BUILTIN_OCIO_ROOT}/nuke-default/config.ocio" + ] + }, + "file_rules": { + "activate_global_file_rules": False, + "rules": [ + { + "name": "example", + "pattern": ".*(beauty).*", + "colorspace": "ACES - ACEScg", + "ext": "exr" + } + ] + } + }, + "studio_name": "", + "studio_code": "", + "environments": "{}", + "tools": DEFAULT_TOOLS_VALUES, + "publish": DEFAULT_PUBLISH_VALUES, + "project_folder_structure": json.dumps({ + "__project_root__": { + "prod": {}, + "resources": { + "footage": { + "plates": {}, + "offline": {} + }, + "audio": {}, + "art_dept": {} + }, + "editorial": {}, + "assets": { + "characters": {}, + "locations": {} + }, + "shots": {} + } + }, indent=4), + "project_plugins": { + "windows": [], + "darwin": [], + "linux": [] + }, + "project_environments": "{}" +} diff --git a/server_addon/core/server/settings/publish_plugins.py b/server_addon/core/server/settings/publish_plugins.py new file mode 100644 index 0000000000..c012312579 --- /dev/null +++ b/server_addon/core/server/settings/publish_plugins.py @@ -0,0 +1,959 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + normalize_name, + ensure_unique_names, + task_types_enum, +) + +from ayon_server.types import ColorRGBA_uint8 + + +class ValidateBaseModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class CollectAnatomyInstanceDataModel(BaseSettingsModel): + _isGroup = True + follow_workfile_version: bool = Field( + True, title="Collect Anatomy Instance Data" + ) + + +class CollectAudioModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + audio_product_name: str = Field( + "", title="Name of audio variant" + ) + + +class CollectSceneVersionModel(BaseSettingsModel): + _isGroup = True + hosts: list[str] = Field( + default_factory=list, + title="Host names" + ) + skip_hosts_headless_publish: list[str] = Field( + default_factory=list, + title="Skip for host if headless publish" + ) + + +class CollectCommentPIModel(BaseSettingsModel): + enabled: bool = Field(True) + families: list[str] = Field(default_factory=list, title="Families") + + +class CollectFramesFixDefModel(BaseSettingsModel): + enabled: bool = Field(True) + rewrite_version_enable: bool = Field( + True, + title="Show 'Rewrite latest version' toggle" + ) + + +class ValidateIntentProfile(BaseSettingsModel): + _layout = "expanded" + hosts: list[str] = Field(default_factory=list, title="Host names") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: 
list[str] = Field(default_factory=list, title="Task names") + # TODO This was 'validate' in v3 + validate_intent: bool = Field(True, title="Validate") + + +class ValidateIntentModel(BaseSettingsModel): + """Validate if Publishing intent was selected. + + It is possible to disable validation for specific publishing context + with profiles. + """ + + _isGroup = True + enabled: bool = Field(False) + profiles: list[ValidateIntentProfile] = Field(default_factory=list) + + +class ExtractThumbnailFFmpegModel(BaseSettingsModel): + _layout = "expanded" + input: list[str] = Field( + default_factory=list, + title="FFmpeg input arguments" + ) + output: list[str] = Field( + default_factory=list, + title="FFmpeg input arguments" + ) + + +class ExtractThumbnailModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + ffmpeg_args: ExtractThumbnailFFmpegModel = Field( + default_factory=ExtractThumbnailFFmpegModel + ) + + +def _extract_oiio_transcoding_type(): + return [ + {"value": "colorspace", "label": "Use Colorspace"}, + {"value": "display", "label": "Use Display&View"} + ] + + +class OIIOToolArgumentsModel(BaseSettingsModel): + additional_command_args: list[str] = Field( + default_factory=list, title="Arguments") + + +class ExtractOIIOTranscodeOutputModel(BaseSettingsModel): + extension: str = Field("", title="Extension") + transcoding_type: str = Field( + "colorspace", + title="Transcoding type", + enum_resolver=_extract_oiio_transcoding_type + ) + colorspace: str = Field("", title="Colorspace") + display: str = Field("", title="Display") + view: str = Field("", title="View") + oiiotool_args: OIIOToolArgumentsModel = Field( + default_factory=OIIOToolArgumentsModel, + title="OIIOtool arguments") + + tags: list[str] = Field(default_factory=list, title="Tags") + custom_tags: list[str] = Field(default_factory=list, title="Custom Tags") + + +class ExtractOIIOTranscodeProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Host names" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names" + ) + delete_original: bool = Field( + True, + title="Delete Original Representation" + ) + outputs: list[ExtractOIIOTranscodeOutputModel] = Field( + default_factory=list, + title="Output Definitions", + ) + + +class ExtractOIIOTranscodeModel(BaseSettingsModel): + enabled: bool = Field(True) + profiles: list[ExtractOIIOTranscodeProfileModel] = Field( + default_factory=list, title="Profiles" + ) + + +# --- [START] Extract Review --- +class ExtractReviewFFmpegModel(BaseSettingsModel): + video_filters: list[str] = Field( + default_factory=list, + title="Video filters" + ) + audio_filters: list[str] = Field( + default_factory=list, + title="Audio filters" + ) + input: list[str] = Field( + default_factory=list, + title="Input arguments" + ) + output: list[str] = Field( + default_factory=list, + title="Output arguments" + ) + + +def extract_review_filter_enum(): + return [ + { + "value": "everytime", + "label": "Always" + }, + { + "value": "single_frame", + "label": "Only if input has 1 image frame" + }, + { + "value": "multi_frame", + "label": "Only if input is video or sequence of frames" + } + ] + + +class ExtractReviewFilterModel(BaseSettingsModel): + 
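+    # (editorial sketch) This filter decides whether an output definition is
+    # used for a given instance: "single_frame_filter" applies the enum
+    # above, so a "single_frame" output runs only for one-frame inputs and
+    # a "multi_frame" output only for videos or frame sequences.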
families: list[str] = Field(default_factory=list, title="Families")
+    product_names: list[str] = Field(
+        default_factory=list, title="Product names")
+    custom_tags: list[str] = Field(default_factory=list, title="Custom Tags")
+    single_frame_filter: str = Field(
+        "everytime",
+        description=(
+            "Use output always / only if input is 1 frame"
+            " image / only if has 2+ frames or is video"
+        ),
+        enum_resolver=extract_review_filter_enum
+    )
+
+
+class ExtractReviewLetterBox(BaseSettingsModel):
+    enabled: bool = Field(True)
+    ratio: float = Field(
+        0.0,
+        title="Ratio",
+        ge=0.0,
+        le=10000.0
+    )
+    fill_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 0.0),
+        title="Fill Color"
+    )
+    line_thickness: int = Field(
+        0,
+        title="Line Thickness",
+        ge=0,
+        le=1000
+    )
+    line_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 0.0),
+        title="Line Color"
+    )
+
+
+class ExtractReviewOutputDefModel(BaseSettingsModel):
+    _layout = "expanded"
+    name: str = Field("", title="Name")
+    ext: str = Field("", title="Output extension")
+    # TODO use some different source of tags
+    tags: list[str] = Field(default_factory=list, title="Tags")
+    burnins: list[str] = Field(
+        default_factory=list, title="Link to a burnin by name"
+    )
+    ffmpeg_args: ExtractReviewFFmpegModel = Field(
+        default_factory=ExtractReviewFFmpegModel,
+        title="FFmpeg arguments"
+    )
+    filter: ExtractReviewFilterModel = Field(
+        default_factory=ExtractReviewFilterModel,
+        title="Additional output filtering"
+    )
+    overscan_crop: str = Field(
+        "",
+        title="Overscan crop",
+        description=(
+            "Crop input overscan. See the documentation for more information."
+        )
+    )
+    overscan_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 0.0),
+        title="Overscan color",
+        description=(
+            "Overscan color is used when input aspect ratio is not"
+            " the same as output aspect ratio."
+        )
+    )
+    width: int = Field(
+        0,
+        ge=0,
+        le=100000,
+        title="Output width",
+        description=(
+            "Width and Height must both be set to a value higher"
+            " than 0, else the source resolution is used."
+        )
+    )
+    height: int = Field(
+        0,
+        title="Output height",
+        ge=0,
+        le=100000,
+    )
+    scale_pixel_aspect: bool = Field(
+        True,
+        title="Scale pixel aspect",
+        description=(
+            "Rescale input when its pixel aspect ratio is not 1."
+            " Useful for anamorphic reviews."
+        )
+    )
+    bg_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 0.0),
+        description=(
+            "Background color is used only when input has transparency"
+            " and Alpha is higher than 0."
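+            # (editorial note) ColorRGBA_uint8 values in these models are
+            # (R, G, B, A) tuples with 0-255 integer channels plus a float
+            # alpha in the 0.0-1.0 range, e.g. (255, 0, 0, 1.0) for opaque red.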
+        ),
+        title="Background color",
+    )
+    letter_box: ExtractReviewLetterBox = Field(
+        default_factory=ExtractReviewLetterBox,
+        title="Letter Box"
+    )
+
+    @validator("name")
+    def validate_name(cls, value):
+        """Ensure name does not contain weird characters"""
+        return normalize_name(value)
+
+
+class ExtractReviewProfileModel(BaseSettingsModel):
+    _layout = "expanded"
+    product_types: list[str] = Field(
+        default_factory=list, title="Product types"
+    )
+    # TODO use hosts enum
+    hosts: list[str] = Field(
+        default_factory=list, title="Host names"
+    )
+    outputs: list[ExtractReviewOutputDefModel] = Field(
+        default_factory=list, title="Output Definitions"
+    )
+
+    @validator("outputs")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ExtractReviewModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    profiles: list[ExtractReviewProfileModel] = Field(
+        default_factory=list,
+        title="Profiles"
+    )
+# --- [END] Extract Review ---
+
+
+# --- [START] Extract Burnin ---
+class ExtractBurninOptionsModel(BaseSettingsModel):
+    font_size: int = Field(0, ge=0, title="Font size")
+    font_color: ColorRGBA_uint8 = Field(
+        (255, 255, 255, 1.0),
+        title="Font color"
+    )
+    bg_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 1.0),
+        title="Background color"
+    )
+    x_offset: int = Field(0, title="X Offset")
+    y_offset: int = Field(0, title="Y Offset")
+    bg_padding: int = Field(0, title="Padding around text")
+    font_filepath: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Font file path"
+    )
+
+
+class ExtractBurninDefFilter(BaseSettingsModel):
+    families: list[str] = Field(
+        default_factory=list,
+        title="Families"
+    )
+    tags: list[str] = Field(
+        default_factory=list,
+        title="Tags"
+    )
+
+
+class ExtractBurninDef(BaseSettingsModel):
+    _isGroup = True
+    _layout = "expanded"
+    name: str = Field("")
+    TOP_LEFT: str = Field("", title="Top Left")
+    TOP_CENTERED: str = Field("", title="Top Centered")
+    TOP_RIGHT: str = Field("", title="Top Right")
+    BOTTOM_LEFT: str = Field("", title="Bottom Left")
+    BOTTOM_CENTERED: str = Field("", title="Bottom Centered")
+    BOTTOM_RIGHT: str = Field("", title="Bottom Right")
+    filter: ExtractBurninDefFilter = Field(
+        default_factory=ExtractBurninDefFilter,
+        title="Additional filtering"
+    )
+
+    @validator("name")
+    def validate_name(cls, value):
+        """Ensure name does not contain weird characters"""
+        return normalize_name(value)
+
+
+class ExtractBurninProfile(BaseSettingsModel):
+    _layout = "expanded"
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Host names"
+    )
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    task_names: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    product_names: list[str] = Field(
+        default_factory=list,
+        title="Product names"
+    )
+    burnins: list[ExtractBurninDef] = Field(
+        default_factory=list,
+        title="Burnins"
+    )
+
+    @validator("burnins")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ExtractBurninModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    options: ExtractBurninOptionsModel = Field(
+        default_factory=ExtractBurninOptionsModel,
+        title="Burnin formatting options"
+    )
+    profiles: list[ExtractBurninProfile] = Field(
+        default_factory=list,
+        title="Profiles"
+    )
+# --- [END]
Extract Burnin --- + + +class PreIntegrateThumbnailsProfile(BaseSettingsModel): + _isGroup = True + product_types: list[str] = Field( + default_factory=list, + title="Product types", + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts", + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names", + ) + integrate_thumbnail: bool = Field(True) + + +class PreIntegrateThumbnailsModel(BaseSettingsModel): + """Explicitly set if Thumbnail representation should be integrated. + + If no matching profile set, existing state from Host implementation + is kept. + """ + + _isGroup = True + enabled: bool = Field(True) + integrate_profiles: list[PreIntegrateThumbnailsProfile] = Field( + default_factory=list, + title="Integrate profiles" + ) + + +class IntegrateProductGroupProfile(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + template: str = Field("", title="Template") + + +class IntegrateProductGroupModel(BaseSettingsModel): + """Group published products by filtering logic. + + Set all published instances as a part of specific group named according + to 'Template'. + + Implemented all variants of placeholders '{task}', '{product[type]}', + '{host}', '{product[name]}', '{renderlayer}'. + """ + + _isGroup = True + product_grouping_profiles: list[IntegrateProductGroupProfile] = Field( + default_factory=list, + title="Product group profiles" + ) + + +class IntegrateANProductGroupProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field( + default_factory=list, + title="Task names" + ) + template: str = Field("", title="Template") + + +class IntegrateANTemplateNameProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field( + default_factory=list, + title="Task names" + ) + template_name: str = Field("", title="Template name") + + +class IntegrateHeroTemplateNameProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + template_name: str = Field("", title="Template name") + + +class IntegrateHeroVersionModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + families: list[str] = Field(default_factory=list, title="Families") + # TODO remove when removed from 
client code + template_name_profiles: list[IntegrateHeroTemplateNameProfileModel] = ( + Field( + default_factory=list, + title="Template name profiles" + ) + ) + + +class CleanUpModel(BaseSettingsModel): + _isGroup = True + paterns: list[str] = Field( + default_factory=list, + title="Patterns (regex)" + ) + remove_temp_renders: bool = Field(False, title="Remove Temp renders") + + +class CleanUpFarmModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + + +class PublishPuginsModel(BaseSettingsModel): + CollectAnatomyInstanceData: CollectAnatomyInstanceDataModel = Field( + default_factory=CollectAnatomyInstanceDataModel, + title="Collect Anatomy Instance Data" + ) + CollectAudio: CollectAudioModel = Field( + default_factory=CollectAudioModel, + title="Collect Audio" + ) + CollectSceneVersion: CollectSceneVersionModel = Field( + default_factory=CollectSceneVersionModel, + title="Collect Version from Workfile" + ) + collect_comment_per_instance: CollectCommentPIModel = Field( + default_factory=CollectCommentPIModel, + title="Collect comment per instance", + ) + CollectFramesFixDef: CollectFramesFixDefModel = Field( + default_factory=CollectFramesFixDefModel, + title="Collect Frames to Fix", + ) + ValidateEditorialAssetName: ValidateBaseModel = Field( + default_factory=ValidateBaseModel, + title="Validate Editorial Asset Name" + ) + ValidateVersion: ValidateBaseModel = Field( + default_factory=ValidateBaseModel, + title="Validate Version" + ) + ValidateIntent: ValidateIntentModel = Field( + default_factory=ValidateIntentModel, + title="Validate Intent" + ) + ExtractThumbnail: ExtractThumbnailModel = Field( + default_factory=ExtractThumbnailModel, + title="Extract Thumbnail" + ) + ExtractOIIOTranscode: ExtractOIIOTranscodeModel = Field( + default_factory=ExtractOIIOTranscodeModel, + title="Extract OIIO Transcode" + ) + ExtractReview: ExtractReviewModel = Field( + default_factory=ExtractReviewModel, + title="Extract Review" + ) + ExtractBurnin: ExtractBurninModel = Field( + default_factory=ExtractBurninModel, + title="Extract Burnin" + ) + PreIntegrateThumbnails: PreIntegrateThumbnailsModel = Field( + default_factory=PreIntegrateThumbnailsModel, + title="Override Integrate Thumbnail Representations" + ) + IntegrateProductGroup: IntegrateProductGroupModel = Field( + default_factory=IntegrateProductGroupModel, + title="Integrate Product Group" + ) + IntegrateHeroVersion: IntegrateHeroVersionModel = Field( + default_factory=IntegrateHeroVersionModel, + title="Integrate Hero Version" + ) + CleanUp: CleanUpModel = Field( + default_factory=CleanUpModel, + title="Clean Up" + ) + CleanUpFarm: CleanUpFarmModel = Field( + default_factory=CleanUpFarmModel, + title="Clean Up Farm" + ) + + +DEFAULT_PUBLISH_VALUES = { + "CollectAnatomyInstanceData": { + "follow_workfile_version": False + }, + "CollectAudio": { + "enabled": False, + "audio_product_name": "audioMain" + }, + "CollectSceneVersion": { + "hosts": [ + "aftereffects", + "blender", + "celaction", + "fusion", + "harmony", + "hiero", + "houdini", + "maya", + "nuke", + "photoshop", + "resolve", + "tvpaint" + ], + "skip_hosts_headless_publish": [] + }, + "collect_comment_per_instance": { + "enabled": False, + "families": [] + }, + "CollectFramesFixDef": { + "enabled": True, + "rewrite_version_enable": True + }, + "ValidateEditorialAssetName": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVersion": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateIntent": { + "enabled": False, + 
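+        # (editorial sketch) These defaults reach the server through the
+        # addon's get_default_settings(), roughly:
+        #
+        #     settings_model_cls = self.get_settings_model()
+        #     return settings_model_cls(**DEFAULT_VALUES)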
"profiles": [] + }, + "ExtractThumbnail": { + "enabled": True, + "ffmpeg_args": { + "input": [ + "-apply_trc gamma22" + ], + "output": [] + } + }, + "ExtractOIIOTranscode": { + "enabled": True, + "profiles": [] + }, + "ExtractReview": { + "enabled": True, + "profiles": [ + { + "product_types": [], + "hosts": [], + "outputs": [ + { + "name": "png", + "ext": "png", + "tags": [ + "ftrackreview", + "kitsureview" + ], + "burnins": [], + "ffmpeg_args": { + "video_filters": [], + "audio_filters": [], + "input": [], + "output": [] + }, + "filter": { + "families": [ + "render", + "review", + "ftrack" + ], + "product_names": [], + "custom_tags": [], + "single_frame_filter": "single_frame" + }, + "overscan_crop": "", + "overscan_color": [0, 0, 0, 1.0], + "width": 1920, + "height": 1080, + "scale_pixel_aspect": True, + "bg_color": [0, 0, 0, 0.0], + "letter_box": { + "enabled": False, + "ratio": 0.0, + "fill_color": [0, 0, 0, 1.0], + "line_thickness": 0, + "line_color": [255, 0, 0, 1.0] + } + }, + { + "name": "h264", + "ext": "mp4", + "tags": [ + "burnin", + "ftrackreview", + "kitsureview" + ], + "burnins": [], + "ffmpeg_args": { + "video_filters": [], + "audio_filters": [], + "input": [ + "-apply_trc gamma22" + ], + "output": [ + "-pix_fmt yuv420p", + "-crf 18", + "-intra" + ] + }, + "filter": { + "families": [ + "render", + "review", + "ftrack" + ], + "product_names": [], + "custom_tags": [], + "single_frame_filter": "multi_frame" + }, + "overscan_crop": "", + "overscan_color": [0, 0, 0, 1.0], + "width": 0, + "height": 0, + "scale_pixel_aspect": True, + "bg_color": [0, 0, 0, 0.0], + "letter_box": { + "enabled": False, + "ratio": 0.0, + "fill_color": [0, 0, 0, 1.0], + "line_thickness": 0, + "line_color": [255, 0, 0, 1.0] + } + } + ] + } + ] + }, + "ExtractBurnin": { + "enabled": True, + "options": { + "font_size": 42, + "font_color": [255, 255, 255, 1.0], + "bg_color": [0, 0, 0, 0.5], + "x_offset": 5, + "y_offset": 5, + "bg_padding": 5, + "font_filepath": { + "windows": "", + "darwin": "", + "linux": "" + } + }, + "profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "task_names": [], + "product_names": [], + "burnins": [ + { + "name": "burnin", + "TOP_LEFT": "{yy}-{mm}-{dd}", + "TOP_CENTERED": "", + "TOP_RIGHT": "{anatomy[version]}", + "BOTTOM_LEFT": "{username}", + "BOTTOM_CENTERED": "{folder[name]}", + "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}", + "filter": { + "families": [], + "tags": [] + } + }, + ] + }, + { + "product_types": ["review"], + "hosts": [ + "maya", + "houdini", + "max" + ], + "task_types": [], + "task_names": [], + "product_names": [], + "burnins": [ + { + "name": "focal_length_burnin", + "TOP_LEFT": "{yy}-{mm}-{dd}", + "TOP_CENTERED": "{focalLength:.2f} mm", + "TOP_RIGHT": "{anatomy[version]}", + "BOTTOM_LEFT": "{username}", + "BOTTOM_CENTERED": "{folder[name]}", + "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}", + "filter": { + "families": [], + "tags": [] + } + } + ] + } + ] + }, + "PreIntegrateThumbnails": { + "enabled": True, + "integrate_profiles": [] + }, + "IntegrateProductGroup": { + "product_grouping_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "" + } + ] + }, + "IntegrateHeroVersion": { + "enabled": True, + "optional": True, + "active": True, + "families": [ + "model", + "rig", + "look", + "pointcache", + "animation", + "setdress", + "layout", + "mayaScene", + "simpleUnrealTexture" + ], + "template_name_profiles": [ + { + "product_types": [ + 
"simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "simpleUnrealTextureHero" + } + ] + }, + "CleanUp": { + "paterns": [], + "remove_temp_renders": False + }, + "CleanUpFarm": { + "enabled": False + } +} diff --git a/server_addon/core/server/settings/tools.py b/server_addon/core/server/settings/tools.py new file mode 100644 index 0000000000..7befc795e4 --- /dev/null +++ b/server_addon/core/server/settings/tools.py @@ -0,0 +1,506 @@ +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + normalize_name, + ensure_unique_names, + task_types_enum, +) + + +class ProductTypeSmartSelectModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Product type") + task_names: list[str] = Field(default_factory=list, title="Task names") + + @validator("name") + def normalize_value(cls, value): + return normalize_name(value) + + +class ProductNameProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + template: str = Field("", title="Template") + + +class CreatorToolModel(BaseSettingsModel): + # TODO this was dynamic dictionary '{name: task_names}' + product_types_smart_select: list[ProductTypeSmartSelectModel] = Field( + default_factory=list, + title="Create Smart Select" + ) + product_name_profiles: list[ProductNameProfile] = Field( + default_factory=list, + title="Product name profiles" + ) + + @validator("product_types_smart_select") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class WorkfileTemplateProfile(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + # TODO this was using project anatomy template name + workfile_template: str = Field("", title="Workfile template") + + +class LastWorkfileOnStartupProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + enabled: bool = Field(True, title="Enabled") + use_last_published_workfile: bool = Field( + True, title="Use last published workfile" + ) + + +class WorkfilesToolOnStartupProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + enabled: bool = Field(True, title="Enabled") + + +class ExtraWorkFoldersProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = 
Field(default_factory=list, title="Task names") + folders: list[str] = Field(default_factory=list, title="Folders") + + +class WorkfilesLockProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + host_names: list[str] = Field(default_factory=list, title="Hosts") + enabled: bool = Field(True, title="Enabled") + + +class WorkfilesToolModel(BaseSettingsModel): + workfile_template_profiles: list[WorkfileTemplateProfile] = Field( + default_factory=list, + title="Workfile template profiles" + ) + last_workfile_on_startup: list[LastWorkfileOnStartupProfile] = Field( + default_factory=list, + title="Open last workfile on launch" + ) + open_workfile_tool_on_startup: list[WorkfilesToolOnStartupProfile] = Field( + default_factory=list, + title="Open workfile tool on launch" + ) + extra_folders: list[ExtraWorkFoldersProfile] = Field( + default_factory=list, + title="Extra work folders" + ) + workfile_lock_profiles: list[WorkfilesLockProfile] = Field( + default_factory=list, + title="Workfile lock profiles" + ) + + +def _product_types_enum(): + return [ + "action", + "animation", + "assembly", + "audio", + "backgroundComp", + "backgroundLayout", + "camera", + "editorial", + "gizmo", + "image", + "layout", + "look", + "matchmove", + "mayaScene", + "model", + "nukenodes", + "plate", + "pointcache", + "prerender", + "redshiftproxy", + "reference", + "render", + "review", + "rig", + "setdress", + "take", + "usdShade", + "vdbcache", + "vrayproxy", + "workfile", + "xgen", + "yetiRig", + "yeticache" + ] + + +class LoaderProductTypeFilterProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + is_include: bool = Field(True, title="Exclude / Include") + filter_product_types: list[str] = Field( + default_factory=list, + enum_resolver=_product_types_enum + ) + + +class LoaderToolModel(BaseSettingsModel): + product_type_filter_profiles: list[LoaderProductTypeFilterProfile] = Field( + default_factory=list, + title="Product type filtering" + ) + + +class PublishTemplateNameProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + template_name: str = Field("", title="Template name") + + +class CustomStagingDirProfileModel(BaseSettingsModel): + active: bool = Field(True, title="Is active") + hosts: list[str] = Field(default_factory=list, title="Host names") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, title="Task names" + ) + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + product_names: list[str] = Field( + default_factory=list, title="Product names" + ) + custom_staging_dir_persistent: bool = Field( + False, title="Custom Staging Folder Persistent" + ) + template_name: str = Field("", title="Template Name") + + +class PublishToolModel(BaseSettingsModel): + template_name_profiles: list[PublishTemplateNameProfile] = Field( + default_factory=list, + 
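+        # (editorial note, assumption based on the defaults below) A profile
+        # with empty filter lists acts as a catch-all, so the generic
+        # "publish" template applies unless a more specific profile matches
+        # the product type, host, and task filters.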
title="Template name profiles" + ) + hero_template_name_profiles: list[PublishTemplateNameProfile] = Field( + default_factory=list, + title="Hero template name profiles" + ) + custom_staging_dir_profiles: list[CustomStagingDirProfileModel] = Field( + default_factory=list, + title="Custom Staging Dir Profiles" + ) + + +class GlobalToolsModel(BaseSettingsModel): + creator: CreatorToolModel = Field( + default_factory=CreatorToolModel, + title="Creator" + ) + Workfiles: WorkfilesToolModel = Field( + default_factory=WorkfilesToolModel, + title="Workfiles" + ) + loader: LoaderToolModel = Field( + default_factory=LoaderToolModel, + title="Loader" + ) + publish: PublishToolModel = Field( + default_factory=PublishToolModel, + title="Publish" + ) + + +DEFAULT_TOOLS_VALUES = { + "creator": { + "product_types_smart_select": [ + { + "name": "Render", + "task_names": [ + "light", + "render" + ] + }, + { + "name": "Model", + "task_names": [ + "model" + ] + }, + { + "name": "Layout", + "task_names": [ + "layout" + ] + }, + { + "name": "Look", + "task_names": [ + "look" + ] + }, + { + "name": "Rig", + "task_names": [ + "rigging", + "rig" + ] + } + ], + "product_name_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{variant}" + }, + { + "product_types": [ + "workfile" + ], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}" + }, + { + "product_types": [ + "render" + ], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}{Variant}" + }, + { + "product_types": [ + "renderLayer", + "renderPass" + ], + "hosts": [ + "tvpaint" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}_{Renderlayer}_{Renderpass}" + }, + { + "product_types": [ + "review", + "workfile" + ], + "hosts": [ + "aftereffects", + "tvpaint" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}" + }, + { + "product_types": ["render"], + "hosts": [ + "aftereffects" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}{Composition}{Variant}" + }, + { + "product_types": [ + "staticMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "tasks": [], + "template": "S_{folder[name]}{variant}" + }, + { + "product_types": [ + "skeletalMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "tasks": [], + "template": "SK_{folder[name]}{variant}" + } + ] + }, + "Workfiles": { + "workfile_template_profiles": [ + { + "task_types": [], + "hosts": [], + "workfile_template": "work" + }, + { + "task_types": [], + "hosts": [ + "unreal" + ], + "workfile_template": "work_unreal" + } + ], + "last_workfile_on_startup": [ + { + "hosts": [], + "task_types": [], + "tasks": [], + "enabled": True, + "use_last_published_workfile": False + } + ], + "open_workfile_tool_on_startup": [ + { + "hosts": [], + "task_types": [], + "tasks": [], + "enabled": False + } + ], + "extra_folders": [], + "workfile_lock_profiles": [] + }, + "loader": { + "product_type_filter_profiles": [ + { + "hosts": [], + "task_types": [], + "is_include": True, + "filter_product_types": [] + } + ] + }, + "publish": { + "template_name_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "task_names": [], + "template_name": "publish" + }, + { + "product_types": [ + "review", + "render", + "prerender" + ], + "hosts": [], + "task_types": [], + "task_names": [], + "template_name": "publish_render" + }, + { + "product_types": [ + 
"simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_simpleUnrealTexture" + }, + { + "product_types": [ + "staticMesh", + "skeletalMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_maya2unreal" + }, + { + "product_types": [ + "online" + ], + "hosts": [ + "traypublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_online" + } + ], + "hero_template_name_profiles": [ + { + "product_types": [ + "simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "hero_simpleUnrealTextureHero" + } + ] + } +} diff --git a/server_addon/core/server/version.py b/server_addon/core/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/core/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/create_ayon_addons.py b/server_addon/create_ayon_addons.py new file mode 100644 index 0000000000..61dbd5c8d9 --- /dev/null +++ b/server_addon/create_ayon_addons.py @@ -0,0 +1,308 @@ +import os +import sys +import re +import json +import shutil +import zipfile +import platform +import collections +from pathlib import Path +from typing import Any, Optional, Iterable, Pattern, List, Tuple + +# Patterns of directories to be skipped for server part of addon +IGNORE_DIR_PATTERNS: List[Pattern] = [ + re.compile(pattern) + for pattern in { + # Skip directories starting with '.' + r"^\.", + # Skip any pycache folders + "^__pycache__$" + } +] + +# Patterns of files to be skipped for server part of addon +IGNORE_FILE_PATTERNS: List[Pattern] = [ + re.compile(pattern) + for pattern in { + # Skip files starting with '.' + # NOTE this could be an issue in some cases + r"^\.", + # Skip '.pyc' files + r"\.pyc$" + } +] + + +class ZipFileLongPaths(zipfile.ZipFile): + """Allows longer paths in zip files. + + Regular DOS paths are limited to MAX_PATH (260) characters, including + the string's terminating NUL character. + That limit can be exceeded by using an extended-length path that + starts with the '\\?\' prefix. + """ + _is_windows = platform.system().lower() == "windows" + + def _extract_member(self, member, tpath, pwd): + if self._is_windows: + tpath = os.path.abspath(tpath) + if tpath.startswith("\\\\"): + tpath = "\\\\?\\UNC\\" + tpath[2:] + else: + tpath = "\\\\?\\" + tpath + + return super(ZipFileLongPaths, self)._extract_member( + member, tpath, pwd + ) + + +def _value_match_regexes(value: str, regexes: Iterable[Pattern]) -> bool: + return any( + regex.search(value) + for regex in regexes + ) + + +def find_files_in_subdir( + src_path: str, + ignore_file_patterns: Optional[List[Pattern]] = None, + ignore_dir_patterns: Optional[List[Pattern]] = None, + ignore_subdirs: Optional[Iterable[Tuple[str]]] = None +): + """Find all files to copy in subdirectories of given path. + + All files that match any of the patterns in 'ignore_file_patterns' will + be skipped and any directories that match any of the patterns in + 'ignore_dir_patterns' will be skipped with all subfiles. + + Args: + src_path (str): Path to directory to search in. + ignore_file_patterns (Optional[List[Pattern]]): List of regexes + to match files to ignore. + ignore_dir_patterns (Optional[List[Pattern]]): List of regexes + to match directories to ignore. + ignore_subdirs (Optional[Iterable[Tuple[str]]]): List of + subdirectories to ignore. 
+ + Returns: + List[Tuple[str, str]]: List of tuples with path to file and parent + directories relative to 'src_path'. + """ + + if ignore_file_patterns is None: + ignore_file_patterns = IGNORE_FILE_PATTERNS + + if ignore_dir_patterns is None: + ignore_dir_patterns = IGNORE_DIR_PATTERNS + output: list[tuple[str, str]] = [] + + hierarchy_queue = collections.deque() + hierarchy_queue.append((src_path, [])) + while hierarchy_queue: + item: tuple[str, str] = hierarchy_queue.popleft() + dirpath, parents = item + if ignore_subdirs and parents in ignore_subdirs: + continue + for name in os.listdir(dirpath): + path = os.path.join(dirpath, name) + if os.path.isfile(path): + if not _value_match_regexes(name, ignore_file_patterns): + items = list(parents) + items.append(name) + output.append((path, os.path.sep.join(items))) + continue + + if not _value_match_regexes(name, ignore_dir_patterns): + items = list(parents) + items.append(name) + hierarchy_queue.append((path, items)) + + return output + + +def read_addon_version(version_path: Path) -> str: + # Read version + version_content: dict[str, Any] = {} + with open(str(version_path), "r") as stream: + exec(stream.read(), version_content) + return version_content["__version__"] + + +def get_addon_version(addon_dir: Path) -> str: + return read_addon_version(addon_dir / "server" / "version.py") + + +def create_addon_zip( + output_dir: Path, + addon_name: str, + addon_version: str, + keep_source: bool +): + zip_filepath = output_dir / f"{addon_name}-{addon_version}.zip" + addon_output_dir = output_dir / addon_name / addon_version + with ZipFileLongPaths(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: + zipf.writestr( + "manifest.json", + json.dumps({ + "addon_name": addon_name, + "addon_version": addon_version + }) + ) + # Add client code content to zip + src_root = os.path.normpath(str(addon_output_dir.absolute())) + src_root_offset = len(src_root) + 1 + for root, _, filenames in os.walk(str(addon_output_dir)): + rel_root = "" + if root != src_root: + rel_root = root[src_root_offset:] + + for filename in filenames: + src_path = os.path.join(root, filename) + if rel_root: + dst_path = os.path.join("addon", rel_root, filename) + else: + dst_path = os.path.join("addon", filename) + zipf.write(src_path, dst_path) + + if not keep_source: + shutil.rmtree(str(output_dir / addon_name)) + + +def create_openpype_package( + addon_dir: Path, + output_dir: Path, + root_dir: Path, + create_zip: bool, + keep_source: bool +): + server_dir = addon_dir / "server" + pyproject_path = addon_dir / "client" / "pyproject.toml" + + openpype_dir = root_dir / "openpype" + version_path = openpype_dir / "version.py" + addon_version = read_addon_version(version_path) + + addon_output_dir = output_dir / "openpype" / addon_version + private_dir = addon_output_dir / "private" + # Make sure dir exists + addon_output_dir.mkdir(parents=True) + private_dir.mkdir(parents=True) + + # Copy version + shutil.copy(str(version_path), str(addon_output_dir)) + for subitem in server_dir.iterdir(): + shutil.copy(str(subitem), str(addon_output_dir / subitem.name)) + + # Copy pyproject.toml + shutil.copy( + str(pyproject_path), + (private_dir / pyproject_path.name) + ) + + ignored_hosts = [] + ignored_modules = [ + "ftrack", + "shotgrid", + "sync_server", + "example_addons", + "slack" + ] + # Subdirs that won't be added to output zip file + ignored_subpaths = [ + ["addons"], + ["vendor", "common", "ayon_api"], + ] + ignored_subpaths.extend( + ["hosts", host_name] + for host_name in ignored_hosts + 
) + ignored_subpaths.extend( + ["modules", module_name] + for module_name in ignored_modules + ) + + # Zip client + zip_filepath = private_dir / "client.zip" + with ZipFileLongPaths(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: + # Add client code content to zip + for path, sub_path in find_files_in_subdir( + str(openpype_dir), ignore_subdirs=ignored_subpaths + ): + zipf.write(path, f"{openpype_dir.name}/{sub_path}") + + if create_zip: + create_addon_zip(output_dir, "openpype", addon_version, keep_source) + + +def create_addon_package( + addon_dir: Path, + output_dir: Path, + create_zip: bool, + keep_source: bool +): + server_dir = addon_dir / "server" + addon_version = get_addon_version(addon_dir) + + addon_output_dir = output_dir / addon_dir.name / addon_version + if addon_output_dir.exists(): + shutil.rmtree(str(addon_output_dir)) + addon_output_dir.mkdir(parents=True) + + # Copy server content + src_root = os.path.normpath(str(server_dir.absolute())) + src_root_offset = len(src_root) + 1 + for root, _, filenames in os.walk(str(server_dir)): + dst_root = addon_output_dir + if root != src_root: + rel_root = root[src_root_offset:] + dst_root = dst_root / rel_root + + dst_root.mkdir(parents=True, exist_ok=True) + for filename in filenames: + src_path = os.path.join(root, filename) + shutil.copy(src_path, str(dst_root)) + + if create_zip: + create_addon_zip( + output_dir, addon_dir.name, addon_version, keep_source + ) + + +def main(create_zip=True, keep_source=False): + current_dir = Path(os.path.dirname(os.path.abspath(__file__))) + root_dir = current_dir.parent + output_dir = current_dir / "packages" + print("Package creation started...") + + # Make sure package dir is empty + if output_dir.exists(): + shutil.rmtree(str(output_dir)) + # Make sure output dir is created + output_dir.mkdir(parents=True) + + for addon_dir in current_dir.iterdir(): + if not addon_dir.is_dir(): + continue + + server_dir = addon_dir / "server" + if not server_dir.exists(): + continue + + if addon_dir.name == "openpype": + create_openpype_package( + addon_dir, output_dir, root_dir, create_zip, keep_source + ) + + else: + create_addon_package( + addon_dir, output_dir, create_zip, keep_source + ) + + print(f"- package '{addon_dir.name}' created") + print(f"Package creation finished. 
Output directory: {output_dir}") + + +if __name__ == "__main__": + create_zip = "--skip-zip" not in sys.argv + keep_sources = "--keep-sources" in sys.argv + main(create_zip, keep_sources) diff --git a/server_addon/deadline/server/__init__.py b/server_addon/deadline/server/__init__.py new file mode 100644 index 0000000000..36d04189a9 --- /dev/null +++ b/server_addon/deadline/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import DeadlineSettings, DEFAULT_VALUES + + +class Deadline(BaseServerAddon): + name = "deadline" + title = "Deadline" + version = __version__ + settings_model: Type[DeadlineSettings] = DeadlineSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/deadline/server/settings/__init__.py b/server_addon/deadline/server/settings/__init__.py new file mode 100644 index 0000000000..0307862afa --- /dev/null +++ b/server_addon/deadline/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + DeadlineSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "DeadlineSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/deadline/server/settings/main.py b/server_addon/deadline/server/settings/main.py new file mode 100644 index 0000000000..f158b7464d --- /dev/null +++ b/server_addon/deadline/server/settings/main.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + +from .publish_plugins import ( + PublishPluginsModel, + DEFAULT_DEADLINE_PLUGINS_SETTINGS +) + + +class ServerListSubmodel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Value") + + +class DeadlineSettings(BaseSettingsModel): + deadline_urls: list[ServerListSubmodel] = Field( + default_factory=list, + title="System Deadline Webservice URLs", + scope=["studio"], + ) + deadline_servers: list[str] = Field( + title="Project deadline servers", + section="---", + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish Plugins", + ) + + @validator("deadline_urls") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "deadline_urls": [ + { + "name": "default", + "value": "http://127.0.0.1:8082" + } + ], + # TODO: this needs to be dynamic from "deadline_urls" + "deadline_servers": [], + "publish": DEFAULT_DEADLINE_PLUGINS_SETTINGS +} diff --git a/server_addon/deadline/server/settings/publish_plugins.py b/server_addon/deadline/server/settings/publish_plugins.py new file mode 100644 index 0000000000..8d1b667345 --- /dev/null +++ b/server_addon/deadline/server/settings/publish_plugins.py @@ -0,0 +1,435 @@ +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class CollectDefaultDeadlineServerModel(BaseSettingsModel): + """Settings for event handlers running in ftrack service.""" + + pass_mongo_url: bool = Field(title="Pass Mongo url to job") + + +class CollectDeadlinePoolsModel(BaseSettingsModel): + """Settings Deadline default pools.""" + + primary_pool: str = Field(title="Primary Pool") + + secondary_pool: str = Field(title="Secondary Pool") + + +class ValidateExpectedFilesModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + active: bool = Field(True, title="Active") + 
allow_user_override: bool = Field( + True, title="Allow user change frame range" + ) + families: list[str] = Field( + default_factory=list, title="Trigger on families" + ) + targets: list[str] = Field( + default_factory=list, title="Trigger for plugins" + ) + + +def tile_assembler_enum(): + """Return a list of value/label dicts for the enumerator. + + Returning a list of dicts is used to allow for a custom label to be + displayed in the UI. + """ + return [ + { + "value": "DraftTileAssembler", + "label": "Draft Tile Assembler" + }, + { + "value": "OpenPypeTileAssembler", + "label": "Open Image IO" + } + ] + + +class ScenePatchesSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Patch name") + regex: str = Field(title="Patch regex") + line: str = Field(title="Patch line") + + +class MayaSubmitDeadlineModel(BaseSettingsModel): + """Maya deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + import_reference: bool = Field(title="Use Scene with Imported Reference") + asset_dependencies: bool = Field(title="Use Asset dependencies") + priority: int = Field(title="Priority") + tile_priority: int = Field(title="Tile Priority") + group: str = Field(title="Group") + limit: list[str] = Field( + default_factory=list, + title="Limit Groups" + ) + tile_assembler_plugin: str = Field( + title="Tile Assembler Plugin", + enum_resolver=tile_assembler_enum, + ) + jobInfo: str = Field( + title="Additional JobInfo data", + widget="textarea", + ) + pluginInfo: str = Field( + title="Additional PluginInfo data", + widget="textarea", + ) + + scene_patches: list[ScenePatchesSubmodel] = Field( + default_factory=list, + title="Scene patches", + ) + strict_error_checking: bool = Field( + title="Disable Strict Error Check profiles" + ) + + @validator("limit", "scene_patches") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class MaxSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Frame per Task") + group: str = Field("", title="Group Name") + + +class EnvSearchReplaceSubmodel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Value") + + +class LimitGroupsSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Name") + value: list[str] = Field( + default_factory=list, + title="Limit Groups" + ) + + +class FusionSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + priority: int = Field(50, title="Priority") + chunk_size: int = Field(10, title="Frame per Task") + concurrent_tasks: int = Field(1, title="Number of concurrent tasks") + group: str = Field("", title="Group Name") + + +class NukeSubmitDeadlineModel(BaseSettingsModel): + """Nuke deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + concurrent_tasks: int = Field(title="Number of concurrent tasks") + group: str 
= Field(title="Group") + department: str = Field(title="Department") + use_gpu: bool = Field(title="Use GPU") + + env_allowed_keys: list[str] = Field( + default_factory=list, + title="Allowed environment keys" + ) + + env_search_replace_values: list[EnvSearchReplaceSubmodel] = Field( + default_factory=list, + title="Search & replace in environment values", + ) + + limit_groups: list[LimitGroupsSubmodel] = Field( + default_factory=list, + title="Limit Groups", + ) + + @validator("limit_groups", "env_allowed_keys", "env_search_replace_values") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class HarmonySubmitDeadlineModel(BaseSettingsModel): + """Harmony deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + group: str = Field(title="Group") + department: str = Field(title="Department") + + +class AfterEffectsSubmitDeadlineModel(BaseSettingsModel): + """After Effects deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Chunk Size") + group: str = Field(title="Group") + department: str = Field(title="Department") + multiprocess: bool = Field(title="Optional") + + +class CelactionSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + deadline_department: str = Field("", title="Deadline apartment") + deadline_priority: int = Field(50, title="Deadline priority") + deadline_pool: str = Field("", title="Deadline pool") + deadline_pool_secondary: str = Field("", title="Deadline pool (secondary)") + deadline_group: str = Field("", title="Deadline Group") + deadline_chunk_size: int = Field(10, title="Deadline Chunk size") + deadline_job_delay: str = Field( + "", title="Delay job (timecode dd:hh:mm:ss)" + ) + + +class AOVFilterSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Host") + value: list[str] = Field( + default_factory=list, + title="AOV regex" + ) + + +class ProcessSubmittedJobOnFarmModel(BaseSettingsModel): + """Process submitted job on farm.""" + + enabled: bool = Field(title="Enabled") + deadline_department: str = Field(title="Department") + deadline_pool: str = Field(title="Pool") + deadline_group: str = Field(title="Group") + deadline_chunk_size: int = Field(title="Chunk Size") + deadline_priority: int = Field(title="Priority") + publishing_script: str = Field(title="Publishing script path") + skip_integration_repre_list: list[str] = Field( + default_factory=list, + title="Skip integration of representation with ext" + ) + aov_filter: list[AOVFilterSubmodel] = Field( + default_factory=list, + title="Reviewable products filter", + ) + + @validator("aov_filter", "skip_integration_repre_list") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class PublishPluginsModel(BaseSettingsModel): + CollectDefaultDeadlineServer: CollectDefaultDeadlineServerModel = Field( + default_factory=CollectDefaultDeadlineServerModel, + title="Default Deadline Webservice") + CollectDefaultDeadlineServer: CollectDefaultDeadlineServerModel = Field( + 
default_factory=CollectDefaultDeadlineServerModel, + title="Default Deadline Webservice") + CollectDeadlinePools: CollectDeadlinePoolsModel = Field( + default_factory=CollectDeadlinePoolsModel, + title="Default Pools") + ValidateExpectedFiles: ValidateExpectedFilesModel = Field( + default_factory=ValidateExpectedFilesModel, + title="Validate Expected Files" + ) + MayaSubmitDeadline: MayaSubmitDeadlineModel = Field( + default_factory=MayaSubmitDeadlineModel, + title="Maya Submit to deadline") + MaxSubmitDeadline: MaxSubmitDeadlineModel = Field( + default_factory=MaxSubmitDeadlineModel, + title="Max Submit to deadline") + FusionSubmitDeadline: FusionSubmitDeadlineModel = Field( + default_factory=FusionSubmitDeadlineModel, + title="Fusion submit to Deadline") + NukeSubmitDeadline: NukeSubmitDeadlineModel = Field( + default_factory=NukeSubmitDeadlineModel, + title="Nuke Submit to deadline") + HarmonySubmitDeadline: HarmonySubmitDeadlineModel = Field( + default_factory=HarmonySubmitDeadlineModel, + title="Harmony Submit to deadline") + AfterEffectsSubmitDeadline: AfterEffectsSubmitDeadlineModel = Field( + default_factory=AfterEffectsSubmitDeadlineModel, + title="After Effects to deadline") + CelactionSubmitDeadline: CelactionSubmitDeadlineModel = Field( + default_factory=CelactionSubmitDeadlineModel, + title="Celaction Submit Deadline" + ) + ProcessSubmittedJobOnFarm: ProcessSubmittedJobOnFarmModel = Field( + default_factory=ProcessSubmittedJobOnFarmModel, + title="Process submitted job on farm.") + + +DEFAULT_DEADLINE_PLUGINS_SETTINGS = { + "CollectDefaultDeadlineServer": { + "pass_mongo_url": True + }, + "CollectDeadlinePools": { + "primary_pool": "", + "secondary_pool": "" + }, + "ValidateExpectedFiles": { + "enabled": True, + "active": True, + "allow_user_override": True, + "families": [ + "render" + ], + "targets": [ + "deadline" + ] + }, + "MayaSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "tile_assembler_plugin": "DraftTileAssembler", + "use_published": True, + "import_reference": False, + "asset_dependencies": True, + "strict_error_checking": True, + "priority": 50, + "tile_priority": 50, + "group": "none", + "limit": [], + # this used to be empty dict + "jobInfo": "", + # this used to be empty dict + "pluginInfo": "", + "scene_patches": [] + }, + "MaxSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10, + "group": "none" + }, + "FusionSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "priority": 50, + "chunk_size": 10, + "concurrent_tasks": 1, + "group": "" + }, + "NukeSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "priority": 50, + "chunk_size": 10, + "concurrent_tasks": 1, + "group": "", + "department": "", + "use_gpu": True, + "env_allowed_keys": [], + "env_search_replace_values": [], + "limit_groups": [] + }, + "HarmonySubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10000, + "group": "", + "department": "" + }, + "AfterEffectsSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10000, + "group": "", + "department": "", + "multiprocess": True + }, + "CelactionSubmitDeadline": { + "enabled": True, + "deadline_department": "", + "deadline_priority": 50, + "deadline_pool": "", + "deadline_pool_secondary": "", + "deadline_group": "", + 
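+        # NOTE: chunk size is the number of frames rendered per Deadline
+        # task (titled "Frame per Task" in the other submitter models);
+        # "deadline_job_delay" below expects the "dd:hh:mm:ss" timecode
+        # format declared on CelactionSubmitDeadlineModel above.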
"deadline_chunk_size": 10, + "deadline_job_delay": "00:00:00:00" + }, + "ProcessSubmittedJobOnFarm": { + "enabled": True, + "deadline_department": "", + "deadline_pool": "", + "deadline_group": "", + "deadline_chunk_size": 1, + "deadline_priority": 50, + "publishing_script": "", + "skip_integration_repre_list": [], + "aov_filter": [ + { + "name": "maya", + "value": [ + ".*([Bb]eauty).*" + ] + }, + { + "name": "aftereffects", + "value": [ + ".*" + ] + }, + { + "name": "celaction", + "value": [ + ".*" + ] + }, + { + "name": "harmony", + "value": [ + ".*" + ] + }, + { + "name": "max", + "value": [ + ".*" + ] + }, + { + "name": "fusion", + "value": [ + ".*" + ] + } + ] + } +} diff --git a/server_addon/deadline/server/version.py b/server_addon/deadline/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/deadline/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/flame/server/__init__.py b/server_addon/flame/server/__init__.py new file mode 100644 index 0000000000..7d5eb3960f --- /dev/null +++ b/server_addon/flame/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import FlameSettings, DEFAULT_VALUES + + +class FlameAddon(BaseServerAddon): + name = "flame" + title = "Flame" + version = __version__ + settings_model: Type[FlameSettings] = FlameSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/flame/server/settings/__init__.py b/server_addon/flame/server/settings/__init__.py new file mode 100644 index 0000000000..39b8220d40 --- /dev/null +++ b/server_addon/flame/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + FlameSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "FlameSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/flame/server/settings/create_plugins.py b/server_addon/flame/server/settings/create_plugins.py new file mode 100644 index 0000000000..374a7368d2 --- /dev/null +++ b/server_addon/flame/server/settings/create_plugins.py @@ -0,0 +1,120 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CreateShotClipModel(BaseSettingsModel): + hierarchy: str = Field( + "shot", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + useShotName: bool = Field( + True, + title="Use Shot Name", + ) + clipRename: bool = Field( + False, + title="Rename clips", + ) + clipName: str = Field( + "{sequence}{shot}", + title="Clip name template" + ) + segmentIndex: bool = Field( + True, + title="Accept segment order" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "a", + title="{sequence}" + ) + track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "####", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: 
int = Field( + 10, + title="Handle end (tail)" + ) + includeHandles: bool = Field( + False, + title="Enable handles including" + ) + retimedHandles: bool = Field( + True, + title="Enable retimed handles" + ) + retimedFramerange: bool = Field( + True, + title="Enable retimed shot frameranges" + ) + + +class CreatePuginsModel(BaseSettingsModel): + CreateShotClip: CreateShotClipModel = Field( + default_factory=CreateShotClipModel, + title="Create Shot Clip" + ) + + +DEFAULT_CREATE_SETTINGS = { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "useShotName": True, + "clipRename": False, + "clipName": "{sequence}{shot}", + "segmentIndex": True, + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "a", + "track": "{_track_}", + "shot": "####", + "vSyncOn": False, + "workfileFrameStart": 1001, + "handleStart": 5, + "handleEnd": 5, + "includeHandles": False, + "retimedHandles": True, + "retimedFramerange": True + } +} diff --git a/server_addon/flame/server/settings/imageio.py b/server_addon/flame/server/settings/imageio.py new file mode 100644 index 0000000000..ef1e4721d1 --- /dev/null +++ b/server_addon/flame/server/settings/imageio.py @@ -0,0 +1,130 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ProfileNamesMappingInputsModel(BaseSettingsModel): + _layout = "expanded" + + flameName: str = Field("", title="Flame name") + ocioName: str = Field("", title="OCIO name") + + +class ProfileNamesMappingModel(BaseSettingsModel): + _layout = "expanded" + + inputs: list[ProfileNamesMappingInputsModel] = Field( + default_factory=list, + title="Profile names mapping" + ) + + +class ImageIOProjectModel(BaseSettingsModel): + colourPolicy: str = Field( + "ACES 1.1", + title="Colour Policy (name or path)", + section="Project" + ) + frameDepth: str = Field( + "16-bit fp", + title="Image Depth" + ) + fieldDominance: str = Field( + "PROGRESSIVE", + title="Field Dominance" + ) + + +class FlameImageIOModel(BaseSettingsModel): + _isGroup = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + default_factory=ImageIORemappingModel + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + 
default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
+    # NOTE 'project' attribute was expanded to this model but that caused
+    # inconsistency with v3 settings and harder conversion handling
+    # - it can be moved back but keep in mind that it must be handled in v3
+    # conversion script too
+    project: ImageIOProjectModel = Field(
+        default_factory=ImageIOProjectModel,
+        title="Project"
+    )
+    profilesMapping: ProfileNamesMappingModel = Field(
+        default_factory=ProfileNamesMappingModel,
+        title="Profile names mapping"
+    )
+
+
+DEFAULT_IMAGEIO_SETTINGS = {
+    "project": {
+        "colourPolicy": "ACES 1.1",
+        "frameDepth": "16-bit fp",
+        "fieldDominance": "PROGRESSIVE"
+    },
+    "profilesMapping": {
+        "inputs": [
+            {
+                "flameName": "ACEScg",
+                "ocioName": "ACES - ACEScg"
+            },
+            {
+                "flameName": "Rec.709 video",
+                "ocioName": "Output - Rec.709"
+            }
+        ]
+    }
+}
diff --git a/server_addon/flame/server/settings/loader_plugins.py b/server_addon/flame/server/settings/loader_plugins.py
new file mode 100644
index 0000000000..6c27b926c2
--- /dev/null
+++ b/server_addon/flame/server/settings/loader_plugins.py
@@ -0,0 +1,99 @@
+from ayon_server.settings import Field, BaseSettingsModel
+
+
+class LoadClipModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    reel_group_name: str = Field(
+        "OpenPype_Reels",
+        title="Reel group name"
+    )
+    reel_name: str = Field(
+        "Loaded",
+        title="Reel name"
+    )
+
+    clip_name_template: str = Field(
+        "{folder[name]}_{product[name]}<_{output}>",
+        title="Clip name template"
+    )
+    layer_rename_template: str = Field("", title="Layer name template")
+    layer_rename_patterns: list[str] = Field(
+        default_factory=list,
+        title="Layer rename patterns",
+    )
+
+
+class LoadClipBatchModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    reel_name: str = Field(
+        "OP_LoadedReel",
+        title="Reel name"
+    )
+    clip_name_template: str = Field(
+        "{batch}_{folder[name]}_{product[name]}<_{output}>",
+        title="Clip name template"
+    )
+    layer_rename_template: str = Field("", title="Layer name template")
+    layer_rename_patterns: list[str] = Field(
+        default_factory=list,
+        title="Layer rename patterns",
+    )
+
+
+class LoaderPluginsModel(BaseSettingsModel):
+    LoadClip: LoadClipModel = Field(
+        default_factory=LoadClipModel,
+        title="Load Clip"
+    )
+    LoadClipBatch: LoadClipBatchModel = Field(
+        default_factory=LoadClipBatchModel,
+        title="Load as clip to current batch"
+    )
+
+
+DEFAULT_LOADER_SETTINGS = {
+    "LoadClip": {
+        "enabled": True,
+        "product_types": [
+            "render2d",
+            "source",
+            "plate",
+            "render",
+            "review"
+        ],
+        "reel_group_name": "OpenPype_Reels",
+        "reel_name": "Loaded",
+        "clip_name_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_patterns": [
+            "rgb",
+            "rgba"
+        ]
+    },
+    "LoadClipBatch": {
+        "enabled": True,
+        "product_types": [
+            "render2d",
+            "source",
+            "plate",
+            "render",
+            "review"
+        ],
+        "reel_name": "OP_LoadedReel",
+        "clip_name_template": "{batch}_{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_template": "{folder[name]}_{product[name]}<_{output}>",
+        "layer_rename_patterns": [
+            "rgb",
+            "rgba"
+        ]
+    }
+}
diff --git a/server_addon/flame/server/settings/main.py b/server_addon/flame/server/settings/main.py
new file mode 100644
index 0000000000..f28de6641b
--- /dev/null
+++ 
b/server_addon/flame/server/settings/main.py
@@ -0,0 +1,33 @@
+from ayon_server.settings import Field, BaseSettingsModel
+
+from .imageio import FlameImageIOModel, DEFAULT_IMAGEIO_SETTINGS
+from .create_plugins import CreatePuginsModel, DEFAULT_CREATE_SETTINGS
+from .publish_plugins import PublishPuginsModel, DEFAULT_PUBLISH_SETTINGS
+from .loader_plugins import LoaderPluginsModel, DEFAULT_LOADER_SETTINGS
+
+
+class FlameSettings(BaseSettingsModel):
+    imageio: FlameImageIOModel = Field(
+        default_factory=FlameImageIOModel,
+        title="Color Management (ImageIO)"
+    )
+    create: CreatePuginsModel = Field(
+        default_factory=CreatePuginsModel,
+        title="Create plugins"
+    )
+    publish: PublishPuginsModel = Field(
+        default_factory=PublishPuginsModel,
+        title="Publish plugins"
+    )
+    load: LoaderPluginsModel = Field(
+        default_factory=LoaderPluginsModel,
+        title="Loader plugins"
+    )
+
+
+DEFAULT_VALUES = {
+    "imageio": DEFAULT_IMAGEIO_SETTINGS,
+    "create": DEFAULT_CREATE_SETTINGS,
+    "publish": DEFAULT_PUBLISH_SETTINGS,
+    "load": DEFAULT_LOADER_SETTINGS
+}
diff --git a/server_addon/flame/server/settings/publish_plugins.py b/server_addon/flame/server/settings/publish_plugins.py
new file mode 100644
index 0000000000..ea7f109f73
--- /dev/null
+++ b/server_addon/flame/server/settings/publish_plugins.py
@@ -0,0 +1,190 @@
+from ayon_server.settings import Field, BaseSettingsModel, task_types_enum
+
+
+class XMLPresetAttrsFromCommentsModel(BaseSettingsModel):
+    _layout = "expanded"
+    name: str = Field("", title="Attribute name")
+    type: str = Field(
+        default_factory=str,
+        title="Attribute type",
+        enum_resolver=lambda: ["number", "float", "string"]
+    )
+
+
+class AddTasksModel(BaseSettingsModel):
+    _layout = "expanded"
+    name: str = Field("", title="Task name")
+    type: str = Field(
+        default_factory=str,
+        title="Task type",
+        enum_resolver=task_types_enum
+    )
+    create_batch_group: bool = Field(
+        True,
+        title="Create batch group"
+    )
+
+
+class CollectTimelineInstancesModel(BaseSettingsModel):
+    _isGroup = True
+
+    xml_preset_attrs_from_comments: list[XMLPresetAttrsFromCommentsModel] = Field(
+        default_factory=list,
+        title="XML presets attributes parsable from segment comments"
+    )
+    add_tasks: list[AddTasksModel] = Field(
+        default_factory=list,
+        title="Add tasks"
+    )
+
+
+class ExportPresetsMappingModel(BaseSettingsModel):
+    _layout = "expanded"
+
+    name: str = Field(
+        ...,
+        title="Name"
+    )
+    active: bool = Field(True, title="Is active")
+    export_type: str = Field(
+        "File Sequence",
+        title="Export clip type",
+        enum_resolver=lambda: ["Movie", "File Sequence", "Sequence Publish"]
+    )
+    ext: str = Field("exr", title="Output extension")
+    xml_preset_file: str = Field(
+        "OpenEXR (16-bit fp DWAA).xml",
+        title="XML preset file (with ext)"
+    )
+    colorspace_out: str = Field(
+        "ACES - ACEScg",
+        title="Output color (imageio)"
+    )
+    # TODO remove when resolved or v3 is not a thing anymore
+    # NOTE next 4 attributes were grouped under 'other_parameters' but that
+    # created inconsistency with v3 settings and harder conversion handling
+    # - it can be moved back but keep in mind that it must be handled in v3
+    # conversion script too
+    xml_preset_dir: str = Field(
+        "",
+        title="XML preset directory"
+    )
+    parsed_comment_attrs: bool = Field(
+        True,
+        title="Parsed comment attributes"
+    )
+    representation_add_range: bool = Field(
+        True,
+        title="Add range to representation name"
+    )
+    representation_tags: list[str] = Field(
+        default_factory=list,
+        title="Representation tags"
+    )
+    load_to_batch_group: 
bool = Field( + True, + title="Load to batch group reel" + ) + batch_group_loader_name: str = Field( + "LoadClipBatch", + title="Use loader name" + ) + filter_path_regex: str = Field( + ".*", + title="Regex in clip path" + ) + + +class ExtractProductResourcesModel(BaseSettingsModel): + _isGroup = True + + keep_original_representation: bool = Field( + False, + title="Publish clip's original media" + ) + export_presets_mapping: list[ExportPresetsMappingModel] = Field( + default_factory=list, + title="Export presets mapping" + ) + + +class IntegrateBatchGroupModel(BaseSettingsModel): + enabled: bool = Field( + False, + title="Enabled" + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectTimelineInstances: CollectTimelineInstancesModel = Field( + default_factory=CollectTimelineInstancesModel, + title="Collect Timeline Instances" + ) + + ExtractProductResources: ExtractProductResourcesModel = Field( + default_factory=ExtractProductResourcesModel, + title="Extract Product Resources" + ) + + IntegrateBatchGroup: IntegrateBatchGroupModel = Field( + default_factory=IntegrateBatchGroupModel, + title="IntegrateBatchGroup" + ) + + +DEFAULT_PUBLISH_SETTINGS = { + "CollectTimelineInstances": { + "xml_preset_attrs_from_comments": [ + { + "name": "width", + "type": "number" + }, + { + "name": "height", + "type": "number" + }, + { + "name": "pixelRatio", + "type": "float" + }, + { + "name": "resizeType", + "type": "string" + }, + { + "name": "resizeFilter", + "type": "string" + } + ], + "add_tasks": [ + { + "name": "compositing", + "type": "Compositing", + "create_batch_group": True + } + ] + }, + "ExtractProductResources": { + "keep_original_representation": False, + "export_presets_mapping": [ + { + "name": "exr16fpdwaa", + "active": True, + "export_type": "File Sequence", + "ext": "exr", + "xml_preset_file": "OpenEXR (16-bit fp DWAA).xml", + "colorspace_out": "ACES - ACEScg", + "xml_preset_dir": "", + "parsed_comment_attrs": True, + "representation_add_range": True, + "representation_tags": [], + "load_to_batch_group": True, + "batch_group_loader_name": "LoadClipBatch", + "filter_path_regex": ".*" + } + ] + }, + "IntegrateBatchGroup": { + "enabled": False + } +} diff --git a/server_addon/flame/server/version.py b/server_addon/flame/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/flame/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/fusion/server/__init__.py b/server_addon/fusion/server/__init__.py new file mode 100644 index 0000000000..4d43f28812 --- /dev/null +++ b/server_addon/fusion/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import FusionSettings, DEFAULT_VALUES + + +class FusionAddon(BaseServerAddon): + name = "fusion" + title = "Fusion" + version = __version__ + settings_model: Type[FusionSettings] = FusionSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/fusion/server/imageio.py b/server_addon/fusion/server/imageio.py new file mode 100644 index 0000000000..fe867af424 --- /dev/null +++ b/server_addon/fusion/server/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class 
ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class FusionImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/fusion/server/settings.py b/server_addon/fusion/server/settings.py new file mode 100644 index 0000000000..92fb362c66 --- /dev/null +++ b/server_addon/fusion/server/settings.py @@ -0,0 +1,95 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, +) + +from .imageio import FusionImageIOModel + + +class CopyFusionSettingsModel(BaseSettingsModel): + copy_path: str = Field("", title="Local Fusion profile directory") + copy_status: bool = Field(title="Copy profile on first launch") + force_sync: bool = Field(title="Resync profile on each launch") + + +def _create_saver_instance_attributes_enum(): + return [ + { + "value": "reviewable", + "label": "Reviewable" + }, + { + "value": "farm_rendering", + "label": "Farm rendering" + } + ] + + +class CreateSaverPluginModel(BaseSettingsModel): + _isGroup = True + temp_rendering_path_template: str = Field( + "", title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + default_factory=list, + title="Default variants" + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=_create_saver_instance_attributes_enum, + title="Instance attributes" + ) + + +class CreatPluginsModel(BaseSettingsModel): + CreateSaver: CreateSaverPluginModel = Field( + default_factory=CreateSaverPluginModel, + title="Create Saver" + ) + + +class FusionSettings(BaseSettingsModel): + imageio: FusionImageIOModel = Field( + default_factory=FusionImageIOModel, + title="Color Management (ImageIO)" + ) + copy_fusion_settings: CopyFusionSettingsModel = Field( + default_factory=CopyFusionSettingsModel, + title="Local Fusion profile settings" + ) + create: CreatPluginsModel = Field( + default_factory=CreatPluginsModel, + title="Creator plugins" + ) + + +DEFAULT_VALUES = { + "imageio": { + "ocio_config": { + "enabled": False, + "filepath": [] + }, + "file_rules": { + "enabled": False, + "rules": [] + } + }, + "copy_fusion_settings": { + "copy_path": "~/.openpype/hosts/fusion/profiles", + "copy_status": False, + "force_sync": False + }, + "create": { + "CreateSaver": { + "temp_rendering_path_template": "{workdir}/renders/fusion/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Main", + "Mask" + ], + "instance_attributes": [ + "reviewable", + "farm_rendering" + ] + } + } +} diff --git a/server_addon/fusion/server/version.py b/server_addon/fusion/server/version.py 
new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/fusion/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/harmony/LICENSE b/server_addon/harmony/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/harmony/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. 
We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/harmony/README.md b/server_addon/harmony/README.md new file mode 100644 index 0000000000..d971fa39f9 --- /dev/null +++ b/server_addon/harmony/README.md @@ -0,0 +1,4 @@ +ToonBoom Harmony Addon +=============== + +Integration with ToonBoom Harmony. diff --git a/server_addon/harmony/server/__init__.py b/server_addon/harmony/server/__init__.py new file mode 100644 index 0000000000..4ecda1989e --- /dev/null +++ b/server_addon/harmony/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import HarmonySettings, DEFAULT_HARMONY_SETTING +from .version import __version__ + + +class Harmony(BaseServerAddon): + name = "harmony" + title = "Harmony" + version = __version__ + + settings_model = HarmonySettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_HARMONY_SETTING) diff --git a/server_addon/harmony/server/settings/__init__.py b/server_addon/harmony/server/settings/__init__.py new file mode 100644 index 0000000000..4a8118d4da --- /dev/null +++ b/server_addon/harmony/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HarmonySettings, + DEFAULT_HARMONY_SETTING, +) + + +__all__ = ( + "HarmonySettings", + "DEFAULT_HARMONY_SETTING", +) diff --git a/server_addon/harmony/server/settings/imageio.py b/server_addon/harmony/server/settings/imageio.py new file mode 100644 index 0000000000..4e01fae3d4 --- /dev/null +++ b/server_addon/harmony/server/settings/imageio.py @@ -0,0 +1,55 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class HarmonyImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = 
Field(
+        True, title="Enable Color Management"
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
diff --git a/server_addon/harmony/server/settings/main.py b/server_addon/harmony/server/settings/main.py
new file mode 100644
index 0000000000..0936bc1fc7
--- /dev/null
+++ b/server_addon/harmony/server/settings/main.py
@@ -0,0 +1,63 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+from .imageio import HarmonyImageIOModel
+from .publish_plugins import HarmonyPublishPlugins
+
+
+class HarmonySettings(BaseSettingsModel):
+    """Harmony Project Settings."""
+
+    imageio: HarmonyImageIOModel = Field(
+        default_factory=HarmonyImageIOModel,
+        title="OCIO config"
+    )
+    publish: HarmonyPublishPlugins = Field(
+        default_factory=HarmonyPublishPlugins,
+        title="Publish plugins"
+    )
+
+
+DEFAULT_HARMONY_SETTING = {
+    "load": {
+        "ImageSequenceLoader": {
+            "family": [
+                "shot",
+                "render",
+                "image",
+                "plate",
+                "reference"
+            ],
+            "representations": [
+                "jpeg",
+                "png",
+                "jpg"
+            ]
+        }
+    },
+    "publish": {
+        "CollectPalettes": {
+            "allowed_tasks": [
+                ".*"
+            ]
+        },
+        "ValidateAudio": {
+            "enabled": True,
+            "optional": True,
+            "active": True
+        },
+        "ValidateContainers": {
+            "enabled": True,
+            "optional": True,
+            "active": True
+        },
+        "ValidateSceneSettings": {
+            "enabled": True,
+            "optional": True,
+            "active": True,
+            "frame_check_filter": [],
+            "skip_resolution_check": [],
+            "skip_timelines_check": []
+        }
+    }
+}
diff --git a/server_addon/harmony/server/settings/publish_plugins.py b/server_addon/harmony/server/settings/publish_plugins.py
new file mode 100644
index 0000000000..bdaec2bbd4
--- /dev/null
+++ b/server_addon/harmony/server/settings/publish_plugins.py
@@ -0,0 +1,76 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class CollectPalettesPlugin(BaseSettingsModel):
+    """Set regular expressions to trigger only on matching task names. '.*' means all tasks."""  # noqa
+
+    allowed_tasks: list[str] = Field(
+        default_factory=list,
+        title="Allowed tasks"
+    )
+
+
+class ValidateAudioPlugin(BaseSettingsModel):
+    """Check if the scene contains an audio track."""
+    _isGroup = True
+    enabled: bool = True
+    optional: bool = Field(False, title="Optional")
+    active: bool = Field(True, title="Active")
+
+
+class ValidateContainersPlugin(BaseSettingsModel):
+    """Check if loaded containers in the scene are the latest versions."""
+    _isGroup = True
+    enabled: bool = True
+    optional: bool = Field(False, title="Optional")
+    active: bool = Field(True, title="Active")
+
+
+class ValidateSceneSettingsPlugin(BaseSettingsModel):
+    """Validate if FrameStart, FrameEnd and Resolution match shot data in DB.
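+    The expected values are read from the shot (folder) entity that the
+    published instance belongs to.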
+ Use regular expressions to limit validations only on particular asset + or task names.""" + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + frame_check_filter: list[str] = Field( + default_factory=list, + title="Skip Frame check for Assets with name containing" + ) + + skip_resolution_check: list[str] = Field( + default_factory=list, + title="Skip Resolution Check for Tasks" + ) + + skip_timelines_check: list[str] = Field( + default_factory=list, + title="Skip Timeline Check for Tasks" + ) + + +class HarmonyPublishPlugins(BaseSettingsModel): + + CollectPalettes: CollectPalettesPlugin = Field( + title="Collect Palettes", + default_factory=CollectPalettesPlugin, + ) + + ValidateAudio: ValidateAudioPlugin = Field( + title="Validate Audio", + default_factory=ValidateAudioPlugin, + ) + + ValidateContainers: ValidateContainersPlugin = Field( + title="Validate Containers", + default_factory=ValidateContainersPlugin, + ) + + ValidateSceneSettings: ValidateSceneSettingsPlugin = Field( + title="Validate Scene Settings", + default_factory=ValidateSceneSettingsPlugin, + ) diff --git a/server_addon/harmony/server/version.py b/server_addon/harmony/server/version.py new file mode 100644 index 0000000000..df0c92f1e2 --- /dev/null +++ b/server_addon/harmony/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.2" diff --git a/server_addon/hiero/server/__init__.py b/server_addon/hiero/server/__init__.py new file mode 100644 index 0000000000..d0f9bcefc3 --- /dev/null +++ b/server_addon/hiero/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import HieroSettings, DEFAULT_VALUES + + +class HieroAddon(BaseServerAddon): + name = "hiero" + title = "Hiero" + version = __version__ + settings_model: Type[HieroSettings] = HieroSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/hiero/server/settings/__init__.py b/server_addon/hiero/server/settings/__init__.py new file mode 100644 index 0000000000..246c8203e9 --- /dev/null +++ b/server_addon/hiero/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HieroSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "HieroSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/hiero/server/settings/common.py b/server_addon/hiero/server/settings/common.py new file mode 100644 index 0000000000..eb4791f93e --- /dev/null +++ b/server_addon/hiero/server/settings/common.py @@ -0,0 +1,98 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ( + ColorRGBA_float, + ColorRGB_uint8 +) + + +class Vector2d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + + +class Vector3d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + z: float = Field(1.0, title="Z") + + +def formatable_knob_type_enum(): + return [ + {"value": "text", "label": "Text"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "2d_vector", "label": "2D vector"}, + # "3D vector" + ] + + +class Formatable(BaseSettingsModel): + _layout = 
"compact" + + template: str = Field( + "", + placeholder="""{{key}} or {{key}};{{key}}""", + title="Template" + ) + to_type: str = Field( + "Text", + title="To Knob type", + enum_resolver=formatable_knob_type_enum, + ) + + +knob_types_enum = [ + {"value": "text", "label": "Text"}, + {"value": "formatable", "label": "Formate from template"}, + {"value": "color_gui", "label": "Color GUI"}, + {"value": "boolean", "label": "Boolean"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "vector_2d", "label": "2D vector"}, + {"value": "vector_3d", "label": "3D vector"}, + {"value": "color", "label": "Color"} +] + + +class KnobModel(BaseSettingsModel): + _layout = "expanded" + + type: str = Field( + title="Type", + description="Switch between different knob types", + enum_resolver=lambda: knob_types_enum, + conditionalEnum=True + ) + name: str = Field( + title="Name", + placeholder="Name" + ) + text: str = Field("", title="Value") + color_gui: ColorRGB_uint8 = Field( + (0, 0, 255), + title="RGB Uint8", + ) + boolean: bool = Field(False, title="Value") + number: int = Field(0, title="Value") + decimal_number: float = Field(0.0, title="Value") + vector_2d: Vector2d = Field( + default_factory=Vector2d, + title="Value" + ) + vector_3d: Vector3d = Field( + default_factory=Vector3d, + title="Value" + ) + color: ColorRGBA_float = Field( + (0.0, 0.0, 1.0, 1.0), + title="RGBA Float" + ) + formatable: Formatable = Field( + default_factory=Formatable, + title="Value" + ) diff --git a/server_addon/hiero/server/settings/create_plugins.py b/server_addon/hiero/server/settings/create_plugins.py new file mode 100644 index 0000000000..daec4a7cea --- /dev/null +++ b/server_addon/hiero/server/settings/create_plugins.py @@ -0,0 +1,97 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CreateShotClipModels(BaseSettingsModel): + hierarchy: str = Field( + "{folder}/{sequence}", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + clipRename: bool = Field( + True, + title="Rename clips" + ) + clipName: str = Field( + "{track}{sequence}{shot}", + title="Clip name template" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "sq01", + title="{sequence}" + ) + track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "sh###", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + + +class CreatorPluginsSettings(BaseSettingsModel): + CreateShotClip: CreateShotClipModels = Field( + default_factory=CreateShotClipModels, + title="Create Shot Clip" + ) + + +DEFAULT_CREATE_SETTINGS = { + "create": { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "clipRename": True, + "clipName": "{track}{sequence}{shot}", + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "sq01", + "track": "{_track_}", + "shot": "sh###", + "vSyncOn": False, + 
"workfileFrameStart": 1001, + "handleStart": 10, + "handleEnd": 10 + } + } +} diff --git a/server_addon/hiero/server/settings/filters.py b/server_addon/hiero/server/settings/filters.py new file mode 100644 index 0000000000..7e2702b3b7 --- /dev/null +++ b/server_addon/hiero/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value diff --git a/server_addon/hiero/server/settings/imageio.py b/server_addon/hiero/server/settings/imageio.py new file mode 100644 index 0000000000..f2c2728057 --- /dev/null +++ b/server_addon/hiero/server/settings/imageio.py @@ -0,0 +1,169 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + + +def ocio_configs_switcher_enum(): + return [ + {"value": "nuke-default", "label": "nuke-default"}, + {"value": "spi-vfx", "label": "spi-vfx"}, + {"value": "spi-anim", "label": "spi-anim"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3"}, + {"value": "aces_1.1", "label": "aces_1.1"}, + {"value": "aces_1.2", "label": "aces_1.2"}, + {"value": "aces_1.3", "label": "aces_1.3"}, + {"value": "custom", "label": "custom"} + ] + + +class WorkfileColorspaceSettings(BaseSettingsModel): + """Hiero workfile colorspace preset. """ + """# TODO: enhance settings with host api: + we need to add mapping to resolve properly keys. 
+ Hiero is expecting camel case key names, + but for better code consistency we are using snake_case: + + ocio_config = ocioConfigName + working_space_name = workingSpace + int_16_name = sixteenBitLut + int_8_name = eightBitLut + float_name = floatLut + log_name = logLut + viewer_name = viewerLut + thumbnail_name = thumbnailLut + """ + + ocioConfigName: str = Field( + title="OpenColorIO Config", + description="Switch between OCIO configs", + enum_resolver=ocio_configs_switcher_enum, + conditionalEnum=True + ) + workingSpace: str = Field( + title="Working Space" + ) + viewerLut: str = Field( + title="Viewer" + ) + eightBitLut: str = Field( + title="8-bit files" + ) + sixteenBitLut: str = Field( + title="16-bit files" + ) + logLut: str = Field( + title="Log files" + ) + floatLut: str = Field( + title="Float files" + ) + thumbnailLut: str = Field( + title="Thumbnails" + ) + monitorOutLut: str = Field( + title="Monitor" + ) + + +class ClipColorspaceRulesItems(BaseSettingsModel): + _layout = "expanded" + + regex: str = Field("", title="Regex expression") + colorspace: str = Field("", title="Colorspace") + + +class RegexInputsModel(BaseSettingsModel): + inputs: list[ClipColorspaceRulesItems] = Field( + default_factory=list, + title="Inputs" + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): + """Hiero color management project settings. """ + _isGroup: bool = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + workfile: WorkfileColorspaceSettings = Field( + default_factory=WorkfileColorspaceSettings, + title="Workfile" + ) + """# TODO: enhance settings with host api: + - old settings are using `regexInputs` key but we + need to rename to `regex_inputs` + - no need for `inputs` middle part.
It can stay + directly on `regex_inputs` + """ + regexInputs: RegexInputsModel = Field( + default_factory=RegexInputsModel, + title="Assign colorspace to clips via rules" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "workfile": { + "ocioConfigName": "nuke-default", + "workingSpace": "linear", + "viewerLut": "sRGB", + "eightBitLut": "sRGB", + "sixteenBitLut": "sRGB", + "logLut": "Cineon", + "floatLut": "linear", + "thumbnailLut": "sRGB", + "monitorOutLut": "sRGB" + }, + "regexInputs": { + "inputs": [ + { + "regex": "[^-a-zA-Z0-9](plateRef).*(?=mp4)", + "colorspace": "sRGB" + } + ] + } +} diff --git a/server_addon/hiero/server/settings/loader_plugins.py b/server_addon/hiero/server/settings/loader_plugins.py new file mode 100644 index 0000000000..83b3564c2a --- /dev/null +++ b/server_addon/hiero/server/settings/loader_plugins.py @@ -0,0 +1,38 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class LoadClipModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + clip_name_template: str = Field( + title="Clip name template" + ) + + +class LoaderPuginsModel(BaseSettingsModel): + LoadClip: LoadClipModel = Field( + default_factory=LoadClipModel, + title="Load Clip" + ) + + +DEFAULT_LOADER_PLUGINS_SETTINGS = { + "LoadClip": { + "enabled": True, + "product_types": [ + "render2d", + "source", + "plate", + "render", + "review" + ], + "clip_name_template": "{folder[name]}_{product[name]}_{representation}" + } +} diff --git a/server_addon/hiero/server/settings/main.py b/server_addon/hiero/server/settings/main.py new file mode 100644 index 0000000000..47f8110c22 --- /dev/null +++ b/server_addon/hiero/server/settings/main.py @@ -0,0 +1,64 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + +from .imageio import ( + ImageIOSettings, + DEFAULT_IMAGEIO_SETTINGS +) +from .create_plugins import ( + CreatorPluginsSettings, + DEFAULT_CREATE_SETTINGS +) +from .loader_plugins import ( + LoaderPuginsModel, + DEFAULT_LOADER_PLUGINS_SETTINGS +) +from .publish_plugins import ( + PublishPuginsModel, + DEFAULT_PUBLISH_PLUGIN_SETTINGS +) +from .scriptsmenu import ( + ScriptsmenuSettings, + DEFAULT_SCRIPTSMENU_SETTINGS +) +from .filters import PublishGUIFilterItemModel + + +class HieroSettings(BaseSettingsModel): + """Hiero addon settings.""" + + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (imageio)", + ) + + create: CreatorPluginsSettings = Field( + default_factory=CreatorPluginsSettings, + title="Creator Plugins", + ) + load: LoaderPuginsModel = Field( + default_factory=LoaderPuginsModel, + title="Loader plugins" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins" + ) + scriptsmenu: ScriptsmenuSettings = Field( + default_factory=ScriptsmenuSettings, + title="Scripts Menu Definition", + ) + filters: list[PublishGUIFilterItemModel] = Field( + default_factory=list + ) + + +DEFAULT_VALUES = { + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "create": DEFAULT_CREATE_SETTINGS, + "load": DEFAULT_LOADER_PLUGINS_SETTINGS, + "publish": DEFAULT_PUBLISH_PLUGIN_SETTINGS, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "filters": [], +} diff --git a/server_addon/hiero/server/settings/publish_plugins.py b/server_addon/hiero/server/settings/publish_plugins.py new file mode 100644 index 0000000000..a85e62724b --- /dev/null +++
b/server_addon/hiero/server/settings/publish_plugins.py @@ -0,0 +1,48 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CollectInstanceVersionModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + + +class ExtractReviewCutUpVideoModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + tags_addition: list[str] = Field( + default_factory=list, + title="Additional tags" + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectInstanceVersion: CollectInstanceVersionModel = Field( + default_factory=CollectInstanceVersionModel, + title="Collect Instance Version" + ) + """# TODO: enhance settings with host api: + Rename class name and plugin name + to match title (it makes more sense) + """ + ExtractReviewCutUpVideo: ExtractReviewCutUpVideoModel = Field( + default_factory=ExtractReviewCutUpVideoModel, + title="Extract Review Trim" + ) + + +DEFAULT_PUBLISH_PLUGIN_SETTINGS = { + "CollectInstanceVersion": { + "enabled": False, + }, + "ExtractReviewCutUpVideo": { + "enabled": True, + "tags_addition": [ + "review" + ] + } +} diff --git a/server_addon/hiero/server/settings/scriptsmenu.py b/server_addon/hiero/server/settings/scriptsmenu.py new file mode 100644 index 0000000000..51cb088298 --- /dev/null +++ b/server_addon/hiero/server/settings/scriptsmenu.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + + +class ScriptsmenuSettings(BaseSettingsModel): + """Hiero script menu project settings.""" + _isGroup = True + + """# TODO: enhance settings with host api: + - in api rename key `name` to `menu_name` + """ + name: str = Field(title="Menu name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Definition", + description="Scriptsmenu Items Definition") + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "sourcetype": "python", + "title": "OpenPype Docs", + "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_hiero')", + "tooltip": "Open the OpenPype Hiero user doc page" + } + ] +} diff --git a/server_addon/hiero/server/version.py b/server_addon/hiero/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/hiero/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/houdini/server/__init__.py b/server_addon/houdini/server/__init__.py new file mode 100644 index 0000000000..870ec2d0b7 --- /dev/null +++ b/server_addon/houdini/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import HoudiniSettings, DEFAULT_VALUES + + +class Houdini(BaseServerAddon): + name = "houdini" + title = "Houdini" + version = __version__ + settings_model: Type[HoudiniSettings] = HoudiniSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/houdini/server/settings/__init__.py b/server_addon/houdini/server/settings/__init__.py new file mode 100644 index 0000000000..9fd2678925
--- /dev/null +++ b/server_addon/houdini/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HoudiniSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "HoudiniSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/houdini/server/settings/imageio.py b/server_addon/houdini/server/settings/imageio.py new file mode 100644 index 0000000000..88aa40ecd6 --- /dev/null +++ b/server_addon/houdini/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class HoudiniImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/houdini/server/settings/main.py b/server_addon/houdini/server/settings/main.py new file mode 100644 index 0000000000..fdb6838f5c --- /dev/null +++ b/server_addon/houdini/server/settings/main.py @@ -0,0 +1,79 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + MultiplatformPathListModel, +) + +from .imageio import HoudiniImageIOModel +from .publish_plugins import ( + PublishPluginsModel, + CreatePluginsModel, + DEFAULT_HOUDINI_PUBLISH_SETTINGS, + DEFAULT_HOUDINI_CREATE_SETTINGS +) + + +class ShelfToolsModel(BaseSettingsModel): + name: str = Field(title="Name") + help: str = Field(title="Help text") + script: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Script Path" + ) + icon: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Icon Path" + ) + + +class ShelfDefinitionModel(BaseSettingsModel): + _layout = "expanded" + shelf_name: str = Field(title="Shelf name") + tools_list: list[ShelfToolsModel] = Field( + default_factory=list, + title="Shelf Tools" + ) + + +class ShelvesModel(BaseSettingsModel): + _layout = "expanded" + shelf_set_name: str = Field(title="Shelf set name") + + shelf_set_source_path: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Shelf Set Path (optional)" + ) + + shelf_definition: list[ShelfDefinitionModel] = Field( + default_factory=list, + title="Shelf Definitions" + ) + + +class HoudiniSettings(BaseSettingsModel): + imageio: HoudiniImageIOModel = Field( + default_factory=HoudiniImageIOModel, + title="Color Management (ImageIO)" + ) + shelves: list[ShelvesModel] = Field( + default_factory=list, + title="Houdini Scripts Shelves", + ) + + publish: PublishPluginsModel =
Field( + default_factory=PublishPluginsModel, + title="Publish Plugins", + ) + + create: CreatePluginsModel = Field( + default_factory=CreatePluginsModel, + title="Creator Plugins", + ) + + +DEFAULT_VALUES = { + "shelves": [], + "create": DEFAULT_HOUDINI_CREATE_SETTINGS, + "publish": DEFAULT_HOUDINI_PUBLISH_SETTINGS +} diff --git a/server_addon/houdini/server/settings/publish_plugins.py b/server_addon/houdini/server/settings/publish_plugins.py new file mode 100644 index 0000000000..7d35d7e634 --- /dev/null +++ b/server_addon/houdini/server/settings/publish_plugins.py @@ -0,0 +1,156 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +# Creator Plugins
class CreatorModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + default_variants: list[str] = Field( + title="Default Products", + default_factory=list, + ) + + +class CreateArnoldAssModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + default_variants: list[str] = Field( + title="Default Products", + default_factory=list, + ) + ext: str = Field(title="Extension") + + +class CreatePluginsModel(BaseSettingsModel): + CreateArnoldAss: CreateArnoldAssModel = Field( + default_factory=CreateArnoldAssModel, + title="Create Arnold Ass") + CreateAlembicCamera: CreatorModel = Field( + default_factory=CreatorModel, + title="Create Alembic Camera") + CreateCompositeSequence: CreatorModel = Field( + default_factory=CreatorModel, + title="Create Composite Sequence") + CreatePointCache: CreatorModel = Field( + default_factory=CreatorModel, + title="Create Point Cache") + CreateRedshiftROP: CreatorModel = Field( + default_factory=CreatorModel, + title="Create RedshiftROP") + CreateRemotePublish: CreatorModel = Field( + default_factory=CreatorModel, + title="Create Remote Publish") + CreateVDBCache: CreatorModel = Field( + default_factory=CreatorModel, + title="Create VDB Cache") + CreateUSD: CreatorModel = Field( + default_factory=CreatorModel, + title="Create USD") + CreateUSDModel: CreatorModel = Field( + default_factory=CreatorModel, + title="Create USD model") + USDCreateShadingWorkspace: CreatorModel = Field( + default_factory=CreatorModel, + title="Create USD shading workspace") + CreateUSDRender: CreatorModel = Field( + default_factory=CreatorModel, + title="Create USD render") + + +DEFAULT_HOUDINI_CREATE_SETTINGS = { + "CreateArnoldAss": { + "enabled": True, + "default_variants": ["Main"], + "ext": ".ass" + }, + "CreateAlembicCamera": { + "enabled": True, + "default_variants": ["Main"] + }, + "CreateCompositeSequence": { + "enabled": True, + "default_variants": ["Main"] + }, + "CreatePointCache": { + "enabled": True, + "default_variants": ["Main"] + }, + "CreateRedshiftROP": { + "enabled": True, + "default_variants": ["Main"] + }, + "CreateRemotePublish": { + "enabled": True, + "default_variants": ["Main"] + }, + "CreateVDBCache": { + "enabled": True, + "default_variants": ["Main"] + }, + "CreateUSD": { + "enabled": False, + "default_variants": ["Main"] + }, + "CreateUSDModel": { + "enabled": False, + "default_variants": ["Main"] + }, + "USDCreateShadingWorkspace": { + "enabled": False, + "default_variants": ["Main"] + }, + "CreateUSDRender": { + "enabled": False, + "default_variants": ["Main"] + }, +} + + +# Publish Plugins
class ValidateWorkfilePathsModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + node_types: list[str] = Field( + default_factory=list, + title="Node Types" + ) + prohibited_vars: list[str] =
Field( + default_factory=list, + title="Prohibited Variables" + ) + + +class ValidateContainersModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class PublishPluginsModel(BaseSettingsModel): + ValidateWorkfilePaths: ValidateWorkfilePathsModel = Field( + default_factory=ValidateWorkfilePathsModel, + title="Validate workfile paths settings.") + ValidateContainers: ValidateContainersModel = Field( + default_factory=ValidateContainersModel, + title="Validate Latest Containers.") + + +DEFAULT_HOUDINI_PUBLISH_SETTINGS = { + "ValidateWorkfilePaths": { + "enabled": True, + "optional": True, + "node_types": [ + "file", + "alembic" + ], + "prohibited_vars": [ + "$HIP", + "$JOB" + ] + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/houdini/server/version.py b/server_addon/houdini/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/houdini/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/kitsu/server/__init__.py b/server_addon/kitsu/server/__init__.py new file mode 100644 index 0000000000..69cf812dea --- /dev/null +++ b/server_addon/kitsu/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import KitsuSettings, DEFAULT_VALUES + + +class KitsuAddon(BaseServerAddon): + name = "kitsu" + title = "Kitsu" + version = __version__ + settings_model: Type[KitsuSettings] = KitsuSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/kitsu/server/settings.py b/server_addon/kitsu/server/settings.py new file mode 100644 index 0000000000..a4d10d889d --- /dev/null +++ b/server_addon/kitsu/server/settings.py @@ -0,0 +1,112 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class EntityPattern(BaseSettingsModel): + episode: str = Field(title="Episode") + sequence: str = Field(title="Sequence") + shot: str = Field(title="Shot") + + +def _status_change_cond_enum(): + return [ + {"value": "equal", "label": "Equal"}, + {"value": "not_equal", "label": "Not equal"} + ] + + +class StatusChangeCondition(BaseSettingsModel): + condition: str = Field( + "equal", + enum_resolver=_status_change_cond_enum, + title="Condition" + ) + short_name: str = Field("", title="Short name") + + +class StatusChangeProductTypeRequirementModel(BaseSettingsModel): + condition: str = Field( + "equal", + enum_resolver=_status_change_cond_enum, + title="Condition" + ) + product_type: str = Field("", title="Product type") + + +class StatusChangeConditionsModel(BaseSettingsModel): + status_conditions: list[StatusChangeCondition] = Field( + default_factory=list, + title="Status conditions" + ) + product_type_requirements: list[StatusChangeProductTypeRequirementModel] = Field( + default_factory=list, + title="Product type requirements") + + +class CustomCommentTemplateModel(BaseSettingsModel): + enabled: bool = Field(True) + comment_template: str = Field("", title="Custom comment") + + +class IntegrateKitsuNotes(BaseSettingsModel): + """Kitsu supports markdown and here you can create a custom comment template. + + You can use data from your publishing instance's data. 
+ """ + + set_status_note: bool = Field(title="Set status on note") + note_status_shortname: str = Field(title="Note shortname") + status_change_conditions: StatusChangeConditionsModel = Field( + default_factory=StatusChangeConditionsModel, + title="Status change conditions" + ) + custom_comment_template: CustomCommentTemplateModel = Field( + default_factory=CustomCommentTemplateModel, + title="Custom Comment Template", + ) + + +class PublishPlugins(BaseSettingsModel): + IntegrateKitsuNote: IntegrateKitsuNotes = Field( + default_factory=IntegrateKitsuNotes, + title="Integrate Kitsu Note" + ) + + +class KitsuSettings(BaseSettingsModel): + server: str = Field( + "", + title="Kitsu Server", + scope=["studio"], + ) + entities_naming_pattern: EntityPattern = Field( + default_factory=EntityPattern, + title="Entities naming pattern", + ) + publish: PublishPlugins = Field( + default_factory=PublishPlugins, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "entities_naming_pattern": { + "episode": "E##", + "sequence": "SQ##", + "shot": "SH##" + }, + "publish": { + "IntegrateKitsuNote": { + "set_status_note": False, + "note_status_shortname": "wfa", + "status_change_conditions": { + "status_conditions": [], + "product_type_requirements": [] + }, + "custom_comment_template": { + "enabled": False, + "comment_template": "{comment}\n\n| | |\n|--|--|\n| version| `{version}` |\n| product type | `{product[type]}` |\n| name | `{name}` |" + } + } + } +} diff --git a/server_addon/kitsu/server/version.py b/server_addon/kitsu/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/kitsu/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/max/server/__init__.py b/server_addon/max/server/__init__.py new file mode 100644 index 0000000000..31c694a084 --- /dev/null +++ b/server_addon/max/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import MaxSettings, DEFAULT_VALUES + + +class MaxAddon(BaseServerAddon): + name = "max" + title = "Max" + version = __version__ + settings_model: Type[MaxSettings] = MaxSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/max/server/settings/__init__.py b/server_addon/max/server/settings/__init__.py new file mode 100644 index 0000000000..986b1903a5 --- /dev/null +++ b/server_addon/max/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + MaxSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "MaxSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/max/server/settings/imageio.py b/server_addon/max/server/settings/imageio.py new file mode 100644 index 0000000000..5e46104fa7 --- /dev/null +++ b/server_addon/max/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File 
extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/max/server/settings/main.py b/server_addon/max/server/settings/main.py new file mode 100644 index 0000000000..7f4561cbb1 --- /dev/null +++ b/server_addon/max/server/settings/main.py @@ -0,0 +1,60 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from .imageio import ImageIOSettings +from .render_settings import ( + RenderSettingsModel, DEFAULT_RENDER_SETTINGS +) +from .publishers import ( + PublishersModel, DEFAULT_PUBLISH_SETTINGS +) + + +class PRTAttributesModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Attribute") + + +class PointCloudSettings(BaseSettingsModel): + attribute: list[PRTAttributesModel] = Field( + default_factory=list, title="Channel Attribute") + + +class MaxSettings(BaseSettingsModel): + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (ImageIO)" + ) + RenderSettings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, + title="Render Settings" + ) + PointCloud: PointCloudSettings = Field( + default_factory=PointCloudSettings, + title="Point Cloud" + ) + publish: PublishersModel = Field( + default_factory=PublishersModel, + title="Publish Plugins") + + +DEFAULT_VALUES = { + "RenderSettings": DEFAULT_RENDER_SETTINGS, + "PointCloud": { + "attribute": [ + {"name": "Age", "value": "age"}, + {"name": "Radius", "value": "radius"}, + {"name": "Position", "value": "position"}, + {"name": "Rotation", "value": "rotation"}, + {"name": "Scale", "value": "scale"}, + {"name": "Velocity", "value": "velocity"}, + {"name": "Color", "value": "color"}, + {"name": "TextureCoordinate", "value": "texcoord"}, + {"name": "MaterialID", "value": "matid"}, + {"name": "custFloats", "value": "custFloats"}, + {"name": "custVecs", "value": "custVecs"}, + ] + }, + "publish": DEFAULT_PUBLISH_SETTINGS + +} diff --git a/server_addon/max/server/settings/publishers.py b/server_addon/max/server/settings/publishers.py new file mode 100644 index 0000000000..a695b85e89 --- /dev/null +++ b/server_addon/max/server/settings/publishers.py @@ -0,0 +1,26 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class PublishersModel(BaseSettingsModel): + ValidateFrameRange: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Frame Range", + section="Validators" + ) + + +DEFAULT_PUBLISH_SETTINGS = { + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/max/server/settings/render_settings.py b/server_addon/max/server/settings/render_settings.py new file mode 100644 index 0000000000..6c236d9f12 
--- /dev/null +++ b/server_addon/max/server/settings/render_settings.py @@ -0,0 +1,49 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +def aov_separators_enum(): + return [ + {"value": "dash", "label": "- (dash)"}, + {"value": "underscore", "label": "_ (underscore)"}, + {"value": "dot", "label": ". (dot)"} + ] + + +def image_format_enum(): + """Return enumerator for image output formats.""" + return [ + {"label": "bmp", "value": "bmp"}, + {"label": "exr", "value": "exr"}, + {"label": "tif", "value": "tif"}, + {"label": "tiff", "value": "tiff"}, + {"label": "jpg", "value": "jpg"}, + {"label": "png", "value": "png"}, + {"label": "tga", "value": "tga"}, + {"label": "dds", "value": "dds"} + ] + + +class RenderSettingsModel(BaseSettingsModel): + default_render_image_folder: str = Field( + title="Default render image folder" + ) + aov_separator: str = Field( + "underscore", + title="AOV Separator character", + enum_resolver=aov_separators_enum + ) + image_format: str = Field( + enum_resolver=image_format_enum, + title="Output Image Format" + ) + multipass: bool = Field(title="multipass") + + +DEFAULT_RENDER_SETTINGS = { + "default_render_image_folder": "renders/3dsmax", + "aov_separator": "underscore", + "image_format": "png", + "multipass": True +} diff --git a/server_addon/max/server/version.py b/server_addon/max/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/max/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/openpype/modules/ftrack/python2_vendor/arrow/LICENSE b/server_addon/maya/LICENCE similarity index 99% rename from openpype/modules/ftrack/python2_vendor/arrow/LICENSE rename to server_addon/maya/LICENCE index 2bef500de7..261eeb9e9f 100644 --- a/openpype/modules/ftrack/python2_vendor/arrow/LICENSE +++ b/server_addon/maya/LICENCE @@ -186,7 +186,7 @@ same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright 2019 Chris Smith + Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
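Note on the recurring pattern above: every addon in this PR follows the same server-settings recipe — pydantic models carrying UI metadata in `Field`, a `validator` that rejects duplicate rule names, and defaults kept as plain dicts that the addon validates inside `get_default_settings`. Below is a minimal runnable sketch of that recipe, assuming pydantic v1 semantics and using plain `BaseModel` as a stand-in for `ayon_server.settings.BaseSettingsModel`; the `ensure_unique_names` helper here is a hypothetical re-implementation, not the server's actual code.

from pydantic import BaseModel, Field, validator


def ensure_unique_names(items):
    # Stand-in for ayon_server.settings.validators.ensure_unique_names:
    # reject any two items sharing the same `name`.
    names = [item.name for item in items]
    duplicates = {name for name in names if names.count(name) > 1}
    if duplicates:
        raise ValueError("Duplicated names: {}".format(sorted(duplicates)))


class FileRule(BaseModel):
    # `title` is extra metadata the AYON server uses to render settings UI.
    name: str = Field("", title="Rule name")
    pattern: str = Field("", title="Regex pattern")
    colorspace: str = Field("", title="Colorspace name")
    ext: str = Field("", title="File extension")


class FileRules(BaseModel):
    activate_host_rules: bool = Field(False)
    rules: list[FileRule] = Field(default_factory=list, title="Rules")

    @validator("rules")
    def validate_unique_outputs(cls, value):
        # Runs on load, so bad defaults or overrides fail fast.
        ensure_unique_names(value)
        return value


# Defaults live in plain dicts (the DEFAULT_*_SETTINGS constants) and are
# validated when the addon instantiates its settings model:
defaults = {
    "activate_host_rules": False,
    "rules": [
        {"name": "exr", "pattern": ".*\\.exr$", "colorspace": "linear", "ext": "exr"}
    ],
}
print(FileRules(**defaults))

Running the sketch prints the validated model; duplicating a rule name raises a ValidationError, which is the guard the `validate_unique_outputs` validators in these addons provide.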
diff --git a/server_addon/maya/README.md b/server_addon/maya/README.md new file mode 100644 index 0000000000..c65c09fba0 --- /dev/null +++ b/server_addon/maya/README.md @@ -0,0 +1,4 @@ +Maya Integration Addon +====================== + +WIP diff --git a/server_addon/maya/server/__init__.py b/server_addon/maya/server/__init__.py new file mode 100644 index 0000000000..8784427dcf --- /dev/null +++ b/server_addon/maya/server/__init__.py @@ -0,0 +1,16 @@ +"""Maya Addon Module""" +from ayon_server.addons import BaseServerAddon + +from .settings.main import MayaSettings, DEFAULT_MAYA_SETTING +from .version import __version__ + + +class MayaAddon(BaseServerAddon): + name = "maya" + title = "Maya" + version = __version__ + settings_model = MayaSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_MAYA_SETTING) diff --git a/openpype/modules/ftrack/python2_vendor/arrow/tests/__init__.py b/server_addon/maya/server/settings/__init__.py similarity index 100% rename from openpype/modules/ftrack/python2_vendor/arrow/tests/__init__.py rename to server_addon/maya/server/settings/__init__.py diff --git a/server_addon/maya/server/settings/creators.py b/server_addon/maya/server/settings/creators.py new file mode 100644 index 0000000000..9b97b92e59 --- /dev/null +++ b/server_addon/maya/server/settings/creators.py @@ -0,0 +1,408 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateLookModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + make_tx: bool = Field(title="Make tx files") + rs_tex: bool = Field(title="Make Redshift texture files") + default_variants: list[str] = Field( + default_factory=list, title="Default Products" + ) + + +class BasicCreatorModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateUnrealStaticMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + static_mesh_prefixes: str = Field("S", title="Static Mesh Prefix") + collision_prefixes: list[str] = Field( + default_factory=list, + title="Collision Prefixes" + ) + + +class CreateUnrealSkeletalMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + default_variants: list[str] = Field( + default_factory=list, title="Default Products") + joint_hints: str = Field("jnt_org", title="Joint root hint") + + +class CreateMultiverseLookModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + publish_mip_map: bool = Field(title="publish_mip_map") + + +class BasicExportMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateAnimationModel(BaseSettingsModel): + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + include_parent_hierarchy: bool = Field( + title="Include Parent Hierarchy") + include_user_defined_attributes: bool = Field( + title="Include User Defined Attributes") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreatePointCacheModel(BaseSettingsModel): + enabled: bool = 
Field(title="Enabled") + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + include_user_defined_attributes: bool = Field( + title="Include User Defined Attributes" + ) + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateProxyAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + default_variants: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateAssModel(BasicCreatorModel): + expandProcedurals: bool = Field(title="Expand Procedurals") + motionBlur: bool = Field(title="Motion Blur") + motionBlurKeys: int = Field(2, title="Motion Blur Keys") + motionBlurLength: float = Field(0.5, title="Motion Blur Length") + maskOptions: bool = Field(title="Mask Options") + maskCamera: bool = Field(title="Mask Camera") + maskLight: bool = Field(title="Mask Light") + maskShape: bool = Field(title="Mask Shape") + maskShader: bool = Field(title="Mask Shader") + maskOverride: bool = Field(title="Mask Override") + maskDriver: bool = Field(title="Mask Driver") + maskFilter: bool = Field(title="Mask Filter") + maskColor_manager: bool = Field(title="Mask Color Manager") + maskOperator: bool = Field(title="Mask Operator") + + +class CreateReviewModel(BasicCreatorModel): + useMayaTimeline: bool = Field(title="Use Maya Timeline for Frame Range.") + + +class CreateVrayProxyModel(BaseSettingsModel): + enabled: bool = Field(True) + vrmesh: bool = Field(title="VrMesh") + alembic: bool = Field(title="Alembic") + default_variants: list[str] = Field( + default_factory=list, title="Default Products") + + +class CreatorsModel(BaseSettingsModel): + CreateLook: CreateLookModel = Field( + default_factory=CreateLookModel, + title="Create Look" + ) + CreateRender: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Render" + ) + # "-" is not compatible in the new model + CreateUnrealStaticMesh: CreateUnrealStaticMeshModel = Field( + default_factory=CreateUnrealStaticMeshModel, + title="Create Unreal_Static Mesh" + ) + # "-" is not compatible in the new model + CreateUnrealSkeletalMesh: CreateUnrealSkeletalMeshModel = Field( + default_factory=CreateUnrealSkeletalMeshModel, + title="Create Unreal_Skeletal Mesh" + ) + CreateMultiverseLook: CreateMultiverseLookModel = Field( + default_factory=CreateMultiverseLookModel, + title="Create Multiverse Look" + ) + CreateAnimation: CreateAnimationModel = Field( + default_factory=CreateAnimationModel, + title="Create Animation" + ) + CreateModel: BasicExportMeshModel = Field( + default_factory=BasicExportMeshModel, + title="Create Model" + ) + CreatePointCache: CreatePointCacheModel = Field( + default_factory=CreatePointCacheModel, + title="Create Point Cache" + ) + CreateProxyAlembic: CreateProxyAlembicModel = Field( + default_factory=CreateProxyAlembicModel, + title="Create Proxy Alembic" + ) + CreateMultiverseUsd: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD" + ) + CreateMultiverseUsdComp: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD Composition" + ) + CreateMultiverseUsdOver: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD Override" + ) + CreateAss: CreateAssModel = Field( + default_factory=CreateAssModel, + title="Create 
Ass" + ) + CreateAssembly: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Assembly" + ) + CreateCamera: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Camera" + ) + CreateLayout: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Layout" + ) + CreateMayaScene: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Maya Scene" + ) + CreateRenderSetup: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Render Setup" + ) + CreateReview: CreateReviewModel = Field( + default_factory=CreateReviewModel, + title="Create Review" + ) + CreateRig: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Rig" + ) + CreateSetDress: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Set Dress" + ) + CreateVrayProxy: CreateVrayProxyModel = Field( + default_factory=CreateVrayProxyModel, + title="Create VRay Proxy" + ) + CreateVRayScene: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create VRay Scene" + ) + CreateYetiRig: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Yeti Rig" + ) + + +DEFAULT_CREATORS_SETTINGS = { + "CreateLook": { + "enabled": True, + "make_tx": True, + "rs_tex": False, + "default_variants": [ + "Main" + ] + }, + "CreateRender": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateUnrealStaticMesh": { + "enabled": True, + "default_variants": [ + "", + "_Main" + ], + "static_mesh_prefixes": "S", + "collision_prefixes": [ + "UBX", + "UCP", + "USP", + "UCX" + ] + }, + "CreateUnrealSkeletalMesh": { + "enabled": True, + "default_variants": [], + "joint_hints": "jnt_org" + }, + "CreateMultiverseLook": { + "enabled": True, + "publish_mip_map": True + }, + "CreateAnimation": { + "write_color_sets": False, + "write_face_sets": False, + "include_parent_hierarchy": False, + "include_user_defined_attributes": False, + "default_variants": [ + "Main" + ] + }, + "CreateModel": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "default_variants": [ + "Main", + "Proxy", + "Sculpt" + ] + }, + "CreatePointCache": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "include_user_defined_attributes": False, + "default_variants": [ + "Main" + ] + }, + "CreateProxyAlembic": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "default_variants": [ + "Main" + ] + }, + "CreateMultiverseUsd": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateMultiverseUsdComp": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateMultiverseUsdOver": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateAss": { + "enabled": True, + "default_variants": [ + "Main" + ], + "expandProcedurals": False, + "motionBlur": True, + "motionBlurKeys": 2, + "motionBlurLength": 0.5, + "maskOptions": False, + "maskCamera": False, + "maskLight": False, + "maskShape": False, + "maskShader": False, + "maskOverride": False, + "maskDriver": False, + "maskFilter": False, + "maskColor_manager": False, + "maskOperator": False + }, + "CreateAssembly": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateCamera": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateLayout": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateMayaScene": { + "enabled": True, + "default_variants": [
"Main" + ] + }, + "CreateRenderSetup": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateReview": { + "enabled": True, + "default_variants": [ + "Main" + ], + "useMayaTimeline": True + }, + "CreateRig": { + "enabled": True, + "default_variants": [ + "Main", + "Sim", + "Cloth" + ] + }, + "CreateSetDress": { + "enabled": True, + "default_variants": [ + "Main", + "Anim" + ] + }, + "CreateVrayProxy": { + "enabled": True, + "vrmesh": True, + "alembic": True, + "default_variants": [ + "Main" + ] + }, + "CreateVRayScene": { + "enabled": True, + "default_variants": [ + "Main" + ] + }, + "CreateYetiRig": { + "enabled": True, + "default_variants": [ + "Main" + ] + } +} diff --git a/server_addon/maya/server/settings/explicit_plugins_loading.py b/server_addon/maya/server/settings/explicit_plugins_loading.py new file mode 100644 index 0000000000..394adb728f --- /dev/null +++ b/server_addon/maya/server/settings/explicit_plugins_loading.py @@ -0,0 +1,429 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class PluginsModel(BaseSettingsModel): + _layout = "expanded" + enabled: bool = Field(title="Enabled") + name: str = Field("", title="Name") + + +class ExplicitPluginsLoadingModel(BaseSettingsModel): + """Maya Explicit Plugins Loading.""" + _isGroup: bool = True + enabled: bool = Field(title="enabled") + plugins_to_load: list[PluginsModel] = Field( + default_factory=list, title="Plugins To Load" + ) + + +DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS = { + "enabled": False, + "plugins_to_load": [ + { + "enabled": False, + "name": "AbcBullet" + }, + { + "enabled": True, + "name": "AbcExport" + }, + { + "enabled": True, + "name": "AbcImport" + }, + { + "enabled": False, + "name": "animImportExport" + }, + { + "enabled": False, + "name": "ArubaTessellator" + }, + { + "enabled": False, + "name": "ATFPlugin" + }, + { + "enabled": False, + "name": "atomImportExport" + }, + { + "enabled": False, + "name": "AutodeskPacketFile" + }, + { + "enabled": False, + "name": "autoLoader" + }, + { + "enabled": False, + "name": "bifmeshio" + }, + { + "enabled": False, + "name": "bifrostGraph" + }, + { + "enabled": False, + "name": "bifrostshellnode" + }, + { + "enabled": False, + "name": "bifrostvisplugin" + }, + { + "enabled": False, + "name": "blast2Cmd" + }, + { + "enabled": False, + "name": "bluePencil" + }, + { + "enabled": False, + "name": "Boss" + }, + { + "enabled": False, + "name": "bullet" + }, + { + "enabled": True, + "name": "cacheEvaluator" + }, + { + "enabled": False, + "name": "cgfxShader" + }, + { + "enabled": False, + "name": "cleanPerFaceAssignment" + }, + { + "enabled": False, + "name": "clearcoat" + }, + { + "enabled": False, + "name": "convertToComponentTags" + }, + { + "enabled": False, + "name": "curveWarp" + }, + { + "enabled": False, + "name": "ddsFloatReader" + }, + { + "enabled": True, + "name": "deformerEvaluator" + }, + { + "enabled": False, + "name": "dgProfiler" + }, + { + "enabled": False, + "name": "drawUfe" + }, + { + "enabled": False, + "name": "dx11Shader" + }, + { + "enabled": False, + "name": "fbxmaya" + }, + { + "enabled": False, + "name": "fltTranslator" + }, + { + "enabled": False, + "name": "freeze" + }, + { + "enabled": False, + "name": "Fur" + }, + { + "enabled": False, + "name": "gameFbxExporter" + }, + { + "enabled": False, + "name": "gameInputDevice" + }, + { + "enabled": False, + "name": "GamePipeline" + }, + { + "enabled": False, + "name": "gameVertexCount" + }, + { + "enabled": False, + "name": "geometryReport" + }, + 
{ + "enabled": False, + "name": "geometryTools" + }, + { + "enabled": False, + "name": "glslShader" + }, + { + "enabled": True, + "name": "GPUBuiltInDeformer" + }, + { + "enabled": False, + "name": "gpuCache" + }, + { + "enabled": False, + "name": "hairPhysicalShader" + }, + { + "enabled": False, + "name": "ik2Bsolver" + }, + { + "enabled": False, + "name": "ikSpringSolver" + }, + { + "enabled": False, + "name": "invertShape" + }, + { + "enabled": False, + "name": "lges" + }, + { + "enabled": False, + "name": "lookdevKit" + }, + { + "enabled": False, + "name": "MASH" + }, + { + "enabled": False, + "name": "matrixNodes" + }, + { + "enabled": False, + "name": "mayaCharacterization" + }, + { + "enabled": False, + "name": "mayaHIK" + }, + { + "enabled": False, + "name": "MayaMuscle" + }, + { + "enabled": False, + "name": "mayaUsdPlugin" + }, + { + "enabled": False, + "name": "mayaVnnPlugin" + }, + { + "enabled": False, + "name": "melProfiler" + }, + { + "enabled": False, + "name": "meshReorder" + }, + { + "enabled": True, + "name": "modelingToolkit" + }, + { + "enabled": False, + "name": "mtoa" + }, + { + "enabled": False, + "name": "mtoh" + }, + { + "enabled": False, + "name": "nearestPointOnMesh" + }, + { + "enabled": True, + "name": "objExport" + }, + { + "enabled": False, + "name": "OneClick" + }, + { + "enabled": False, + "name": "OpenEXRLoader" + }, + { + "enabled": False, + "name": "pgYetiMaya" + }, + { + "enabled": False, + "name": "pgyetiVrayMaya" + }, + { + "enabled": False, + "name": "polyBoolean" + }, + { + "enabled": False, + "name": "poseInterpolator" + }, + { + "enabled": False, + "name": "quatNodes" + }, + { + "enabled": False, + "name": "randomizerDevice" + }, + { + "enabled": False, + "name": "redshift4maya" + }, + { + "enabled": True, + "name": "renderSetup" + }, + { + "enabled": False, + "name": "retargeterNodes" + }, + { + "enabled": False, + "name": "RokokoMotionLibrary" + }, + { + "enabled": False, + "name": "rotateHelper" + }, + { + "enabled": False, + "name": "sceneAssembly" + }, + { + "enabled": False, + "name": "shaderFXPlugin" + }, + { + "enabled": False, + "name": "shotCamera" + }, + { + "enabled": False, + "name": "snapTransform" + }, + { + "enabled": False, + "name": "stage" + }, + { + "enabled": True, + "name": "stereoCamera" + }, + { + "enabled": False, + "name": "stlTranslator" + }, + { + "enabled": False, + "name": "studioImport" + }, + { + "enabled": False, + "name": "Substance" + }, + { + "enabled": False, + "name": "substancelink" + }, + { + "enabled": False, + "name": "substancemaya" + }, + { + "enabled": False, + "name": "substanceworkflow" + }, + { + "enabled": False, + "name": "svgFileTranslator" + }, + { + "enabled": False, + "name": "sweep" + }, + { + "enabled": False, + "name": "testify" + }, + { + "enabled": False, + "name": "tiffFloatReader" + }, + { + "enabled": False, + "name": "timeSliderBookmark" + }, + { + "enabled": False, + "name": "Turtle" + }, + { + "enabled": False, + "name": "Type" + }, + { + "enabled": False, + "name": "udpDevice" + }, + { + "enabled": False, + "name": "ufeSupport" + }, + { + "enabled": False, + "name": "Unfold3D" + }, + { + "enabled": False, + "name": "VectorRender" + }, + { + "enabled": False, + "name": "vrayformaya" + }, + { + "enabled": False, + "name": "vrayvolumegrid" + }, + { + "enabled": False, + "name": "xgenToolkit" + }, + { + "enabled": False, + "name": "xgenVray" + } + ] +} diff --git a/server_addon/maya/server/settings/imageio.py b/server_addon/maya/server/settings/imageio.py new file mode 100644 index 
0000000000..7512bfe253 --- /dev/null +++ b/server_addon/maya/server/settings/imageio.py @@ -0,0 +1,126 @@ +"""Providing models and setting values for image IO in Maya. + +Note: Names were changed to get rid of the versions in class names. +""" +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ColorManagementPreferenceV2Model(BaseSettingsModel): + """Color Management Preference v2 (Maya 2022+).""" + _layout = "expanded" + + enabled: bool = Field(True, title="Use Color Management Preference v2") + + renderSpace: str = Field(title="Rendering Space") + displayName: str = Field(title="Display") + viewName: str = Field(title="View") + + +class ColorManagementPreferenceModel(BaseSettingsModel): + """Color Management Preference (legacy).""" + _layout = "expanded" + + renderSpace: str = Field(title="Rendering Space") + viewTransform: str = Field(title="Viewer Transform ") + + +class WorkfileImageIOModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + renderSpace: str = Field(title="Rendering Space") + displayName: str = Field(title="Display") + viewName: str = Field(title="View") + + +class ImageIOSettings(BaseSettingsModel): + """Maya color management project settings. + + Todo: What to do with color management preferences version? 
+ """ + + _isGroup: bool = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + workfile: WorkfileImageIOModel = Field( + default_factory=WorkfileImageIOModel, + title="Workfile" + ) + # Deprecated + colorManagementPreference_v2: ColorManagementPreferenceV2Model = Field( + default_factory=ColorManagementPreferenceV2Model, + title="Color Management Preference v2 (Maya 2022+)" + ) + colorManagementPreference: ColorManagementPreferenceModel = Field( + default_factory=ColorManagementPreferenceModel, + title="Color Management Preference (legacy)" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "activate_host_color_management": True, + "ocio_config": { + "override_global_config": False, + "filepath": [] + }, + "file_rules": { + "activate_host_rules": False, + "rules": [] + }, + "workfile": { + "enabled": False, + "renderSpace": "ACES - ACEScg", + "displayName": "ACES", + "viewName": "sRGB" + }, + "colorManagementPreference_v2": { + "enabled": True, + "renderSpace": "ACEScg", + "displayName": "sRGB", + "viewName": "ACES 1.0 SDR-video" + }, + "colorManagementPreference": { + "renderSpace": "scene-linear Rec 709/sRGB", + "viewTransform": "sRGB gamma" + } +} diff --git a/server_addon/maya/server/settings/include_handles.py b/server_addon/maya/server/settings/include_handles.py new file mode 100644 index 0000000000..3ba6aca66b --- /dev/null +++ b/server_addon/maya/server/settings/include_handles.py @@ -0,0 +1,30 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class IncludeByTaskTypeModel(BaseSettingsModel): + task_type: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + include_handles: bool = Field(True, title="Include handles") + + +class IncludeHandlesModel(BaseSettingsModel): + """Maya dirmap settings.""" + # _layout = "expanded" + include_handles_default: bool = Field( + True, title="Include handles by default" + ) + per_task_type: list[IncludeByTaskTypeModel] = Field( + default_factory=list, + title="Include/exclude handles by task type" + ) + + +DEFAULT_INCLUDE_HANDLES = { + "include_handles_default": False, + "per_task_type": [] +} diff --git a/server_addon/maya/server/settings/loaders.py b/server_addon/maya/server/settings/loaders.py new file mode 100644 index 0000000000..ed6b6fd2ac --- /dev/null +++ b/server_addon/maya/server/settings/loaders.py @@ -0,0 +1,129 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ColorRGBA_uint8 + + +class ColorsSetting(BaseSettingsModel): + model: ColorRGBA_uint8 = Field( + (209, 132, 30, 1.0), title="Model:") + rig: ColorRGBA_uint8 = Field( + (59, 226, 235, 1.0), title="Rig:") + pointcache: ColorRGBA_uint8 = Field( + (94, 209, 30, 1.0), title="Pointcache:") + animation: ColorRGBA_uint8 = Field( + (94, 209, 30, 1.0), title="Animation:") + ass: ColorRGBA_uint8 = Field( + (249, 135, 53, 1.0), title="Arnold StandIn:") + camera: ColorRGBA_uint8 = Field( + (136, 114, 244, 1.0), title="Camera:") + fbx: ColorRGBA_uint8 = Field( + (215, 166, 255, 1.0), title="FBX:") + mayaAscii: ColorRGBA_uint8 = Field( + (67, 174, 255, 1.0), title="Maya Ascii:") + mayaScene: ColorRGBA_uint8 = Field( + (67, 174, 255, 1.0), title="Maya Scene:") + setdress: 
ColorRGBA_uint8 = Field( + (255, 250, 90, 1.0), title="Set Dress:") + layout: ColorRGBA_uint8 = Field( + (255, 250, 90, 1.0), title="Layout:") + vdbcache: ColorRGBA_uint8 = Field( + (249, 54, 0, 1.0), title="VDB Cache:") + vrayproxy: ColorRGBA_uint8 = Field( + (255, 150, 12, 1.0), title="VRay Proxy:") + vrayscene_layer: ColorRGBA_uint8 = Field( + (255, 150, 12, 1.0), title="VRay Scene:") + yeticache: ColorRGBA_uint8 = Field( + (99, 206, 220, 1.0), title="Yeti Cache:") + yetiRig: ColorRGBA_uint8 = Field( + (0, 205, 125, 1.0), title="Yeti Rig:") + + +class ReferenceLoaderModel(BaseSettingsModel): + namespace: str = Field(title="Namespace") + group_name: str = Field(title="Group name") + display_handle: bool = Field(title="Display Handle On Load References") + + +class ImportLoaderModel(BaseSettingsModel): + namespace: str = Field(title="Namespace") + group_name: str = Field(title="Group name") + + +class LoadersModel(BaseSettingsModel): + colors: ColorsSetting = Field( + default_factory=ColorsSetting, + title="Loaded Products Outliner Colors") + + reference_loader: ReferenceLoaderModel = Field( + default_factory=ReferenceLoaderModel, + title="Reference Loader" + ) + + import_loader: ImportLoaderModel = Field( + default_factory=ImportLoaderModel, + title="Import Loader" + ) + + +DEFAULT_LOADERS_SETTING = { + "colors": { + "model": [ + 209, 132, 30, 1.0 + ], + "rig": [ + 59, 226, 235, 1.0 + ], + "pointcache": [ + 94, 209, 30, 1.0 + ], + "animation": [ + 94, 209, 30, 1.0 + ], + "ass": [ + 249, 135, 53, 1.0 + ], + "camera": [ + 136, 114, 244, 1.0 + ], + "fbx": [ + 215, 166, 255, 1.0 + ], + "mayaAscii": [ + 67, 174, 255, 1.0 + ], + "mayaScene": [ + 67, 174, 255, 1.0 + ], + "setdress": [ + 255, 250, 90, 1.0 + ], + "layout": [ + 255, 250, 90, 1.0 + ], + "vdbcache": [ + 249, 54, 0, 1.0 + ], + "vrayproxy": [ + 255, 150, 12, 1.0 + ], + "vrayscene_layer": [ + 255, 150, 12, 1.0 + ], + "yeticache": [ + 99, 206, 220, 1.0 + ], + "yetiRig": [ + 0, 205, 125, 1.0 + ] + }, + "reference_loader": { + "namespace": "{folder[name]}_{product[name]}_##_", + "group_name": "_GRP", + "display_handle": True + }, + "import_loader": { + "namespace": "{folder[name]}_{product[name]}_##_", + "group_name": "_GRP" + } +}
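The loader defaults above use a namespace template: the `{folder[name]}` and `{product[name]}` keys are filled from context, and the `##` run is conventionally replaced by a zero-padded counter so each loaded reference gets a unique namespace. A rough sketch under that assumption (`format_namespace` is a hypothetical helper, not the addon's API):

```python
import re


def format_namespace(template, data, existing_namespaces):
    """Fill a namespace template and replace '#' padding with a counter."""
    base = template.format(**data)  # e.g. "hero_modelMain_##_"
    padding = base.count("#")
    for index in range(1, 1000):
        candidate = re.sub("#+", str(index).zfill(padding), base, count=1)
        if candidate not in existing_namespaces:
            return candidate
    raise RuntimeError("Could not find an unused namespace")


print(format_namespace(
    "{folder[name]}_{product[name]}_##_",
    {"folder": {"name": "hero"}, "product": {"name": "modelMain"}},
    {"hero_modelMain_01_"}))  # -> hero_modelMain_02_
```

diff --git a/server_addon/maya/server/settings/main.py b/server_addon/maya/server/settings/main.py new file mode 100644 index 0000000000..c8021614be --- /dev/null +++ b/server_addon/maya/server/settings/main.py @@ -0,0 +1,141 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names +from .imageio import ImageIOSettings, DEFAULT_IMAGEIO_SETTINGS +from .maya_dirmap import MayaDirmapModel, DEFAULT_MAYA_DIRMAP_SETTINGS +from .include_handles import IncludeHandlesModel, DEFAULT_INCLUDE_HANDLES +from .explicit_plugins_loading import ( + ExplicitPluginsLoadingModel, DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS +) +from .scriptsmenu import ScriptsmenuModel, DEFAULT_SCRIPTSMENU_SETTINGS +from .render_settings import RenderSettingsModel, DEFAULT_RENDER_SETTINGS +from .creators import CreatorsModel, DEFAULT_CREATORS_SETTINGS +from .publishers import PublishersModel, DEFAULT_PUBLISH_SETTINGS +from .loaders import LoadersModel, DEFAULT_LOADERS_SETTING +from .workfile_build_settings import ProfilesModel, DEFAULT_WORKFILE_SETTING +from .templated_workfile_settings import ( + TemplatedProfilesModel, DEFAULT_TEMPLATED_WORKFILE_SETTINGS +) + + +class ExtMappingItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Product 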
type") + value: str = Field(title="Extension") + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class MayaSettings(BaseSettingsModel): + """Maya Project Settings.""" + + open_workfile_post_initialization: bool = Field( + True, title="Open Workfile Post Initialization") + explicit_plugins_loading: ExplicitPluginsLoadingModel = Field( + default_factory=ExplicitPluginsLoadingModel, + title="Explicit Plugins Loading") + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, title="Color Management (imageio)") + mel_workspace: str = Field(title="Maya MEL Workspace", widget="textarea") + ext_mapping: list[ExtMappingItemModel] = Field( + default_factory=list, title="Extension Mapping") + maya_dirmap: MayaDirmapModel = Field( + default_factory=MayaDirmapModel, title="Maya dirmap Settings") + include_handles: IncludeHandlesModel = Field( + default_factory=IncludeHandlesModel, + title="Include/Exclude Handles in default playback & render range" + ) + scriptsmenu: ScriptsmenuModel = Field( + default_factory=ScriptsmenuModel, + title="Scriptsmenu Settings" + ) + render_settings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, title="Render Settings") + create: CreatorsModel = Field( + default_factory=CreatorsModel, title="Creators") + publish: PublishersModel = Field( + default_factory=PublishersModel, title="Publishers") + load: LoadersModel = Field( + default_factory=LoadersModel, title="Loaders") + workfile_build: ProfilesModel = Field( + default_factory=ProfilesModel, title="Workfile Build Settings") + templated_workfile_build: TemplatedProfilesModel = Field( + default_factory=TemplatedProfilesModel, + title="Templated Workfile Build Settings") + filters: list[PublishGUIFiltersModel] = Field( + default_factory=list, + title="Publish GUI Filters") + + @validator("filters", "ext_mapping") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_MEL_WORKSPACE_SETTINGS = "\n".join(( + 'workspace -fr "shaders" "renderData/shaders";', + 'workspace -fr "images" "renders/maya";', + 'workspace -fr "particles" "particles";', + 'workspace -fr "mayaAscii" "";', + 'workspace -fr "mayaBinary" "";', + 'workspace -fr "scene" "";', + 'workspace -fr "alembicCache" "cache/alembic";', + 'workspace -fr "renderData" "renderData";', + 'workspace -fr "sourceImages" "sourceimages";', + 'workspace -fr "fileCache" "cache/nCache";', + '', +)) + +DEFAULT_MAYA_SETTING = { + "open_workfile_post_initialization": False, + "explicit_plugins_loading": DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS, + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "mel_workspace": DEFAULT_MEL_WORKSPACE_SETTINGS, + "ext_mapping": [ + {"name": "model", "value": "ma"}, + {"name": "mayaAscii", "value": "ma"}, + {"name": "camera", "value": "ma"}, + {"name": "rig", "value": "ma"}, + {"name": "workfile", "value": "ma"}, + {"name": "yetiRig", "value": "ma"} + ], + # `maya_dirmap` was originally with dash - `maya-dirmap` + "maya_dirmap": DEFAULT_MAYA_DIRMAP_SETTINGS, + "include_handles": DEFAULT_INCLUDE_HANDLES, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "render_settings": 
DEFAULT_RENDER_SETTINGS, + "create": DEFAULT_CREATORS_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": DEFAULT_LOADERS_SETTING, + "workfile_build": DEFAULT_WORKFILE_SETTING, + "templated_workfile_build": DEFAULT_TEMPLATED_WORKFILE_SETTINGS, + "filters": [ + { + "name": "preset 1", + "value": [ + {"name": "ValidateNoAnimation", "value": False}, + {"name": "ValidateShapeDefaultNames", "value": False}, + ] + }, + { + "name": "preset 2", + "value": [ + {"name": "ValidateNoAnimation", "value": False}, + ] + }, + ] +} diff --git a/server_addon/maya/server/settings/maya_dirmap.py b/server_addon/maya/server/settings/maya_dirmap.py new file mode 100644 index 0000000000..243261dc87 --- /dev/null +++ b/server_addon/maya/server/settings/maya_dirmap.py @@ -0,0 +1,40 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class MayaDirmapPathsSubmodel(BaseSettingsModel): + _layout = "compact" + source_path: list[str] = Field( + default_factory=list, title="Source Paths" + ) + destination_path: list[str] = Field( + default_factory=list, title="Destination Paths" + ) + + +class MayaDirmapModel(BaseSettingsModel): + """Maya dirmap settings.""" + # _layout = "expanded" + _isGroup: bool = True + + enabled: bool = Field(title="Enabled") + # Use ${} placeholder instead of absolute value of a root in + # referenced filepaths. + use_env_var_as_root: bool = Field( + title="Use env var placeholder in referenced paths" + ) + paths: MayaDirmapPathsSubmodel = Field( + default_factory=MayaDirmapPathsSubmodel, + title="Dirmap Paths" + ) + + +DEFAULT_MAYA_DIRMAP_SETTINGS = { + "use_env_var_as_root": False, + "enabled": False, + "paths": { + "source_path": [], + "destination_path": [] + } +}
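Dirmap rewrites matching source-path prefixes in referenced file paths to the paired destination paths. A minimal sketch of that substitution, assuming path lists shaped like `MayaDirmapPathsSubmodel` above (`apply_dirmap` is illustrative, not the addon's implementation):

```python
def apply_dirmap(filepath, source_paths, destination_paths):
    """Replace the first matching source prefix with its destination."""
    for source, destination in zip(source_paths, destination_paths):
        if filepath.startswith(source):
            # Only the prefix is remapped; the rest of the path is kept.
            return destination + filepath[len(source):]
    return filepath


print(apply_dirmap(
    "P:/projects/show/assets/hero.ma", ["P:/projects"], ["/mnt/projects"]))
# -> /mnt/projects/show/assets/hero.ma
```

diff --git a/server_addon/maya/server/settings/publish_playblast.py b/server_addon/maya/server/settings/publish_playblast.py new file mode 100644 index 0000000000..acfcaf5988 --- /dev/null +++ b/server_addon/maya/server/settings/publish_playblast.py @@ -0,0 +1,382 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, + task_types_enum, +) +from ayon_server.types import ColorRGBA_uint8 + + +def hardware_falloff_enum(): + return [ + {"label": "Linear", "value": "0"}, + {"label": "Exponential", "value": "1"}, + {"label": "Exponential Squared", "value": "2"} + ] + + +def renderer_enum(): + return [ + {"label": "Viewport 2.0", "value": "vp2Renderer"} + ] + + +def displayLights_enum(): + return [ + {"label": "Default Lighting", "value": "default"}, + {"label": "All Lights", "value": "all"}, + {"label": "Selected Lights", "value": "selected"}, + {"label": "Flat Lighting", "value": "flat"}, + {"label": "No Lights", "value": "nolights"} + ] + + +def plugin_objects_default(): + return [ + { + "name": "gpuCacheDisplayFilter", + "value": False + } + ] + + +class CodecSetting(BaseSettingsModel): + _layout = "expanded" + compression: str = Field("png", title="Encoding") + format: str = Field("image", title="Format") + quality: int = Field(95, title="Quality", ge=0, le=100) + + +class DisplayOptionsSetting(BaseSettingsModel): + _layout = "expanded" + override_display: bool = Field(True, title="Override display options") + background: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Color" + ) + displayGradient: bool = Field(True, title="Display background gradient") + backgroundTop: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Top" + ) + backgroundBottom: ColorRGBA_uint8 = Field( + (125, 125, 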
1.0), title="Background Bottom" + ) + + +class GenericSetting(BaseSettingsModel): + _layout = "expanded" + isolate_view: bool = Field(True, title="Isolate View") + off_screen: bool = Field(True, title="Off Screen") + pan_zoom: bool = Field(False, title="2D Pan/Zoom") + + +class RendererSetting(BaseSettingsModel): + _layout = "expanded" + rendererName: str = Field( + "vp2Renderer", + enum_resolver=renderer_enum, + title="Renderer name" + ) + + +class ResolutionSetting(BaseSettingsModel): + _layout = "expanded" + width: int = Field(0, title="Width") + height: int = Field(0, title="Height") + + +class PluginObjectsModel(BaseSettingsModel): + name: str = Field("", title="Name") + value: bool = Field(True, title="Enabled") + + +class ViewportOptionsSetting(BaseSettingsModel): + override_viewport_options: bool = Field( + True, title="Override viewport options" + ) + displayLights: str = Field( + "default", enum_resolver=displayLights_enum, title="Display Lights" + ) + displayTextures: bool = Field(True, title="Display Textures") + textureMaxResolution: int = Field(1024, title="Texture Clamp Resolution") + renderDepthOfField: bool = Field( + True, title="Depth of Field", section="Depth of Field" + ) + shadows: bool = Field(True, title="Display Shadows") + twoSidedLighting: bool = Field(True, title="Two Sided Lighting") + lineAAEnable: bool = Field( + True, title="Enable Anti-Aliasing", section="Anti-Aliasing" + ) + multiSample: int = Field(8, title="Anti Aliasing Samples") + useDefaultMaterial: bool = Field(False, title="Use Default Material") + wireframeOnShaded: bool = Field(False, title="Wireframe On Shaded") + xray: bool = Field(False, title="X-Ray") + jointXray: bool = Field(False, title="X-Ray Joints") + backfaceCulling: bool = Field(False, title="Backface Culling") + ssaoEnable: bool = Field( + False, title="Screen Space Ambient Occlusion", section="SSAO" + ) + ssaoAmount: int = Field(1, title="SSAO Amount") + ssaoRadius: int = Field(16, title="SSAO Radius") + ssaoFilterRadius: int = Field(16, title="SSAO Filter Radius") + ssaoSamples: int = Field(16, title="SSAO Samples") + fogging: bool = Field(False, title="Enable Hardware Fog", section="Fog") + hwFogFalloff: str = Field( + "0", enum_resolver=hardware_falloff_enum, title="Hardware Falloff" + ) + hwFogDensity: float = Field(0.0, title="Fog Density") + hwFogStart: int = Field(0, title="Fog Start") + hwFogEnd: int = Field(100, title="Fog End") + hwFogAlpha: int = Field(0, title="Fog Alpha") + hwFogColorR: float = Field(1.0, title="Fog Color R") + hwFogColorG: float = Field(1.0, title="Fog Color G") + hwFogColorB: float = Field(1.0, title="Fog Color B") + motionBlurEnable: bool = Field( + False, title="Enable Motion Blur", section="Motion Blur" + ) + motionBlurSampleCount: int = Field(8, title="Motion Blur Sample Count") + motionBlurShutterOpenFraction: float = Field( + 0.2, title="Shutter Open Fraction" + ) + cameras: bool = Field(False, title="Cameras", section="Show") + clipGhosts: bool = Field(False, title="Clip Ghosts") + deformers: bool = Field(False, title="Deformers") + dimensions: bool = Field(False, title="Dimensions") + dynamicConstraints: bool = Field(False, title="Dynamic Constraints") + dynamics: bool = Field(False, title="Dynamics") + fluids: bool = Field(False, title="Fluids") + follicles: bool = Field(False, title="Follicles") + greasePencils: bool = Field(False, title="Grease Pencils") + grid: bool = Field(False, title="Grid") + hairSystems: bool = Field(True, title="Hair Systems") + handles: bool = Field(False, 
title="Handles") + headsUpDisplay: bool = Field(False, title="HUD") + ikHandles: bool = Field(False, title="IK Handles") + imagePlane: bool = Field(True, title="Image Plane") + joints: bool = Field(False, title="Joints") + lights: bool = Field(False, title="Lights") + locators: bool = Field(False, title="Locators") + manipulators: bool = Field(False, title="Manipulators") + motionTrails: bool = Field(False, title="Motion Trails") + nCloths: bool = Field(False, title="nCloths") + nParticles: bool = Field(False, title="nParticles") + nRigids: bool = Field(False, title="nRigids") + controlVertices: bool = Field(False, title="NURBS CVs") + nurbsCurves: bool = Field(False, title="NURBS Curves") + hulls: bool = Field(False, title="NURBS Hulls") + nurbsSurfaces: bool = Field(False, title="NURBS Surfaces") + particleInstancers: bool = Field(False, title="Particle Instancers") + pivots: bool = Field(False, title="Pivots") + planes: bool = Field(False, title="Planes") + pluginShapes: bool = Field(False, title="Plugin Shapes") + polymeshes: bool = Field(True, title="Polygons") + strokes: bool = Field(False, title="Strokes") + subdivSurfaces: bool = Field(False, title="Subdiv Surfaces") + textures: bool = Field(False, title="Texture Placements") + pluginObjects: list[PluginObjectsModel] = Field( + default_factory=plugin_objects_default, + title="Plugin Objects" + ) + + @validator("pluginObjects") + def validate_unique_plugin_objects(cls, value): + ensure_unique_names(value) + return value + + +class CameraOptionsSetting(BaseSettingsModel): + displayGateMask: bool = Field(False, title="Display Gate Mask") + displayResolution: bool = Field(False, title="Display Resolution") + displayFilmGate: bool = Field(False, title="Display Film Gate") + displayFieldChart: bool = Field(False, title="Display Field Chart") + displaySafeAction: bool = Field(False, title="Display Safe Action") + displaySafeTitle: bool = Field(False, title="Display Safe Title") + displayFilmPivot: bool = Field(False, title="Display Film Pivot") + displayFilmOrigin: bool = Field(False, title="Display Film Origin") + overscan: int = Field(1.0, title="Overscan") + + +class CapturePresetSetting(BaseSettingsModel): + Codec: CodecSetting = Field( + default_factory=CodecSetting, + title="Codec", + section="Codec") + DisplayOptions: DisplayOptionsSetting = Field( + default_factory=DisplayOptionsSetting, + title="Display Options", + section="Display Options") + Generic: GenericSetting = Field( + default_factory=GenericSetting, + title="Generic", + section="Generic") + Renderer: RendererSetting = Field( + default_factory=RendererSetting, + title="Renderer", + section="Renderer") + Resolution: ResolutionSetting = Field( + default_factory=ResolutionSetting, + title="Resolution", + section="Resolution") + ViewportOptions: ViewportOptionsSetting = Field( + default_factory=ViewportOptionsSetting, + title="Viewport Options") + CameraOptions: CameraOptionsSetting = Field( + default_factory=CameraOptionsSetting, + title="Camera Options") + + +class ProfilesModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + product_names: list[str] = Field(default_factory=list, title="Products names") + capture_preset: CapturePresetSetting = Field( + default_factory=CapturePresetSetting, + title="Capture Preset" + ) + + +class ExtractPlayblastSetting(BaseSettingsModel): + capture_preset: 
CapturePresetSetting = Field( + default_factory=CapturePresetSetting, + title="DEPRECATED! Please use \"Profiles\" below. Capture Preset" + ) + profiles: list[ProfilesModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_PLAYBLAST_SETTING = { + "capture_preset": { + "Codec": { + "compression": "png", + "format": "image", + "quality": 95 + }, + "DisplayOptions": { + "override_display": True, + "background": [ + 125, + 125, + 125, + 1.0 + ], + "backgroundBottom": [ + 125, + 125, + 125, + 1.0 + ], + "backgroundTop": [ + 125, + 125, + 125, + 1.0 + ], + "displayGradient": True + }, + "Generic": { + "isolate_view": True, + "off_screen": True, + "pan_zoom": False + }, + "Renderer": { + "rendererName": "vp2Renderer" + }, + "Resolution": { + "width": 1920, + "height": 1080 + }, + "ViewportOptions": { + "override_viewport_options": True, + "displayLights": "default", + "displayTextures": True, + "textureMaxResolution": 1024, + "renderDepthOfField": True, + "shadows": True, + "twoSidedLighting": True, + "lineAAEnable": True, + "multiSample": 8, + "useDefaultMaterial": False, + "wireframeOnShaded": False, + "xray": False, + "jointXray": False, + "backfaceCulling": False, + "ssaoEnable": False, + "ssaoAmount": 1, + "ssaoRadius": 16, + "ssaoFilterRadius": 16, + "ssaoSamples": 16, + "fogging": False, + "hwFogFalloff": "0", + "hwFogDensity": 0.0, + "hwFogStart": 0, + "hwFogEnd": 100, + "hwFogAlpha": 0, + "hwFogColorR": 1.0, + "hwFogColorG": 1.0, + "hwFogColorB": 1.0, + "motionBlurEnable": False, + "motionBlurSampleCount": 8, + "motionBlurShutterOpenFraction": 0.2, + "cameras": False, + "clipGhosts": False, + "deformers": False, + "dimensions": False, + "dynamicConstraints": False, + "dynamics": False, + "fluids": False, + "follicles": False, + "greasePencils": False, + "grid": False, + "hairSystems": True, + "handles": False, + "headsUpDisplay": False, + "ikHandles": False, + "imagePlane": True, + "joints": False, + "lights": False, + "locators": False, + "manipulators": False, + "motionTrails": False, + "nCloths": False, + "nParticles": False, + "nRigids": False, + "controlVertices": False, + "nurbsCurves": False, + "hulls": False, + "nurbsSurfaces": False, + "particleInstancers": False, + "pivots": False, + "planes": False, + "pluginShapes": False, + "polymeshes": True, + "strokes": False, + "subdivSurfaces": False, + "textures": False, + "pluginObjects": [ + { + "name": "gpuCacheDisplayFilter", + "value": False + } + ] + }, + "CameraOptions": { + "displayGateMask": False, + "displayResolution": False, + "displayFilmGate": False, + "displayFieldChart": False, + "displaySafeAction": False, + "displaySafeTitle": False, + "displayFilmPivot": False, + "displayFilmOrigin": False, + "overscan": 1.0 + } + }, + "profiles": [] +} diff --git a/server_addon/maya/server/settings/publishers.py b/server_addon/maya/server/settings/publishers.py new file mode 100644 index 0000000000..bd7ccdf4d5 --- /dev/null +++ b/server_addon/maya/server/settings/publishers.py @@ -0,0 +1,1262 @@ +import json +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + ensure_unique_names, +) +from ayon_server.exceptions import BadRequestException +from .publish_playblast import ( + ExtractPlayblastSetting, + DEFAULT_PLAYBLAST_SETTING, +) + + +def linear_unit_enum(): + """Get linear units enumerator.""" + return [ + {"label": "mm", "value": "millimeter"}, + {"label": "cm", "value": "centimeter"}, + {"label": "m", "value": "meter"}, + {"label": "km", 
"value": "kilometer"}, + {"label": "in", "value": "inch"}, + {"label": "ft", "value": "foot"}, + {"label": "yd", "value": "yard"}, + {"label": "mi", "value": "mile"} + ] + + +def angular_unit_enum(): + """Get angular units enumerator.""" + return [ + {"label": "deg", "value": "degree"}, + {"label": "rad", "value": "radian"}, + ] + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateMeshUVSetMap1Model(BasicValidateModel): + """Validate model's default uv set exists and is named 'map1'.""" + pass + + +class ValidateNoAnimationModel(BasicValidateModel): + """Ensure no keyframes on nodes in the Instance.""" + pass + + +class ValidateRigOutSetNodeIdsModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateSkinclusterDeformerSet") + optional: bool = Field(title="Optional") + allow_history_only: bool = Field(title="Allow history only") + + +class ValidateModelNameModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + database: bool = Field(title="Use database shader name definitions") + material_file: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Material File", + description=( + "Path to material file defining list of material names to check." + ) + ) + regex: str = Field( + "(.*)_(\\d)*_(?P.*)_(GEO)", + title="Validation regex", + description=( + "Regex for validating name of top level group name. You can use" + " named capturing groups:(?P.*) for Asset name" + ) + ) + top_level_regex: str = Field( + ".*_GRP", + title="Top level group name regex", + description=( + "To check for asset in name so *_some_asset_name_GRP" + " is valid, use:.*?_(?P.*)_GEO" + ) + ) + + +class ValidateModelContentModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + validate_top_group: bool = Field(title="Validate one top group") + + +class ValidateTransformNamingSuffixModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + SUFFIX_NAMING_TABLE: str = Field( + "{}", + title="Suffix Naming Tables", + widget="textarea", + description=( + "Validates transform suffix based on" + " the type of its children shapes." 
+ ) + ) + + @validator("SUFFIX_NAMING_TABLE") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The text can't be parsed as json object" + ) + return value + ALLOW_IF_NOT_IN_SUFFIX_TABLE: bool = Field( + title="Allow if suffix not in table" + ) + + +class CollectMayaRenderModel(BaseSettingsModel): + sync_workfile_version: bool = Field( + title="Sync render version with workfile" + ) + + +class CollectFbxCameraModel(BaseSettingsModel): + enabled: bool = Field(title="CollectFbxCamera") + + +class CollectGLTFModel(BaseSettingsModel): + enabled: bool = Field(title="CollectGLTF") + + +class ValidateFrameRangeModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateFrameRange") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + exclude_product_types: list[str] = Field( + default_factory=list, + title="Exclude product types" + ) + + +class ValidateShaderNameModel(BaseSettingsModel): + """Shader name regex can use named capture group 'asset' + to validate against the current asset name. + """ + enabled: bool = Field(title="ValidateShaderName") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + regex: str = Field("(?P<asset>.*)_(.*)_SHD", title="Validation regex") + + +class ValidateAttributesModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateAttributes") + attributes: str = Field( + "{}", title="Attributes", widget="textarea") + + @validator("attributes") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The attributes can't be parsed as json object" + ) + return value + + +class ValidateLoadedPluginModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateLoadedPlugin") + optional: bool = Field(title="Optional") + whitelist_native_plugins: bool = Field( + title="Whitelist Maya Native Plugins" + ) + authorized_plugins: list[str] = Field( + default_factory=list, title="Authorized plugins" + ) + + +class ValidateMayaUnitsModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateMayaUnits") + optional: bool = Field(title="Optional") + validate_linear_units: bool = Field(title="Validate linear units") + linear_units: str = Field( + enum_resolver=linear_unit_enum, title="Linear Units" + ) + validate_angular_units: bool = Field(title="Validate angular units") + angular_units: str = Field( + enum_resolver=angular_unit_enum, title="Angular units" + ) + validate_fps: bool = Field(title="Validate fps") + + +class ValidateUnrealStaticMeshNameModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateUnrealStaticMeshName") + optional: bool = Field(title="Optional") + validate_mesh: bool = Field(title="Validate mesh names") + validate_collision: bool = Field(title="Validate collision names") + + +class ValidateCycleErrorModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateCycleError") + optional: bool = Field(title="Optional") + families: list[str] = Field(default_factory=list, title="Families") + + +class ValidatePluginPathAttributesAttrModel(BaseSettingsModel): + name: str = Field(title="Node type") + value: str = Field(title="Attribute")
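Several models here store structured data as JSON text in a textarea and validate it the same way, as the `SUFFIX_NAMING_TABLE` and `attributes` fields above do. A condensed, standalone version of that round-trip (using `ValueError` in place of the server's `BadRequestException`):

```python
import json


def validate_json_dict(value):
    """Mirror the textarea validators above: the text must parse to a dict."""
    if not value.strip():
        return "{}"
    try:
        parsed = json.loads(value)
    except json.JSONDecodeError:
        parsed = None
    if not isinstance(parsed, dict):
        raise ValueError("The text can't be parsed as json object")
    return value


# A table in the shape DEFAULT_SUFFIX_NAMING uses further below.
suffix_table = {"mesh": ["_GEO", "_GES", "_GEP", "_OSD"], "group": ["_GRP"]}
validate_json_dict(json.dumps(suffix_table, indent=4))  # passes
```

+ + +class 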
ValidatePluginPathAttributesModel(BaseSettingsModel): + """Fill in the node types and attributes you want to validate. + +

e.g. AlembicNode.abc_file, the node type is AlembicNode + and the node attribute is abc_file + """ + + enabled: bool = True + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + attribute: list[ValidatePluginPathAttributesAttrModel] = Field( + default_factory=list, + title="File Attribute" + ) + + @validator("attribute") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +# Validate Render Setting +class RendererAttributesModel(BaseSettingsModel): + _layout = "compact" + type: str = Field(title="Type") + value: str = Field(title="Value") + + +class ValidateRenderSettingsModel(BaseSettingsModel): + arnold_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Arnold Render Attributes") + vray_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="VRay Render Attributes") + redshift_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Redshift Render Attributes") + renderman_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Renderman Render Attributes") + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateCameraContentsModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + validate_shapes: bool = Field(title="Validate presence of shapes") + + +class ExtractProxyAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + families: list[str] = Field( + default_factory=list, + title="Families") + + +class ExtractAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + families: list[str] = Field( + default_factory=list, + title="Families") + + +class ExtractObjModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + + +class ExtractMayaSceneRawModel(BaseSettingsModel): + """Add loaded instances to those published families:""" + enabled: bool = Field(title="ExtractMayaSceneRaw") + add_for_families: list[str] = Field(default_factory=list, title="Families") + + +class ExtractCameraAlembicModel(BaseSettingsModel): + """ + List of attributes that will be added to the baked alembic camera. Needs to be written in python list syntax. 
+ """ + enabled: bool = Field(title="ExtractCameraAlembic") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + bake_attributes: str = Field( + "[]", title="Base Attributes", widget="textarea" + ) + + @validator("bake_attributes") + def validate_json_list(cls, value): + if not value.strip(): + return "[]" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, list) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The text can't be parsed as json object" + ) + return value + + +class ExtractGLBModel(BaseSettingsModel): + enabled: bool = True + active: bool = Field(title="Active") + ogsfx_path: str = Field(title="GLSL Shader Directory") + + +class ExtractLookArgsModel(BaseSettingsModel): + argument: str = Field(title="Argument") + parameters: list[str] = Field(default_factory=list, title="Parameters") + + +class ExtractLookModel(BaseSettingsModel): + maketx_arguments: list[ExtractLookArgsModel] = Field( + default_factory=list, + title="Extra arguments for maketx command line" + ) + + +class ExtractGPUCacheModel(BaseSettingsModel): + enabled: bool = True + families: list[str] = Field(default_factory=list, title="Families") + step: float = Field(1.0, ge=1.0, title="Step") + stepSave: int = Field(1, ge=1, title="Step Save") + optimize: bool = Field(title="Optimize Hierarchy") + optimizationThreshold: int = Field(1, ge=1, title="Optimization Threshold") + optimizeAnimationsForMotionBlur: bool = Field( + title="Optimize Animations For Motion Blur" + ) + writeMaterials: bool = Field(title="Write Materials") + useBaseTessellation: bool = Field(title="User Base Tesselation") + + +class PublishersModel(BaseSettingsModel): + CollectMayaRender: CollectMayaRenderModel = Field( + default_factory=CollectMayaRenderModel, + title="Collect Render Layers", + section="Collectors" + ) + CollectFbxCamera: CollectFbxCameraModel = Field( + default_factory=CollectFbxCameraModel, + title="Collect Camera for FBX export", + ) + CollectGLTF: CollectGLTFModel = Field( + default_factory=CollectGLTFModel, + title="Collect Assets for GLB/GLTF export" + ) + ValidateInstanceInContext: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instance In Context", + section="Validators" + ) + ValidateContainers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Containers" + ) + ValidateFrameRange: ValidateFrameRangeModel = Field( + default_factory=ValidateFrameRangeModel, + title="Validate Frame Range" + ) + ValidateShaderName: ValidateShaderNameModel = Field( + default_factory=ValidateShaderNameModel, + title="Validate Shader Name" + ) + ValidateShadingEngine: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Look Shading Engine Naming" + ) + ValidateMayaColorSpace: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Colorspace" + ) + ValidateAttributes: ValidateAttributesModel = Field( + default_factory=ValidateAttributesModel, + title="Validate Attributes" + ) + ValidateLoadedPlugin: ValidateLoadedPluginModel = Field( + default_factory=ValidateLoadedPluginModel, + title="Validate Loaded Plugin" + ) + ValidateMayaUnits: ValidateMayaUnitsModel = Field( + default_factory=ValidateMayaUnitsModel, + title="Validate Maya Units" + ) + ValidateUnrealStaticMeshName: ValidateUnrealStaticMeshNameModel = Field( + default_factory=ValidateUnrealStaticMeshNameModel, + title="Validate Unreal Static Mesh 
Name" + ) + ValidateCycleError: ValidateCycleErrorModel = Field( + default_factory=ValidateCycleErrorModel, + title="Validate Cycle Error" + ) + ValidatePluginPathAttributes: ValidatePluginPathAttributesModel = Field( + default_factory=ValidatePluginPathAttributesModel, + title="Plug-in Path Attributes" + ) + ValidateRenderSettings: ValidateRenderSettingsModel = Field( + default_factory=ValidateRenderSettingsModel, + title="Validate Render Settings" + ) + ValidateCurrentRenderLayerIsRenderable: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Current Render Layer Has Renderable Camera" + ) + ValidateGLSLMaterial: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate GLSL Material" + ) + ValidateGLSLPlugin: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate GLSL Plugin" + ) + ValidateRenderImageRule: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Image Rule (Workspace)" + ) + ValidateRenderNoDefaultCameras: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Default Cameras Renderable" + ) + ValidateRenderSingleCamera: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Single Camera " + ) + ValidateRenderLayerAOVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Passes/AOVs Are Registered" + ) + ValidateStepSize: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Step Size" + ) + ValidateVRayDistributedRendering: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Distributed Rendering" + ) + ValidateVrayReferencedAOVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Referenced AOVs" + ) + ValidateVRayTranslatorEnabled: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Translator Settings" + ) + ValidateVrayProxy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Proxy Settings" + ) + ValidateVrayProxyMembers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Proxy Members" + ) + ValidateYetiRenderScriptCallbacks: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Render Script Callbacks" + ) + ValidateYetiRigCacheState: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Cache State" + ) + ValidateYetiRigInputShapesInInstance: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Input Shapes In Instance" + ) + ValidateYetiRigSettings: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Settings" + ) + # Model - START + ValidateModelName: ValidateModelNameModel = Field( + default_factory=ValidateModelNameModel, + title="Validate Model Name", + section="Model", + ) + ValidateModelContent: ValidateModelContentModel = Field( + default_factory=ValidateModelContentModel, + title="Validate Model Content", + ) + ValidateTransformNamingSuffix: ValidateTransformNamingSuffixModel = Field( + default_factory=ValidateTransformNamingSuffixModel, + title="Validate Transform Naming Suffix", + ) + ValidateColorSets: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Color Sets", + ) + ValidateMeshHasOverlappingUVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Has Overlapping 
UVs", + ) + ValidateMeshArnoldAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Arnold Attributes", + ) + ValidateMeshShaderConnections: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Shader Connections", + ) + ValidateMeshSingleUVSet: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Single UV Set", + ) + ValidateMeshHasUVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Has UVs", + ) + ValidateMeshLaminaFaces: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Lamina Faces", + ) + ValidateMeshNgons: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Ngons", + ) + ValidateMeshNonManifold: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Non-Manifold", + ) + ValidateMeshNoNegativeScale: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh No Negative Scale", + ) + ValidateMeshNonZeroEdgeLength: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Edge Length Non Zero", + ) + ValidateMeshNormalsUnlocked: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Normals Unlocked", + ) + ValidateMeshUVSetMap1: ValidateMeshUVSetMap1Model = Field( + default_factory=ValidateMeshUVSetMap1Model, + title="Validate Mesh UV Set Map 1", + ) + ValidateMeshVerticesHaveEdges: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Vertices Have Edges", + ) + ValidateNoAnimation: ValidateNoAnimationModel = Field( + default_factory=ValidateNoAnimationModel, + title="Validate No Animation", + ) + ValidateNoNamespace: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Namespace", + ) + ValidateNoNullTransforms: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Null Transforms", + ) + ValidateNoUnknownNodes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Unknown Nodes", + ) + ValidateNodeNoGhosting: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Node No Ghosting", + ) + ValidateShapeDefaultNames: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Default Names", + ) + ValidateShapeRenderStats: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Render Stats", + ) + ValidateShapeZero: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Zero", + ) + ValidateTransformZero: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Transform Zero", + ) + ValidateUniqueNames: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Unique Names", + ) + ValidateNoVRayMesh: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No V-Ray Proxies (VRayMesh)", + ) + ValidateUnrealMeshTriangulated: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate if Mesh is Triangulated", + ) + ValidateAlembicVisibleOnly: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Alembic Visible Node", + ) + ExtractProxyAlembic: ExtractProxyAlembicModel = Field( + default_factory=ExtractProxyAlembicModel, + 
title="Extract Proxy Alembic", + section="Model Extractors", + ) + ExtractAlembic: ExtractAlembicModel = Field( + default_factory=ExtractAlembicModel, + title="Extract Alembic", + ) + ExtractObj: ExtractObjModel = Field( + default_factory=ExtractObjModel, + title="Extract OBJ" + ) + # Model - END + + # Rig - START + ValidateRigContents: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Contents", + section="Rig", + ) + ValidateRigJointsHidden: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Joints Hidden", + ) + ValidateRigControllers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Controllers", + ) + ValidateAnimationContent: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animation Content", + ) + ValidateOutRelatedNodeIds: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animation Out Set Related Node Ids", + ) + ValidateRigControllersArnoldAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Controllers (Arnold Attributes)", + ) + ValidateSkeletalMeshHierarchy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeletal Mesh Top Node", + ) + ValidateSkinclusterDeformerSet: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skincluster Deformer Relationships", + ) + ValidateRigOutSetNodeIds: ValidateRigOutSetNodeIdsModel = Field( + default_factory=ValidateRigOutSetNodeIdsModel, + title="Validate Rig Out Set Node Ids", + ) + # Rig - END + ValidateCameraAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Camera Attributes" + ) + ValidateAssemblyName: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Name" + ) + ValidateAssemblyNamespaces: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Namespaces" + ) + ValidateAssemblyModelTransforms: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Model Transforms" + ) + ValidateAssRelativePaths: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Ass Relative Paths" + ) + ValidateInstancerContent: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instancer Content" + ) + ValidateInstancerFrameRanges: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instancer Cache Frame Ranges" + ) + ValidateNoDefaultCameras: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Default Cameras" + ) + ValidateUnrealUpAxis: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Unreal Up-Axis Check" + ) + ValidateCameraContents: ValidateCameraContentsModel = Field( + default_factory=ValidateCameraContentsModel, + title="Validate Camera Content" + ) + ExtractPlayblast: ExtractPlayblastSetting = Field( + default_factory=ExtractPlayblastSetting, + title="Extract Playblast Settings", + section="Extractors" + ) + ExtractMayaSceneRaw: ExtractMayaSceneRawModel = Field( + default_factory=ExtractMayaSceneRawModel, + title="Maya Scene(Raw)" + ) + ExtractCameraAlembic: ExtractCameraAlembicModel = Field( + default_factory=ExtractCameraAlembicModel, + title="Extract Camera Alembic" + ) + ExtractGLB: ExtractGLBModel = Field( + 
default_factory=ExtractGLBModel, + title="Extract GLB" + ) + ExtractLook: ExtractLookModel = Field( + default_factory=ExtractLookModel, + title="Extract Look" + ) + ExtractGPUCache: ExtractGPUCacheModel = Field( + default_factory=ExtractGPUCacheModel, + title="Extract GPU Cache", + ) + + +DEFAULT_SUFFIX_NAMING = { + "mesh": ["_GEO", "_GES", "_GEP", "_OSD"], + "nurbsCurve": ["_CRV"], + "nurbsSurface": ["_NRB"], + "locator": ["_LOC"], + "group": ["_GRP"] +} + +DEFAULT_PUBLISH_SETTINGS = { + "CollectMayaRender": { + "sync_workfile_version": False + }, + "CollectFbxCamera": { + "enabled": False + }, + "CollectGLTF": { + "enabled": False + }, + "ValidateInstanceInContext": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True, + "exclude_product_types": [ + "model", + "rig", + "staticMesh" + ] + }, + "ValidateShaderName": { + "enabled": False, + "optional": True, + "active": True, + "regex": "(?P<asset>.*)_(.*)_SHD" + }, + "ValidateShadingEngine": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMayaColorSpace": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateAttributes": { + "enabled": False, + "attributes": "{}" + }, + "ValidateLoadedPlugin": { + "enabled": False, + "optional": True, + "whitelist_native_plugins": False, + "authorized_plugins": [] + }, + "ValidateMayaUnits": { + "enabled": True, + "optional": False, + "validate_linear_units": True, + "linear_units": "cm", + "validate_angular_units": True, + "angular_units": "deg", + "validate_fps": True + }, + "ValidateUnrealStaticMeshName": { + "enabled": True, + "optional": True, + "validate_mesh": False, + "validate_collision": True + }, + "ValidateCycleError": { + "enabled": True, + "optional": False, + "families": [ + "rig" + ] + }, + "ValidatePluginPathAttributes": { + "enabled": True, + "optional": False, + "active": True, + "attribute": [ + {"name": "AlembicNode", "value": "abc_File"}, + {"name": "VRayProxy", "value": "fileName"}, + {"name": "RenderManArchive", "value": "filename"}, + {"name": "pgYetiMaya", "value": "cacheFileName"}, + {"name": "aiStandIn", "value": "dso"}, + {"name": "RedshiftSprite", "value": "tex0"}, + {"name": "RedshiftBokeh", "value": "dofBokehImage"}, + {"name": "RedshiftCameraMap", "value": "tex0"}, + {"name": "RedshiftEnvironment", "value": "tex2"}, + {"name": "RedshiftDomeLight", "value": "tex1"}, + {"name": "RedshiftIESLight", "value": "profile"}, + {"name": "RedshiftLightGobo", "value": "tex0"}, + {"name": "RedshiftNormalMap", "value": "tex0"}, + {"name": "RedshiftProxyMesh", "value": "fileName"}, + {"name": "RedshiftVolumeShape", "value": "fileName"}, + {"name": "VRayTexGLSL", "value": "fileName"}, + {"name": "VRayMtlGLSL", "value": "fileName"}, + {"name": "VRayVRmatMtl", "value": "fileName"}, + {"name": "VRayPtex", "value": "ptexFile"}, + {"name": "VRayLightIESShape", "value": "iesFile"}, + {"name": "VRayMesh", "value": "materialAssignmentsFile"}, + {"name": "VRayMtlOSL", "value": "fileName"}, + {"name": "VRayTexOSL", "value": "fileName"}, + {"name": "VRayTexOCIO", "value": "ocioConfigFile"}, + {"name": "VRaySettingsNode", "value": "pmap_autoSaveFile2"}, + {"name": "VRayScannedMtl", "value": "file"}, + {"name": "VRayScene", "value": "parameterOverrideFilePath"}, + {"name": "VRayMtlMDL", "value": "filename"}, + {"name": "VRaySimbiont", "value": "file"}, + {"name": "dlOpenVDBShape", 
"value": "filename"}, + {"name": "pgYetiMayaShape", "value": "liveABCFilename"}, + {"name": "gpuCache", "value": "cacheFileName"}, + ] + }, + "ValidateRenderSettings": { + "arnold_render_attributes": [], + "vray_render_attributes": [], + "redshift_render_attributes": [], + "renderman_render_attributes": [] + }, + "ValidateCurrentRenderLayerIsRenderable": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateGLSLMaterial": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateGLSLPlugin": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderImageRule": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderNoDefaultCameras": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderSingleCamera": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderLayerAOVs": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateStepSize": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVRayDistributedRendering": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayReferencedAOVs": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVRayTranslatorEnabled": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayProxy": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayProxyMembers": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRenderScriptCallbacks": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigCacheState": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigInputShapesInInstance": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigSettings": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateModelName": { + "enabled": False, + "database": True, + "material_file": { + "windows": "", + "darwin": "", + "linux": "" + }, + "regex": "(.*)_(\\d)*_(?P.*)_(GEO)", + "top_level_regex": ".*_GRP" + }, + "ValidateModelContent": { + "enabled": True, + "optional": False, + "validate_top_group": True + }, + "ValidateTransformNamingSuffix": { + "enabled": True, + "optional": True, + "SUFFIX_NAMING_TABLE": json.dumps(DEFAULT_SUFFIX_NAMING, indent=4), + "ALLOW_IF_NOT_IN_SUFFIX_TABLE": True + }, + "ValidateColorSets": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshHasOverlappingUVs": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshArnoldAttributes": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshShaderConnections": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshSingleUVSet": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshHasUVs": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshLaminaFaces": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNgons": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNonManifold": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNoNegativeScale": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateMeshNonZeroEdgeLength": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshNormalsUnlocked": { + "enabled": False, + "optional": True, + 
"active": True + }, + "ValidateMeshUVSetMap1": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshVerticesHaveEdges": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateNoAnimation": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateNoNamespace": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoNullTransforms": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoUnknownNodes": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNodeNoGhosting": { + "enabled": False, + "optional": False, + "active": True + }, + "ValidateShapeDefaultNames": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateShapeRenderStats": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateShapeZero": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateTransformZero": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateUniqueNames": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateNoVRayMesh": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateUnrealMeshTriangulated": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAlembicVisibleOnly": { + "enabled": True, + "optional": False, + "active": True + }, + "ExtractProxyAlembic": { + "enabled": True, + "families": [ + "proxyAbc" + ] + }, + "ExtractAlembic": { + "enabled": True, + "families": [ + "pointcache", + "model", + "vrayproxy.alembic" + ] + }, + "ExtractObj": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigContents": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigJointsHidden": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigControllers": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAnimationContent": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateOutRelatedNodeIds": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRigControllersArnoldAttributes": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateSkeletalMeshHierarchy": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateSkinclusterDeformerSet": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRigOutSetNodeIds": { + "enabled": True, + "optional": False, + "allow_history_only": False + }, + "ValidateCameraAttributes": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAssemblyName": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateAssemblyNamespaces": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAssemblyModelTransforms": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAssRelativePaths": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateInstancerContent": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateInstancerFrameRanges": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoDefaultCameras": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateUnrealUpAxis": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateCameraContents": { + "enabled": True, + "optional": False, + "validate_shapes": True + }, + "ExtractPlayblast": 
DEFAULT_PLAYBLAST_SETTING, + "ExtractMayaSceneRaw": { + "enabled": True, + "add_for_families": [ + "layout" + ] + }, + "ExtractCameraAlembic": { + "enabled": True, + "optional": True, + "active": True, + "bake_attributes": "[]" + }, + "ExtractGLB": { + "enabled": True, + "active": True, + "ogsfx_path": "/maya2glTF/PBR/shaders/glTF_PBR.ogsfx" + }, + "ExtractLook": { + "maketx_arguments": [] + }, + "ExtractGPUCache": { + "enabled": False, + "families": [ + "model", + "animation", + "pointcache" + ], + "step": 1.0, + "stepSave": 1, + "optimize": True, + "optimizationThreshold": 40000, + "optimizeAnimationsForMotionBlur": True, + "writeMaterials": True, + "useBaseTessellation": True + } +} diff --git a/server_addon/maya/server/settings/render_settings.py b/server_addon/maya/server/settings/render_settings.py new file mode 100644 index 0000000000..b6163a04ce --- /dev/null +++ b/server_addon/maya/server/settings/render_settings.py @@ -0,0 +1,500 @@ +"""Providing models and values for Maya Render Settings.""" +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +def aov_separators_enum(): + return [ + {"value": "dash", "label": "- (dash)"}, + {"value": "underscore", "label": "_ (underscore)"}, + {"value": "dot", "label": ". (dot)"} + ] + + +def arnold_image_format_enum(): + """Return enumerator for Arnold output formats.""" + return [ + {"label": "jpeg", "value": "jpeg"}, + {"label": "png", "value": "png"}, + {"label": "deepexr", "value": "deep exr"}, + {"label": "tif", "value": "tif"}, + {"label": "exr", "value": "exr"}, + {"label": "maya", "value": "maya"}, + {"label": "mtoa_shaders", "value": "mtoa_shaders"} + ] + + +def arnold_aov_list_enum(): + """Return enumerator for Arnold AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. 
+ """ + return [ + {"value": "empty", "label": "< empty >"}, + {"value": "ID", "label": "ID"}, + {"value": "N", "label": "N"}, + {"value": "P", "label": "P"}, + {"value": "Pref", "label": "Pref"}, + {"value": "RGBA", "label": "RGBA"}, + {"value": "Z", "label": "Z"}, + {"value": "albedo", "label": "albedo"}, + {"value": "background", "label": "background"}, + {"value": "coat", "label": "coat"}, + {"value": "coat_albedo", "label": "coat_albedo"}, + {"value": "coat_direct", "label": "coat_direct"}, + {"value": "coat_indirect", "label": "coat_indirect"}, + {"value": "cputime", "label": "cputime"}, + {"value": "crypto_asset", "label": "crypto_asset"}, + {"value": "crypto_material", "label": "cypto_material"}, + {"value": "crypto_object", "label": "crypto_object"}, + {"value": "diffuse", "label": "diffuse"}, + {"value": "diffuse_albedo", "label": "diffuse_albedo"}, + {"value": "diffuse_direct", "label": "diffuse_direct"}, + {"value": "diffuse_indirect", "label": "diffuse_indirect"}, + {"value": "direct", "label": "direct"}, + {"value": "emission", "label": "emission"}, + {"value": "highlight", "label": "highlight"}, + {"value": "indirect", "label": "indirect"}, + {"value": "motionvector", "label": "motionvector"}, + {"value": "opacity", "label": "opacity"}, + {"value": "raycount", "label": "raycount"}, + {"value": "rim_light", "label": "rim_light"}, + {"value": "shadow", "label": "shadow"}, + {"value": "shadow_diff", "label": "shadow_diff"}, + {"value": "shadow_mask", "label": "shadow_mask"}, + {"value": "shadow_matte", "label": "shadow_matte"}, + {"value": "sheen", "label": "sheen"}, + {"value": "sheen_albedo", "label": "sheen_albedo"}, + {"value": "sheen_direct", "label": "sheen_direct"}, + {"value": "sheen_indirect", "label": "sheen_indirect"}, + {"value": "specular", "label": "specular"}, + {"value": "specular_albedo", "label": "specular_albedo"}, + {"value": "specular_direct", "label": "specular_direct"}, + {"value": "specular_indirect", "label": "specular_indirect"}, + {"value": "sss", "label": "sss"}, + {"value": "sss_albedo", "label": "sss_albedo"}, + {"value": "sss_direct", "label": "sss_direct"}, + {"value": "sss_indirect", "label": "sss_indirect"}, + {"value": "transmission", "label": "transmission"}, + {"value": "transmission_albedo", "label": "transmission_albedo"}, + {"value": "transmission_direct", "label": "transmission_direct"}, + {"value": "transmission_indirect", "label": "transmission_indirect"}, + {"value": "volume", "label": "volume"}, + {"value": "volume_Z", "label": "volume_Z"}, + {"value": "volume_albedo", "label": "volume_albedo"}, + {"value": "volume_direct", "label": "volume_direct"}, + {"value": "volume_indirect", "label": "volume_indirect"}, + {"value": "volume_opacity", "label": "volume_opacity"}, + ] + + +def vray_image_output_enum(): + """Return output format for Vray enumerator.""" + return [ + {"label": "png", "value": "png"}, + {"label": "jpg", "value": "jpg"}, + {"label": "vrimg", "value": "vrimg"}, + {"label": "hdr", "value": "hdr"}, + {"label": "exr", "value": "exr"}, + {"label": "exr (multichannel)", "value": "exr (multichannel)"}, + {"label": "exr (deep)", "value": "exr (deep)"}, + {"label": "tga", "value": "tga"}, + {"label": "bmp", "value": "bmp"}, + {"label": "sgi", "value": "sgi"} + ] + + +def vray_aov_list_enum(): + """Return enumerator for Vray AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. 
+ """ + + return [ + {"value": "empty", "label": "< empty >"}, + {"value": "atmosphereChannel", "label": "atmosphere"}, + {"value": "backgroundChannel", "label": "background"}, + {"value": "bumpNormalsChannel", "label": "bumpnormals"}, + {"value": "causticsChannel", "label": "caustics"}, + {"value": "coatFilterChannel", "label": "coat_filter"}, + {"value": "coatGlossinessChannel", "label": "coatGloss"}, + {"value": "coatReflectionChannel", "label": "coat_reflection"}, + {"value": "vrayCoatChannel", "label": "coat_specular"}, + {"value": "CoverageChannel", "label": "coverage"}, + {"value": "cryptomatteChannel", "label": "cryptomatte"}, + {"value": "customColor", "label": "custom_color"}, + {"value": "drBucketChannel", "label": "DR"}, + {"value": "denoiserChannel", "label": "denoiser"}, + {"value": "diffuseChannel", "label": "diffuse"}, + {"value": "ExtraTexElement", "label": "extraTex"}, + {"value": "giChannel", "label": "GI"}, + {"value": "LightMixElement", "label": "None"}, + {"value": "lightingChannel", "label": "lighting"}, + {"value": "LightingAnalysisChannel", "label": "LightingAnalysis"}, + {"value": "materialIDChannel", "label": "materialID"}, + {"value": "MaterialSelectElement", "label": "materialSelect"}, + {"value": "matteShadowChannel", "label": "matteShadow"}, + {"value": "MultiMatteElement", "label": "multimatte"}, + {"value": "multimatteIDChannel", "label": "multimatteID"}, + {"value": "normalsChannel", "label": "normals"}, + {"value": "nodeIDChannel", "label": "objectId"}, + {"value": "objectSelectChannel", "label": "objectSelect"}, + {"value": "rawCoatFilterChannel", "label": "raw_coat_filter"}, + {"value": "rawCoatReflectionChannel", "label": "raw_coat_reflection"}, + {"value": "rawDiffuseFilterChannel", "label": "rawDiffuseFilter"}, + {"value": "rawGiChannel", "label": "rawGI"}, + {"value": "rawLightChannel", "label": "rawLight"}, + {"value": "rawReflectionChannel", "label": "rawReflection"}, + { + "value": "rawReflectionFilterChannel", + "label": "rawReflectionFilter" + }, + {"value": "rawRefractionChannel", "label": "rawRefraction"}, + { + "value": "rawRefractionFilterChannel", + "label": "rawRefractionFilter" + }, + {"value": "rawShadowChannel", "label": "rawShadow"}, + {"value": "rawSheenFilterChannel", "label": "raw_sheen_filter"}, + { + "value": "rawSheenReflectionChannel", + "label": "raw_sheen_reflection" + }, + {"value": "rawTotalLightChannel", "label": "rawTotalLight"}, + {"value": "reflectIORChannel", "label": "reflIOR"}, + {"value": "reflectChannel", "label": "reflect"}, + {"value": "reflectionFilterChannel", "label": "reflectionFilter"}, + {"value": "reflectGlossinessChannel", "label": "reflGloss"}, + {"value": "refractChannel", "label": "refract"}, + {"value": "refractionFilterChannel", "label": "refractionFilter"}, + {"value": "refractGlossinessChannel", "label": "refrGloss"}, + {"value": "renderIDChannel", "label": "renderId"}, + {"value": "FastSSS2Channel", "label": "SSS"}, + {"value": "sampleRateChannel", "label": "sampleRate"}, + {"value": "samplerInfo", "label": "samplerInfo"}, + {"value": "selfIllumChannel", "label": "selfIllum"}, + {"value": "shadowChannel", "label": "shadow"}, + {"value": "sheenFilterChannel", "label": "sheen_filter"}, + {"value": "sheenGlossinessChannel", "label": "sheenGloss"}, + {"value": "sheenReflectionChannel", "label": "sheen_reflection"}, + {"value": "vraySheenChannel", "label": "sheen_specular"}, + {"value": "specularChannel", "label": "specular"}, + {"value": "Toon", "label": "Toon"}, + {"value": "toonLightingChannel", 
"label": "toonLighting"}, + {"value": "toonSpecularChannel", "label": "toonSpecular"}, + {"value": "totalLightChannel", "label": "totalLight"}, + {"value": "unclampedColorChannel", "label": "unclampedColor"}, + {"value": "VRScansPaintMaskChannel", "label": "VRScansPaintMask"}, + {"value": "VRScansZoneMaskChannel", "label": "VRScansZoneMask"}, + {"value": "velocityChannel", "label": "velocity"}, + {"value": "zdepthChannel", "label": "zDepth"}, + {"value": "LightSelectElement", "label": "lightselect"}, + ] + + +def redshift_engine_enum(): + """Get Redshift engine type enumerator.""" + return [ + {"value": "0", "label": "None"}, + {"value": "1", "label": "Photon Map"}, + {"value": "2", "label": "Irradiance Cache"}, + {"value": "3", "label": "Brute Force"} + ] + + +def redshift_image_output_enum(): + """Return output format for Redshift enumerator.""" + return [ + {"value": "iff", "label": "Maya IFF"}, + {"value": "exr", "label": "OpenEXR"}, + {"value": "tif", "label": "TIFF"}, + {"value": "png", "label": "PNG"}, + {"value": "tga", "label": "Targa"}, + {"value": "jpg", "label": "JPEG"} + ] + + +def redshift_aov_list_enum(): + """Return enumerator for Vray AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. + """ + return [ + {"value": "empty", "label": "< none >"}, + {"value": "AO", "label": "Ambient Occlusion"}, + {"value": "Background", "label": "Background"}, + {"value": "Beauty", "label": "Beauty"}, + {"value": "BumpNormals", "label": "Bump Normals"}, + {"value": "Caustics", "label": "Caustics"}, + {"value": "CausticsRaw", "label": "Caustics Raw"}, + {"value": "Cryptomatte", "label": "Cryptomatte"}, + {"value": "Custom", "label": "Custom"}, + {"value": "Z", "label": "Depth"}, + {"value": "DiffuseFilter", "label": "Diffuse Filter"}, + {"value": "DiffuseLighting", "label": "Diffuse Lighting"}, + {"value": "DiffuseLightingRaw", "label": "Diffuse Lighting Raw"}, + {"value": "Emission", "label": "Emission"}, + {"value": "GI", "label": "Global Illumination"}, + {"value": "GIRaw", "label": "Global Illumination Raw"}, + {"value": "Matte", "label": "Matte"}, + {"value": "MotionVectors", "label": "Ambient Occlusion"}, + {"value": "N", "label": "Normals"}, + {"value": "ID", "label": "ObjectID"}, + {"value": "ObjectBumpNormal", "label": "Object-Space Bump Normals"}, + {"value": "ObjectPosition", "label": "Object-Space Positions"}, + {"value": "PuzzleMatte", "label": "Puzzle Matte"}, + {"value": "Reflections", "label": "Reflections"}, + {"value": "ReflectionsFilter", "label": "Reflections Filter"}, + {"value": "ReflectionsRaw", "label": "Reflections Raw"}, + {"value": "Refractions", "label": "Refractions"}, + {"value": "RefractionsFilter", "label": "Refractions Filter"}, + {"value": "RefractionsRaw", "label": "Refractions Filter"}, + {"value": "Shadows", "label": "Shadows"}, + {"value": "SpecularLighting", "label": "Specular Lighting"}, + {"value": "SSS", "label": "Sub Surface Scatter"}, + {"value": "SSSRaw", "label": "Sub Surface Scatter Raw"}, + { + "value": "TotalDiffuseLightingRaw", + "label": "Total Diffuse Lighting Raw" + }, + { + "value": "TotalTransLightingRaw", + "label": "Total Translucency Filter" + }, + {"value": "TransTint", "label": "Translucency Filter"}, + {"value": "TransGIRaw", "label": "Translucency Lighting Raw"}, + {"value": "VolumeFogEmission", "label": "Volume Fog Emission"}, + {"value": "VolumeFogTint", "label": "Volume Fog Tint"}, + {"value": "VolumeLighting", "label": "Volume Lighting"}, + {"value": "P", "label": "World Position"}, + ] 
+ + +class AdditionalOptionsModel(BaseSettingsModel): + """Additional Option""" + _layout = "compact" + + attribute: str = Field("", title="Attribute name") + value: str = Field("", title="Value") + + +class ArnoldSettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + image_format: str = Field( + enum_resolver=arnold_image_format_enum, title="Output Image Format") + multilayer_exr: bool = Field(title="Multilayer (exr)") + tiled: bool = Field(title="Tiled (tif, exr)") + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=arnold_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Arnold Options", + description=( + "Add additional options - put attribute and value, like AASamples" + ) + ) + + +class VraySettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + # engine was str because of JSON limitation (key must be string) + engine: str = Field( + enum_resolver=lambda: [ + {"label": "V-Ray", "value": "1"}, + {"label": "V-Ray GPU", "value": "2"} + ], + title="Production Engine" + ) + image_format: str = Field( + enum_resolver=vray_image_output_enum, + title="Output Image Format" + ) + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=vray_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Vray Options", + description=( + "Add additional options - put attribute and value," + " like aaFilterSize" + ) + ) + + +class RedshiftSettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + # both engines are using the same enumerator, + # both were originally str because of JSON limitation. 
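+ # (For example, a consumer can map the stored string back with int(value); per redshift_engine_enum above, "0" corresponds to "None".)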
+ primary_gi_engine: str = Field( + enum_resolver=redshift_engine_enum, + title="Primary GI Engine" + ) + secondary_gi_engine: str = Field( + enum_resolver=redshift_engine_enum, + title="Secondary GI Engine" + ) + image_format: str = Field( + enum_resolver=redshift_image_output_enum, + title="Output Image Format" + ) + multilayer_exr: bool = Field(title="Multilayer (exr)") + force_combine: bool = Field(title="Force combine beauty and AOVs") + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=redshift_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Redshift Options", + description=( + "Add additional options - put attribute and value," + " like reflectionMaxTraceDepth" + ) + ) + + +def renderman_display_filters(): + return [ + "PxrBackgroundDisplayFilter", + "PxrCopyAOVDisplayFilter", + "PxrEdgeDetect", + "PxrFilmicTonemapperDisplayFilter", + "PxrGradeDisplayFilter", + "PxrHalfBufferErrorFilter", + "PxrImageDisplayFilter", + "PxrLightSaturation", + "PxrShadowDisplayFilter", + "PxrStylizedHatching", + "PxrStylizedLines", + "PxrStylizedToon", + "PxrWhitePointDisplayFilter" + ] + + +def renderman_sample_filters_enum(): + return [ + "PxrBackgroundSampleFilter", + "PxrCopyAOVSampleFilter", + "PxrCryptomatte", + "PxrFilmicTonemapperSampleFilter", + "PxrGradeSampleFilter", + "PxrShadowFilter", + "PxrWatermarkFilter", + "PxrWhitePointSampleFilter" + ] + + +class RendermanSettingsModel(BaseSettingsModel): + image_prefix: str = Field( + "", title="Image prefix template") + image_dir: str = Field( + "", title="Image Output Directory") + display_filters: list[str] = Field( + default_factory=list, + title="Display Filters", + enum_resolver=renderman_display_filters + ) + imageDisplay_dir: str = Field( + "", title="Image Display Filter Directory") + sample_filters: list[str] = Field( + default_factory=list, + title="Sample Filters", + enum_resolver=renderman_sample_filters_enum + ) + cryptomatte_dir: str = Field( + "", title="Cryptomatte Output Directory") + watermark_dir: str = Field( + "", title="Watermark Filter Directory") + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Renderer Options" + ) + + +class RenderSettingsModel(BaseSettingsModel): + apply_render_settings: bool = Field( + title="Apply Render Settings on creation" + ) + default_render_image_folder: str = Field( + title="Default render image folder" + ) + enable_all_lights: bool = Field( + title="Include all lights in Render Setup Layers by default" + ) + aov_separator: str = Field( + "underscore", + title="AOV Separator character", + enum_resolver=aov_separators_enum + ) + reset_current_frame: bool = Field( + title="Reset Current Frame") + remove_aovs: bool = Field( + title="Remove existing AOVs") + arnold_renderer: ArnoldSettingsModel = Field( + default_factory=ArnoldSettingsModel, + title="Arnold Renderer") + vray_renderer: VraySettingsModel = Field( + default_factory=VraySettingsModel, + title="Vray Renderer") + redshift_renderer: RedshiftSettingsModel = Field( + default_factory=RedshiftSettingsModel, + title="Redshift Renderer") + renderman_renderer: RendermanSettingsModel = Field( + default_factory=RendermanSettingsModel, + title="Renderman Renderer") + + +DEFAULT_RENDER_SETTINGS = { + "apply_render_settings": True, + "default_render_image_folder": "renders/maya", + "enable_all_lights": True, + "aov_separator": "underscore", + "reset_current_frame": False, + 
"remove_aovs": False, + "arnold_renderer": { + "image_prefix": "//_", + "image_format": "exr", + "multilayer_exr": True, + "tiled": True, + "aov_list": [], + "additional_options": [] + }, + "vray_renderer": { + "image_prefix": "//", + "engine": "1", + "image_format": "exr", + "aov_list": [], + "additional_options": [] + }, + "redshift_renderer": { + "image_prefix": "//", + "primary_gi_engine": "0", + "secondary_gi_engine": "0", + "image_format": "exr", + "multilayer_exr": True, + "force_combine": True, + "aov_list": [], + "additional_options": [] + }, + "renderman_renderer": { + "image_prefix": "{aov_separator}..", + "image_dir": "/", + "display_filters": [], + "imageDisplay_dir": "/{aov_separator}imageDisplayFilter..", + "sample_filters": [], + "cryptomatte_dir": "/{aov_separator}cryptomatte..", + "watermark_dir": "/{aov_separator}watermarkFilter..", + "additional_options": [] + } +} diff --git a/server_addon/maya/server/settings/scriptsmenu.py b/server_addon/maya/server/settings/scriptsmenu.py new file mode 100644 index 0000000000..82c1c2e53c --- /dev/null +++ b/server_addon/maya/server/settings/scriptsmenu.py @@ -0,0 +1,43 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + tags: list[str] = Field(default_factory=list, title="A list of tags") + + +class ScriptsmenuModel(BaseSettingsModel): + _isGroup = True + + name: str = Field(title="Menu Name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Menu Definition", + description="Scriptmenu Items Definition" + ) + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "command": "import openpype.hosts.maya.api.commands as op_cmds; op_cmds.edit_shader_definitions()", + "sourcetype": "python", + "title": "Edit shader name definitions", + "tooltip": "Edit shader name definitions used in validation and renaming.", + "tags": [ + "pipeline", + "shader" + ] + } + ] +} diff --git a/server_addon/maya/server/settings/templated_workfile_settings.py b/server_addon/maya/server/settings/templated_workfile_settings.py new file mode 100644 index 0000000000..ef81b31a07 --- /dev/null +++ b/server_addon/maya/server/settings/templated_workfile_settings.py @@ -0,0 +1,25 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class WorkfileBuildProfilesModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + path: str = Field("", title="Path to template") + + +class TemplatedProfilesModel(BaseSettingsModel): + profiles: list[WorkfileBuildProfilesModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_TEMPLATED_WORKFILE_SETTINGS = { + "profiles": [] +} diff --git a/server_addon/maya/server/settings/workfile_build_settings.py b/server_addon/maya/server/settings/workfile_build_settings.py new file mode 100644 index 0000000000..dc56d1a320 --- /dev/null +++ b/server_addon/maya/server/settings/workfile_build_settings.py @@ -0,0 +1,131 @@ +from pydantic import Field +from ayon_server.settings import 
BaseSettingsModel, task_types_enum + + +class ContextItemModel(BaseSettingsModel): + _layout = "expanded" + product_name_filters: list[str] = Field( + default_factory=list, title="Product name Filters") + product_types: list[str] = Field( + default_factory=list, title="Product types") + repre_names: list[str] = Field( + default_factory=list, title="Repre Names") + loaders: list[str] = Field( + default_factory=list, title="Loaders") + + +class WorkfileSettingModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + enum_resolver=task_types_enum, + title="Task types") + tasks: list[str] = Field( + default_factory=list, + title="Task names") + current_context: list[ContextItemModel] = Field( + default_factory=list, + title="Current Context") + linked_assets: list[ContextItemModel] = Field( + default_factory=list, + title="Linked Assets") + + +class ProfilesModel(BaseSettingsModel): + profiles: list[WorkfileSettingModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_WORKFILE_SETTING = { + "profiles": [ + { + "task_types": [], + "tasks": [ + "Lighting" + ], + "current_context": [ + { + "product_name_filters": [ + ".+[Mm]ain" + ], + "product_types": [ + "model" + ], + "repre_names": [ + "abc", + "ma" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "animation", + "pointcache", + "proxyAbc" + ], + "repre_names": [ + "abc" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "rendersetup" + ], + "repre_names": [ + "json" + ], + "loaders": [ + "RenderSetupLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "camera" + ], + "repre_names": [ + "abc" + ], + "loaders": [ + "ReferenceLoader" + ] + } + ], + "linked_assets": [ + { + "product_name_filters": [], + "product_types": [ + "sedress" + ], + "repre_names": [ + "ma" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "ArnoldStandin" + ], + "repre_names": [ + "ass" + ], + "loaders": [ + "assLoader" + ] + } + ] + } + ] +} diff --git a/server_addon/maya/server/version.py b/server_addon/maya/server/version.py new file mode 100644 index 0000000000..e57ad00718 --- /dev/null +++ b/server_addon/maya/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.3" diff --git a/server_addon/muster/server/__init__.py b/server_addon/muster/server/__init__.py new file mode 100644 index 0000000000..2cb8943554 --- /dev/null +++ b/server_addon/muster/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import MusterSettings, DEFAULT_VALUES + + +class MusterAddon(BaseServerAddon): + name = "muster" + version = __version__ + title = "Muster" + settings_model: Type[MusterSettings] = MusterSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/muster/server/settings.py b/server_addon/muster/server/settings.py new file mode 100644 index 0000000000..e37c762870 --- /dev/null +++ b/server_addon/muster/server/settings.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class TemplatesMapping(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: int = 
Field(title="mapping") + + +class MusterSettings(BaseSettingsModel): + enabled: bool = True + MUSTER_REST_URL: str = Field( + "", + title="Muster Rest URL", + scope=["studio"], + ) + + templates_mapping: list[TemplatesMapping] = Field( + default_factory=list, + title="Templates mapping", + ) + + +DEFAULT_VALUES = { + "enabled": False, + "MUSTER_REST_URL": "http://127.0.0.1:9890", + "templates_mapping": [ + {"name": "file_layers", "value": 7}, + {"name": "mentalray", "value": 2}, + {"name": "mentalray_sf", "value": 6}, + {"name": "redshift", "value": 55}, + {"name": "renderman", "value": 29}, + {"name": "software", "value": 1}, + {"name": "software_sf", "value": 5}, + {"name": "turtle", "value": 10}, + {"name": "vector", "value": 4}, + {"name": "vray", "value": 37}, + {"name": "ffmpeg", "value": 48} + ] +} diff --git a/server_addon/muster/server/version.py b/server_addon/muster/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/muster/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/nuke/server/__init__.py b/server_addon/nuke/server/__init__.py new file mode 100644 index 0000000000..032ceea5fb --- /dev/null +++ b/server_addon/nuke/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import NukeSettings, DEFAULT_VALUES + + +class NukeAddon(BaseServerAddon): + name = "nuke" + title = "Nuke" + version = __version__ + settings_model: Type[NukeSettings] = NukeSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/nuke/server/settings/__init__.py b/server_addon/nuke/server/settings/__init__.py new file mode 100644 index 0000000000..1e58865395 --- /dev/null +++ b/server_addon/nuke/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + NukeSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "NukeSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/nuke/server/settings/common.py b/server_addon/nuke/server/settings/common.py new file mode 100644 index 0000000000..700f01f3dc --- /dev/null +++ b/server_addon/nuke/server/settings/common.py @@ -0,0 +1,142 @@ +import json +from pydantic import Field +from ayon_server.exceptions import BadRequestException +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ( + ColorRGBA_float, + ColorRGB_uint8 +) + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class Vector2d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + + +class Vector3d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + z: float = Field(1.0, title="Z") + + +class Box(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + r: float = Field(1.0, title="R") + t: float = Field(1.0, title="T") + + +def formatable_knob_type_enum(): + return [ + {"value": "text", "label": "Text"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + 
{"value": "2d_vector", "label": "2D vector"}, + # "3D vector" + ] + + +class Formatable(BaseSettingsModel): + _layout = "compact" + + template: str = Field( + "", + placeholder="""{{key}} or {{key}};{{key}}""", + title="Template" + ) + to_type: str = Field( + "Text", + title="To Knob type", + enum_resolver=formatable_knob_type_enum, + ) + + +knob_types_enum = [ + {"value": "text", "label": "Text"}, + {"value": "formatable", "label": "Formate from template"}, + {"value": "color_gui", "label": "Color GUI"}, + {"value": "boolean", "label": "Boolean"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "vector_2d", "label": "2D vector"}, + {"value": "vector_3d", "label": "3D vector"}, + {"value": "color", "label": "Color"}, + {"value": "box", "label": "Box"}, + {"value": "expression", "label": "Expression"} +] + + +class KnobModel(BaseSettingsModel): + """# TODO: new data structure + - v3 was having type, name, value but + ayon is not able to make it the same. Current model is + defining `type` as `text` and instead of `value` the key is `text`. + So if `type` is `boolean` then key is `boolean` (value). + """ + _layout = "expanded" + + type: str = Field( + title="Type", + description="Switch between different knob types", + enum_resolver=lambda: knob_types_enum, + conditionalEnum=True + ) + + name: str = Field( + title="Name", + placeholder="Name" + ) + text: str = Field("", title="Value") + color_gui: ColorRGB_uint8 = Field( + (0, 0, 255), + title="RGB Uint8", + ) + boolean: bool = Field(False, title="Value") + number: int = Field(0, title="Value") + decimal_number: float = Field(0.0, title="Value") + vector_2d: Vector2d = Field( + default_factory=Vector2d, + title="Value" + ) + vector_3d: Vector3d = Field( + default_factory=Vector3d, + title="Value" + ) + color: ColorRGBA_float = Field( + (0.0, 0.0, 1.0, 1.0), + title="RGBA Float" + ) + box: Box = Field( + default_factory=Box, + title="Value" + ) + formatable: Formatable = Field( + default_factory=Formatable, + title="Formatable" + ) + expression: str = Field( + "", + title="Expression" + ) diff --git a/server_addon/nuke/server/settings/create_plugins.py b/server_addon/nuke/server/settings/create_plugins.py new file mode 100644 index 0000000000..0bbae4ee77 --- /dev/null +++ b/server_addon/nuke/server/settings/create_plugins.py @@ -0,0 +1,223 @@ +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names +) +from .common import KnobModel + + +def instance_attributes_enum(): + """Return create write instance attributes.""" + return [ + {"value": "reviewable", "label": "Reviewable"}, + {"value": "farm_rendering", "label": "Farm rendering"}, + {"value": "use_range_limit", "label": "Use range limit"} + ] + + +class PrenodeModel(BaseSettingsModel): + # TODO: missing in host api + # - good for `dependency` + name: str = Field( + title="Node name" + ) + + # TODO: `nodeclass` should be renamed to `nuke_node_class` + nodeclass: str = Field( + "", + title="Node class" + ) + dependent: str = Field( + "", + title="Incoming dependency" + ) + + """# TODO: Changes in host api: + - Need complete rework of knob types in nuke integration. + - We could not support v3 style of settings. 
+ """ + knobs: list[KnobModel] = Field( + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWriteRenderModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWritePrerenderModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWriteImageModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreatorPluginsSettings(BaseSettingsModel): + CreateWriteRender: CreateWriteRenderModel = Field( + default_factory=CreateWriteRenderModel, + title="Create Write Render" + ) + CreateWritePrerender: CreateWritePrerenderModel = Field( + default_factory=CreateWritePrerenderModel, + title="Create Write Prerender" + ) + CreateWriteImage: CreateWriteImageModel = Field( + default_factory=CreateWriteImageModel, + title="Create Write Image" + ) + + +DEFAULT_CREATE_SETTINGS = { + "CreateWriteRender": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Main", + "Mask" + ], + "instance_attributes": [ + "reviewable", + "farm_rendering" + ], + "prenodes": [ + { + "name": "Reformat01", + "nodeclass": "Reformat", + "dependent": "", + "knobs": [ + { + "type": "text", + "name": "resize", + "text": "none" + }, + { + "type": "boolean", + "name": "black_outside", + "boolean": True + } + ] + } + ] + }, + "CreateWritePrerender": { + 
"temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Key01", + "Bg01", + "Fg01", + "Branch01", + "Part01" + ], + "instance_attributes": [ + "farm_rendering", + "use_range_limit" + ], + "prenodes": [] + }, + "CreateWriteImage": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{ext}", + "default_variants": [ + "StillFrame", + "MPFrame", + "LayoutFrame" + ], + "instance_attributes": [ + "use_range_limit" + ], + "prenodes": [ + { + "name": "FrameHold01", + "nodeclass": "FrameHold", + "dependent": "", + "knobs": [ + { + "type": "expression", + "name": "first_frame", + "expression": "parent.first" + } + ] + } + ] + } +} diff --git a/server_addon/nuke/server/settings/dirmap.py b/server_addon/nuke/server/settings/dirmap.py new file mode 100644 index 0000000000..2da6d7bf60 --- /dev/null +++ b/server_addon/nuke/server/settings/dirmap.py @@ -0,0 +1,47 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class DirmapPathsSubmodel(BaseSettingsModel): + _layout = "compact" + source_path: list[str] = Field( + default_factory=list, + title="Source Paths" + ) + destination_path: list[str] = Field( + default_factory=list, + title="Destination Paths" + ) + + +class DirmapSettings(BaseSettingsModel): + """Nuke color management project settings.""" + _isGroup: bool = True + + enabled: bool = Field(title="enabled") + paths: DirmapPathsSubmodel = Field( + default_factory=DirmapPathsSubmodel, + title="Dirmap Paths" + ) + + +"""# TODO: +nuke is having originally implemented +following data inputs: + +"nuke-dirmap": { + "enabled": false, + "paths": { + "source-path": [], + "destination-path": [] + } +} +""" + +DEFAULT_DIRMAP_SETTINGS = { + "enabled": False, + "paths": { + "source_path": [], + "destination_path": [] + } +} diff --git a/server_addon/nuke/server/settings/filters.py b/server_addon/nuke/server/settings/filters.py new file mode 100644 index 0000000000..7e2702b3b7 --- /dev/null +++ b/server_addon/nuke/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value diff --git a/server_addon/nuke/server/settings/general.py b/server_addon/nuke/server/settings/general.py new file mode 100644 index 0000000000..bcbb183952 --- /dev/null +++ b/server_addon/nuke/server/settings/general.py @@ -0,0 +1,42 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class MenuShortcut(BaseSettingsModel): + """Nuke general project settings.""" + + create: str = Field( + title="Create..." + ) + publish: str = Field( + title="Publish..." + ) + load: str = Field( + title="Load..." + ) + manage: str = Field( + title="Manage..." + ) + build_workfile: str = Field( + title="Build Workfile..." 
+ ) + + +class GeneralSettings(BaseSettingsModel): + """Nuke general project settings.""" + + menu: MenuShortcut = Field( + default_factory=MenuShortcut, + title="Menu Shortcuts", + ) + + +DEFAULT_GENERAL_SETTINGS = { + "menu": { + "create": "ctrl+alt+c", + "publish": "ctrl+alt+p", + "load": "ctrl+alt+l", + "manage": "ctrl+alt+m", + "build_workfile": "ctrl+alt+b" + } +} diff --git a/server_addon/nuke/server/settings/gizmo.py b/server_addon/nuke/server/settings/gizmo.py new file mode 100644 index 0000000000..4cdd614da8 --- /dev/null +++ b/server_addon/nuke/server/settings/gizmo.py @@ -0,0 +1,79 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + MultiplatformPathListModel, +) + + +class SubGizmoItem(BaseSettingsModel): + title: str = Field( + title="Label" + ) + sourcetype: str = Field( + title="Type of usage" + ) + command: str = Field( + title="Python command" + ) + icon: str = Field( + title="Icon Path" + ) + shortcut: str = Field( + title="Hotkey" + ) + + +class GizmoDefinitionItem(BaseSettingsModel): + gizmo_toolbar_path: str = Field( + title="Gizmo Menu" + ) + sub_gizmo_list: list[SubGizmoItem] = Field( + default_factory=list, title="Sub Gizmo List") + + +class GizmoItem(BaseSettingsModel): + """Nuke gizmo item """ + + toolbar_menu_name: str = Field( + title="Toolbar Menu Name" + ) + gizmo_source_dir: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Gizmo Directory Path" + ) + toolbar_icon_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Toolbar Icon Path" + ) + gizmo_definition: list[GizmoDefinitionItem] = Field( + default_factory=list, title="Gizmo Definition") + + +DEFAULT_GIZMO_ITEM = { + "toolbar_menu_name": "OpenPype Gizmo", + "gizmo_source_dir": { + "windows": [], + "darwin": [], + "linux": [] + }, + "toolbar_icon_path": { + "windows": "", + "darwin": "", + "linux": "" + }, + "gizmo_definition": [ + { + "gizmo_toolbar_path": "/path/to/menu", + "sub_gizmo_list": [ + { + "sourcetype": "python", + "title": "Gizmo Note", + "command": "nuke.nodes.StickyNote(label='You can create your own toolbar menu in the Nuke GizmoMenu of OpenPype')", + "icon": "", + "shortcut": "" + } + ] + } + ] +} diff --git a/server_addon/nuke/server/settings/imageio.py b/server_addon/nuke/server/settings/imageio.py new file mode 100644 index 0000000000..b43017ef8b --- /dev/null +++ b/server_addon/nuke/server/settings/imageio.py @@ -0,0 +1,410 @@ +from typing import Literal +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + +from .common import KnobModel + + +class NodesModel(BaseSettingsModel): + """# TODO: This needs to be somehow labeled in settings panel + or at least it could show gist of configuration + """ + _layout = "expanded" + plugins: list[str] = Field( + title="Used in plugins" + ) + # TODO: rename `nukeNodeClass` to `nuke_node_class` + nukeNodeClass: str = Field( + title="Nuke Node Class", + ) + + """ # TODO: Need complete rework of knob types + in nuke integration. We could not support v3 style of settings. 
+ """ + knobs: list[KnobModel] = Field( + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class NodesSetting(BaseSettingsModel): + # TODO: rename `requiredNodes` to `required_nodes` + requiredNodes: list[NodesModel] = Field( + title="Plugin required", + default_factory=list + ) + # TODO: rename `overrideNodes` to `override_nodes` + overrideNodes: list[NodesModel] = Field( + title="Plugin's node overrides", + default_factory=list + ) + + +def ocio_configs_switcher_enum(): + return [ + {"value": "nuke-default", "label": "nuke-default"}, + {"value": "spi-vfx", "label": "spi-vfx"}, + {"value": "spi-anim", "label": "spi-anim"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3"}, + {"value": "aces_1.1", "label": "aces_1.1"}, + {"value": "aces_1.2", "label": "aces_1.2"}, + {"value": "aces_1.3", "label": "aces_1.3"}, + {"value": "custom", "label": "custom"} + ] + + +class WorkfileColorspaceSettings(BaseSettingsModel): + """Nuke workfile colorspace preset. """ + """# TODO: enhance settings with host api: + we need to add mapping to resolve properly keys. + Nuke is excpecting camel case key names, + but for better code consistency we need to + be using snake_case: + + color_management = colorManagement + ocio_config = OCIO_config + working_space_name = workingSpaceLUT + monitor_name = monitorLut + monitor_out_name = monitorOutLut + int_8_name = int8Lut + int_16_name = int16Lut + log_name = logLut + float_name = floatLut + """ + + colorManagement: Literal["Nuke", "OCIO"] = Field( + title="Color Management" + ) + + OCIO_config: str = Field( + title="OpenColorIO Config", + description="Switch between OCIO configs", + enum_resolver=ocio_configs_switcher_enum, + conditionalEnum=True + ) + + workingSpaceLUT: str = Field( + title="Working Space" + ) + monitorLut: str = Field( + title="Monitor" + ) + int8Lut: str = Field( + title="8-bit files" + ) + int16Lut: str = Field( + title="16-bit files" + ) + logLut: str = Field( + title="Log files" + ) + floatLut: str = Field( + title="Float files" + ) + + +class ReadColorspaceRulesItems(BaseSettingsModel): + _layout = "expanded" + + regex: str = Field("", title="Regex expression") + colorspace: str = Field("", title="Colorspace") + + +class RegexInputsModel(BaseSettingsModel): + inputs: list[ReadColorspaceRulesItems] = Field( + default_factory=list, + title="Inputs" + ) + + +class ViewProcessModel(BaseSettingsModel): + viewerProcess: str = Field( + title="Viewer Process Name" + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): 
+ """Nuke color management project settings. """ + _isGroup: bool = True + + """# TODO: enhance settings with host api: + to restruture settings for simplification. + + now: nuke/imageio/viewer/viewerProcess + future: nuke/imageio/viewer + """ + activate_host_color_management: bool = Field( + True, title="Enable Color Management") + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + viewer: ViewProcessModel = Field( + default_factory=ViewProcessModel, + title="Viewer", + description="""Viewer profile is used during + Creation of new viewer node at knob viewerProcess""" + ) + + """# TODO: enhance settings with host api: + to restruture settings for simplification. + + now: nuke/imageio/baking/viewerProcess + future: nuke/imageio/baking + """ + baking: ViewProcessModel = Field( + default_factory=ViewProcessModel, + title="Baking", + description="""Baking profile is used during + publishing baked colorspace data at knob viewerProcess""" + ) + + workfile: WorkfileColorspaceSettings = Field( + default_factory=WorkfileColorspaceSettings, + title="Workfile" + ) + + nodes: NodesSetting = Field( + default_factory=NodesSetting, + title="Nodes" + ) + """# TODO: enhance settings with host api: + - old settings are using `regexInputs` key but we + need to rename to `regex_inputs` + - no need for `inputs` middle part. It can stay + directly on `regex_inputs` + """ + regexInputs: RegexInputsModel = Field( + default_factory=RegexInputsModel, + title="Assign colorspace to read nodes via rules" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "viewer": { + "viewerProcess": "sRGB" + }, + "baking": { + "viewerProcess": "rec709" + }, + "workfile": { + "colorManagement": "Nuke", + "OCIO_config": "nuke-default", + "workingSpaceLUT": "linear", + "monitorLut": "sRGB", + "int8Lut": "sRGB", + "int16Lut": "sRGB", + "logLut": "Cineon", + "floatLut": "linear" + }, + "nodes": { + "requiredNodes": [ + { + "plugins": [ + "CreateWriteRender" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "exr" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit half" + }, + { + "type": "text", + "name": "compression", + "text": "Zip (1 scanline)" + }, + { + "type": "boolean", + "name": "autocrop", + "boolean": True + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 186, + 35, + 35 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "linear" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + }, + { + "plugins": [ + "CreateWritePrerender" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "exr" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit half" + }, + { + "type": "text", + "name": "compression", + "text": "Zip (1 scanline)" + }, + { + "type": "boolean", + "name": "autocrop", + "boolean": True + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 171, + 171, + 10 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "linear" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + }, + { + "plugins": [ + "CreateWriteImage" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": 
"text", + "name": "file_type", + "text": "tiff" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit" + }, + { + "type": "text", + "name": "compression", + "text": "Deflate" + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 56, + 162, + 7 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "sRGB" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + } + ], + "overrideNodes": [] + }, + "regexInputs": { + "inputs": [ + { + "regex": "(beauty).*(?=.exr)", + "colorspace": "linear" + } + ] + } +} diff --git a/server_addon/nuke/server/settings/loader_plugins.py b/server_addon/nuke/server/settings/loader_plugins.py new file mode 100644 index 0000000000..6db381bffb --- /dev/null +++ b/server_addon/nuke/server/settings/loader_plugins.py @@ -0,0 +1,80 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class LoadImageModel(BaseSettingsModel): + enabled: bool = Field( + title="Enabled" + ) + """# TODO: v3 api used `_representation` + New api is hiding it so it had to be renamed + to `representations_include` + """ + representations_include: list[str] = Field( + default_factory=list, + title="Include representations" + ) + + node_name_template: str = Field( + title="Read node name template" + ) + + +class LoadClipOptionsModel(BaseSettingsModel): + start_at_workfile: bool = Field( + title="Start at workfile's start frame" + ) + add_retime: bool = Field( + title="Add retime" + ) + + +class LoadClipModel(BaseSettingsModel): + enabled: bool = Field( + title="Enabled" + ) + """# TODO: v3 api used `_representation` + New api is hiding it so it had to be renamed + to `representations_include` + """ + representations_include: list[str] = Field( + default_factory=list, + title="Include representations" + ) + + node_name_template: str = Field( + title="Read node name template" + ) + options_defaults: LoadClipOptionsModel = Field( + default_factory=LoadClipOptionsModel, + title="Loader option defaults" + ) + + +class LoaderPuginsModel(BaseSettingsModel): + LoadImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Load Image" + ) + LoadClip: LoadClipModel = Field( + default_factory=LoadClipModel, + title="Load Clip" + ) + + +DEFAULT_LOADER_PLUGINS_SETTINGS = { + "LoadImage": { + "enabled": True, + "representations_include": [], + "node_name_template": "{class_name}_{ext}" + }, + "LoadClip": { + "enabled": True, + "representations_include": [], + "node_name_template": "{class_name}_{ext}", + "options_defaults": { + "start_at_workfile": True, + "add_retime": True + } + } +} diff --git a/server_addon/nuke/server/settings/main.py b/server_addon/nuke/server/settings/main.py new file mode 100644 index 0000000000..4687d48ac9 --- /dev/null +++ b/server_addon/nuke/server/settings/main.py @@ -0,0 +1,128 @@ +from pydantic import validator, Field + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names +) + +from .general import ( + GeneralSettings, + DEFAULT_GENERAL_SETTINGS +) +from .imageio import ( + ImageIOSettings, + DEFAULT_IMAGEIO_SETTINGS +) +from .dirmap import ( + DirmapSettings, + DEFAULT_DIRMAP_SETTINGS +) +from .scriptsmenu import ( + ScriptsmenuSettings, + DEFAULT_SCRIPTSMENU_SETTINGS +) +from .gizmo import ( + GizmoItem, + DEFAULT_GIZMO_ITEM +) +from .create_plugins import ( + CreatorPluginsSettings, + DEFAULT_CREATE_SETTINGS +) +from .publish_plugins import ( + PublishPuginsModel, 
+ DEFAULT_PUBLISH_PLUGIN_SETTINGS +) +from .loader_plugins import ( + LoaderPuginsModel, + DEFAULT_LOADER_PLUGINS_SETTINGS +) +from .workfile_builder import ( + WorkfileBuilderModel, + DEFAULT_WORKFILE_BUILDER_SETTINGS +) +from .templated_workfile_build import ( + TemplatedWorkfileBuildModel +) +from .filters import PublishGUIFilterItemModel + + +class NukeSettings(BaseSettingsModel): + """Nuke addon settings.""" + + general: GeneralSettings = Field( + default_factory=GeneralSettings, + title="General", + ) + + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (imageio)", + ) + """# TODO: fix host api: + - rename `nuke-dirmap` to `dirmap` was inevitable + """ + dirmap: DirmapSettings = Field( + default_factory=DirmapSettings, + title="Nuke Directory Mapping", + ) + + scriptsmenu: ScriptsmenuSettings = Field( + default_factory=ScriptsmenuSettings, + title="Scripts Menu Definition", + ) + + gizmo: list[GizmoItem] = Field( + default_factory=list, title="Gizmo Menu") + + create: CreatorPluginsSettings = Field( + default_factory=CreatorPluginsSettings, + title="Creator Plugins", + ) + + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish Plugins", + ) + + load: LoaderPuginsModel = Field( + default_factory=LoaderPuginsModel, + title="Loader Plugins", + ) + + workfile_builder: WorkfileBuilderModel = Field( + default_factory=WorkfileBuilderModel, + title="Workfile Builder", + ) + + templated_workfile_build: TemplatedWorkfileBuildModel = Field( + title="Templated Workfile Build", + default_factory=TemplatedWorkfileBuildModel + ) + + filters: list[PublishGUIFilterItemModel] = Field( + default_factory=list + ) + + @validator("filters") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "general": DEFAULT_GENERAL_SETTINGS, + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "dirmap": DEFAULT_DIRMAP_SETTINGS, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "gizmo": [DEFAULT_GIZMO_ITEM], + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_PLUGIN_SETTINGS, + "load": DEFAULT_LOADER_PLUGINS_SETTINGS, + "workfile_builder": DEFAULT_WORKFILE_BUILDER_SETTINGS, + "templated_workfile_build": { + "profiles": [] + }, + "filters": [] +} diff --git a/server_addon/nuke/server/settings/publish_plugins.py b/server_addon/nuke/server/settings/publish_plugins.py new file mode 100644 index 0000000000..7e898f8c9a --- /dev/null +++ b/server_addon/nuke/server/settings/publish_plugins.py @@ -0,0 +1,504 @@ +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, + task_types_enum +) +from .common import KnobModel, validate_json_dict + + +def nuke_render_publish_types_enum(): + """Return all nuke render families available in creators.""" + return [ + {"value": "render", "label": "Render"}, + {"value": "prerender", "label": "Prerender"}, + {"value": "image", "label": "Image"} + ] + + +def nuke_product_types_enum(): + """Return all nuke families available in creators.""" + return [ + {"value": "nukenodes", "label": "Nukenodes"}, + {"value": "model", "label": "Model"}, + {"value": "camera", "label": "Camera"}, + {"value": "gizmo", "label": "Gizmo"}, + {"value": "source", "label": "Source"} + ] + nuke_render_publish_types_enum() + + +class NodeModel(BaseSettingsModel): + # TODO: missing in host api + name: str = Field( + title="Node name" + ) + # TODO: `nodeclass` 
rename to `nuke_node_class` + nodeclass: str = Field( + "", + title="Node class" + ) + dependent: str = Field( + "", + title="Incoming dependency" + ) + """# TODO: Changes in host api: + - Need complete rework of knob types in nuke integration. + - We could not support v3 style of settings. + """ + knobs: list[KnobModel] = Field( + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class ThumbnailRepositionNodeModel(BaseSettingsModel): + node_class: str = Field(title="Node class") + knobs: list[KnobModel] = Field(title="Knobs", default_factory=list) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CollectInstanceDataModel(BaseSettingsModel): + sync_workfile_version_on_product_types: list[str] = Field( + default_factory=list, + enum_resolver=nuke_product_types_enum, + title="Sync workfile versions for families" + ) + + +class OptionalPluginModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateKnobsModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + knobs: str = Field( + "{}", + title="Knobs", + widget="textarea", + ) + + @validator("knobs") + def validate_json(cls, value): + return validate_json_dict(value) + + +class ExtractThumbnailModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + use_rendered: bool = Field(title="Use rendered images") + bake_viewer_process: bool = Field(title="Bake view process") + bake_viewer_input_process: bool = Field(title="Bake viewer input process") + """# TODO: needs to be rewritten from v3 to ayon + - `nodes` in v3 was dict but now `prenodes` is list of dict + - also later `nodes` should be `prenodes` + """ + + nodes: list[NodeModel] = Field( + title="Nodes (deprecated)" + ) + reposition_nodes: list[ThumbnailRepositionNodeModel] = Field( + title="Reposition nodes", + default_factory=list + ) + + +class ExtractReviewDataModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + + +class ExtractReviewDataLutModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + + +class BakingStreamFilterModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + product_types: list[str] = Field( + default_factory=list, + enum_resolver=nuke_render_publish_types_enum, + title="Product types" + ) + product_names: list[str] = Field( + default_factory=list, title="Product names") + + +class ReformatNodesRepositionNodes(BaseSettingsModel): + node_class: str = Field(title="Node class") + knobs: list[KnobModel] = Field( + default_factory=list, + title="Node knobs") + + +class ReformatNodesConfigModel(BaseSettingsModel): + """Only reposition nodes supported. + + You can add multiple reformat nodes and set their knobs. + Order of reformat nodes is important. First reformat node will + be applied first and last reformat node will be applied last. 
+ """ + enabled: bool = Field(False) + reposition_nodes: list[ReformatNodesRepositionNodes] = Field( + default_factory=list, + title="Reposition knobs" + ) + + +class BakingStreamModel(BaseSettingsModel): + name: str = Field(title="Output name") + filter: BakingStreamFilterModel = Field( + title="Filter", default_factory=BakingStreamFilterModel) + read_raw: bool = Field(title="Read raw switch") + viewer_process_override: str = Field(title="Viewer process override") + bake_viewer_process: bool = Field(title="Bake view process") + bake_viewer_input_process: bool = Field(title="Bake viewer input process") + reformat_nodes_config: ReformatNodesConfigModel = Field( + default_factory=ReformatNodesConfigModel, + title="Reformat Nodes") + extension: str = Field(title="File extension") + add_custom_tags: list[str] = Field( + title="Custom tags", default_factory=list) + + +class ExtractReviewDataMovModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + viewer_lut_raw: bool = Field(title="Viewer lut raw") + outputs: list[BakingStreamModel] = Field( + title="Baking streams" + ) + + +class FSubmissionNoteModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class FSubmistingForModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class FVFXScopeOfWorkModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class ExctractSlateFrameParamModel(BaseSettingsModel): + f_submission_note: FSubmissionNoteModel = Field( + title="f_submission_note", + default_factory=FSubmissionNoteModel + ) + f_submitting_for: FSubmistingForModel = Field( + title="f_submitting_for", + default_factory=FSubmistingForModel + ) + f_vfx_scope_of_work: FVFXScopeOfWorkModel = Field( + title="f_vfx_scope_of_work", + default_factory=FVFXScopeOfWorkModel + ) + + +class ExtractSlateFrameModel(BaseSettingsModel): + viewer_lut_raw: bool = Field(title="Viewer lut raw") + """# TODO: v3 api different model: + - not possible to replicate v3 model: + {"name": [bool, str]} + - not it is: + {"name": {"enabled": bool, "template": str}} + """ + key_value_mapping: ExctractSlateFrameParamModel = Field( + title="Key value mapping", + default_factory=ExctractSlateFrameParamModel + ) + + +class IncrementScriptVersionModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class PublishPuginsModel(BaseSettingsModel): + CollectInstanceData: CollectInstanceDataModel = Field( + title="Collect Instance Version", + default_factory=CollectInstanceDataModel, + section="Collectors" + ) + ValidateCorrectAssetName: OptionalPluginModel = Field( + title="Validate Correct Folder Name", + default_factory=OptionalPluginModel, + section="Validators" + ) + ValidateContainers: OptionalPluginModel = Field( + title="Validate Containers", + default_factory=OptionalPluginModel + ) + ValidateKnobs: ValidateKnobsModel = Field( + title="Validate Knobs", + default_factory=ValidateKnobsModel + ) + ValidateOutputResolution: OptionalPluginModel = Field( + title="Validate Output Resolution", + default_factory=OptionalPluginModel + ) + ValidateGizmo: OptionalPluginModel = Field( + title="Validate Gizmo", + default_factory=OptionalPluginModel + ) + ValidateBackdrop: OptionalPluginModel = Field( + title="Validate Backdrop", + default_factory=OptionalPluginModel + ) + ValidateScript: OptionalPluginModel = 
+    ValidateScript: OptionalPluginModel = Field(
+        title="Validate Script",
+        default_factory=OptionalPluginModel
+    )
+    ExtractThumbnail: ExtractThumbnailModel = Field(
+        title="Extract Thumbnail",
+        default_factory=ExtractThumbnailModel,
+        section="Extractors"
+    )
+    ExtractReviewData: ExtractReviewDataModel = Field(
+        title="Extract Review Data",
+        default_factory=ExtractReviewDataModel
+    )
+    ExtractReviewDataLut: ExtractReviewDataLutModel = Field(
+        title="Extract Review Data Lut",
+        default_factory=ExtractReviewDataLutModel
+    )
+    ExtractReviewDataMov: ExtractReviewDataMovModel = Field(
+        title="Extract Review Data Mov",
+        default_factory=ExtractReviewDataMovModel
+    )
+    ExtractSlateFrame: ExtractSlateFrameModel = Field(
+        title="Extract Slate Frame",
+        default_factory=ExtractSlateFrameModel
+    )
+    # TODO: the plugin should be renamed - `workfile`, not `script`
+    IncrementScriptVersion: IncrementScriptVersionModel = Field(
+        title="Increment Workfile Version",
+        default_factory=IncrementScriptVersionModel,
+        section="Integrators"
+    )
+
+
+DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
+    "CollectInstanceData": {
+        "sync_workfile_version_on_product_types": [
+            "nukenodes",
+            "camera",
+            "gizmo",
+            "source",
+            "render",
+            "write"
+        ]
+    },
+    "ValidateCorrectAssetName": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateContainers": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateKnobs": {
+        "enabled": False,
+        "knobs": "\n".join([
+            '{',
+            '  "render": {',
+            '    "review": true',
+            '  }',
+            '}'
+        ])
+    },
+    "ValidateOutputResolution": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateGizmo": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateBackdrop": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateScript": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ExtractThumbnail": {
+        "enabled": True,
+        "use_rendered": True,
+        "bake_viewer_process": True,
+        "bake_viewer_input_process": True,
+        "nodes": [
+            {
+                "name": "Reformat01",
+                "nodeclass": "Reformat",
+                "dependent": "",
+                "knobs": [
+                    {
+                        "type": "text",
+                        "name": "type",
+                        "text": "to format"
+                    },
+                    {
+                        "type": "text",
+                        "name": "format",
+                        "text": "HD_1080"
+                    },
+                    {
+                        "type": "text",
+                        "name": "filter",
+                        "text": "Lanczos6"
+                    },
+                    {
+                        "type": "boolean",
+                        "name": "black_outside",
+                        "boolean": True
+                    },
+                    {
+                        "type": "boolean",
+                        "name": "pbb",
+                        "boolean": False
+                    }
+                ]
+            }
+        ],
+        "reposition_nodes": [
+            {
+                "node_class": "Reformat",
+                "knobs": [
+                    {
+                        "type": "text",
+                        "name": "type",
+                        "text": "to format"
+                    },
+                    {
+                        "type": "text",
+                        "name": "format",
+                        "text": "HD_1080"
+                    },
+                    {
+                        "type": "text",
+                        "name": "filter",
+                        "text": "Lanczos6"
+                    },
+                    {
+                        "type": "bool",
+                        "name": "black_outside",
+                        "boolean": True
+                    },
+                    {
+                        "type": "bool",
+                        "name": "pbb",
+                        "boolean": False
+                    }
+                ]
+            }
+        ]
+    },
+    "ExtractReviewData": {
+        "enabled": False
+    },
+    "ExtractReviewDataLut": {
+        "enabled": False
+    },
+    "ExtractReviewDataMov": {
+        "enabled": True,
+        "viewer_lut_raw": False,
+        "outputs": [
+            {
+                "name": "baking",
+                "filter": {
+                    "task_types": [],
+                    "product_types": [],
+                    "product_names": []
+                },
+                "read_raw": False,
+                "viewer_process_override": "",
+                "bake_viewer_process": True,
+                "bake_viewer_input_process": True,
+                "reformat_nodes_config": {
+                    "enabled": False,
+                    "reposition_nodes": [
+                        {
+                            "node_class": "Reformat",
+                            "knobs": [
+                                {
+                                    "type": "text",
+                                    "name": "type",
+                                    "text": "to format"
+                                },
+                                {
+                                    "type": "text",
+                                    "name": "format",
+                                    "text": "HD_1080"
+                                },
+                                {
+                                    "type": "text",
+                                    "name": "filter",
+                                    "text": "Lanczos6"
+                                },
+                                {
+                                    "type": "bool",
+                                    "name": "black_outside",
+                                    "boolean": True
+                                },
+                                {
+                                    "type": "bool",
+                                    "name": "pbb",
+                                    "boolean": False
+                                }
+                            ]
+                        }
+                    ]
+                },
+                "extension": "mov",
+                "add_custom_tags": []
+            }
+        ]
+    },
+    "ExtractSlateFrame": {
+        "viewer_lut_raw": False,
+        "key_value_mapping": {
+            "f_submission_note": {
+                "enabled": True,
+                "template": "{comment}"
+            },
+            "f_submitting_for": {
+                "enabled": True,
+                "template": "{intent[value]}"
+            },
+            "f_vfx_scope_of_work": {
+                "enabled": False,
+                "template": ""
+            }
+        }
+    },
+    "IncrementScriptVersion": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    }
+}
diff --git a/server_addon/nuke/server/settings/scriptsmenu.py b/server_addon/nuke/server/settings/scriptsmenu.py
new file mode 100644
index 0000000000..9d1c32ebac
--- /dev/null
+++ b/server_addon/nuke/server/settings/scriptsmenu.py
@@ -0,0 +1,54 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class ScriptsmenuSubmodel(BaseSettingsModel):
+    """Item Definition"""
+    _isGroup = True
+
+    type: str = Field(title="Type")
+    command: str = Field(title="Command")
+    sourcetype: str = Field(title="Source Type")
+    title: str = Field(title="Title")
+    tooltip: str = Field(title="Tooltip")
+
+
+class ScriptsmenuSettings(BaseSettingsModel):
+    """Nuke script menu project settings."""
+    _isGroup = True
+
+    # TODO: in the api, rename the key `name` to `menu_name`
+    name: str = Field(title="Menu Name")
+    definition: list[ScriptsmenuSubmodel] = Field(
+        default_factory=list,
+        title="Definition",
+        description="Scriptsmenu Items Definition"
+    )
+
+
+DEFAULT_SCRIPTSMENU_SETTINGS = {
+    "name": "OpenPype Tools",
+    "definition": [
+        {
+            "type": "action",
+            "sourcetype": "python",
+            "title": "OpenPype Docs",
+            "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_nuke_tut')",
+            "tooltip": "Open the OpenPype Nuke user doc page"
+        },
+        {
+            "type": "action",
+            "sourcetype": "python",
+            "title": "Set Frame Start (Read Node)",
+            "command": "from openpype.hosts.nuke.startup.frame_setting_for_read_nodes import main;main();",
+            "tooltip": "Set frame start for read node(s)"
+        },
+        {
+            "type": "action",
+            "sourcetype": "python",
+            "title": "Set non-publish output for Write Node",
+            "command": "from openpype.hosts.nuke.startup.custom_write_node import main;main();",
+            "tooltip": "Set a non-publish output path for write node(s)"
+        }
+    ]
+}
diff --git a/server_addon/nuke/server/settings/templated_workfile_build.py b/server_addon/nuke/server/settings/templated_workfile_build.py
new file mode 100644
index 0000000000..e0245c8d06
--- /dev/null
+++ b/server_addon/nuke/server/settings/templated_workfile_build.py
@@ -0,0 +1,33 @@
+from pydantic import Field
+from ayon_server.settings import (
+    BaseSettingsModel,
+    task_types_enum,
+)
+
+
+class TemplatedWorkfileProfileModel(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    task_names: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    path: str = Field(
+        title="Path to template"
+    )
+    keep_placeholder: bool = Field(
+        False,
+        title="Keep placeholders")
+    create_first_version: bool = Field(
+        True,
+        title="Create first version"
+    )
+
+
+class TemplatedWorkfileBuildModel(BaseSettingsModel):
+    profiles: list[TemplatedWorkfileProfileModel] = Field(
+        default_factory=list
+    )
diff --git a/server_addon/nuke/server/settings/workfile_builder.py
b/server_addon/nuke/server/settings/workfile_builder.py
new file mode 100644
index 0000000000..ee67c7c16a
--- /dev/null
+++ b/server_addon/nuke/server/settings/workfile_builder.py
@@ -0,0 +1,72 @@
+from pydantic import Field
+from ayon_server.settings import (
+    BaseSettingsModel,
+    task_types_enum,
+    MultiplatformPathModel,
+)
+
+
+class CustomTemplateModel(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    path: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Workfile Template Path"
+    )
+
+
+class BuilderProfileItemModel(BaseSettingsModel):
+    product_name_filters: list[str] = Field(
+        default_factory=list,
+        title="Product name filters"
+    )
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    repre_names: list[str] = Field(
+        default_factory=list,
+        title="Representations"
+    )
+    loaders: list[str] = Field(
+        default_factory=list,
+        title="Loader plugins"
+    )
+
+
+class BuilderProfileModel(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    tasks: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    current_context: list[BuilderProfileItemModel] = Field(
+        title="Current context")
+    linked_assets: list[BuilderProfileItemModel] = Field(
+        title="Linked assets/shots")
+
+
+class WorkfileBuilderModel(BaseSettingsModel):
+    create_first_version: bool = Field(
+        title="Create first workfile")
+    custom_templates: list[CustomTemplateModel] = Field(
+        title="Custom templates")
+    builder_on_start: bool = Field(
+        title="Run Builder at first workfile")
+    profiles: list[BuilderProfileModel] = Field(
+        title="Builder profiles")
+
+
+DEFAULT_WORKFILE_BUILDER_SETTINGS = {
+    "create_first_version": False,
+    "custom_templates": [],
+    "builder_on_start": False,
+    "profiles": []
+}
diff --git a/server_addon/nuke/server/version.py b/server_addon/nuke/server/version.py
new file mode 100644
index 0000000000..b3f4756216
--- /dev/null
+++ b/server_addon/nuke/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.2"
diff --git a/server_addon/openpype/client/pyproject.toml b/server_addon/openpype/client/pyproject.toml
new file mode 100644
index 0000000000..6d5ac92ca7
--- /dev/null
+++ b/server_addon/openpype/client/pyproject.toml
@@ -0,0 +1,25 @@
+[project]
+name = "openpype"
+description = "OpenPype addon for AYON server."
+
+[tool.poetry.dependencies]
+python = ">=3.9.1,<3.10"
+aiohttp_json_rpc = "*" # TVPaint server
+aiohttp-middlewares = "^2.0.0"
+wsrpc_aiohttp = "^3.1.1" # websocket server
+clique = "1.6.*"
+shotgun_api3 = {git = "https://github.com/shotgunsoftware/python-api.git", rev = "v3.3.3"}
+gazu = "^0.9.3"
+google-api-python-client = "^1.12.8" # sync server google support (should be separate?)
+jsonschema = "^2.6.0"
+pymongo = "^3.11.2"
+log4mongo = "^1.7"
+pathlib2 = "^2.3.5" # deadline submit publish job only (single place, maybe not needed?)
+pyblish-base = "^1.8.11" +pynput = "^1.7.2" # Timers manager - TODO replace +"Qt.py" = "^1.3.3" +qtawesome = "0.7.3" +speedcopy = "^2.1" +slack-sdk = "^3.6.0" +pysftp = "^0.2.9" +dropbox = "^11.20.0" diff --git a/server_addon/openpype/server/__init__.py b/server_addon/openpype/server/__init__.py new file mode 100644 index 0000000000..df24c73c76 --- /dev/null +++ b/server_addon/openpype/server/__init__.py @@ -0,0 +1,9 @@ +from ayon_server.addons import BaseServerAddon + +from .version import __version__ + + +class OpenPypeAddon(BaseServerAddon): + name = "openpype" + title = "OpenPype" + version = __version__ diff --git a/server_addon/photoshop/LICENSE b/server_addon/photoshop/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/photoshop/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/server_addon/photoshop/README.md b/server_addon/photoshop/README.md
new file mode 100644
index 0000000000..2d1e1c745c
--- /dev/null
+++ b/server_addon/photoshop/README.md
@@ -0,0 +1,4 @@
+Photoshop Addon
+===============
+
+Integration with Adobe Photoshop.
diff --git a/server_addon/photoshop/server/__init__.py b/server_addon/photoshop/server/__init__.py
new file mode 100644
index 0000000000..3a45f7a809
--- /dev/null
+++ b/server_addon/photoshop/server/__init__.py
@@ -0,0 +1,16 @@
+from ayon_server.addons import BaseServerAddon
+
+from .settings import PhotoshopSettings, DEFAULT_PHOTOSHOP_SETTING
+from .version import __version__
+
+
+class Photoshop(BaseServerAddon):
+    name = "photoshop"
+    title = "Photoshop"
+    version = __version__
+
+    settings_model = PhotoshopSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_PHOTOSHOP_SETTING)
diff --git a/server_addon/photoshop/server/settings/__init__.py b/server_addon/photoshop/server/settings/__init__.py
new file mode 100644
index 0000000000..9ae5764362
--- /dev/null
+++ b/server_addon/photoshop/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+    PhotoshopSettings,
+    DEFAULT_PHOTOSHOP_SETTING,
+)
+
+
+__all__ = (
+    "PhotoshopSettings",
+    "DEFAULT_PHOTOSHOP_SETTING",
+)
diff --git a/server_addon/photoshop/server/settings/creator_plugins.py b/server_addon/photoshop/server/settings/creator_plugins.py
new file mode 100644
index 0000000000..2fe63a7e3a
--- /dev/null
+++ b/server_addon/photoshop/server/settings/creator_plugins.py
@@ -0,0 +1,79 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class CreateImagePluginModel(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    active_on_create: bool = Field(True, title="Active by default")
+    mark_for_review: bool = Field(False, title="Review by default")
+    default_variants: list[str] = Field(
+        default_factory=list,
+        title="Default Variants"
+    )
+
+
+class AutoImageCreatorPluginModel(BaseSettingsModel):
+    enabled: bool = Field(False, title="Enabled")
+    active_on_create: bool = Field(True, title="Active by default")
+    mark_for_review: bool = Field(False, title="Review by default")
+    default_variant: str = Field("", title="Default Variant")
+
+
+class CreateReviewPlugin(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    active_on_create: bool = Field(True, title="Active by default")
+    default_variant: str = Field("", title="Default Variant")
+
+
+class CreateWorkfilePlugin(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    active_on_create: bool = Field(True, title="Active by default")
+    default_variant: str = Field("", title="Default Variant")
+
+
+class PhotoshopCreatorPlugins(BaseSettingsModel):
+    ImageCreator: CreateImagePluginModel = Field(
+        title="Create Image",
+        default_factory=CreateImagePluginModel,
+    )
+    AutoImageCreator: AutoImageCreatorPluginModel = Field(
+        title="Create Flatten Image",
+        default_factory=AutoImageCreatorPluginModel,
+    )
+    ReviewCreator: CreateReviewPlugin = Field(
+        title="Create Review",
+        default_factory=CreateReviewPlugin,
+    )
+    WorkfileCreator: CreateWorkfilePlugin = Field(
+        title="Create Workfile",
+        default_factory=CreateWorkfilePlugin,
+    )
+
+
+DEFAULT_CREATE_SETTINGS = {
+    "ImageCreator": {
+        "enabled": True,
+        "active_on_create": True,
+        "mark_for_review": False,
+        "default_variants": [
+            "Main"
+        ]
+    },
+    "AutoImageCreator": {
+        "enabled": False,
+        "active_on_create": True,
+        "mark_for_review": False,
+        "default_variant": ""
+    },
+    "ReviewCreator": {
+        "enabled": True,
+        "active_on_create": True,
+        "default_variant": ""
+    },
+    "WorkfileCreator": {
+        "enabled": True,
+        "active_on_create": True,
+        "default_variant": "Main"
+    }
+}
diff --git a/server_addon/photoshop/server/settings/imageio.py b/server_addon/photoshop/server/settings/imageio.py
new file mode 100644
index 0000000000..56b7f2fa32
--- /dev/null
+++ b/server_addon/photoshop/server/settings/imageio.py
@@ -0,0 +1,64 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings.validators import ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ImageIORemappingRulesModel(BaseSettingsModel):
+    host_native_name: str = Field(
+        title="Application native colorspace name"
+    )
+    ocio_name: str = Field(title="OCIO colorspace name")
+
+
+class ImageIORemappingModel(BaseSettingsModel):
+    rules: list[ImageIORemappingRulesModel] = Field(
+        default_factory=list)
+
+
+class PhotoshopImageIOModel(BaseSettingsModel):
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    remapping: ImageIORemappingModel = Field(
+        title="Remapping colorspace names",
+        default_factory=ImageIORemappingModel
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
diff --git a/server_addon/photoshop/server/settings/main.py b/server_addon/photoshop/server/settings/main.py
new file mode
100644
index 0000000000..ae7705b3db
--- /dev/null
+++ b/server_addon/photoshop/server/settings/main.py
@@ -0,0 +1,41 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+from .imageio import PhotoshopImageIOModel
+from .creator_plugins import PhotoshopCreatorPlugins, DEFAULT_CREATE_SETTINGS
+from .publish_plugins import PhotoshopPublishPlugins, DEFAULT_PUBLISH_SETTINGS
+from .workfile_builder import WorkfileBuilderPlugin
+
+
+class PhotoshopSettings(BaseSettingsModel):
+    """Photoshop Project Settings."""
+
+    imageio: PhotoshopImageIOModel = Field(
+        default_factory=PhotoshopImageIOModel,
+        title="OCIO config"
+    )
+
+    create: PhotoshopCreatorPlugins = Field(
+        default_factory=PhotoshopCreatorPlugins,
+        title="Creator plugins"
+    )
+
+    publish: PhotoshopPublishPlugins = Field(
+        default_factory=PhotoshopPublishPlugins,
+        title="Publish plugins"
+    )
+
+    workfile_builder: WorkfileBuilderPlugin = Field(
+        default_factory=WorkfileBuilderPlugin,
+        title="Workfile Builder"
+    )
+
+
+DEFAULT_PHOTOSHOP_SETTING = {
+    "create": DEFAULT_CREATE_SETTINGS,
+    "publish": DEFAULT_PUBLISH_SETTINGS,
+    "workfile_builder": {
+        "create_first_version": False,
+        "custom_templates": []
+    }
+}
diff --git a/server_addon/photoshop/server/settings/publish_plugins.py b/server_addon/photoshop/server/settings/publish_plugins.py
new file mode 100644
index 0000000000..6bc72b4072
--- /dev/null
+++ b/server_addon/photoshop/server/settings/publish_plugins.py
@@ -0,0 +1,221 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+create_flatten_image_enum = [
+    {"value": "flatten_with_images", "label": "Flatten with images"},
+    {"value": "flatten_only", "label": "Flatten only"},
+    {"value": "no", "label": "No"},
+]
+
+
+color_code_enum = [
+    {"value": "red", "label": "Red"},
+    {"value": "orange", "label": "Orange"},
+    {"value": "yellowColor", "label": "Yellow"},
+    {"value": "grain", "label": "Green"},
+    {"value": "blue", "label": "Blue"},
+    {"value": "violet", "label": "Violet"},
+    {"value": "gray", "label": "Gray"},
+]
+
+
+class ColorCodeMappings(BaseSettingsModel):
+    color_code: list[str] = Field(
+        title="Color codes for layers",
+        default_factory=list,
+        enum_resolver=lambda: color_code_enum,
+    )
+
+    layer_name_regex: list[str] = Field(
+        default_factory=list,
+        title="Layer name regex"
+    )
+
+    product_type: str = Field(
+        "",
+        title="Resulting product type"
+    )
+
+    product_name_template: str = Field(
+        "",
+        title="Product name template"
+    )
+
+
+class ExtractedOptions(BaseSettingsModel):
+    tags: list[str] = Field(
+        title="Tags",
+        default_factory=list
+    )
+
+
+class CollectColorCodedInstancesPlugin(BaseSettingsModel):
+    """Set a color for publishable layers, and set the resulting product
+    type and the product name template. \n Can create a flattened image
+    from published instances.
+    (Applicable only for remote publishing!)"""
+
+    enabled: bool = Field(True, title="Enabled")
+    create_flatten_image: str = Field(
+        "",
+        title="Create flatten image",
+        enum_resolver=lambda: create_flatten_image_enum,
+    )
+
+    flatten_product_type_template: str = Field(
+        "",
+        title="Product type template for flatten image"
+    )
+
+    color_code_mapping: list[ColorCodeMappings] = Field(
+        title="Color code mappings",
+        default_factory=list,
+    )
+
+
+class CollectReviewPlugin(BaseSettingsModel):
+    """Should a review product be created"""
+    enabled: bool = Field(True, title="Enabled")
+
+
+class CollectVersionPlugin(BaseSettingsModel):
+    """Synchronize version for image and review instances by workfile version"""  # noqa
+    enabled: bool = Field(True, title="Enabled")
+
+
+class ValidateContainersPlugin(BaseSettingsModel):
+    """Check that the workfile contains the latest versions of loaded items"""  # noqa
+    _isGroup = True
+    enabled: bool = True
+    optional: bool = Field(False, title="Optional")
+    active: bool = Field(True, title="Active")
+
+
+class ValidateNamingPlugin(BaseSettingsModel):
+    """Validate naming of products and layers"""  # noqa
+    invalid_chars: str = Field(
+        '',
+        title="Regex pattern of invalid characters"
+    )
+
+    replace_char: str = Field(
+        '',
+        title="Replacement character"
+    )
+
+
+class ExtractImagePlugin(BaseSettingsModel):
+    """Currently only jpg and png are supported"""
+    formats: list[str] = Field(
+        title="Extract Formats",
+        default_factory=list,
+    )
+
+
+class ExtractReviewPlugin(BaseSettingsModel):
+    make_image_sequence: bool = Field(
+        False,
+        title="Make an image sequence instead of flatten image"
+    )
+
+    max_downscale_size: int = Field(
+        8192,
+        title="Maximum size of sources for review",
+        description="FFmpeg can only handle limited resolution for creation of review and/or thumbnail",  # noqa
+        gt=300,  # greater than
+        le=16384,  # less or equal
+    )
+
+    jpg_options: ExtractedOptions = Field(
+        title="Extracted jpg Options",
+        default_factory=ExtractedOptions
+    )
+
+    mov_options: ExtractedOptions = Field(
+        title="Extracted mov Options",
+        default_factory=ExtractedOptions
+    )
+
+
+class PhotoshopPublishPlugins(BaseSettingsModel):
+    CollectColorCodedInstances: CollectColorCodedInstancesPlugin = Field(
+        title="Collect Color Coded Instances",
+        default_factory=CollectColorCodedInstancesPlugin,
+    )
+    CollectReview: CollectReviewPlugin = Field(
+        title="Collect Review",
+        default_factory=CollectReviewPlugin,
+    )
+
+    CollectVersion: CollectVersionPlugin = Field(
+        title="Collect Version",
+        default_factory=CollectVersionPlugin,
+    )
+
+    ValidateContainers: ValidateContainersPlugin = Field(
+        title="Validate Containers",
+        default_factory=ValidateContainersPlugin,
+    )
+
+    ValidateNaming: ValidateNamingPlugin = Field(
+        title="Validate naming of products and layers",
+        default_factory=ValidateNamingPlugin,
+    )
+
+    ExtractImage: ExtractImagePlugin = Field(
+        title="Extract Image",
+        default_factory=ExtractImagePlugin,
+    )
+
+    ExtractReview: ExtractReviewPlugin = Field(
+        title="Extract Review",
+        default_factory=ExtractReviewPlugin,
+    )
+
+
+DEFAULT_PUBLISH_SETTINGS = {
+    "CollectColorCodedInstances": {
+        "create_flatten_image": "no",
+        "flatten_product_type_template": "",
+        "color_code_mapping": []
+    },
+    "CollectReview": {
+        "enabled": True
+    },
+    "CollectVersion": {
+        "enabled": False
+    },
+    "ValidateContainers": {
+        "enabled": True,
+        "optional": True,
+        "active": True
+    },
+    "ValidateNaming": {
+        "invalid_chars": "[ \\\\/+\\*\\?\\(\\)\\[\\]\\{\\}:,;]",
+        "replace_char": "_"
+    },
+    "ExtractImage": {
+        "formats": [
+            "png",
+            "jpg"
+        ]
+    },
+    "ExtractReview": {
+        "make_image_sequence": False,
+        "max_downscale_size": 8192,
+        "jpg_options": {
+            "tags": [
+                "review",
+                "ftrackreview"
+            ]
+        },
+        "mov_options": {
+            "tags": [
+                "review",
+                "ftrackreview"
+            ]
+        }
+    }
+}
diff --git a/server_addon/photoshop/server/settings/workfile_builder.py b/server_addon/photoshop/server/settings/workfile_builder.py
new file mode 100644
index 0000000000..ec2ee136ad
--- /dev/null
+++ b/server_addon/photoshop/server/settings/workfile_builder.py
@@ -0,0 +1,41 @@
+from pydantic import Field
+from pathlib import Path
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class PathsTemplate(BaseSettingsModel):
+    windows: Path = Field(
+        '',
+        title="Windows"
+    )
+    darwin: Path = Field(
+        '',
+        title="MacOS"
+    )
+    linux: Path = Field(
+        '',
+        title="Linux"
+    )
+
+
+class CustomBuilderTemplate(BaseSettingsModel):
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+    )
+    template_path: PathsTemplate = Field(
+        default_factory=PathsTemplate
+    )
+
+
+class WorkfileBuilderPlugin(BaseSettingsModel):
+    _title = "Workfile Builder"
+    create_first_version: bool = Field(
+        False,
+        title="Create first workfile"
+    )
+
+    custom_templates: list[CustomBuilderTemplate] = Field(
+        default_factory=list
+    )
diff --git a/server_addon/photoshop/server/version.py b/server_addon/photoshop/server/version.py
new file mode 100644
index 0000000000..d4b9e2d7f3
--- /dev/null
+++ b/server_addon/photoshop/server/version.py
@@ -0,0 +1,3 @@
+# -*- coding: utf-8 -*-
+"""Package declaring addon version."""
+__version__ = "0.1.0"
diff --git a/server_addon/resolve/server/__init__.py b/server_addon/resolve/server/__init__.py
new file mode 100644
index 0000000000..a84180d0f5
--- /dev/null
+++ b/server_addon/resolve/server/__init__.py
@@ -0,0 +1,19 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import ResolveSettings, DEFAULT_VALUES
+
+
+class ResolveAddon(BaseServerAddon):
+    name = "resolve"
+    title = "DaVinci Resolve"
+    version = __version__
+    settings_model: Type[ResolveSettings] = ResolveSettings
+    frontend_scopes = {}
+    services = {}
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/resolve/server/imageio.py b/server_addon/resolve/server/imageio.py
new file mode 100644
index 0000000000..c2bfcd40d0
--- /dev/null
+++ b/server_addon/resolve/server/imageio.py
@@ -0,0 +1,64 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings.validators import ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class ImageIOFileRulesModel(BaseSettingsModel):
+    activate_host_rules: bool = Field(False)
+    rules: list[ImageIOFileRuleModel] = Field(
+        default_factory=list,
+        title="Rules"
+    )
+
+    @validator("rules")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ImageIORemappingRulesModel(BaseSettingsModel):
+    host_native_name: str = Field(
+        title="Application native colorspace name"
+    )
+    ocio_name: str = Field(title="OCIO colorspace name")
+
+
+class ImageIORemappingModel(BaseSettingsModel):
+    rules: list[ImageIORemappingRulesModel] = Field(
+        default_factory=list)
+
+
+class ResolveImageIOModel(BaseSettingsModel):
+    activate_host_color_management: bool = Field(
+        True, title="Enable Color Management"
+    )
+    remapping: ImageIORemappingModel = Field(
+        title="Remapping colorspace names",
+        default_factory=ImageIORemappingModel
+    )
+    ocio_config: ImageIOConfigModel = Field(
+        default_factory=ImageIOConfigModel,
+        title="OCIO config"
+    )
+    file_rules: ImageIOFileRulesModel = Field(
+        default_factory=ImageIOFileRulesModel,
+        title="File Rules"
+    )
diff --git a/server_addon/resolve/server/settings.py b/server_addon/resolve/server/settings.py
new file mode 100644
index 0000000000..326f6bea1e
--- /dev/null
+++ b/server_addon/resolve/server/settings.py
@@ -0,0 +1,114 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+from .imageio import ResolveImageIOModel
+
+
+class CreateShotClipModels(BaseSettingsModel):
+    hierarchy: str = Field(
+        "{folder}/{sequence}",
+        title="Shot parent hierarchy",
+        section="Shot Hierarchy And Rename Settings"
+    )
+    clipRename: bool = Field(
+        True,
+        title="Rename clips"
+    )
+    clipName: str = Field(
+        "{track}{sequence}{shot}",
+        title="Clip name template"
+    )
+    countFrom: int = Field(
+        10,
+        title="Count sequence from"
+    )
+    countSteps: int = Field(
+        10,
+        title="Stepping number"
+    )
+
+    folder: str = Field(
+        "shots",
+        title="{folder}",
+        section="Shot Template Keywords"
+    )
+    episode: str = Field(
+        "ep01",
+        title="{episode}"
+    )
+    sequence: str = Field(
+        "sq01",
+        title="{sequence}"
+    )
+    track: str = Field(
+        "{_track_}",
+        title="{track}"
+    )
+    shot: str = Field(
+        "sh###",
+        title="{shot}"
+    )
+
+    vSyncOn: bool = Field(
+        False,
+        title="Enable Vertical Sync",
+        section="Vertical Synchronization Of Attributes"
+    )
+
+    workfileFrameStart: int = Field(
+        1001,
+        title="Workfiles Start Frame",
+        section="Shot Attributes"
+    )
+    handleStart: int = Field(
+        10,
+        title="Handle start (head)"
+    )
+    handleEnd: int = Field(
+        10,
+        title="Handle end (tail)"
+    )
+
+
+class CreatorPluginsModel(BaseSettingsModel):
+    CreateShotClip: CreateShotClipModels = Field(
+        default_factory=CreateShotClipModels,
+        title="Create Shot Clip"
+    )
+
+
+class ResolveSettings(BaseSettingsModel):
+    launch_openpype_menu_on_start: bool = Field(
+        False, title="Launch OpenPype menu on start of Resolve"
+    )
+    imageio: ResolveImageIOModel = Field(
+        default_factory=ResolveImageIOModel,
+        title="Color Management (ImageIO)"
+    )
+    create: CreatorPluginsModel = Field(
+        default_factory=CreatorPluginsModel,
+        title="Creator plugins",
+    )
+
+
+DEFAULT_VALUES = {
+    "launch_openpype_menu_on_start": False,
+    "create": {
+        "CreateShotClip": {
+            "hierarchy": "{folder}/{sequence}",
+            "clipRename": True,
+            "clipName": "{track}{sequence}{shot}",
+            "countFrom": 10,
+            "countSteps": 10,
+            "folder": "shots",
+            "episode": "ep01",
+            "sequence": "sq01",
+            "track": "{_track_}",
+            "shot": "sh###",
+            "vSyncOn": False,
+            "workfileFrameStart": 1001,
+            "handleStart": 10,
+            "handleEnd": 10
+        }
+    }
+}
diff --git a/server_addon/resolve/server/version.py b/server_addon/resolve/server/version.py
new file mode 100644
index 0000000000..3dc1f76bc6
--- /dev/null
+++ b/server_addon/resolve/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.0"
diff
--git a/server_addon/royal_render/server/__init__.py b/server_addon/royal_render/server/__init__.py
new file mode 100644
index 0000000000..c5f0aafa00
--- /dev/null
+++ b/server_addon/royal_render/server/__init__.py
@@ -0,0 +1,17 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import RoyalRenderSettings, DEFAULT_VALUES
+
+
+class RoyalRenderAddon(BaseServerAddon):
+    name = "royalrender"
+    version = __version__
+    title = "Royal Render"
+    settings_model: Type[RoyalRenderSettings] = RoyalRenderSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_VALUES)
diff --git a/server_addon/royal_render/server/settings.py b/server_addon/royal_render/server/settings.py
new file mode 100644
index 0000000000..677d7e2671
--- /dev/null
+++ b/server_addon/royal_render/server/settings.py
@@ -0,0 +1,70 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel, MultiplatformPathModel
+
+
+class CustomPath(MultiplatformPathModel):
+    _layout = "expanded"
+
+
+class ServerListSubmodel(BaseSettingsModel):
+    _layout = "expanded"
+    name: str = Field("", title="Name")
+    value: CustomPath = Field(
+        default_factory=CustomPath
+    )
+
+
+class CollectSequencesFromJobModel(BaseSettingsModel):
+    review: bool = Field(True, title="Generate reviews from sequences")
+
+
+class PublishPluginsModel(BaseSettingsModel):
+    CollectSequencesFromJob: CollectSequencesFromJobModel = Field(
+        default_factory=CollectSequencesFromJobModel,
+        title="Collect Sequences from the Job"
+    )
+
+
+class RoyalRenderSettings(BaseSettingsModel):
+    enabled: bool = True
+    # WARNING/TODO: this needs to change
+    # - both system and project settings contained 'rr_path', where the
+    #   project settings chose one of the 'rr_path' values defined in the
+    #   system settings; that is not possible in AYON
+    rr_paths: list[ServerListSubmodel] = Field(
+        default_factory=list,
+        title="Royal Render Root Paths",
+        scope=["studio"],
+    )
+    # This was 'rr_paths' in project settings and should be an enum of
+    # 'rr_paths' from system settings, but that's not possible in AYON
+    selected_rr_paths: list[str] = Field(
+        default_factory=list,
+        title="Selected Royal Render Paths",
+        section="---",
+    )
+    publish: PublishPluginsModel = Field(
+        default_factory=PublishPluginsModel,
+        title="Publish plugins",
+    )
+
+
+DEFAULT_VALUES = {
+    "enabled": False,
+    "rr_paths": [
+        {
+            "name": "default",
+            "value": {
+                "windows": "",
+                "darwin": "",
+                "linux": ""
+            }
+        }
+    ],
+    "selected_rr_paths": ["default"],
+    "publish": {
+        "CollectSequencesFromJob": {
+            "review": True
+        }
+    }
+}
diff --git a/server_addon/royal_render/server/version.py b/server_addon/royal_render/server/version.py
new file mode 100644
index 0000000000..485f44ac21
--- /dev/null
+++ b/server_addon/royal_render/server/version.py
@@ -0,0 +1 @@
+__version__ = "0.1.1"
diff --git a/server_addon/timers_manager/server/__init__.py b/server_addon/timers_manager/server/__init__.py
new file mode 100644
index 0000000000..29f9d47370
--- /dev/null
+++ b/server_addon/timers_manager/server/__init__.py
@@ -0,0 +1,13 @@
+from typing import Type
+
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import TimersManagerSettings
+
+
+class TimersManagerAddon(BaseServerAddon):
+    name = "timers_manager"
+    version = __version__
+    title = "Timers Manager"
+    settings_model: Type[TimersManagerSettings] = TimersManagerSettings
diff --git
a/server_addon/timers_manager/server/settings.py b/server_addon/timers_manager/server/settings.py new file mode 100644 index 0000000000..a5c5721a57 --- /dev/null +++ b/server_addon/timers_manager/server/settings.py @@ -0,0 +1,25 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class TimersManagerSettings(BaseSettingsModel): + auto_stop: bool = Field( + True, + title="Auto stop timer", + scope=["studio"], + ) + full_time: int = Field( + 15, + title="Max idle time", + scope=["studio"], + ) + message_time: float = Field( + 0.5, + title="When dialog will show", + scope=["studio"], + ) + disregard_publishing: bool = Field( + False, + title="Disregard publishing", + scope=["studio"], + ) diff --git a/server_addon/timers_manager/server/version.py b/server_addon/timers_manager/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/timers_manager/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/traypublisher/server/LICENSE b/server_addon/traypublisher/server/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/traypublisher/server/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/server_addon/traypublisher/server/README.md b/server_addon/traypublisher/server/README.md
new file mode 100644
index 0000000000..c0029bc782
--- /dev/null
+++ b/server_addon/traypublisher/server/README.md
@@ -0,0 +1,4 @@
+TrayPublisher Addon
+===================
+
+Integration with TrayPublisher.
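The appendix above asks that the boilerplate notice sit in each file's own comment syntax. As a minimal sketch, a Python source file would carry it as a comment header; the bracketed fields stay placeholders until the copyright owner fills them in:

    # Copyright [yyyy] [name of copyright owner]
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.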
diff --git a/server_addon/traypublisher/server/__init__.py b/server_addon/traypublisher/server/__init__.py
new file mode 100644
index 0000000000..e6f079609f
--- /dev/null
+++ b/server_addon/traypublisher/server/__init__.py
@@ -0,0 +1,16 @@
+from ayon_server.addons import BaseServerAddon
+
+from .version import __version__
+from .settings import TraypublisherSettings, DEFAULT_TRAYPUBLISHER_SETTING
+
+
+class Traypublisher(BaseServerAddon):
+    name = "traypublisher"
+    title = "TrayPublisher"
+    version = __version__
+
+    settings_model = TraypublisherSettings
+
+    async def get_default_settings(self):
+        settings_model_cls = self.get_settings_model()
+        return settings_model_cls(**DEFAULT_TRAYPUBLISHER_SETTING)
diff --git a/server_addon/traypublisher/server/settings/__init__.py b/server_addon/traypublisher/server/settings/__init__.py
new file mode 100644
index 0000000000..bcf8beffa7
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/__init__.py
@@ -0,0 +1,10 @@
+from .main import (
+    TraypublisherSettings,
+    DEFAULT_TRAYPUBLISHER_SETTING,
+)
+
+
+__all__ = (
+    "TraypublisherSettings",
+    "DEFAULT_TRAYPUBLISHER_SETTING",
+)
diff --git a/server_addon/traypublisher/server/settings/creator_plugins.py b/server_addon/traypublisher/server/settings/creator_plugins.py
new file mode 100644
index 0000000000..345cb92e63
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/creator_plugins.py
@@ -0,0 +1,46 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class BatchMovieCreatorPlugin(BaseSettingsModel):
+    """Allows publishing multiple video files in one go.
Name of matching + asset is parsed from file names ('asset.mov', 'asset_v001.mov', + 'my_asset_to_publish.mov')""" + + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + + default_tasks: list[str] = Field( + title="Default tasks", + default_factory=list + ) + + extensions: list[str] = Field( + title="Extensions", + default_factory=list + ) + + +class TrayPublisherCreatePluginsModel(BaseSettingsModel): + BatchMovieCreator: BatchMovieCreatorPlugin = Field( + title="Batch Movie Creator", + default_factory=BatchMovieCreatorPlugin + ) + + +DEFAULT_CREATORS = { + "BatchMovieCreator": { + "default_variants": [ + "Main" + ], + "default_tasks": [ + "Compositing" + ], + "extensions": [ + ".mov" + ] + }, +} diff --git a/server_addon/traypublisher/server/settings/editorial_creators.py b/server_addon/traypublisher/server/settings/editorial_creators.py new file mode 100644 index 0000000000..4111f22576 --- /dev/null +++ b/server_addon/traypublisher/server/settings/editorial_creators.py @@ -0,0 +1,181 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class ClipNameTokenizerItem(BaseSettingsModel): + _layout = "expanded" + # TODO was 'dict-modifiable', is list of dicts now, must be fixed in code + name: str = Field("#TODO", title="Tokenizer name") + regex: str = Field("", title="Tokenizer regex") + + +class ShotAddTasksItem(BaseSettingsModel): + _layout = "expanded" + # TODO was 'dict-modifiable', is list of dicts now, must be fixed in code + name: str = Field('', title="Key") + task_type: list[str] = Field( + title="Task type", + default_factory=list, + enum_resolver=task_types_enum) + + +class ShotRenameSubmodel(BaseSettingsModel): + enabled: bool = True + shot_rename_template: str = Field( + "", + title="Shot rename template" + ) + + +parent_type_enum = [ + {"value": "Project", "label": "Project"}, + {"value": "Folder", "label": "Folder"}, + {"value": "Episode", "label": "Episode"}, + {"value": "Sequence", "label": "Sequence"}, +] + + +class TokenToParentConvertorItem(BaseSettingsModel): + # TODO - was 'type' must be renamed in code to `parent_type` + parent_type: str = Field( + "Project", + enum_resolver=lambda: parent_type_enum + ) + name: str = Field( + "", + title="Parent token name", + description="Unique name used in `Parent path template`" + ) + value: str = Field( + "", + title="Parent token value", + description="Template where any text, Anatomy keys and Tokens could be used" # noqa + ) + + +class ShotHierchySubmodel(BaseSettingsModel): + enabled: bool = True + parents_path: str = Field( + "", + title="Parents path template", + description="Using keys from \"Token to parent convertor\" or tokens directly" # noqa + ) + parents: list[TokenToParentConvertorItem] = Field( + default_factory=TokenToParentConvertorItem, + title="Token to parent convertor" + ) + + +output_file_type = [ + {"value": ".mp4", "label": "MP4"}, + {"value": ".mov", "label": "MOV"}, + {"value": ".wav", "label": "WAV"} +] + + +class ProductTypePresetItem(BaseSettingsModel): + product_type: str = Field("", title="Product type") + # TODO add placeholder '< Inherited >' + variant: str = Field("", title="Variant") + review: bool = Field(True, title="Review") + output_file_type: str = Field( + ".mp4", + enum_resolver=lambda: output_file_type + ) + + +class EditorialSimpleCreatorPlugin(BaseSettingsModel): + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + clip_name_tokenizer: 
list[ClipNameTokenizerItem] = Field( + default_factory=ClipNameTokenizerItem, + description=( + "Using Regex expression to create tokens. \nThose can be used" + " later in \"Shot rename\" creator \nor \"Shot hierarchy\"." + "\n\nTokens should be decorated with \"_\" on each side" + ) + ) + shot_rename: ShotRenameSubmodel = Field( + title="Shot Rename", + default_factory=ShotRenameSubmodel + ) + shot_hierarchy: ShotHierchySubmodel = Field( + title="Shot Hierarchy", + default_factory=ShotHierchySubmodel + ) + shot_add_tasks: list[ShotAddTasksItem] = Field( + title="Add tasks to shot", + default_factory=ShotAddTasksItem + ) + product_type_presets: list[ProductTypePresetItem] = Field( + default_factory=list + ) + + +class TraypublisherEditorialCreatorPlugins(BaseSettingsModel): + editorial_simple: EditorialSimpleCreatorPlugin = Field( + title="Editorial simple creator", + default_factory=EditorialSimpleCreatorPlugin, + ) + + +DEFAULT_EDITORIAL_CREATORS = { + "editorial_simple": { + "default_variants": [ + "Main" + ], + "clip_name_tokenizer": [ + {"name": "_sequence_", "regex": "(sc\\d{3})"}, + {"name": "_shot_", "regex": "(sh\\d{3})"} + ], + "shot_rename": { + "enabled": True, + "shot_rename_template": "{project[code]}_{_sequence_}_{_shot_}" + }, + "shot_hierarchy": { + "enabled": True, + "parents_path": "{project}/{folder}/{sequence}", + "parents": [ + { + "parent_type": "Project", + "name": "project", + "value": "{project[name]}" + }, + { + "parent_type": "Folder", + "name": "folder", + "value": "shots" + }, + { + "parent_type": "Sequence", + "name": "sequence", + "value": "{_sequence_}" + } + ] + }, + "shot_add_tasks": [], + "product_type_presets": [ + { + "product_type": "review", + "variant": "Reference", + "review": True, + "output_file_type": ".mp4" + }, + { + "product_type": "plate", + "variant": "", + "review": False, + "output_file_type": ".mov" + }, + { + "product_type": "audio", + "variant": "", + "review": False, + "output_file_type": ".wav" + } + ] + } +} diff --git a/server_addon/traypublisher/server/settings/imageio.py b/server_addon/traypublisher/server/settings/imageio.py new file mode 100644 index 0000000000..3df0d2f2fb --- /dev/null +++ b/server_addon/traypublisher/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TrayPublisherImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) 
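The `clip_name_tokenizer` description above says the regex tokens are reused by the "Shot rename" and "Shot hierarchy" creators. A rough sketch of how the default tokenizers and rename template from `DEFAULT_EDITORIAL_CREATORS` could combine; the clip name and project code here are invented for illustration, and this is not the addon's actual implementation:

    import re

    # Default tokenizers from DEFAULT_EDITORIAL_CREATORS above.
    tokenizers = {"_sequence_": r"(sc\d{3})", "_shot_": r"(sh\d{3})"}
    clip_name = "sc010_sh020_main"  # hypothetical source clip name

    # Collect every token whose regex matches the clip name.
    tokens = {}
    for name, pattern in tokenizers.items():
        match = re.search(pattern, clip_name)
        if match:
            tokens[name] = match.group(1)

    # Tokens then resolve the default shot rename template.
    template = "{project[code]}_{_sequence_}_{_shot_}"
    print(template.format(project={"code": "prj"}, **tokens))
    # -> prj_sc010_sh020

With the default regexes, any clip name carrying `sc###`/`sh###` markers would resolve to a shot name such as `prj_sc010_sh020`, which is why the tokens are decorated with underscores on both sides.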
diff --git a/server_addon/traypublisher/server/settings/main.py b/server_addon/traypublisher/server/settings/main.py
new file mode 100644
index 0000000000..fad96bef2f
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/main.py
@@ -0,0 +1,52 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+from .imageio import TrayPublisherImageIOModel
+from .simple_creators import (
+    SimpleCreatorPlugin,
+    DEFAULT_SIMPLE_CREATORS,
+)
+from .editorial_creators import (
+    TraypublisherEditorialCreatorPlugins,
+    DEFAULT_EDITORIAL_CREATORS,
+)
+from .creator_plugins import (
+    TrayPublisherCreatePluginsModel,
+    DEFAULT_CREATORS,
+)
+from .publish_plugins import (
+    TrayPublisherPublishPlugins,
+    DEFAULT_PUBLISH_PLUGINS,
+)
+
+
+class TraypublisherSettings(BaseSettingsModel):
+    """Traypublisher Project Settings."""
+    imageio: TrayPublisherImageIOModel = Field(
+        default_factory=TrayPublisherImageIOModel,
+        title="Color Management (ImageIO)"
+    )
+    simple_creators: list[SimpleCreatorPlugin] = Field(
+        title="Simple Create Plugins",
+        default_factory=SimpleCreatorPlugin,
+    )
+    editorial_creators: TraypublisherEditorialCreatorPlugins = Field(
+        title="Editorial Creators",
+        default_factory=TraypublisherEditorialCreatorPlugins,
+    )
+    create: TrayPublisherCreatePluginsModel = Field(
+        title="Create",
+        default_factory=TrayPublisherCreatePluginsModel
+    )
+    publish: TrayPublisherPublishPlugins = Field(
+        title="Publish Plugins",
+        default_factory=TrayPublisherPublishPlugins
+    )
+
+
+DEFAULT_TRAYPUBLISHER_SETTING = {
+    "simple_creators": DEFAULT_SIMPLE_CREATORS,
+    "editorial_creators": DEFAULT_EDITORIAL_CREATORS,
+    "create": DEFAULT_CREATORS,
+    "publish": DEFAULT_PUBLISH_PLUGINS,
+}
diff --git a/server_addon/traypublisher/server/settings/publish_plugins.py b/server_addon/traypublisher/server/settings/publish_plugins.py
new file mode 100644
index 0000000000..8c844f29f2
--- /dev/null
+++ b/server_addon/traypublisher/server/settings/publish_plugins.py
@@ -0,0 +1,50 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class ValidatePluginModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = True
+    optional: bool = Field(True, title="Optional")
+    active: bool = Field(True, title="Active")
+
+
+class ValidateFrameRangeModel(ValidatePluginModel):
+    """Allows publishing multiple video files in one go.
Name of matching + asset is parsed from file names ('asset.mov', 'asset_v001.mov', + 'my_asset_to_publish.mov')""" + + +class TrayPublisherPublishPlugins(BaseSettingsModel): + CollectFrameDataFromAssetEntity: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Collect Frame Data From Folder Entity", + ) + ValidateFrameRange: ValidateFrameRangeModel = Field( + title="Validate Frame Range", + default_factory=ValidateFrameRangeModel, + ) + ValidateExistingVersion: ValidatePluginModel = Field( + title="Validate Existing Version", + default_factory=ValidatePluginModel, + ) + + +DEFAULT_PUBLISH_PLUGINS = { + "CollectFrameDataFromAssetEntity": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateExistingVersion": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/traypublisher/server/settings/simple_creators.py b/server_addon/traypublisher/server/settings/simple_creators.py new file mode 100644 index 0000000000..94d6602738 --- /dev/null +++ b/server_addon/traypublisher/server/settings/simple_creators.py @@ -0,0 +1,292 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class SimpleCreatorPlugin(BaseSettingsModel): + _layout = "expanded" + product_type: str = Field("", title="Product type") + # TODO add placeholder + identifier: str = Field("", title="Identifier") + label: str = Field("", title="Label") + icon: str = Field("", title="Icon") + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + description: str = Field( + "", + title="Description", + widget="textarea" + ) + detailed_description: str = Field( + "", + title="Detailed Description", + widget="textarea" + ) + allow_sequences: bool = Field( + False, + title="Allow sequences" + ) + allow_multiple_items: bool = Field( + False, + title="Allow multiple items" + ) + allow_version_control: bool = Field( + False, + title="Allow version control" + ) + extensions: list[str] = Field( + default_factory=list, + title="Extensions" + ) + + +DEFAULT_SIMPLE_CREATORS = [ + { + "product_type": "workfile", + "identifier": "", + "label": "Workfile", + "icon": "fa.file", + "default_variants": [ + "Main" + ], + "description": "Backup of a working scene", + "detailed_description": "Workfiles are full scenes from any application that are directly edited by artists. They represent a state of work on a task at a given point and are usually not directly referenced into other scenes.", + "allow_sequences": False, + "allow_multiple_items": False, + "allow_version_control": False, + "extensions": [ + ".ma", + ".mb", + ".nk", + ".hrox", + ".hip", + ".hiplc", + ".hipnc", + ".blend", + ".scn", + ".tvpp", + ".comp", + ".zip", + ".prproj", + ".drp", + ".psd", + ".psb", + ".aep" + ] + }, + { + "product_type": "model", + "identifier": "", + "label": "Model", + "icon": "fa.cubes", + "default_variants": [ + "Main", + "Proxy", + "Sculpt" + ], + "description": "Clean models", + "detailed_description": "Models should only contain geometry data, without any extras like cameras, locators or bones.\n\nKeep in mind that models published from tray publisher are not validated for correctness. 
", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".ma", + ".mb", + ".obj", + ".abc", + ".fbx", + ".bgeo", + ".bgeogz", + ".bgeosc", + ".usd", + ".blend" + ] + }, + { + "product_type": "pointcache", + "identifier": "", + "label": "Pointcache", + "icon": "fa.gears", + "default_variants": [ + "Main" + ], + "description": "Geometry Caches", + "detailed_description": "Alembic or bgeo cache of animated data", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".abc", + ".bgeo", + ".bgeogz", + ".bgeosc" + ] + }, + { + "product_type": "plate", + "identifier": "", + "label": "Plate", + "icon": "mdi.camera-image", + "default_variants": [ + "Main", + "BG", + "Animatic", + "Reference", + "Offline" + ], + "description": "Footage Plates", + "detailed_description": "Any type of image seqeuence coming from outside of the studio. Usually camera footage, but could also be animatics used for reference.", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".png", + ".dpx", + ".jpg", + ".tiff", + ".tif", + ".mov", + ".mp4", + ".avi" + ] + }, + { + "product_type": "render", + "identifier": "", + "label": "Render", + "icon": "mdi.folder-multiple-image", + "default_variants": [], + "description": "Rendered images or video", + "detailed_description": "Sequence or single file renders", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".png", + ".dpx", + ".jpg", + ".jpeg", + ".tiff", + ".tif", + ".mov", + ".mp4", + ".avi" + ] + }, + { + "product_type": "camera", + "identifier": "", + "label": "Camera", + "icon": "fa.video-camera", + "default_variants": [], + "description": "3d Camera", + "detailed_description": "Ideally this should be only camera itself with baked animation, however, it can technically also include helper geometry.", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".abc", + ".ma", + ".hip", + ".blend", + ".fbx", + ".usd" + ] + }, + { + "product_type": "image", + "identifier": "", + "label": "Image", + "icon": "fa.image", + "default_variants": [ + "Reference", + "Texture", + "Concept", + "Background" + ], + "description": "Single image", + "detailed_description": "Any image data can be published as image product type. References, textures, concept art, matte paints. 
This is a fallback 2d product type for everything that doesn't fit more specific product type.", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".jpg", + ".jpeg", + ".dpx", + ".bmp", + ".tif", + ".tiff", + ".png", + ".psb", + ".psd" + ] + }, + { + "product_type": "vdb", + "identifier": "", + "label": "VDB Volumes", + "icon": "fa.cloud", + "default_variants": [], + "description": "Sparse volumetric data", + "detailed_description": "Hierarchical data structure for the efficient storage and manipulation of sparse volumetric data discretized on three-dimensional grids", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".vdb" + ] + }, + { + "product_type": "matchmove", + "identifier": "", + "label": "Matchmove", + "icon": "fa.empire", + "default_variants": [ + "Camera", + "Object", + "Mocap" + ], + "description": "Matchmoving script", + "detailed_description": "Script exported from matchmoving application to be later processed into a tracked camera with additional data", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [] + }, + { + "product_type": "rig", + "identifier": "", + "label": "Rig", + "icon": "fa.wheelchair", + "default_variants": [], + "description": "CG rig file", + "detailed_description": "CG rigged character or prop. Rig should be clean of any extra data and directly loadable into it's respective application\t", + "allow_sequences": False, + "allow_multiple_items": False, + "allow_version_control": False, + "extensions": [ + ".ma", + ".blend", + ".hip", + ".hda" + ] + }, + { + "product_type": "simpleUnrealTexture", + "identifier": "", + "label": "Simple UE texture", + "icon": "fa.image", + "default_variants": [], + "description": "Simple Unreal Engine texture", + "detailed_description": "Texture files with Unreal Engine naming conventions", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [] + } +] diff --git a/server_addon/traypublisher/server/version.py b/server_addon/traypublisher/server/version.py new file mode 100644 index 0000000000..df0c92f1e2 --- /dev/null +++ b/server_addon/traypublisher/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.2" diff --git a/server_addon/tvpaint/server/__init__.py b/server_addon/tvpaint/server/__init__.py new file mode 100644 index 0000000000..033d7d3792 --- /dev/null +++ b/server_addon/tvpaint/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import TvpaintSettings, DEFAULT_VALUES + + +class TvpaintAddon(BaseServerAddon): + name = "tvpaint" + title = "TVPaint" + version = __version__ + settings_model: Type[TvpaintSettings] = TvpaintSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/tvpaint/server/settings/__init__.py b/server_addon/tvpaint/server/settings/__init__.py new file mode 100644 index 0000000000..abee32e897 --- /dev/null +++ b/server_addon/tvpaint/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + TvpaintSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "TvpaintSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/tvpaint/server/settings/create_plugins.py 
new file mode 100644
index 0000000000..349bfdd288
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/create_plugins.py
@@ -0,0 +1,133 @@
+from pydantic import Field
+from ayon_server.settings import BaseSettingsModel
+
+
+class CreateWorkfileModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateReviewModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    active_on_create: bool = Field(True, title="Active by default")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderSceneModel(BaseSettingsModel):
+    enabled: bool = Field(True)
+    active_on_create: bool = Field(True, title="Active by default")
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_pass_name: str = Field(title="Default beauty pass")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderLayerModel(BaseSettingsModel):
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_pass_name: str = Field(title="Default beauty pass")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class CreateRenderPassModel(BaseSettingsModel):
+    mark_for_review: bool = Field(True, title="Review by default")
+    default_variant: str = Field(title="Default variant")
+    default_variants: list[str] = Field(
+        default_factory=list, title="Default variants")
+
+
+class AutoDetectCreateRenderModel(BaseSettingsModel):
+    """The creator tries to auto-detect Render Layers and Render Passes in the scene.
+
+    For Render Layers the group name is used as the variant; for Render
+    Passes the TVPaint layer name is used.
+
+    Groups can be renamed based on their order of use in the scene. The
+    renaming template may contain the '{group_index}' formatting key,
+    which is filled with the group's position index.
+    - Template: 'L{group_index}'
+    - Group offset: '10'
+    - Group padding: '3'
+
+    Would create group names "L010", "L020", ...
+    """
+
+    enabled: bool = Field(True)
+    allow_group_rename: bool = Field(title="Allow group rename")
+    group_name_template: str = Field(title="Group name template")
+    group_idx_offset: int = Field(1, title="Group index Offset", ge=1)
+    group_idx_padding: int = Field(4, title="Group index Padding", ge=1)
+
+
+class CreatePluginsModel(BaseSettingsModel):
+    create_workfile: CreateWorkfileModel = Field(
+        default_factory=CreateWorkfileModel,
+        title="Create Workfile"
+    )
+    create_review: CreateReviewModel = Field(
+        default_factory=CreateReviewModel,
+        title="Create Review"
+    )
+    create_render_scene: CreateRenderSceneModel = Field(
+        default_factory=CreateRenderSceneModel,
+        title="Create Render Scene"
+    )
+    create_render_layer: CreateRenderLayerModel = Field(
+        default_factory=CreateRenderLayerModel,
+        title="Create Render Layer"
+    )
+    create_render_pass: CreateRenderPassModel = Field(
+        default_factory=CreateRenderPassModel,
+        title="Create Render Pass"
+    )
+    auto_detect_render: AutoDetectCreateRenderModel = Field(
+        default_factory=AutoDetectCreateRenderModel,
+        title="Auto-Detect Create Render",
+    )
+
+
+DEFAULT_CREATE_SETTINGS = {
+    "create_workfile": {
+        "enabled": True,
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "create_review": {
+        "enabled": True,
+        "active_on_create": True,
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "create_render_scene": {
+        "enabled": True,
+        "active_on_create": False,
+        "mark_for_review": True,
+        "default_pass_name": "beauty",
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "create_render_layer": {
+        "mark_for_review": False,
+        "default_pass_name": "beauty",
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "create_render_pass": {
+        "mark_for_review": False,
+        "default_variant": "Main",
+        "default_variants": []
+    },
+    "auto_detect_render": {
+        "enabled": False,
+        "allow_group_rename": True,
+        "group_name_template": "L{group_index}",
+        "group_idx_offset": 10,
+        "group_idx_padding": 3
+    }
+}
diff --git a/server_addon/tvpaint/server/settings/filters.py b/server_addon/tvpaint/server/settings/filters.py
new file mode 100644
index 0000000000..009febae06
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/filters.py
@@ -0,0 +1,19 @@
+from pydantic import Field
+
+from ayon_server.settings import BaseSettingsModel
+
+
+class FiltersSubmodel(BaseSettingsModel):
+    _layout = "compact"
+    name: str = Field(title="Name")
+    value: str = Field(
+        "",
+        title="Textarea",
+        widget="textarea",
+    )
+
+
+class PublishFiltersModel(BaseSettingsModel):
+    env_search_replace_values: list[FiltersSubmodel] = Field(
+        default_factory=list
+    )
diff --git a/server_addon/tvpaint/server/settings/imageio.py b/server_addon/tvpaint/server/settings/imageio.py
new file mode 100644
index 0000000000..50f8b7eef4
--- /dev/null
+++ b/server_addon/tvpaint/server/settings/imageio.py
@@ -0,0 +1,48 @@
+from pydantic import Field, validator
+from ayon_server.settings import BaseSettingsModel
+from ayon_server.settings.validators import ensure_unique_names
+
+
+class ImageIOConfigModel(BaseSettingsModel):
+    override_global_config: bool = Field(
+        False,
+        title="Override global OCIO config"
+    )
+    filepath: list[str] = Field(
+        default_factory=list,
+        title="Config path"
+    )
+
+
+class ImageIOFileRuleModel(BaseSettingsModel):
+    name: str = Field("", title="Rule name")
+    pattern: str = Field("", title="Regex pattern")
+    colorspace: str = Field("", title="Colorspace name")
+    ext: str = Field("", title="File extension")
+
+
+class
ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TVPaintImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/tvpaint/server/settings/main.py b/server_addon/tvpaint/server/settings/main.py new file mode 100644 index 0000000000..4cd6ac4b1a --- /dev/null +++ b/server_addon/tvpaint/server/settings/main.py @@ -0,0 +1,90 @@ +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + +from .imageio import TVPaintImageIOModel +from .workfile_builder import WorkfileBuilderPlugin +from .create_plugins import CreatePluginsModel, DEFAULT_CREATE_SETTINGS +from .publish_plugins import ( + PublishPluginsModel, + LoadPluginsModel, + DEFAULT_PUBLISH_SETTINGS, +) + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TvpaintSettings(BaseSettingsModel): + imageio: TVPaintImageIOModel = Field( + default_factory=TVPaintImageIOModel, + title="Color Management (ImageIO)" + ) + stop_timer_on_application_exit: bool = Field( + title="Stop timer on application exit") + create: CreatePluginsModel = Field( + default_factory=CreatePluginsModel, + title="Create plugins" + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish plugins") + load: LoadPluginsModel = Field( + default_factory=LoadPluginsModel, + title="Load plugins") + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + filters: list[PublishGUIFiltersModel] = Field( + default_factory=list, + title="Publish GUI Filters") + + @validator("filters") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "stop_timer_on_application_exit": False, + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": { + "LoadImage": { + "defaults": { + "stretch": True, + "timestretch": True, + "preload": True + } + }, + "ImportImage": { + "defaults": { + "stretch": True, + "timestretch": True, + "preload": True + } + } + }, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + }, + "filters": [] +} diff --git a/server_addon/tvpaint/server/settings/publish_plugins.py b/server_addon/tvpaint/server/settings/publish_plugins.py new file mode 100644 index 0000000000..76c7eaac01 --- /dev/null +++ b/server_addon/tvpaint/server/settings/publish_plugins.py @@ -0,0 +1,132 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ColorRGBA_uint8 + + +class 
CollectRenderInstancesModel(BaseSettingsModel): + ignore_render_pass_transparency: bool = Field( + title="Ignore Render Pass opacity" + ) + + +class ExtractSequenceModel(BaseSettingsModel): + """Review BG color is used for whole scene review and for thumbnails.""" + # TODO Use alpha color + review_bg: ColorRGBA_uint8 = Field( + (255, 255, 255, 1.0), + title="Review BG color") + + +class ValidatePluginModel(BaseSettingsModel): + enabled: bool = True + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +def compression_enum(): + return [ + {"value": "ZIP", "label": "ZIP"}, + {"value": "ZIPS", "label": "ZIPS"}, + {"value": "DWAA", "label": "DWAA"}, + {"value": "DWAB", "label": "DWAB"}, + {"value": "PIZ", "label": "PIZ"}, + {"value": "RLE", "label": "RLE"}, + {"value": "PXR24", "label": "PXR24"}, + {"value": "B44", "label": "B44"}, + {"value": "B44A", "label": "B44A"}, + {"value": "none", "label": "None"} + ] + + +class ExtractConvertToEXRModel(BaseSettingsModel): + """WARNING: This plugin does not work on MacOS (using OIIO tool).""" + enabled: bool = False + replace_pngs: bool = True + + exr_compression: str = Field( + "ZIP", + enum_resolver=compression_enum, + title="EXR Compression" + ) + + +class LoadImageDefaultModel(BaseSettingsModel): + _layout = "expanded" + stretch: bool = Field(title="Stretch") + timestretch: bool = Field(title="TimeStretch") + preload: bool = Field(title="Preload") + + +class LoadImageModel(BaseSettingsModel): + defaults: LoadImageDefaultModel = Field( + default_factory=LoadImageDefaultModel + ) + + +class PublishPluginsModel(BaseSettingsModel): + CollectRenderInstances: CollectRenderInstancesModel = Field( + default_factory=CollectRenderInstancesModel, + title="Collect Render Instances") + ExtractSequence: ExtractSequenceModel = Field( + default_factory=ExtractSequenceModel, + title="Extract Sequence") + ValidateProjectSettings: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Project Settings") + ValidateMarks: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate MarkIn/Out") + ValidateStartFrame: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Scene Start Frame") + ValidateAssetName: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Folder Name") + ExtractConvertToEXR: ExtractConvertToEXRModel = Field( + default_factory=ExtractConvertToEXRModel, + title="Extract Convert To EXR") + + +class LoadPluginsModel(BaseSettingsModel): + LoadImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Load Image") + ImportImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Import Image") + + +DEFAULT_PUBLISH_SETTINGS = { + "CollectRenderInstances": { + "ignore_render_pass_transparency": False + }, + "ExtractSequence": { + "review_bg": [255, 255, 255, 1.0] + }, + "ValidateProjectSettings": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMarks": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateStartFrame": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAssetName": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractConvertToEXR": { + "enabled": False, + "replace_pngs": True, + "exr_compression": "ZIP" + } +} diff --git a/server_addon/tvpaint/server/settings/workfile_builder.py b/server_addon/tvpaint/server/settings/workfile_builder.py new file mode 100644 
index 0000000000..e0aba5da7e --- /dev/null +++ b/server_addon/tvpaint/server/settings/workfile_builder.py @@ -0,0 +1,30 @@ +from pydantic import Field + +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + task_types_enum, +) + + +class CustomBuilderTemplate(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + template_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel + ) + + +class WorkfileBuilderPlugin(BaseSettingsModel): + _title = "Workfile Builder" + create_first_version: bool = Field( + False, + title="Create first workfile" + ) + + custom_templates: list[CustomBuilderTemplate] = Field( + default_factory=CustomBuilderTemplate + ) diff --git a/server_addon/tvpaint/server/version.py b/server_addon/tvpaint/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/tvpaint/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/unreal/server/__init__.py b/server_addon/unreal/server/__init__.py new file mode 100644 index 0000000000..a5f3e9597d --- /dev/null +++ b/server_addon/unreal/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import UnrealSettings, DEFAULT_VALUES + + +class UnrealAddon(BaseServerAddon): + name = "unreal" + title = "Unreal" + version = __version__ + settings_model: Type[UnrealSettings] = UnrealSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/unreal/server/imageio.py b/server_addon/unreal/server/imageio.py new file mode 100644 index 0000000000..dde042ba47 --- /dev/null +++ b/server_addon/unreal/server/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class UnrealImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/unreal/server/settings.py b/server_addon/unreal/server/settings.py new file mode 100644 index 0000000000..479e041e25 --- /dev/null +++ b/server_addon/unreal/server/settings.py @@ -0,0 +1,64 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from 
.imageio import UnrealImageIOModel + + +class ProjectSetup(BaseSettingsModel): + dev_mode: bool = Field( + False, + title="Dev mode" + ) + + +def _render_format_enum(): + return [ + {"value": "png", "label": "PNG"}, + {"value": "exr", "label": "EXR"}, + {"value": "jpg", "label": "JPG"}, + {"value": "bmp", "label": "BMP"} + ] + + +class UnrealSettings(BaseSettingsModel): + imageio: UnrealImageIOModel = Field( + default_factory=UnrealImageIOModel, + title="Color Management (ImageIO)" + ) + level_sequences_for_layouts: bool = Field( + False, + title="Generate level sequences when loading layouts" + ) + delete_unmatched_assets: bool = Field( + False, + title="Delete assets that are not matched" + ) + render_config_path: str = Field( + "", + title="Render Config Path" + ) + preroll_frames: int = Field( + 0, + title="Pre-roll frames" + ) + render_format: str = Field( + "png", + title="Render format", + enum_resolver=_render_format_enum + ) + project_setup: ProjectSetup = Field( + default_factory=ProjectSetup, + title="Project Setup", + ) + + +DEFAULT_VALUES = { + "level_sequences_for_layouts": False, + "delete_unmatched_assets": False, + "render_config_path": "", + "preroll_frames": 0, + "render_format": "png", + "project_setup": { + "dev_mode": False + } +} diff --git a/server_addon/unreal/server/version.py b/server_addon/unreal/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/unreal/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/setup.py b/setup.py index ab6e22bccc..4b6f286730 100644 --- a/setup.py +++ b/setup.py @@ -1,7 +1,6 @@ # -*- coding: utf-8 -*- """Setup info for building OpenPype 3.0.""" import os -import sys import re import platform import distutils.spawn @@ -89,7 +88,6 @@ install_requires = [ "keyring", "clique", "jsonschema", - "opentimelineio", "pathlib2", "pkg_resources", "PIL", @@ -158,11 +156,20 @@ bdist_mac_options = dict( ) executables = [ - Executable("start.py", base=base, - target_name="openpype_gui", icon=icon_path.as_posix()), - Executable("start.py", base=None, - target_name="openpype_console", icon=icon_path.as_posix()) + Executable( + "start.py", + base=base, + target_name="openpype_gui", + icon=icon_path.as_posix() + ), + Executable( + "start.py", + base=None, + target_name="openpype_console", + icon=icon_path.as_posix() + ), ] + if IS_LINUX: executables.append( Executable( diff --git a/start.py b/start.py index 91e5c29a53..3b020c76c0 100644 --- a/start.py +++ b/start.py @@ -133,6 +133,10 @@ else: vendor_python_path = os.path.join(OPENPYPE_ROOT, "vendor", "python") sys.path.insert(0, vendor_python_path) +# Add common package to sys path +# - common contains common code for bootstraping and OpenPype processes +sys.path.insert(0, os.path.join(OPENPYPE_ROOT, "common")) + import blessed # noqa: E402 import certifi # noqa: E402 @@ -140,8 +144,8 @@ import certifi # noqa: E402 if sys.__stdout__: term = blessed.Terminal() - def _print(message: str): - if silent_mode: + def _print(message: str, force=False): + if silent_mode and not force: return if message.startswith("!!! "): print(f'{term.orangered2("!!! 
")}{message[4:]}') @@ -197,6 +201,15 @@ if "--headless" in sys.argv: elif os.getenv("OPENPYPE_HEADLESS_MODE") != "1": os.environ.pop("OPENPYPE_HEADLESS_MODE", None) +# Set builtin ocio root +os.environ["BUILTIN_OCIO_ROOT"] = os.path.join( + OPENPYPE_ROOT, + "vendor", + "bin", + "ocioconfig", + "OpenColorIOConfigs" +) + # Enabled logging debug mode when "--debug" is passed if "--verbose" in sys.argv: expected_values = ( @@ -255,6 +268,7 @@ from igniter import BootstrapRepos # noqa: E402 from igniter.tools import ( get_openpype_global_settings, get_openpype_path_from_settings, + get_local_openpype_path_from_settings, validate_mongo_connection, OpenPypeVersionNotFound, OpenPypeVersionIncompatible @@ -348,8 +362,15 @@ def run_disk_mapping_commands(settings): mappings = disk_mapping.get(low_platform) or [] for source, destination in mappings: - destination = destination.rstrip('/') - source = source.rstrip('/') + if low_platform == "windows": + destination = destination.replace("/", "\\").rstrip("\\") + source = source.replace("/", "\\").rstrip("\\") + # Add slash after ':' ('G:' -> 'G:\') + if destination.endswith(":"): + destination += "\\" + else: + destination = destination.rstrip("/") + source = source.rstrip("/") if low_platform == "darwin": scr = f'do shell script "ln -s {source} {destination}" with administrator privileges' # noqa @@ -507,8 +528,8 @@ def _process_arguments() -> tuple: not use_version_value or not use_version_value.startswith("=") ): - _print("!!! Please use option --use-version like:") - _print(" --use-version=3.0.0") + _print("!!! Please use option --use-version like:", True) + _print(" --use-version=3.0.0", True) sys.exit(1) version_str = use_version_value[1:] @@ -525,14 +546,14 @@ def _process_arguments() -> tuple: break if use_version is None: - _print("!!! Requested version isn't in correct format.") + _print("!!! Requested version isn't in correct format.", True) _print((" Use --list-versions to find out" - " proper version string.")) + " proper version string."), True) sys.exit(1) if arg == "--validate-version": - _print("!!! Please use option --validate-version like:") - _print(" --validate-version=3.0.0") + _print("!!! Please use option --validate-version like:", True) + _print(" --validate-version=3.0.0", True) sys.exit(1) if arg.startswith("--validate-version="): @@ -543,9 +564,9 @@ def _process_arguments() -> tuple: sys.argv.remove(arg) commands.append("validate") else: - _print("!!! Requested version isn't in correct format.") + _print("!!! Requested version isn't in correct format.", True) _print((" Use --list-versions to find out" - " proper version string.")) + " proper version string."), True) sys.exit(1) if "--list-versions" in sys.argv: @@ -556,7 +577,7 @@ def _process_arguments() -> tuple: # this is helper to run igniter before anything else if "igniter" in sys.argv: if os.getenv("OPENPYPE_HEADLESS_MODE") == "1": - _print("!!! Cannot open Igniter dialog in headless mode.") + _print("!!! Cannot open Igniter dialog in headless mode.", True) sys.exit(1) return_code = igniter.open_dialog() @@ -606,9 +627,9 @@ def _determine_mongodb() -> str: if not openpype_mongo: _print("*** No DB connection string specified.") if os.getenv("OPENPYPE_HEADLESS_MODE") == "1": - _print("!!! Cannot open Igniter dialog in headless mode.") - _print( - "!!! Please use `OPENPYPE_MONGO` to specify server address.") + _print("!!! Cannot open Igniter dialog in headless mode.", True) + _print(("!!! 
Please use `OPENPYPE_MONGO` to specify " + "server address."), True) sys.exit(1) _print("--- launching setup UI ...") @@ -783,7 +804,7 @@ def _find_frozen_openpype(use_version: str = None, try: version_path = bootstrap.extract_openpype(openpype_version) except OSError as e: - _print("!!! failed: {}".format(str(e))) + _print("!!! failed: {}".format(str(e)), True) sys.exit(1) else: # cleanup zip after extraction @@ -899,7 +920,7 @@ def _boot_validate_versions(use_version, local_version): v: OpenPypeVersion found = [v for v in openpype_versions if str(v) == use_version] if not found: - _print(f"!!! Version [ {use_version} ] not found.") + _print(f"!!! Version [ {use_version} ] not found.", True) list_versions(openpype_versions, local_version) sys.exit(1) @@ -908,7 +929,8 @@ def _boot_validate_versions(use_version, local_version): use_version, openpype_versions ) valid, message = bootstrap.validate_openpype_version(version_path) - _print(f'{">>> " if valid else "!!! "}{message}') + _print(f'{">>> " if valid else "!!! "}{message}', not valid) + return valid def _boot_print_versions(openpype_root): @@ -935,7 +957,7 @@ def _boot_print_versions(openpype_root): def _boot_handle_missing_version(local_version, message): - _print(message) + _print(message, True) if os.environ.get("OPENPYPE_HEADLESS_MODE") == "1": openpype_versions = bootstrap.find_openpype( include_zips=True) @@ -983,7 +1005,7 @@ def boot(): openpype_mongo = _determine_mongodb() except RuntimeError as e: # without mongodb url we are done for. - _print(f"!!! {e}") + _print(f"!!! {e}", True) sys.exit(1) os.environ["OPENPYPE_MONGO"] = openpype_mongo @@ -1018,14 +1040,18 @@ def boot(): # find its versions there and bootstrap them. openpype_path = get_openpype_path_from_settings(global_settings) + # Check if local versions should be installed in custom folder and not in + # user app data + data_dir = get_local_openpype_path_from_settings(global_settings) + bootstrap.set_data_dir(data_dir) if getattr(sys, 'frozen', False): local_version = bootstrap.get_version(Path(OPENPYPE_ROOT)) else: local_version = OpenPypeVersion.get_installed_version_str() if "validate" in commands: - _boot_validate_versions(use_version, local_version) - sys.exit(1) + valid = _boot_validate_versions(use_version, local_version) + sys.exit(0 if valid else 1) if not openpype_path: _print("*** Cannot get OpenPype path from database.") @@ -1035,7 +1061,7 @@ def boot(): if "print_versions" in commands: _boot_print_versions(OPENPYPE_ROOT) - sys.exit(1) + sys.exit(0) # ------------------------------------------------------------------------ # Find OpenPype versions @@ -1052,13 +1078,13 @@ def boot(): except RuntimeError as e: # no version to run - _print(f"!!! {e}") + _print(f"!!! {e}", True) sys.exit(1) # validate version - _print(f">>> Validating version [ {str(version_path)} ]") + _print(f">>> Validating version in frozen [ {str(version_path)} ]") result = bootstrap.validate_openpype_version(version_path) if not result[0]: - _print(f"!!! Invalid version: {result[1]}") + _print(f"!!! Invalid version: {result[1]}", True) sys.exit(1) _print("--- version is valid") else: @@ -1126,7 +1152,7 @@ def boot(): cli.main(obj={}, prog_name="openpype") except Exception: # noqa exc_info = sys.exc_info() - _print("!!! OpenPype crashed:") + _print("!!! 
OpenPype crashed:", True)
         traceback.print_exception(*exc_info)
         sys.exit(1)
diff --git a/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py b/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py
new file mode 100644
index 0000000000..57e2f78973
--- /dev/null
+++ b/tests/integration/hosts/nuke/test_deadline_publish_in_nuke_prerender.py
@@ -0,0 +1,106 @@
+import logging
+
+from tests.lib.assert_classes import DBAssert
+from tests.integration.hosts.nuke.lib import NukeDeadlinePublishTestClass
+
+log = logging.getLogger("test_publish_in_nuke")
+
+
+class TestDeadlinePublishInNukePrerender(NukeDeadlinePublishTestClass):
+    """Basic test case for publishing in Nuke and Deadline for prerender.
+
+    It is different from `test_deadline_publish_in_nuke`, as that one is for
+    the `render` family, so this test expects different subset names.
+
+    Uses generic TestCase to prepare fixtures for test data, testing DBs,
+    env vars.
+
+    !!!
+    It expects a path in the WriteNode starting with 'c:/projects'; the test
+    replaces it with the correct value in a temp folder.
+    Access the file path by selecting the WriteNode group, CTRL+Enter, and
+    updating the file input.
+    !!!
+
+    Opens Nuke, runs publish on the prepared workfile.
+
+    Then checks content of DB (if subset, version, representations were
+    created).
+    Checks tmp folder if all expected files were published.
+
+    How to run:
+    (in cmd with activated {OPENPYPE_ROOT}/.venv)
+    {OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py
+    runtests ../tests/integration/hosts/nuke  # noqa: E501
+
+    To check log/errors from the launched app's publish process, keep PERSIST
+    set to True and check the `test_openpype.logs` collection.
+    """
+    TEST_FILES = [
+        ("1aQaKo3cF-fvbTfvODIRFMxgherjbJ4Ql",
+         "test_nuke_deadline_publish_in_nuke_prerender.zip", "")
+    ]
+
+    APP_GROUP = "nuke"
+
+    TIMEOUT = 180  # publish timeout
+
+    # could be overwritten by command line arguments
+    # keep empty to locate latest installed variant or explicit
+    APP_VARIANT = ""
+    PERSIST = False  # True - keep test_db, test_openpype, outputted test files
+    TEST_DATA_FOLDER = None
+
+    def test_db_asserts(self, dbcon, publish_finished):
+        """Host and input data dependent expected results in DB."""
+        print("test_db_asserts")
+        failures = []
+
+        failures.append(DBAssert.count_of_types(dbcon, "version", 2))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1}))
+
+        # prerender has only default subset format `{family}{variant}`,
+        # Key01 is used variant
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="prerenderKey01"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="workfileTest_task"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 2))
+
+        additional_args = {"context.subset": "workfileTest_task",
+                           "context.ext": "nk"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "prerenderKey01",
+                           "context.ext": "exr"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        # prerender doesn't have set creation of review by default
+        additional_args = {"context.subset": "prerenderKey01",
+                           "name": "thumbnail"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "prerenderKey01",
+                           "name": "h264_mov"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
additional_args=additional_args)) + + assert not any(failures) + + +if __name__ == "__main__": + test_case = TestDeadlinePublishInNukePrerender() diff --git a/common/openpype_common/distribution/file_handler.py b/tests/lib/file_handler.py similarity index 66% rename from common/openpype_common/distribution/file_handler.py rename to tests/lib/file_handler.py index e649f143e9..07f6962c98 100644 --- a/common/openpype_common/distribution/file_handler.py +++ b/tests/lib/file_handler.py @@ -9,21 +9,23 @@ import hashlib import tarfile import zipfile +import requests -USER_AGENT = "openpype" +USER_AGENT = "AYON-launcher" class RemoteFileHandler: """Download file from url, might be GDrive shareable link""" - IMPLEMENTED_ZIP_FORMATS = ['zip', 'tar', 'tgz', - 'tar.gz', 'tar.xz', 'tar.bz2'] + IMPLEMENTED_ZIP_FORMATS = { + "zip", "tar", "tgz", "tar.gz", "tar.xz", "tar.bz2" + } @staticmethod def calculate_md5(fpath, chunk_size=10000): md5 = hashlib.md5() - with open(fpath, 'rb') as f: - for chunk in iter(lambda: f.read(chunk_size), b''): + with open(fpath, "rb") as f: + for chunk in iter(lambda: f.read(chunk_size), b""): md5.update(chunk) return md5.hexdigest() @@ -45,7 +47,7 @@ class RemoteFileHandler: h = hashlib.sha256() b = bytearray(128 * 1024) mv = memoryview(b) - with open(fpath, 'rb', buffering=0) as f: + with open(fpath, "rb", buffering=0) as f: for n in iter(lambda: f.readinto(mv), 0): h.update(mv[:n]) return h.hexdigest() @@ -62,27 +64,32 @@ class RemoteFileHandler: return True if not hash_type: raise ValueError("Provide hash type, md5 or sha256") - if hash_type == 'md5': + if hash_type == "md5": return RemoteFileHandler.check_md5(fpath, hash_value) if hash_type == "sha256": return RemoteFileHandler.check_sha256(fpath, hash_value) @staticmethod def download_url( - url, root, filename=None, - sha256=None, max_redirect_hops=3 + url, + root, + filename=None, + max_redirect_hops=3, + headers=None ): - """Download a file from a url and place it in root. + """Download a file from url and place it in root. + Args: url (str): URL to download file from root (str): Directory to place downloaded file in filename (str, optional): Name to save the file under. If None, use the basename of the URL - sha256 (str, optional): sha256 checksum of the download. - If None, do not check - max_redirect_hops (int, optional): Maximum number of redirect + max_redirect_hops (Optional[int]): Maximum number of redirect hops allowed + headers (Optional[dict[str, str]]): Additional required headers + - Authentication etc.. 
""" + root = os.path.expanduser(root) if not filename: filename = os.path.basename(url) @@ -90,55 +97,44 @@ class RemoteFileHandler: os.makedirs(root, exist_ok=True) - # check if file is already present locally - if RemoteFileHandler.check_integrity(fpath, - sha256, hash_type="sha256"): - print('Using downloaded and verified file: ' + fpath) - return - # expand redirect chain if needed - url = RemoteFileHandler._get_redirect_url(url, - max_hops=max_redirect_hops) + url = RemoteFileHandler._get_redirect_url( + url, max_hops=max_redirect_hops, headers=headers) # check if file is located on Google Drive file_id = RemoteFileHandler._get_google_drive_file_id(url) if file_id is not None: return RemoteFileHandler.download_file_from_google_drive( - file_id, root, filename, sha256) + file_id, root, filename) # download the file try: - print('Downloading ' + url + ' to ' + fpath) - RemoteFileHandler._urlretrieve(url, fpath) - except (urllib.error.URLError, IOError) as e: - if url[:5] == 'https': - url = url.replace('https:', 'http:') - print('Failed download. Trying https -> http instead.' - ' Downloading ' + url + ' to ' + fpath) - RemoteFileHandler._urlretrieve(url, fpath) - else: - raise e + print(f"Downloading {url} to {fpath}") + RemoteFileHandler._urlretrieve(url, fpath, headers=headers) + except (urllib.error.URLError, IOError) as exc: + if url[:5] != "https": + raise exc - # check integrity of downloaded file - if not RemoteFileHandler.check_integrity(fpath, - sha256, hash_type="sha256"): - raise RuntimeError("File not found or corrupted.") + url = url.replace("https:", "http:") + print(( + "Failed download. Trying https -> http instead." + f" Downloading {url} to {fpath}" + )) + RemoteFileHandler._urlretrieve(url, fpath, headers=headers) @staticmethod - def download_file_from_google_drive(file_id, root, - filename=None, - sha256=None): + def download_file_from_google_drive( + file_id, root, filename=None + ): """Download a Google Drive file from and place it in root. Args: file_id (str): id of file to be downloaded root (str): Directory to place downloaded file in filename (str, optional): Name to save the file under. If None, use the id of the file. - sha256 (str, optional): sha256 checksum of the download. 
- If None, do not check """ # Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url # noqa - import requests + url = "https://docs.google.com/uc?export=download" root = os.path.expanduser(root) @@ -148,17 +144,16 @@ class RemoteFileHandler: os.makedirs(root, exist_ok=True) - if os.path.isfile(fpath) and RemoteFileHandler.check_integrity( - fpath, sha256, hash_type="sha256"): - print('Using downloaded and verified file: ' + fpath) + if os.path.isfile(fpath) and RemoteFileHandler.check_integrity(fpath): + print(f"Using downloaded and verified file: {fpath}") else: session = requests.Session() - response = session.get(url, params={'id': file_id}, stream=True) + response = session.get(url, params={"id": file_id}, stream=True) token = RemoteFileHandler._get_confirm_token(response) if token: - params = {'id': file_id, 'confirm': token} + params = {"id": file_id, "confirm": token} response = session.get(url, params=params, stream=True) response_content_generator = response.iter_content(32768) @@ -186,28 +181,28 @@ class RemoteFileHandler: destination_path = os.path.dirname(path) _, archive_type = os.path.splitext(path) - archive_type = archive_type.lstrip('.') + archive_type = archive_type.lstrip(".") - if archive_type in ['zip']: - print("Unzipping {}->{}".format(path, destination_path)) + if archive_type in ["zip"]: + print(f"Unzipping {path}->{destination_path}") zip_file = zipfile.ZipFile(path) zip_file.extractall(destination_path) zip_file.close() elif archive_type in [ - 'tar', 'tgz', 'tar.gz', 'tar.xz', 'tar.bz2' + "tar", "tgz", "tar.gz", "tar.xz", "tar.bz2" ]: - print("Unzipping {}->{}".format(path, destination_path)) - if archive_type == 'tar': - tar_type = 'r:' - elif archive_type.endswith('xz'): - tar_type = 'r:xz' - elif archive_type.endswith('gz'): - tar_type = 'r:gz' - elif archive_type.endswith('bz2'): - tar_type = 'r:bz2' + print(f"Unzipping {path}->{destination_path}") + if archive_type == "tar": + tar_type = "r:" + elif archive_type.endswith("xz"): + tar_type = "r:xz" + elif archive_type.endswith("gz"): + tar_type = "r:gz" + elif archive_type.endswith("bz2"): + tar_type = "r:bz2" else: - tar_type = 'r:*' + tar_type = "r:*" try: tar_file = tarfile.open(path, tar_type) except tarfile.ReadError: @@ -216,29 +211,35 @@ class RemoteFileHandler: tar_file.close() @staticmethod - def _urlretrieve(url, filename, chunk_size): + def _urlretrieve(url, filename, chunk_size=None, headers=None): + final_headers = {"User-Agent": USER_AGENT} + if headers: + final_headers.update(headers) + + chunk_size = chunk_size or 8192 with open(filename, "wb") as fh: with urllib.request.urlopen( - urllib.request.Request(url, - headers={"User-Agent": USER_AGENT})) \ - as response: + urllib.request.Request(url, headers=final_headers) + ) as response: for chunk in iter(lambda: response.read(chunk_size), ""): if not chunk: break fh.write(chunk) @staticmethod - def _get_redirect_url(url, max_hops): + def _get_redirect_url(url, max_hops, headers=None): initial_url = url - headers = {"Method": "HEAD", "User-Agent": USER_AGENT} - + final_headers = {"Method": "HEAD", "User-Agent": USER_AGENT} + if headers: + final_headers.update(headers) for _ in range(max_hops + 1): with urllib.request.urlopen( - urllib.request.Request(url, headers=headers)) as response: + urllib.request.Request(url, headers=final_headers) + ) as response: if response.url == url or response.url is None: return url - url = response.url + return response.url else: raise RecursionError( f"Request to 
{initial_url} exceeded {max_hops} redirects. " @@ -248,7 +249,7 @@ class RemoteFileHandler: @staticmethod def _get_confirm_token(response): for key, value in response.cookies.items(): - if key.startswith('download_warning'): + if key.startswith("download_warning"): return value # handle antivirus warning for big zips diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py index 300024dc98..2af4af02de 100644 --- a/tests/lib/testing_classes.py +++ b/tests/lib/testing_classes.py @@ -12,7 +12,7 @@ import requests import re from tests.lib.db_handler import DBHandler -from common.openpype_common.distribution.file_handler import RemoteFileHandler +from tests.lib.file_handler import RemoteFileHandler from openpype.modules import ModulesManager from openpype.settings import get_project_settings diff --git a/tests/unit/openpype/default_modules/royal_render/test_rr_job.py b/tests/unit/openpype/default_modules/royal_render/test_rr_job.py deleted file mode 100644 index ab8b1bfd50..0000000000 --- a/tests/unit/openpype/default_modules/royal_render/test_rr_job.py +++ /dev/null @@ -1,10 +0,0 @@ -# -*- coding: utf-8 -*- -"""Test suite for User Settings.""" -# import pytest -# from openpype.modules import ModulesManager - - -def test_rr_job(): - # manager = ModulesManager() - # rr_module = manager.modules_by_name["royalrender"] - ... diff --git a/tools/fetch_thirdparty_libs.py b/tools/fetch_thirdparty_libs.py index 70257caa46..c2dc4636d0 100644 --- a/tools/fetch_thirdparty_libs.py +++ b/tools/fetch_thirdparty_libs.py @@ -67,40 +67,45 @@ def _print(msg: str, message_type: int = 0) -> None: print(f"{header}{msg}") -def install_qtbinding(pyproject, openpype_root, platform_name): - _print("Handling Qt binding framework ...") - qtbinding_def = pyproject["openpype"]["qtbinding"][platform_name] - package = qtbinding_def["package"] - version = qtbinding_def.get("version") - - qtbinding_arg = None +def _pip_install(openpype_root, package, version=None): + arg = None if package and version: - qtbinding_arg = f"{package}=={version}" + arg = f"{package}=={version}" elif package: - qtbinding_arg = package + arg = package - if not qtbinding_arg: - _print("Didn't find Qt binding to install") + if not arg: + _print("Couldn't find package to install") sys.exit(1) - _print(f"We'll install {qtbinding_arg}") + _print(f"We'll install {arg}") python_vendor_dir = openpype_root / "vendor" / "python" try: subprocess.run( [ sys.executable, - "-m", "pip", "install", "--upgrade", qtbinding_arg, + "-m", "pip", "install", "--upgrade", arg, "-t", str(python_vendor_dir) ], check=True, stdout=subprocess.DEVNULL ) except subprocess.CalledProcessError as e: - _print("Error during PySide2 installation.", 1) + _print(f"Error during {package} installation.", 1) _print(str(e), 1) sys.exit(1) + +def install_qtbinding(pyproject, openpype_root, platform_name): + _print("Handling Qt binding framework ...") + qtbinding_def = pyproject["openpype"]["qtbinding"][platform_name] + package = qtbinding_def["package"] + version = qtbinding_def.get("version") + _pip_install(openpype_root, package, version) + + python_vendor_dir = openpype_root / "vendor" / "python" + # Remove libraries for QtSql which don't have available libraries # by default and Postgre library would require to modify rpath of # dependency @@ -112,6 +117,13 @@ def install_qtbinding(pyproject, openpype_root, platform_name): os.remove(str(filepath)) +def install_runtime_dependencies(pyproject, openpype_root): + _print("Installing Runtime Dependencies ...") + runtime_deps = 
pyproject["openpype"]["runtime-deps"] + for package, version in runtime_deps.items(): + _pip_install(openpype_root, package, version) + + def install_thirdparty(pyproject, openpype_root, platform_name): _print("Processing third-party dependencies ...") try: @@ -221,6 +233,7 @@ def main(): pyproject = toml.load(openpype_root / "pyproject.toml") platform_name = platform.system().lower() install_qtbinding(pyproject, openpype_root, platform_name) + install_runtime_dependencies(pyproject, openpype_root) install_thirdparty(pyproject, openpype_root, platform_name) end_time = time.time_ns() total_time = (end_time - start_time) / 1000000000 diff --git a/website/docs/admin_hosts_maya.md b/website/docs/admin_hosts_maya.md index 700822843f..93acf316c2 100644 --- a/website/docs/admin_hosts_maya.md +++ b/website/docs/admin_hosts_maya.md @@ -113,7 +113,8 @@ This is useful to fix some specific renderer glitches and advanced hacking of Ma #### Namespace and Group Name Here you can create your own custom naming for the reference loader. -The custom naming is split into two parts: namespace and group name. If you don't set the namespace or the group name, an error will occur. +The custom naming is split into two parts: namespace and group name. If you don't set the namespace, an error will occur. +Group name could be set empty, that way no wrapping group will be created for loaded item. Here's the different variables you can use:

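A note on the `RemoteFileHandler` changes earlier in this diff: `download_url` no longer verifies a sha256 checksum itself and instead accepts extra request headers that are sent during both redirect resolution and the download. A minimal usage sketch under those signatures; the URL, token, and checksum values below are hypothetical placeholders, and the import path assumes the file's new `tests/lib` location:

```python
from tests.lib.file_handler import RemoteFileHandler

# Hypothetical source and auth header, for illustration only.
url = "https://example.com/files/test_project.zip"
headers = {"Authorization": "Bearer <your-token>"}

# Follows up to 3 redirects and sends the extra headers with
# every request, including the final file download.
RemoteFileHandler.download_url(
    url,
    "downloads",
    filename="test_project.zip",
    headers=headers,
)

# Integrity checking is now an explicit, separate step.
assert RemoteFileHandler.check_integrity(
    "downloads/test_project.zip",
    "<expected-sha256>",  # hypothetical checksum value
    hash_type="sha256",
)
```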
diff --git a/website/docs/admin_hosts_resolve.md b/website/docs/admin_hosts_resolve.md
index 09e7df1d9f..8bb8440f78 100644
--- a/website/docs/admin_hosts_resolve.md
+++ b/website/docs/admin_hosts_resolve.md
@@ -4,100 +4,38 @@ title: DaVinci Resolve Setup
 sidebar_label: DaVinci Resolve
 ---
 
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
+:::warning
+Only Resolve Studio is supported due to Python API limitations in Resolve (free).
+:::
 
 ## Resolve requirements
 
 Due to the way Resolve handles Python and Python scripts, there are a few required steps that need to be done on any machine that will be using OpenPype with Resolve.
 
-### Installing Resolve's own python 3.6 interpreter.
-Resolve uses a hardcoded method to look for the python executable path. All of tho following paths are defined automatically by Python msi installer. We are using Python 3.6.2.
+## Basic setup
 
-
+- Supported versions are up to v18
+- Install Python 3.6.2 (latest tested with v17) or up to 3.9.13 (latest tested with v18)
+- pip install PySide2:
+  - Python 3.9.*: open a terminal, go to the python.exe directory, then run `python -m pip install PySide2`
+- pip install OpenTimelineIO:
+  - Python 3.9.*: open a terminal, go to the python.exe directory, then run `python -m pip install OpenTimelineIO`
+  - Python 3.6: open a terminal, go to the python.exe directory, then run `python -m pip install git+https://github.com/PixarAnimationStudios/OpenTimelineIO.git@5aa24fbe89d615448876948fe4b4900455c9a3e8` and move the built files from `./Lib/site-packages/opentimelineio/cxx-libs/bin and lib` to `./Lib/site-packages/opentimelineio/`. I built it on a Win10 machine with Visual Studio Community 2019 and
+  ![image](https://user-images.githubusercontent.com/40640033/102792588-ffcb1c80-43a8-11eb-9c6b-bf2114ed578e.png) with CMake installed in PATH.
+- Make sure Resolve Fusion (Fusion Tab/menu/Fusion/Fusion Settings) is set to Python 3.6
+  ![image](https://user-images.githubusercontent.com/40640033/102631545-280b0f00-414e-11eb-89fc-98ac268d209d.png)
+- Open OpenPype **Tray/Admin/Studio settings** > `applications/resolve/environment` and add the Python 3 path to `RESOLVE_PYTHON3_HOME` for the relevant platform.
 
-
+## Editorial setup
+
+This is how it looks on my testing project timeline
+![image](https://user-images.githubusercontent.com/40640033/102637638-96ec6600-4156-11eb-9656-6e8e3ce4baf8.png)
+Notice I renamed the tracks to `main` (holding metadata markers) and `review` (used for generating review data with ffmpeg conversion to a jpg sequence).
-
-
-`%LOCALAPPDATA%\Programs\Python\Python36`
-
-
-
-`/opt/Python/3.6/bin`
-
-
-
-
-`~/Library/Python/3.6/bin`
-
-
-
-
-
-### Installing PySide2 into python 3.6 for correct gui work
-
-OpenPype is using its own window widget inside Resolve, for that reason PySide2 has to be installed into the python 3.6 (as explained above).
-
-
-
-
-
-paste to any terminal of your choice
-
-```bash
-%LOCALAPPDATA%\Programs\Python\Python36\python.exe -m pip install PySide2
-```
-
-
-
-
-paste to any terminal of your choice
-
-```bash
-/opt/Python/3.6/bin/python -m pip install PySide2
-```
-
-
-
-
-paste to any terminal of your choice
-
-```bash
-~/Library/Python/3.6/bin/python -m pip install PySide2
-```
-
-
-
-
-
-### Set Resolve's Fusion settings for Python 3.6 interpereter
-
-
-
-As it is shown in below picture you have to go to Fusion Tab and then in Fusion menu find Fusion Settings. Go to Fusion/Script and find Default Python Version and switch to Python 3.6
-
-
-
-
-![Create menu](assets/resolve_fusion_tab.png)
-![Create menu](assets/resolve_fusion_menu.png)
-![Create menu](assets/resolve_fusion_script_settings.png)
-
-
\ No newline at end of file
+1. start the OpenPype menu from Resolve/EditTab/Menu/Workspace/Scripts/Comp/**__OpenPype_Menu__**
+2. then select any clips in the `main` track and change their color to `Chocolate`
+3. in the OpenPype menu select `Create`
+4. in the Creator select `Create Publishable Clip [New]` (temporary name)
+5. set `Rename clips` to True, Master Track to `main` and Use review track to `review`, as in the picture
+   ![image](https://user-images.githubusercontent.com/40640033/102643773-0d419600-4160-11eb-919e-9c2be0aecab8.png)
+6. after you hit `ok`, all clips are colored `pink` and marked with an openpype metadata tag
+7. hit `Publish` in the openpype menu and check that everything was collected correctly. That is the last step for now, as the rest is work in progress; next steps will follow.
diff --git a/website/docs/admin_openpype_commands.md b/website/docs/admin_openpype_commands.md
index 131b6c0a51..a149d78aa2 100644
--- a/website/docs/admin_openpype_commands.md
+++ b/website/docs/admin_openpype_commands.md
@@ -40,7 +40,6 @@ For more information [see here](admin_use.md#run-openpype).
 | module | Run command line arguments for modules. | |
 | repack-version | Tool to re-create version zip. | [📑](#repack-version-arguments) |
 | tray | Launch OpenPype Tray. | [📑](#tray-arguments)
-| launch | Launch application in Pype environment. | [📑](#launch-arguments) |
 | publish | Pype takes JSON from provided path and use it to publish data in it. | [📑](#publish-arguments) |
 | extractenvironments | Extract environment variables for entered context to a json file. | [📑](#extractenvironments-arguments) |
 | run | Execute given python script within OpenPype environment. | [📑](#run-arguments) |
@@ -54,26 +53,6 @@ For more information [see here](admin_use.md#run-openpype).
 ```shell
 openpype_console tray
 ```
----
-
-### `launch` arguments {#launch-arguments}
-
-| Argument | Description |
-| --- | --- |
-| `--app` | Application name - this should be the key for application from Settings. |
-| `--project` | Project name (default taken from `AVALON_PROJECT` if set) |
-| `--asset` | Asset name (default taken from `AVALON_ASSET` if set) |
-| `--task` | Task name (default taken from `AVALON_TASK` is set) |
-| `--tools` | *Optional: Additional tools to add* |
-| `--user` | *Optional: User on behalf to run* |
-| `--ftrack-server` / `-fs` | *Optional: Ftrack server URL* |
-| `--ftrack-user` / `-fu` | *Optional: Ftrack user* |
-| `--ftrack-key` / `-fk` | *Optional: Ftrack API key* |
-
-For example to run Python interactive console in Pype context:
-```shell
-pype launch --app python --project my_project --asset my_asset --task my_task
-```
---

### `publish` arguments {#publish-arguments}
diff --git a/website/docs/admin_settings_system.md b/website/docs/admin_settings_system.md
index d61713ccd5..8abcefd24d 100644
--- a/website/docs/admin_settings_system.md
+++ b/website/docs/admin_settings_system.md
@@ -102,6 +102,10 @@ workstation that should be submitting render jobs to muster via OpenPype.
 
 **`templates mapping`** - you can customize Muster templates to match your existing setup here.
 
+### Royal Render
+
+**`Royal Render Root Paths`** - multi-platform paths to the Royal Render installation.
+
 ### Clockify
 
 **`Workspace Name`** - name of the clockify workspace where you would like to be sending all the timelogs.
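Looking back at the `install_runtime_dependencies` helper added to `tools/fetch_thirdparty_libs.py` earlier in this diff: it reads a `runtime-deps` table from the `openpype` section of `pyproject.toml` and hands each package/version pair to `_pip_install`, which installs into `vendor/python`. A sketch of that data flow, using a hypothetical table rather than the repository's actual entries:

```python
import toml

# Hypothetical [openpype.runtime-deps] table, for illustration only;
# the real pyproject.toml defines its own packages and versions.
PYPROJECT_SNIPPET = """
[openpype.runtime-deps]
opentimelineio = "0.14.1"
"""

pyproject = toml.loads(PYPROJECT_SNIPPET)

# Mirrors install_runtime_dependencies: iterate the mapping and build
# the pinned requirement string that _pip_install would install.
for package, version in pyproject["openpype"]["runtime-deps"].items():
    print(f"python -m pip install --upgrade {package}=={version}")
```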
diff --git a/website/docs/artist_hosts_houdini.md b/website/docs/artist_hosts_houdini.md
index 0471765365..940d5ac351 100644
--- a/website/docs/artist_hosts_houdini.md
+++ b/website/docs/artist_hosts_houdini.md
@@ -132,3 +132,25 @@ switch versions between different hda types.
 When you load hda, it will install its type in your hip file and add
 published version as its definition file. When you switch version via
 Scene Manager, it will add its definition and set it as preferred.
+
+## Publishing and loading BGEO caches
+
+There is basic support for publishing and loading **BGEO** files in all supported compression variants.
+
+### Creating BGEO instances
+
+Select the SOP node you want to export as BGEO. If your selection is at the object level, OpenPype will try to find an `output` node inside it; the one with the lowest index will be used:
+
+![BGEO output node](assets/houdini_bgeo_output_node.png)
+
+Then open the Publisher and, under Create, select **BGEO PointCache**:
+
+![BGEO Publisher](assets/houdini_bgeo-publisher.png)
+
+You can select the compression type and whether the current selection should be connected to the ROP's SOP path parameter. Publishing will produce a sequence of files based on your timeline settings.
+
+### Loading BGEO
+
+Select your published BGEO subsets in the Loader, right-click and load them in:
+
+![BGEO Publisher](assets/houdini_bgeo-loading.png)
diff --git a/website/docs/assets/houdini_bgeo-loading.png b/website/docs/assets/houdini_bgeo-loading.png
new file mode 100644
index 0000000000..e8aad66f43
Binary files /dev/null and b/website/docs/assets/houdini_bgeo-loading.png differ
diff --git a/website/docs/assets/houdini_bgeo-publisher.png b/website/docs/assets/houdini_bgeo-publisher.png
new file mode 100644
index 0000000000..5c3534077f
Binary files /dev/null and b/website/docs/assets/houdini_bgeo-publisher.png differ
diff --git a/website/docs/assets/houdini_bgeo_output_node.png b/website/docs/assets/houdini_bgeo_output_node.png
new file mode 100644
index 0000000000..160f0a259b
Binary files /dev/null and b/website/docs/assets/houdini_bgeo_output_node.png differ
diff --git a/website/docs/dev_colorspace.md b/website/docs/dev_colorspace.md
index c4b8e74d73..cb07cb18a0 100644
--- a/website/docs/dev_colorspace.md
+++ b/website/docs/dev_colorspace.md
@@ -80,7 +80,7 @@ from openpype.pipeline.colorspace import (
 class YourLoader(api.Loader):
 
     def load(self, context, name=None, namespace=None, options=None):
-        path = self.fname
+        path = self.filepath_from_context(context)
         colorspace_data = context["representation"]["data"].get("colorspaceData", {})
         colorspace = (
             colorspace_data.get("colorspace")
diff --git a/website/docs/module_royalrender.md b/website/docs/module_royalrender.md
new file mode 100644
index 0000000000..2b75fbefef
--- /dev/null
+++ b/website/docs/module_royalrender.md
@@ -0,0 +1,37 @@
+---
+id: module_royalrender
+title: Royal Render Administration
+sidebar_label: Royal Render
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+## Preparation
+
+For [Royal Render](https://www.royalrender.de/) support you need to set a few things up in both OpenPype and Royal Render itself:
+
+1. Deploy the OpenPype executable to all nodes of the Royal Render farm. See [Install & Run](admin_use.md).
+
+2. Enable the Royal Render Module in the [OpenPype Admin Settings](admin_settings_system.md#royal-render).
+
+3. Point OpenPype to your Royal Render installation in the [OpenPype Admin Settings](admin_settings_system.md#royal-render).
+
+4. Install our custom plugin and scripts to your RR repository. It should be as simple as copying the content of `openpype/modules/royalrender/rr_root` to `path/to/your/royalrender/repository`.
+
+
+## Configuration
+
+The OpenPype integration for Royal Render consists of pointing RR to the location of the OpenPype executable. This is done by copying `_install_paths/OpenPype.cfg` to
+the RR root folder. The file contains reasonable defaults; they can be changed in this file or modified in the Render apps in `rrControl`.
+
+
+## Debugging
+
+The current implementation uses a dynamically built `.xml` file which is stored in a temporary folder accessible by RR. In case of unforeseen issues, it might make sense to
+take this OpenPype-built file and try to run it via the `*__rrServerConsole` executable from the command line.
+
+## Known issues
+
+Currently, environment values set in OpenPype are not propagated into render jobs on RR. For now, it is the studio's responsibility to synchronize environment variables from OpenPype across all render nodes.
diff --git a/website/sidebars.js b/website/sidebars.js
index 267cc7f6d7..b885181fb6 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -111,6 +111,7 @@ module.exports = {
         "module_site_sync",
         "module_deadline",
         "module_muster",
+        "module_royalrender",
         "module_clockify",
         "module_slack"
     ],
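One last illustration, tied to the `dev_colorspace.md` change above where `self.fname` is replaced by `self.filepath_from_context(context)`; the same migration applies to any loader still reading `self.fname`. A minimal sketch of the documented lookup pattern; the `LoaderPlugin` base class, the class name, and the fallback behavior are assumptions, since the doc's snippet is truncated after the `colorspace` lookup:

```python
from openpype.pipeline import LoaderPlugin


class MyColorspaceAwareLoader(LoaderPlugin):
    """Sketch of the colorspace lookup pattern from dev_colorspace.md."""

    def load(self, context, name=None, namespace=None, options=None):
        # Resolve the file path from the context; this replaces the
        # deprecated `self.fname` attribute shown on the removed line.
        path = self.filepath_from_context(context)

        # Colorspace info is stored with the representation when the
        # publish pipeline captured it.
        colorspace_data = (
            context["representation"]["data"].get("colorspaceData", {})
        )

        # Assumed fallback: None when no explicit colorspace was stored.
        colorspace = colorspace_data.get("colorspace")
        print(f"Loading {path} (colorspace: {colorspace})")
```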