mirror of
https://github.com/ynput/ayon-core.git
synced 2025-12-24 21:04:40 +01:00
Merge branch 'develop' into bugfix/OP-5872_3dsmax-Loading-abc-can-use-wrong-type-with-Ornatrix
commit 49482bc188
134 changed files with 7534 additions and 599 deletions
8
.github/ISSUE_TEMPLATE/bug_report.yml
vendored
```
@@ -35,6 +35,10 @@ body:
      label: Version
      description: What version are you running? Look to OpenPype Tray
      options:
        - 3.16.7-nightly.1
        - 3.16.6
        - 3.16.6-nightly.1
        - 3.16.5
        - 3.16.5-nightly.5
        - 3.16.5-nightly.4
        - 3.16.5-nightly.3
```
```
@@ -131,10 +135,6 @@ body:
        - 3.14.9
        - 3.14.9-nightly.5
        - 3.14.9-nightly.4
        - 3.14.9-nightly.3
        - 3.14.9-nightly.2
        - 3.14.9-nightly.1
        - 3.14.8
    validations:
      required: true
  - type: dropdown
```
910
CHANGELOG.md
@@ -1,6 +1,916 @@

# Changelog

## [3.16.6](https://github.com/ynput/OpenPype/tree/3.16.6)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.5...3.16.6)

### **🆕 New features**
<details>
<summary>Workfiles tool: Refactor workfiles tool (for AYON) <a href="https://github.com/ynput/OpenPype/pull/5550">#5550</a></summary>

Refactored the workfiles tool into a new tool, separating backend and frontend logic. The refactored logic is AYON-centric and used only in AYON mode, so it does not affect OpenPype.

___

</details>

<details>
<summary>AfterEffects: added validator for missing files in FootageItems <a href="https://github.com/ynput/OpenPype/pull/5590">#5590</a></summary>

A published composition in AE could contain multiple FootageItems as layers. If a FootageItem references an imported file that doesn't exist, the render triggered by the publish process silently fails and no output is generated. This could cause a failure later in the process (in `ExtractReview`) with an unclear reason. This PR adds validation to protect against this.

___

</details>

### **🚀 Enhancements**
<details>
<summary>Maya: Yeti Cache Include viewport preview settings from source <a href="https://github.com/ynput/OpenPype/pull/5561">#5561</a></summary>

When publishing and loading Yeti caches, persist the display output and preview colors and settings to ensure consistency in the viewport.

___

</details>

<details>
<summary>Houdini: validate colorspace in review rop <a href="https://github.com/ynput/OpenPype/pull/5322">#5322</a></summary>

Adds a validator that checks whether the 'OCIO Colorspace' parameter on the review ROP is set to a valid value (valid values are the ones in the dropdown menu). It is a step towards managing colorspace in the review ROP. The validator also provides some helper actions. This PR is related to #4836 and #4833.

___

</details>

<details>
<summary>Colorspace: adding abstraction of publishing related functions <a href="https://github.com/ynput/OpenPype/pull/5497">#5497</a></summary>

The colorspace publishing functionality has been abstracted for greater usability.

___

</details>

<details>
<summary>Nuke: removing redundant workfile colorspace attributes <a href="https://github.com/ynput/OpenPype/pull/5580">#5580</a></summary>

The Nuke root workfile colorspace knobs have long been configured automatically via config roles, and the default values also work well, so there is no need for pipeline-managed knobs.

___

</details>

<details>
<summary>Ftrack: Less verbose logs for Ftrack integration in artist facing logs <a href="https://github.com/ynput/OpenPype/pull/5596">#5596</a></summary>

- Reduce artist-facing logs for component integration for Ftrack
- Avoid "Comment is not set" log in artist facing report for Kitsu and Ftrack
- Remove info log about `ffprobe` inspecting a file (changed to debug log)
  - interesting to see however that it ffprobes the same jpeg twice - but maybe once for the thumbnail?

___

</details>

### **🐛 Bug fixes**
<details>
<summary>Maya: Fix rig validators for new out_SET and controls_SET names <a href="https://github.com/ynput/OpenPype/pull/5595">#5595</a></summary>

Fix usage of `out_SET` and `controls_SET` since #5310, because they can now be prefixed by the subset name.

___

</details>

<details>
<summary>TrayPublisher: set default frame values to sequential data <a href="https://github.com/ynput/OpenPype/pull/5530">#5530</a></summary>

We are inheriting default frame, handles and fps data either from the project or setting them to 0. This is just for the case that a production decides not to ingest the sequential representations with asset-based metadata.

___

</details>

<details>
<summary>Publisher: Screenshot opacity value fix <a href="https://github.com/ynput/OpenPype/pull/5576">#5576</a></summary>

Fix opacity value.

___

</details>

<details>
<summary>AfterEffects: fix imports of image sequences <a href="https://github.com/ynput/OpenPype/pull/5581">#5581</a></summary>

#4602 broke imports of image sequences.

___

</details>

<details>
<summary>AYON: Fix representation context conversion <a href="https://github.com/ynput/OpenPype/pull/5591">#5591</a></summary>

Do not fix the `"folder"` key in the representation context until it is needed.

___

</details>

<details>
<summary>ayon-nuke: default factory to lists <a href="https://github.com/ynput/OpenPype/pull/5594">#5594</a></summary>

Default factories were missing in settings schemas for complex objects like lists, which caused saving of settings to fail.

___

</details>

<details>
<summary>Maya: Fix look assigner showing no asset if 'not found' representations are present <a href="https://github.com/ynput/OpenPype/pull/5597">#5597</a></summary>

Fix the Maya look assigner failing to show any content if it finds an invalid container for which it can't find the asset in the current project. (This can happen when e.g. loading something from a library project.) There was already logic to avoid this, but it used a variable `_id` which did not exist and likely had to be `asset_id`. I've fixed that and improved the logged message a bit, e.g.:

```
// Warning: openpype.hosts.maya.tools.mayalookassigner.commands : Id found on 22 nodes for which no asset is found database, skipping '641d78ec85c3c5b102e836b0'
```

The issue isn't necessarily related to NOT FOUND representations but in essence boils down to finding nodes with asset ids that do not exist in the current project, which could very well just be local meshes in your scene.

**Note:** I've excluded logging the nodes themselves because that tends to be a very long list of nodes. The only downside to removing that is that it's unclear which nodes are related to that `id`. If there are any ideas on how to still provide a concise informational message about that, that'd be great so I could add it. Things I had considered:

- Report the containers; the issue here is that it's about asset ids on nodes which don't HAVE to be in containers - it could be local geometry
- Report the namespaces; the issue here is that it could be nodes without namespaces (plus potentially not about ALL nodes in a namespace)
- Report the short names of the nodes; it's shorter and readable but still likely a lot of nodes.

@tokejepsen @LiborBatek any other ideas?

___

</details>
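The skip-and-warn behavior described in #5597 can be sketched outside Maya as grouping nodes by asset id and checking each id against the project database. All names here are illustrative, not the actual `mayalookassigner` API:

```python
def filter_nodes_with_known_assets(nodes_by_id, known_asset_ids, log):
    """Keep only nodes whose asset id exists in the current project.

    nodes_by_id: dict mapping asset id -> list of node names
    known_asset_ids: set of asset ids present in the project database
    log: list collecting warning messages
    """
    valid = {}
    for asset_id, nodes in nodes_by_id.items():
        if asset_id not in known_asset_ids:
            # Warn concisely without dumping the (potentially long) node list
            log.append(
                "Id found on {} nodes for which no asset is found in the "
                "database, skipping '{}'".format(len(nodes), asset_id)
            )
            continue
        valid[asset_id] = nodes
    return valid
```

Nodes with unknown ids (e.g. local meshes copied from a library project) are skipped with a warning instead of aborting the whole listing.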
<details>
<summary>Photoshop: fixed blank Flatten image <a href="https://github.com/ynput/OpenPype/pull/5600">#5600</a></summary>

Flatten image is a simplified publishing approach where all visible layers are flattened and published together. This image could be used as a reference etc. It is implemented by an auto creator which wasn't updated after the first publish, which would result in newly created layers being missed after the `auto_image` instance was created.

___

</details>

<details>
<summary>Blender: Remove Hardcoded Subset Name for Reviews <a href="https://github.com/ynput/OpenPype/pull/5603">#5603</a></summary>

Fixes the hardcoded subset name for reviews in Blender.

___

</details>

<details>
<summary>TVPaint: Fix tool callbacks <a href="https://github.com/ynput/OpenPype/pull/5608">#5608</a></summary>

Do not wait for the callback to finish.

___

</details>

### **🔀 Refactored code**
<details>
<summary>Chore: Remove unused variables and cleanup <a href="https://github.com/ynput/OpenPype/pull/5588">#5588</a></summary>

Removing some unused variables. In some cases the unused variables _seemed like they should've been used - maybe?_ so please **double check the code whether it doesn't hint at an already existing bug**. Also fixed some other small bugs in code and tweaked logging levels.

___

</details>

### **Merged pull requests**

<details>
<summary>Chore: Loader log deprecation warning for 'fname' attribute <a href="https://github.com/ynput/OpenPype/pull/5587">#5587</a></summary>

Since https://github.com/ynput/OpenPype/pull/4602 the `fname` attribute on the `LoaderPlugin` should've been deprecated and set for removal over time. However, no deprecation warning was logged whatsoever, and thus one usage appears to have sneaked in (fixed with this PR) and a new one tried to sneak in with a recent PR.

___

</details>

## [3.16.5](https://github.com/ynput/OpenPype/tree/3.16.5)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.4...3.16.5)

### **🆕 New features**

<details>
<summary>Attribute Definitions: Multiselection enum def <a href="https://github.com/ynput/OpenPype/pull/5547">#5547</a></summary>

Added a `multiselection` option to `EnumDef`.

___

</details>

### **🚀 Enhancements**
<details>
<summary>Farm: adding target collector <a href="https://github.com/ynput/OpenPype/pull/5494">#5494</a></summary>

Enhancing the farm publishing workflow.

___

</details>

<details>
<summary>Maya: Optimize validate plug-in path attributes <a href="https://github.com/ynput/OpenPype/pull/5522">#5522</a></summary>

- Optimize query (use `cmds.ls` once)
- Add Select Invalid action
- Improve validation report
- Avoid "Unknown object type" errors

___

</details>

<details>
<summary>Maya: Remove Validate Instance Attributes plug-in <a href="https://github.com/ynput/OpenPype/pull/5525">#5525</a></summary>

Remove the Validate Instance Attributes plug-in.

___

</details>

<details>
<summary>Enhancement: Tweak logging for artist facing reports <a href="https://github.com/ynput/OpenPype/pull/5537">#5537</a></summary>

Tweak the logging of publishing for global, Deadline, Maya and a Fusion plugin to have a cleaner artist-facing report.

- Fix context being reported correctly from CollectContext
- Fix ValidateMeshArnoldAttributes: fix when Arnold is not loaded, fix applying settings, fix for when ai attributes do not exist

___

</details>

<details>
<summary>AYON: Update settings <a href="https://github.com/ynput/OpenPype/pull/5544">#5544</a></summary>

Updated settings in AYON addons and conversion of AYON settings in OpenPype.

___

</details>

<details>
<summary>Chore: Removed Ass export script <a href="https://github.com/ynput/OpenPype/pull/5560">#5560</a></summary>

Removed the Arnold render script, which was obsolete and unused.

___

</details>

<details>
<summary>Nuke: Allow for knob values to be validated against multiple values. <a href="https://github.com/ynput/OpenPype/pull/5042">#5042</a></summary>

Knob values can now be validated against multiple values, so you can allow write nodes to be `exr` and `png`, or `16-bit` and `32-bit`.

___

</details>
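A minimal, host-agnostic sketch of validating a knob value against a set of allowed values, as described in #5042. The function name and signature are illustrative, not the actual plug-in API:

```python
def validate_knob(knob_name, value, allowed):
    """Return a list of error messages; an empty list means the knob is valid.

    `allowed` may be a single value or an iterable of accepted values,
    so settings can specify e.g. both "exr" and "png" as acceptable.
    """
    if isinstance(allowed, (str, int, float)):
        allowed = [allowed]
    if value not in list(allowed):
        return [
            "Knob '{}' has value '{}', expected one of: {}".format(
                knob_name, value, ", ".join(str(a) for a in allowed)
            )
        ]
    return []
```

Accepting either a scalar or a list keeps older single-value settings working while enabling multi-value validation.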
<details>
<summary>Enhancement: Cosmetics for Higher version of publish already exists validation error <a href="https://github.com/ynput/OpenPype/pull/5190">#5190</a></summary>

Fix double spaces in the message.

___

</details>

<details>
<summary>Nuke: publish existing frames on farm <a href="https://github.com/ynput/OpenPype/pull/5409">#5409</a></summary>

This PR proposes adding a fourth option in Nuke render publish called "Use Existing Frames - Farm". This would be useful when the farm is busy or when the artist lacks enough farm licenses. Additionally, some artists prefer rendering on the farm but still want to check frames before publishing. By adding the "Use Existing Frames - Farm" option, artists will have more flexibility and control over their render publishing process. This enhancement will streamline the workflow and improve efficiency for Nuke users.

___

</details>

<details>
<summary>Unreal: Create project in temp location and move to final when done <a href="https://github.com/ynput/OpenPype/pull/5476">#5476</a></summary>

Create the Unreal project in a local temporary folder and, when done, move it to the final destination.

___

</details>

<details>
<summary>TrayPublisher: adding audio product type into default presets <a href="https://github.com/ynput/OpenPype/pull/5489">#5489</a></summary>

Adding the audio product type into the default presets so anybody can publish audio to their shots.

___

</details>

<details>
<summary>Global: avoiding cleanup of flagged representation <a href="https://github.com/ynput/OpenPype/pull/5502">#5502</a></summary>

The publishing folder can be flagged as persistent at the representation level.

___

</details>

<details>
<summary>General: missing tag could raise error <a href="https://github.com/ynput/OpenPype/pull/5511">#5511</a></summary>

- avoiding a potential situation where a missing Tag key could raise an error

___

</details>

<details>
<summary>Chore: Queued event system <a href="https://github.com/ynput/OpenPype/pull/5514">#5514</a></summary>

Implemented an event system with more predictable behavior. If an event is triggered during another event's callback, it is not processed immediately but waits until all callbacks of the previous events are done. The event system also allows not triggering events directly once `emit_event` is called, which gives the option to process events in custom loops.

___

</details>
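The queued behavior described in #5514 can be sketched roughly like this; a simplified stand-in, not the actual OpenPype implementation:

```python
from collections import deque


class QueuedEventSystem:
    """Events emitted during a callback are queued, not processed reentrantly."""

    def __init__(self):
        self._callbacks = {}
        self._queue = deque()
        self._processing = False

    def add_callback(self, topic, callback):
        self._callbacks.setdefault(topic, []).append(callback)

    def emit_event(self, topic, data=None):
        self._queue.append((topic, data))
        # A reentrant emit only enqueues; the outer loop drains it in order
        if self._processing:
            return
        self._processing = True
        try:
            while self._queue:
                cur_topic, cur_data = self._queue.popleft()
                for callback in self._callbacks.get(cur_topic, []):
                    callback(cur_data)
        finally:
            self._processing = False
```

An event emitted from inside a callback is processed only after all callbacks of the currently processed events have finished, which is the "wait until previous callbacks are done" behavior the entry describes.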
<details>
<summary>Publisher: Tweak log message to provide plugin name after "Plugin" <a href="https://github.com/ynput/OpenPype/pull/5521">#5521</a></summary>

Fix the logged message for settings automatically applied to plugin attributes.

___

</details>

<details>
<summary>Houdini: Improve VDB Selection <a href="https://github.com/ynput/OpenPype/pull/5523">#5523</a></summary>

Improves VDB selection:

- if the selection is a `SopNode`: return the selected SOP node
- if the selection is an `ObjNode`: get the output node with the minimum `outputidx`, or the node with the display flag

___

</details>

<details>
<summary>Maya: Refactor/tweak Validate Instance In same Context plug-in <a href="https://github.com/ynput/OpenPype/pull/5526">#5526</a></summary>

- Chore/Refactor: Re-use existing select invalid and repair actions
- Enhancement: provide a more elaborate PublishValidationError report
- Bugfix: fix "optional" support by using the `OptionalPyblishPluginMixin` base class.

___

</details>

<details>
<summary>Enhancement: Update houdini main menu <a href="https://github.com/ynput/OpenPype/pull/5527">#5527</a></summary>

This PR adds two updates:

- dynamic main menu
- dynamic asset name and task

___

</details>

<details>
<summary>Houdini: Reset FPS when clicking Set Frame Range <a href="https://github.com/ynput/OpenPype/pull/5528">#5528</a></summary>

_Similar to Maya,_ make `Set Frame Range` reset the FPS; issue https://github.com/ynput/OpenPype/issues/5516

___

</details>

<details>
<summary>Enhancement: Deadline plugins optimize, cleanup and fix optional support for validate deadline pools <a href="https://github.com/ynput/OpenPype/pull/5531">#5531</a></summary>

- Fix optional support of validate deadline pools
- Query the Deadline webservice only once per URL for verification, and once for available Deadline pools, instead of for every instance
- Use `deadlineUrl` in `instance.data` when validating pools if it is set.
- Code cleanup: Re-use the existing `requests_get` implementation

___

</details>

<details>
<summary>Chore: PowerShell script for docker build <a href="https://github.com/ynput/OpenPype/pull/5535">#5535</a></summary>

Added a PowerShell script to run the docker build.

___

</details>

<details>
<summary>AYON: Deadline expand userpaths in executables list <a href="https://github.com/ynput/OpenPype/pull/5540">#5540</a></summary>

Expand `~` paths in the executables list.

___

</details>
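Expanding user-relative executable paths, as in #5540, can be done with the standard library; a minimal illustration (the helper name is hypothetical, not the actual plug-in code):

```python
import os


def expand_executables(executables):
    """Expand a leading '~' in each executable path to the user's home dir."""
    return [os.path.expanduser(path) for path in executables]
```

Absolute paths pass through unchanged, so mixing `~`-relative and absolute entries in the executables list is safe.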
<details>
<summary>Chore: Use correct git url <a href="https://github.com/ynput/OpenPype/pull/5542">#5542</a></summary>

Fixed the GitHub url in README.md.

___

</details>

<details>
<summary>Chore: Create plugin does not expect system settings <a href="https://github.com/ynput/OpenPype/pull/5553">#5553</a></summary>

System settings are no longer passed to create plugin initialization (and `apply_settings`).

___

</details>

<details>
<summary>Chore: Allow custom Qt scale factor rounding policy <a href="https://github.com/ynput/OpenPype/pull/5555">#5555</a></summary>

Do not force the `PassThrough` rounding policy if a different policy is defined via an environment variable.

___

</details>

<details>
<summary>Houdini: Fix outdated containers pop-up on opening last workfile on launch <a href="https://github.com/ynput/OpenPype/pull/5567">#5567</a></summary>

Fix Houdini not showing the outdated containers pop-up on scene open when launching with the last workfile argument.

___

</details>

<details>
<summary>Houdini: Improve errors e.g. raise PublishValidationError or cosmetics <a href="https://github.com/ynput/OpenPype/pull/5568">#5568</a></summary>

Improve errors, e.g. raise PublishValidationError, and cosmetics. This also fixes the Increment Current File plug-in, which was previously broken due to an invalid import.

___

</details>

<details>
<summary>Fusion: Code updates <a href="https://github.com/ynput/OpenPype/pull/5569">#5569</a></summary>

Updated Fusion code which contained obsolete code. Removed the `switch_ui.py` script from Fusion along with the related script in scripts.

___

</details>

### **🐛 Bug fixes**
<details>
<summary>Maya: Validate Shape Zero fix repair action + provide informational artist-facing report <a href="https://github.com/ynput/OpenPype/pull/5524">#5524</a></summary>

Refactor to PublishValidationError to allow the RepairAction to work + provide an informational report message.

___

</details>

<details>
<summary>Maya: Fix attribute definitions for `CreateYetiCache` <a href="https://github.com/ynput/OpenPype/pull/5574">#5574</a></summary>

Fix attribute definitions for `CreateYetiCache`.

___

</details>

<details>
<summary>Max: Optional Renderable Camera Validator for Render Instance <a href="https://github.com/ynput/OpenPype/pull/5286">#5286</a></summary>

Optional validation to check that the renderable camera is set up correctly for Deadline submission. If it is not set up correctly, it won't pass the validation and the user can perform repair actions.

___

</details>

<details>
<summary>Max: Adding custom modifiers back to the loaded objects <a href="https://github.com/ynput/OpenPype/pull/5378">#5378</a></summary>

The custom parameters OpenpypeData don't show in the loaded container when it is loaded through the loader.

___

</details>

<details>
<summary>Houdini: Use default_variant to Houdini Node TAB Creator <a href="https://github.com/ynput/OpenPype/pull/5421">#5421</a></summary>

Use the default variant of the creator plugins in the interactive creator from the TAB node search instead of hard-coding it to `Main`.

___

</details>

<details>
<summary>Nuke: adding inherited colorspace from instance <a href="https://github.com/ynput/OpenPype/pull/5454">#5454</a></summary>

Thumbnails are extracted with the inherited colorspace collected from the rendering write node.

___

</details>

<details>
<summary>Add kitsu credentials to deadline publish job <a href="https://github.com/ynput/OpenPype/pull/5455">#5455</a></summary>

This PR hopefully fixes issue #5440.

___

</details>

<details>
<summary>AYON: Fill entities during editorial <a href="https://github.com/ynput/OpenPype/pull/5475">#5475</a></summary>

Fill entities and update template data on instances during extraction of the AYON hierarchy.

___

</details>

<details>
<summary>Ftrack: Fix version 0 when integrating to Ftrack - OP-6595 <a href="https://github.com/ynput/OpenPype/pull/5477">#5477</a></summary>

Fix publishing version 0 to Ftrack.

___

</details>

<details>
<summary>OCIO: windows unc path support in Nuke and Hiero <a href="https://github.com/ynput/OpenPype/pull/5479">#5479</a></summary>

Hiero and Nuke do not support Windows UNC path formatting in the OCIO environment variable.

___

</details>
<details>
<summary>Deadline: Added super call to init <a href="https://github.com/ynput/OpenPype/pull/5480">#5480</a></summary>

Deadline 10.3 requires plugins inheriting from DeadlinePlugin to call the parent's `__init__` explicitly.

___

</details>

<details>
<summary>Nuke: fixing thumbnail and monitor out root attributes <a href="https://github.com/ynput/OpenPype/pull/5483">#5483</a></summary>

The Nuke root colorspace settings schema for Thumbnail and Monitor Out gradually changed between versions 12, 13 and 14, and we needed to address those changes individually per version.

___

</details>

<details>
<summary>Nuke: fixing missing `instance_id` error <a href="https://github.com/ynput/OpenPype/pull/5484">#5484</a></summary>

Workfiles with instances created in the old publisher workflow were raising an error during the conversion method, since they were missing the `instance_id` key introduced in the new publisher workflow.

___

</details>

<details>
<summary>Nuke: existing frames validator is repairing render target <a href="https://github.com/ynput/OpenPype/pull/5486">#5486</a></summary>

Nuke now correctly repairs the render target after the existing frames validator finds missing frames and the repair action is used.

___

</details>

<details>
<summary>added UE to extract burnins families <a href="https://github.com/ynput/OpenPype/pull/5487">#5487</a></summary>

This PR fixes missing burnins in reviewables when rendering from UE.

___

</details>

<details>
<summary>Harmony: refresh code for current Deadline <a href="https://github.com/ynput/OpenPype/pull/5493">#5493</a></summary>

- Added support in the Deadline plug-in for new versions of Harmony, in particular versions 21 and 22.
- Remove review=False flag on render instance
- Add farm=True flag on render instance
- Fix is_in_tests function call in the Harmony Deadline submission plugin
- Force the HarmonyOpenPype.py Deadline Python plug-in to py3
- Fix cosmetics/hound in the HarmonyOpenPype.py Deadline Python plug-in

___

</details>

<details>
<summary>Publisher: Fix multiselection value <a href="https://github.com/ynput/OpenPype/pull/5505">#5505</a></summary>

Selecting multiple instances in the Publisher no longer causes all instances to change all publish attributes to the same value.

___

</details>

<details>
<summary>Publisher: Avoid warnings on thumbnails if source image also has alpha channel <a href="https://github.com/ynput/OpenPype/pull/5510">#5510</a></summary>

Avoids the following warning from `ExtractThumbnailFromSource`:

```
// pyblish.ExtractThumbnailFromSource : oiiotool WARNING: -o : Can't save 4 channels to jpeg... saving only R,G,B
```

___

</details>
<details>
<summary>Update ayon-python-api <a href="https://github.com/ynput/OpenPype/pull/5512">#5512</a></summary>

Update the ayon python api and related callbacks.

___

</details>

<details>
<summary>Max: Fixing the bug of falling back to use workfile for Arnold or any renderers except Redshift <a href="https://github.com/ynput/OpenPype/pull/5520">#5520</a></summary>

Fix the bug of falling back to using the workfile for Arnold.

___

</details>

<details>
<summary>General: Fix Validate Publish Dir Validator <a href="https://github.com/ynput/OpenPype/pull/5534">#5534</a></summary>

A nonsensical "family" key was used instead of the real value (such as 'render' etc.), which would result in wrong translation of intermediate family names. Updated the docstring.

___

</details>

<details>
<summary>have the addons loading respect a custom AYON_ADDONS_DIR <a href="https://github.com/ynput/OpenPype/pull/5539">#5539</a></summary>

When using a custom AYON_ADDONS_DIR environment variable, the launcher uses that variable correctly and downloads and extracts addons there; however, when running, AYON does not respect this environment variable.

___

</details>

<details>
<summary>Deadline: files on representation cannot be single item list <a href="https://github.com/ynput/OpenPype/pull/5545">#5545</a></summary>

Further logic expects that single-item files will only be a 'string', not a 'list' (e.g. `repre["files"] = "abc.exr"`, not `repre["files"] = ["abc.exr"]`). This would cause an issue in ExtractReview later. It could happen if Deadline rendered a single frame file with a different frame value.

___

</details>
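The normalization described in #5545 can be sketched as follows; an illustrative helper, not the actual plug-in code:

```python
def normalize_representation_files(repre):
    """Collapse a single-item 'files' list to a plain string, in place.

    Later logic (e.g. ExtractReview) expects a string for single-file
    representations and a list only for frame sequences.
    """
    files = repre.get("files")
    if isinstance(files, list) and len(files) == 1:
        repre["files"] = files[0]
    return repre
```

Multi-item sequences are left untouched, so only the degenerate single-frame case is rewritten.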
<details>
<summary>Webpublisher: better encode list values for click <a href="https://github.com/ynput/OpenPype/pull/5546">#5546</a></summary>

Targets could be a list; the original implementation pushed them as separate items, but they must be added as `--targets webpublish --targets filepublish`. `wepublish_routes` handles triggering from the UI; changes in `publish_functions` handle triggering from the command line (for tests, api access).

___

</details>
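With multi-value options, each value must be passed behind its own repeated flag. A stdlib `argparse` illustration of the same encoding pattern (click's `multiple=True` options behave analogously; the helper name is hypothetical):

```python
import argparse


def build_targets_args(targets):
    """Encode a list of targets as repeated --targets flags."""
    args = []
    for target in targets:
        args.extend(["--targets", target])
    return args


parser = argparse.ArgumentParser()
# 'append' collects one value per occurrence of the flag
parser.add_argument("--targets", action="append", default=[])

cli_args = build_targets_args(["webpublish", "filepublish"])
parsed = parser.parse_args(cli_args)
```

Pushing the values as bare positional items (`--targets webpublish filepublish`) would not parse; each value needs its own flag.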
|
||||
|
||||
|
||||
<details>
|
||||
<summary>Houdini: Introduce imprint function for correct version in hda loader <a href="https://github.com/ynput/OpenPype/pull/5548">#5548</a></summary>
|
||||
|
||||
Resolve #5478
|
||||
|
||||
|
||||
___
|
||||
|
||||
</details>
|
||||
|
||||
|
||||
<details>
|
||||
<summary>AYON: Fill entities during editorial (2) <a href="https://github.com/ynput/OpenPype/pull/5549">#5549</a></summary>
|
||||
|
||||
Fix changes made in https://github.com/ynput/OpenPype/pull/5475.
|
||||
|
||||
|
||||
___
|
||||
|
||||
</details>
<details>
<summary>Max: OP Data updates in Loaders <a href="https://github.com/ynput/OpenPype/pull/5563">#5563</a></summary>

Fixes the bug where loaders were not able to load objects when iterating keys and values of the dict. Max prefers a plain list over a list nested in a dict.

___

</details>

<details>
<summary>Create Plugins: Better check of overridden '__init__' method <a href="https://github.com/ynput/OpenPype/pull/5571">#5571</a></summary>

Create plugins no longer log a warning message for each create plugin that was caused by a wrong `__init__` method check.

___

</details>
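The usual way to detect an overridden `__init__` is an identity check against the base class attribute rather than a mere presence check. A simplified sketch of that idea (class names are illustrative, not OpenPype's actual ones):

```python
class BaseCreator:
    def __init__(self, name):
        self.name = name


class PlainCreator(BaseCreator):
    pass


class CustomInitCreator(BaseCreator):
    def __init__(self, name):  # overrides the base __init__
        super().__init__(name)


def has_overridden_init(cls):
    # Compare the function object itself: every class "has" an
    # __init__ attribute, but only overriding classes rebind it.
    return cls.__init__ is not BaseCreator.__init__


print(has_overridden_init(PlainCreator))       # -> False
print(has_overridden_init(CustomInitCreator))  # -> True
```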
### **Merged pull requests**


<details>
<summary>Tests: fix unit tests <a href="https://github.com/ynput/OpenPype/pull/5533">#5533</a></summary>

Fixed failing tests. Updated Unreal's validator to match the removed general one, which had a couple of issues fixed.

___

</details>
## [3.16.4](https://github.com/ynput/OpenPype/tree/3.16.4)
@@ -663,10 +663,13 @@ def convert_v4_representation_to_v3(representation):
    if isinstance(context, six.string_types):
        context = json.loads(context)

    if "folder" in context:
        _c_folder = context.pop("folder")
    if "asset" not in context and "folder" in context:
        _c_folder = context["folder"]
        context["asset"] = _c_folder["name"]

    elif "asset" in context and "folder" not in context:
        context["folder"] = {"name": context["asset"]}

    if "product" in context:
        _c_product = context.pop("product")
        context["family"] = _c_product["type"]

@@ -959,9 +962,11 @@ def convert_create_representation_to_v4(representation, con):
    converted_representation["files"] = new_files

    context = representation["context"]
    context["folder"] = {
        "name": context.pop("asset", None)
    }
    if "folder" not in context:
        context["folder"] = {
            "name": context.get("asset")
        }

    context["product"] = {
        "type": context.pop("family", None),
        "name": context.pop("subset", None),

@@ -1285,7 +1290,7 @@ def convert_update_representation_to_v4(

    if "context" in update_data:
        context = update_data["context"]
        if "asset" in context:
        if "folder" not in context and "asset" in context:
            context["folder"] = {"name": context.pop("asset")}

        if "family" in context or "subset" in context:
Binary file not shown.
@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.26"
<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.27"
	ExtensionBundleName="com.openpype.AE.panel" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
	<ExtensionList>
		<Extension Id="com.openpype.AE.panel" Version="1.0" />

@@ -10,22 +10,22 @@
	<!-- Photoshop -->
	<!--<Host Name="PHXS" Version="[14.0,19.0]" /> -->
	<!-- <Host Name="PHSP" Version="[14.0,19.0]" /> -->

	<!-- Illustrator -->
	<!-- <Host Name="ILST" Version="[18.0,22.0]" /> -->

	<!-- InDesign -->
	<!-- <Host Name="IDSN" Version="[10.0,13.0]" /> -->
	<!-- <Host Name="IDSN" Version="[10.0,13.0]" /> -->

	<!-- Premiere -->
	<!-- <Host Name="PPRO" Version="[8.0,12.0]" /> -->

	<!-- AfterEffects -->
	<Host Name="AEFT" Version="[13.0,99.0]" />

	<!-- PRELUDE -->
	<!-- PRELUDE -->
	<!-- <Host Name="PRLD" Version="[3.0,7.0]" /> -->

	<!-- FLASH Pro -->
	<!-- <Host Name="FLPR" Version="[14.0,18.0]" /> -->

@@ -63,7 +63,7 @@
	<Height>550</Height>
	<Width>400</Width>
	</MaxSize>-->

	</Geometry>
	<Icons>
		<Icon Type="Normal">./icons/iconNormal.png</Icon>

@@ -71,9 +71,9 @@
		<Icon Type="Disabled">./icons/iconDisabled.png</Icon>
		<Icon Type="DarkNormal">./icons/iconDarkNormal.png</Icon>
		<Icon Type="DarkRollOver">./icons/iconDarkRollover.png</Icon>
	</Icons>
	</Icons>
	</UI>
	</DispatchInfo>
	</Extension>
</DispatchInfoList>
</ExtensionManifest>
</ExtensionManifest>
@@ -215,6 +215,8 @@ function _getItem(item, comps, folders, footages){
 * Refactor
 */
var item_type = '';
var path = '';
var containing_comps = [];
if (item instanceof FolderItem){
    item_type = 'folder';
    if (!folders){

@@ -222,10 +224,18 @@ function _getItem(item, comps, folders, footages){
    }
}
if (item instanceof FootageItem){
    item_type = 'footage';
    if (!footages){
        return "{}";
    }
    item_type = 'footage';
    if (item.file){
        path = item.file.fsName;
    }
    if (item.usedIn){
        for (j = 0; j < item.usedIn.length; ++j){
            containing_comps.push(item.usedIn[j].id);
        }
    }
}
if (item instanceof CompItem){
    item_type = 'comp';

@@ -236,7 +246,9 @@ function _getItem(item, comps, folders, footages){

var item = {"name": item.name,
            "id": item.id,
            "type": item_type};
            "type": item_type,
            "path": path,
            "containing_comps": containing_comps};
return JSON.stringify(item);
}
@@ -37,6 +37,9 @@ class AEItem(object):
    height = attr.ib(default=None)
    is_placeholder = attr.ib(default=False)
    uuid = attr.ib(default=False)
    path = attr.ib(default=False)  # path to FootageItem to validate
    # list of composition Footage is in
    containing_comps = attr.ib(factory=list)


class AfterEffectsServerStub():

@@ -704,7 +707,10 @@ class AfterEffectsServerStub():
                d.get("instance_id"),
                d.get("width"),
                d.get("height"),
                d.get("is_placeholder"))
                d.get("is_placeholder"),
                d.get("uuid"),
                d.get("path"),
                d.get("containing_comps"),)

            ret.append(item)
        return ret
@@ -31,13 +31,8 @@ class FileLoader(api.AfterEffectsLoader):

        path = self.filepath_from_context(context)

        repr_cont = context["representation"]["context"]
        if "#" not in path:
            frame = repr_cont.get("frame")
            if frame:
                padding = len(frame)
                path = path.replace(frame, "#" * padding)
                import_options['sequence'] = True
        if len(context["representation"]["files"]) > 1:
            import_options['sequence'] = True

        if not path:
            repr_id = context["representation"]["_id"]
@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Footage item missing</title>
        <description>
## Footage item missing

FootageItem `{name}` contains missing `{path}`. Render will not produce any frames and AE will stop reacting to any integration.
### How to repair?

Remove `{name}` or provide the missing file.
        </description>
    </error>
</root>
@@ -0,0 +1,49 @@
# -*- coding: utf-8 -*-
"""Validate presence of footage items in composition

Requires:
"""
import os

import pyblish.api

from openpype.pipeline import (
    PublishXmlValidationError
)
from openpype.hosts.aftereffects.api import get_stub


class ValidateFootageItems(pyblish.api.InstancePlugin):
    """
    Validates if FootageItems contained in composition exist.

    AE fails silently and doesn't render anything if footage item file is
    missing. This will result in nonresponsiveness of AE UI as it expects
    reaction from user, but it will not provide dialog.
    This validator tries to check existence of the files.
    It will not protect from missing frame in multiframes though
    (as AE api doesn't provide this information and it cannot be told how many
    frames should be there easily). Missing frame is replaced by placeholder.
    """

    order = pyblish.api.ValidatorOrder
    label = "Validate Footage Items"
    families = ["render.farm", "render.local", "render"]
    hosts = ["aftereffects"]
    optional = True

    def process(self, instance):
        """Plugin entry point."""

        comp_id = instance.data["comp_id"]
        for footage_item in get_stub().get_items(comps=False, folders=False,
                                                 footages=True):
            self.log.info(footage_item)
            if comp_id not in footage_item.containing_comps:
                continue

            path = footage_item.path
            if path and not os.path.exists(path):
                msg = f"File {path} not found."
                formatting = {"name": footage_item.name, "path": path}
                raise PublishXmlValidationError(self, msg,
                                                formatting_data=formatting)
@@ -119,7 +119,7 @@ class BlendLoader(plugin.AssetLoader):
            context: Full parenthood of representation to load
            options: Additional settings dictionary
        """
        libpath = self.fname
        libpath = self.filepath_from_context(context)
        asset = context["asset"]["name"]
        subset = context["subset"]["name"]

@@ -100,7 +100,7 @@ class AbcCameraLoader(plugin.AssetLoader):
        asset_group = bpy.data.objects.new(group_name, object_data=None)
        avalon_container.objects.link(asset_group)

        objects = self._process(libpath, asset_group, group_name)
        self._process(libpath, asset_group, group_name)

        objects = []
        nodes = list(asset_group.children)
@@ -103,7 +103,7 @@ class FbxCameraLoader(plugin.AssetLoader):
        asset_group = bpy.data.objects.new(group_name, object_data=None)
        avalon_container.objects.link(asset_group)

        objects = self._process(libpath, asset_group, group_name)
        self._process(libpath, asset_group, group_name)

        objects = []
        nodes = list(asset_group.children)
@@ -39,15 +39,11 @@ class CollectReview(pyblish.api.InstancePlugin):
        ]

        if not instance.data.get("remove"):

        task = instance.context.data["task"]

        # Store focal length in `burninDataMembers`
        burninData = instance.data.setdefault("burninDataMembers", {})
        burninData["focalLength"] = focal_length

        instance.data.update({
            "subset": f"{task}Review",
            "review_camera": camera,
            "frameStart": instance.context.data["frameStart"],
            "frameEnd": instance.context.data["frameEnd"],
@@ -21,8 +21,6 @@ class ExtractABC(publish.Extractor):
        filename = f"{instance.name}.abc"
        filepath = os.path.join(stagingdir, filename)

        context = bpy.context

        # Perform extraction
        self.log.info("Performing extraction..")

@@ -20,8 +20,6 @@ class ExtractAnimationABC(publish.Extractor):
        filename = f"{instance.name}.abc"
        filepath = os.path.join(stagingdir, filename)

        context = bpy.context

        # Perform extraction
        self.log.info("Performing extraction..")
@@ -21,16 +21,11 @@ class ExtractCameraABC(publish.Extractor):
        filename = f"{instance.name}.abc"
        filepath = os.path.join(stagingdir, filename)

        context = bpy.context

        # Perform extraction
        self.log.info("Performing extraction..")

        plugin.deselect_all()

        selected = []
        active = None

        asset_group = None
        for obj in instance:
            if obj.get(AVALON_PROPERTY):
@@ -48,7 +48,6 @@ class LoadClip(opfapi.ClipLoader):
        self.fpd = fproject.current_workspace.desktop

        # load clip to timeline and get main variables
        namespace = namespace
        version = context['version']
        version_data = version.get("data", {})
        version_name = version.get("name", None)

@@ -45,7 +45,6 @@ class LoadClipBatch(opfapi.ClipLoader):
        self.batch = options.get("batch") or flame.batch

        # load clip to timeline and get main variables
        namespace = namespace
        version = context['version']
        version_data = version.get("data", {})
        version_name = version.get("name", None)
@@ -325,7 +325,6 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
    def _create_shot_instance(self, context, clip_name, **data):
        master_layer = data.get("heroTrack")
        hierarchy_data = data.get("hierarchyData")
        asset = data.get("asset")

        if not master_layer:
            return

@@ -82,7 +82,6 @@ class TemplateLoader(load.LoaderPlugin):
        node = harmony.find_node_by_name(node_name, "GROUP")
        self_name = self.__class__.__name__

        update_and_replace = False
        if is_representation_from_latest(representation):
            self._set_green(node)
        else:
@@ -317,20 +317,6 @@ class Spacer(QtWidgets.QWidget):
        self.setLayout(layout)


def get_reference_node_parents(ref):
    """Return all parent reference nodes of reference node

    Args:
        ref (str): reference node.

    Returns:
        list: The upstream parent reference nodes.

    """
    parents = []
    return parents


class SequenceLoader(LoaderPlugin):
    """A basic SequenceLoader for Resolve
@@ -43,7 +43,6 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
            if review and review_track_index == _track_index:
                continue
            for sitem in sub_track_items:
                effect = None
                # make sure this subtrack item is relative of track item
                if ((track_item not in sitem.linkedItems())
                        and (len(sitem.linkedItems()) > 0)):

@@ -53,7 +52,6 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
                    continue

                effect = self.add_effect(_track_index, sitem)

                if effect:
                    effects.update(effect)
@@ -1,7 +1,7 @@
import attr
import hou
from openpype.hosts.houdini.api.lib import get_color_management_preferences

from openpype.pipeline.colorspace import get_display_view_colorspace_name

@attr.s
class LayerMetadata(object):

@@ -54,3 +54,16 @@ class ARenderProduct(object):
            )
        ]
        return colorspace_data


def get_default_display_view_colorspace():
    """Returns the colorspace attribute of the default (display, view) pair.

    It's used for 'ociocolorspace' parm in OpenGL Node."""

    prefs = get_color_management_preferences()
    return get_display_view_colorspace_name(
        config_path=prefs["config"],
        display=prefs["display"],
        view=prefs["view"]
    )
@@ -3,6 +3,9 @@
from openpype.hosts.houdini.api import plugin
from openpype.lib import EnumDef, BoolDef, NumberDef

import os
import hou


class CreateReview(plugin.HoudiniCreator):
    """Review with OpenGL ROP"""

@@ -13,7 +16,6 @@ class CreateReview(plugin.HoudiniCreator):
    icon = "video-camera"

    def create(self, subset_name, instance_data, pre_create_data):
        import hou

        instance_data.pop("active", None)
        instance_data.update({"node_type": "opengl"})

@@ -82,6 +84,11 @@ class CreateReview(plugin.HoudiniCreator):

        instance_node.setParms(parms)

        # Set OCIO Colorspace to the default output colorspace
        # if there's OCIO
        if os.getenv("OCIO"):
            self.set_colorcorrect_to_default_view_space(instance_node)

        to_lock = ["id", "family"]

        self.lock_parameters(instance_node, to_lock)

@@ -123,3 +130,23 @@ class CreateReview(plugin.HoudiniCreator):
                      minimum=0.0001,
                      decimals=3)
        ]

    def set_colorcorrect_to_default_view_space(self,
                                               instance_node):
        """Set ociocolorspace to the default output space."""
        from openpype.hosts.houdini.api.colorspace import get_default_display_view_colorspace  # noqa

        # set Color Correction parameter to OpenColorIO
        instance_node.setParms({"colorcorrect": 2})

        # Get default view space for ociocolorspace parm.
        default_view_space = get_default_display_view_colorspace()
        instance_node.setParms(
            {"ociocolorspace": default_view_space}
        )

        self.log.debug(
            "'OCIO Colorspace' parm on '{}' has been set to "
            "the default view color space '{}'"
            .format(instance_node, default_view_space)
        )
@@ -34,7 +34,6 @@ class BgeoLoader(load.LoaderPlugin):

        # Create a new geo node
        container = obj.createNode("geo", node_name=node_name)
        is_sequence = bool(context["representation"]["context"].get("frame"))

        # Remove the file node, it only loads static meshes
        # Houdini 17 has removed the file node from the geo node

@@ -80,14 +80,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
    def get_beauty_render_product(self, prefix, suffix="<reName>"):
        """Return the beauty output filename if render element enabled
        """
        # Remove aov suffix from the product: `prefix.aov_suffix` -> `prefix`
        aov_parm = ".{}".format(suffix)
        beauty_product = None
        if aov_parm in prefix:
            beauty_product = prefix.replace(aov_parm, "")
        else:
            beauty_product = prefix

        return beauty_product
        return prefix.replace(aov_parm, "")

    def get_render_element_name(self, node, prefix, suffix="<reName>"):
        """Return the output filename using the AOV prefix and suffix
@@ -0,0 +1,90 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import (
    PublishValidationError,
    OptionalPyblishPluginMixin
)
from openpype.pipeline.publish import RepairAction
from openpype.hosts.houdini.api.action import SelectROPAction

import os
import hou


class SetDefaultViewSpaceAction(RepairAction):
    label = "Set default view colorspace"
    icon = "mdi.monitor"


class ValidateReviewColorspace(pyblish.api.InstancePlugin,
                               OptionalPyblishPluginMixin):
    """Validate Review Colorspace parameters.

    It checks if 'OCIO Colorspace' parameter was set to valid value.
    """

    order = pyblish.api.ValidatorOrder + 0.1
    families = ["review"]
    hosts = ["houdini"]
    label = "Validate Review Colorspace"
    actions = [SetDefaultViewSpaceAction, SelectROPAction]

    optional = True

    def process(self, instance):

        if not self.is_active(instance.data):
            return

        if os.getenv("OCIO") is None:
            self.log.debug(
                "Using Houdini's Default Color Management, "
                " skipping check.."
            )
            return

        rop_node = hou.node(instance.data["instance_node"])
        if rop_node.evalParm("colorcorrect") != 2:
            # any colorspace settings other than default requires
            # 'Color Correct' parm to be set to 'OpenColorIO'
            raise PublishValidationError(
                "'Color Correction' parm on '{}' ROP must be set to"
                " 'OpenColorIO'".format(rop_node.path())
            )

        if rop_node.evalParm("ociocolorspace") not in \
                hou.Color.ocio_spaces():

            raise PublishValidationError(
                "Invalid value: Colorspace name doesn't exist.\n"
                "Check 'OCIO Colorspace' parameter on '{}' ROP"
                .format(rop_node.path())
            )

    @classmethod
    def repair(cls, instance):
        """Set Default View Space Action.

        It is a helper action more than a repair action,
        used to set colorspace on opengl node to the default view.
        """
        from openpype.hosts.houdini.api.colorspace import get_default_display_view_colorspace  # noqa

        rop_node = hou.node(instance.data["instance_node"])

        if rop_node.evalParm("colorcorrect") != 2:
            rop_node.setParms({"colorcorrect": 2})
            cls.log.debug(
                "'Color Correction' parm on '{}' has been set to"
                " 'OpenColorIO'".format(rop_node.path())
            )

        # Get default view colorspace name
        default_view_space = get_default_display_view_colorspace()

        rop_node.setParms({"ociocolorspace": default_view_space})
        cls.log.info(
            "'OCIO Colorspace' parm on '{}' has been set to "
            "the default view color space '{}'"
            .format(rop_node, default_view_space)
        )
@@ -37,13 +37,10 @@ class RenderSettings(object):
    def set_render_camera(self, selection):
        for sel in selection:
            # to avoid Attribute Error from pymxs wrapper
            found = False
            if rt.classOf(sel) in rt.Camera.classes:
                found = True
                rt.viewport.setCamera(sel)
                break
        if not found:
            raise RuntimeError("Active Camera not found")
                return
        raise RuntimeError("Active Camera not found")

    def render_output(self, container):
        folder = rt.maxFilePath
@@ -14,7 +14,6 @@ class CreateRender(plugin.MaxCreator):

    def create(self, subset_name, instance_data, pre_create_data):
        from pymxs import runtime as rt
        sel_obj = list(rt.selection)
        file = rt.maxFileName
        filename, _ = os.path.splitext(file)
        instance_data["AssetName"] = filename

@@ -30,7 +30,6 @@ class CollectRender(pyblish.api.InstancePlugin):
        asset = get_current_asset_name()

        files_by_aov = RenderProducts().get_beauty(instance.name)
        folder = folder.replace("\\", "/")
        aovs = RenderProducts().get_aovs(instance.name)
        files_by_aov.update(aovs)
@@ -22,8 +22,6 @@ class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
        start = float(instance.data.get("frameStartHandle", 1))
        end = float(instance.data.get("frameEndHandle", 1))

        container = instance.data["instance_node"]

        self.log.info("Extracting Camera ...")

        stagingdir = self.staging_dir(instance)

@@ -19,9 +19,8 @@ class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
    def process(self, instance):
        if not self.is_active(instance.data):
            return
        container = instance.data["instance_node"]

        self.log.info("Extracting Camera ...")
        self.log.debug("Extracting Camera ...")
        stagingdir = self.staging_dir(instance)
        filename = "{name}.fbx".format(**instance.data)

@@ -18,10 +18,9 @@ class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin):
    def process(self, instance):
        if not self.is_active(instance.data):
            return
        container = instance.data["instance_node"]

        # publish the raw scene for camera
        self.log.info("Extracting Raw Max Scene ...")
        self.log.debug("Extracting Raw Max Scene ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.max".format(**instance.data)

@@ -20,9 +20,7 @@ class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin):
        if not self.is_active(instance.data):
            return

        container = instance.data["instance_node"]

        self.log.info("Extracting Geometry ...")
        self.log.debug("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.abc".format(**instance.data)

@@ -20,10 +20,7 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
        if not self.is_active(instance.data):
            return

        container = instance.data["instance_node"]


        self.log.info("Extracting Geometry ...")
        self.log.debug("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.fbx".format(**instance.data)

@@ -20,9 +20,7 @@ class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin):
        if not self.is_active(instance.data):
            return

        container = instance.data["instance_node"]

        self.log.info("Extracting Geometry ...")
        self.log.debug("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.obj".format(**instance.data)

@@ -54,8 +54,6 @@ class ExtractAlembic(publish.Extractor):
        start = float(instance.data.get("frameStartHandle", 1))
        end = float(instance.data.get("frameEndHandle", 1))

        container = instance.data["instance_node"]

        self.log.debug("Extracting pointcache ...")

        parent_dir = self.staging_dir(instance)

@@ -16,11 +16,10 @@ class ExtractRedshiftProxy(publish.Extractor):
    families = ["redshiftproxy"]

    def process(self, instance):
        container = instance.data["instance_node"]
        start = int(instance.context.data.get("frameStart"))
        end = int(instance.context.data.get("frameEnd"))

        self.log.info("Extracting Redshift Proxy...")
        self.log.debug("Extracting Redshift Proxy...")
        stagingdir = self.staging_dir(instance)
        rs_filename = "{name}.rs".format(**instance.data)
        rs_filepath = os.path.join(stagingdir, rs_filename)
@@ -6,11 +6,6 @@ from openpype.pipeline import (
from pymxs import runtime as rt
from openpype.hosts.max.api.lib import reset_scene_resolution

from openpype.pipeline.context_tools import (
    get_current_project_asset,
    get_current_project
)


class ValidateResolutionSetting(pyblish.api.InstancePlugin,
                                OptionalPyblishPluginMixin):

@@ -43,22 +38,16 @@ class ValidateResolutionSetting(pyblish.api.InstancePlugin,
                "on asset or shot.")

    def get_db_resolution(self, instance):
        data = ["data.resolutionWidth", "data.resolutionHeight"]
        project_resolution = get_current_project(fields=data)
        project_resolution_data = project_resolution["data"]
        asset_resolution = get_current_project_asset(fields=data)
        asset_resolution_data = asset_resolution["data"]
        # Set project resolution
        project_width = int(
            project_resolution_data.get("resolutionWidth", 1920))
        project_height = int(
            project_resolution_data.get("resolutionHeight", 1080))
        width = int(
            asset_resolution_data.get("resolutionWidth", project_width))
        height = int(
            asset_resolution_data.get("resolutionHeight", project_height))
        asset_doc = instance.data["assetEntity"]
        project_doc = instance.context.data["projectEntity"]
        for data in [asset_doc["data"], project_doc["data"]]:
            if "resolutionWidth" in data and "resolutionHeight" in data:
                width = data["resolutionWidth"]
                height = data["resolutionHeight"]
                return int(width), int(height)

        return width, height
        # Defaults if not found in asset document or project document
        return 1920, 1080

    @classmethod
    def repair(cls, instance):
@@ -177,12 +177,7 @@ class RenderSettings(object):
        # list all the aovs
        all_rs_aovs = cmds.ls(type='RedshiftAOV')
        for rs_aov in redshift_aovs:
            rs_layername = rs_aov
            if " " in rs_aov:
                rs_renderlayer = rs_aov.replace(" ", "")
                rs_layername = "rsAov_{}".format(rs_renderlayer)
            else:
                rs_layername = "rsAov_{}".format(rs_aov)
            rs_layername = "rsAov_{}".format(rs_aov.replace(" ", ""))
            if rs_layername in all_rs_aovs:
                continue
            cmds.rsCreateAov(type=rs_aov)

@@ -317,7 +312,7 @@ class RenderSettings(object):
        separators = [cmds.menuItem(i, query=True, label=True) for i in items]  # noqa: E501
        try:
            sep_idx = separators.index(aov_separator)
        except ValueError as e:
        except ValueError:
            six.reraise(
                CreatorError,
                CreatorError(
@@ -683,7 +683,6 @@ class ReferenceLoader(Loader):
            loaded_containers.append(container)
            self._organize_containers(nodes, container)
            c += 1
            namespace = None

        return loaded_containers

@@ -12,7 +12,6 @@ class ImportReference(InventoryAction):
    color = "#d8d8d8"

    def process(self, containers):
        references = cmds.ls(type="reference")
        for container in containers:
            if container["loader"] != "ReferenceLoader":
                print("Not a reference, skipping")
@@ -43,8 +43,6 @@ class MultiverseUsdLoader(load.LoaderPlugin):
        import multiverse

        # Create the shape
        shape = None
        transform = None
        with maintained_selection():
            cmds.namespace(addNamespace=namespace)
            with namespaced(namespace, new=False):

@@ -205,7 +205,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
            cmds.setAttr("{}.selectHandleZ".format(group_name), cz)

        if family == "rig":
            self._post_process_rig(name, namespace, context, options)
            self._post_process_rig(namespace, context, options)
        else:
            if "translate" in options:
                if not attach_to_root and new_nodes:

@@ -229,7 +229,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
        members = get_container_members(container)
        self._lock_camera_transforms(members)

    def _post_process_rig(self, name, namespace, context, options):
    def _post_process_rig(self, namespace, context, options):

        nodes = self[:]
        create_rig_animation_instance(

@@ -53,8 +53,6 @@ class XgenLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
        )

        # Reference xgen. Xgen does not like being referenced in under a group.
        new_nodes = []

        with maintained_selection():
            nodes = cmds.file(
                maya_filepath,
@@ -15,6 +15,16 @@ from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api.pipeline import containerise


# Do not reset these values on update but only apply on first load
# to preserve any potential local overrides
SKIP_UPDATE_ATTRS = {
    "displayOutput",
    "viewportDensity",
    "viewportWidth",
    "viewportLength",
}


def set_attribute(node, attr, value):
    """Wrapper of set attribute which ignores None values"""
    if value is None:

@@ -205,6 +215,8 @@ class YetiCacheLoader(load.LoaderPlugin):
        yeti_node = yeti_nodes[0]

        for attr, value in node_settings["attrs"].items():
            if attr in SKIP_UPDATE_ATTRS:
                continue
            set_attribute(attr, value, yeti_node)

        cmds.setAttr("{}.representation".format(container_node),

@@ -311,7 +323,6 @@ class YetiCacheLoader(load.LoaderPlugin):
        # Update attributes with defaults
        attributes = node_settings["attrs"]
        attributes.update({
            "viewportDensity": 0.1,
            "verbosity": 2,
            "fileMode": 1,

@@ -321,6 +332,9 @@ class YetiCacheLoader(load.LoaderPlugin):
            "visibleInRefractions": True
        })

        if "viewportDensity" not in attributes:
            attributes["viewportDensity"] = 0.1

        # Apply attributes to pgYetiMaya node
        for attr, value in attributes.items():
            set_attribute(attr, value, yeti_node)
@@ -281,7 +281,6 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
                                                 long=True)
            nodes.update(nodes_of_interest)

        files = []
        sets = {}
        instance.data["resources"] = []
        publishMipMap = instance.data["publishMipMap"]
openpype/hosts/maya/plugins/publish/collect_rig_sets.py (new file, 39 lines)
@@ -0,0 +1,39 @@
import pyblish.api
from maya import cmds


class CollectRigSets(pyblish.api.InstancePlugin):
    """Ensure rig contains pipeline-critical content

    Every rig must contain at least two object sets:
        "controls_SET" - Set of all animatable controls
        "out_SET" - Set of all cacheable meshes

    """

    order = pyblish.api.CollectorOrder + 0.05
    label = "Collect Rig Sets"
    hosts = ["maya"]
    families = ["rig"]

    accepted_output = ["mesh", "transform"]
    accepted_controllers = ["transform"]

    def process(self, instance):

        # Find required sets by suffix
        searching = {"controls_SET", "out_SET"}
        found = {}
        for node in cmds.ls(instance, exactType="objectSet"):
            for suffix in searching:
                if node.endswith(suffix):
                    found[suffix] = node
                    searching.remove(suffix)
                    break
            if not searching:
                break

        self.log.debug("Found sets: {}".format(found))
        rig_sets = instance.data.setdefault("rig_sets", {})
        for name, objset in found.items():
            rig_sets[name] = objset
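The collector's suffix search above can be sketched without Maya; `find_sets_by_suffix` below is a hypothetical stand-in that takes plain node-name strings instead of `cmds.ls` results:

```python
def find_sets_by_suffix(object_sets, suffixes=("controls_SET", "out_SET")):
    """Map each required suffix to the first object set ending with it.

    Stops early once every suffix has been matched, like the collector.
    """
    searching = set(suffixes)
    found = {}
    for node in object_sets:
        # iterate over a copy so removal during the scan is safe
        for suffix in list(searching):
            if node.endswith(suffix):
                found[suffix] = node
                searching.remove(suffix)
                break
        if not searching:
            break
    return found
```

The copy in `list(searching)` sidesteps mutating a set while iterating it; the collector gets away without it only because it breaks immediately after the removal.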
@@ -4,12 +4,23 @@ import pyblish.api

from openpype.hosts.maya.api import lib

SETTINGS = {"renderDensity",
            "renderWidth",
            "renderLength",
            "increaseRenderBounds",
            "imageSearchPath",
            "cbId"}
SETTINGS = {
    # Preview
    "displayOutput",
    "colorR", "colorG", "colorB",
    "viewportDensity",
    "viewportWidth",
    "viewportLength",
    # Render attributes
    "renderDensity",
    "renderWidth",
    "renderLength",
    "increaseRenderBounds",
    "imageSearchPath",
    # Pipeline specific
    "cbId"
}


class CollectYetiCache(pyblish.api.InstancePlugin):
@@ -39,10 +50,6 @@ class CollectYetiCache(pyblish.api.InstancePlugin):
        # Get yeti nodes and their transforms
        yeti_shapes = cmds.ls(instance, type="pgYetiMaya")
        for shape in yeti_shapes:
            shape_data = {"transform": None,
                          "name": shape,
                          "cbId": lib.get_id(shape),
                          "attrs": None}

            # Get specific node attributes
            attr_data = {}
@@ -58,9 +65,12 @@ class CollectYetiCache(pyblish.api.InstancePlugin):
            parent = cmds.listRelatives(shape, parent=True)[0]
            transform_data = {"name": parent, "cbId": lib.get_id(parent)}

            # Store collected data
            shape_data["attrs"] = attr_data
            shape_data["transform"] = transform_data
            shape_data = {
                "transform": transform_data,
                "name": shape,
                "cbId": lib.get_id(shape),
                "attrs": attr_data,
            }

            settings["nodes"].append(shape_data)
@@ -30,8 +30,8 @@ class ExtractImportReference(publish.Extractor,
    tmp_format = "_tmp"

    @classmethod
    def apply_settings(cls, project_setting, system_settings):
        cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"]  # noqa
    def apply_settings(cls, project_settings):
        cls.active = project_settings["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"]  # noqa

    def process(self, instance):
        if not self.is_active(instance.data):
@@ -37,7 +37,7 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
    )

    @classmethod
    def apply_settings(cls, project_settings, system_settings):
    def apply_settings(cls, project_settings):
        """Apply project settings to creator"""
        settings = (
            project_settings["maya"]["publish"]["ValidateMayaUnits"]
@@ -2,7 +2,9 @@ import pyblish.api
from maya import cmds

from openpype.pipeline.publish import (
    PublishValidationError, ValidateContentsOrder)
    PublishValidationError,
    ValidateContentsOrder
)


class ValidateRigContents(pyblish.api.InstancePlugin):
@@ -24,31 +26,45 @@ class ValidateRigContents(pyblish.api.InstancePlugin):

    def process(self, instance):

        objectsets = ("controls_SET", "out_SET")
        missing = [obj for obj in objectsets if obj not in instance]
        assert not missing, ("%s is missing %s" % (instance, missing))
        # Find required sets by suffix
        required = ["controls_SET", "out_SET"]
        missing = [
            key for key in required if key not in instance.data["rig_sets"]
        ]
        if missing:
            raise PublishValidationError(
                "%s is missing sets: %s" % (instance, ", ".join(missing))
            )

        controls_set = instance.data["rig_sets"]["controls_SET"]
        out_set = instance.data["rig_sets"]["out_SET"]

        # Ensure there are at least some transforms or dag nodes
        # in the rig instance
        set_members = instance.data['setMembers']
        if not cmds.ls(set_members, type="dagNode", long=True):
            raise PublishValidationError(
                ("No dag nodes in the pointcache instance. "
                 "(Empty instance?)"))
                "No dag nodes in the pointcache instance. "
                "(Empty instance?)"
            )

        # Ensure contents in sets and retrieve long path for all objects
        output_content = cmds.sets("out_SET", query=True) or []
        assert output_content, "Must have members in rig out_SET"
        output_content = cmds.sets(out_set, query=True) or []
        if not output_content:
            raise PublishValidationError("Must have members in rig out_SET")
        output_content = cmds.ls(output_content, long=True)

        controls_content = cmds.sets("controls_SET", query=True) or []
        assert controls_content, "Must have members in rig controls_SET"
        controls_content = cmds.sets(controls_set, query=True) or []
        if not controls_content:
            raise PublishValidationError(
                "Must have members in rig controls_SET"
            )
        controls_content = cmds.ls(controls_content, long=True)

        # Validate members are inside the hierarchy from root node
        root_node = cmds.ls(set_members, assemblies=True)
        hierarchy = cmds.listRelatives(root_node, allDescendents=True,
                                       fullPath=True)
        root_nodes = cmds.ls(set_members, assemblies=True, long=True)
        hierarchy = cmds.listRelatives(root_nodes, allDescendents=True,
                                       fullPath=True) + root_nodes
        hierarchy = set(hierarchy)

        invalid_hierarchy = []
@@ -52,22 +52,30 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError('{} failed, see log '
                                         'information'.format(self.label))
            raise PublishValidationError(
                '{} failed, see log information'.format(self.label)
            )

    @classmethod
    def get_invalid(cls, instance):

        controllers_sets = [i for i in instance if i == "controls_SET"]
        controls = cmds.sets(controllers_sets, query=True)
        assert controls, "Must have 'controls_SET' in rig instance"
        controls_set = instance.data["rig_sets"].get("controls_SET")
        if not controls_set:
            cls.log.error(
                "Must have 'controls_SET' in rig instance"
            )
            return [instance.data["instance_node"]]

        controls = cmds.sets(controls_set, query=True)

        # Ensure all controls are within the top group
        lookup = set(instance[:])
        assert all(control in lookup for control in cmds.ls(controls,
                                                            long=True)), (
            "All controls must be inside the rig's group."
        )
        if not all(control in lookup for control in cmds.ls(controls,
                                                            long=True)):
            cls.log.error(
                "All controls must be inside the rig's group."
            )
            return [controls_set]

        # Validate all controls
        has_connections = list()
@@ -181,9 +189,17 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
    @classmethod
    def repair(cls, instance):

        controls_set = instance.data["rig_sets"].get("controls_SET")
        if not controls_set:
            cls.log.error(
                "Unable to repair because no 'controls_SET' found in rig "
                "instance: {}".format(instance)
            )
            return

        # Use a single undo chunk
        with undo_chunk():
            controls = cmds.sets("controls_SET", query=True)
            controls = cmds.sets(controls_set, query=True)
            for control in controls:

                # Lock visibility
@@ -56,11 +56,11 @@ class ValidateRigControllersArnoldAttributes(pyblish.api.InstancePlugin):
    @classmethod
    def get_invalid(cls, instance):

        controllers_sets = [i for i in instance if i == "controls_SET"]
        if not controllers_sets:
        controls_set = instance.data["rig_sets"].get("controls_SET")
        if not controls_set:
            return []

        controls = cmds.sets(controllers_sets, query=True) or []
        controls = cmds.sets(controls_set, query=True) or []
        if not controls:
            return []
@@ -38,16 +38,19 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin):
        # if a deformer has been created on the shape
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError("Nodes found with mismatching "
                                         "IDs: {0}".format(invalid))
            raise PublishValidationError(
                "Nodes found with mismatching IDs: {0}".format(invalid)
            )

    @classmethod
    def get_invalid(cls, instance):
        """Get all nodes which do not match the criteria"""

        invalid = []
        out_set = instance.data["rig_sets"].get("out_SET")
        if not out_set:
            return []

        out_set = next(x for x in instance if x.endswith("out_SET"))
        invalid = []
        members = cmds.sets(out_set, query=True)
        shapes = cmds.ls(members,
                         dag=True,
@@ -47,7 +47,10 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
        invalid = {}

        if compute:
            out_set = next(x for x in instance if "out_SET" in x)
            out_set = instance.data["rig_sets"].get("out_SET")
            if not out_set:
                instance.data["mismatched_output_ids"] = invalid
                return invalid

            instance_nodes = cmds.sets(out_set, query=True, nodesOnly=True)
            instance_nodes = cmds.ls(instance_nodes, long=True)
@@ -138,8 +138,13 @@ def create_items_from_nodes(nodes):
        asset_doc = asset_docs_by_id.get(asset_id)
        # Skip if asset id is not found
        if not asset_doc:
            log.warning("Id not found in the database, skipping '%s'." % _id)
            log.warning("Nodes: %s" % id_nodes)
            log.warning(
                "Id found on {num} nodes for which no asset is found database,"
                " skipping '{asset_id}'".format(
                    num=len(nodes),
                    asset_id=asset_id
                )
            )
            continue

        # Collect available look subsets for this asset
@@ -90,15 +90,13 @@ class AssetOutliner(QtWidgets.QWidget):
    def get_all_assets(self):
        """Add all items from the current scene"""

        items = []
        with preserve_expanded_rows(self.view):
            with preserve_selection(self.view):
                self.clear()
                nodes = commands.get_all_asset_nodes()
                items = commands.create_items_from_nodes(nodes)
                self.add_items(items)

                return len(items) > 0
        return len(items) > 0

    def get_selected_assets(self):
        """Add all selected items from the current scene"""
@@ -112,8 +112,6 @@ class AlembicCameraLoader(load.LoaderPlugin):
        version_doc = get_version_by_id(project_name, representation["parent"])

        object_name = container['objectName']
        # get corresponding node
        camera_node = nuke.toNode(object_name)

        # get main variables
        version_data = version_doc.get("data", {})
@@ -20,7 +20,6 @@ class ExtractReviewDataLut(publish.Extractor):
    hosts = ["nuke"]

    def process(self, instance):
        families = instance.data["families"]
        self.log.info("Creating staging dir...")
        if "representations" in instance.data:
            staging_dir = instance.data[
@@ -91,8 +91,6 @@ class ExtractThumbnail(publish.Extractor):

        if collection:
            # get path
            fname = os.path.basename(collection.format(
                "{head}{padding}{tail}"))
            fhead = collection.format("{head}")

            thumb_fname = list(collection)[mid_frame]
@@ -4,6 +4,7 @@ from openpype.lib import BoolDef
import openpype.hosts.photoshop.api as api
from openpype.hosts.photoshop.lib import PSAutoCreator
from openpype.pipeline.create import get_subset_name
from openpype.lib import prepare_template_data
from openpype.client import get_asset_by_name
@@ -37,19 +38,14 @@ class AutoImageCreator(PSAutoCreator):
        asset_doc = get_asset_by_name(project_name, asset_name)

        if existing_instance is None:
            subset_name = get_subset_name(
                self.family, self.default_variant, task_name, asset_doc,
            subset_name = self.get_subset_name(
                self.default_variant, task_name, asset_doc,
                project_name, host_name
            )

            publishable_ids = [layer.id for layer in api.stub().get_layers()
                               if layer.visible]
            data = {
                "asset": asset_name,
                "task": task_name,
                # ids are "virtual" layers, won't get grouped as 'members' do
                # same difference in color coded layers in WP
                "ids": publishable_ids
            }

            if not self.active_on_create:

@@ -69,8 +65,8 @@ class AutoImageCreator(PSAutoCreator):
            existing_instance["asset"] != asset_name
            or existing_instance["task"] != task_name
        ):
            subset_name = get_subset_name(
                self.family, self.default_variant, task_name, asset_doc,
            subset_name = self.get_subset_name(
                self.default_variant, task_name, asset_doc,
                project_name, host_name
            )
@@ -118,3 +114,29 @@ class AutoImageCreator(PSAutoCreator):
        Artist might disable this instance from publishing or from creating
        review for it though.
        """

    def get_subset_name(
        self,
        variant,
        task_name,
        asset_doc,
        project_name,
        host_name=None,
        instance=None
    ):
        dynamic_data = prepare_template_data({"layer": "{layer}"})
        subset_name = get_subset_name(
            self.family, variant, task_name, asset_doc,
            project_name, host_name, dynamic_data=dynamic_data
        )
        return self._clean_subset_name(subset_name)

    def _clean_subset_name(self, subset_name):
        """Clean all variants leftover {layer} from subset name."""
        dynamic_data = prepare_template_data({"layer": "{layer}"})
        for value in dynamic_data.values():
            if value in subset_name:
                return (subset_name.replace(value, "")
                        .replace("__", "_")
                        .replace("..", "."))
        return subset_name
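The placeholder cleanup in `_clean_subset_name` can be sketched standalone; the `placeholders` tuple below is an assumption standing in for whatever casing variants `prepare_template_data` actually generates:

```python
def clean_subset_name(subset_name,
                      placeholders=("{layer}", "{Layer}", "{LAYER}")):
    """Strip a leftover layer placeholder and tidy the separators around it.

    Removing the placeholder can leave doubled separators behind
    (e.g. "image__beauty"), so those are collapsed afterwards.
    """
    for value in placeholders:
        if value in subset_name:
            return (subset_name.replace(value, "")
                    .replace("__", "_")
                    .replace("..", "."))
    return subset_name
```

As in the original, only the first matching placeholder triggers the cleanup before returning.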
@@ -94,12 +94,17 @@ class ImageCreator(Creator):
            name = self._clean_highlights(stub, directory)
            layer_names_in_hierarchy.append(name)

        data.update({"subset": subset_name})
        data.update({"members": [str(group.id)]})
        data.update({"layer_name": layer_name})
        data.update({"long_name": "_".join(layer_names_in_hierarchy)})
        data_update = {
            "subset": subset_name,
            "members": [str(group.id)],
            "layer_name": layer_name,
            "long_name": "_".join(layer_names_in_hierarchy)
        }
        data.update(data_update)

        creator_attributes = {"mark_for_review": self.mark_for_review}
        mark_for_review = (pre_create_data.get("mark_for_review") or
                           self.mark_for_review)
        creator_attributes = {"mark_for_review": mark_for_review}
        data.update({"creator_attributes": creator_attributes})

        if not self.active_on_create:
@@ -124,8 +129,6 @@ class ImageCreator(Creator):

            if creator_id == self.identifier:
                instance_data = self._handle_legacy(instance_data)
                layer = api.stub().get_layer(instance_data["members"][0])
                instance_data["layer"] = layer
                instance = CreatedInstance.from_existing(
                    instance_data, self
                )
@@ -16,7 +16,6 @@ class CollectAutoImage(pyblish.api.ContextPlugin):
    targets = ["automated"]

    def process(self, context):
        family = "image"
        for instance in context:
            creator_identifier = instance.data.get("creator_identifier")
            if creator_identifier and creator_identifier == "auto_image":
@@ -0,0 +1,24 @@
import pyblish.api

from openpype.hosts.photoshop import api as photoshop


class CollectAutoImageRefresh(pyblish.api.ContextPlugin):
    """Refreshes auto_image instance with currently visible layers..
    """

    label = "Collect Auto Image Refresh"
    order = pyblish.api.CollectorOrder
    hosts = ["photoshop"]
    order = pyblish.api.CollectorOrder + 0.2

    def process(self, context):
        for instance in context:
            creator_identifier = instance.data.get("creator_identifier")
            if creator_identifier and creator_identifier == "auto_image":
                self.log.debug("Auto image instance found, won't create new")
                # refresh existing auto image instance with current visible
                publishable_ids = [layer.id for layer in photoshop.stub().get_layers()  # noqa
                                   if layer.visible]
                instance.data["ids"] = publishable_ids
                return
openpype/hosts/photoshop/plugins/publish/collect_image.py (new file, 20 lines)
@@ -0,0 +1,20 @@
import pyblish.api

from openpype.hosts.photoshop import api


class CollectImage(pyblish.api.InstancePlugin):
    """Collect layer metadata into a instance.

    Used later in validation
    """
    order = pyblish.api.CollectorOrder + 0.200
    label = 'Collect Image'

    hosts = ["photoshop"]
    families = ["image"]

    def process(self, instance):
        if instance.data.get("members"):
            layer = api.stub().get_layer(instance.data["members"][0])
            instance.data["layer"] = layer
@@ -45,9 +45,11 @@ class ExtractImage(pyblish.api.ContextPlugin):
        # Perform extraction
        files = {}
        ids = set()
        layer = instance.data.get("layer")
        if layer:
            ids.add(layer.id)
        # real layers and groups
        members = instance.data("members")
        if members:
            ids.update(set([int(member) for member in members]))
        # virtual groups collected by color coding or auto_image
        add_ids = instance.data.pop("ids", None)
        if add_ids:
            ids.update(set(add_ids))
@@ -1,4 +1,5 @@
import os
import shutil
from PIL import Image

from openpype.lib import (
@@ -55,6 +56,7 @@ class ExtractReview(publish.Extractor):
        }

        if instance.data["family"] != "review":
            self.log.debug("Existing extracted file from image family used.")
            # enable creation of review, without this jpg review would clash
            # with jpg of the image family
            output_name = repre_name
@@ -62,8 +64,15 @@ class ExtractReview(publish.Extractor):
        repre_skeleton.update({"name": repre_name,
                               "outputName": output_name})

        if self.make_image_sequence and len(layers) > 1:
            self.log.info("Extract layers to image sequence.")
            img_file = self.output_seq_filename % 0
            self._prepare_file_for_image_family(img_file, instance,
                                                staging_dir)
            repre_skeleton.update({
                "files": img_file,
            })
            processed_img_names = [img_file]
        elif self.make_image_sequence and len(layers) > 1:
            self.log.debug("Extract layers to image sequence.")
            img_list = self._save_sequence_images(staging_dir, layers)

            repre_skeleton.update({
@@ -72,17 +81,17 @@ class ExtractReview(publish.Extractor):
                "fps": fps,
                "files": img_list,
            })
            instance.data["representations"].append(repre_skeleton)
            processed_img_names = img_list
        else:
            self.log.info("Extract layers to flatten image.")
            img_list = self._save_flatten_image(staging_dir, layers)
            self.log.debug("Extract layers to flatten image.")
            img_file = self._save_flatten_image(staging_dir, layers)

            repre_skeleton.update({
                "files": img_list,
                "files": img_file,
            })
            instance.data["representations"].append(repre_skeleton)
            processed_img_names = [img_list]
            processed_img_names = [img_file]

        instance.data["representations"].append(repre_skeleton)

        ffmpeg_args = get_ffmpeg_tool_args("ffmpeg")
@@ -111,6 +120,35 @@ class ExtractReview(publish.Extractor):

        self.log.info(f"Extracted {instance} to {staging_dir}")

    def _prepare_file_for_image_family(self, img_file, instance, staging_dir):
        """Converts existing file for image family to .jpg

        Image instance could have its own separate review (instance per layer
        for example). This uses extracted file instead of extracting again.
        Args:
            img_file (str): name of output file (with 0000 value for ffmpeg
                later)
            instance:
            staging_dir (str): temporary folder where extracted file is located
        """
        repre_file = instance.data["representations"][0]
        source_file_path = os.path.join(repre_file["stagingDir"],
                                        repre_file["files"])
        if not os.path.exists(source_file_path):
            raise RuntimeError(f"{source_file_path} doesn't exist for "
                               "review to create from")
        _, ext = os.path.splitext(repre_file["files"])
        if ext != ".jpg":
            im = Image.open(source_file_path)
            # without this it produces messy low quality jpg
            rgb_im = Image.new("RGBA", (im.width, im.height), "#ffffff")
            rgb_im.alpha_composite(im)
            rgb_im.convert("RGB").save(os.path.join(staging_dir, img_file))
        else:
            # handles already .jpg
            shutil.copy(source_file_path,
                        os.path.join(staging_dir, img_file))

    def _generate_mov(self, ffmpeg_path, instance, fps, no_of_frames,
                      source_files_pattern, staging_dir):
        """Generates .mov to upload to Ftrack.
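The RGBA-to-JPEG conversion above hinges on compositing over an opaque background before dropping the alpha channel, since a plain `.convert("RGB")` gives muddy results on semi-transparent pixels. A minimal standalone sketch with Pillow (the function name `flatten_to_jpeg` is hypothetical, and the input is assumed to already be an RGBA image):

```python
from PIL import Image


def flatten_to_jpeg(image, dest_path):
    """Composite an RGBA image over white and save it as JPEG.

    JPEG has no alpha channel; compositing onto an opaque white
    background first preserves the intended look of partially
    transparent pixels.
    """
    background = Image.new("RGBA", (image.width, image.height), "#ffffff")
    background.alpha_composite(image)
    background.convert("RGB").save(dest_path)
```

`Image.alpha_composite` as an in-place method requires both images to be RGBA, which is why the background is created in that mode and only converted to RGB at save time.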
@@ -218,6 +256,11 @@ class ExtractReview(publish.Extractor):
            (list) of PSItem
        """
        layers = []
        # creating review for existing 'image' instance
        if instance.data["family"] == "image" and instance.data.get("layer"):
            layers.append(instance.data["layer"])
            return layers

        for image_instance in instance.context:
            if image_instance.data["family"] != "image":
                continue
@@ -413,8 +413,6 @@ class ClipLoader:
        if self.with_handles:
            source_in -= handle_start
            source_out += handle_end
            handle_start = 0
            handle_end = 0

        # make track item from source in bin as item
        timeline_item = lib.create_timeline_item(
@@ -433,14 +431,6 @@ class ClipLoader:
            self.data["path"], self.active_bin)
        _clip_property = media_pool_item.GetClipProperty

        # get handles
        handle_start = self.data["versionData"].get("handleStart")
        handle_end = self.data["versionData"].get("handleEnd")
        if handle_start is None:
            handle_start = int(self.data["assetData"]["handleStart"])
        if handle_end is None:
            handle_end = int(self.data["assetData"]["handleEnd"])

        source_in = int(_clip_property("Start"))
        source_out = int(_clip_property("End"))
@@ -49,8 +49,6 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin):
        else:
            first_filename = files

        staging_dir = None

        # Convert to jpeg if not yet
        full_input_path = os.path.join(
            thumbnail_repre["stagingDir"], first_filename
@@ -2,16 +2,18 @@ import pyblish.api
from openpype.pipeline import OptionalPyblishPluginMixin


class CollectFrameDataFromAssetEntity(pyblish.api.InstancePlugin,
                                      OptionalPyblishPluginMixin):
    """Collect Frame Range data From Asset Entity
class CollectMissingFrameDataFromAssetEntity(
    pyblish.api.InstancePlugin,
    OptionalPyblishPluginMixin
):
    """Collect Missing Frame Range data From Asset Entity

    Frame range data will only be collected if the keys
    are not yet collected for the instance.
    """

    order = pyblish.api.CollectorOrder + 0.491
    label = "Collect Frame Data From Asset Entity"
    label = "Collect Missing Frame Data From Asset Entity"
    families = ["plate", "pointcache",
                "vdbcache", "online",
                "render"]
@@ -15,7 +15,7 @@ class ValidateFrameRange(OptionalPyblishPluginMixin,

    label = "Validate Frame Range"
    hosts = ["traypublisher"]
    families = ["render"]
    families = ["render", "plate"]
    order = ValidateContentsOrder

    optional = True
@@ -11,7 +11,7 @@ import filecmp
import tempfile
import threading
import shutil
from queue import Queue

from contextlib import closing

from aiohttp import web
@@ -319,19 +319,19 @@ class QtTVPaintRpc(BaseTVPaintRpc):
    async def workfiles_tool(self):
        log.info("Triggering Workfile tool")
        item = MainThreadItem(self.tools_helper.show_workfiles)
        self._execute_in_main_thread(item)
        self._execute_in_main_thread(item, wait=False)
        return

    async def loader_tool(self):
        log.info("Triggering Loader tool")
        item = MainThreadItem(self.tools_helper.show_loader)
        self._execute_in_main_thread(item)
        self._execute_in_main_thread(item, wait=False)
        return

    async def publish_tool(self):
        log.info("Triggering Publish tool")
        item = MainThreadItem(self.tools_helper.show_publisher_tool)
        self._execute_in_main_thread(item)
        self._execute_in_main_thread(item, wait=False)
        return

    async def scene_inventory_tool(self):

@@ -350,13 +350,13 @@ class QtTVPaintRpc(BaseTVPaintRpc):
    async def library_loader_tool(self):
        log.info("Triggering Library loader tool")
        item = MainThreadItem(self.tools_helper.show_library_loader)
        self._execute_in_main_thread(item)
        self._execute_in_main_thread(item, wait=False)
        return

    async def experimental_tools(self):
        log.info("Triggering Library loader tool")
        item = MainThreadItem(self.tools_helper.show_experimental_tools_dialog)
        self._execute_in_main_thread(item)
        self._execute_in_main_thread(item, wait=False)
        return

    async def _async_execute_in_main_thread(self, item, **kwargs):
@@ -867,7 +867,7 @@ class QtCommunicator(BaseCommunicator):

    def __init__(self, qt_app):
        super().__init__()
        self.callback_queue = Queue()
        self.callback_queue = collections.deque()
        self.qt_app = qt_app

    def _create_routes(self):
@@ -880,14 +880,14 @@ class QtCommunicator(BaseCommunicator):

    def execute_in_main_thread(self, main_thread_item, wait=True):
        """Add `MainThreadItem` to callback queue and wait for result."""
        self.callback_queue.put(main_thread_item)
        self.callback_queue.append(main_thread_item)
        if wait:
            return main_thread_item.wait()
        return

    async def async_execute_in_main_thread(self, main_thread_item, wait=True):
        """Add `MainThreadItem` to callback queue and wait for result."""
        self.callback_queue.put(main_thread_item)
        self.callback_queue.append(main_thread_item)
        if wait:
            return await main_thread_item.async_wait()
@@ -904,9 +904,9 @@ class QtCommunicator(BaseCommunicator):
            self._exit()
            return None

        if self.callback_queue.empty():
            return None
        return self.callback_queue.get()
        if self.callback_queue:
            return self.callback_queue.popleft()
        return None

    def _on_client_connect(self):
        super()._on_client_connect()
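The switch from `queue.Queue` to `collections.deque` above gives the Qt side a non-blocking poll: `Queue.get()` blocks when empty, while a deque can be truth-tested and popped immediately. A minimal sketch of the pattern (the class name is hypothetical, not the real `QtCommunicator`):

```python
import collections


class CallbackQueue:
    """Non-blocking FIFO for callbacks to be run on a main thread.

    A periodic timer can call pop() and simply get None back when
    there is nothing to do, instead of blocking in Queue.get().
    """

    def __init__(self):
        self._items = collections.deque()

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if self._items:
            return self._items.popleft()
        return None
```

`deque.append` and `deque.popleft` are individually atomic in CPython, which keeps this single-producer, single-consumer handoff safe without an explicit lock.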
@@ -171,7 +171,7 @@ class LoadImage(plugin.Loader):
        george_script = "\n".join(george_script_lines)
        execute_george_through_file(george_script)

    def _remove_container(self, container, members=None):
    def _remove_container(self, container):
        if not container:
            return
        representation = container["representation"]
@@ -63,7 +63,6 @@ class ExtractSequence(pyblish.api.Extractor):
            "ignoreLayersTransparency", False
        )

        family_lowered = instance.data["family"].lower()
        mark_in = instance.context.data["sceneMarkIn"]
        mark_out = instance.context.data["sceneMarkOut"]

@@ -76,11 +75,9 @@ class ExtractSequence(pyblish.api.Extractor):

        # Frame start/end may be stored as float
        frame_start = int(instance.data["frameStart"])
        frame_end = int(instance.data["frameEnd"])

        # Handles are not stored per instance but on Context
        handle_start = instance.context.data["handleStart"]
        handle_end = instance.context.data["handleEnd"]

        scene_bg_color = instance.context.data["sceneBgColor"]
@@ -573,56 +573,6 @@ void FAR PASCAL PI_Close( PIFilter* iFilter )
}


/**************************************************************************************/
// we have something to do !

int FAR PASCAL PI_Parameters( PIFilter* iFilter, char* iArg )
{
    if( !iArg )
    {

        // If the requester is not open, we open it.
        if( Data.mReq == 0)
        {
            // Create empty requester because menu items are defined with
            // `define_menu` callback
            DWORD req = TVOpenFilterReqEx(
                iFilter,
                185,
                20,
                NULL,
                NULL,
                PIRF_STANDARD_REQ | PIRF_COLLAPSABLE_REQ,
                FILTERREQ_NO_TBAR
            );
            if( req == 0 )
            {
                TVWarning( iFilter, TXT_REQUESTER_ERROR );
                return 0;
            }

            Data.mReq = req;
            // This is a very simple requester, so we create it's content right here instead
            // of waiting for the PICBREQ_OPEN message...
            // Not recommended for more complex requesters. (see the other examples)

            // Sets the title of the requester.
            TVSetReqTitle( iFilter, Data.mReq, TXT_REQUESTER );
            // Request to listen to ticks
            TVGrabTicks(iFilter, req, PITICKS_FLAG_ON);
        }
        else
        {
            // If it is already open, we just put it on front of all other requesters.
            TVReqToFront( iFilter, Data.mReq );
        }
    }

    return 1;
}


int newMenuItemsProcess(PIFilter* iFilter) {
    // Menu items defined with `define_menu` should be propagated.
@ -702,6 +652,62 @@ int newMenuItemsProcess(PIFilter* iFilter) {
|
|||
|
||||
return 1;
|
||||
}
|
||||
|
||||
/**************************************************************************************/
|
||||
// we have something to do !
|
||||
|
||||
int FAR PASCAL PI_Parameters( PIFilter* iFilter, char* iArg )
|
||||
{
|
||||
if( !iArg )
|
||||
{
|
||||
|
||||
// If the requester is not open, we open it.
|
||||
if( Data.mReq == 0)
|
||||
{
|
||||
// Create empty requester because menu items are defined with
|
||||
// `define_menu` callback
|
||||
DWORD req = TVOpenFilterReqEx(
|
||||
iFilter,
|
||||
185,
|
||||
20,
|
||||
NULL,
|
||||
NULL,
|
||||
PIRF_STANDARD_REQ | PIRF_COLLAPSABLE_REQ,
|
||||
FILTERREQ_NO_TBAR
|
||||
);
|
||||
if( req == 0 )
|
||||
{
|
||||
TVWarning( iFilter, TXT_REQUESTER_ERROR );
|
||||
return 0;
|
||||
}
|
||||
|
||||
Data.mReq = req;
|
||||
|
||||
// This is a very simple requester, so we create it's content right here instead
|
||||
// of waiting for the PICBREQ_OPEN message...
|
||||
// Not recommended for more complex requesters. (see the other examples)
|
||||
|
||||
// Sets the title of the requester.
|
||||
TVSetReqTitle( iFilter, Data.mReq, TXT_REQUESTER );
|
||||
// Request to listen to ticks
|
||||
TVGrabTicks(iFilter, req, PITICKS_FLAG_ON);
|
||||
|
||||
if ( Data.firstParams == true ) {
|
||||
Data.firstParams = false;
|
||||
} else {
|
||||
newMenuItemsProcess(iFilter);
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
// If it is already open, we just put it on front of all other requesters.
|
||||
TVReqToFront( iFilter, Data.mReq );
|
||||
}
|
||||
}
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
/**************************************************************************************/
|
||||
// something happened that needs our attention.
|
||||
// Global variable where current button up data are stored
|
||||
|
|
|
|||
Binary file not shown.
@@ -19,9 +19,8 @@ class ExtractUAsset(publish.Extractor):
            "umap" if "umap" in instance.data.get("families") else "uasset")
        ar = unreal.AssetRegistryHelpers.get_asset_registry()

        self.log.info("Performing extraction..")
        self.log.debug("Performing extraction..")
        staging_dir = self.staging_dir(instance)
        filename = f"{instance.name}.{extension}"

        members = instance.data.get("members", [])
@@ -724,7 +724,7 @@ def get_ffprobe_data(path_to_file, logger=None):
    """
    if not logger:
        logger = logging.getLogger(__name__)
    logger.info(
    logger.debug(
        "Getting information about input \"{}\".".format(path_to_file)
    )
    ffprobe_args = get_ffmpeg_tool_args("ffprobe")
@@ -27,8 +27,8 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
    def process(self, instance):
        component_list = instance.data.get("ftrackComponentsList")
        if not component_list:
            self.log.info(
                "Instance don't have components to integrate to Ftrack."
            self.log.debug(
                "Instance doesn't have components to integrate to Ftrack."
                " Skipping."
            )
            return

@@ -37,7 +37,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        task_entity, parent_entity = self.get_instance_entities(
            instance, context)
        if parent_entity is None:
            self.log.info((
            self.log.debug((
                "Skipping ftrack integration. Instance \"{}\" does not"
                " have specified ftrack entities."
            ).format(str(instance)))

@@ -323,7 +323,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
            "type_id": asset_type_id,
            "context_id": parent_id
        }
        self.log.info("Created new Asset with data: {}.".format(asset_data))
        self.log.debug("Created new Asset with data: {}.".format(asset_data))
        session.create("Asset", asset_data)
        session.commit()
        return self._query_asset(session, asset_name, asset_type_id, parent_id)

@@ -384,7 +384,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        if comment:
            new_asset_version_data["comment"] = comment

        self.log.info("Created new AssetVersion with data {}".format(
        self.log.debug("Created new AssetVersion with data {}".format(
            new_asset_version_data
        ))
        session.create("AssetVersion", new_asset_version_data)

@@ -555,7 +555,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
            location=location
        )
        data["component"] = component_entity
        self.log.info(
        self.log.debug(
            (
                "Created new Component with path: {0}, data: {1},"
                " metadata: {2}, location: {3}"
@@ -40,7 +40,7 @@ class IntegrateFtrackDescription(pyblish.api.InstancePlugin):

        comment = instance.data["comment"]
        if not comment:
            self.log.info("Comment is not set.")
            self.log.debug("Comment is not set.")
        else:
            self.log.debug("Comment is set to `{}`".format(comment))
@@ -47,7 +47,7 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
        app_label = context.data["appLabel"]
        comment = instance.data["comment"]
        if not comment:
            self.log.info("Comment is not set.")
            self.log.debug("Comment is not set.")
        else:
            self.log.debug("Comment is set to `{}`".format(comment))

@@ -127,14 +127,14 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):

        note_text = StringTemplate.format_template(template, format_data)
        if not note_text.solved:
            self.log.warning((
            self.log.debug((
                "Note template require more keys then can be provided."
                "\nTemplate: {}\nMissing values for keys:{}\nData: {}"
            ).format(template, note_text.missing_keys, format_data))
            continue

        if not note_text:
            self.log.info((
            self.log.debug((
                "Note for AssetVersion {} would be empty. Skipping."
                "\nTemplate: {}\nData: {}"
            ).format(asset_version["id"], template, format_data))
@@ -121,7 +121,7 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin):
        publish_comment = self.format_publish_comment(instance)

        if not publish_comment:
            self.log.info("Comment is not set.")
            self.log.debug("Comment is not set.")
        else:
            self.log.debug("Comment is `{}`".format(publish_comment))
@@ -13,12 +13,17 @@ from openpype.lib import (
    Logger
)
from openpype.pipeline import Anatomy
from openpype.lib.transcoding import VIDEO_EXTENSIONS, IMAGE_EXTENSIONS


log = Logger.get_logger(__name__)


class CashedData:
    remapping = None
class CachedData:
    remapping = {}
    allowed_exts = {
        ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS)
    }


@contextlib.contextmanager

@@ -546,15 +551,15 @@ def get_remapped_colorspace_to_native(
        Union[str, None]: native colorspace name defined in remapping or None
    """

    CashedData.remapping.setdefault(host_name, {})
    if CashedData.remapping[host_name].get("to_native") is None:
    CachedData.remapping.setdefault(host_name, {})
    if CachedData.remapping[host_name].get("to_native") is None:
        remapping_rules = imageio_host_settings["remapping"]["rules"]
        CashedData.remapping[host_name]["to_native"] = {
        CachedData.remapping[host_name]["to_native"] = {
            rule["ocio_name"]: rule["host_native_name"]
            for rule in remapping_rules
        }

    return CashedData.remapping[host_name]["to_native"].get(
    return CachedData.remapping[host_name]["to_native"].get(
        ocio_colorspace_name)

@@ -572,15 +577,15 @@ def get_remapped_colorspace_from_native(
        Union[str, None]: Ocio colorspace name defined in remapping or None.
    """

    CashedData.remapping.setdefault(host_name, {})
    if CashedData.remapping[host_name].get("from_native") is None:
    CachedData.remapping.setdefault(host_name, {})
    if CachedData.remapping[host_name].get("from_native") is None:
        remapping_rules = imageio_host_settings["remapping"]["rules"]
        CashedData.remapping[host_name]["from_native"] = {
        CachedData.remapping[host_name]["from_native"] = {
            rule["host_native_name"]: rule["ocio_name"]
            for rule in remapping_rules
        }

    return CashedData.remapping[host_name]["from_native"].get(
    return CachedData.remapping[host_name]["from_native"].get(
        host_native_colorspace_name)

@@ -601,3 +606,173 @@ def _get_imageio_settings(project_settings, host_name):
    imageio_host = project_settings.get(host_name, {}).get("imageio", {})

    return imageio_global, imageio_host


def get_colorspace_settings_from_publish_context(context_data):
    """Returns solved settings for the host context.

    Args:
        context_data (publish.Context.data): publishing context data

    Returns:
        tuple | bool: config, file rules or None
    """
    if "imageioSettings" in context_data and context_data["imageioSettings"]:
        return context_data["imageioSettings"]

    project_name = context_data["projectName"]
    host_name = context_data["hostName"]
    anatomy_data = context_data["anatomyData"]
    project_settings_ = context_data["project_settings"]

    config_data = get_imageio_config(
        project_name, host_name,
        project_settings=project_settings_,
        anatomy_data=anatomy_data
    )

    # caching invalid state, so it's not recalculated all the time
    file_rules = None
    if config_data:
        file_rules = get_imageio_file_rules(
            project_name, host_name,
            project_settings=project_settings_
        )

    # caching settings for future instance processing
    context_data["imageioSettings"] = (config_data, file_rules)

    return config_data, file_rules


def set_colorspace_data_to_representation(
    representation, context_data,
    colorspace=None,
    log=None
):
    """Sets colorspace data to representation.

    Args:
        representation (dict): publishing representation
        context_data (publish.Context.data): publishing context data
        colorspace (str, optional): colorspace name. Defaults to None.
        log (logging.Logger, optional): logger instance. Defaults to None.

    Example:
        ```
        {
            # for other publish plugins and loaders
            "colorspace": "linear",
            "config": {
                # for future references in case need
                "path": "/abs/path/to/config.ocio",
                # for other plugins within remote publish cases
                "template": "{project[root]}/path/to/config.ocio"
            }
        }
        ```

    """
    log = log or Logger.get_logger(__name__)

    file_ext = representation["ext"]

    # check if `file_ext` in lower case is in CachedData.allowed_exts
    if file_ext.lstrip(".").lower() not in CachedData.allowed_exts:
        log.debug(
            "Extension '{}' is not in allowed extensions.".format(file_ext)
        )
        return

    # get colorspace settings
    config_data, file_rules = get_colorspace_settings_from_publish_context(
        context_data)

    # in case host color management is not enabled
    if not config_data:
        log.warning("Host's colorspace management is disabled.")
        return

    log.debug("Config data is: `{}`".format(config_data))

    project_name = context_data["projectName"]
    host_name = context_data["hostName"]
    project_settings = context_data["project_settings"]

    # get one filename
    filename = representation["files"]
    if isinstance(filename, list):
        filename = filename[0]

    # get matching colorspace from rules
    colorspace = colorspace or get_imageio_colorspace_from_filepath(
        filename, host_name, project_name,
        config_data=config_data,
        file_rules=file_rules,
        project_settings=project_settings
    )

    # infuse data to representation
    if colorspace:
        colorspace_data = {
            "colorspace": colorspace,
            "config": config_data
        }

        # update data key
        representation["colorspaceData"] = colorspace_data


def get_display_view_colorspace_name(config_path, display, view):
    """Returns the colorspace attribute of the (display, view) pair.

    Args:
        config_path (str): path string leading to config.ocio
        display (str): display name e.g. "ACES"
        view (str): view name e.g. "sRGB"

    Returns:
        view color space name (str) e.g. "Output - sRGB"
    """

    if not compatibility_check():
        # python environment is not compatible with PyOpenColorIO
        # needs to be run in subprocess
        return get_display_view_colorspace_subprocess(config_path,
                                                      display, view)

    from openpype.scripts.ocio_wrapper import _get_display_view_colorspace_name  # noqa

    return _get_display_view_colorspace_name(config_path, display, view)


def get_display_view_colorspace_subprocess(config_path, display, view):
    """Returns the colorspace attribute of the (display, view) pair
    via subprocess.

    Args:
        config_path (str): path string leading to config.ocio
        display (str): display name e.g. "ACES"
        view (str): view name e.g. "sRGB"

    Returns:
        view color space name (str) e.g. "Output - sRGB"
    """

    with _make_temp_json_file() as tmp_json_path:
        # Prepare subprocess arguments
        args = [
            "run", get_ocio_config_script_path(),
            "config", "get_display_view_colorspace_name",
            "--in_path", config_path,
            "--out_path", tmp_json_path,
            "--display", display,
            "--view", view
        ]
        log.debug("Executing: {}".format(" ".join(args)))

        run_openpype_process(*args, logger=log)

        # return default view colorspace name
        with open(tmp_json_path, "r") as f:
            return json.load(f)
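The `CachedData.remapping` change above (besides fixing the `CashedData` spelling) replaces the `None` default with a dict so per-host lookup tables can be built lazily and reused. A minimal standalone sketch of that caching pattern follows; the `get_remapped_name` helper and the rule data are illustrative stand-ins, not the OpenPype API:

```python
class CachedData:
    # class-level cache shared by all callers, keyed by host name
    remapping = {}


def get_remapped_name(host_name, rules, ocio_name):
    # build the "to_native" lookup table once per host, then reuse it
    CachedData.remapping.setdefault(host_name, {})
    if CachedData.remapping[host_name].get("to_native") is None:
        CachedData.remapping[host_name]["to_native"] = {
            rule["ocio_name"]: rule["host_native_name"]
            for rule in rules
        }
    return CachedData.remapping[host_name]["to_native"].get(ocio_name)
```

The table is built on the first call for a host and reused afterwards, which is why the corrected class initializes `remapping` to an empty dict rather than `None`.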
@@ -234,6 +234,19 @@ class LoaderPlugin(list):
        """
        return cls.options or []

    @property
    def fname(self):
        """Backwards compatibility with deprecation warning"""

        self.log.warning((
            "DEPRECATION WARNING: Source - Loader plugin {}."
            " The 'fname' property on the Loader plugin will be removed in"
            " future versions of OpenPype. Planned version to drop the support"
            " is 3.16.6 or 3.17.0."
        ).format(self.__class__.__name__))
        if hasattr(self, "_fname"):
            return self._fname


class SubsetLoaderPlugin(LoaderPlugin):
    """Load subset into host application

@@ -318,7 +318,8 @@ def load_with_repre_context(

    # Backwards compatibility: Originally the loader's __init__ required the
    # representation context to set `fname` attribute to the filename to load
    loader.fname = get_representation_path_from_context(repre_context)
    # Deprecated - to be removed in OpenPype 3.16.6 or 3.17.0.
    loader._fname = get_representation_path_from_context(repre_context)

    return loader.load(repre_context, name, namespace, options)
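The deprecated `fname` property above keeps old loaders working while nagging plugin authors to migrate. A minimal sketch of the same pattern using the standard `warnings` module instead of the plugin logger; the class and message here are illustrative, not the real OpenPype implementation:

```python
import warnings


class LoaderPlugin:
    @property
    def fname(self):
        # emit a deprecation warning on every access, then fall back to the
        # private attribute the pipeline still sets for compatibility
        warnings.warn(
            "'fname' is deprecated, use the representation context instead",
            DeprecationWarning,
            stacklevel=2,
        )
        if hasattr(self, "_fname"):
            return self._fname
```

Returning `None` when `_fname` was never set mirrors the hasattr guard in the real property, so loaders that never used `fname` are unaffected.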
@@ -1,6 +1,5 @@
import inspect
from abc import ABCMeta
from pprint import pformat
import pyblish.api
from pyblish.plugin import MetaPlugin, ExplicitMetaPlugin
from openpype.lib.transcoding import VIDEO_EXTENSIONS, IMAGE_EXTENSIONS

@@ -14,9 +13,8 @@ from .lib import (
)

from openpype.pipeline.colorspace import (
    get_imageio_colorspace_from_filepath,
    get_imageio_config,
    get_imageio_file_rules
    get_colorspace_settings_from_publish_context,
    set_colorspace_data_to_representation
)


@@ -306,12 +304,8 @@ class ColormanagedPyblishPluginMixin(object):
        matching colorspace from rules. Finally, it infuses this
        data into the representation.
    """
    allowed_ext = set(
        ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS)
    )

    @staticmethod
    def get_colorspace_settings(context):
    def get_colorspace_settings(self, context):
        """Returns solved settings for the host context.

        Args:

@@ -320,50 +314,18 @@ class ColormanagedPyblishPluginMixin(object):
        Returns:
            tuple | bool: config, file rules or None
        """
        if "imageioSettings" in context.data:
            return context.data["imageioSettings"]

        project_name = context.data["projectName"]
        host_name = context.data["hostName"]
        anatomy_data = context.data["anatomyData"]
        project_settings_ = context.data["project_settings"]

        config_data = get_imageio_config(
            project_name, host_name,
            project_settings=project_settings_,
            anatomy_data=anatomy_data
        )

        # in case host color management is not enabled
        if not config_data:
            return None

        file_rules = get_imageio_file_rules(
            project_name, host_name,
            project_settings=project_settings_
        )

        # caching settings for future instance processing
        context.data["imageioSettings"] = (config_data, file_rules)

        return config_data, file_rules
        return get_colorspace_settings_from_publish_context(context.data)

    def set_representation_colorspace(
        self, representation, context,
        colorspace=None,
        colorspace_settings=None
    ):
        """Sets colorspace data to representation.

        Args:
            representation (dict): publishing representation
            context (publish.Context): publishing context
            config_data (dict): host resolved config data
            file_rules (dict): host resolved file rules data
            colorspace (str, optional): colorspace name. Defaults to None.
            colorspace_settings (tuple[dict, dict], optional):
                Settings for config_data and file_rules.
                Defaults to None.

        Example:
            ```

@@ -380,64 +342,10 @@ class ColormanagedPyblishPluginMixin(object):
            ```

        """
        ext = representation["ext"]
        # check extension
        self.log.debug("__ ext: `{}`".format(ext))

        # check if ext in lower case is in self.allowed_ext
        if ext.lstrip(".").lower() not in self.allowed_ext:
            self.log.debug(
                "Extension '{}' is not in allowed extensions.".format(ext)
            )
            return

        if colorspace_settings is None:
            colorspace_settings = self.get_colorspace_settings(context)

        # in case host color management is not enabled
        if not colorspace_settings:
            self.log.warning("Host's colorspace management is disabled.")
            return

        # unpack colorspace settings
        config_data, file_rules = colorspace_settings

        if not config_data:
            # warn in case no colorspace path was defined
            self.log.warning("No colorspace management was defined")
            return

        self.log.debug("Config data is: `{}`".format(config_data))

        project_name = context.data["projectName"]
        host_name = context.data["hostName"]
        project_settings = context.data["project_settings"]

        # get one filename
        filename = representation["files"]
        if isinstance(filename, list):
            filename = filename[0]

        self.log.debug("__ filename: `{}`".format(filename))

        # get matching colorspace from rules
        colorspace = colorspace or get_imageio_colorspace_from_filepath(
            filename, host_name, project_name,
            config_data=config_data,
            file_rules=file_rules,
            project_settings=project_settings
        # using cached settings if available
        set_colorspace_data_to_representation(
            representation, context.data,
            colorspace,
            log=self.log
        )
        self.log.debug("__ colorspace: `{}`".format(colorspace))

        # infuse data to representation
        if colorspace:
            colorspace_data = {
                "colorspace": colorspace,
                "config": config_data
            }

            # update data key
            representation["colorspaceData"] = colorspace_data

            self.log.debug("__ colorspace_data: `{}`".format(
                pformat(colorspace_data)))
@@ -50,4 +50,7 @@ class CollectSequenceFrameData(pyblish.api.InstancePlugin):
        return {
            "frameStart": repres_frames[0],
            "frameEnd": repres_frames[-1],
            "handleStart": 0,
            "handleEnd": 0,
            "fps": instance.context.data["assetEntity"]["data"]["fps"]
        }
@@ -174,5 +174,79 @@ def _get_views_data(config_path):
    return data


def _get_display_view_colorspace_name(config_path, display, view):
    """Returns the colorspace attribute of the (display, view) pair.

    Args:
        config_path (str): path string leading to config.ocio
        display (str): display name e.g. "ACES"
        view (str): view name e.g. "sRGB"

    Raises:
        IOError: Input config does not exist.

    Returns:
        view color space name (str) e.g. "Output - sRGB"
    """

    config_path = Path(config_path)

    if not config_path.is_file():
        raise IOError("Input path should be `config.ocio` file")

    config = ocio.Config.CreateFromFile(str(config_path))
    colorspace = config.getDisplayViewColorSpaceName(display, view)

    return colorspace


@config.command(
    name="get_display_view_colorspace_name",
    help=(
        "return default view colorspace name "
        "for the given display and view "
        "--path input arg is required"
    )
)
@click.option("--in_path", required=True,
              help="path where to read ocio config file",
              type=click.Path(exists=True))
@click.option("--out_path", required=True,
              help="path where to write output json file",
              type=click.Path())
@click.option("--display", required=True,
              help="display name",
              type=click.STRING)
@click.option("--view", required=True,
              help="view name",
              type=click.STRING)
def get_display_view_colorspace_name(in_path, out_path,
                                     display, view):
    """Aggregate view colorspace name to file.

    Wrapper command for processes without access to OpenColorIO

    Args:
        in_path (str): config file path string
        out_path (str): temp json file path string
        display (str): display name e.g. "ACES"
        view (str): view name e.g. "sRGB"

    Example of use:
    > python.exe ./ocio_wrapper.py config \
        get_display_view_colorspace_name --in_path=<path> \
        --out_path=<path> --display=<display> --view=<view>
    """

    out_data = _get_display_view_colorspace_name(in_path,
                                                 display,
                                                 view)

    with open(out_path, "w") as f:
        json.dump(out_data, f)

    print(f"Display view colorspace saved to '{out_path}'")


if __name__ == '__main__':
    main()
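The `get_display_view_colorspace_name` command above follows the wrapper's general pattern: when the parent interpreter cannot import PyOpenColorIO, it launches a subprocess that writes the result to a temp JSON file, which the parent then reads back. A stripped-down sketch of that round trip; `run_in_subprocess` is a hypothetical helper using plain `sys.executable` rather than the OpenPype process runner:

```python
import json
import subprocess
import sys
import tempfile


def run_in_subprocess(value):
    """Round-trip a value through a child process via a temp JSON file.

    The child writes JSON to the temp path (standing in for --out_path),
    and the parent loads it back, mirroring the wrapper command above.
    """
    with tempfile.NamedTemporaryFile("r", suffix=".json", delete=False) as tmp:
        out_path = tmp.name

    # child process: dump the value as JSON to the agreed path
    code = "import json; json.dump({!r}, open({!r}, 'w'))".format(
        value, out_path)
    subprocess.check_call([sys.executable, "-c", code])

    # parent process: read the result back
    with open(out_path, "r") as f:
        return json.load(f)
```

The JSON file acts as the only channel between the two interpreters, which is why every wrapper command takes an `--out_path` argument.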
@@ -93,6 +93,11 @@
            "$JOB"
        ]
    },
    "ValidateReviewColorspace": {
        "enabled": true,
        "optional": true,
        "active": true
    },
    "ValidateContainers": {
        "enabled": true,
        "optional": true,
@@ -28,11 +28,7 @@
        "colorManagement": "Nuke",
        "OCIO_config": "nuke-default",
        "workingSpaceLUT": "linear",
        "monitorLut": "sRGB",
        "int8Lut": "sRGB",
        "int16Lut": "sRGB",
        "logLut": "Cineon",
        "floatLut": "linear"
        "monitorLut": "sRGB"
    },
    "nodes": {
        "requiredNodes": [
@@ -40,6 +40,10 @@
    "type": "schema_template",
    "name": "template_publish_plugin",
    "template_data": [
        {
            "key": "ValidateReviewColorspace",
            "label": "Validate Review Colorspace"
        },
        {
            "key": "ValidateContainers",
            "label": "ValidateContainers"

@@ -47,4 +51,4 @@
            ]
        }
    ]
}
}
@@ -106,26 +106,6 @@
            "type": "text",
            "key": "monitorLut",
            "label": "monitor"
        },
        {
            "type": "text",
            "key": "int8Lut",
            "label": "8-bit files"
        },
        {
            "type": "text",
            "key": "int16Lut",
            "label": "16-bit files"
        },
        {
            "type": "text",
            "key": "logLut",
            "label": "log files"
        },
        {
            "type": "text",
            "key": "floatLut",
            "label": "float files"
        }
    ]
}
@@ -1427,6 +1427,10 @@ CreateNextPageOverlay {
    background: rgba(0, 0, 0, 127);
}

#OverlayFrameLabel {
    font-size: 15pt;
}

#BreadcrumbsPathInput {
    padding: 2px;
    font-size: 9pt;
0 openpype/tools/ayon_workfiles/__init__.py Normal file
984 openpype/tools/ayon_workfiles/abstract.py Normal file
@ -0,0 +1,984 @@
|
|||
import os
|
||||
from abc import ABCMeta, abstractmethod
|
||||
|
||||
import six
|
||||
from openpype.style import get_default_entity_icon_color
|
||||
|
||||
|
||||
class WorkfileInfo:
|
||||
"""Information about workarea file with possible additional from database.
|
||||
|
||||
Args:
|
||||
folder_id (str): Folder id.
|
||||
task_id (str): Task id.
|
||||
filepath (str): Filepath.
|
||||
filesize (int): File size.
|
||||
creation_time (int): Creation time (timestamp).
|
||||
modification_time (int): Modification time (timestamp).
|
||||
note (str): Note.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
folder_id,
|
||||
task_id,
|
||||
filepath,
|
||||
filesize,
|
||||
creation_time,
|
||||
modification_time,
|
||||
note,
|
||||
):
|
||||
self.folder_id = folder_id
|
||||
self.task_id = task_id
|
||||
self.filepath = filepath
|
||||
self.filesize = filesize
|
||||
self.creation_time = creation_time
|
||||
self.modification_time = modification_time
|
||||
self.note = note
|
||||
|
||||
def to_data(self):
|
||||
"""Converts WorkfileInfo item to data.
|
||||
|
||||
Returns:
|
||||
dict[str, Any]: Folder item data.
|
||||
"""
|
||||
|
||||
return {
|
||||
"folder_id": self.folder_id,
|
||||
"task_id": self.task_id,
|
||||
"filepath": self.filepath,
|
||||
"filesize": self.filesize,
|
||||
"creation_time": self.creation_time,
|
||||
"modification_time": self.modification_time,
|
||||
"note": self.note,
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def from_data(cls, data):
|
||||
"""Re-creates WorkfileInfo item from data.
|
||||
|
||||
Args:
|
||||
data (dict[str, Any]): Workfile info item data.
|
||||
|
||||
Returns:
|
||||
WorkfileInfo: Workfile info item.
|
||||
"""
|
||||
|
||||
return cls(**data)
|
||||
|
||||
|
||||
class FolderItem:
|
||||
"""Item representing folder entity on a server.
|
||||
|
||||
Folder can be a child of another folder or a project.
|
||||
|
||||
Args:
|
||||
entity_id (str): Folder id.
|
||||
parent_id (Union[str, None]): Parent folder id. If 'None' then project
|
||||
is parent.
|
||||
name (str): Name of folder.
|
||||
label (str): Folder label.
|
||||
icon_name (str): Name of icon from font awesome.
|
||||
icon_color (str): Hex color string that will be used for icon.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, entity_id, parent_id, name, label, icon_name, icon_color
|
||||
):
|
||||
self.entity_id = entity_id
|
||||
self.parent_id = parent_id
|
||||
self.name = name
|
||||
self.icon_name = icon_name or "fa.folder"
|
||||
self.icon_color = icon_color or get_default_entity_icon_color()
|
||||
self.label = label or name
|
||||
|
||||
def to_data(self):
|
||||
"""Converts folder item to data.
|
||||
|
||||
Returns:
|
||||
dict[str, Any]: Folder item data.
|
||||
"""
|
||||
|
||||
return {
|
||||
"entity_id": self.entity_id,
|
||||
"parent_id": self.parent_id,
|
||||
"name": self.name,
|
||||
"label": self.label,
|
||||
"icon_name": self.icon_name,
|
||||
"icon_color": self.icon_color,
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def from_data(cls, data):
|
||||
"""Re-creates folder item from data.
|
||||
|
||||
Args:
|
||||
data (dict[str, Any]): Folder item data.
|
||||
|
||||
Returns:
|
||||
FolderItem: Folder item.
|
||||
"""
|
||||
|
||||
return cls(**data)
|
||||
|
||||
|
||||
class TaskItem:
|
||||
"""Task item representing task entity on a server.
|
||||
|
||||
Task is child of a folder.
|
||||
|
||||
Task item has label that is used for display in UI. The label is by
|
||||
default using task name and type.
|
||||
|
||||
Args:
|
||||
task_id (str): Task id.
|
||||
name (str): Name of task.
|
||||
task_type (str): Type of task.
|
||||
parent_id (str): Parent folder id.
|
||||
icon_name (str): Name of icon from font awesome.
|
||||
icon_color (str): Hex color string that will be used for icon.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, task_id, name, task_type, parent_id, icon_name, icon_color
|
||||
):
|
||||
self.task_id = task_id
|
||||
self.name = name
|
||||
self.task_type = task_type
|
||||
self.parent_id = parent_id
|
||||
self.icon_name = icon_name or "fa.male"
|
||||
self.icon_color = icon_color or get_default_entity_icon_color()
|
||||
self._label = None
|
||||
|
||||
@property
|
||||
def id(self):
|
||||
"""Alias for task_id.
|
||||
|
||||
Returns:
|
||||
str: Task id.
|
||||
"""
|
||||
|
||||
return self.task_id
|
||||
|
||||
@property
|
||||
def label(self):
|
||||
"""Label of task item for UI.
|
||||
|
||||
Returns:
|
||||
str: Label of task item.
|
||||
"""
|
||||
|
||||
if self._label is None:
|
||||
self._label = "{} ({})".format(self.name, self.task_type)
|
||||
return self._label
|
||||
|
||||
def to_data(self):
|
||||
"""Converts task item to data.
|
||||
|
||||
Returns:
|
||||
dict[str, Any]: Task item data.
|
||||
"""
|
||||
|
||||
return {
|
||||
"task_id": self.task_id,
|
||||
"name": self.name,
|
||||
"parent_id": self.parent_id,
|
||||
"task_type": self.task_type,
|
||||
"icon_name": self.icon_name,
|
||||
"icon_color": self.icon_color,
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def from_data(cls, data):
|
||||
"""Re-create task item from data.
|
||||
|
||||
Args:
|
||||
data (dict[str, Any]): Task item data.
|
||||
|
||||
Returns:
|
||||
TaskItem: Task item.
|
||||
"""
|
||||
|
||||
return cls(**data)
|
||||
|
||||
|
||||
class FileItem:
    """File item that represents a file.

    Can be used for both Workarea and Published workfile. Workarea file
    will always exist on disk which is not the case for Published workfile.

    Args:
        dirpath (str): Directory path of file.
        filename (str): Filename.
        modified (float): Modified timestamp.
        representation_id (Optional[str]): Representation id of published
            workfile.
        filepath (Optional[str]): Prepared filepath.
        exists (Optional[bool]): If file exists on disk.
    """

    def __init__(
        self,
        dirpath,
        filename,
        modified,
        representation_id=None,
        filepath=None,
        exists=None
    ):
        self.filename = filename
        self.dirpath = dirpath
        self.modified = modified
        self.representation_id = representation_id
        self._filepath = filepath
        self._exists = exists

    @property
    def filepath(self):
        """Filepath of file.

        Returns:
            str: Full path to a file.
        """

        if self._filepath is None:
            self._filepath = os.path.join(self.dirpath, self.filename)
        return self._filepath

    @property
    def exists(self):
        """File is available.

        Returns:
            bool: If file exists on disk.
        """

        if self._exists is None:
            self._exists = os.path.exists(self.filepath)
        return self._exists

    def to_data(self):
        """Converts file item to data.

        Returns:
            dict[str, Any]: File item data.
        """

        return {
            "filename": self.filename,
            "dirpath": self.dirpath,
            "modified": self.modified,
            "representation_id": self.representation_id,
            "filepath": self.filepath,
            "exists": self.exists,
        }

    @classmethod
    def from_data(cls, data):
        """Re-creates file item from data.

        Args:
            data (dict[str, Any]): File item data.

        Returns:
            FileItem: File item.
        """

        required_keys = {
            "filename",
            "dirpath",
            "modified",
            "representation_id"
        }
        missing_keys = required_keys - set(data.keys())
        if missing_keys:
            raise KeyError("Missing keys: {}".format(missing_keys))

        return cls(**{
            key: data[key]
            for key in required_keys
        })

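The `to_data`/`from_data` pair above is a plain-dict serialization round trip: `to_data` may include derived keys (`filepath`, `exists`), while `from_data` only passes the required keys back to the constructor. A minimal standalone sketch of that behavior (an illustrative replica, not the real `FileItem` class):

```python
import os


# Illustrative stand-in for the FileItem class above; not the real class.
class MiniFileItem:
    def __init__(self, dirpath, filename, modified, representation_id=None):
        self.dirpath = dirpath
        self.filename = filename
        self.modified = modified
        self.representation_id = representation_id

    @property
    def filepath(self):
        # Lazily joined, mirroring the 'filepath' property above.
        return os.path.join(self.dirpath, self.filename)

    def to_data(self):
        return {
            "dirpath": self.dirpath,
            "filename": self.filename,
            "modified": self.modified,
            "representation_id": self.representation_id,
        }

    @classmethod
    def from_data(cls, data):
        # Only required keys are forwarded; extra keys (e.g. derived
        # 'filepath'/'exists' values) are ignored safely.
        required_keys = {"dirpath", "filename", "modified", "representation_id"}
        missing = required_keys - set(data)
        if missing:
            raise KeyError("Missing keys: {}".format(missing))
        return cls(**{key: data[key] for key in required_keys})


item = MiniFileItem("/proj/work", "scene_v001.ma", 0.0)
copy = MiniFileItem.from_data(dict(item.to_data(), exists=False))
```

This makes the serialized form safe to pass across process or UI boundaries even when it carries extra derived keys.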
class WorkareaFilepathResult:
    """Result of workarea file formatting.

    Args:
        root (str): Root path of workarea.
        filename (str): Filename.
        exists (bool): True if file exists.
        filepath (str): Filepath. If not provided it will be constructed
            from root and filename.
    """

    def __init__(self, root, filename, exists, filepath=None):
        if not filepath and root and filename:
            filepath = os.path.join(root, filename)
        self.root = root
        self.filename = filename
        self.exists = exists
        self.filepath = filepath

@six.add_metaclass(ABCMeta)
class AbstractWorkfilesCommon(object):
    @abstractmethod
    def is_host_valid(self):
        """Host is valid for workfiles tool work.

        Returns:
            bool: True if host is valid.
        """

        pass

    @abstractmethod
    def get_workfile_extensions(self):
        """Get possible workfile extensions.

        Defined by host implementation.

        Returns:
            Iterable[str]: List of extensions.
        """

        pass

    @abstractmethod
    def is_save_enabled(self):
        """Is workfile save enabled.

        Returns:
            bool: True if save is enabled.
        """

        pass

    @abstractmethod
    def set_save_enabled(self, enabled):
        """Enable or disable workfile save.

        Args:
            enabled (bool): Enable save workfile when True.
        """

        pass

class AbstractWorkfilesBackend(AbstractWorkfilesCommon):
    # Current context
    @abstractmethod
    def get_host_name(self):
        """Name of host.

        Returns:
            str: Name of host.
        """

        pass

    @abstractmethod
    def get_current_project_name(self):
        """Project name from current context of host.

        Returns:
            str: Name of project.
        """

        pass

    @abstractmethod
    def get_current_folder_id(self):
        """Folder id from current context of host.

        Returns:
            Union[str, None]: Folder id or None if host does not have
                any context.
        """

        pass

    @abstractmethod
    def get_current_task_name(self):
        """Task name from current context of host.

        Returns:
            Union[str, None]: Task name or None if host does not have
                any context.
        """

        pass

    @abstractmethod
    def get_current_workfile(self):
        """Current workfile from current context of host.

        Returns:
            Union[str, None]: Path to workfile or None if host does
                not have opened specific file.
        """

        pass

    @property
    @abstractmethod
    def project_anatomy(self):
        """Project anatomy for current project.

        Returns:
            Anatomy: Project anatomy.
        """

        pass

    @property
    @abstractmethod
    def project_settings(self):
        """Project settings for current project.

        Returns:
            dict[str, Any]: Project settings.
        """

        pass

    @abstractmethod
    def get_folder_entity(self, folder_id):
        """Get folder entity by id.

        Args:
            folder_id (str): Folder id.

        Returns:
            dict[str, Any]: Folder entity data.
        """

        pass

    @abstractmethod
    def get_task_entity(self, task_id):
        """Get task entity by id.

        Args:
            task_id (str): Task id.

        Returns:
            dict[str, Any]: Task entity data.
        """

        pass

    def emit_event(self, topic, data=None, source=None):
        """Emit event.

        Args:
            topic (str): Event topic used for callbacks filtering.
            data (Optional[dict[str, Any]]): Event data.
            source (Optional[str]): Event source.
        """

        pass

class AbstractWorkfilesFrontend(AbstractWorkfilesCommon):
    """UI controller abstraction that is used for workfiles tool frontend.

    Abstraction to provide data for UI and to handle UI events.

    Provides access to abstract backend data, like folders and tasks. Takes
    care of selection handling, keeps information about current UI selection
    and is able to tell what selection the UI should show.

    Selection is separated into 2 parts: first is what UI elements tell
    about selection, and second is what UI should show as selected.
    """

    @abstractmethod
    def register_event_callback(self, topic, callback):
        """Register event callback.

        Listen for events with given topic.

        Args:
            topic (str): Name of topic.
            callback (Callable): Callback that will be called when event
                is triggered.
        """

        pass

    # Host information
    @abstractmethod
    def get_workfile_extensions(self):
        """Each host can define extensions that can be used for workfile.

        Returns:
            List[str]: File extensions that can be used as workfile for
                current host.
        """

        pass

    # Selection information
    @abstractmethod
    def get_selected_folder_id(self):
        """Currently selected folder id.

        Returns:
            Union[str, None]: Folder id or None if no folder is selected.
        """

        pass

    @abstractmethod
    def set_selected_folder(self, folder_id):
        """Change selected folder.

        This deselects currently selected task.

        Args:
            folder_id (Union[str, None]): Folder id or None if no folder
                is selected.
        """

        pass

    @abstractmethod
    def get_selected_task_id(self):
        """Currently selected task id.

        Returns:
            Union[str, None]: Task id or None if no task is selected.
        """

        pass

    @abstractmethod
    def get_selected_task_name(self):
        """Currently selected task name.

        Returns:
            Union[str, None]: Task name or None if no task is selected.
        """

        pass

    @abstractmethod
    def set_selected_task(self, folder_id, task_id, task_name):
        """Change selected task.

        Args:
            folder_id (Union[str, None]): Folder id or None if no folder
                is selected.
            task_id (Union[str, None]): Task id or None if no task
                is selected.
            task_name (Union[str, None]): Task name or None if no task
                is selected.
        """

        pass

    @abstractmethod
    def get_selected_workfile_path(self):
        """Currently selected workarea workfile.

        Returns:
            Union[str, None]: Selected workfile path.
        """

        pass

    @abstractmethod
    def set_selected_workfile_path(self, path):
        """Change selected workfile path.

        Args:
            path (Union[str, None]): Selected workfile path.
        """

        pass

    @abstractmethod
    def get_selected_representation_id(self):
        """Currently selected workfile representation id.

        Returns:
            Union[str, None]: Representation id or None if no representation
                is selected.
        """

        pass

    @abstractmethod
    def set_selected_representation_id(self, representation_id):
        """Change selected representation.

        Args:
            representation_id (Union[str, None]): Selected workfile
                representation id.
        """

        pass

    def get_selected_context(self):
        """Obtain selected context.

        Returns:
            dict[str, Union[str, None]]: Selected context.
        """

        return {
            "folder_id": self.get_selected_folder_id(),
            "task_id": self.get_selected_task_id(),
            "task_name": self.get_selected_task_name(),
            "workfile_path": self.get_selected_workfile_path(),
            "representation_id": self.get_selected_representation_id(),
        }

    # Expected selection
    # - expected selection is used to restore selection after refresh
    #   or when current context should be used
    @abstractmethod
    def set_expected_selection(
        self,
        folder_id,
        task_name,
        workfile_name=None,
        representation_id=None
    ):
        """Define what should be selected in UI.

        Expected selection provides a way to define/change selection of
        sequential UI elements. For example, if folder and task should be
        selected, the task element should wait until the folder element has
        selected the folder.

        Triggers 'expected_selection.changed' event.

        Args:
            folder_id (str): Folder id.
            task_name (str): Task name.
            workfile_name (Optional[str]): Workfile name. Used for workarea
                files UI element.
            representation_id (Optional[str]): Representation id. Used for
                published files UI element.
        """

        pass

    @abstractmethod
    def get_expected_selection_data(self):
        """Data of expected selection.

        TODOs:
            Return defined object instead of dict.

        Returns:
            dict[str, Any]: Expected selection data.
        """

        pass

    @abstractmethod
    def expected_folder_selected(self, folder_id):
        """Expected folder was selected in UI.

        Args:
            folder_id (str): Folder id which was selected.
        """

        pass

    @abstractmethod
    def expected_task_selected(self, folder_id, task_name):
        """Expected task was selected in UI.

        Args:
            folder_id (str): Folder id under which task is.
            task_name (str): Task name which was selected.
        """

        pass

    @abstractmethod
    def expected_representation_selected(self, representation_id):
        """Expected representation was selected in UI.

        Args:
            representation_id (str): Representation id which was selected.
        """

        pass

    @abstractmethod
    def expected_workfile_selected(self, workfile_path):
        """Expected workfile was selected in UI.

        Args:
            workfile_path (str): Workfile path which was selected.
        """

        pass

    @abstractmethod
    def go_to_current_context(self):
        """Set expected selection to current context."""

        pass

    # Model functions
    @abstractmethod
    def get_folder_items(self, sender):
        """Folder items to visualize project hierarchy.

        This function may trigger events 'folders.refresh.started' and
        'folders.refresh.finished' which will contain 'sender' value in data.
        That may help to avoid re-refresh of folder items in UI elements.

        Args:
            sender (str): Who requested folder items.

        Returns:
            list[FolderItem]: Minimum possible information needed
                for visualisation of folder hierarchy.
        """

        pass

    @abstractmethod
    def get_task_items(self, folder_id, sender):
        """Task items.

        This function may trigger events 'tasks.refresh.started' and
        'tasks.refresh.finished' which will contain 'sender' value in data.
        That may help to avoid re-refresh of task items in UI elements.

        Args:
            folder_id (str): Folder ID for which tasks are requested.
            sender (str): Who requested task items.

        Returns:
            list[TaskItem]: Minimum possible information needed
                for visualisation of tasks.
        """

        pass

    @abstractmethod
    def has_unsaved_changes(self):
        """Host has unsaved changes in currently running session.

        Returns:
            bool: Has unsaved changes.
        """

        pass

    @abstractmethod
    def get_workarea_dir_by_context(self, folder_id, task_id):
        """Get workarea directory by context.

        Args:
            folder_id (str): Folder id.
            task_id (str): Task id.

        Returns:
            str: Workarea directory.
        """

        pass

    @abstractmethod
    def get_workarea_file_items(self, folder_id, task_id):
        """Get workarea file items.

        Args:
            folder_id (str): Folder id.
            task_id (str): Task id.

        Returns:
            list[FileItem]: List of workarea file items.
        """

        pass

    @abstractmethod
    def get_workarea_save_as_data(self, folder_id, task_id):
        """Prepare data for Save As operation.

        Todos:
            Return defined object instead of dict.

        Args:
            folder_id (str): Folder id.
            task_id (str): Task id.

        Returns:
            dict[str, Any]: Data for Save As operation.
        """

        pass

    @abstractmethod
    def fill_workarea_filepath(
        self,
        folder_id,
        task_id,
        extension,
        use_last_version,
        version,
        comment,
    ):
        """Calculate workfile path for passed context.

        Args:
            folder_id (str): Folder id.
            task_id (str): Task id.
            extension (str): File extension.
            use_last_version (bool): Use last version.
            version (int): Version used if 'use_last_version' is 'False'.
            comment (str): User's comment (subversion).

        Returns:
            WorkareaFilepathResult: Result of the operation.
        """

        pass

    @abstractmethod
    def get_published_file_items(self, folder_id, task_id):
        """Get published file items.

        Args:
            folder_id (str): Folder id.
            task_id (Union[str, None]): Task id.

        Returns:
            list[FileItem]: List of published file items.
        """

        pass

    @abstractmethod
    def get_workfile_info(self, folder_id, task_id, filepath):
        """Workfile info from database.

        Args:
            folder_id (str): Folder id.
            task_id (str): Task id.
            filepath (str): Workfile path.

        Returns:
            Union[WorkfileInfo, None]: Workfile info or None if an invalid
                context was passed.
        """

        pass

    @abstractmethod
    def save_workfile_info(self, folder_id, task_id, filepath, note):
        """Save workfile info to database.

        At this moment the only information which can be saved about
        workfile is 'note'.

        Args:
            folder_id (str): Folder id.
            task_id (str): Task id.
            filepath (str): Workfile path.
            note (str): Note.
        """

        pass

    # General commands
    @abstractmethod
    def refresh(self):
        """Refresh everything, models, ui etc.

        Triggers 'controller.refresh.started' event at the beginning and
        'controller.refresh.finished' at the end.
        """

        pass

    # Controller actions
    @abstractmethod
    def open_workfile(self, filepath):
        """Open a workfile.

        Args:
            filepath (str): Workfile path.
        """

        pass

    @abstractmethod
    def save_current_workfile(self):
        """Save state of current workfile."""

        pass

    @abstractmethod
    def save_as_workfile(
        self,
        folder_id,
        task_id,
        workdir,
        filename,
        template_key,
    ):
        """Save current state of workfile to workarea.

        Args:
            folder_id (str): Folder id.
            task_id (str): Task id.
            workdir (str): Workarea directory.
            filename (str): Workarea filename.
            template_key (str): Template key used to get the workdir
                and filename.
        """

        pass

    @abstractmethod
    def copy_workfile_representation(
        self,
        representation_id,
        representation_filepath,
        folder_id,
        task_id,
        workdir,
        filename,
        template_key,
    ):
        """Action to copy published workfile representation to workarea.

        Triggers 'copy_representation.started' event on start and
        'copy_representation.finished' event with '{"failed": bool}'.

        Args:
            representation_id (str): Representation id.
            representation_filepath (str): Path to representation file.
            folder_id (str): Folder id.
            task_id (str): Task id.
            workdir (str): Workarea directory.
            filename (str): Workarea filename.
            template_key (str): Template key.
        """

        pass

    @abstractmethod
    def duplicate_workfile(self, src_filepath, workdir, filename):
        """Duplicate workfile.

        The workfile is not opened when done.

        Args:
            src_filepath (str): Source workfile path.
            workdir (str): Destination workdir.
            filename (str): Destination filename.
        """

        pass
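The `register_event_callback`/`emit_event` pair above defines a simple topic-based publish/subscribe contract between the controller and UI widgets. A minimal sketch of that contract (an illustrative stand-in; the real implementation is `openpype.lib.events.QueuedEventSystem`, whose exact behavior may differ):

```python
# Illustrative sketch of the register/emit contract used by the controller;
# not the real QueuedEventSystem.
class TinyEventSystem:
    def __init__(self):
        self._callbacks = {}

    def add_callback(self, topic, callback):
        # One topic can have multiple listeners.
        self._callbacks.setdefault(topic, []).append(callback)

    def emit(self, topic, data, source=None):
        # Deliver the event to every listener registered for the topic.
        for callback in self._callbacks.get(topic, []):
            callback({"topic": topic, "data": data, "source": source})


received = []
events = TinyEventSystem()
events.add_callback("controller.refresh.started", received.append)
events.emit("controller.refresh.started", {"sender": "folders_widget"})
```

This decoupling is what lets UI elements react to topics like 'folders.refresh.finished' without holding direct references to the models.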
642
openpype/tools/ayon_workfiles/control.py
Normal file

@@ -0,0 +1,642 @@
import os
import shutil

import ayon_api

from openpype.client import get_asset_by_id
from openpype.host import IWorkfileHost
from openpype.lib import Logger, emit_event
from openpype.lib.events import QueuedEventSystem
from openpype.settings import get_project_settings
from openpype.pipeline import Anatomy, registered_host
from openpype.pipeline.context_tools import (
    change_current_context,
    get_current_host_name,
    get_global_context,
)
from openpype.pipeline.workfile import create_workdir_extra_folders

from .abstract import (
    AbstractWorkfilesFrontend,
    AbstractWorkfilesBackend,
)
from .models import SelectionModel, EntitiesModel, WorkfilesModel


class ExpectedSelection:
    def __init__(self):
        self._folder_id = None
        self._task_name = None
        self._workfile_name = None
        self._representation_id = None
        self._folder_selected = True
        self._task_selected = True
        self._workfile_name_selected = True
        self._representation_id_selected = True

    def set_expected_selection(
        self,
        folder_id,
        task_name,
        workfile_name=None,
        representation_id=None
    ):
        self._folder_id = folder_id
        self._task_name = task_name
        self._workfile_name = workfile_name
        self._representation_id = representation_id
        self._folder_selected = False
        self._task_selected = False
        self._workfile_name_selected = workfile_name is None
        self._representation_id_selected = representation_id is None

    def get_expected_selection_data(self):
        return {
            "folder_id": self._folder_id,
            "task_name": self._task_name,
            "workfile_name": self._workfile_name,
            "representation_id": self._representation_id,
            "folder_selected": self._folder_selected,
            "task_selected": self._task_selected,
            "workfile_name_selected": self._workfile_name_selected,
            "representation_id_selected": self._representation_id_selected,
        }

    def is_expected_folder_selected(self, folder_id):
        return folder_id == self._folder_id and self._folder_selected

    def is_expected_task_selected(self, folder_id, task_name):
        if not self.is_expected_folder_selected(folder_id):
            return False
        return task_name == self._task_name and self._task_selected

    def expected_folder_selected(self, folder_id):
        if folder_id != self._folder_id:
            return False
        self._folder_selected = True
        return True

    def expected_task_selected(self, folder_id, task_name):
        if not self.is_expected_folder_selected(folder_id):
            return False

        if task_name != self._task_name:
            return False

        self._task_selected = True
        return True

    def expected_workfile_selected(self, folder_id, task_name, workfile_name):
        if not self.is_expected_task_selected(folder_id, task_name):
            return False

        if workfile_name != self._workfile_name:
            return False
        self._workfile_name_selected = True
        return True

    def expected_representation_selected(
        self, folder_id, task_name, representation_id
    ):
        if not self.is_expected_task_selected(folder_id, task_name):
            return False
        if representation_id != self._representation_id:
            return False
        self._representation_id_selected = True
        return True

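The expected-selection logic above enforces an ordering: a task confirmation is ignored until the folder was confirmed first, so sequential UI widgets can each wait their turn. A minimal standalone replica of that handshake (for illustration only; the real class also tracks workfile and representation stages):

```python
# Minimal replica of the ExpectedSelection folder/task handshake.
class MiniExpectedSelection:
    def __init__(self):
        self._folder_id = None
        self._task_name = None
        self._folder_selected = True
        self._task_selected = True

    def set_expected_selection(self, folder_id, task_name):
        # Arm both stages; widgets confirm them in order.
        self._folder_id = folder_id
        self._task_name = task_name
        self._folder_selected = False
        self._task_selected = False

    def expected_folder_selected(self, folder_id):
        if folder_id != self._folder_id:
            return False
        self._folder_selected = True
        return True

    def expected_task_selected(self, folder_id, task_name):
        # Task confirmation is ignored until the folder was confirmed.
        if not (folder_id == self._folder_id and self._folder_selected):
            return False
        if task_name != self._task_name:
            return False
        self._task_selected = True
        return True


sel = MiniExpectedSelection()
sel.set_expected_selection("f1", "modeling")
sel.expected_task_selected("f1", "modeling")   # too early, returns False
sel.expected_folder_selected("f1")             # folder confirms first
sel.expected_task_selected("f1", "modeling")   # now succeeds
```

The gating is what makes selection restoration deterministic even when the folder and task widgets refresh asynchronously.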
class BaseWorkfileController(
    AbstractWorkfilesFrontend, AbstractWorkfilesBackend
):
    def __init__(self, host=None):
        if host is None:
            host = registered_host()

        host_is_valid = False
        if host is not None:
            missing_methods = (
                IWorkfileHost.get_missing_workfile_methods(host)
            )
            host_is_valid = len(missing_methods) == 0

        self._host = host
        self._host_is_valid = host_is_valid

        self._project_anatomy = None
        self._project_settings = None
        self._event_system = None
        self._log = None

        self._current_project_name = None
        self._current_folder_name = None
        self._current_folder_id = None
        self._current_task_name = None
        self._save_is_enabled = True

        # Expected selected folder and task
        self._expected_selection = self._create_expected_selection_obj()

        self._selection_model = self._create_selection_model()
        self._entities_model = self._create_entities_model()
        self._workfiles_model = self._create_workfiles_model()

    @property
    def log(self):
        if self._log is None:
            self._log = Logger.get_logger("WorkfilesUI")
        return self._log

    def is_host_valid(self):
        return self._host_is_valid

    def _create_expected_selection_obj(self):
        return ExpectedSelection()

    def _create_selection_model(self):
        return SelectionModel(self)

    def _create_entities_model(self):
        return EntitiesModel(self)

    def _create_workfiles_model(self):
        return WorkfilesModel(self)

    @property
    def event_system(self):
        """Inner event system for workfiles tool controller.

        Is used for communication with UI. Event system is created on demand.

        Returns:
            QueuedEventSystem: Event system which can trigger callbacks
                for topics.
        """

        if self._event_system is None:
            self._event_system = QueuedEventSystem()
        return self._event_system

    # ----------------------------------------------------
    # Implementation of methods required for backend logic
    # ----------------------------------------------------
    @property
    def project_settings(self):
        if self._project_settings is None:
            self._project_settings = get_project_settings(
                self.get_current_project_name())
        return self._project_settings

    @property
    def project_anatomy(self):
        if self._project_anatomy is None:
            self._project_anatomy = Anatomy(self.get_current_project_name())
        return self._project_anatomy

    def get_folder_entity(self, folder_id):
        return self._entities_model.get_folder_entity(folder_id)

    def get_task_entity(self, task_id):
        return self._entities_model.get_task_entity(task_id)

    # ----------------------------------
    # Implementation of abstract methods
    # ----------------------------------
    def emit_event(self, topic, data=None, source=None):
        """Use implemented event system to trigger event."""

        if data is None:
            data = {}
        self.event_system.emit(topic, data, source)

    def register_event_callback(self, topic, callback):
        self.event_system.add_callback(topic, callback)

    def is_save_enabled(self):
        """Is workfile save enabled.

        Returns:
            bool: True if save is enabled.
        """

        return self._save_is_enabled

    def set_save_enabled(self, enabled):
        """Enable or disable workfile save.

        Args:
            enabled (bool): Enable save workfile when True.
        """

        if self._save_is_enabled == enabled:
            return

        self._save_is_enabled = enabled
        self._emit_event(
            "workfile_save_enable.changed",
            {"enabled": enabled}
        )

    # Host information
    def get_workfile_extensions(self):
        host = self._host
        if isinstance(host, IWorkfileHost):
            return host.get_workfile_extensions()
        return host.file_extensions()

    def has_unsaved_changes(self):
        host = self._host
        if isinstance(host, IWorkfileHost):
            return host.workfile_has_unsaved_changes()
        return host.has_unsaved_changes()

    # Current context
    def get_host_name(self):
        host = self._host
        if isinstance(host, IWorkfileHost):
            return host.name
        return get_current_host_name()

    def _get_host_current_context(self):
        if hasattr(self._host, "get_current_context"):
            return self._host.get_current_context()
        return get_global_context()

    def get_current_project_name(self):
        return self._current_project_name

    def get_current_folder_id(self):
        return self._current_folder_id

    def get_current_task_name(self):
        return self._current_task_name

    def get_current_workfile(self):
        host = self._host
        if isinstance(host, IWorkfileHost):
            return host.get_current_workfile()
        return host.current_file()

    # Selection information
    def get_selected_folder_id(self):
        return self._selection_model.get_selected_folder_id()

    def set_selected_folder(self, folder_id):
        self._selection_model.set_selected_folder(folder_id)

    def get_selected_task_id(self):
        return self._selection_model.get_selected_task_id()

    def get_selected_task_name(self):
        return self._selection_model.get_selected_task_name()

    def set_selected_task(self, folder_id, task_id, task_name):
        return self._selection_model.set_selected_task(
            folder_id, task_id, task_name)

    def get_selected_workfile_path(self):
        return self._selection_model.get_selected_workfile_path()

    def set_selected_workfile_path(self, path):
        self._selection_model.set_selected_workfile_path(path)

    def get_selected_representation_id(self):
        return self._selection_model.get_selected_representation_id()

    def set_selected_representation_id(self, representation_id):
        self._selection_model.set_selected_representation_id(
            representation_id)

    def set_expected_selection(
        self,
        folder_id,
        task_name,
        workfile_name=None,
        representation_id=None
    ):
        self._expected_selection.set_expected_selection(
            folder_id, task_name, workfile_name, representation_id
        )
        self._trigger_expected_selection_changed()

    def expected_folder_selected(self, folder_id):
        if self._expected_selection.expected_folder_selected(folder_id):
            self._trigger_expected_selection_changed()

    def expected_task_selected(self, folder_id, task_name):
        if self._expected_selection.expected_task_selected(
            folder_id, task_name
        ):
            self._trigger_expected_selection_changed()

    def expected_workfile_selected(self, folder_id, task_name, workfile_name):
        if self._expected_selection.expected_workfile_selected(
            folder_id, task_name, workfile_name
        ):
            self._trigger_expected_selection_changed()

    def expected_representation_selected(
        self, folder_id, task_name, representation_id
    ):
        if self._expected_selection.expected_representation_selected(
            folder_id, task_name, representation_id
        ):
            self._trigger_expected_selection_changed()

    def get_expected_selection_data(self):
        return self._expected_selection.get_expected_selection_data()

    def go_to_current_context(self):
        self.set_expected_selection(
            self._current_folder_id, self._current_task_name
        )

    # Model functions
    def get_folder_items(self, sender):
        return self._entities_model.get_folder_items(sender)

    def get_task_items(self, folder_id, sender):
        return self._entities_model.get_tasks_items(folder_id, sender)

    def get_workarea_dir_by_context(self, folder_id, task_id):
        return self._workfiles_model.get_workarea_dir_by_context(
            folder_id, task_id)

    def get_workarea_file_items(self, folder_id, task_id):
        return self._workfiles_model.get_workarea_file_items(
            folder_id, task_id)

    def get_workarea_save_as_data(self, folder_id, task_id):
        return self._workfiles_model.get_workarea_save_as_data(
            folder_id, task_id)

    def fill_workarea_filepath(
        self,
        folder_id,
        task_id,
        extension,
        use_last_version,
        version,
        comment,
    ):
        return self._workfiles_model.fill_workarea_filepath(
            folder_id,
            task_id,
            extension,
            use_last_version,
            version,
            comment,
        )

    def get_published_file_items(self, folder_id, task_id):
        task_name = None
        if task_id:
            task = self.get_task_entity(task_id)
            task_name = task.get("name")

        return self._workfiles_model.get_published_file_items(
            folder_id, task_name)

    def get_workfile_info(self, folder_id, task_id, filepath):
        return self._workfiles_model.get_workfile_info(
            folder_id, task_id, filepath
        )

    def save_workfile_info(self, folder_id, task_id, filepath, note):
        self._workfiles_model.save_workfile_info(
            folder_id, task_id, filepath, note
        )

    def refresh(self):
        if not self._host_is_valid:
            self._emit_event("controller.refresh.started")
            self._emit_event("controller.refresh.finished")
            return
        expected_folder_id = self.get_selected_folder_id()
        expected_task_name = self.get_selected_task_name()

        self._emit_event("controller.refresh.started")

        context = self._get_host_current_context()

        project_name = context["project_name"]
        folder_name = context["asset_name"]
        task_name = context["task_name"]
        folder_id = None
        if folder_name:
            folder = ayon_api.get_folder_by_name(project_name, folder_name)
            if folder:
                folder_id = folder["id"]

        self._project_settings = None
        self._project_anatomy = None

        self._current_project_name = project_name
        self._current_folder_name = folder_name
||||
self._current_folder_id = folder_id
|
||||
self._current_task_name = task_name
|
||||
|
||||
if not expected_folder_id:
|
||||
expected_folder_id = folder_id
|
||||
expected_task_name = task_name
|
||||
|
||||
self._expected_selection.set_expected_selection(
|
||||
expected_folder_id, expected_task_name
|
||||
)
|
||||
|
||||
self._entities_model.refresh()
|
||||
|
||||
self._emit_event("controller.refresh.finished")
|
||||
|
||||
# Controller actions
|
||||
def open_workfile(self, filepath):
|
||||
self._emit_event("open_workfile.started")
|
||||
|
||||
failed = False
|
||||
try:
|
||||
self._host_open_workfile(filepath)
|
||||
|
||||
except Exception:
|
||||
failed = True
|
||||
self.log.warning("Open of workfile failed", exc_info=True)
|
||||
|
||||
self._emit_event(
|
||||
"open_workfile.finished",
|
||||
{"failed": failed},
|
||||
)
|
||||
|
||||
def save_current_workfile(self):
|
||||
current_file = self.get_current_workfile()
|
||||
self._host_save_workfile(current_file)
|
||||
|
||||
def save_as_workfile(
|
||||
self,
|
||||
folder_id,
|
||||
task_id,
|
||||
workdir,
|
||||
filename,
|
||||
template_key,
|
||||
):
|
||||
self._emit_event("save_as.started")
|
||||
|
||||
failed = False
|
||||
try:
|
||||
self._save_as_workfile(
|
||||
folder_id,
|
||||
task_id,
|
||||
workdir,
|
||||
filename,
|
||||
template_key,
|
||||
)
|
||||
except Exception:
|
||||
failed = True
|
||||
self.log.warning("Save as failed", exc_info=True)
|
||||
|
||||
self._emit_event(
|
||||
"save_as.finished",
|
||||
{"failed": failed},
|
||||
)
|
||||
|
||||
def copy_workfile_representation(
|
||||
self,
|
||||
representation_id,
|
||||
representation_filepath,
|
||||
folder_id,
|
||||
task_id,
|
||||
workdir,
|
||||
filename,
|
||||
template_key,
|
||||
):
|
||||
self._emit_event("copy_representation.started")
|
||||
|
||||
failed = False
|
||||
try:
|
||||
self._save_as_workfile(
|
||||
folder_id,
|
||||
task_id,
|
||||
workdir,
|
||||
filename,
|
||||
template_key,
|
||||
)
|
||||
except Exception:
|
||||
failed = True
|
||||
self.log.warning(
|
||||
"Copy of workfile representation failed", exc_info=True
|
||||
)
|
||||
|
||||
self._emit_event(
|
||||
"copy_representation.finished",
|
||||
{"failed": failed},
|
||||
)
|
||||
|
||||
def duplicate_workfile(self, src_filepath, workdir, filename):
|
||||
self._emit_event("workfile_duplicate.started")
|
||||
|
||||
failed = False
|
||||
try:
|
||||
dst_filepath = os.path.join(workdir, filename)
|
||||
shutil.copy(src_filepath, dst_filepath)
|
||||
except Exception:
|
||||
failed = True
|
||||
self.log.warning("Duplication of workfile failed", exc_info=True)
|
||||
|
||||
self._emit_event(
|
||||
"workfile_duplicate.finished",
|
||||
{"failed": failed},
|
||||
)
|
||||
|
||||
# Helper host methods that resolve 'IWorkfileHost' interface
|
||||
def _host_open_workfile(self, filepath):
|
||||
host = self._host
|
||||
if isinstance(host, IWorkfileHost):
|
||||
host.open_workfile(filepath)
|
||||
else:
|
||||
host.open_file(filepath)
|
||||
|
||||
def _host_save_workfile(self, filepath):
|
||||
host = self._host
|
||||
if isinstance(host, IWorkfileHost):
|
||||
host.save_workfile(filepath)
|
||||
else:
|
||||
host.save_file(filepath)
|
||||
|
||||
def _emit_event(self, topic, data=None):
|
||||
self.emit_event(topic, data, "controller")
|
||||
|
||||
# Expected selection
|
||||
# - expected selection is used to restore selection after refresh
|
||||
# or when current context should be used
|
||||
def _trigger_expected_selection_changed(self):
|
||||
self._emit_event(
|
||||
"expected_selection_changed",
|
||||
self._expected_selection.get_expected_selection_data(),
|
||||
)
|
||||
|
||||
def _save_as_workfile(
|
||||
self,
|
||||
folder_id,
|
||||
task_id,
|
||||
workdir,
|
||||
filename,
|
||||
template_key,
|
||||
src_filepath=None,
|
||||
):
|
||||
# Trigger before save event
|
||||
project_name = self.get_current_project_name()
|
||||
folder = self.get_folder_entity(folder_id)
|
||||
task = self.get_task_entity(task_id)
|
||||
task_name = task["name"]
|
||||
|
||||
# QUESTION should the data be different for 'before' and 'after'?
|
||||
# NOTE keys should be OpenPype compatible
|
||||
event_data = {
|
||||
"project_name": project_name,
|
||||
"folder_id": folder_id,
|
||||
"asset_id": folder_id,
|
||||
"asset_name": folder["name"],
|
||||
"task_id": task_id,
|
||||
"task_name": task_name,
|
||||
"host_name": self.get_host_name(),
|
||||
"filename": filename,
|
||||
"workdir_path": workdir,
|
||||
}
|
||||
emit_event("workfile.save.before", event_data, source="workfiles.tool")
|
||||
|
||||
# Create workfiles root folder
|
||||
if not os.path.exists(workdir):
|
||||
self.log.debug("Initializing work directory: %s", workdir)
|
||||
os.makedirs(workdir)
|
||||
|
||||
# Change context
|
||||
if (
|
||||
folder_id != self.get_current_folder_id()
|
||||
or task_name != self.get_current_task_name()
|
||||
):
|
||||
# Use OpenPype asset-like object
|
||||
asset_doc = get_asset_by_id(project_name, folder["id"])
|
||||
change_current_context(
|
||||
asset_doc,
|
||||
task["name"],
|
||||
template_key=template_key
|
||||
)
|
||||
|
||||
# Save workfile
|
||||
dst_filepath = os.path.join(workdir, filename)
|
||||
if src_filepath:
|
||||
shutil.copyfile(src_filepath, dst_filepath)
|
||||
self._host_open_workfile(dst_filepath)
|
||||
else:
|
||||
self._host_save_workfile(dst_filepath)
|
||||
|
||||
# Create extra folders
|
||||
create_workdir_extra_folders(
|
||||
workdir,
|
||||
self.get_host_name(),
|
||||
task["taskType"],
|
||||
task_name,
|
||||
project_name
|
||||
)
|
||||
|
||||
# Trigger after save events
|
||||
emit_event("workfile.save.after", event_data, source="workfiles.tool")
|
||||
self.refresh()
|
||||
10 openpype/tools/ayon_workfiles/models/__init__.py Normal file

@@ -0,0 +1,10 @@
from .hierarchy import EntitiesModel
from .selection import SelectionModel
from .workfiles import WorkfilesModel


__all__ = (
    "SelectionModel",
    "EntitiesModel",
    "WorkfilesModel",
)
Some files were not shown because too many files have changed in this diff Show more