Mirror of https://github.com/ynput/ayon-core.git, synced 2025-12-26 13:52:15 +01:00

Merge remote-tracking branch 'upstream/develop' into feature/maya_usd_native_support

This commit is contained in commit 1ad404100c.
110 changed files with 1136 additions and 406 deletions
.github/ISSUE_TEMPLATE/bug_report.yml (vendored), 4 changes

@@ -35,6 +35,8 @@ body:
       label: Version
       description: What version are you running? Look to OpenPype Tray
       options:
+        - 3.16.5
+        - 3.16.5-nightly.5
         - 3.16.5-nightly.4
         - 3.16.5-nightly.3
         - 3.16.5-nightly.2

@@ -133,8 +135,6 @@ body:
         - 3.14.9-nightly.3
         - 3.14.9-nightly.2
         - 3.14.9-nightly.1
-        - 3.14.8
-        - 3.14.8-nightly.4
     validations:
       required: true
   - type: dropdown

CHANGELOG.md, 673 changes

@@ -1,6 +1,679 @@

# Changelog

## [3.16.5](https://github.com/ynput/OpenPype/tree/3.16.5)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.4...3.16.5)

### **🆕 New features**


<details>
<summary>Attribute Definitions: Multiselection enum def <a href="https://github.com/ynput/OpenPype/pull/5547">#5547</a></summary>

Added a `multiselection` option to `EnumDef`.

___

</details>
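
A minimal usage sketch of the new option, based only on the PR description above (the exact keyword and default handling are assumptions):

```python
from openpype.lib import EnumDef

# Hypothetical creator attribute using the new option; the keyword name
# follows the PR title and is not verified against the code.
defs = [
    EnumDef(
        "render_targets",
        items=["beauty", "ao", "crypto"],
        multiselection=True,  # new in #5547: pick several values at once
        label="Render Targets",
    )
]
```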

### **🚀 Enhancements**


<details>
<summary>Farm: adding target collector <a href="https://github.com/ynput/OpenPype/pull/5494">#5494</a></summary>

Enhances the farm publishing workflow.

___

</details>


<details>
<summary>Maya: Optimize validate plug-in path attributes <a href="https://github.com/ynput/OpenPype/pull/5522">#5522</a></summary>

- Optimize query (use `cmds.ls` once; see the sketch after this entry)
- Add Select Invalid action
- Improve validation report
- Avoid "Unknown object type" errors

___

</details>
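
A rough sketch of the "query once" idea from #5522; the node types, attributes and helper below are illustrative, not the plug-in's actual code:

```python
import os

from maya import cmds

# Illustrative mapping of node type -> file path attribute to validate.
ATTRIBUTES = {"file": "fileTextureName", "audio": "filename"}

def get_invalid():
    invalid = []
    # One `cmds.ls` call for all watched node types instead of one per node.
    for node in cmds.ls(type=list(ATTRIBUTES.keys()), long=True):
        attr = "{}.{}".format(node, ATTRIBUTES[cmds.nodeType(node)])
        path = cmds.getAttr(attr)
        if path and not os.path.exists(os.path.expandvars(path)):
            invalid.append(node)
    return invalid
```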

<details>
<summary>Maya: Remove Validate Instance Attributes plug-in <a href="https://github.com/ynput/OpenPype/pull/5525">#5525</a></summary>

Removes the Validate Instance Attributes plug-in.

___

</details>


<details>
<summary>Enhancement: Tweak logging for artist facing reports <a href="https://github.com/ynput/OpenPype/pull/5537">#5537</a></summary>

Tweaks the publishing logging for global, Deadline, Maya and a Fusion plugin to produce a cleaner artist-facing report.
- Fix CollectContext so the context is reported correctly
- Fix ValidateMeshArnoldAttributes: handle Arnold not being loaded, fix applying settings, and handle `ai` attributes that do not exist

___

</details>


<details>
<summary>AYON: Update settings <a href="https://github.com/ynput/OpenPype/pull/5544">#5544</a></summary>

Updated settings in AYON addons and the conversion of AYON settings in OpenPype.

___

</details>


<details>
<summary>Chore: Removed Ass export script <a href="https://github.com/ynput/OpenPype/pull/5560">#5560</a></summary>

Removed the Arnold render script, which was obsolete and unused.

___

</details>


<details>
<summary>Nuke: Allow for knob values to be validated against multiple values. <a href="https://github.com/ynput/OpenPype/pull/5042">#5042</a></summary>

Knob values can now be validated against multiple values, so you can allow write nodes to be `exr` or `png`, or `16-bit` or `32-bit`.

___

</details>
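
A minimal sketch of what multi-value validation allows, assuming a plain mapping of knob names to allowed values (not the plugin's actual settings schema):

```python
# Illustrative mapping of knob name -> allowed values.
ALLOWED_VALUES = {"file_type": ["exr", "png"]}

def validate_knobs(node):
    """Validate a nuke.Node's knobs against several allowed values."""
    for knob_name, allowed in ALLOWED_VALUES.items():
        value = node[knob_name].value()
        if value not in allowed:
            raise ValueError(
                "Knob '{}' is '{}', expected one of: {}".format(
                    knob_name, value, ", ".join(allowed)))
```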

<details>
<summary>Enhancement: Cosmetics for Higher version of publish already exists validation error <a href="https://github.com/ynput/OpenPype/pull/5190">#5190</a></summary>

Fixes double spaces in the validation message.

___

</details>


<details>
<summary>Nuke: publish existing frames on farm <a href="https://github.com/ynput/OpenPype/pull/5409">#5409</a></summary>

This PR proposes adding a fourth option to Nuke render publishing called "Use Existing Frames - Farm". This is useful when the farm is busy or when the artist lacks enough farm licenses. Additionally, some artists prefer rendering on the farm but still want to check frames before publishing. By adding the "Use Existing Frames - Farm" option, artists get more flexibility and control over their render publishing process. This enhancement streamlines the workflow and improves efficiency for Nuke users.

___

</details>


<details>
<summary>Unreal: Create project in temp location and move to final when done <a href="https://github.com/ynput/OpenPype/pull/5476">#5476</a></summary>

Creates the Unreal project in a local temporary folder and, when done, moves it to the final destination.

___

</details>
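
The approach can be sketched roughly like this, with `create_unreal_project` standing in for the actual project creation call:

```python
import shutil
import tempfile

def create_project_safely(create_unreal_project, final_dir):
    # Do the heavy project creation in a local temporary folder.
    tmp_dir = tempfile.mkdtemp(prefix="unreal_project_")
    try:
        create_unreal_project(tmp_dir)
        # Move to the final destination only once creation fully succeeded.
        shutil.move(tmp_dir, final_dir)
    except Exception:
        shutil.rmtree(tmp_dir, ignore_errors=True)
        raise
```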

<details>
<summary>TrayPublisher: adding audio product type into default presets <a href="https://github.com/ynput/OpenPype/pull/5489">#5489</a></summary>

Adds the Audio product type to the default presets so anybody can publish audio to their shots.

___

</details>


<details>
<summary>Global: avoiding cleanup of flagged representation <a href="https://github.com/ynput/OpenPype/pull/5502">#5502</a></summary>

The publishing folder can be flagged as persistent at the representation level.

___

</details>


<details>
<summary>General: missing tag could raise error <a href="https://github.com/ynput/OpenPype/pull/5511">#5511</a></summary>

- Avoids a potential situation where a missing Tag key could raise an error.

___

</details>


<details>
<summary>Chore: Queued event system <a href="https://github.com/ynput/OpenPype/pull/5514">#5514</a></summary>

Implemented an event system with more predictable behavior. If an event is triggered during another event's callback, it is not processed immediately but waits until all callbacks of the previous event are done. The event system also allows events not to be triggered directly once `emit_event` is called, which gives the option to process events in custom loops.

___

</details>
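
A toy model of the queued behavior described above (a sketch, not the actual OpenPype API):

```python
from collections import deque

class EventSystem:
    def __init__(self):
        self._callbacks = []
        self._queue = deque()
        self._processing = False

    def add_callback(self, callback):
        self._callbacks.append(callback)

    def emit_event(self, topic, data=None):
        # Events are always enqueued first.
        self._queue.append((topic, data))
        if self._processing:
            # An outer emit_event call is already draining the queue;
            # this event runs after the current callbacks finish.
            return
        self._processing = True
        try:
            while self._queue:
                event = self._queue.popleft()
                for callback in self._callbacks:
                    callback(event)  # may emit_event; it only enqueues
        finally:
            self._processing = False
```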

<details>
<summary>Publisher: Tweak log message to provide plugin name after "Plugin" <a href="https://github.com/ynput/OpenPype/pull/5521">#5521</a></summary>

Fixes the logged message for settings automatically applied to plugin attributes.

___

</details>


<details>
<summary>Houdini: Improve VDB Selection <a href="https://github.com/ynput/OpenPype/pull/5523">#5523</a></summary>

Improves VDB selection (sketched below):
- if the selection is a `SopNode`: return the selected SOP node
- if the selection is an `ObjNode`: get the output node with the minimum `outputidx`, or the node with the display flag

___

</details>
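
A hedged sketch of those selection rules using the `hou` API; the `output` type name and the `outputidx` parm follow the PR description and may differ in detail:

```python
import hou

def get_sop_output(node):
    if isinstance(node, hou.SopNode):
        return node
    if isinstance(node, hou.ObjNode):
        # Prefer an explicit output SOP with the lowest output index.
        outputs = [
            child for child in node.children()
            if child.type().name() == "output"
        ]
        if outputs:
            return min(outputs, key=lambda n: n.evalParm("outputidx"))
        # Fall back to whichever SOP carries the display flag.
        return node.displayNode()
```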

<details>
<summary>Maya: Refactor/tweak Validate Instance In same Context plug-in <a href="https://github.com/ynput/OpenPype/pull/5526">#5526</a></summary>

- Chore/Refactor: Re-use existing select invalid and repair actions
- Enhancement: provide a more elaborate PublishValidationError report
- Bugfix: fix "optional" support by using the `OptionalPyblishPluginMixin` base class

___

</details>


<details>
<summary>Enhancement: Update houdini main menu <a href="https://github.com/ynput/OpenPype/pull/5527">#5527</a></summary>

This PR adds two updates:
- dynamic main menu
- dynamic asset name and task

___

</details>


<details>
<summary>Houdini: Reset FPS when clicking Set Frame Range <a href="https://github.com/ynput/OpenPype/pull/5528">#5528</a></summary>

_Similar to Maya,_ make `Set Frame Range` reset the FPS. Issue: https://github.com/ynput/OpenPype/issues/5516

___

</details>


<details>
<summary>Enhancement: Deadline plugins optimize, cleanup and fix optional support for validate deadline pools <a href="https://github.com/ynput/OpenPype/pull/5531">#5531</a></summary>

- Fix optional support of validate deadline pools
- Query the Deadline webservice only once per URL for verification, and once for available Deadline pools, instead of for every instance
- Use `deadlineUrl` in `instance.data` when validating pools if it is set
- Code cleanup: Re-use the existing `requests_get` implementation

___

</details>


<details>
<summary>Chore: PowerShell script for docker build <a href="https://github.com/ynput/OpenPype/pull/5535">#5535</a></summary>

Added a PowerShell script to run the docker build.

___

</details>


<details>
<summary>AYON: Deadline expand userpaths in executables list <a href="https://github.com/ynput/OpenPype/pull/5540">#5540</a></summary>

Expand `~` paths in the executables list.

___

</details>


<details>
<summary>Chore: Use correct git url <a href="https://github.com/ynput/OpenPype/pull/5542">#5542</a></summary>

Fixed the GitHub URL in README.md.

___

</details>


<details>
<summary>Chore: Create plugin does not expect system settings <a href="https://github.com/ynput/OpenPype/pull/5553">#5553</a></summary>

System settings are no longer passed to create plugin initialization (and `apply_settings`).

___

</details>


<details>
<summary>Chore: Allow custom Qt scale factor rounding policy <a href="https://github.com/ynput/OpenPype/pull/5555">#5555</a></summary>

Do not force the `PassThrough` rounding policy if a different policy is defined via environment variable.

___

</details>
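
The described behavior roughly corresponds to this sketch (binding names are illustrative; OpenPype's actual bootstrap code may differ):

```python
import os

from qtpy import QtCore, QtWidgets

# Only force PassThrough when the user has not chosen a policy through
# Qt's own environment variable.
if not os.environ.get("QT_SCALE_FACTOR_ROUNDING_POLICY"):
    QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy(
        QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough
    )

app = QtWidgets.QApplication([])
```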

<details>
<summary>Houdini: Fix outdated containers pop-up on opening last workfile on launch <a href="https://github.com/ynput/OpenPype/pull/5567">#5567</a></summary>

Fixes Houdini not showing the outdated containers pop-up on scene open when launching with the last workfile argument.

___

</details>


<details>
<summary>Houdini: Improve errors e.g. raise PublishValidationError or cosmetics <a href="https://github.com/ynput/OpenPype/pull/5568">#5568</a></summary>

Improves errors, e.g. raises PublishValidationError, plus cosmetics. This also fixes the Increment Current File plug-in, which was previously broken due to an invalid import.

___

</details>


<details>
<summary>Fusion: Code updates <a href="https://github.com/ynput/OpenPype/pull/5569">#5569</a></summary>

Updates Fusion code that contained obsolete parts. Removed the `switch_ui.py` script from Fusion along with the related script in scripts.

___

</details>

### **🐛 Bug fixes**


<details>
<summary>Maya: Validate Shape Zero fix repair action + provide informational artist-facing report <a href="https://github.com/ynput/OpenPype/pull/5524">#5524</a></summary>

Refactored to PublishValidationError to allow the RepairAction to work, and provides an informational report message.

___

</details>


<details>
<summary>Maya: Fix attribute definitions for `CreateYetiCache` <a href="https://github.com/ynput/OpenPype/pull/5574">#5574</a></summary>

Fixes attribute definitions for `CreateYetiCache`.

___

</details>


<details>
<summary>Max: Optional Renderable Camera Validator for Render Instance <a href="https://github.com/ynput/OpenPype/pull/5286">#5286</a></summary>

Optional validation to check that the renderable camera is set up correctly for Deadline submission. If it is not set up correctly, it won't pass the validation and the user can perform repair actions.

___

</details>


<details>
<summary>Max: Adding custom modifiers back to the loaded objects <a href="https://github.com/ynput/OpenPype/pull/5378">#5378</a></summary>

The custom parameters OpenPypeData didn't show in the loaded container when it was loaded through the loader.

___

</details>


<details>
<summary>Houdini: Use default_variant to Houdini Node TAB Creator <a href="https://github.com/ynput/OpenPype/pull/5421">#5421</a></summary>

Uses the default variant of the creator plugins in the interactive creator from the TAB node search instead of hard-coding it to `Main`.

___

</details>


<details>
<summary>Nuke: adding inherited colorspace from instance <a href="https://github.com/ynput/OpenPype/pull/5454">#5454</a></summary>

Thumbnails are extracted with the inherited colorspace collected from the rendering write node.

___

</details>


<details>
<summary>Add kitsu credentials to deadline publish job <a href="https://github.com/ynput/OpenPype/pull/5455">#5455</a></summary>

This PR hopefully fixes issue #5440.

___

</details>


<details>
<summary>AYON: Fill entities during editorial <a href="https://github.com/ynput/OpenPype/pull/5475">#5475</a></summary>

Fills entities and updates template data on instances during extract AYON hierarchy.

___

</details>


<details>
<summary>Ftrack: Fix version 0 when integrating to Ftrack - OP-6595 <a href="https://github.com/ynput/OpenPype/pull/5477">#5477</a></summary>

Fixes publishing version 0 to Ftrack.

___

</details>


<details>
<summary>OCIO: windows unc path support in Nuke and Hiero <a href="https://github.com/ynput/OpenPype/pull/5479">#5479</a></summary>

Hiero and Nuke did not support Windows UNC path formatting in the OCIO environment variable.

___

</details>


<details>
<summary>Deadline: Added super call to init <a href="https://github.com/ynput/OpenPype/pull/5480">#5480</a></summary>

Deadline 10.3 requires plugins inheriting from DeadlinePlugin to call the super's `__init__` explicitly.

___

</details>


<details>
<summary>Nuke: fixing thumbnail and monitor out root attributes <a href="https://github.com/ynput/OpenPype/pull/5483">#5483</a></summary>

The Nuke Root colorspace settings schema for Thumbnail and Monitor Out changed gradually between versions 12, 13 and 14, and we needed to address those changes individually per version.

___

</details>


<details>
<summary>Nuke: fixing missing `instance_id` error <a href="https://github.com/ynput/OpenPype/pull/5484">#5484</a></summary>

Workfiles with instances created in the old publisher workflow were raising errors during the converting method, since they were missing the `instance_id` key introduced in the new publisher workflow.

___

</details>


<details>
<summary>Nuke: existing frames validator is repairing render target <a href="https://github.com/ynput/OpenPype/pull/5486">#5486</a></summary>

Nuke now correctly repairs the render target after the existing frames validator finds missing frames and the repair action is used.

___

</details>


<details>
<summary>added UE to extract burnins families <a href="https://github.com/ynput/OpenPype/pull/5487">#5487</a></summary>

This PR fixes missing burnins in reviewables when rendering from UE.

___

</details>


<details>
<summary>Harmony: refresh code for current Deadline <a href="https://github.com/ynput/OpenPype/pull/5493">#5493</a></summary>

- Added support in the Deadline plug-in for new versions of Harmony, in particular versions 21 and 22
- Remove the review=False flag on the render instance
- Add the farm=True flag on the render instance
- Fix the is_in_tests function call in the Harmony Deadline submission plugin
- Force the HarmonyOpenPype.py Deadline Python plug-in to py3
- Fix cosmetics/hound in the HarmonyOpenPype.py Deadline Python plug-in

___

</details>


<details>
<summary>Publisher: Fix multiselection value <a href="https://github.com/ynput/OpenPype/pull/5505">#5505</a></summary>

Selecting multiple instances in the Publisher no longer causes all instances to change all publish attributes to the same value.

___

</details>


<details>
<summary>Publisher: Avoid warnings on thumbnails if source image also has alpha channel <a href="https://github.com/ynput/OpenPype/pull/5510">#5510</a></summary>

Avoids the following warning from `ExtractThumbnailFromSource`:
```
// pyblish.ExtractThumbnailFromSource : oiiotool WARNING: -o : Can't save 4 channels to jpeg... saving only R,G,B
```

___

</details>
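
One way to avoid that warning is to drop the alpha channel explicitly before writing the JPEG; a sketch using `oiiotool`'s `--ch` flag, though the plugin's actual arguments may differ:

```python
import subprocess

# Keep only RGB so oiiotool does not warn about writing a
# 4-channel source into a 3-channel JPEG.
subprocess.check_call([
    "oiiotool", "input.exr",
    "--ch", "R,G,B",
    "-o", "thumbnail.jpg",
])
```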

<details>
<summary>Update ayon-python-api <a href="https://github.com/ynput/OpenPype/pull/5512">#5512</a></summary>

Updates the ayon python api and related callbacks.

___

</details>


<details>
<summary>Max: Fixing the bug of falling back to use workfile for Arnold or any renderers except Redshift <a href="https://github.com/ynput/OpenPype/pull/5520">#5520</a></summary>

Fixes the bug of falling back to using the workfile for Arnold.

___

</details>


<details>
<summary>General: Fix Validate Publish Dir Validator <a href="https://github.com/ynput/OpenPype/pull/5534">#5534</a></summary>

A nonsensical "family" key was used instead of the real value (such as 'render'), which would result in a wrong translation of intermediate family names. Updated the docstring.

___

</details>


<details>
<summary>have the addons loading respect a custom AYON_ADDONS_DIR <a href="https://github.com/ynput/OpenPype/pull/5539">#5539</a></summary>

When using a custom AYON_ADDONS_DIR environment variable, the launcher uses the variable correctly and downloads and extracts addons there; however, when running, AYON did not respect this environment variable.

___

</details>


<details>
<summary>Deadline: files on representation cannot be single item list <a href="https://github.com/ynput/OpenPype/pull/5545">#5545</a></summary>

Further logic expects that single-item files will be only a string, not a list (e.g. `repre["files"] = "abc.exr"`, not `repre["files"] = ["abc.exr"]`). This would cause an issue in ExtractReview later. It could happen if Deadline rendered a single frame file with a different frame value.

___

</details>
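
The normalization the fix enforces can be sketched as:

```python
def normalize_repre_files(repre):
    # A single-file representation must store a plain string,
    # not a one-item list, or ExtractReview fails later.
    files = repre["files"]
    if isinstance(files, (list, tuple)) and len(files) == 1:
        repre["files"] = files[0]
```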

<details>
<summary>Webpublisher: better encode list values for click <a href="https://github.com/ynput/OpenPype/pull/5546">#5546</a></summary>

Targets could be a list; the original implementation pushed them as separate items, but they must be added as `--targets webpublish --targets filepublish`. `webpublish_routes` handles triggering from the UI; changes in `publish_functions` handle triggering from the command line (for tests and API access).

___

</details>
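
For reference, this is how a repeated list value maps onto `click` (a generic sketch, not the webpublisher's actual command):

```python
import click

@click.command()
@click.option("--targets", multiple=True)
def publish(targets):
    # An option declared with multiple=True must be passed once per value:
    #   publish --targets webpublish --targets filepublish
    click.echo("Publishing with targets: {}".format(list(targets)))

if __name__ == "__main__":
    publish()
```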

<details>
<summary>Houdini: Introduce imprint function for correct version in hda loader <a href="https://github.com/ynput/OpenPype/pull/5548">#5548</a></summary>

Resolves #5478.

___

</details>


<details>
<summary>AYON: Fill entities during editorial (2) <a href="https://github.com/ynput/OpenPype/pull/5549">#5549</a></summary>

Fixes the changes made in https://github.com/ynput/OpenPype/pull/5475.

___

</details>


<details>
<summary>Max: OP Data updates in Loaders <a href="https://github.com/ynput/OpenPype/pull/5563">#5563</a></summary>

Fixes the bug of the loaders not being able to load objects when iterating keys and values of the dict. Max prefers plain lists over lists stored in a dict.

___

</details>


<details>
<summary>Create Plugins: Better check of overridden '__init__' method <a href="https://github.com/ynput/OpenPype/pull/5571">#5571</a></summary>

Create plugins no longer log warning messages about each create plugin because of a wrong `__init__` method check.

___

</details>

### **Merged pull requests**


<details>
<summary>Tests: fix unit tests <a href="https://github.com/ynput/OpenPype/pull/5533">#5533</a></summary>

Fixed failing tests. Updated Unreal's validator to match the removed general one, which had a couple of issues fixed.

___

</details>


## [3.16.4](https://github.com/ynput/OpenPype/tree/3.16.4)

@@ -85,5 +85,5 @@ class CollectInstanceData(pyblish.api.InstancePlugin):
         # Add review family if the instance is marked as 'review'
         # This could be done through a 'review' Creator attribute.
         if instance.data.get("review", False):
-            self.log.info("Adding review family..")
+            self.log.debug("Adding review family..")
             instance.data["families"].append("review")

@@ -303,6 +303,28 @@ def on_save():
         lib.set_id(node, new_id, overwrite=False)


+def _show_outdated_content_popup():
+    # Get main window
+    parent = lib.get_main_window()
+    if parent is None:
+        log.info("Skipping outdated content pop-up "
+                 "because Houdini window can't be found.")
+    else:
+        from openpype.widgets import popup
+
+        # Show outdated pop-up
+        def _on_show_inventory():
+            from openpype.tools.utils import host_tools
+            host_tools.show_scene_inventory(parent=parent)
+
+        dialog = popup.Popup(parent=parent)
+        dialog.setWindowTitle("Houdini scene has outdated content")
+        dialog.setMessage("There are outdated containers in "
+                          "your Houdini scene.")
+        dialog.on_clicked.connect(_on_show_inventory)
+        dialog.show()
+
+
 def on_open():

     if not hou.isUIAvailable():

@@ -316,28 +338,18 @@ def on_open():
     lib.validate_fps()

     if any_outdated_containers():
-        from openpype.widgets import popup
-
         # Get main window
         parent = lib.get_main_window()
         if parent is None:
-            log.info("Skipping outdated content pop-up "
-                     "because Houdini window can't be found.")
-        else:
-            # Show outdated pop-up
-            def _on_show_inventory():
-                from openpype.tools.utils import host_tools
-                host_tools.show_scene_inventory(parent=parent)
-
-            dialog = popup.Popup(parent=parent)
-            dialog.setWindowTitle("Houdini scene has outdated content")
-            dialog.setMessage("There are outdated containers in "
-                              "your Houdini scene.")
-            dialog.on_clicked.connect(_on_show_inventory)
-            dialog.show()
+            # When opening Houdini with last workfile on launch the UI hasn't
+            # initialized yet completely when the `on_open` callback triggers.
+            # We defer the dialog popup to wait for the UI to become available.
+            # We assume it will open because `hou.isUIAvailable()` returns True
+            import hdefereval
+            hdefereval.executeDeferred(_show_outdated_content_popup)
+        else:
+            _show_outdated_content_popup()
+
+        log.warning("Scene has outdated content.")


 def on_new():

@@ -1,5 +1,7 @@
 import pyblish.api

+from openpype.pipeline.publish import KnownPublishError
+

 class CollectOutputSOPPath(pyblish.api.InstancePlugin):
     """Collect the out node's SOP/COP Path value."""

@@ -58,8 +60,8 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
         elif node_type == "Redshift_Proxy_Output":
             out_node = node.parm("RS_archive_sopPath").evalAsNode()
         else:
-            raise ValueError(
-                "ROP node type '%s' is" " not supported." % node_type
+            raise KnownPublishError(
+                "ROP node type '{}' is not supported.".format(node_type)
             )

         if not out_node:

@@ -2,7 +2,7 @@ import pyblish.api

 from openpype.lib import version_up
 from openpype.pipeline import registered_host
-from openpype.action import get_errored_plugins_from_data
+from openpype.pipeline.publish import get_errored_plugins_from_context
 from openpype.hosts.houdini.api import HoudiniHost
+from openpype.pipeline.publish import KnownPublishError


@@ -27,7 +27,7 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):

     def process(self, context):

-        errored_plugins = get_errored_plugins_from_data(context)
+        errored_plugins = get_errored_plugins_from_context(context)
         if any(
             plugin.__name__ == "HoudiniSubmitPublishDeadline"
             for plugin in errored_plugins

@@ -40,9 +40,10 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin):
         # Filename must not have changed since collecting
         host = registered_host()  # type: HoudiniHost
         current_file = host.current_file()
-        assert (
-            context.data["currentFile"] == current_file
-        ), "Collected filename mismatches from current scene name."
+        if context.data["currentFile"] != current_file:
+            raise KnownPublishError(
+                "Collected filename mismatches from current scene name."
+            )

         new_filepath = version_up(current_file)
         host.save_workfile(new_filepath)

@@ -1,5 +1,6 @@
 import pyblish.api

+from openpype.pipeline.publish import PublishValidationError
 from openpype.hosts.houdini.api import lib
 import hou


@@ -30,7 +31,7 @@ class ValidateAnimationSettings(pyblish.api.InstancePlugin):

         invalid = self.get_invalid(instance)
         if invalid:
-            raise RuntimeError(
+            raise PublishValidationError(
                 "Output settings do no match for '%s'" % instance
             )

@@ -36,11 +36,11 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin):
         if node.parm("shellexec").eval():
             self.raise_error("Must not execute in shell")
         if node.parm("prerender").eval() != cmd:
-            self.raise_error(("REMOTE_PUBLISH node does not have "
-                              "correct prerender script."))
+            self.raise_error("REMOTE_PUBLISH node does not have "
+                             "correct prerender script.")
         if node.parm("lprerender").eval() != "python":
-            self.raise_error(("REMOTE_PUBLISH node prerender script "
-                              "type not set to 'python'"))
+            self.raise_error("REMOTE_PUBLISH node prerender script "
+                             "type not set to 'python'")

     @classmethod
     def repair(cls, context):

@@ -48,5 +48,4 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin):
         lib.create_remote_publish_node(force=True)

     def raise_error(self, message):
-        self.log.error(message)
-        raise PublishValidationError(message)
+        raise PublishValidationError(message, title=self.label)

@@ -24,7 +24,7 @@ class ValidateUSDRenderProductNames(pyblish.api.InstancePlugin):

             if not os.path.isabs(filepath):
                 invalid.append(
-                    "Output file path is not " "absolute path: %s" % filepath
+                    "Output file path is not absolute path: %s" % filepath
                 )

         if invalid:

@@ -197,19 +197,20 @@ def import_custom_attribute_data(container: str, selections: list):
         rt.addModifier(container, modifier)
         container.modifiers[0].name = "OP Data"
         rt.custAttributes.add(container.modifiers[0], attrs)
-        nodes = {}
+        node_list = []
+        sel_list = []
         for i in selections:
-            nodes = {
-                str(i): rt.NodeTransformMonitor(node=i),
-            }
+            node_ref = rt.NodeTransformMonitor(node=i)
+            node_list.append(node_ref)
+            sel_list.append(str(i))

         # Setting the property
         rt.setProperty(
             container.modifiers[0].openPypeData,
-            "all_handles", nodes.values())
+            "all_handles", node_list)
         rt.setProperty(
             container.modifiers[0].openPypeData,
-            "sel_list", nodes.keys())
+            "sel_list", sel_list)


 def update_custom_attribute_data(container: str, selections: list):
     """Updating the Openpype/AYON custom parameter built by the creator

|
|||
family = "yeticache"
|
||||
icon = "pagelines"
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super(CreateYetiCache, self).__init__(*args, **kwargs)
|
||||
def get_instance_attr_defs(self):
|
||||
|
||||
defs = [
|
||||
NumberDef("preroll",
|
||||
|
|
@ -36,3 +35,5 @@ class CreateYetiCache(plugin.MayaCreator):
|
|||
default=3,
|
||||
decimals=0)
|
||||
)
|
||||
|
||||
return defs
|
||||
|
|
|
|||
|
|
@@ -35,14 +35,11 @@ class CollectAssembly(pyblish.api.InstancePlugin):
         # Get all content from the instance
         instance_lookup = set(cmds.ls(instance, type="transform", long=True))
         data = defaultdict(list)
-        self.log.info(instance_lookup)

         hierarchy_nodes = []
         for container in containers:

-            self.log.info(container)
             root = lib.get_container_transforms(container, root=True)
-            self.log.info(root)
             if not root or root not in instance_lookup:
                 continue

@@ -18,7 +18,6 @@ class CollectMayaHistory(pyblish.api.InstancePlugin):
     hosts = ["maya"]
     label = "Maya History"
     families = ["rig"]
-    verbose = False

     def process(self, instance):

@@ -28,6 +28,8 @@ class CollectNewInstances(pyblish.api.InstancePlugin):
     order = pyblish.api.CollectorOrder
     hosts = ["maya"]

+    valid_empty_families = {"workfile", "renderlayer"}
+
     def process(self, instance):

         objset = instance.data.get("instance_node")

@@ -58,7 +60,7 @@ class CollectNewInstances(pyblish.api.InstancePlugin):

             instance[:] = members_hierarchy

-        elif instance.data["family"] != "workfile":
+        elif instance.data["family"] not in self.valid_empty_families:
             self.log.warning("Empty instance: \"%s\" " % objset)
         # Store the exact members of the object set
         instance.data["setMembers"] = members

@@ -356,8 +356,9 @@ class CollectLook(pyblish.api.InstancePlugin):
             # Thus the data will be limited to only what we need.
             self.log.debug("obj_set {}".format(sets[obj_set]))
             if not sets[obj_set]["members"]:
-                self.log.info(
-                    "Removing redundant set information: {}".format(obj_set))
+                self.log.debug(
+                    "Removing redundant set information: {}".format(obj_set)
+                )
                 sets.pop(obj_set, None)

         self.log.debug("Gathering attribute changes to instance members..")

@@ -396,9 +397,9 @@ class CollectLook(pyblish.api.InstancePlugin):
             if con:
                 materials.extend(con)

-        self.log.info("Found materials:\n{}".format(materials))
+        self.log.debug("Found materials:\n{}".format(materials))

-        self.log.info("Found the following sets:\n{}".format(look_sets))
+        self.log.debug("Found the following sets:\n{}".format(look_sets))
         # Get the entire node chain of the look sets
         # history = cmds.listHistory(look_sets)
         history = []

@@ -456,7 +457,7 @@ class CollectLook(pyblish.api.InstancePlugin):
         instance.extend(shader for shader in look_sets if shader
                         not in instance_lookup)

-        self.log.info("Collected look for %s" % instance)
+        self.log.debug("Collected look for %s" % instance)

     def collect_sets(self, instance):
         """Collect all objectSets which are of importance for publishing

@@ -593,7 +594,7 @@ class CollectLook(pyblish.api.InstancePlugin):
             if attribute == "fileTextureName":
                 computed_attribute = node + ".computedFileTextureNamePattern"

-            self.log.info("  - file source: {}".format(source))
+            self.log.debug("  - file source: {}".format(source))
             color_space_attr = "{}.colorSpace".format(node)
             try:
                 color_space = cmds.getAttr(color_space_attr)

@@ -621,7 +622,7 @@ class CollectLook(pyblish.api.InstancePlugin):
                                   dependNode=True)
             )
             if not source and cmds.nodeType(node) in pxr_nodes:
-                self.log.info("Renderman: source is empty, skipping...")
+                self.log.debug("Renderman: source is empty, skipping...")
                 continue
             # We replace backslashes with forward slashes because V-Ray
             # can't handle the UDIM files with the backslashes in the

@@ -630,14 +631,14 @@ class CollectLook(pyblish.api.InstancePlugin):

             files = get_file_node_files(node)
             if len(files) == 0:
-                self.log.error("No valid files found from node `%s`" % node)
+                self.log.debug("No valid files found from node `%s`" % node)

-            self.log.info("collection of resource done:")
-            self.log.info("  - node: {}".format(node))
-            self.log.info("  - attribute: {}".format(attribute))
-            self.log.info("  - source: {}".format(source))
-            self.log.info("  - file: {}".format(files))
-            self.log.info("  - color space: {}".format(color_space))
+            self.log.debug("collection of resource done:")
+            self.log.debug("  - node: {}".format(node))
+            self.log.debug("  - attribute: {}".format(attribute))
+            self.log.debug("  - source: {}".format(source))
+            self.log.debug("  - file: {}".format(files))
+            self.log.debug("  - color space: {}".format(color_space))

             # Define the resource
             yield {

@@ -268,7 +268,7 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
             cmds.loadPlugin("MultiverseForMaya", quiet=True)
             import multiverse

-        self.log.info("Processing mvLook for '{}'".format(instance))
+        self.log.debug("Processing mvLook for '{}'".format(instance))

         nodes = set()
         for node in instance:

@@ -287,7 +287,7 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
         publishMipMap = instance.data["publishMipMap"]

         for node in nodes:
-            self.log.info("Getting resources for '{}'".format(node))
+            self.log.debug("Getting resources for '{}'".format(node))

             # We know what nodes need to be collected, now we need to
             # extract the materials overrides.

@@ -380,12 +380,12 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
         if len(files) == 0:
             self.log.error("No valid files found from node `%s`" % node)

-        self.log.info("collection of resource done:")
-        self.log.info("  - node: {}".format(node))
-        self.log.info("  - attribute: {}".format(fname_attrib))
-        self.log.info("  - source: {}".format(source))
-        self.log.info("  - file: {}".format(files))
-        self.log.info("  - color space: {}".format(color_space))
+        self.log.debug("collection of resource done:")
+        self.log.debug("  - node: {}".format(node))
+        self.log.debug("  - attribute: {}".format(fname_attrib))
+        self.log.debug("  - source: {}".format(source))
+        self.log.debug("  - file: {}".format(files))
+        self.log.debug("  - color space: {}".format(color_space))

         # Define the resource
         resource = {"node": node,

@@ -406,14 +406,14 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
         extra_files = []
         self.log.debug("Expecting MipMaps, going to look for them.")
         for fname in files:
-            self.log.info("Checking '{}' for mipmaps".format(fname))
+            self.log.debug("Checking '{}' for mipmaps".format(fname))
             if is_mipmap(fname):
                 self.log.debug(" - file is already MipMap, skipping.")
                 continue

             mipmap = get_mipmap(fname)
             if mipmap:
-                self.log.info(" mipmap found for '{}'".format(fname))
+                self.log.debug(" mipmap found for '{}'".format(fname))
                 extra_files.append(mipmap)
             else:
                 self.log.warning(" no mipmap found for '{}'".format(fname))

@@ -105,7 +105,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
                     "family": cmds.getAttr("{}.family".format(s)),
                 }
             )
-            self.log.info(" -> attach render to: {}".format(s))
+            self.log.debug(" -> attach render to: {}".format(s))

         layer_name = layer.name()

@@ -137,10 +137,10 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
         has_cameras = any(product.camera for product in render_products)
         assert has_cameras, "No render cameras found."

-        self.log.info("multipart: {}".format(
+        self.log.debug("multipart: {}".format(
             multipart))
         assert expected_files, "no file names were generated, this is a bug"
-        self.log.info(
+        self.log.debug(
             "expected files: {}".format(
                 json.dumps(expected_files, indent=4, sort_keys=True)
             )

@@ -175,7 +175,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
             publish_meta_path = os.path.dirname(full_path)
             aov_dict[aov_first_key] = full_paths
         full_exp_files = [aov_dict]
-        self.log.info(full_exp_files)
+        self.log.debug(full_exp_files)

         if publish_meta_path is None:
             raise KnownPublishError("Unable to detect any expected output "

@@ -227,7 +227,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
         if platform.system().lower() in ["linux", "darwin"]:
             common_publish_meta_path = "/" + common_publish_meta_path

-        self.log.info(
+        self.log.debug(
             "Publish meta path: {}".format(common_publish_meta_path))

         # Get layer specific settings, might be overrides

@@ -300,7 +300,7 @@ class CollectMayaRender(pyblish.api.InstancePlugin):
         )
         if rr_settings["enabled"]:
             data["rrPathName"] = instance.data.get("rrPathName")
-            self.log.info(data["rrPathName"])
+            self.log.debug(data["rrPathName"])

         if self.sync_workfile_version:
             data["version"] = context.data["version"]

@@ -37,7 +37,7 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):

         # Get renderer
         renderer = instance.data["renderer"]
-        self.log.info("Renderer found: {}".format(renderer))
+        self.log.debug("Renderer found: {}".format(renderer))

         rp_node_types = {"vray": ["VRayRenderElement", "VRayRenderElementSet"],
                          "arnold": ["aiAOV"],

@@ -66,8 +66,8 @@ class CollectRenderLayerAOVS(pyblish.api.InstancePlugin):

             result.append(render_pass)

-        self.log.info("Found {} render elements / AOVs for "
-                      "'{}'".format(len(result), instance.data["subset"]))
+        self.log.debug("Found {} render elements / AOVs for "
+                       "'{}'".format(len(result), instance.data["subset"]))

         instance.data["renderPasses"] = result

@@ -21,11 +21,12 @@ class CollectRenderableCamera(pyblish.api.InstancePlugin):
         else:
             layer = instance.data["renderlayer"]

-        self.log.info("layer: {}".format(layer))
         cameras = cmds.ls(type="camera", long=True)
-        renderable = [c for c in cameras if
-                      get_attr_in_layer("%s.renderable" % c, layer)]
+        renderable = [cam for cam in cameras if
+                      get_attr_in_layer("{}.renderable".format(cam), layer)]

-        self.log.info("Found cameras %s: %s" % (len(renderable), renderable))
+        self.log.debug(
+            "Found renderable cameras %s: %s", len(renderable), renderable
+        )

         instance.data["cameras"] = renderable

@@ -19,7 +19,7 @@ class CollectUnrealStaticMesh(pyblish.api.InstancePlugin):
         instance.data["geometryMembers"] = cmds.sets(
             geometry_set, query=True)

-        self.log.info("geometry: {}".format(
+        self.log.debug("geometry: {}".format(
             pformat(instance.data.get("geometryMembers"))))

         collision_set = [

@@ -29,7 +29,7 @@ class CollectUnrealStaticMesh(pyblish.api.InstancePlugin):
         instance.data["collisionMembers"] = cmds.sets(
             collision_set, query=True)

-        self.log.info("collisions: {}".format(
+        self.log.debug("collisions: {}".format(
             pformat(instance.data.get("collisionMembers"))))

         frame = cmds.currentTime(query=True)

@@ -67,5 +67,5 @@ class CollectXgen(pyblish.api.InstancePlugin):

         data["transfers"] = transfers

-        self.log.info(data)
+        self.log.debug(data)
         instance.data.update(data)

@@ -119,7 +119,6 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
         texture_filenames = []
         if image_search_paths:

-
             # TODO: Somehow this uses OS environment path separator, `:` vs `;`
             # Later on check whether this is pipeline OS cross-compatible.
             image_search_paths = [p for p in

@@ -130,13 +129,13 @@ class CollectYetiRig(pyblish.api.InstancePlugin):

             # List all related textures
             texture_filenames = cmds.pgYetiCommand(node, listTextures=True)
-            self.log.info("Found %i texture(s)" % len(texture_filenames))
+            self.log.debug("Found %i texture(s)" % len(texture_filenames))

             # Get all reference nodes
             reference_nodes = cmds.pgYetiGraph(node,
                                                listNodes=True,
                                                type="reference")
-            self.log.info("Found %i reference node(s)" % len(reference_nodes))
+            self.log.debug("Found %i reference node(s)" % len(reference_nodes))

         if texture_filenames and not image_search_paths:
             raise ValueError("pgYetiMaya node '%s' is missing the path to the "

@@ -100,7 +100,7 @@ class ExtractArnoldSceneSource(publish.Extractor):

         instance.data["representations"].append(representation)

-        self.log.info(
+        self.log.debug(
             "Extracted instance {} to: {}".format(instance.name, staging_dir)
         )

@@ -126,7 +126,7 @@ class ExtractArnoldSceneSource(publish.Extractor):
         instance.data["representations"].append(representation)

     def _extract(self, nodes, attribute_data, kwargs):
-        self.log.info(
+        self.log.debug(
             "Writing {} with:\n{}".format(kwargs["filename"], kwargs)
         )
         filenames = []

@@ -180,12 +180,12 @@ class ExtractArnoldSceneSource(publish.Extractor):

         with lib.attribute_values(attribute_data):
             with lib.maintained_selection():
-                self.log.info(
+                self.log.debug(
                     "Writing: {}".format(duplicate_nodes)
                 )
                 cmds.select(duplicate_nodes, noExpand=True)

-                self.log.info(
+                self.log.debug(
                     "Extracting ass sequence with: {}".format(kwargs)
                 )

@@ -194,6 +194,6 @@ class ExtractArnoldSceneSource(publish.Extractor):
         for file in exported_files:
             filenames.append(os.path.split(file)[1])

-        self.log.info("Exported: {}".format(filenames))
+        self.log.debug("Exported: {}".format(filenames))

         return filenames, nodes_by_id

@@ -27,7 +27,7 @@ class ExtractAssembly(publish.Extractor):
         json_filename = "{}.json".format(instance.name)
         json_path = os.path.join(staging_dir, json_filename)

-        self.log.info("Dumping scene data for debugging ..")
+        self.log.debug("Dumping scene data for debugging ..")
         with open(json_path, "w") as filepath:
             json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)

@@ -94,7 +94,7 @@ class ExtractCameraAlembic(publish.Extractor):
                 "Attributes to bake must be specified as a list"
             )
             for attr in self.bake_attributes:
-                self.log.info("Adding {} attribute".format(attr))
+                self.log.debug("Adding {} attribute".format(attr))
                 job_str += " -attr {0}".format(attr)

         with lib.evaluation("off"):

@@ -112,5 +112,5 @@ class ExtractCameraAlembic(publish.Extractor):
         }
         instance.data["representations"].append(representation)

-        self.log.info("Extracted instance '{0}' to: {1}".format(
+        self.log.debug("Extracted instance '{0}' to: {1}".format(
             instance.name, path))

@@ -111,7 +111,7 @@ class ExtractCameraMayaScene(publish.Extractor):
         for family in self.families:
             try:
                 self.scene_type = ext_mapping[family]
-                self.log.info(
+                self.log.debug(
                     "Using {} as scene type".format(self.scene_type))
                 break
             except KeyError:

@@ -151,7 +151,7 @@ class ExtractCameraMayaScene(publish.Extractor):
         with lib.evaluation("off"):
             with lib.suspended_refresh():
                 if bake_to_worldspace:
                    self.log.debug(
                         "Performing camera bakes: {}".format(transform))
                     baked = lib.bake_to_world_space(
                         transform,

@@ -186,7 +186,7 @@ class ExtractCameraMayaScene(publish.Extractor):
                         unlock(plug)
                         cmds.setAttr(plug, value)

-        self.log.info("Performing extraction..")
+        self.log.debug("Performing extraction..")
         cmds.select(cmds.ls(members, dag=True,
                             shapes=True, long=True), noExpand=True)
         cmds.file(path,

@@ -217,5 +217,5 @@ class ExtractCameraMayaScene(publish.Extractor):
         }
         instance.data["representations"].append(representation)

-        self.log.info("Extracted instance '{0}' to: {1}".format(
+        self.log.debug("Extracted instance '{0}' to: {1}".format(
             instance.name, path))

@@ -33,11 +33,11 @@ class ExtractFBX(publish.Extractor):
         # to format it into a string in a mel expression
         path = path.replace('\\', '/')

-        self.log.info("Extracting FBX to: {0}".format(path))
+        self.log.debug("Extracting FBX to: {0}".format(path))

         members = instance.data["setMembers"]
-        self.log.info("Members: {0}".format(members))
-        self.log.info("Instance: {0}".format(instance[:]))
+        self.log.debug("Members: {0}".format(members))
+        self.log.debug("Instance: {0}".format(instance[:]))

         fbx_exporter.set_options_from_instance(instance)

@@ -58,4 +58,4 @@ class ExtractFBX(publish.Extractor):
         }
         instance.data["representations"].append(representation)

-        self.log.info("Extract FBX successful to: {0}".format(path))
+        self.log.debug("Extract FBX successful to: {0}".format(path))

@@ -20,14 +20,10 @@ class ExtractGLB(publish.Extractor):
         filename = "{0}.glb".format(instance.name)
         path = os.path.join(staging_dir, filename)

-        self.log.info("Extracting GLB to: {}".format(path))
-
         cmds.loadPlugin("maya2glTF", quiet=True)

         nodes = instance[:]

-        self.log.info("Instance: {0}".format(nodes))
-
         start_frame = instance.data('frameStart') or \
                       int(cmds.playbackOptions(query=True,
                                                animationStartTime=True))  # noqa

@@ -48,6 +44,7 @@ class ExtractGLB(publish.Extractor):
             "vno": True  # visibleNodeOnly
         }

+        self.log.debug("Extracting GLB to: {}".format(path))
         with lib.maintained_selection():
             cmds.select(nodes, hi=True, noExpand=True)
             extract_gltf(staging_dir,

@@ -65,4 +62,4 @@ class ExtractGLB(publish.Extractor):
         }
         instance.data["representations"].append(representation)

-        self.log.info("Extract GLB successful to: {0}".format(path))
+        self.log.debug("Extract GLB successful to: {0}".format(path))

@@ -60,6 +60,6 @@ class ExtractGPUCache(publish.Extractor):

         instance.data["representations"].append(representation)

-        self.log.info(
+        self.log.debug(
             "Extracted instance {} to: {}".format(instance.name, staging_dir)
         )

@@ -46,7 +46,7 @@ class ExtractImportReference(publish.Extractor,
         for family in self.families:
             try:
                 self.scene_type = ext_mapping[family]
-                self.log.info(
+                self.log.debug(
                     "Using {} as scene type".format(self.scene_type))
                 break
             except KeyError:

@@ -69,7 +69,7 @@ class ExtractImportReference(publish.Extractor,
         reference_path = os.path.join(dir_path, ref_scene_name)
         tmp_path = os.path.dirname(current_name) + "/" + ref_scene_name

-        self.log.info("Performing extraction..")
+        self.log.debug("Performing extraction..")

         # This generates script for mayapy to take care of reference
         # importing outside current session. It is passing current scene

@@ -111,7 +111,7 @@ print("*** Done")
         # process until handles are closed by context manager.
         with tempfile.TemporaryDirectory() as tmp_dir_name:
             tmp_script_path = os.path.join(tmp_dir_name, "import_ref.py")
-            self.log.info("Using script file: {}".format(tmp_script_path))
+            self.log.debug("Using script file: {}".format(tmp_script_path))
             with open(tmp_script_path, "wt") as tmp:
                 tmp.write(script)

@@ -149,9 +149,9 @@ print("*** Done")
             "stagingDir": os.path.dirname(current_name),
             "outputName": "imported"
         }
-        self.log.info("%s" % ref_representation)
+        self.log.debug(ref_representation)

         instance.data["representations"].append(ref_representation)

-        self.log.info("Extracted instance '%s' to : '%s'" % (ref_scene_name,
-                                                             reference_path))
+        self.log.debug("Extracted instance '%s' to : '%s'" % (ref_scene_name,
+                                                              reference_path))

@@ -23,7 +23,7 @@ class ExtractLayout(publish.Extractor):
         stagingdir = self.staging_dir(instance)

         # Perform extraction
-        self.log.info("Performing extraction..")
+        self.log.debug("Performing extraction..")

         if "representations" not in instance.data:
             instance.data["representations"] = []

@@ -64,7 +64,7 @@ class ExtractLayout(publish.Extractor):
                 fields=["parent", "context.family"]
             )

-            self.log.info(representation)
+            self.log.debug(representation)

             version_id = representation.get("parent")
             family = representation.get("context").get("family")

@@ -159,5 +159,5 @@ class ExtractLayout(publish.Extractor):
         }
         instance.data["representations"].append(json_representation)

-        self.log.info("Extracted instance '%s' to: %s",
-                      instance.name, json_representation)
+        self.log.debug("Extracted instance '%s' to: %s",
+                       instance.name, json_representation)

@ -307,7 +307,7 @@ class MakeTX(TextureProcessor):
|
|||
|
||||
render_colorspace = color_management["rendering_space"]
|
||||
|
||||
self.log.info("tx: converting colorspace {0} "
|
||||
self.log.debug("tx: converting colorspace {0} "
|
||||
"-> {1}".format(colorspace,
|
||||
                              render_colorspace))
         args.extend(["--colorconvert", colorspace, render_colorspace])

@@ -331,7 +331,7 @@ class MakeTX(TextureProcessor):
         if not os.path.exists(resources_dir):
             os.makedirs(resources_dir)
 
-        self.log.info("Generating .tx file for %s .." % source)
+        self.log.debug("Generating .tx file for %s .." % source)
 
         subprocess_args = maketx_args + [
             "-v", # verbose

@@ -421,7 +421,7 @@ class ExtractLook(publish.Extractor):
         for family in self.families:
             try:
                 self.scene_type = ext_mapping[family]
-                self.log.info(
+                self.log.debug(
                     "Using {} as scene type".format(self.scene_type))
                 break
             except KeyError:

@@ -453,7 +453,7 @@ class ExtractLook(publish.Extractor):
         relationships = lookdata["relationships"]
         sets = list(relationships.keys())
         if not sets:
-            self.log.info("No sets found for the look")
+            self.log.debug("No sets found for the look")
             return
 
         # Specify texture processing executables to activate

@@ -485,7 +485,7 @@ class ExtractLook(publish.Extractor):
         remap = results["attrRemap"]
 
         # Extract in correct render layer
-        self.log.info("Extracting look maya scene file: {}".format(maya_path))
+        self.log.debug("Extracting look maya scene file: {}".format(maya_path))
         layer = instance.data.get("renderlayer", "defaultRenderLayer")
         with lib.renderlayer(layer):
             # TODO: Ensure membership edits don't become renderlayer overrides

@@ -511,12 +511,12 @@ class ExtractLook(publish.Extractor):
         )
 
         # Write the JSON data
-        self.log.info("Extract json..")
         data = {
             "attributes": lookdata["attributes"],
             "relationships": relationships
         }
 
+        self.log.debug("Extracting json file: {}".format(json_path))
         with open(json_path, "w") as f:
             json.dump(data, f)
 

@@ -557,8 +557,8 @@ class ExtractLook(publish.Extractor):
         # Source hash for the textures
         instance.data["sourceHashes"] = hashes
 
-        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
-                                                          maya_path))
+        self.log.debug("Extracted instance '%s' to: %s" % (instance.name,
+                                                           maya_path))
 
     def _set_resource_result_colorspace(self, resource, colorspace):
         """Update resource resulting colorspace after texture processing"""

@@ -589,14 +589,13 @@ class ExtractLook(publish.Extractor):
         resources = instance.data["resources"]
         color_management = lib.get_color_management_preferences()
 
-        # Temporary fix to NOT create hardlinks on windows machines
-        if platform.system().lower() == "windows":
-            self.log.info(
+        force_copy = instance.data.get("forceCopy", False)
+        if not force_copy and platform.system().lower() == "windows":
+            # Temporary fix to NOT create hardlinks on windows machines
+            self.log.warning(
                 "Forcing copy instead of hardlink due to issues on Windows..."
             )
             force_copy = True
-        else:
-            force_copy = instance.data.get("forceCopy", False)
 
         destinations_cache = {}
 

@@ -671,11 +670,11 @@ class ExtractLook(publish.Extractor):
             destination = get_resource_destination_cached(source)
             if force_copy or texture_result.transfer_mode == COPY:
                 transfers.append((source, destination))
-                self.log.info('file will be copied {} -> {}'.format(
+                self.log.debug('file will be copied {} -> {}'.format(
                     source, destination))
             elif texture_result.transfer_mode == HARDLINK:
                 hardlinks.append((source, destination))
-                self.log.info('file will be hardlinked {} -> {}'.format(
+                self.log.debug('file will be hardlinked {} -> {}'.format(
                     source, destination))
 
             # Store the hashes from hash to destination to include in the

@@ -707,7 +706,7 @@ class ExtractLook(publish.Extractor):
                 color_space_attr = "{}.colorSpace".format(node)
                 remap[color_space_attr] = resource["result_color_space"]
 
-        self.log.info("Finished remapping destinations ...")
+        self.log.debug("Finished remapping destinations ...")
 
         return {
             "fileTransfers": transfers,

@@ -815,8 +814,8 @@ class ExtractLook(publish.Extractor):
         if not processed_result:
            raise RuntimeError("Texture Processor {} returned "
                               "no result.".format(processor))
-        self.log.info("Generated processed "
-                      "texture: {}".format(processed_result.path))
+        self.log.debug("Generated processed "
+                       "texture: {}".format(processed_result.path))
 
        # TODO: Currently all processors force copy instead of allowing
        #   hardlinks using source hashes. This should be refactored

@@ -827,7 +826,7 @@ class ExtractLook(publish.Extractor):
        if not force_copy:
            existing = self._get_existing_hashed_texture(filepath)
            if existing:
-               self.log.info("Found hash in database, preparing hardlink..")
+               self.log.debug("Found hash in database, preparing hardlink..")
                return TextureResult(
                    path=filepath,
                    file_hash=texture_hash,
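A note on the hardlink hunk above: the rework makes an explicit `forceCopy` flag short-circuit the Windows fallback, and raises the message to warning level since it changes behavior. A minimal standalone sketch of that decision, assuming a simple source/destination transfer helper (the function name and signature are illustrative, not from the repository):

```python
import os
import platform
import shutil


def transfer_resource(source, destination, force_copy=False):
    """Copy or hardlink a published resource to its destination.

    Hardlinks save disk space but have proven unreliable on Windows,
    so the platform check forces a copy there unless already requested.
    """
    if not force_copy and platform.system().lower() == "windows":
        # Temporary fix to NOT create hardlinks on Windows machines
        force_copy = True

    dirname = os.path.dirname(destination)
    if dirname and not os.path.exists(dirname):
        os.makedirs(dirname)

    if force_copy:
        shutil.copy2(source, destination)
    else:
        # Hardlink; raises OSError when crossing filesystems
        os.link(source, destination)
```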
@@ -34,7 +34,7 @@ class ExtractMayaSceneRaw(publish.Extractor):
         for family in self.families:
             try:
                 self.scene_type = ext_mapping[family]
-                self.log.info(
+                self.log.debug(
                     "Using {} as scene type".format(self.scene_type))
                 break
             except KeyError:

@@ -63,7 +63,7 @@ class ExtractMayaSceneRaw(publish.Extractor):
             selection += self._get_loaded_containers(members)
 
         # Perform extraction
-        self.log.info("Performing extraction ...")
+        self.log.debug("Performing extraction ...")
         with maintained_selection():
             cmds.select(selection, noExpand=True)
             cmds.file(path,

@@ -87,7 +87,8 @@ class ExtractMayaSceneRaw(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
+        self.log.debug("Extracted instance '%s' to: %s" % (instance.name,
+                                                           path))
 
     @staticmethod
     def _get_loaded_containers(members):

@@ -44,7 +44,7 @@ class ExtractModel(publish.Extractor,
         for family in self.families:
             try:
                 self.scene_type = ext_mapping[family]
-                self.log.info(
+                self.log.debug(
                     "Using {} as scene type".format(self.scene_type))
                 break
             except KeyError:

@@ -56,7 +56,7 @@ class ExtractModel(publish.Extractor,
         path = os.path.join(stagingdir, filename)
 
         # Perform extraction
-        self.log.info("Performing extraction ...")
+        self.log.debug("Performing extraction ...")
 
         # Get only the shape contents we need in such a way that we avoid
         # taking along intermediateObjects

@@ -102,4 +102,5 @@ class ExtractModel(publish.Extractor,
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
+        self.log.debug("Extracted instance '%s' to: %s" % (instance.name,
+                                                           path))

@@ -101,10 +101,10 @@ class ExtractMultiverseLook(publish.Extractor):
 
         # Parse export options
         options = self.default_options
-        self.log.info("Export options: {0}".format(options))
+        self.log.debug("Export options: {0}".format(options))
 
         # Perform extraction
-        self.log.info("Performing extraction ...")
+        self.log.debug("Performing extraction ...")
 
         with maintained_selection():
             members = instance.data("setMembers")

@@ -114,7 +114,7 @@ class ExtractMultiverseLook(publish.Extractor):
                               type="mvUsdCompoundShape",
                               noIntermediate=True,
                               long=True)
-            self.log.info('Collected object {}'.format(members))
+            self.log.debug('Collected object {}'.format(members))
             if len(members) > 1:
                 self.log.error('More than one member: {}'.format(members))
 

@@ -153,5 +153,5 @@ class ExtractMultiverseLook(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance {} to {}".format(
+        self.log.debug("Extracted instance {} to {}".format(
             instance.name, file_path))

@@ -150,7 +150,6 @@ class ExtractMultiverseUsd(publish.Extractor):
         return options
 
     def get_default_options(self):
-        self.log.info("ExtractMultiverseUsd get_default_options")
         return self.default_options
 
     def filter_members(self, members):

@@ -173,19 +172,19 @@ class ExtractMultiverseUsd(publish.Extractor):
         # Parse export options
         options = self.get_default_options()
         options = self.parse_overrides(instance, options)
-        self.log.info("Export options: {0}".format(options))
+        self.log.debug("Export options: {0}".format(options))
 
         # Perform extraction
-        self.log.info("Performing extraction ...")
+        self.log.debug("Performing extraction ...")
 
         with maintained_selection():
             members = instance.data("setMembers")
-            self.log.info('Collected objects: {}'.format(members))
+            self.log.debug('Collected objects: {}'.format(members))
             members = self.filter_members(members)
             if not members:
                 self.log.error('No members!')
                 return
-            self.log.info(' - filtered: {}'.format(members))
+            self.log.debug(' - filtered: {}'.format(members))
 
             import multiverse
 

@@ -229,7 +228,7 @@ class ExtractMultiverseUsd(publish.Extractor):
                 self.log.debug(" - {}={}".format(key, value))
                 setattr(asset_write_opts, key, value)
 
-            self.log.info('WriteAsset: {} / {}'.format(file_path, members))
+            self.log.debug('WriteAsset: {} / {}'.format(file_path, members))
             multiverse.WriteAsset(file_path, members, asset_write_opts)
 
         if "representations" not in instance.data:

@@ -243,7 +242,7 @@ class ExtractMultiverseUsd(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance {} to {}".format(
+        self.log.debug("Extracted instance {} to {}".format(
             instance.name, file_path))
 
 

@@ -105,14 +105,14 @@ class ExtractMultiverseUsdComposition(publish.Extractor):
         # Parse export options
         options = self.default_options
         options = self.parse_overrides(instance, options)
-        self.log.info("Export options: {0}".format(options))
+        self.log.debug("Export options: {0}".format(options))
 
         # Perform extraction
-        self.log.info("Performing extraction ...")
+        self.log.debug("Performing extraction ...")
 
         with maintained_selection():
             members = instance.data("setMembers")
-            self.log.info('Collected object {}'.format(members))
+            self.log.debug('Collected object {}'.format(members))
 
             import multiverse
 

@@ -175,5 +175,5 @@ class ExtractMultiverseUsdComposition(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance {} to {}".format(
-            instance.name, file_path))
+        self.log.debug("Extracted instance {} to {}".format(instance.name,
+                                                            file_path))
@@ -87,10 +87,10 @@ class ExtractMultiverseUsdOverride(publish.Extractor):
 
         # Parse export options
         options = self.default_options
-        self.log.info("Export options: {0}".format(options))
+        self.log.debug("Export options: {0}".format(options))
 
         # Perform extraction
-        self.log.info("Performing extraction ...")
+        self.log.debug("Performing extraction ...")
 
         with maintained_selection():
             members = instance.data("setMembers")

@@ -100,7 +100,7 @@ class ExtractMultiverseUsdOverride(publish.Extractor):
                 type="mvUsdCompoundShape",
                 noIntermediate=True,
                 long=True)
-            self.log.info("Collected object {}".format(members))
+            self.log.debug("Collected object {}".format(members))
 
             # TODO: Deal with asset, composition, override with options.
             import multiverse

@@ -153,5 +153,5 @@ class ExtractMultiverseUsdOverride(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance {} to {}".format(
+        self.log.debug("Extracted instance {} to {}".format(
             instance.name, file_path))

@@ -30,7 +30,7 @@ class ExtractObj(publish.Extractor):
         # The export requires forward slashes because we need to
         # format it into a string in a mel expression
 
-        self.log.info("Extracting OBJ to: {0}".format(path))
+        self.log.debug("Extracting OBJ to: {0}".format(path))
 
         members = instance.data("setMembers")
         members = cmds.ls(members,

@@ -39,8 +39,8 @@ class ExtractObj(publish.Extractor):
                           type=("mesh", "nurbsCurve"),
                           noIntermediate=True,
                           long=True)
-        self.log.info("Members: {0}".format(members))
-        self.log.info("Instance: {0}".format(instance[:]))
+        self.log.debug("Members: {0}".format(members))
+        self.log.debug("Instance: {0}".format(instance[:]))
 
         if not cmds.pluginInfo('objExport', query=True, loaded=True):
             cmds.loadPlugin('objExport')

@@ -74,4 +74,4 @@ class ExtractObj(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extract OBJ successful to: {0}".format(path))
+        self.log.debug("Extract OBJ successful to: {0}".format(path))

@@ -48,7 +48,7 @@ class ExtractPlayblast(publish.Extractor):
         self.log.debug("playblast path {}".format(path))
 
     def process(self, instance):
-        self.log.info("Extracting capture..")
+        self.log.debug("Extracting capture..")
 
         # get scene fps
         fps = instance.data.get("fps") or instance.context.data.get("fps")

@@ -62,7 +62,7 @@ class ExtractPlayblast(publish.Extractor):
         if end is None:
             end = cmds.playbackOptions(query=True, animationEndTime=True)
 
-        self.log.info("start: {}, end: {}".format(start, end))
+        self.log.debug("start: {}, end: {}".format(start, end))
 
         # get cameras
         camera = instance.data["review_camera"]

@@ -119,7 +119,7 @@ class ExtractPlayblast(publish.Extractor):
         filename = "{0}".format(instance.name)
         path = os.path.join(stagingdir, filename)
 
-        self.log.info("Outputting images to %s" % path)
+        self.log.debug("Outputting images to %s" % path)
 
         preset["filename"] = path
         preset["overwrite"] = True

@@ -237,7 +237,7 @@ class ExtractPlayblast(publish.Extractor):
             self.log.debug("collection head {}".format(filebase))
             if filebase in filename:
                 frame_collection = collection
-                self.log.info(
+                self.log.debug(
                     "we found collection of interest {}".format(
                         str(frame_collection)))
 
@@ -109,11 +109,11 @@ class ExtractAlembic(publish.Extractor):
 
         instance.context.data["cleanupFullPaths"].append(path)
 
-        self.log.info("Extracted {} to {}".format(instance, dirname))
+        self.log.debug("Extracted {} to {}".format(instance, dirname))
 
         # Extract proxy.
         if not instance.data.get("proxy"):
-            self.log.info("No proxy nodes found. Skipping proxy extraction.")
+            self.log.debug("No proxy nodes found. Skipping proxy extraction.")
             return
 
         path = path.replace(".abc", "_proxy.abc")

@@ -32,7 +32,7 @@ class ExtractProxyAlembic(publish.Extractor):
         attr_prefixes = instance.data.get("attrPrefix", "").split(";")
         attr_prefixes = [value for value in attr_prefixes if value.strip()]
 
-        self.log.info("Extracting Proxy Alembic..")
+        self.log.debug("Extracting Proxy Alembic..")
         dirname = self.staging_dir(instance)
 
         filename = "{name}.abc".format(**instance.data)

@@ -82,7 +82,7 @@ class ExtractProxyAlembic(publish.Extractor):
 
         instance.context.data["cleanupFullPaths"].append(path)
 
-        self.log.info("Extracted {} to {}".format(instance, dirname))
+        self.log.debug("Extracted {} to {}".format(instance, dirname))
         # remove the bounding box
         bbox_master = cmds.ls("bbox_grp")
         cmds.delete(bbox_master)

@@ -59,7 +59,7 @@ class ExtractRedshiftProxy(publish.Extractor):
         # vertex_colors = instance.data.get("vertexColors", False)
 
         # Write out rs file
-        self.log.info("Writing: '%s'" % file_path)
+        self.log.debug("Writing: '%s'" % file_path)
         with maintained_selection():
             cmds.select(instance.data["setMembers"], noExpand=True)
             cmds.file(file_path,

@@ -82,5 +82,5 @@ class ExtractRedshiftProxy(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance '%s' to: %s"
-                      % (instance.name, staging_dir))
+        self.log.debug("Extracted instance '%s' to: %s"
+                       % (instance.name, staging_dir))

@@ -37,5 +37,5 @@ class ExtractRenderSetup(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info(
+        self.log.debug(
             "Extracted instance '%s' to: %s" % (instance.name, json_path))

@@ -27,7 +27,7 @@ class ExtractRig(publish.Extractor):
         for family in self.families:
             try:
                 self.scene_type = ext_mapping[family]
-                self.log.info(
+                self.log.debug(
                     "Using '.{}' as scene type".format(self.scene_type))
                 break
             except AttributeError:

@@ -39,7 +39,7 @@ class ExtractRig(publish.Extractor):
         path = os.path.join(dir_path, filename)
 
         # Perform extraction
-        self.log.info("Performing extraction ...")
+        self.log.debug("Performing extraction ...")
         with maintained_selection():
             cmds.select(instance, noExpand=True)
             cmds.file(path,

@@ -63,4 +63,4 @@ class ExtractRig(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
+        self.log.debug("Extracted instance '%s' to: %s", instance.name, path)
@@ -24,7 +24,7 @@ class ExtractThumbnail(publish.Extractor):
     families = ["review"]
 
     def process(self, instance):
-        self.log.info("Extracting capture..")
+        self.log.debug("Extracting capture..")
 
         camera = instance.data["review_camera"]
 

@@ -96,7 +96,7 @@ class ExtractThumbnail(publish.Extractor):
         filename = "{0}".format(instance.name)
         path = os.path.join(dst_staging, filename)
 
-        self.log.info("Outputting images to %s" % path)
+        self.log.debug("Outputting images to %s" % path)
 
         preset["filename"] = path
         preset["overwrite"] = True

@@ -159,7 +159,7 @@ class ExtractThumbnail(publish.Extractor):
 
         _, thumbnail = os.path.split(playblast)
 
-        self.log.info("file list {}".format(thumbnail))
+        self.log.debug("file list {}".format(thumbnail))
 
         if "representations" not in instance.data:
             instance.data["representations"] = []

@@ -57,9 +57,9 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
         # to format it into a string in a mel expression
         path = path.replace('\\', '/')
 
-        self.log.info("Extracting ABC to: {0}".format(path))
-        self.log.info("Members: {0}".format(nodes))
-        self.log.info("Instance: {0}".format(instance[:]))
+        self.log.debug("Extracting ABC to: {0}".format(path))
+        self.log.debug("Members: {0}".format(nodes))
+        self.log.debug("Instance: {0}".format(instance[:]))
 
         options = {
             "step": instance.data.get("step", 1.0),

@@ -74,7 +74,7 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
             "worldSpace": instance.data.get("worldSpace", True)
         }
 
-        self.log.info("Options: {}".format(options))
+        self.log.debug("Options: {}".format(options))
 
         if int(cmds.about(version=True)) >= 2017:
             # Since Maya 2017 alembic supports multiple uv sets - write them.

@@ -105,4 +105,4 @@ class ExtractUnrealSkeletalMeshAbc(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extract ABC successful to: {0}".format(path))
+        self.log.debug("Extract ABC successful to: {0}".format(path))

@@ -46,9 +46,9 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor):
         # to format it into a string in a mel expression
         path = path.replace('\\', '/')
 
-        self.log.info("Extracting FBX to: {0}".format(path))
-        self.log.info("Members: {0}".format(to_extract))
-        self.log.info("Instance: {0}".format(instance[:]))
+        self.log.debug("Extracting FBX to: {0}".format(path))
+        self.log.debug("Members: {0}".format(to_extract))
+        self.log.debug("Instance: {0}".format(instance[:]))
 
         fbx_exporter.set_options_from_instance(instance)
 

@@ -70,7 +70,7 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor):
             renamed_to_extract.append("|".join(node_path))
 
         with renamed(original_parent, parent_node):
-            self.log.info("Extracting: {}".format(renamed_to_extract, path))
+            self.log.debug("Extracting: {}".format(renamed_to_extract, path))
             fbx_exporter.export(renamed_to_extract, path)
 
         if "representations" not in instance.data:

@@ -84,4 +84,4 @@ class ExtractUnrealSkeletalMeshFbx(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extract FBX successful to: {0}".format(path))
+        self.log.debug("Extract FBX successful to: {0}".format(path))

@@ -37,15 +37,15 @@ class ExtractUnrealStaticMesh(publish.Extractor):
         # to format it into a string in a mel expression
         path = path.replace('\\', '/')
 
-        self.log.info("Extracting FBX to: {0}".format(path))
-        self.log.info("Members: {0}".format(members))
-        self.log.info("Instance: {0}".format(instance[:]))
+        self.log.debug("Extracting FBX to: {0}".format(path))
+        self.log.debug("Members: {0}".format(members))
+        self.log.debug("Instance: {0}".format(instance[:]))
 
         fbx_exporter.set_options_from_instance(instance)
 
         with maintained_selection():
             with parent_nodes(members):
-                self.log.info("Un-parenting: {}".format(members))
+                self.log.debug("Un-parenting: {}".format(members))
                 fbx_exporter.export(members, path)
 
         if "representations" not in instance.data:

@@ -59,4 +59,4 @@ class ExtractUnrealStaticMesh(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extract FBX successful to: {0}".format(path))
+        self.log.debug("Extract FBX successful to: {0}".format(path))
@@ -43,7 +43,7 @@ class ExtractVRayProxy(publish.Extractor):
         vertex_colors = instance.data.get("vertexColors", False)
 
         # Write out vrmesh file
-        self.log.info("Writing: '%s'" % file_path)
+        self.log.debug("Writing: '%s'" % file_path)
         with maintained_selection():
             cmds.select(instance.data["setMembers"], noExpand=True)
             cmds.vrayCreateProxy(exportType=1,

@@ -68,5 +68,5 @@ class ExtractVRayProxy(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance '%s' to: %s"
-                      % (instance.name, staging_dir))
+        self.log.debug("Extracted instance '%s' to: %s"
+                       % (instance.name, staging_dir))

@@ -20,13 +20,13 @@ class ExtractVrayscene(publish.Extractor):
     def process(self, instance):
         """Plugin entry point."""
         if instance.data.get("exportOnFarm"):
-            self.log.info("vrayscenes will be exported on farm.")
+            self.log.debug("vrayscenes will be exported on farm.")
             raise NotImplementedError(
                 "exporting vrayscenes is not implemented")
 
         # handle sequence
         if instance.data.get("vraySceneMultipleFiles"):
-            self.log.info("vrayscenes will be exported on farm.")
+            self.log.debug("vrayscenes will be exported on farm.")
             raise NotImplementedError(
                 "exporting vrayscene sequences not implemented yet")
 

@@ -40,7 +40,6 @@ class ExtractVrayscene(publish.Extractor):
         layer_name = instance.data.get("layer")
 
         staging_dir = self.staging_dir(instance)
-        self.log.info("staging: {}".format(staging_dir))
         template = cmds.getAttr("{}.vrscene_filename".format(node))
         start_frame = instance.data.get(
             "frameStartHandle") if instance.data.get(

@@ -56,21 +55,21 @@ class ExtractVrayscene(publish.Extractor):
             staging_dir, "vrayscene", *formatted_name.split("/"))
 
         # Write out vrscene file
-        self.log.info("Writing: '%s'" % file_path)
+        self.log.debug("Writing: '%s'" % file_path)
         with maintained_selection():
             if "*" not in instance.data["setMembers"]:
-                self.log.info(
+                self.log.debug(
                     "Exporting: {}".format(instance.data["setMembers"]))
                 set_members = instance.data["setMembers"]
                 cmds.select(set_members, noExpand=True)
             else:
-                self.log.info("Exporting all ...")
+                self.log.debug("Exporting all ...")
                 set_members = cmds.ls(
                     long=True, objectsOnly=True,
                     geometry=True, lights=True, cameras=True)
                 cmds.select(set_members, noExpand=True)
 
-            self.log.info("Appending layer name {}".format(layer_name))
+            self.log.debug("Appending layer name {}".format(layer_name))
             set_members.append(layer_name)
 
             export_in_rs_layer(

@@ -93,8 +92,8 @@ class ExtractVrayscene(publish.Extractor):
         }
         instance.data["representations"].append(representation)
 
-        self.log.info("Extracted instance '%s' to: %s"
-                      % (instance.name, staging_dir))
+        self.log.debug("Extracted instance '%s' to: %s"
+                       % (instance.name, staging_dir))
 
     @staticmethod
     def format_vray_output_filename(
@@ -241,7 +241,7 @@ class ExtractWorkfileXgen(publish.Extractor):
             data[palette] = {attr: old_value}
 
             cmds.setAttr(node_attr, value, type="string")
-            self.log.info(
+            self.log.debug(
                 "Setting \"{}\" on \"{}\"".format(value, node_attr)
             )
 

@@ -77,7 +77,7 @@ class ExtractXgen(publish.Extractor):
         xgenm.exportPalette(
             instance.data["xgmPalette"].replace("|", ""), temp_xgen_path
         )
-        self.log.info("Extracted to {}".format(temp_xgen_path))
+        self.log.debug("Extracted to {}".format(temp_xgen_path))
 
         # Import xgen onto the duplicate.
         with maintained_selection():

@@ -118,7 +118,7 @@ class ExtractXgen(publish.Extractor):
             expressions=True
         )
 
-        self.log.info("Extracted to {}".format(maya_filepath))
+        self.log.debug("Extracted to {}".format(maya_filepath))
 
         if os.path.exists(temp_xgen_path):
             os.remove(temp_xgen_path)

@@ -39,7 +39,7 @@ class ExtractYetiCache(publish.Extractor):
         else:
             kwargs.update({"samples": samples})
 
-        self.log.info(
+        self.log.debug(
             "Writing out cache {} - {}".format(start_frame, end_frame))
         # Start writing the files for snap shot
         # <NAME> will be replace by the Yeti node name

@@ -53,7 +53,7 @@ class ExtractYetiCache(publish.Extractor):
 
         cache_files = [x for x in os.listdir(dirname) if x.endswith(".fur")]
 
-        self.log.info("Writing metadata file")
+        self.log.debug("Writing metadata file")
         settings = instance.data["fursettings"]
         fursettings_path = os.path.join(dirname, "yeti.fursettings")
         with open(fursettings_path, "w") as fp:

@@ -63,7 +63,7 @@ class ExtractYetiCache(publish.Extractor):
         if "representations" not in instance.data:
             instance.data["representations"] = []
 
-        self.log.info("cache files: {}".format(cache_files[0]))
+        self.log.debug("cache files: {}".format(cache_files[0]))
 
         # Workaround: We do not explicitly register these files with the
         # representation solely so that we can write multiple sequences

@@ -87,4 +87,4 @@ class ExtractYetiCache(publish.Extractor):
             }
         )
 
-        self.log.info("Extracted {} to {}".format(instance, dirname))
+        self.log.debug("Extracted {} to {}".format(instance, dirname))

@@ -109,7 +109,7 @@ class ExtractYetiRig(publish.Extractor):
         for family in self.families:
             try:
                 self.scene_type = ext_mapping[family]
-                self.log.info(
+                self.log.debug(
                     "Using {} as scene type".format(self.scene_type))
                 break
             except KeyError:

@@ -127,7 +127,7 @@ class ExtractYetiRig(publish.Extractor):
         maya_path = os.path.join(dirname,
                                  "yeti_rig.{}".format(self.scene_type))
 
-        self.log.info("Writing metadata file")
+        self.log.debug("Writing metadata file: {}".format(settings_path))
 
         image_search_path = resources_dir = instance.data["resourcesDir"]
 

@@ -147,7 +147,7 @@ class ExtractYetiRig(publish.Extractor):
             dst = os.path.join(image_search_path, os.path.basename(file))
             instance.data['transfers'].append([src, dst])
 
-            self.log.info("adding transfer {} -> {}". format(src, dst))
+            self.log.debug("adding transfer {} -> {}". format(src, dst))
 
         # Ensure the imageSearchPath is being remapped to the publish folder
         attr_value = {"%s.imageSearchPath" % n: str(image_search_path) for

@@ -182,7 +182,7 @@ class ExtractYetiRig(publish.Extractor):
         if "representations" not in instance.data:
             instance.data["representations"] = []
 
-        self.log.info("rig file: {}".format(maya_path))
+        self.log.debug("rig file: {}".format(maya_path))
         instance.data["representations"].append(
             {
                 'name': self.scene_type,

@@ -191,7 +191,7 @@ class ExtractYetiRig(publish.Extractor):
                 'stagingDir': dirname
             }
         )
-        self.log.info("settings file: {}".format(settings_path))
+        self.log.debug("settings file: {}".format(settings_path))
        instance.data["representations"].append(
            {
                'name': 'rigsettings',

@@ -201,6 +201,6 @@ class ExtractYetiRig(publish.Extractor):
            }
        )
 
-       self.log.info("Extracted {} to {}".format(instance, dirname))
+       self.log.debug("Extracted {} to {}".format(instance, dirname))
 
        cmds.select(clear=True)
@@ -23,7 +23,7 @@ class ResetXgenAttributes(pyblish.api.InstancePlugin):
         for palette, data in xgen_attributes.items():
             for attr, value in data.items():
                 node_attr = "{}.{}".format(palette, attr)
-                self.log.info(
+                self.log.debug(
                     "Setting \"{}\" on \"{}\"".format(value, node_attr)
                 )
                 cmds.setAttr(node_attr, value, type="string")

@@ -32,5 +32,5 @@ class ResetXgenAttributes(pyblish.api.InstancePlugin):
         # Need to save the scene, cause the attribute changes above does not
        # mark the scene as modified so user can exit without committing the
        # changes.
-        self.log.info("Saving changes.")
+        self.log.debug("Saving changes.")
         cmds.file(save=True)

@@ -215,9 +215,9 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
        :rtype: int
        :raises: Exception if template ID isn't found
        """
-        self.log.info("Trying to find template for [{}]".format(renderer))
+        self.log.debug("Trying to find template for [{}]".format(renderer))
        mapped = _get_template_id(renderer)
-        self.log.info("got id [{}]".format(mapped))
+        self.log.debug("got id [{}]".format(mapped))
        return self._templates.get(mapped)
 
    def _submit(self, payload):

@@ -453,8 +453,8 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
 
        self.preflight_check(instance)
 
-        self.log.info("Submitting ...")
-        self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+        self.log.debug("Submitting ...")
+        self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
 
        response = self._submit(payload)
        # response = requests.post(url, json=payload)

@@ -20,7 +20,7 @@ class ValidateAssemblyName(pyblish.api.InstancePlugin):
 
    @classmethod
    def get_invalid(cls, instance):
-        cls.log.info("Checking name of {}".format(instance.name))
+        cls.log.debug("Checking name of {}".format(instance.name))
 
        content_instance = instance.data.get("setMembers", None)
        if not content_instance:

@@ -23,7 +23,7 @@ class ValidateAssemblyNamespaces(pyblish.api.InstancePlugin):
 
    def process(self, instance):
 
-        self.log.info("Checking namespace for %s" % instance.name)
+        self.log.debug("Checking namespace for %s" % instance.name)
        if self.get_invalid(instance):
            raise PublishValidationError("Nested namespaces found")
 

@@ -47,10 +47,10 @@ class ValidateFrameRange(pyblish.api.InstancePlugin,
 
        context = instance.context
        if instance.data.get("tileRendering"):
-            self.log.info((
+            self.log.debug(
                "Skipping frame range validation because "
                "tile rendering is enabled."
-            ))
+            )
            return
 
        frame_start_handle = int(context.data.get("frameStartHandle"))
@@ -75,7 +75,7 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin):
        """
 
        meshes = cmds.ls(instance, type="mesh", long=True)
-        cls.log.info("meshes: {}".format(meshes))
+        cls.log.debug("meshes: {}".format(meshes))
        # load the glsl shader plugin
        cmds.loadPlugin("glslShader", quiet=True)
 

@@ -96,8 +96,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin):
            cls.log.warning("ogsfx shader file "
                            "not found in {}".format(ogsfx_path))
 
-            cls.log.info("Find the ogsfx shader file in "
-                         "default maya directory...")
+            cls.log.debug("Searching the ogsfx shader file in "
+                          "default maya directory...")
            # re-direct to search the ogsfx path in maya_dir
            ogsfx_path = os.getenv("MAYA_APP_DIR") + ogsfx_path
            if not os.path.exists(ogsfx_path):

@@ -130,8 +130,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin):
    @classmethod
    def pbs_shader_conversion(cls, main_shader, glsl):
 
-        cls.log.info("StringrayPBS detected "
-                     "-> Can do texture conversion")
+        cls.log.debug("StringrayPBS detected "
+                      "-> Can do texture conversion")
 
        for shader in main_shader:
            # get the file textures related to the PBS Shader

@@ -168,8 +168,8 @@ class ValidateGLSLMaterial(pyblish.api.InstancePlugin):
 
    @classmethod
    def arnold_shader_conversion(cls, main_shader, glsl):
-        cls.log.info("aiStandardSurface detected "
-                     "-> Can do texture conversion")
+        cls.log.debug("aiStandardSurface detected "
+                      "-> Can do texture conversion")
 
        for shader in main_shader:
            # get the file textures related to the PBS Shader

@@ -21,7 +21,7 @@ class ValidateInstancerContent(pyblish.api.InstancePlugin):
        members = instance.data['setMembers']
        export_members = instance.data['exactExportMembers']
 
-        self.log.info("Contents {0}".format(members))
+        self.log.debug("Contents {0}".format(members))
 
        if not len(members) == len(cmds.ls(members, type="instancer")):
            self.log.error("Instancer can only contain instancers")

@@ -5,8 +5,6 @@ import pyblish.api
 
 from openpype.pipeline.publish import PublishValidationError
 
-VERBOSE = False
-
 
 def is_cache_resource(resource):
    """Return whether resource is a cacheFile resource"""

@@ -73,9 +71,6 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin):
        xml = all_files.pop(0)
        assert xml.endswith(".xml")
 
-        if VERBOSE:
-            cls.log.info("Checking: {0}".format(all_files))
-
        # Ensure all files exist (including ticks)
        # The remainder file paths should be the .mcx or .mcc files
        valdidate_files(all_files)

@@ -129,8 +124,8 @@ class ValidateInstancerFrameRanges(pyblish.api.InstancePlugin):
        # for the frames required by the time range.
        if ticks:
            ticks = list(sorted(ticks))
-            cls.log.info("Found ticks: {0} "
-                         "(substeps: {1})".format(ticks, len(ticks)))
+            cls.log.debug("Found ticks: {0} "
+                          "(substeps: {1})".format(ticks, len(ticks)))
 
            # Check all frames except the last since we don't
            # require subframes after our time range.
@@ -36,28 +36,34 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin,
 
    optional = True
 
-    @classmethod
-    def apply_settings(cls, project_settings, system_settings):
-        # todo: this should not be done this way
-        attr = "defaultRenderGlobals.currentRenderer"
-        cls.active = cmds.getAttr(attr).lower() == "arnold"
+    # cache (will be `dict` when cached)
+    arnold_mesh_defaults = None
 
    @classmethod
    def get_default_attributes(cls):
+
+        if cls.arnold_mesh_defaults is not None:
+            # Use from cache
+            return cls.arnold_mesh_defaults
+
        # Get default arnold attribute values for mesh type.
        defaults = {}
        with delete_after() as tmp:
-            transform = cmds.createNode("transform")
+            transform = cmds.createNode("transform", skipSelect=True)
            tmp.append(transform)
 
-            mesh = cmds.createNode("mesh", parent=transform)
-            for attr in cmds.listAttr(mesh, string="ai*"):
+            mesh = cmds.createNode("mesh", parent=transform, skipSelect=True)
+            arnold_attributes = cmds.listAttr(mesh,
+                                              string="ai*",
+                                              fromPlugin=True) or []
+            for attr in arnold_attributes:
                plug = "{}.{}".format(mesh, attr)
                try:
                    defaults[attr] = get_attribute(plug)
                except PublishValidationError:
                    cls.log.debug("Ignoring arnold attribute: {}".format(attr))
 
+        cls.arnold_mesh_defaults = defaults  # assign cache
        return defaults
 
    @classmethod

@@ -109,6 +115,10 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin,
        if not self.is_active(instance.data):
            return
 
+        if not cmds.pluginInfo("mtoa", query=True, loaded=True):
+            # Arnold attributes only exist if plug-in is loaded
+            return
+
        invalid = self.get_invalid_attributes(instance, compute=True)
        if invalid:
            raise PublishValidationError(
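The `ValidateMeshArnoldAttributes` hunks above replace a renderer check in `apply_settings` with a lazily built class-level cache of default attribute values, plus an early return when the `mtoa` plug-in is not loaded. A sketch of the caching pattern in isolation, with a stand-in for the expensive Maya query (the class name and values below are illustrative, not from the repository):

```python
class ArnoldMeshDefaults:
    """Compute-once cache stored on the class.

    `None` marks the cache as empty; a dict marks it as filled, so the
    expensive lookup runs at most once per process, shared by instances.
    """

    _defaults = None  # becomes a dict after the first call

    @classmethod
    def get(cls):
        if cls._defaults is not None:
            # Use from cache
            return cls._defaults

        # Stand-in for creating a temporary mesh and reading its
        # "ai*" plug-in attributes through maya.cmds.
        defaults = {"aiOpaque": True, "aiSubdivType": 0}

        cls._defaults = defaults  # assign cache
        return defaults


# The second call returns the very same dict without recomputing.
assert ArnoldMeshDefaults.get() is ArnoldMeshDefaults.get()
```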
@@ -125,7 +125,7 @@ class ValidateModelName(pyblish.api.InstancePlugin,
        r = re.compile(regex)
 
        for obj in filtered:
-            cls.log.info("testing: {}".format(obj))
+            cls.log.debug("testing: {}".format(obj))
            m = r.match(obj)
            if m is None:
                cls.log.error("invalid name on: {}".format(obj))

@@ -35,12 +35,12 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin,
        publishMipMap = instance.data["publishMipMap"]
        enforced = True
        if intent in self.enforced_intents:
-            self.log.info("This validation will be enforced: '{}'"
-                          .format(intent))
+            self.log.debug("This validation will be enforced: '{}'"
+                           .format(intent))
        else:
            enforced = False
-            self.log.info("This validation will NOT be enforced: '{}'"
-                          .format(intent))
+            self.log.debug("This validation will NOT be enforced: '{}'"
+                           .format(intent))
 
        if not instance[:]:
            raise PublishValidationError("Instance is empty")

@@ -75,8 +75,9 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin,
            self.log.warning(msg)
 
        if invalid:
-            raise PublishValidationError("'{}' has invalid look "
-                                         "content".format(instance.name))
+            raise PublishValidationError(
+                "'{}' has invalid look content".format(instance.name)
+            )
 
    def valid_file(self, fname):
        self.log.debug("Checking validity of '{}'".format(fname))

@@ -28,7 +28,7 @@ class ValidateSkeletalMeshHierarchy(pyblish.api.InstancePlugin):
            parent.split("|")[1] for parent in (joints_parents + geo_parents)
        }
 
-        self.log.info(parents_set)
+        self.log.debug(parents_set)
 
        if len(set(parents_set)) > 2:
            raise PublishXmlValidationError(

@@ -140,12 +140,12 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin,
            return
 
        if not self.validate_mesh and not self.validate_collision:
-            self.log.info("Validation of both mesh and collision names"
-                          "is disabled.")
+            self.log.debug("Validation of both mesh and collision names"
+                           "is disabled.")
            return
 
        if not instance.data.get("collisionMembers", None):
-            self.log.info("There are no collision objects to validate")
+            self.log.debug("There are no collision objects to validate")
            return
 
        invalid = self.get_invalid(instance)

@@ -52,6 +52,6 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin):
 
        renderlayer = instance.data.get("renderlayer")
        with lib.renderlayer(renderlayer):
-            cls.log.info("Enabling Distributed Rendering "
-                         "ignore in batch mode..")
+            cls.log.debug("Enabling Distributed Rendering "
+                          "ignore in batch mode..")
            cmds.setAttr(cls.ignored_attr, True)

@@ -54,7 +54,7 @@ class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin):
            # has any yeti callback set or not since if the callback
            # is there it wouldn't error and if it weren't then
            # nothing happens because there are no yeti nodes.
-            cls.log.info(
+            cls.log.debug(
                "Yeti is loaded but no yeti nodes were found. "
                "Callback validation skipped.."
            )

@@ -62,7 +62,7 @@ class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin):
 
        renderer = instance.data["renderer"]
        if renderer == "redshift":
-            cls.log.info("Redshift ignores any pre and post render callbacks")
+            cls.log.debug("Redshift ignores any pre and post render callbacks")
            return False
 
        callback_lookup = cls.callbacks.get(renderer, {})

@@ -37,8 +37,8 @@ class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator):
 
        # Allow publish without input meshes.
        if not shapes:
-            cls.log.info("Found no input meshes for %s, skipping ..."
-                         % instance)
+            cls.log.debug("Found no input meshes for %s, skipping ..."
+                          % instance)
            return []
 
        # check if input node is part of groomRig instance
@@ -24,7 +24,7 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
        instance.data["deadlineUrl"] = self._collect_deadline_url(instance)
        instance.data["deadlineUrl"] = \
            instance.data["deadlineUrl"].strip().rstrip("/")
-        self.log.info(
+        self.log.debug(
            "Using {} for submission.".format(instance.data["deadlineUrl"]))
 
    def _collect_deadline_url(self, render_instance):

@@ -183,10 +183,10 @@ class CelactionSubmitDeadline(pyblish.api.InstancePlugin):
        }
 
        plugin = payload["JobInfo"]["Plugin"]
-        self.log.info("using render plugin : {}".format(plugin))
+        self.log.debug("using render plugin : {}".format(plugin))
 
-        self.log.info("Submitting..")
-        self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+        self.log.debug("Submitting..")
+        self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
 
        # adding expectied files to instance.data
        self.expected_files(instance, render_path)

@@ -233,8 +233,8 @@ class FusionSubmitDeadline(
            ) for index, key in enumerate(environment)
        })
 
-        self.log.info("Submitting..")
-        self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+        self.log.debug("Submitting..")
+        self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
 
        # E.g. http://192.168.0.1:8082/api/jobs
        url = "{}/api/jobs".format(deadline_url)

@@ -369,7 +369,7 @@ class HarmonySubmitDeadline(
        # rendering, we need to unzip it.
        published_scene = Path(
            self.from_published_scene(False))
-        self.log.info(f"Processing {published_scene.as_posix()}")
+        self.log.debug(f"Processing {published_scene.as_posix()}")
        xstage_path = self._unzip_scene_file(published_scene)
        render_path = xstage_path.parent / "renders"
 

@@ -162,7 +162,7 @@ class HoudiniSubmitPublishDeadline(pyblish.api.ContextPlugin):
        )
 
        # Submit
-        self.log.info("Submitting..")
+        self.log.debug("Submitting..")
        self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
 
        # E.g. http://192.168.0.1:8082/api/jobs

@@ -427,7 +427,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
            new_job_info.update(tiles_data["JobInfo"])
            new_plugin_info.update(tiles_data["PluginInfo"])
 
-            self.log.info("hashing {} - {}".format(file_index, file))
+            self.log.debug("hashing {} - {}".format(file_index, file))
            job_hash = hashlib.sha256(
                ("{}_{}".format(file_index, file)).encode("utf-8"))
 

@@ -443,7 +443,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
            )
            file_index += 1
 
-        self.log.info(
+        self.log.debug(
            "Submitting tile job(s) [{}] ...".format(len(frame_payloads)))
 
        # Submit frame tile jobs

@@ -553,7 +553,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
        assembly_job_ids = []
        num_assemblies = len(assembly_payloads)
        for i, payload in enumerate(assembly_payloads):
-            self.log.info(
+            self.log.debug(
                "submitting assembly job {} of {}".format(i + 1,
                                                          num_assemblies)
            )

@@ -243,7 +243,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
 
        # resolve any limit groups
        limit_groups = self.get_limit_groups()
-        self.log.info("Limit groups: `{}`".format(limit_groups))
+        self.log.debug("Limit groups: `{}`".format(limit_groups))
 
        payload = {
            "JobInfo": {

@@ -386,10 +386,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
        })
 
        plugin = payload["JobInfo"]["Plugin"]
-        self.log.info("using render plugin : {}".format(plugin))
+        self.log.debug("using render plugin : {}".format(plugin))
 
-        self.log.info("Submitting..")
-        self.log.info(json.dumps(payload, indent=4, sort_keys=True))
+        self.log.debug("Submitting..")
+        self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
 
        # adding expectied files to instance.data
        self.expected_files(

@@ -317,7 +317,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
        # remove secondary pool
        payload["JobInfo"].pop("SecondaryPool", None)
 
-        self.log.info("Submitting Deadline job ...")
+        self.log.debug("Submitting Deadline publish job ...")
 
        url = "{}/api/jobs".format(self.deadline_url)
        response = requests.post(url, json=payload, timeout=10)

@@ -454,7 +454,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
        import getpass
 
        render_job = {}
-        self.log.info("Faking job data ...")
+        self.log.debug("Faking job data ...")
        render_job["Props"] = {}
        # Render job doesn't exist because we do not have prior submission.
        # We still use data from it so lets fake it.
@@ -214,8 +214,16 @@ class BaseCreator:
            # Backwards compatibility for system settings
            self.apply_settings(project_settings, system_settings)
 
-        init_overriden = self.__class__.__init__ is not BaseCreator.__init__
-        if init_overriden or expect_system_settings:
+        init_use_base = any(
+            self.__class__.__init__ is cls.__init__
+            for cls in {
+                BaseCreator,
+                Creator,
+                HiddenCreator,
+                AutoCreator,
+            }
+        )
+        if not init_use_base or expect_system_settings:
            self.log.warning((
                "WARNING: Source - Create plugin {}."
                " System settings argument will not be passed to"
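The `BaseCreator` hunk widens the old single-class check: a plugin now counts as using the base `__init__` if its `__init__` is identical to any of the known base classes, since `Creator`, `HiddenCreator` and `AutoCreator` may each define their own. A self-contained sketch of the identity check (class names here are placeholders, not the repository's):

```python
class Base:
    def __init__(self):
        self.ready = True


class UsesBaseInit(Base):
    pass


class OverridesInit(Base):
    def __init__(self):
        super().__init__()


def uses_base_init(plugin_cls, known_bases):
    # In Python 3, accessing __init__ on a class yields the plain
    # function object, so identity comparison reliably detects an
    # override across a whole set of base classes.
    return any(plugin_cls.__init__ is cls.__init__ for cls in known_bases)


assert uses_base_init(UsesBaseInit, {Base})
assert not uses_base_init(OverridesInit, {Base})
```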
@@ -69,7 +69,7 @@ class CleanUp(pyblish.api.InstancePlugin):
                skip_cleanup_filepaths.add(os.path.normpath(path))
 
        if self.remove_temp_renders:
-            self.log.info("Cleaning renders new...")
+            self.log.debug("Cleaning renders new...")
            self.clean_renders(instance, skip_cleanup_filepaths)
 
        if [ef for ef in self.exclude_families

@@ -95,10 +95,12 @@ class CleanUp(pyblish.api.InstancePlugin):
            return
 
        if instance.data.get("stagingDir_persistent"):
-            self.log.info("Staging dir: %s should be persistent" % staging_dir)
+            self.log.debug(
+                "Staging dir {} should be persistent".format(staging_dir)
+            )
            return
 
-        self.log.info("Removing staging directory {}".format(staging_dir))
+        self.log.debug("Removing staging directory {}".format(staging_dir))
        shutil.rmtree(staging_dir)
 
    def clean_renders(self, instance, skip_cleanup_filepaths):

@@ -26,10 +26,10 @@ class CleanUpFarm(pyblish.api.ContextPlugin):
        # Skip process if is not in list of source hosts in which this
        #   plugin should run
        if src_host_name not in self.allowed_hosts:
-            self.log.info((
+            self.log.debug(
                "Source host \"{}\" is not in list of enabled hosts {}."
-                " Skipping"
-            ).format(str(src_host_name), str(self.allowed_hosts)))
+                " Skipping".format(src_host_name, self.allowed_hosts)
+            )
            return
 
        self.log.debug("Preparing filepaths to remove")

@@ -47,7 +47,7 @@ class CleanUpFarm(pyblish.api.ContextPlugin):
                dirpaths_to_remove.add(os.path.normpath(staging_dir))
 
        if not dirpaths_to_remove:
-            self.log.info("Nothing to remove. Skipping")
+            self.log.debug("Nothing to remove. Skipping")
            return
 
        self.log.debug("Filepaths to remove are:\n{}".format(
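Both cleanup hunks above follow the same rule: a staging directory flagged as persistent must survive publishing. A small sketch of that guard, assuming a plain `logging` logger instead of the pyblish plugin logger (the helper name is illustrative):

```python
import logging
import shutil

log = logging.getLogger(__name__)


def remove_staging_dir(instance_data, staging_dir):
    """Delete the staging directory unless it is flagged persistent."""
    if instance_data.get("stagingDir_persistent"):
        log.debug("Staging dir %s should be persistent", staging_dir)
        return

    log.debug("Removing staging directory %s", staging_dir)
    shutil.rmtree(staging_dir)
```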
@@ -53,8 +53,8 @@ class CollectAudio(pyblish.api.ContextPlugin):
        ):
            # Skip instances that already have audio filled
            if instance.data.get("audio"):
-                self.log.info(
-                    "Skipping Audio collecion. It is already collected"
+                self.log.debug(
+                    "Skipping Audio collection. It is already collected"
                )
                continue
            filtered_instances.append(instance)

@@ -70,7 +70,7 @@ class CollectAudio(pyblish.api.ContextPlugin):
            instances_by_asset_name[asset_name].append(instance)
 
        asset_names = set(instances_by_asset_name.keys())
-        self.log.info((
+        self.log.debug((
            "Searching for audio subset '{subset}' in assets {assets}"
        ).format(
            subset=self.audio_subset_name,

@@ -100,7 +100,7 @@ class CollectAudio(pyblish.api.ContextPlugin):
                "offset": 0,
                "filename": repre_path
            }]
-            self.log.info("Audio Data added to instance ...")
+            self.log.debug("Audio Data added to instance ...")
 
    def query_representations(self, project_name, asset_names):
        """Query representations related to audio subsets for passed assets.

@@ -39,5 +39,12 @@ class CollectCurrentContext(pyblish.api.ContextPlugin):
        # - 'task' -> 'taskName'
 
        self.log.info((
-            "Collected project context\nProject: {}\nAsset: {}\nTask: {}"
-        ).format(project_name, asset_name, task_name))
+            "Collected project context\n"
+            "Project: {project_name}\n"
+            "Asset: {asset_name}\n"
+            "Task: {task_name}"
+        ).format(
+            project_name=context.data["projectName"],
+            asset_name=context.data["asset"],
+            task_name=context.data["task"]
+        ))
@@ -24,7 +24,7 @@ class CollectHierarchy(pyblish.api.ContextPlugin):
        final_context[project_name]['entity_type'] = 'Project'
 
        for instance in context:
-            self.log.info("Processing instance: `{}` ...".format(instance))
+            self.log.debug("Processing instance: `{}` ...".format(instance))
 
            # shot data dict
            shot_data = {}

@@ -46,3 +46,10 @@ class CollectInputRepresentationsToVersions(pyblish.api.ContextPlugin):
            version_id = representation_id_to_version_id.get(repre_id)
            if version_id:
                input_versions.append(version_id)
+            else:
+                self.log.debug(
+                    "Representation id {} skipped because its version is "
+                    "not found in current project. Likely it is loaded "
+                    "from a library project or uses a deleted "
+                    "representation or version.".format(repre_id)
+                )
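The added `else` branch above makes a previously silent skip visible. In isolation the collector logic is a dictionary lookup with logged misses, roughly like this (helper name and logger are illustrative):

```python
import logging

log = logging.getLogger(__name__)


def collect_input_versions(input_repre_ids, repre_id_to_version_id):
    """Map loaded representation ids to version ids, logging misses.

    Ids without a version in the current project (library loads or
    deleted entities) are skipped instead of failing the publish.
    """
    input_versions = []
    for repre_id in input_repre_ids:
        version_id = repre_id_to_version_id.get(repre_id)
        if version_id:
            input_versions.append(version_id)
        else:
            log.debug(
                "Representation id %s skipped; its version was not"
                " found in the current project.", repre_id
            )
    return input_versions
```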
@@ -91,12 +91,12 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
        # now we can just add instances from json file and we are done
        for instance_data in data.get("instances"):
 
-            self.log.info(" - processing instance for {}".format(
+            self.log.debug(" - processing instance for {}".format(
                instance_data.get("subset")))
            instance = self._context.create_instance(
                instance_data.get("subset")
            )
-            self.log.info("Filling stagingDir...")
+            self.log.debug("Filling stagingDir...")
 
            self._fill_staging_dir(instance_data, anatomy)
            instance.data.update(instance_data)

@@ -121,7 +121,7 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
                    "offset": 0
                }]
            })
-            self.log.info(
+            self.log.debug(
                f"Adding audio to instance: {instance.data['audio']}")
 
    def process(self, context):

@@ -137,11 +137,11 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
 
        # Using already collected Anatomy
        anatomy = context.data["anatomy"]
-        self.log.info("Getting root setting for project \"{}\"".format(
+        self.log.debug("Getting root setting for project \"{}\"".format(
            anatomy.project_name
        ))
 
-        self.log.info("anatomy: {}".format(anatomy.roots))
+        self.log.debug("anatomy: {}".format(anatomy.roots))
        try:
            session_is_set = False
            for path in paths:

@@ -156,7 +156,7 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
                if remapped:
                    session_data["AVALON_WORKDIR"] = remapped
 
-                self.log.info("Setting session using data from file")
+                self.log.debug("Setting session using data from file")
                legacy_io.Session.update(session_data)
                os.environ.update(session_data)
                session_is_set = True

@@ -63,4 +63,6 @@ class CollectSceneVersion(pyblish.api.ContextPlugin):
                "filename: {}".format(filename))
 
        context.data['version'] = int(version)
-        self.log.info('Scene Version: %s' % context.data.get('version'))
+        self.log.debug(
+            "Collected scene version: {}".format(context.data.get('version'))
+        )

@@ -83,7 +83,7 @@ class ExtractBurnin(publish.Extractor):
            return
 
        if not instance.data.get("representations"):
-            self.log.info(
+            self.log.debug(
                "Instance does not have filled representations. Skipping")
            return
 

@@ -135,11 +135,11 @@ class ExtractBurnin(publish.Extractor):
                burnin_defs, repre["tags"]
            )
            if not repre_burnin_defs:
-                self.log.info((
+                self.log.debug(
                    "Skipped representation. All burnin definitions from"
-                    " selected profile does not match to representation's"
-                    " tags. \"{}\""
-                ).format(str(repre["tags"])))
+                    " selected profile do not match to representation's"
+                    " tags. \"{}\"".format(repre["tags"])
+                )
                continue
            filtered_repres.append((repre, repre_burnin_defs))
 

@@ -164,7 +164,7 @@ class ExtractBurnin(publish.Extractor):
            logger=self.log)
 
        if not profile:
-            self.log.info((
+            self.log.debug((
                "Skipped instance. None of profiles in presets are for"
                " Host: \"{}\" | Families: \"{}\" | Task \"{}\""
                " | Task type \"{}\" | Subset \"{}\" "

@@ -176,7 +176,7 @@ class ExtractBurnin(publish.Extractor):
        # Pre-filter burnin definitions by instance families
        burnin_defs = self.filter_burnins_defs(profile, instance)
        if not burnin_defs:
-            self.log.info((
+            self.log.debug((
                "Skipped instance. Burnin definitions are not set for profile"
                " Host: \"{}\" | Families: \"{}\" | Task \"{}\""
                " | Profile \"{}\""

@@ -223,10 +223,10 @@ class ExtractBurnin(publish.Extractor):
                # If result is None the requirement of conversion can't be
                #   determined
                if do_convert is None:
-                    self.log.info((
+                    self.log.debug(
                        "Can't determine if representation requires conversion."
                        " Skipped."
-                    ))
+                    )
                    continue
 
                # Do conversion if needed
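Several of the `ExtractBurnin` hunks above reword skip messages produced while matching burnin definitions against representation tags. The matching itself boils down to a tag-intersection filter, sketched here under the assumption that each definition may carry a list of filter tags (the data shape below is illustrative, not the plugin's exact schema):

```python
def filter_burnin_defs(burnin_defs, repre_tags):
    """Keep definitions whose tag filter matches the representation.

    A definition without filter tags always applies; otherwise at
    least one of its tags must be present on the representation.
    """
    repre_tags = set(repre_tags)
    filtered = []
    for burnin_def in burnin_defs:
        def_tags = set(burnin_def.get("filter", {}).get("tags", []))
        if not def_tags or def_tags & repre_tags:
            filtered.append(burnin_def)
    return filtered
```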
@@ -320,7 +320,7 @@ class ExtractOIIOTranscode(publish.Extractor):
            logger=self.log)
 
        if not profile:
-            self.log.info((
+            self.log.debug((
                "Skipped instance. None of profiles in presets are for"
                " Host: \"{}\" | Families: \"{}\" | Task \"{}\""
                " | Task type \"{}\" | Subset \"{}\" "

@@ -30,7 +30,7 @@ class ExtractColorspaceData(publish.Extractor,
    def process(self, instance):
        representations = instance.data.get("representations")
        if not representations:
-            self.log.info("No representations at instance : `{}`".format(
+            self.log.debug("No representations at instance : `{}`".format(
                instance))
            return
 

@@ -21,7 +21,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
            return
 
        if "hierarchyContext" not in context.data:
-            self.log.info("skipping IntegrateHierarchyToAvalon")
+            self.log.debug("skipping ExtractHierarchyToAvalon")
            return
 
        if not legacy_io.Session:

@@ -32,7 +32,7 @@ class ExtractHierarchyToAYON(pyblish.api.ContextPlugin):
 
        hierarchy_context = context.data.get("hierarchyContext")
        if not hierarchy_context:
-            self.log.debug("Skipping")
+            self.log.debug("Skipping ExtractHierarchyToAYON")
            return
 
        project_name = context.data["projectName"]
@@ -15,6 +15,7 @@ from openpype.lib import (
     get_ffmpeg_format_args,
 )
 from openpype.pipeline import publish
+from openpype.pipeline.publish import KnownPublishError


 class ExtractReviewSlate(publish.Extractor):
@@ -46,7 +47,7 @@ class ExtractReviewSlate(publish.Extractor):
             "*": inst_data["slateFrame"]
         }

-        self.log.info("_ slates_data: {}".format(pformat(slates_data)))
+        self.log.debug("_ slates_data: {}".format(pformat(slates_data)))

         if "reviewToWidth" in inst_data:
             use_legacy_code = True
@@ -76,7 +77,7 @@ class ExtractReviewSlate(publish.Extractor):
             )
             # get slate data
             slate_path = self._get_slate_path(input_file, slates_data)
-            self.log.info("_ slate_path: {}".format(slate_path))
+            self.log.debug("_ slate_path: {}".format(slate_path))

             slate_width, slate_height = self._get_slates_resolution(slate_path)
@@ -93,9 +94,10 @@ class ExtractReviewSlate(publish.Extractor):

             # Raise exception of any stream didn't define input resolution
             if input_width is None:
-                raise AssertionError((
+                raise KnownPublishError(
                     "FFprobe couldn't read resolution from input file: \"{}\""
-                ).format(input_path))
+                    .format(input_path)
+                )

             (
                 audio_codec,
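Besides the log levels, this file swaps `AssertionError` for `KnownPublishError` (imported above), which the publish report can present as a known, readable failure instead of an unexpected crash. A small sketch of the pattern; `ensure_input_resolution` is a hypothetical helper, not code from the repo:

```python
from openpype.pipeline.publish import KnownPublishError


def ensure_input_resolution(input_width, input_path):
    # Hypothetical helper mirroring the hunk above: raise an error type
    # the publish report treats as a clear, known failure message.
    if input_width is None:
        raise KnownPublishError(
            "FFprobe couldn't read resolution from input file:"
            " \"{}\"".format(input_path)
        )
```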
@@ -29,24 +29,24 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin):
         representations_new = []

         for repre in representations:
-            self.log.info(
+            self.log.debug(
                 "Processing representation {}".format(repre.get("name")))
             tags = repre.get("tags", [])
             if "toScanline" not in tags:
-                self.log.info(" - missing toScanline tag")
+                self.log.debug(" - missing toScanline tag")
                 continue

             # run only on exrs
             if repre.get("ext") != "exr":
-                self.log.info("- not EXR files")
+                self.log.debug("- not EXR files")
                 continue

             if not isinstance(repre['files'], (list, tuple)):
                 input_files = [repre['files']]
-                self.log.info("We have a single frame")
+                self.log.debug("We have a single frame")
             else:
                 input_files = repre['files']
-                self.log.info("We have a sequence")
+                self.log.debug("We have a sequence")

             stagingdir = os.path.normpath(repre.get("stagingDir"))
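One detail worth noting in this plugin: a representation's `files` value can be a single file name or a list of them, which is why the single-frame versus sequence branch exists. A tiny sketch of that normalization; `iter_input_files` is a hypothetical helper, not part of the plugin:

```python
def iter_input_files(repre):
    """Return representation files as a list (hypothetical helper).

    A representation's 'files' value is either one file name (a single
    frame) or a list/tuple of names (a frame sequence); normalizing to
    a list lets downstream code iterate uniformly.
    """
    files = repre["files"]
    if isinstance(files, (list, tuple)):
        return list(files)
    return [files]
```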
@@ -68,7 +68,7 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin):
         ]

         subprocess_exr = " ".join(oiio_cmd)
-        self.log.info(f"running: {subprocess_exr}")
+        self.log.debug(f"running: {subprocess_exr}")
         run_subprocess(subprocess_exr, logger=self.log)

         # raise error if there is no ouptput
@@ -43,12 +43,12 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):

         # Skip if instance have 'review' key in data set to 'False'
         if not self._is_review_instance(instance):
-            self.log.info("Skipping - no review set on instance.")
+            self.log.debug("Skipping - no review set on instance.")
             return

         # Check if already has thumbnail created
         if self._already_has_thumbnail(instance_repres):
-            self.log.info("Thumbnail representation already present.")
+            self.log.debug("Thumbnail representation already present.")
             return

         # skip crypto passes.
@@ -58,15 +58,15 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
         # representation that can be determined much earlier and
         # with better precision.
         if "crypto" in subset_name.lower():
-            self.log.info("Skipping crypto passes.")
+            self.log.debug("Skipping crypto passes.")
             return

         filtered_repres = self._get_filtered_repres(instance)
         if not filtered_repres:
-            self.log.info((
-                "Instance don't have representations"
-                " that can be used as source for thumbnail. Skipping"
-            ))
+            self.log.info(
+                "Instance doesn't have representations that can be used "
+                "as source for thumbnail. Skipping thumbnail extraction."
+            )
             return

         # Create temp directory for thumbnail
@@ -107,10 +107,10 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             # oiiotool isn't available
             if not thumbnail_created:
                 if oiio_supported:
-                    self.log.info((
+                    self.log.debug(
                         "Converting with FFMPEG because input"
                         " can't be read by OIIO."
-                    ))
+                    )

                 thumbnail_created = self.create_thumbnail_ffmpeg(
                     full_input_path, full_output_path
@@ -165,8 +165,8 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
                 continue

             if not repre.get("files"):
-                self.log.info((
-                    "Representation \"{}\" don't have files. Skipping"
+                self.log.debug((
+                    "Representation \"{}\" doesn't have files. Skipping"
                 ).format(repre["name"]))
                 continue
@@ -174,7 +174,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
         return filtered_repres

     def create_thumbnail_oiio(self, src_path, dst_path):
-        self.log.info("Extracting thumbnail {}".format(dst_path))
+        self.log.debug("Extracting thumbnail with OIIO: {}".format(dst_path))
         oiio_cmd = get_oiio_tool_args(
             "oiiotool",
             "-a", src_path,
@@ -192,7 +192,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
         return False

     def create_thumbnail_ffmpeg(self, src_path, dst_path):
-        self.log.info("outputting {}".format(dst_path))
+        self.log.debug("Extracting thumbnail with FFMPEG: {}".format(dst_path))

         ffmpeg_path_args = get_ffmpeg_tool_args("ffmpeg")
         ffmpeg_args = self.ffmpeg_args or {}
@@ -225,7 +225,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             return True
         except Exception:
             self.log.warning(
-                "Failed to create thubmnail using ffmpeg",
+                "Failed to create thumbnail using ffmpeg",
                 exc_info=True
             )
             return False
@@ -49,7 +49,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):

         # Check if already has thumbnail created
         if self._instance_has_thumbnail(instance):
-            self.log.info("Thumbnail representation already present.")
+            self.log.debug("Thumbnail representation already present.")
             return

         dst_filepath = self._create_thumbnail(
@@ -98,7 +98,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
         thumbnail_created = False
         oiio_supported = is_oiio_supported()

-        self.log.info("Thumbnail source: {}".format(thumbnail_source))
+        self.log.debug("Thumbnail source: {}".format(thumbnail_source))
         src_basename = os.path.basename(thumbnail_source)
         dst_filename = os.path.splitext(src_basename)[0] + "_thumb.jpg"
         full_output_path = os.path.join(dst_staging, dst_filename)
@@ -115,10 +115,10 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
             # oiiotool isn't available
             if not thumbnail_created:
                 if oiio_supported:
-                    self.log.info((
+                    self.log.info(
                         "Converting with FFMPEG because input"
                         " can't be read by OIIO."
-                    ))
+                    )

                 thumbnail_created = self.create_thumbnail_ffmpeg(
                     thumbnail_source, full_output_path
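The flow around this hunk tries oiiotool first and only falls back to FFmpeg when OIIO is unavailable or cannot read the source. A condensed, hypothetical sketch of that fallback, assuming `is_oiio_supported` is importable from `openpype.lib` and that the class provides both `create_thumbnail_*` methods, as the plugin above does:

```python
from openpype.lib import is_oiio_supported  # assumed import location


class ThumbnailFallbackMixin:
    """Hypothetical sketch of the OIIO-first, FFmpeg-fallback flow.

    Assumes the class also provides create_thumbnail_oiio,
    create_thumbnail_ffmpeg and a `log` attribute.
    """

    def create_thumbnail_with_fallback(self, src_path, dst_path):
        thumbnail_created = False
        if is_oiio_supported():
            # Preferred path: oiiotool reads most image formats directly.
            thumbnail_created = self.create_thumbnail_oiio(src_path, dst_path)
        if not thumbnail_created:
            # Fallback: OIIO missing, or it could not read the source file.
            self.log.info(
                "Converting with FFMPEG because input can't be read by OIIO."
            )
            thumbnail_created = self.create_thumbnail_ffmpeg(
                src_path, dst_path)
        return thumbnail_created
```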
@@ -143,20 +143,20 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
         return False

     def create_thumbnail_oiio(self, src_path, dst_path):
-        self.log.info("outputting {}".format(dst_path))
+        self.log.debug("Outputting thumbnail with OIIO: {}".format(dst_path))
         oiio_cmd = get_oiio_tool_args(
             "oiiotool",
             "-a", src_path,
             "--ch", "R,G,B",
             "-o", dst_path
         )
-        self.log.info("Running: {}".format(" ".join(oiio_cmd)))
+        self.log.debug("Running: {}".format(" ".join(oiio_cmd)))
         try:
             run_subprocess(oiio_cmd, logger=self.log)
             return True
         except Exception:
             self.log.warning(
-                "Failed to create thubmnail using oiiotool",
+                "Failed to create thumbnail using oiiotool",
                 exc_info=True
             )
             return False
@@ -173,13 +173,13 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
             dst_path
         )

-        self.log.info("Running: {}".format(" ".join(ffmpeg_cmd)))
+        self.log.debug("Running: {}".format(" ".join(ffmpeg_cmd)))
         try:
             run_subprocess(ffmpeg_cmd, logger=self.log)
             return True
         except Exception:
             self.log.warning(
-                "Failed to create thubmnail using ffmpeg",
+                "Failed to create thumbnail using ffmpeg",
                 exc_info=True
             )
             return False
@@ -36,7 +36,7 @@ class ExtractTrimVideoAudio(publish.Extractor):

         # get staging dir
         staging_dir = self.staging_dir(instance)
-        self.log.info("Staging dir set to: `{}`".format(staging_dir))
+        self.log.debug("Staging dir set to: `{}`".format(staging_dir))

         # Generate mov file.
         fps = instance.data["fps"]
@@ -59,7 +59,7 @@ class ExtractTrimVideoAudio(publish.Extractor):
         extensions = [output_file_type]

         for ext in extensions:
-            self.log.info("Processing ext: `{}`".format(ext))
+            self.log.debug("Processing ext: `{}`".format(ext))

             if not ext.startswith("."):
                 ext = "." + ext
@@ -98,7 +98,7 @@ class ExtractTrimVideoAudio(publish.Extractor):
             ffmpeg_args.append(clip_trimed_path)

             joined_args = " ".join(ffmpeg_args)
-            self.log.info(f"Processing: {joined_args}")
+            self.log.debug(f"Processing: {joined_args}")
             run_subprocess(
                 ffmpeg_args, logger=self.log
             )
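Note that the command is joined into a string only for the log line; `run_subprocess` still receives the argument list, so arguments containing spaces are passed safely without shell parsing. A minimal sketch, with an illustrative argument list:

```python
from openpype.lib import run_subprocess

# Illustrative arguments only; the real list is built by the plugin.
ffmpeg_args = ["ffmpeg", "-i", "input.mov", "output.mov"]

# The joined form exists purely for readable logging.
joined_args = " ".join(ffmpeg_args)
print(f"Processing: {joined_args}")

# Execution keeps the list form, avoiding shell quoting pitfalls.
run_subprocess(ffmpeg_args)
```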
@@ -155,13 +155,13 @@ class IntegrateAsset(pyblish.api.InstancePlugin):

         # Instance should be integrated on a farm
         if instance.data.get("farm"):
-            self.log.info(
+            self.log.debug(
                 "Instance is marked to be processed on farm. Skipping")
             return

         # Instance is marked to not get integrated
         if not instance.data.get("integrate", True):
-            self.log.info("Instance is marked to skip integrating. Skipping")
+            self.log.debug("Instance is marked to skip integrating. Skipping")
             return

         filtered_repres = self.filter_representations(instance)
@@ -306,7 +306,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         # increase if the file transaction takes a long time.
         op_session.commit()

-        self.log.info("Subset {subset[name]} and Version {version[name]} "
+        self.log.info("Subset '{subset[name]}' version {version[name]} "
                       "written to database..".format(subset=subset,
                                                      version=version))
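The message above relies on `str.format`'s item-access syntax, where `{subset[name]}` indexes into the keyword argument. A quick, self-contained illustration with made-up values:

```python
subset = {"name": "renderMain"}    # illustrative values only
version = {"name": 12}

msg = ("Subset '{subset[name]}' version {version[name]} "
       "written to database..".format(subset=subset, version=version))
print(msg)  # Subset 'renderMain' version 12 written to database..
```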
@@ -392,8 +392,13 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
             p["representation"]["_id"]: p for p in prepared_representations
         }

-        self.log.info("Registered {} representations"
-                      "".format(len(prepared_representations)))
+        self.log.info(
+            "Registered {} representations: {}".format(
+                len(prepared_representations),
+                ", ".join(p["representation"]["name"]
+                          for p in prepared_representations)
+            )
+        )

     def prepare_subset(self, instance, op_session, project_name):
         asset_doc = instance.data["assetEntity"]
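With the change above, the summary also names each registered representation. A runnable illustration of the message it produces, using made-up representation names:

```python
# Made-up sample data shaped like prepared_representations above.
prepared_representations = [
    {"representation": {"name": "exr"}},
    {"representation": {"name": "thumbnail"}},
]

print(
    "Registered {} representations: {}".format(
        len(prepared_representations),
        ", ".join(p["representation"]["name"]
                  for p in prepared_representations)
    )
)
# -> Registered 2 representations: exr, thumbnail
```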
@@ -275,10 +275,10 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
                 backup_hero_publish_dir = _backup_hero_publish_dir
                 break
             except Exception:
-                self.log.info((
+                self.log.info(
                     "Could not remove previous backup folder."
-                    " Trying to add index to folder name"
-                ))
+                    " Trying to add index to folder name."
+                )

                 _backup_hero_publish_dir = (
                     backup_hero_publish_dir + str(idx)
Some files were not shown because too many files have changed in this diff.