mirror of
https://github.com/ynput/ayon-core.git
synced 2025-12-24 21:04:40 +01:00
Merge remote-tracking branch 'BigRoy/maya_new_publisher' into maye_new_publisher_with_RR
Commit b24db5580e
64 changed files with 1674 additions and 336 deletions

.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 6 changes)
@@ -35,6 +35,9 @@ body:
       label: Version
       description: What version are you running? Look to OpenPype Tray
       options:
+        - 3.15.12-nightly.1
+        - 3.15.11
+        - 3.15.11-nightly.5
         - 3.15.11-nightly.4
         - 3.15.11-nightly.3
         - 3.15.11-nightly.2

@@ -132,9 +135,6 @@ body:
         - 3.14.4-nightly.3
         - 3.14.4-nightly.2
         - 3.14.4-nightly.1
-        - 3.14.3
-        - 3.14.3-nightly.7
-        - 3.14.3-nightly.6
       validations:
         required: true
   - type: dropdown
CHANGELOG.md (419 changes)

@@ -1,6 +1,425 @@
# Changelog

## [3.15.11](https://github.com/ynput/OpenPype/tree/3.15.11)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.10...3.15.11)

### **🆕 New features**

<details>
<summary>Ftrack: Task status during publishing <a href="https://github.com/ynput/OpenPype/pull/5123">#5123</a></summary>

Added an option to change the task status during publishing for three cases: "sending to farm", "local integration" and "on farm integration".

___

</details>

<details>
<summary>Nuke: Allow for more complex temp rendering paths <a href="https://github.com/ynput/OpenPype/pull/5132">#5132</a></summary>

When the temporary rendering template was changed to something more complex (e.g., adding `{asset}` to the path), the formatting failed due to missing keys.

___

</details>

<details>
<summary>Blender: Add support for custom path for app templates <a href="https://github.com/ynput/OpenPype/pull/5137">#5137</a></summary>

This PR adds support for a custom App Templates path in Blender by setting the `BLENDER_USER_SCRIPTS` environment variable to the path specified in `OPENPYPE_APP_TEMPLATES_PATH`. This allows users to use their own custom app templates in Blender.

___

</details>
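The environment mapping described in the Blender entry (#5137) can be sketched roughly as below. The helper name and the launch-hook context are assumptions; only the two environment variable names come from the entry itself.

```python
import os


def apply_app_templates_path(env=None):
    """Point Blender at a custom app-templates location (sketch).

    Copies the launch environment and, when OPENPYPE_APP_TEMPLATES_PATH
    is set, mirrors it into BLENDER_USER_SCRIPTS so Blender discovers
    templates under <path>/startup/bl_app_templates_user.
    """
    env = dict(os.environ if env is None else env)
    templates_path = env.get("OPENPYPE_APP_TEMPLATES_PATH")
    if templates_path:
        env["BLENDER_USER_SCRIPTS"] = templates_path
    return env
```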
<details>
<summary>TrayPublisher &amp; StandalonePublisher: Specify version <a href="https://github.com/ynput/OpenPype/pull/5142">#5142</a></summary>

Simple creators in TrayPublisher can affect which version will be integrated. Standalone publisher respects the version change from the UI.

___

</details>

### **🚀 Enhancements**

<details>
<summary>Workfile Builder UI: Workfile builder window is not modal <a href="https://github.com/ynput/OpenPype/pull/5131">#5131</a></summary>

Workfile Templates Builder:
- The create dialog is no longer a modal dialog.
- The create dialog remains open after create, so you can directly create a new placeholder with similar settings.
- In Maya, allow creating root level placeholders (no selection during create) - **this felt more like a bugfix than anything else.**

___

</details>

<details>
<summary>3dsmax: Use custom modifiers to hold instance members <a href="https://github.com/ynput/OpenPype/pull/4931">#4931</a></summary>

Moves the logic that tracks members of a publishing instance from a children/parent relationship on the Container to a custom attribute on a modifier. This removes the limitation that one node could appear only once under a Container, and because the relationships are stored as weak references, they survive renaming of the original nodes.

___

</details>

<details>
<summary>Add height, width and fps setup to project manager <a href="https://github.com/ynput/OpenPype/pull/5075">#5075</a></summary>

Adds Width, Height, FPS, Pixel Aspect and Frame Start/End to the Project creation dialog in the Project Manager. The Project Manager will be replaced in the upcoming AYON, but for the time being, setting up a new project with these options available is more convenient.

___

</details>

<details>
<summary>Nuke: connect custom write node script to the OP setting <a href="https://github.com/ynput/OpenPype/pull/5113">#5113</a></summary>

Allows the user to customize knob values in the OpenPype settings and use them in the custom write node.

___

</details>

<details>
<summary>Keep `publisher.create_widget` variant when creating subsets <a href="https://github.com/ynput/OpenPype/pull/5119">#5119</a></summary>

Whenever a person creates a subset to publish, the "creator" widget (where you choose the variant, product, etc.) resets, so if they are publishing several images of a variant that is not the default one, they have to keep re-selecting the correct one after every "create".

This commit restores the chosen variant upon successful creation of a subset for publishing.

Demo:
[Screencast from 2023-06-08 10-46-40.webm](https://github.com/ynput/OpenPype/assets/1800151/ca1c91d4-b8f3-43d2-a7b7-35987f5b6a3f)

## Testing notes:
1. Launch AYON/OP
2. Launch the publisher (select a project, shot, etc.)
3. Create a publish type (any works)
4. Choose a variant for the publish that is not the default
5. "Create >>"

The Variant field should still have the variant you chose.

___

</details>

<details>
<summary>Color Management - added color management support for simple expected files on Deadline <a href="https://github.com/ynput/OpenPype/pull/5122">#5122</a></summary>

Running `ExtractOIIOTranscode` during a Deadline publish was previously implemented only for DCCs with AOVs (Maya, Max). This PR extends it to other DCCs with a flat structure of expected files.

___

</details>

<details>
<summary>hide macos dock icon on build <a href="https://github.com/ynput/OpenPype/pull/5133">#5133</a></summary>

Set `LSUIElement` to `1` in the `Info.plist` to hide the OP icon from the macOS Dock by default.

___

</details>
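The `LSUIElement` tweak from #5133 amounts to a single key in the bundle's `Info.plist`. A sketch using the standard-library `plistlib` (the path handling and helper name are illustrative, not part of the build script):

```python
import plistlib


def hide_dock_icon(plist_path):
    """Set LSUIElement to "1" so macOS keeps the app out of the Dock."""
    with open(plist_path, "rb") as f:
        info = plistlib.load(f)
    info["LSUIElement"] = "1"
    with open(plist_path, "wb") as f:
        plistlib.dump(info, f)
    return info
```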
<details>
<summary>Pack project: Raise exception with reasonable message <a href="https://github.com/ynput/OpenPype/pull/5145">#5145</a></summary>

Pack project now fails with a relevant message when the destination directory is not set.

___

</details>

<details>
<summary>Allow "inventory" actions to be supplied by a Module/Addon. <a href="https://github.com/ynput/OpenPype/pull/5146">#5146</a></summary>

Adds "inventory" as a possible key in the plugin paths returned from a module.

___

</details>
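For #5146, a module advertising inventory actions would include an "inventory" key in the plugin-paths mapping it returns. The concrete paths and the other keys shown here are made-up examples, not the actual OpenPype API surface:

```python
def get_plugin_paths():
    """Example plugin-paths dict an addon might return; with #5146 an
    "inventory" key is now also honored alongside the existing ones."""
    return {
        "publish": ["/addon/plugins/publish"],
        "load": ["/addon/plugins/load"],
        "create": ["/addon/plugins/create"],
        "inventory": ["/addon/plugins/inventory"],
    }
```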
<details>
<summary>3dsmax: make code compatible with 3dsmax 2022 <a href="https://github.com/ynput/OpenPype/pull/5164">#5164</a></summary>

Python 3.7 in 3dsmax 2022 does not support the walrus operator. This removes it from the code for the sake of compatibility.

___

</details>
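The kind of rewrite #5164 performs looks like this (a hypothetical snippet, not the actual 3dsmax integration code):

```python
def read_chunks(stream, size=1024):
    """Python 3.7-compatible read loop.

    The Python 3.8+ form `while chunk := stream.read(size): ...` fails
    to parse under the Python 3.7 bundled with 3dsmax 2022, so the
    assignment is hoisted out of the condition instead.
    """
    chunks = []
    chunk = stream.read(size)
    while chunk:
        chunks.append(chunk)
        chunk = stream.read(size)
    return b"".join(chunks)
```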
### **🐛 Bug fixes**

<details>
<summary>Maya: Support same attribute names on different node types. <a href="https://github.com/ynput/OpenPype/pull/5054">#5054</a></summary>

When validating render settings attributes, support the same attribute names on different node types.

___

</details>

<details>
<summary>Maya: bug fix the standin being not loaded when they are first loaded <a href="https://github.com/ynput/OpenPype/pull/5143">#5143</a></summary>

Fixes the error raised when the first two standins are loaded through the loader. The bug is described in the related issue: https://github.com/ynput/OpenPype/issues/5129. For some reason, `defaultArnoldRenderOptions.operator` is not listed in the node's connection attributes even if `cmds.loadPlugin("mtoa", quiet=True)` was executed before loading the object as a standin for the first time. But if you manually enable mtoa through the plugin preferences and then load standins for the first time, the related `defaultArnoldRenderOptions.operator` error is not raised.

___

</details>

<details>
<summary>Maya: bug fix arnoldExportAss unable to export selected set members <a href="https://github.com/ynput/OpenPype/pull/5150">#5150</a></summary>

See #5108. Fixes `arnoldExportAss` being unable to export selected set members and erroring out during extraction.

___

</details>

<details>
<summary>Maya: Xgen multiple descriptions on single shape - OP-6039 <a href="https://github.com/ynput/OpenPype/pull/5160">#5160</a></summary>

When there were multiple descriptions on the same geometry, the extraction produced redundant duplicate geometries.

___

</details>

<details>
<summary>Maya: Xgen export of Abc's during Render Publishing - OP-6206 <a href="https://github.com/ynput/OpenPype/pull/5167">#5167</a></summary>

Shading assignments were not duplicated when setting up Xgen publishing, and the export of patches computed the end frame incorrectly.

___

</details>

<details>
<summary>Maya: Include handles - OP-6236 <a href="https://github.com/ynput/OpenPype/pull/5175">#5175</a></summary>

The render range was missing the handles.

___

</details>

<details>
<summary>OCIO: Support working with single frame renders <a href="https://github.com/ynput/OpenPype/pull/5053">#5053</a></summary>

When there is only one file, the data member `files` on the representation should be a string.

___

</details>
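The convention #5053 enforces can be expressed as a small normalizer. The helper name is hypothetical; the rule itself (string for a single frame, list only for sequences) is taken from the entry above:

```python
def normalize_representation_files(files):
    """A representation's `files` should be a plain string for a single
    frame and a list only for real multi-file sequences."""
    if isinstance(files, (list, tuple)):
        if len(files) == 1:
            return files[0]  # single frame: collapse to a string
        return list(files)
    return files  # already a string
```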
<details>
<summary>Burnins: Refactored burnins script <a href="https://github.com/ynput/OpenPype/pull/5094">#5094</a></summary>

Refactored the list value for burnins and fixed the command length limit by using a temp file for the filters string.

___

</details>
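The command-length fix in #5094 boils down to writing the filter graph to a temporary file instead of the argv. A rough sketch using ffmpeg's `-filter_script:v` option; the helper name, joining with commas, and the exact flags are assumptions rather than the actual burnins script:

```python
import tempfile


def build_burnin_command(input_path, output_path, filters):
    """Join the filter expressions and pass them via a temp file so the
    OS command-length limit cannot be hit by a long filter string."""
    script = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
    with script:
        script.write(",".join(filters))
    return ["ffmpeg", "-y", "-i", input_path,
            "-filter_script:v", script.name, output_path]
```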
<details>
<summary>Nuke: open_file function can open autosave script <a href="https://github.com/ynput/OpenPype/pull/5107">#5107</a></summary>

Fixes the workfile dialog being unable to open an autosaved Nuke script.

___

</details>

<details>
<summary>ImageIO: Minor fixes <a href="https://github.com/ynput/OpenPype/pull/5147">#5147</a></summary>

Resolves a few minor issues related to the latest image IO changes.

___

</details>

<details>
<summary>Publisher: Fix save shortcut <a href="https://github.com/ynput/OpenPype/pull/5148">#5148</a></summary>

The save shortcut now works for both PySide2 and PySide6.

___

</details>

<details>
<summary>Pack Project: Fix files packing <a href="https://github.com/ynput/OpenPype/pull/5154">#5154</a></summary>

Packing of a project with files works again.

___

</details>

<details>
<summary>Maya: Xgen version mismatch after publish - OP-6204 <a href="https://github.com/ynput/OpenPype/pull/5161">#5161</a></summary>

Xgen was not updating correctly when, for example, adding or removing descriptions. This resolves the issue by overwriting the workspace Xgen file.

___

</details>

<details>
<summary>Publisher: Edge case fixes <a href="https://github.com/ynput/OpenPype/pull/5165">#5165</a></summary>

Fixes a few edge cases that could cause issues in the Publisher UI.

___

</details>

<details>
<summary>Colorspace: host config path backward compatibility <a href="https://github.com/ynput/OpenPype/pull/5166">#5166</a></summary>

Old project settings overrides are now fully backward compatible. The issue with host config path overrides was solved: a project that previously had ocio_config **enabled** with found file paths is now treated as having activated the host ocio_config path overrides. Nuke shows a popup dialog letting the user know that the settings for the config path have changed.

___

</details>

<details>
<summary>Maya: import workfile missing - OP-6233 <a href="https://github.com/ynput/OpenPype/pull/5174">#5174</a></summary>

The `workfile` family was missing from the import.

___

</details>

<details>
<summary>Ftrack: Fix ignore sync filter <a href="https://github.com/ynput/OpenPype/pull/5176">#5176</a></summary>

The Ftrack ignore filter no longer crashes because of dictionary modifications during its iteration.

___

</details>
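The crash class fixed in #5176 is the standard "dictionary changed size during iteration" RuntimeError; iterating over a snapshot of the items avoids it. The function below is illustrative, not the actual Ftrack filter code:

```python
def drop_ignored_entities(entities, is_ignored):
    """Remove ignored entries while iterating.

    list(entities.items()) takes a snapshot, so popping keys from the
    dict inside the loop cannot invalidate the iterator.
    """
    for key, value in list(entities.items()):
        if is_ignored(value):
            entities.pop(key)
    return entities
```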
<details>
<summary>Webpublisher - headless publish shouldn't be blocking operation <a href="https://github.com/ynput/OpenPype/pull/5177">#5177</a></summary>

`subprocess.call` was blocking, which made the UI unresponsive while it waited for the publish to finish.

___

</details>
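The distinction #5177 relies on, sketched: `subprocess.call` waits for the child process to exit, while `subprocess.Popen` returns a handle immediately. The wrapper name and arguments are hypothetical:

```python
import subprocess
import sys


def start_headless_publish(script_args):
    """Fire and forget: Popen returns right away instead of blocking
    the caller (and its UI) the way subprocess.call would."""
    return subprocess.Popen([sys.executable] + list(script_args))
```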
<details>
<summary>Publisher: Fix disappearing actions <a href="https://github.com/ynput/OpenPype/pull/5184">#5184</a></summary>

Pyblish plugin actions are visible as expected.

___

</details>

### **Merged pull requests**

<details>
<summary>Enhancement: animation family loaded as standin (abc) uses "use file sequence" <a href="https://github.com/ynput/OpenPype/pull/5110">#5110</a></summary>

We started by updating the `is_sequence(files)` function, allowing it to return True for a list that contains only one file, since the animation in this case provides just one Alembic file. For the correct FPS number, we read the fps of the published ass/abc from the version data.

___

</details>
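A sketch of the relaxed check described in #5110. The real `is_sequence` lives in the loader code; this version only mirrors the described behavior (one-item lists now count), and the multi-file frame-number heuristic is an assumption:

```python
import re


def is_sequence(files):
    """One-item lists now also count as a sequence, so a single
    published .abc can go through the "use file sequence" path."""
    if not files:
        return False
    if len(files) == 1:
        return True
    # several files: expect a frame number right before the extension
    return all(re.search(r"\d+\.\w+$", name) for name in files)
```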
<details>
<summary>add label to matching family <a href="https://github.com/ynput/OpenPype/pull/5128">#5128</a></summary>

Added the possibility to filter the family smart select by label in addition to the family.

___

</details>


## [3.15.10](https://github.com/ynput/OpenPype/tree/3.15.10)
Binary file not shown.

@@ -1,5 +1,5 @@
 <?xml version="1.0" encoding="UTF-8"?>
-<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.25"
+<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.26"
 	ExtensionBundleName="com.openpype.AE.panel" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 	<ExtensionList>
 		<Extension Id="com.openpype.AE.panel" Version="1.0" />
@@ -104,6 +104,39 @@
        });
    </script>

    <script type=text/javascript>
        $(function() {
            $("a#create-placeholder-button").bind("click", function() {
                RPC.call('AfterEffects.create_placeholder_route').then(function (data) {
                }, function (error) {
                    alert(error);
                });
            });
        });
    </script>

    <script type=text/javascript>
        $(function() {
            $("a#update-placeholder-button").bind("click", function() {
                RPC.call('AfterEffects.update_placeholder_route').then(function (data) {
                }, function (error) {
                    alert(error);
                });
            });
        });
    </script>

    <script type=text/javascript>
        $(function() {
            $("a#build-workfile-button").bind("click", function() {
                RPC.call('AfterEffects.build_workfile_template_route').then(function (data) {
                }, function (error) {
                    alert(error);
                });
            });
        });
    </script>

    <script type=text/javascript>
        $(function() {
            $("a#experimental-button").bind("click", function() {

@@ -127,9 +160,15 @@
    <div><a href=# id=loader-button><button class="hostFontSize">Load...</button></a></div>
    <div><a href=# id=publish-button><button class="hostFontSize">Publish...</button></a></div>
    <div><a href=# id=sceneinventory-button><button class="hostFontSize">Manage...</button></a></div>
    <div><a href=# id=separator0><button class="hostFontSize"> </button></a></div>
    <div><a href=# id=setresolution-button><button class="hostFontSize">Set Resolution</button></a></div>
    <div><a href=# id=setframes-button><button class="hostFontSize">Set Frame Range</button></a></div>
    <div><a href=# id=setall-button><button class="hostFontSize">Apply All Settings</button></a></div>
    <div><a href=# id=separator1><button class="hostFontSize"> </button></a></div>
    <div><a href=# id=create-placeholder-button><button class="hostFontSize">Create placeholder</button></a></div>
    <div><a href=# id=update-placeholder-button><button class="hostFontSize">Update placeholder</button></a></div>
    <div><a href=# id=build-workfile-button><button class="hostFontSize">Build Workfile from template</button></a></div>
    <div><a href=# id=separator3><button class="hostFontSize"> </button></a></div>
    <div><a href=# id=experimental-button><button class="hostFontSize">Experimental Tools...</button></a></div>
</div>
@@ -107,6 +107,17 @@ function main(websocket_url){
        });
    });

    RPC.addRoute('AfterEffects.add_item', function (data) {
        log.warn('Server called client route "add_item":', data);
        var escapedName = EscapeStringForJSX(data.name);
        return runEvalScript("addItem('" + escapedName + "', " +
                "'" + data.item_type + "')")
            .then(function(result){
                log.warn("get_items: " + result);
                return result;
            });
    });

    RPC.addRoute('AfterEffects.get_items', function (data) {
        log.warn('Server called client route "get_items":', data);
        return runEvalScript("getItems(" + data.comps + "," +

@@ -118,6 +129,15 @@ function main(websocket_url){
        });
    });

    RPC.addRoute('AfterEffects.select_items', function (data) {
        log.warn('Server called client route "select_items":', data);
        return runEvalScript("selectItems(" + JSON.stringify(data.items) + ")")
            .then(function(result){
                log.warn("select_items: " + result);
                return result;
            });
    });

    RPC.addRoute('AfterEffects.get_selected_items', function (data) {
        log.warn('Server called client route "get_selected_items":', data);

@@ -280,7 +300,7 @@ function main(websocket_url){
    RPC.addRoute('AfterEffects.add_item_as_layer', function (data) {
        log.warn('Server called client route "add_item_as_layer":', data);
        return runEvalScript("addItemAsLayerToComp(" + data.comp_id + ", " +
                data.item_id + "," +
                " null )")
            .then(function(result){
                log.warn("addItemAsLayerToComp: " + result);

@@ -288,6 +308,16 @@ function main(websocket_url){
        });
    });

    RPC.addRoute('AfterEffects.add_item_instead_placeholder', function (data) {
        log.warn('Server called client route "add_item_instead_placeholder":', data);
        return runEvalScript("addItemInstead(" + data.placeholder_item_id + ", " +
                data.item_id + ")")
            .then(function(result){
                log.warn("add_item_instead_placeholder: " + result);
                return result;
            });
    });

    RPC.addRoute('AfterEffects.render', function (data) {
        log.warn('Server called client route "render":', data);
        var escapedPath = EscapeStringForJSX(data.folder_url);

@@ -312,6 +342,20 @@ function main(websocket_url){
        });
    });

    RPC.addRoute('AfterEffects.add_placeholder', function (data) {
        log.warn('Server called client route "add_placeholder":', data);
        var escapedName = EscapeStringForJSX(data.name);
        return runEvalScript("addPlaceholder('" + escapedName + "'," +
                data.width + ',' +
                data.height + ',' +
                data.fps + ',' +
                data.duration + ")")
            .then(function(result){
                log.warn("add_placeholder: " + result);
                return result;
            });
    });

    RPC.addRoute('AfterEffects.close', function (data) {
        log.warn('Server called client route "close":', data);
        return runEvalScript("close()");
@@ -112,6 +112,32 @@ function getActiveDocumentFullName(){
    return _prepareError("No file open currently");
}


function addItem(name, item_type){
    /**
     * Adds comp or folder to project items.
     *
     * Could be called when creating publishable instance to prepare
     * composition (and render queue).
     *
     * Args:
     *     name (str): composition name
     *     item_type (str): COMP|FOLDER
     * Returns:
     *     SingleItemValue: eg {"result": VALUE}
     */
    if (item_type == "COMP"){
        // dummy values, will be rewritten later
        item = app.project.items.addComp(name, 1920, 1060, 1, 10, 25);
    }else if (item_type == "FOLDER"){
        item = app.project.items.addFolder(name);
    }else{
        return _prepareError("Only 'COMP' or 'FOLDER' can be created");
    }
    return _prepareSingleValue(item.id);
}

function getItems(comps, folders, footages){
    /**
     * Returns JSON representation of compositions and

@@ -139,6 +165,24 @@ function getItems(comps, folders, footages){
}

function selectItems(items){
    /**
     * Select all items from `items`, deselect other.
     *
     * Args:
     *     items (list)
     */
    for (i = 1; i <= app.project.items.length; ++i){
        item = app.project.items[i];
        if (items.indexOf(item.id) > -1){
            item.selected = true;
        }else{
            item.selected = false;
        }
    }
}

function getSelectedItems(comps, folders, footages){
    /**
     * Returns list of selected items from Project menu

@@ -280,12 +324,12 @@ function setLabelColor(comp_id, color_idx){
    }
}

-function replaceItem(comp_id, path, item_name){
+function replaceItem(item_id, path, item_name){
    /**
     * Replaces loaded file with new file and updates name
     *
     * Args:
-    *     comp_id (int): id of composition, not a index!
+    *     item_id (int): id of composition, not a index!
     *     path (string): absolute path to new file
     *     item_name (string): new composition name
     */

@@ -295,7 +339,7 @@ function replaceItem(comp_id, path, item_name){
    if (!fp.exists){
        return _prepareError("File " + path + " not found.");
    }
-   var item = app.project.itemByID(comp_id);
+   var item = app.project.itemByID(item_id);
    if (item){
        try{
            if (isFileSequence(item)) {

@@ -311,7 +355,7 @@ function replaceItem(comp_id, path, item_name){
        fp.close();
    }else{
-       return _prepareError("There is no composition with "+ comp_id);
+       return _prepareError("There is no item with "+ item_id);
    }
    app.endUndoGroup();
}

@@ -821,6 +865,67 @@ function printMsg(msg){
    alert(msg);
}

function addPlaceholder(name, width, height, fps, duration){
    /** Add AE PlaceholderItem to Project list.
     *
     * PlaceholderItem chosen as it doesn't require existing file and
     * might potentially allow nice functionality in the future.
     */
    app.beginUndoGroup('change comp properties');
    try{
        item = app.project.importPlaceholder(name, width, height,
            fps, duration);

        return _prepareSingleValue(item.id);
    }catch (error) {
        writeLn(_prepareError("Cannot add placeholder " + error.toString()));
    }
    app.endUndoGroup();
}

function addItemInstead(placeholder_item_id, item_id){
    /** Add new loaded item in place of load placeholder.
     *
     * Each placeholder could be placed multiple times into multiple
     * composition. This loops through all compositions and
     * places loaded item under placeholder.
     * Placeholder item gets deleted later separately according
     * to configuration in Settings.
     *
     * Args:
     *     placeholder_item_id (int)
     *     item_id (int)
     */
    var item = app.project.itemByID(item_id);
    if (!item){
        return _prepareError("There is no item with "+ item_id);
    }

    app.beginUndoGroup('Add loaded items');
    for (i = 1; i <= app.project.items.length; ++i){
        var comp = app.project.items[i];
        if (!(comp instanceof CompItem)){
            continue
        }

        var i = 1;
        while (i <= comp.numLayers) {
            var layer = comp.layer(i);
            var layer_source = layer.source;
            if (layer_source && layer_source.id == placeholder_item_id){
                var new_layer = comp.layers.add(item);
                new_layer.moveAfter(layer);
                // copy all(?) properties to new layer
                layer.property("ADBE Transform Group").copyToComp(new_layer);
                i = i + 1;
            }
            i = i + 1;
        }
    }
    app.endUndoGroup();
}

function _prepareSingleValue(value){
    return JSON.stringify({"result": value})
}
@@ -357,3 +357,33 @@ class AfterEffectsRoute(WebSocketRoute):

        # Required return statement.
        return "nothing"

    def create_placeholder_route(self):
        from openpype.hosts.aftereffects.api.workfile_template_builder import \
            create_placeholder
        partial_method = functools.partial(create_placeholder)

        ProcessLauncher.execute_in_main_thread(partial_method)

        # Required return statement.
        return "nothing"

    def update_placeholder_route(self):
        from openpype.hosts.aftereffects.api.workfile_template_builder import \
            update_placeholder
        partial_method = functools.partial(update_placeholder)

        ProcessLauncher.execute_in_main_thread(partial_method)

        # Required return statement.
        return "nothing"

    def build_workfile_template_route(self):
        from openpype.hosts.aftereffects.api.workfile_template_builder import \
            build_workfile_template
        partial_method = functools.partial(build_workfile_template)

        ProcessLauncher.execute_in_main_thread(partial_method)

        # Required return statement.
        return "nothing"
@@ -10,6 +10,10 @@ from openpype.pipeline import (
    register_creator_plugin_path,
    AVALON_CONTAINER_ID,
)
from openpype.hosts.aftereffects.api.workfile_template_builder import (
    AEPlaceholderLoadPlugin,
    AEPlaceholderCreatePlugin
)
from openpype.pipeline.load import any_outdated_containers
import openpype.hosts.aftereffects


@@ -116,6 +120,12 @@ class AfterEffectsHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
        item["id"] = "publish_context"
        self.stub.imprint(item["id"], item)

    def get_workfile_build_placeholder_plugins(self):
        return [
            AEPlaceholderLoadPlugin,
            AEPlaceholderCreatePlugin
        ]

    # created instances section
    def list_instances(self):
        """List all created instances from current workfile which
@@ -1,7 +1,11 @@
import six
from abc import ABCMeta

from openpype.pipeline import LoaderPlugin
from .launch_logic import get_stub


@six.add_metaclass(ABCMeta)
class AfterEffectsLoader(LoaderPlugin):
    @staticmethod
    def get_stub():
openpype/hosts/aftereffects/api/workfile_template_builder.py (new file, 271 lines)

@@ -0,0 +1,271 @@
import os.path
import uuid
import shutil

from openpype.pipeline import registered_host
from openpype.tools.workfile_template_build import (
    WorkfileBuildPlaceholderDialog,
)
from openpype.pipeline.workfile.workfile_template_builder import (
    AbstractTemplateBuilder,
    PlaceholderPlugin,
    LoadPlaceholderItem,
    CreatePlaceholderItem,
    PlaceholderLoadMixin,
    PlaceholderCreateMixin
)
from openpype.hosts.aftereffects.api import get_stub
from openpype.hosts.aftereffects.api.lib import set_settings

PLACEHOLDER_SET = "PLACEHOLDERS_SET"
PLACEHOLDER_ID = "openpype.placeholder"


class AETemplateBuilder(AbstractTemplateBuilder):
    """Concrete implementation of AbstractTemplateBuilder for AE"""

    def import_template(self, path):
        """Import template into current scene.
        Block if a template is already loaded.

        Args:
            path (str): A path to current template (usually given by
                get_template_preset implementation)

        Returns:
            bool: Whether the template was successfully imported or not
        """
        stub = get_stub()
        if not os.path.exists(path):
            stub.print_msg(f"Template file on {path} doesn't exist.")
            return

        stub.save()
        workfile_path = stub.get_active_document_full_name()
        shutil.copy2(path, workfile_path)
        stub.open(workfile_path)

        return True


class AEPlaceholderPlugin(PlaceholderPlugin):
    """Contains generic methods for all PlaceholderPlugins."""

    def collect_placeholders(self):
        """Collect info from file metadata about created placeholders.

        Returns:
            (list) (LoadPlaceholderItem)
        """
        output = []
        scene_placeholders = self._collect_scene_placeholders()
        for item in scene_placeholders:
            if item.get("plugin_identifier") != self.identifier:
                continue

            if isinstance(self, AEPlaceholderLoadPlugin):
                item = LoadPlaceholderItem(item["uuid"],
                                           item["data"],
                                           self)
            elif isinstance(self, AEPlaceholderCreatePlugin):
                item = CreatePlaceholderItem(item["uuid"],
                                             item["data"],
                                             self)
            else:
                raise NotImplementedError(f"Not implemented for {type(self)}")

            output.append(item)

        return output

    def update_placeholder(self, placeholder_item, placeholder_data):
        """Resave changed properties for placeholders"""
        item_id, metadata_item = self._get_item(placeholder_item)
        stub = get_stub()
        if not item_id:
            stub.print_msg("Cannot find item for "
                           f"{placeholder_item.scene_identifier}")
            return
        metadata_item["data"] = placeholder_data
        stub.imprint(item_id, metadata_item)

    def _get_item(self, placeholder_item):
        """Returns item id and item metadata for placeholder from file meta"""
        stub = get_stub()
        placeholder_uuid = placeholder_item.scene_identifier
        for metadata_item in stub.get_metadata():
            if not metadata_item.get("is_placeholder"):
                continue
            if placeholder_uuid in metadata_item.get("uuid"):
                return metadata_item["members"][0], metadata_item
        return None, None

    def _collect_scene_placeholders(self):
        """Cache placeholder data to shared data.

        Returns:
            (list) of dicts
        """
        placeholder_items = self.builder.get_shared_populate_data(
            "placeholder_items"
        )
        if not placeholder_items:
            placeholder_items = []
            for item in get_stub().get_metadata():
                if not item.get("is_placeholder"):
                    continue
                placeholder_items.append(item)

            self.builder.set_shared_populate_data(
                "placeholder_items", placeholder_items
            )
        return placeholder_items

    def _imprint_item(self, item_id, name, placeholder_data, stub):
        if not item_id:
            raise ValueError("Couldn't create a placeholder")
        container_data = {
            "id": "openpype.placeholder",
            "name": name,
            "is_placeholder": True,
            "plugin_identifier": self.identifier,
            "uuid": str(uuid.uuid4()),  # scene_identifier
            "data": placeholder_data,
            "members": [item_id]
|
||||
}
|
||||
stub.imprint(item_id, container_data)
|
||||
|
||||
|
||||
class AEPlaceholderCreatePlugin(AEPlaceholderPlugin, PlaceholderCreateMixin):
|
||||
"""Adds Create placeholder.
|
||||
|
||||
This adds composition and runs Create
|
||||
"""
|
||||
identifier = "aftereffects.create"
|
||||
label = "AfterEffects create"
|
||||
|
||||
def create_placeholder(self, placeholder_data):
|
||||
stub = get_stub()
|
||||
name = "CREATEPLACEHOLDER"
|
||||
item_id = stub.add_item(name, "COMP")
|
||||
|
||||
self._imprint_item(item_id, name, placeholder_data, stub)
|
||||
|
||||
def populate_placeholder(self, placeholder):
|
||||
"""Replace 'placeholder' with publishable instance.
|
||||
|
||||
Renames prepared composition name, creates publishable instance, sets
|
||||
frame/duration settings according to DB.
|
||||
"""
|
||||
pre_create_data = {"use_selection": True}
|
||||
item_id, item = self._get_item(placeholder)
|
||||
get_stub().select_items([item_id])
|
||||
self.populate_create_placeholder(placeholder, pre_create_data)
|
||||
|
||||
# apply settings for populated composition
|
||||
item_id, metadata_item = self._get_item(placeholder)
|
||||
set_settings(True, True, [item_id])
|
||||
|
||||
def get_placeholder_options(self, options=None):
|
||||
return self.get_create_plugin_options(options)
|
||||
|
||||
|
||||
class AEPlaceholderLoadPlugin(AEPlaceholderPlugin, PlaceholderLoadMixin):
|
||||
identifier = "aftereffects.load"
|
||||
label = "AfterEffects load"
|
||||
|
||||
def create_placeholder(self, placeholder_data):
|
||||
"""Creates AE's Placeholder item in Project items list.
|
||||
|
||||
Sets dummy resolution/duration/fps settings, will be replaced when
|
||||
populated.
|
||||
"""
|
||||
stub = get_stub()
|
||||
name = "LOADERPLACEHOLDER"
|
||||
item_id = stub.add_placeholder(name, 1920, 1060, 25, 10)
|
||||
|
||||
self._imprint_item(item_id, name, placeholder_data, stub)
|
||||
|
||||
def populate_placeholder(self, placeholder):
|
||||
"""Use Openpype Loader from `placeholder` to create new FootageItems
|
||||
|
||||
New FootageItems are created, files are imported.
|
||||
"""
|
||||
self.populate_load_placeholder(placeholder)
|
||||
errors = placeholder.get_errors()
|
||||
stub = get_stub()
|
||||
if errors:
|
||||
stub.print_msg("\n".join(errors))
|
||||
else:
|
||||
if not placeholder.data["keep_placeholder"]:
|
||||
metadata = stub.get_metadata()
|
||||
for item in metadata:
|
||||
if not item.get("is_placeholder"):
|
||||
continue
|
||||
scene_identifier = item.get("uuid")
|
||||
if (scene_identifier and
|
||||
scene_identifier == placeholder.scene_identifier):
|
||||
stub.delete_item(item["members"][0])
|
||||
stub.remove_instance(placeholder.scene_identifier, metadata)
|
||||
|
||||
def get_placeholder_options(self, options=None):
|
||||
return self.get_load_plugin_options(options)
|
||||
|
||||
def load_succeed(self, placeholder, container):
|
||||
placeholder_item_id, _ = self._get_item(placeholder)
|
||||
item_id = container.id
|
||||
get_stub().add_item_instead_placeholder(placeholder_item_id, item_id)
|
||||
|
||||
|
||||
def build_workfile_template(*args, **kwargs):
|
||||
builder = AETemplateBuilder(registered_host())
|
||||
builder.build_template(*args, **kwargs)
|
||||
|
||||
|
||||
def update_workfile_template(*args):
|
||||
builder = AETemplateBuilder(registered_host())
|
||||
builder.rebuild_template()
|
||||
|
||||
|
||||
def create_placeholder(*args):
|
||||
"""Called when new workile placeholder should be created."""
|
||||
host = registered_host()
|
||||
builder = AETemplateBuilder(host)
|
||||
window = WorkfileBuildPlaceholderDialog(host, builder)
|
||||
window.exec_()
|
||||
|
||||
|
||||
def update_placeholder(*args):
|
||||
"""Called after placeholder item is selected to modify it."""
|
||||
host = registered_host()
|
||||
builder = AETemplateBuilder(host)
|
||||
|
||||
stub = get_stub()
|
||||
selected_items = stub.get_selected_items(True, True, True)
|
||||
|
||||
if len(selected_items) != 1:
|
||||
stub.print_msg("Please select just 1 placeholder")
|
||||
return
|
||||
|
||||
selected_id = selected_items[0].id
|
||||
placeholder_item = None
|
||||
|
||||
placeholder_items_by_id = {
|
||||
placeholder_item.scene_identifier: placeholder_item
|
||||
for placeholder_item in builder.get_placeholders()
|
||||
}
|
||||
for metadata_item in stub.get_metadata():
|
||||
if not metadata_item.get("is_placeholder"):
|
||||
continue
|
||||
if selected_id in metadata_item.get("members"):
|
||||
placeholder_item = placeholder_items_by_id.get(
|
||||
metadata_item["uuid"])
|
||||
break
|
||||
|
||||
if not placeholder_item:
|
||||
stub.print_msg("Didn't find placeholder metadata. "
|
||||
"Remove and re-create placeholder.")
|
||||
return
|
||||
|
||||
window = WorkfileBuildPlaceholderDialog(host, builder)
|
||||
window.set_update_mode(placeholder_item)
|
||||
window.exec_()
|
||||
|
|
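The metadata record written by `_imprint_item` above is the contract the rest of the builder relies on: `_get_item`, `update_placeholder` and both placeholder plugins look entries up by the `uuid` it generates. A minimal standalone sketch of that record, runnable outside After Effects (the `item_id` value and the helper name are hypothetical):

```python
import uuid


def make_placeholder_record(item_id, name, plugin_identifier, placeholder_data):
    """Build the metadata dict imprinted on a placeholder item.

    Mirrors container_data in AEPlaceholderPlugin._imprint_item; the
    "uuid" value is what later serves as the scene_identifier.
    """
    return {
        "id": "openpype.placeholder",
        "name": name,
        "is_placeholder": True,
        "plugin_identifier": plugin_identifier,
        "uuid": str(uuid.uuid4()),  # scene_identifier
        "data": placeholder_data,
        "members": [item_id],
    }


record = make_placeholder_record(
    101, "CREATEPLACEHOLDER", "aftereffects.create", {"use_selection": True})
print(record["is_placeholder"], record["members"])  # True [101]
```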
@@ -35,6 +35,8 @@ class AEItem(object):
     instance_id = attr.ib(default=None)  # New Publisher
     width = attr.ib(default=None)
     height = attr.ib(default=None)
+    is_placeholder = attr.ib(default=False)
+    uuid = attr.ib(default=False)


 class AfterEffectsServerStub():

@@ -220,6 +222,16 @@ class AfterEffectsServerStub():
         )
         return self._to_records(self._handle_return(res))

+    def select_items(self, items):
+        """
+        Select items in Project list
+        Args:
+            items (list): of int item ids
+        """
+        self.websocketserver.call(
+            self.client.call('AfterEffects.select_items', items=items))
+
+
     def get_selected_items(self, comps, folders=False, footages=False):
         """
         Same as get_items but using selected items only

@@ -240,6 +252,21 @@ class AfterEffectsServerStub():
         )
         return self._to_records(self._handle_return(res))

+    def add_item(self, name, item_type):
+        """
+        Adds either composition or folder to project item list.
+
+        Args:
+            name (str)
+            item_type (str): COMP|FOLDER
+        """
+        res = self.websocketserver.call(self.client.call
+                                        ('AfterEffects.add_item',
+                                         name=name,
+                                         item_type=item_type))
+
+        return self._handle_return(res)
+
     def get_item(self, item_id):
         """
         Returns metadata for particular 'item_id' or None

@@ -316,7 +343,7 @@ class AfterEffectsServerStub():

         return self._handle_return(res)

-    def remove_instance(self, instance_id):
+    def remove_instance(self, instance_id, metadata=None):
         """
         Removes instance with 'instance_id' from file's metadata and
         saves them.

@@ -328,7 +355,10 @@ class AfterEffectsServerStub():
         """
         cleaned_data = []

-        for instance in self.get_metadata():
+        if metadata is None:
+            metadata = self.get_metadata()
+
+        for instance in metadata:
            inst_id = instance.get("instance_id") or instance.get("uuid")
            if inst_id != instance_id:
                cleaned_data.append(instance)
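The updated `remove_instance` above filters a metadata list instead of always re-reading it, matching entries on `instance_id` first and falling back to the placeholder `uuid`. That matching rule can be exercised in isolation; the metadata entries below are made up for illustration:

```python
def filter_out_instance(metadata, instance_id):
    """Return metadata entries excluding the one matching instance_id.

    Mirrors the loop in remove_instance: an entry matches on either its
    "instance_id" key or, for placeholders, its "uuid" key.
    """
    cleaned_data = []
    for instance in metadata:
        inst_id = instance.get("instance_id") or instance.get("uuid")
        if inst_id != instance_id:
            cleaned_data.append(instance)
    return cleaned_data


metadata = [
    {"instance_id": "a1", "subset": "renderMain"},
    {"uuid": "b2", "is_placeholder": True},
]
# Removing "b2" keeps only the "a1" entry
print(filter_out_instance(metadata, "b2"))
```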
@@ -534,6 +564,47 @@ class AfterEffectsServerStub():
         if records:
             return records.pop()

+    def add_item_instead_placeholder(self, placeholder_item_id, item_id):
+        """
+        Adds item_id to layers where placeholder_item_id is present.
+
+        1 placeholder could result in multiple loaded containers (eg items)
+
+        Args:
+            placeholder_item_id (int): id of placeholder item
+            item_id (int): loaded FootageItem id
+        """
+        res = self.websocketserver.call(self.client.call
+                                        ('AfterEffects.add_item_instead_placeholder',  # noqa
+                                         placeholder_item_id=placeholder_item_id,  # noqa
+                                         item_id=item_id))
+
+        return self._handle_return(res)
+
+    def add_placeholder(self, name, width, height, fps, duration):
+        """
+        Adds new FootageItem as a placeholder for workfile builder
+
+        Placeholder requires width etc, currently probably only hardcoded
+        values.
+
+        Args:
+            name (str)
+            width (int)
+            height (int)
+            fps (float)
+            duration (int)
+        """
+        res = self.websocketserver.call(self.client.call
+                                        ('AfterEffects.add_placeholder',
+                                         name=name,
+                                         width=width,
+                                         height=height,
+                                         fps=fps,
+                                         duration=duration))
+
+        return self._handle_return(res)
+
     def render(self, folder_url, comp_id):
         """
         Render all renderqueueitem to 'folder_url'

@@ -632,7 +703,8 @@ class AfterEffectsServerStub():
                           d.get('file_name'),
                           d.get("instance_id"),
                           d.get("width"),
-                          d.get("height"))
+                          d.get("height"),
+                          d.get("is_placeholder"))

             ret.append(item)
         return ret
@@ -1,17 +1,15 @@
 import re

 from openpype.pipeline import get_representation_path
-from openpype.hosts.aftereffects.api import (
-    AfterEffectsLoader,
-    containerise
-)
+from openpype.hosts.aftereffects import api
+
 from openpype.hosts.aftereffects.api.lib import (
     get_background_layers,
     get_unique_layer_name,
 )


-class BackgroundLoader(AfterEffectsLoader):
+class BackgroundLoader(api.AfterEffectsLoader):
     """
     Load images from Background family
     Creates for each background separate folder with all imported images

@@ -21,6 +19,7 @@ class BackgroundLoader(api.AfterEffectsLoader):
     For each load container is created and stored in project (.aep)
     metadata
     """
+    label = "Load JSON Background"
     families = ["background"]
     representations = ["json"]

@@ -48,7 +47,7 @@ class BackgroundLoader(api.AfterEffectsLoader):
         self[:] = [comp]
         namespace = namespace or comp_name

-        return containerise(
+        return api.containerise(
             name,
             namespace,
             comp,
@@ -1,14 +1,11 @@
 import re

 from openpype.pipeline import get_representation_path
-from openpype.hosts.aftereffects.api import (
-    AfterEffectsLoader,
-    containerise
-)
+from openpype.hosts.aftereffects import api
 from openpype.hosts.aftereffects.api.lib import get_unique_layer_name


-class FileLoader(AfterEffectsLoader):
+class FileLoader(api.AfterEffectsLoader):
     """Load images

     Stores the imported asset in a container named after the asset.

@@ -64,7 +61,7 @@ class FileLoader(api.AfterEffectsLoader):
         self[:] = [comp]
         namespace = namespace or comp_name

-        return containerise(
+        return api.containerise(
             name,
             namespace,
             comp,
@@ -21,8 +21,13 @@ from .pipeline import (
     reset_selection
 )

+from .constants import (
+    OPENPYPE_TAG_NAME,
+    DEFAULT_SEQUENCE_NAME,
+    DEFAULT_BIN_NAME
+)
+
 from .lib import (
     pype_tag_name,
     flatten,
     get_track_items,
     get_current_project,

@@ -82,8 +87,12 @@ __all__ = [
     "file_extensions",
     "work_root",

+    # Constants
+    "OPENPYPE_TAG_NAME",
+    "DEFAULT_SEQUENCE_NAME",
+    "DEFAULT_BIN_NAME",
+
     # Lib functions
     "pype_tag_name",
     "flatten",
     "get_track_items",
     "get_current_project",

openpype/hosts/hiero/api/constants.py (new file)
@@ -0,0 +1,3 @@
+OPENPYPE_TAG_NAME = "openpypeData"
+DEFAULT_SEQUENCE_NAME = "openpypeSequence"
+DEFAULT_BIN_NAME = "openpypeBin"
@@ -5,7 +5,6 @@ Host specific functions where host api is connected
 from copy import deepcopy
 import os
 import re
-import sys
 import platform
 import functools
 import warnings

@@ -29,12 +28,22 @@ from openpype.pipeline import (
 from openpype.pipeline.load import filter_containers
 from openpype.lib import Logger
 from . import tags
+
+from .constants import (
+    OPENPYPE_TAG_NAME,
+    DEFAULT_SEQUENCE_NAME,
+    DEFAULT_BIN_NAME
+)
 from openpype.pipeline.colorspace import (
     get_imageio_config
 )


+class _CTX:
+    has_been_setup = False
+    has_menu = False
+    parent_gui = None
+
+
 class DeprecatedWarning(DeprecationWarning):
     pass

@@ -82,23 +91,14 @@ def deprecated(new_destination):

 log = Logger.get_logger(__name__)

-self = sys.modules[__name__]
-self._has_been_setup = False
-self._has_menu = False
-self._registered_gui = None
-self._parent = None
-self.pype_tag_name = "openpypeData"
-self.default_sequence_name = "openpypeSequence"
-self.default_bin_name = "openpypeBin"
-
-
-def flatten(_list):
-    for item in _list:
-        if isinstance(item, (list, tuple)):
-            for sub_item in flatten(item):
+def flatten(list_):
+    for item_ in list_:
+        if isinstance(item_, (list, tuple)):
+            for sub_item in flatten(item_):
                 yield sub_item
         else:
-            yield item
+            yield item_


 def get_current_project(remove_untitled=False):
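The module-level `self = sys.modules[__name__]` state above is replaced by the `_CTX` holder, and `flatten` drops the underscore-prefixed argument names; the generator's behavior is unchanged and easy to check standalone:

```python
def flatten(list_):
    # Recursively yield leaf items from arbitrarily nested lists/tuples.
    for item_ in list_:
        if isinstance(item_, (list, tuple)):
            for sub_item in flatten(item_):
                yield sub_item
        else:
            yield item_


print(list(flatten([1, [2, [3, 4]], (5,)])))  # [1, 2, 3, 4, 5]
```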
@@ -131,7 +131,7 @@ def get_current_sequence(name=None, new=False):

     if new:
         # create new
-        name = name or self.default_sequence_name
+        name = name or DEFAULT_SEQUENCE_NAME
         sequence = hiero.core.Sequence(name)
         root_bin.addItem(hiero.core.BinItem(sequence))
     elif name:

@@ -345,7 +345,7 @@ def get_track_item_tags(track_item):
     # collect all tags which are not openpype tag
     returning_tag_data.extend(
         tag for tag in _tags
-        if tag.name() != self.pype_tag_name
+        if tag.name() != OPENPYPE_TAG_NAME
     )

     return returning_tag_data

@@ -385,7 +385,7 @@ def set_track_openpype_tag(track, data=None):
     # if pype tag available then update with input data
     tag = tags.create_tag(
         "{}_{}".format(
-            self.pype_tag_name,
+            OPENPYPE_TAG_NAME,
             _get_tag_unique_hash()
         ),
         tag_data

@@ -412,7 +412,7 @@ def get_track_openpype_tag(track):
         return None
     for tag in _tags:
         # return only correct tag defined by global name
-        if self.pype_tag_name in tag.name():
+        if OPENPYPE_TAG_NAME in tag.name():
             return tag


@@ -484,7 +484,7 @@ def get_trackitem_openpype_tag(track_item):
         return None
     for tag in _tags:
         # return only correct tag defined by global name
-        if self.pype_tag_name in tag.name():
+        if OPENPYPE_TAG_NAME in tag.name():
             return tag


@@ -516,7 +516,7 @@ def set_trackitem_openpype_tag(track_item, data=None):
     # if pype tag available then update with input data
     tag = tags.create_tag(
         "{}_{}".format(
-            self.pype_tag_name,
+            OPENPYPE_TAG_NAME,
             _get_tag_unique_hash()
         ),
         tag_data

@@ -698,29 +698,29 @@ def setup(console=False, port=None, menu=True):
         menu (bool, optional): Display file menu in Hiero.
     """

-    if self._has_been_setup:
+    if _CTX.has_been_setup:
         teardown()

     add_submission()

     if menu:
         add_to_filemenu()
-        self._has_menu = True
+        _CTX.has_menu = True

-    self._has_been_setup = True
+    _CTX.has_been_setup = True
     log.debug("pyblish: Loaded successfully.")


 def teardown():
     """Remove integration"""
-    if not self._has_been_setup:
+    if not _CTX.has_been_setup:
         return

-    if self._has_menu:
+    if _CTX.has_menu:
         remove_from_filemenu()
-        self._has_menu = False
+        _CTX.has_menu = False

-    self._has_been_setup = False
+    _CTX.has_been_setup = False
     log.debug("pyblish: Integration torn down successfully")


@@ -928,7 +928,7 @@ def create_bin(path=None, project=None):
     # get the first loaded project
     project = project or get_current_project()

-    path = path or self.default_bin_name
+    path = path or DEFAULT_BIN_NAME

     path = path.replace("\\", "/").split("/")


@@ -1311,11 +1311,11 @@ def before_project_save(event):

 def get_main_window():
     """Acquire Nuke's main window"""
-    if self._parent is None:
+    if _CTX.parent_gui is None:
         top_widgets = QtWidgets.QApplication.topLevelWidgets()
         name = "Foundry::UI::DockMainWindow"
         main_window = next(widget for widget in top_widgets if
                            widget.inherits("QMainWindow") and
                            widget.metaObject().className() == name)
-        self._parent = main_window
-    return self._parent
+        _CTX.parent_gui = main_window
+    return _CTX.parent_gui
@@ -3,20 +3,18 @@

 import os
 import re
-import sys
+import ast
 import opentimelineio as otio
 from . import utils
 import hiero.core
 import hiero.ui

-self = sys.modules[__name__]
-self.track_types = {
+TRACK_TYPE_MAP = {
     hiero.core.VideoTrack: otio.schema.TrackKind.Video,
     hiero.core.AudioTrack: otio.schema.TrackKind.Audio
 }
-self.project_fps = None
-self.marker_color_map = {
+MARKER_COLOR_MAP = {
     "magenta": otio.schema.MarkerColor.MAGENTA,
     "red": otio.schema.MarkerColor.RED,
     "yellow": otio.schema.MarkerColor.YELLOW,

@@ -24,30 +22,21 @@
     "cyan": otio.schema.MarkerColor.CYAN,
     "blue": otio.schema.MarkerColor.BLUE,
 }
-self.timeline = None
-self.include_tags = True
-
-
-def flatten(_list):
-    for item in _list:
-        if isinstance(item, (list, tuple)):
-            for sub_item in flatten(item):
+class CTX:
+    project_fps = None
+    timeline = None
+    include_tags = True
+
+
+def flatten(list_):
+    for item_ in list_:
+        if isinstance(item_, (list, tuple)):
+            for sub_item in flatten(item_):
                 yield sub_item
         else:
-            yield item
-
-
-def get_current_hiero_project(remove_untitled=False):
-    projects = flatten(hiero.core.projects())
-    if not remove_untitled:
-        return next(iter(projects))
-
-    # if remove_untitled
-    for proj in projects:
-        if "Untitled" in proj.name():
-            proj.close()
-        else:
-            return proj
+            yield item_


 def create_otio_rational_time(frame, fps):

@@ -152,7 +141,7 @@ def create_otio_reference(clip):
     file_head = media_source.filenameHead()
     is_sequence = not media_source.singleFile()
     frame_duration = media_source.duration()
-    fps = utils.get_rate(clip) or self.project_fps
+    fps = utils.get_rate(clip) or CTX.project_fps
     extension = os.path.splitext(path)[-1]

     if is_sequence:

@@ -217,8 +206,8 @@ def get_marker_color(tag):
     res = re.search(pat, icon)
     if res:
         color = res.groupdict().get('color')
-        if color.lower() in self.marker_color_map:
-            return self.marker_color_map[color.lower()]
+        if color.lower() in MARKER_COLOR_MAP:
+            return MARKER_COLOR_MAP[color.lower()]

     return otio.schema.MarkerColor.RED

@@ -232,7 +221,7 @@ def create_otio_markers(otio_item, item):
             # Hiero adds this tag to a lot of clips
             continue

-        frame_rate = utils.get_rate(item) or self.project_fps
+        frame_rate = utils.get_rate(item) or CTX.project_fps

         marked_range = otio.opentime.TimeRange(
             start_time=otio.opentime.RationalTime(

@@ -279,7 +268,7 @@ def create_otio_clip(track_item):

     duration = int(track_item.duration())

-    fps = utils.get_rate(track_item) or self.project_fps
+    fps = utils.get_rate(track_item) or CTX.project_fps
     name = track_item.name()

     media_reference = create_otio_reference(clip)

@@ -296,7 +285,7 @@ def create_otio_clip(track_item):
     )

     # Add tags as markers
-    if self.include_tags:
+    if CTX.include_tags:
         create_otio_markers(otio_clip, track_item)
         create_otio_markers(otio_clip, track_item.source())


@@ -319,13 +308,13 @@ def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):


 def _create_otio_timeline():
-    project = get_current_hiero_project(remove_untitled=False)
-    metadata = _get_metadata(self.timeline)
+    project = CTX.timeline.project()
+    metadata = _get_metadata(CTX.timeline)

     metadata.update({
-        "openpype.timeline.width": int(self.timeline.format().width()),
-        "openpype.timeline.height": int(self.timeline.format().height()),
-        "openpype.timeline.pixelAspect": int(self.timeline.format().pixelAspect()),  # noqa
+        "openpype.timeline.width": int(CTX.timeline.format().width()),
+        "openpype.timeline.height": int(CTX.timeline.format().height()),
+        "openpype.timeline.pixelAspect": int(CTX.timeline.format().pixelAspect()),  # noqa
         "openpype.project.useOCIOEnvironmentOverride": project.useOCIOEnvironmentOverride(),  # noqa
         "openpype.project.lutSetting16Bit": project.lutSetting16Bit(),
         "openpype.project.lutSetting8Bit": project.lutSetting8Bit(),

@@ -339,10 +328,10 @@ def _create_otio_timeline():
     })

     start_time = create_otio_rational_time(
-        self.timeline.timecodeStart(), self.project_fps)
+        CTX.timeline.timecodeStart(), CTX.project_fps)

     return otio.schema.Timeline(
-        name=self.timeline.name(),
+        name=CTX.timeline.name(),
         global_start_time=start_time,
         metadata=metadata
     )

@@ -351,7 +340,7 @@ def _create_otio_timeline():
 def create_otio_track(track_type, track_name):
     return otio.schema.Track(
         name=track_name,
-        kind=self.track_types[track_type]
+        kind=TRACK_TYPE_MAP[track_type]
     )


@@ -363,7 +352,7 @@ def add_otio_gap(track_item, otio_track, prev_out):
     gap = otio.opentime.TimeRange(
         duration=otio.opentime.RationalTime(
             gap_length,
-            self.project_fps
+            CTX.project_fps
         )
     )
     otio_gap = otio.schema.Gap(source_range=gap)

@@ -396,14 +385,14 @@ def create_otio_timeline():
         return track_item.parent().items()[itemindex - 1]

     # get current timeline
-    self.timeline = hiero.ui.activeSequence()
-    self.project_fps = self.timeline.framerate().toFloat()
+    CTX.timeline = hiero.ui.activeSequence()
+    CTX.project_fps = CTX.timeline.framerate().toFloat()

     # convert timeline to otio
     otio_timeline = _create_otio_timeline()

     # loop all defined track types
-    for track in self.timeline.items():
+    for track in CTX.timeline.items():
         # skip if track is disabled
         if not track.isEnabled():
             continue

@@ -441,7 +430,7 @@ def create_otio_timeline():
         otio_track.append(otio_clip)

         # Add tags as markers
-        if self.include_tags:
+        if CTX.include_tags:
             create_otio_markers(otio_track, track)

         # add track to otio timeline
@@ -310,7 +310,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):

         # add pypedata marker to otio_clip metadata
         for marker in otio_clip.markers:
-            if phiero.pype_tag_name in marker.name:
+            if phiero.OPENPYPE_TAG_NAME in marker.name:
                 otio_clip.metadata.update(marker.metadata)
         return {"otioClip": otio_clip}


@@ -8,7 +8,6 @@ from qtpy.QtGui import QPixmap
 import hiero.ui

 from openpype.pipeline import legacy_io
-from openpype.hosts.hiero import api as phiero
 from openpype.hosts.hiero.api.otio import hiero_export


@@ -22,8 +21,8 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):

         asset = legacy_io.Session["AVALON_ASSET"]
         subset = "workfile"
-        project = phiero.get_current_project()
         active_timeline = hiero.ui.activeSequence()
+        project = active_timeline.project()
         fps = active_timeline.framerate().toFloat()

         # adding otio timeline to context
@@ -3,12 +3,12 @@
 import hou  # noqa

 from openpype.hosts.houdini.api import plugin
-from openpype.pipeline import CreatedInstance
+from openpype.lib import EnumDef


 class CreateRedshiftROP(plugin.HoudiniCreator):
     """Redshift ROP"""

     identifier = "io.openpype.creators.houdini.redshift_rop"
     label = "Redshift ROP"
     family = "redshift_rop"

@@ -28,7 +28,7 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
         instance = super(CreateRedshiftROP, self).create(
             subset_name,
             instance_data,
-            pre_create_data)  # type: CreatedInstance
+            pre_create_data)

         instance_node = hou.node(instance.get("instance_node"))


@@ -57,6 +57,8 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
             fmt="${aov}.$F4.{ext}".format(aov="AOV", ext=ext)
         )

+        ext_format_index = {"exr": 0, "tif": 1, "jpg": 2, "png": 3}
+
         parms = {
             # Render frame range
             "trange": 1,

@@ -64,6 +66,7 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
             "RS_outputFileNamePrefix": filepath,
             "RS_outputMultilayerMode": "1",  # no multi-layered exr
             "RS_outputBeautyAOVSuffix": "beauty",
+            "RS_outputFileFormat": ext_format_index[ext],
         }

         if self.selected_nodes:

@@ -93,8 +96,7 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
     def get_pre_create_attr_defs(self):
         attrs = super(CreateRedshiftROP, self).get_pre_create_attr_defs()
         image_format_enum = [
-            "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
-            "rad", "rat", "rta", "sgi", "tga", "tif",
+            "exr", "tif", "jpg", "png",
         ]

         return attrs + [
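The new `RS_outputFileFormat` parm above is derived by indexing the chosen extension into `ext_format_index`, which is why `image_format_enum` is trimmed to the four extensions the mapping covers. A tiny sketch of that lookup, runnable outside Houdini (the helper name is illustrative):

```python
ext_format_index = {"exr": 0, "tif": 1, "jpg": 2, "png": 3}


def rs_output_file_format(ext):
    # Map an image extension to Redshift's file-format index; extensions
    # outside the trimmed enum raise KeyError rather than silently passing.
    return ext_format_index[ext]


print(rs_output_file_format("exr"))  # 0
```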
@@ -250,10 +250,7 @@ def reset_frame_range(fps: bool = True):
         frame_range["handleStart"]
     )
     frame_end_handle = frame_range["frameEnd"] + int(frame_range["handleEnd"])
-    frange_cmd = (
-        f"animationRange = interval {frame_start_handle} {frame_end_handle}"
-    )
-    rt.Execute(frange_cmd)
+    set_timeline(frame_start_handle, frame_end_handle)
     set_render_frame_range(frame_start_handle, frame_end_handle)


@@ -285,3 +282,10 @@ def get_max_version():
     """
     max_info = rt.MaxVersion()
     return max_info[7]
+
+
+def set_timeline(frameStart, frameEnd):
+    """Set frame range for timeline editor in Max
+    """
+    rt.animationRange = rt.interval(frameStart, frameEnd)
+    return rt.animationRange
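The Max change above (and the Maya `reset_frame_range` change that follows) both work with a frame range extended by handles on each side. A quick sketch of that arithmetic; the dict keys follow the `frame_range` lookups visible in the diff, and the subtraction on the start side is an assumption since that line is truncated in this excerpt:

```python
def frame_range_with_handles(frame_range):
    # Handles pad the playback/render range on both sides of the shot range.
    start = frame_range["frameStart"] - int(frame_range["handleStart"])
    end = frame_range["frameEnd"] + int(frame_range["handleEnd"])
    return start, end


print(frame_range_with_handles(
    {"frameStart": 1001, "frameEnd": 1100, "handleStart": 10, "handleEnd": 10}))
# (991, 1110)
```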
@ -121,16 +121,14 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94}
|
|||
|
||||
RENDERLIKE_INSTANCE_FAMILIES = ["rendering", "vrayscene"]
|
||||
|
||||
DISPLAY_LIGHTS_VALUES = [
|
||||
"project_settings", "default", "all", "selected", "flat", "none"
|
||||
]
|
||||
DISPLAY_LIGHTS_LABELS = [
|
||||
"Use Project Settings",
|
||||
"Default Lighting",
|
||||
"All Lights",
|
||||
"Selected Lights",
|
||||
"Flat Lighting",
|
||||
"No Lights"
|
||||
|
||||
DISPLAY_LIGHTS_ENUM = [
|
||||
{"label": "Use Project Settings", "value": "project_settings"},
|
||||
{"label": "Default Lighting", "value": "default"},
|
||||
{"label": "All Lights", "value": "all"},
|
||||
{"label": "Selected Lights", "value": "selected"},
|
||||
{"label": "Flat Lighting", "value": "flat"},
|
||||
{"label": "No Lights", "value": "none"}
|
||||
]
|
||||
|
||||
|
||||
|
|
@ -2320,8 +2318,8 @@ def reset_frame_range(playback=True, render=True, fps=True):
|
|||
cmds.currentTime(frame_start)
|
||||
|
||||
if render:
|
||||
cmds.setAttr("defaultRenderGlobals.startFrame", frame_start)
|
||||
cmds.setAttr("defaultRenderGlobals.endFrame", frame_end)
|
||||
cmds.setAttr("defaultRenderGlobals.startFrame", animation_start)
|
||||
cmds.setAttr("defaultRenderGlobals.endFrame", animation_end)
|
||||
|
||||
|
||||
def reset_scene_resolution():
|
||||
|
|
@@ -3989,6 +3987,71 @@ def get_capture_preset(task_name, task_type, subset, project_settings, log):
     return capture_preset or {}
 
 
+def get_reference_node(members, log=None):
+    """Get the reference node from the container members
+    Args:
+        members: list of node names
+
+    Returns:
+        str: Reference node name.
+
+    """
+
+    # Collect the references without .placeHolderList[] attributes as
+    # unique entries (objects only) and skipping the sharedReferenceNode.
+    references = set()
+    for ref in cmds.ls(members, exactType="reference", objectsOnly=True):
+
+        # Ignore any `:sharedReferenceNode`
+        if ref.rsplit(":", 1)[-1].startswith("sharedReferenceNode"):
+            continue
+
+        # Ignore _UNKNOWN_REF_NODE_ (PLN-160)
+        if ref.rsplit(":", 1)[-1].startswith("_UNKNOWN_REF_NODE_"):
+            continue
+
+        references.add(ref)
+
+    assert references, "No reference node found in container"
+
+    # Get highest reference node (least parents)
+    highest = min(references,
+                  key=lambda x: len(get_reference_node_parents(x)))
+
+    # Warn the user when we're taking the highest reference node
+    if len(references) > 1:
+        if not log:
+            log = logging.getLogger(__name__)
+
+        log.warning("More than one reference node found in "
+                    "container, using highest reference node: "
+                    "%s (in: %s)", highest, list(references))
+
+    return highest
+
+
+def get_reference_node_parents(ref):
+    """Return all parent reference nodes of reference node
+
+    Args:
+        ref (str): reference node.
+
+    Returns:
+        list: The upstream parent reference nodes.
+
+    """
+    parent = cmds.referenceQuery(ref,
+                                 referenceNode=True,
+                                 parent=True)
+    parents = []
+    while parent:
+        parents.append(parent)
+        parent = cmds.referenceQuery(parent,
+                                     referenceNode=True,
+                                     parent=True)
+    return parents
+
+
+def create_rig_animation_instance(
+    nodes, context, namespace, options=None, log=None
+):
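The `get_reference_node` helper moved into `lib` picks the "highest" reference node, the one with the fewest upstream parents. The selection logic can be exercised outside Maya with a plain parent map; the `PARENTS` dict and node names below are hypothetical stand-ins for `cmds.referenceQuery`, not the OpenPype API:

```python
import logging

# Hypothetical parent map standing in for cmds.referenceQuery(..., parent=True)
PARENTS = {
    "charA_RN": [],                            # top-level reference
    "charA_rigRN": ["charA_RN"],               # nested one level deep
    "charA_modelRN": ["charA_rigRN", "charA_RN"],
}


def get_reference_node_parents(ref):
    """Return all upstream parent reference nodes of `ref`."""
    return PARENTS[ref]


def get_reference_node(members, log=None):
    """Pick the 'highest' reference node: the one with the fewest parents."""
    references = set(members)
    assert references, "No reference node found in container"

    highest = min(references,
                  key=lambda x: len(get_reference_node_parents(x)))

    # Warn when more than one candidate exists, mirroring the lib helper
    if len(references) > 1:
        log = log or logging.getLogger(__name__)
        log.warning("More than one reference node found, using: %s", highest)
    return highest


print(get_reference_node(["charA_modelRN", "charA_RN", "charA_rigRN"]))
```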
@@ -116,7 +116,7 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
         register_event_callback("taskChanged", on_task_changed)
         register_event_callback("workfile.open.before", before_workfile_open)
         register_event_callback("workfile.save.before", before_workfile_save)
-        register_event_callback("workfile.save.before", after_workfile_save)
+        register_event_callback("workfile.save.after", after_workfile_save)
 
     def open_workfile(self, filepath):
         return open_file(filepath)
@@ -32,6 +32,9 @@ from .pipeline import containerise
+from . import lib
 
 
+log = Logger.get_logger()
+
 
 def _get_attr(node, attr, default=None):
     """Helper to get attribute which allows attribute to not exist."""
     if not cmds.attributeQuery(attr, node=node, exists=True):
@@ -39,69 +42,28 @@ def _get_attr(node, attr, default=None):
     return cmds.getAttr("{}.{}".format(node, attr))
 
 
-def get_reference_node(members, log=None):
-    """Get the reference node from the container members
-    Args:
-        members: list of node names
-
-    Returns:
-        str: Reference node name.
-
-    """
-
-    # Collect the references without .placeHolderList[] attributes as
-    # unique entries (objects only) and skipping the sharedReferenceNode.
-    references = set()
-    for ref in cmds.ls(members, exactType="reference", objectsOnly=True):
-
-        # Ignore any `:sharedReferenceNode`
-        if ref.rsplit(":", 1)[-1].startswith("sharedReferenceNode"):
-            continue
-
-        # Ignore _UNKNOWN_REF_NODE_ (PLN-160)
-        if ref.rsplit(":", 1)[-1].startswith("_UNKNOWN_REF_NODE_"):
-            continue
-
-        references.add(ref)
-
-    assert references, "No reference node found in container"
-
-    # Get highest reference node (least parents)
-    highest = min(references,
-                  key=lambda x: len(get_reference_node_parents(x)))
-
-    # Warn the user when we're taking the highest reference node
-    if len(references) > 1:
-        if not log:
-            log = Logger.get_logger(__name__)
-
-        log.warning("More than one reference node found in "
-                    "container, using highest reference node: "
-                    "%s (in: %s)", highest, list(references))
-
-    return highest
+# Backwards compatibility: these functions has been moved to lib.
+def get_reference_node(*args, **kwargs):
+    """
+    Deprecated:
+        This function was moved and will be removed in 3.16.x.
+    """
+    msg = "Function 'get_reference_node' has been moved."
+    log.warning(msg)
+    cmds.warning(msg)
+    return lib.get_reference_node(*args, **kwargs)
 
 
-def get_reference_node_parents(ref):
-    """Return all parent reference nodes of reference node
-
-    Args:
-        ref (str): reference node.
-
-    Returns:
-        list: The upstream parent reference nodes.
-
-    """
-    parent = cmds.referenceQuery(ref,
-                                 referenceNode=True,
-                                 parent=True)
-    parents = []
-    while parent:
-        parents.append(parent)
-        parent = cmds.referenceQuery(parent,
-                                     referenceNode=True,
-                                     parent=True)
-    return parents
+def get_reference_node_parents(*args, **kwargs):
+    """
+    Deprecated:
+        This function was moved and will be removed in 3.16.x.
+    """
+    msg = "Function 'get_reference_node_parents' has been moved."
+    log.warning(msg)
+    cmds.warning(msg)
+    return lib.get_reference_node_parents(*args, **kwargs)
 
 
 class Creator(LegacyCreator):
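The hunk above keeps old call sites working by warning and delegating to the relocated function. That deprecation-shim pattern can be sketched generically; the function names here are illustrative, not the OpenPype API:

```python
import logging
import warnings

logging.basicConfig()
log = logging.getLogger(__name__)


def new_implementation(x):
    """The relocated function (stand-in for lib.get_reference_node)."""
    return x * 2


def old_function(*args, **kwargs):
    """
    Deprecated:
        This function was moved; call `new_implementation` instead.
    """
    msg = "Function 'old_function' has been moved."
    log.warning(msg)
    warnings.warn(msg, DeprecationWarning)
    # Keep old call sites working by delegating to the new location
    return new_implementation(*args, **kwargs)


print(old_function(21))  # → 42, via the new implementation
```

The shim accepts `*args, **kwargs` so it never has to track the new signature; any future signature change in the new location flows through untouched.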
@@ -598,7 +560,7 @@ class ReferenceLoader(Loader):
         if not nodes:
             return
 
-        ref_node = get_reference_node(nodes, self.log)
+        ref_node = lib.get_reference_node(nodes, self.log)
         container = containerise(
             name=name,
             namespace=namespace,
@@ -627,7 +589,7 @@ class ReferenceLoader(Loader):
 
         # Get reference node from container members
         members = get_container_members(node)
-        reference_node = get_reference_node(members, self.log)
+        reference_node = lib.get_reference_node(members, self.log)
         namespace = cmds.referenceQuery(reference_node, namespace=True)
 
         file_type = {
@@ -775,7 +737,7 @@ class ReferenceLoader(Loader):
 
         # Assume asset has been referenced
         members = cmds.sets(node, query=True)
-        reference_node = get_reference_node(members, self.log)
+        reference_node = lib.get_reference_node(members, self.log)
 
         assert reference_node, ("Imported container not supported; "
                                 "container must be referenced.")
@@ -14,7 +14,7 @@ from openpype.tools.workfile_template_build import (
     WorkfileBuildPlaceholderDialog,
 )
 
-from .lib import read, imprint, get_main_window
+from .lib import read, imprint, get_reference_node, get_main_window
 
 PLACEHOLDER_SET = "PLACEHOLDERS_SET"
@@ -243,15 +243,19 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
     def get_placeholder_options(self, options=None):
         return self.get_load_plugin_options(options)
 
-    def cleanup_placeholder(self, placeholder, failed):
+    def post_placeholder_process(self, placeholder, failed):
         """Hide placeholder, add them to placeholder set
         """
-        node = placeholder._scene_identifier
+        node = placeholder.scene_identifier
 
         cmds.sets(node, addElement=PLACEHOLDER_SET)
         cmds.hide(node)
         cmds.setAttr(node + ".hiddenInOutliner", True)
 
+    def delete_placeholder(self, placeholder):
+        """Remove placeholder if building was successful"""
+        cmds.delete(placeholder.scene_identifier)
+
     def load_succeed(self, placeholder, container):
         self._parent_in_hierarchy(placeholder, container)
@@ -268,9 +272,19 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
             return
 
         roots = cmds.sets(container, q=True)
+        ref_node = get_reference_node(roots)
        nodes_to_parent = []
         for root in roots:
+            if ref_node:
+                ref_root = cmds.referenceQuery(root, nodes=True)[0]
+                ref_root = (
+                    cmds.listRelatives(ref_root, parent=True, path=True) or
+                    [ref_root]
+                )
+                nodes_to_parent.extend(ref_root)
+                continue
             if root.endswith("_RN"):
+                # Backwards compatibility for hardcoded reference names.
                 refRoot = cmds.referenceQuery(root, n=True)[0]
                 refRoot = cmds.listRelatives(refRoot, parent=True) or [refRoot]
                 nodes_to_parent.extend(refRoot)
@@ -287,10 +301,17 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
             matrix=True,
             worldSpace=True
         )
+        scene_parent = cmds.listRelatives(
+            placeholder.scene_identifier, parent=True, fullPath=True
+        )
         for node in set(nodes_to_parent):
             cmds.reorder(node, front=True)
             cmds.reorder(node, relative=placeholder.data["index"])
             cmds.xform(node, matrix=placeholder_form, ws=True)
+            if scene_parent:
+                cmds.parent(node, scene_parent)
+            else:
+                cmds.parent(node, world=True)
 
         holding_sets = cmds.listSets(object=placeholder.scene_identifier)
         if not holding_sets:
@@ -37,7 +37,7 @@ class CreateLook(plugin.MayaCreator):
                     label="Convert textures to .rstex",
                     tooltip="Whether to generate Redshift .rstex files for "
                             "your textures",
-                    default=self.make_tx),
+                    default=self.rs_tex),
             BoolDef("forceCopy",
                     label="Force Copy",
                     tooltip="Enable users to force a copy instead of hardlink."
@@ -136,7 +136,7 @@ class CreateReview(plugin.MayaCreator):
                     default=True),
             EnumDef("displayLights",
                     label="Display Lights",
-                    items=lib.DISPLAY_LIGHTS_LABELS),
+                    items=lib.DISPLAY_LIGHTS_ENUM),
         ])
 
         return defs
@@ -42,37 +42,52 @@ class CollectNewInstances(pyblish.api.InstancePlugin):
         instance.data.update(creator_attributes)
 
         members = cmds.sets(objset, query=True) or []
-        if not members:
-            self.log.warning("Empty instance: \"%s\" " % objset)
-        else:
+        if members:
             # Collect members
             members = cmds.ls(members, long=True) or []
 
             dag_members = cmds.ls(members, type="dagNode", long=True)
             children = get_all_children(dag_members)
             children = cmds.ls(children, noIntermediate=True, long=True)
-            parents = []
-            if creator_attributes.get("includeParentHierarchy", True):
-                # If `includeParentHierarchy` then include the parents
-                # so they will also be picked up in the instance by validators
-                parents = self.get_all_parents(members)
+            parents = (
+                self.get_all_parents(members)
+                if creator_attributes.get("includeParentHierarchy", True)
+                else []
+            )
             members_hierarchy = list(set(members + children + parents))
 
             instance[:] = members_hierarchy
+        elif instance.data["family"] != "workfile":
+            self.log.warning("Empty instance: \"%s\" " % objset)
 
+        # Store the exact members of the object set
+        instance.data["setMembers"] = members
 
         # TODO: This might make more sense as a separate collector
-        # Collect frameStartHandle and frameEndHandle if frames present
-        if "frameStart" in instance.data:
-            handle_start = instance.data.get("handleStart", 0)
-            frame_start_handle = instance.data["frameStart"] - handle_start
-            instance.data["frameStartHandle"] = frame_start_handle
-        if "frameEnd" in instance.data:
-            handle_end = instance.data.get("handleEnd", 0)
-            frame_end_handle = instance.data["frameEnd"] + handle_end
-            instance.data["frameEndHandle"] = frame_end_handle
+        # Convert frame values to integers
+        for attr_name in (
+            "handleStart", "handleEnd", "frameStart", "frameEnd",
+        ):
+            value = instance.data.get(attr_name)
+            if value is not None:
+                instance.data[attr_name] = int(value)
+
+        # Append start frame and end frame to label if present
+        if "frameStart" in instance.data and "frameEnd" in instance.data:
+            # Take handles from context if not set locally on the instance
+            for key in ["handleStart", "handleEnd"]:
+                if key not in instance.data:
+                    value = instance.context.data[key]
+                    if value is not None:
+                        value = int(value)
+                    instance.data[key] = value
+
+            instance.data["frameStartHandle"] = int(
+                instance.data["frameStart"] - instance.data["handleStart"]
+            )
+            instance.data["frameEndHandle"] = int(
+                instance.data["frameEnd"] + instance.data["handleEnd"]
+            )
 
     def get_all_parents(self, nodes):
         """Get all parents by using string operations (optimization)
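The frame-handle bookkeeping in the collector above is pure dict arithmetic and can be exercised outside Maya and pyblish. In this sketch the plain dicts are hypothetical stand-ins for `instance.data` and `instance.context.data`:

```python
def collect_frame_handles(data, context):
    """Normalize frame values to int and derive the handle-inclusive range."""
    # Convert frame values to integers
    for attr_name in ("handleStart", "handleEnd", "frameStart", "frameEnd"):
        value = data.get(attr_name)
        if value is not None:
            data[attr_name] = int(value)

    if "frameStart" in data and "frameEnd" in data:
        # Take handles from context if not set locally on the instance
        for key in ("handleStart", "handleEnd"):
            if key not in data:
                value = context[key]
                if value is not None:
                    value = int(value)
                data[key] = value

        data["frameStartHandle"] = data["frameStart"] - data["handleStart"]
        data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]
    return data


data = collect_frame_handles(
    {"frameStart": 1001.0, "frameEnd": 1050.0, "handleStart": 5},
    context={"handleStart": 10, "handleEnd": 10},
)
# Local handleStart (5) wins; missing handleEnd falls back to context (10)
print(data["frameStartHandle"], data["frameEndHandle"])  # → 996 1060
```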
@@ -17,10 +17,12 @@ class CollectMayaSceneTime(pyblish.api.InstancePlugin):
 
     def process(self, instance):
         instance.data.update({
-            "frameStart": cmds.playbackOptions(query=True, minTime=True),
-            "frameEnd": cmds.playbackOptions(query=True, maxTime=True),
-            "frameStartHandle": cmds.playbackOptions(query=True,
-                                                     animationStartTime=True),
-            "frameEndHandle": cmds.playbackOptions(query=True,
-                                                   animationEndTime=True)
+            "frameStart": int(
+                cmds.playbackOptions(query=True, minTime=True)),
+            "frameEnd": int(
+                cmds.playbackOptions(query=True, maxTime=True)),
+            "frameStartHandle": int(
+                cmds.playbackOptions(query=True, animationStartTime=True)),
+            "frameEndHandle": int(
+                cmds.playbackOptions(query=True, animationEndTime=True))
         })
@@ -94,7 +94,11 @@ class CollectReview(pyblish.api.InstancePlugin):
         data["frameStart"] = instance.data["frameStart"]
         data["frameEnd"] = instance.data["frameEnd"]
         data['step'] = instance.data['step']
-        data['fps'] = instance.data['fps']
+        # this (with other time related data) should be set on
+        # representations. Once plugins like Extract Review start
+        # using representations, this should be removed from here
+        # as Extract Playblast is already adding fps to representation.
+        data['fps'] = instance.context.data['fps']
         data['review_width'] = instance.data['review_width']
         data['review_height'] = instance.data['review_height']
         data["isolate"] = instance.data["isolate"]

@@ -131,6 +135,11 @@ class CollectReview(pyblish.api.InstancePlugin):
                 instance.data["frameEndHandle"]
             instance.data["displayLights"] = display_lights
             instance.data["burninDataMembers"] = burninDataMembers
+            # this (with other time related data) should be set on
+            # representations. Once plugins like Extract Review start
+            # using representations, this should be removed from here
+            # as Extract Playblast is already adding fps to representation.
+            instance.data["fps"] = instance.context.data["fps"]
 
             # make ftrack publishable
             instance.data.setdefault("families", []).append('ftrack')
@@ -261,8 +261,8 @@ class ExtractPlayblast(publish.Extractor):
             "ext": capture_preset["Codec"]["compression"],
             "files": collected_files,
             "stagingDir": stagingdir,
-            "frameStart": start,
-            "frameEnd": end,
+            "frameStart": int(start),
+            "frameEnd": int(end),
             "fps": fps,
             "tags": tags,
             "camera_name": camera_node_name
@@ -2250,16 +2250,15 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
             log.warning(msg)
             nuke.message(msg)
             return
-        data = self._asset_entity["data"]
-
-        log.debug("__ asset data: `{}`".format(data))
+        asset_data = self._asset_entity["data"]
 
         missing_cols = []
         check_cols = ["fps", "frameStart", "frameEnd",
                       "handleStart", "handleEnd"]
 
        for col in check_cols:
-            if col not in data:
+            if col not in asset_data:
                 missing_cols.append(col)
 
         if len(missing_cols) > 0:

@@ -2271,12 +2270,12 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
             return
 
         # get handles values
-        handle_start = data["handleStart"]
-        handle_end = data["handleEnd"]
+        handle_start = asset_data["handleStart"]
+        handle_end = asset_data["handleEnd"]
 
-        fps = float(data["fps"])
-        frame_start_handle = int(data["frameStart"]) - handle_start
-        frame_end_handle = int(data["frameEnd"]) + handle_end
+        fps = float(asset_data["fps"])
+        frame_start_handle = int(asset_data["frameStart"]) - handle_start
+        frame_end_handle = int(asset_data["frameEnd"]) + handle_end
 
         self._root_node["lock_range"].setValue(False)
         self._root_node["fps"].setValue(fps)

@@ -2284,21 +2283,18 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
         self._root_node["last_frame"].setValue(frame_end_handle)
         self._root_node["lock_range"].setValue(True)
 
-        # setting active viewers
-        try:
-            nuke.frame(int(data["frameStart"]))
-        except Exception as e:
-            log.warning("no viewer in scene: `{}`".format(e))
+        # update node graph so knobs are updated
+        update_node_graph()
 
-        range = '{0}-{1}'.format(
-            int(data["frameStart"]),
-            int(data["frameEnd"])
+        frame_range = '{0}-{1}'.format(
+            int(asset_data["frameStart"]),
+            int(asset_data["frameEnd"])
         )
 
         for node in nuke.allNodes(filter="Viewer"):
-            node['frame_range'].setValue(range)
+            node['frame_range'].setValue(frame_range)
             node['frame_range_lock'].setValue(True)
-            node['frame_range'].setValue(range)
+            node['frame_range'].setValue(frame_range)
             node['frame_range_lock'].setValue(True)
 
         if not ASSIST:
@@ -2320,12 +2316,9 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
         """Set resolution to project resolution."""
         log.info("Resetting resolution")
         project_name = legacy_io.active_project()
-        project = get_project(project_name)
-        asset_name = legacy_io.Session["AVALON_ASSET"]
-        asset = get_asset_by_name(project_name, asset_name)
-        asset_data = asset.get('data', {})
+        asset_data = self._asset_entity["data"]
 
-        data = {
+        format_data = {
             "width": int(asset_data.get(
                 'resolutionWidth',
                 asset_data.get('resolution_width'))),

@@ -2335,37 +2328,40 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
             "pixel_aspect": asset_data.get(
                 'pixelAspect',
                 asset_data.get('pixel_aspect', 1)),
-            "name": project["name"]
+            "name": project_name
         }
 
-        if any(x for x in data.values() if x is None):
+        if any(x_ for x_ in format_data.values() if x_ is None):
             msg = ("Missing set shot attributes in DB."
                    "\nContact your supervisor!."
                    "\n\nWidth: `{width}`"
                    "\nHeight: `{height}`"
-                   "\nPixel Asspect: `{pixel_aspect}`").format(**data)
+                   "\nPixel Aspect: `{pixel_aspect}`").format(**format_data)
             log.error(msg)
             nuke.message(msg)
 
         existing_format = None
         for format in nuke.formats():
-            if data["name"] == format.name():
+            if format_data["name"] == format.name():
                 existing_format = format
                 break
 
         if existing_format:
             # Enforce existing format to be correct.
-            existing_format.setWidth(data["width"])
-            existing_format.setHeight(data["height"])
-            existing_format.setPixelAspect(data["pixel_aspect"])
+            existing_format.setWidth(format_data["width"])
+            existing_format.setHeight(format_data["height"])
+            existing_format.setPixelAspect(format_data["pixel_aspect"])
         else:
-            format_string = self.make_format_string(**data)
+            format_string = self.make_format_string(**format_data)
             log.info("Creating new format: {}".format(format_string))
             nuke.addFormat(format_string)
 
-        nuke.root()["format"].setValue(data["name"])
+        nuke.root()["format"].setValue(format_data["name"])
         log.info("Format is set.")
 
+        # update node graph so knobs are updated
+        update_node_graph()
+
     def make_format_string(self, **kwargs):
         if kwargs.get("r"):
             return (
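The `format_data` sanity check above is plain dict validation, independent of Nuke. A minimal stand-alone sketch; the dict keys follow the hunk, while the helper name and sample values are made up:

```python
def validate_format_data(format_data):
    """Return an error message if any required value is missing, else None."""
    if any(value is None for value in format_data.values()):
        # Mirror the hunk's user-facing message, filled from the same dict
        return ("Missing set shot attributes in DB."
                "\nContact your supervisor!."
                "\n\nWidth: `{width}`"
                "\nHeight: `{height}`"
                "\nPixel Aspect: `{pixel_aspect}`").format(**format_data)
    return None


ok = {"width": 1920, "height": 1080, "pixel_aspect": 1.0, "name": "demo"}
bad = dict(ok, height=None)

print(validate_format_data(ok))                  # None: nothing is missing
print(validate_format_data(bad).splitlines()[0]) # first line of the error
```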
@@ -2484,6 +2480,20 @@ def get_dependent_nodes(nodes):
     return connections_in, connections_out
 
 
+def update_node_graph():
+    # Resetting frame will update knob values
+    try:
+        root_node_lock = nuke.root()["lock_range"].value()
+        nuke.root()["lock_range"].setValue(not root_node_lock)
+        nuke.root()["lock_range"].setValue(root_node_lock)
+
+        current_frame = nuke.frame()
+        nuke.frame(1)
+        nuke.frame(int(current_frame))
+    except Exception as error:
+        log.warning(error)
+
+
 def find_free_space_to_paste_nodes(
     nodes,
     group=nuke.root(),
@@ -2,7 +2,7 @@ import os
 import nuke
 
 from openpype import resources
-from .lib import maintained_selection
+from qtpy import QtWidgets
 
 
 def set_context_favorites(favorites=None):

@@ -55,6 +55,7 @@ def bake_gizmos_recursively(in_group=None):
     Arguments:
         is_group (nuke.Node)[optonal]: group node or all nodes
     """
+    from .lib import maintained_selection
     if in_group is None:
         in_group = nuke.Root()
     # preserve selection after all is done

@@ -129,3 +130,11 @@ def get_colorspace_list(colorspace_knob):
         reduced_clrs.append(clrs)
 
     return reduced_clrs
+
+
+def is_headless():
+    """
+    Returns:
+        bool: headless
+    """
+    return QtWidgets.QApplication.instance() is None
@@ -190,7 +190,7 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
     def get_placeholder_options(self, options=None):
         return self.get_load_plugin_options(options)
 
-    def cleanup_placeholder(self, placeholder, failed):
+    def post_placeholder_process(self, placeholder, failed):
         # deselect all selected nodes
         placeholder_node = nuke.toNode(placeholder.scene_identifier)
 

@@ -604,7 +604,7 @@ class NukePlaceholderCreatePlugin(
     def get_placeholder_options(self, options=None):
         return self.get_create_plugin_options(options)
 
-    def cleanup_placeholder(self, placeholder, failed):
+    def post_placeholder_process(self, placeholder, failed):
         # deselect all selected nodes
         placeholder_node = nuke.toNode(placeholder.scene_identifier)
@@ -1,6 +1,7 @@
 """Host API required Work Files tool"""
 import os
 import nuke
+from .utils import is_headless
 
 
 def file_extensions():

@@ -25,6 +26,12 @@ def open_file(filepath):
     # To remain in the same window, we have to clear the script and read
     # in the contents of the workfile.
     nuke.scriptClear()
+    if not is_headless():
+        autosave = nuke.toNode("preferences")["AutoSaveName"].evaluate()
+        autosave_prmpt = "Autosave detected.\nWould you like to load the autosave file?"  # noqa
+        if os.path.isfile(autosave) and nuke.ask(autosave_prmpt):
+            filepath = autosave
+
     nuke.scriptReadFile(filepath)
     nuke.Root()["name"].setValue(filepath)
     nuke.Root()["project_directory"].setValue(os.path.dirname(filepath))
@@ -15,7 +15,6 @@ log = Logger.get_logger(__name__)
 self = sys.modules[__name__]
 self.project_manager = None
 self.media_storage = None
-self.current_project = None
 
 # OpenPype sequential rename variables
 self.rename_index = 0

@@ -88,10 +87,7 @@ def get_media_storage():
 def get_current_project():
     """Get current project object.
     """
-    if not self.current_project:
-        self.current_project = get_project_manager().GetCurrentProject()
-
-    return self.current_project
+    return get_project_manager().GetCurrentProject()
 
 
 def get_current_timeline(new=False):
@@ -4,18 +4,15 @@ import os
 from openpype.lib import Logger
 from .lib import (
     get_project_manager,
-    get_current_project,
-    set_project_manager_to_folder_name
+    get_current_project
 )
 
 
 log = Logger.get_logger(__name__)
 
-exported_projet_ext = ".drp"
-
 
 def file_extensions():
-    return [exported_projet_ext]
+    return [".drp"]
 
 
 def has_unsaved_changes():

@@ -30,13 +27,17 @@ def save_file(filepath):
     project = get_current_project()
     name = project.GetName()
 
-    if "Untitled Project" not in name:
-        log.info("Saving project: `{}` as '{}'".format(name, file))
-        pm.ExportProject(name, filepath)
-    else:
-        log.info("Creating new project...")
-        pm.CreateProject(fname)
-        pm.ExportProject(name, filepath)
+    response = False
+    if name == "Untitled Project":
+        response = pm.CreateProject(fname)
+        log.info("New project created: {}".format(response))
+        pm.SaveProject()
+    elif name != fname:
+        response = project.SetName(fname)
+        log.info("Project renamed: {}".format(response))
+
+    exported = pm.ExportProject(fname, filepath)
+    log.info("Project exported: {}".format(exported))
 
 
 def open_file(filepath):

@@ -57,10 +58,8 @@ def open_file(filepath):
 
     file = os.path.basename(filepath)
     fname, _ = os.path.splitext(file)
-    dname, _ = fname.split("_v")
 
     try:
-        if not set_project_manager_to_folder_name(dname):
-            raise
+        # load project from input path
         project = pm.LoadProject(fname)
         log.info(f"Project {project.GetName()} opened...")

@@ -79,14 +78,18 @@ def open_file(filepath):
 
 def current_file():
     pm = get_project_manager()
-    current_dir = os.getenv("AVALON_WORKDIR")
+    file_ext = file_extensions()[0]
+    workdir_path = os.getenv("AVALON_WORKDIR")
     project = pm.GetCurrentProject()
-    name = project.GetName()
-    fname = name + exported_projet_ext
-    current_file = os.path.join(current_dir, fname)
-    if not current_file:
-        return None
-    return os.path.normpath(current_file)
+    project_name = project.GetName()
+    file_name = project_name + file_ext
+
+    # create current file path
+    current_file_path = os.path.join(workdir_path, file_name)
+
+    # return current file path if it exists
+    if os.path.exists(current_file_path):
+        return os.path.normpath(current_file_path)
 
 
 def work_root(session):
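The reworked `current_file` above is plain path arithmetic plus an existence check, so it can be sketched without Resolve's project manager. The project name and workdir below are made up, and the function takes them as parameters instead of reading the Resolve API and `AVALON_WORKDIR`:

```python
import os
import tempfile


def current_file(workdir_path, project_name, file_ext=".drp"):
    """Build the expected workfile path; return it only if it exists."""
    file_name = project_name + file_ext
    current_file_path = os.path.join(workdir_path, file_name)
    # Return the path only when the file is actually on disk,
    # mirroring the os.path.exists guard in the hunk above
    if os.path.exists(current_file_path):
        return os.path.normpath(current_file_path)
    return None


with tempfile.TemporaryDirectory() as workdir:
    print(current_file(workdir, "shot010_v001"))  # None: nothing saved yet
    open(os.path.join(workdir, "shot010_v001.drp"), "w").close()
    print(current_file(workdir, "shot010_v001"))  # now the saved .drp path
```

Returning `None` for a missing file (rather than the joined path) is the behavior change the diff introduces: the old code returned the path whether or not a workfile had ever been exported.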
@@ -1,17 +1,13 @@
-import os
-
 from openpype.lib import PreLaunchHook
-import openpype.hosts.resolve
 
 
-class ResolveLaunchLastWorkfile(PreLaunchHook):
+class PreLaunchResolveLastWorkfile(PreLaunchHook):
     """Special hook to open last workfile for Resolve.
 
     Checks 'start_last_workfile', if set to False, it will not open last
     workfile. This property is set explicitly in Launcher.
     """
 
     # Execute after workfile template copy
     order = 10
     app_groups = ["resolve"]

@@ -30,16 +26,9 @@ class PreLaunchResolveLastWorkfile(PreLaunchHook):
             return
 
         # Add path to launch environment for the startup script to pick up
-        self.log.info(f"Setting OPENPYPE_RESOLVE_OPEN_ON_LAUNCH to launch "
-                      f"last workfile: {last_workfile}")
+        self.log.info(
+            "Setting OPENPYPE_RESOLVE_OPEN_ON_LAUNCH to launch "
+            f"last workfile: {last_workfile}"
+        )
         key = "OPENPYPE_RESOLVE_OPEN_ON_LAUNCH"
         self.launch_context.env[key] = last_workfile
-
-        # Set the openpype prelaunch startup script path for easy access
-        # in the LUA .scriptlib code
-        op_resolve_root = os.path.dirname(openpype.hosts.resolve.__file__)
-        script_path = os.path.join(op_resolve_root, "startup.py")
-        key = "OPENPYPE_RESOLVE_STARTUP_SCRIPT"
-        self.launch_context.env[key] = script_path
-        self.log.info("Setting OPENPYPE_RESOLVE_STARTUP_SCRIPT to: "
-                      f"{script_path}")
@@ -5,7 +5,7 @@ from openpype.lib import PreLaunchHook
 from openpype.hosts.resolve.utils import setup
 
 
-class ResolvePrelaunch(PreLaunchHook):
+class PreLaunchResolveSetup(PreLaunchHook):
     """
     This hook will set up the Resolve scripting environment as described in
     Resolve's documentation found with the installed application at

openpype/hosts/resolve/hooks/pre_resolve_startup.py (new file, 24 lines)
@@ -0,0 +1,24 @@
+import os
+
+from openpype.lib import PreLaunchHook
+import openpype.hosts.resolve
+
+
+class PreLaunchResolveStartup(PreLaunchHook):
+    """Special hook to configure startup script.
+
+    """
+    order = 11
+    app_groups = ["resolve"]
+
+    def execute(self):
+        # Set the openpype prelaunch startup script path for easy access
+        # in the LUA .scriptlib code
+        op_resolve_root = os.path.dirname(openpype.hosts.resolve.__file__)
+        script_path = os.path.join(op_resolve_root, "startup.py")
+        key = "OPENPYPE_RESOLVE_STARTUP_SCRIPT"
+        self.launch_context.env[key] = script_path
+
+        self.log.info(
+            f"Setting OPENPYPE_RESOLVE_STARTUP_SCRIPT to: {script_path}"
+        )
@@ -10,9 +10,11 @@ This code runs in a separate process to the main Resolve process.
 
 """
 import os
 
 from openpype.lib import Logger
 import openpype.hosts.resolve.api
 
+log = Logger.get_logger(__name__)
+
 
 def ensure_installed_host():
     """Install resolve host with openpype and return the registered host.

@@ -44,17 +46,22 @@ def open_file(path):
 def main():
     # Open last workfile
     workfile_path = os.environ.get("OPENPYPE_RESOLVE_OPEN_ON_LAUNCH")
-    if workfile_path:
+
+    if workfile_path and os.path.exists(workfile_path):
         log.info(f"Opening last workfile: {workfile_path}")
         open_file(workfile_path)
     else:
-        print("No last workfile set to open. Skipping..")
+        log.info("No last workfile set to open. Skipping..")
+
+    # Launch OpenPype menu
+    from openpype.settings import get_project_settings
+    from openpype.pipeline.context_tools import get_current_project_name
+    project_name = get_current_project_name()
+    log.info(f"Current project name in context: {project_name}")
+
+    settings = get_project_settings(project_name)
+    if settings.get("resolve", {}).get("launch_openpype_menu_on_start", True):
+        log.info("Launching OpenPype menu..")
+        launch_menu()
@@ -0,0 +1,5 @@
+#! python3
+from openpype.hosts.resolve.startup import main
+
+if __name__ == "__main__":
+    main()
@@ -0,0 +1,66 @@
+import pyblish.api
+from openpype.pipeline import registered_host
+from openpype.pipeline import publish
+from openpype.lib import EnumDef
+from openpype.pipeline import colorspace
+
+
+class CollectColorspace(pyblish.api.InstancePlugin,
+                        publish.OpenPypePyblishPluginMixin,
+                        publish.ColormanagedPyblishPluginMixin):
+    """Collect explicit user defined representation colorspaces"""
+
+    label = "Choose representation colorspace"
+    order = pyblish.api.CollectorOrder + 0.49
+    hosts = ["traypublisher"]
+
+    colorspace_items = [
+        (None, "Don't override")
+    ]
+    colorspace_attr_show = False
+
+    def process(self, instance):
+        values = self.get_attr_values_from_data(instance.data)
+        colorspace = values.get("colorspace", None)
+        if colorspace is None:
+            return
+
+        self.log.debug("Explicit colorspace set to: {}".format(colorspace))
+
+        context = instance.context
+        for repre in instance.data.get("representations", {}):
+            self.set_representation_colorspace(
+                representation=repre,
+                context=context,
+                colorspace=colorspace
+            )
+
+    @classmethod
+    def apply_settings(cls, project_settings):
+        host = registered_host()
+        host_name = host.name
+        project_name = host.get_current_project_name()
+        config_data = colorspace.get_imageio_config(
+            project_name, host_name,
+            project_settings=project_settings
+        )
+
+        if config_data:
+            filepath = config_data["path"]
+            config_items = colorspace.get_ocio_config_colorspaces(filepath)
+            cls.colorspace_items.extend((
+                (name, name) for name in config_items.keys()
+            ))
+            cls.colorspace_attr_show = True
+
+    @classmethod
+    def get_attribute_defs(cls):
+        return [
+            EnumDef(
+                "colorspace",
+                cls.colorspace_items,
+                default="Don't override",
+                label="Override Colorspace",
+                hidden=not cls.colorspace_attr_show
+            )
+        ]
@@ -0,0 +1,53 @@
import pyblish.api

from openpype.pipeline import (
    publish,
    PublishValidationError
)

from openpype.pipeline.colorspace import (
    get_ocio_config_colorspaces
)


class ValidateColorspace(pyblish.api.InstancePlugin,
                         publish.OpenPypePyblishPluginMixin,
                         publish.ColormanagedPyblishPluginMixin):
    """Validate representation colorspaces"""

    label = "Validate representation colorspace"
    order = pyblish.api.ValidatorOrder
    hosts = ["traypublisher"]

    def process(self, instance):

        config_colorspaces = {}  # cache of colorspaces per config path
        for repre in instance.data.get("representations", {}):

            colorspace_data = repre.get("colorspaceData", {})
            if not colorspace_data:
                # Nothing to validate
                continue

            config_path = colorspace_data["config"]["path"]
            if config_path not in config_colorspaces:
                colorspaces = get_ocio_config_colorspaces(config_path)
                config_colorspaces[config_path] = set(colorspaces)

            colorspace = colorspace_data["colorspace"]
            self.log.debug(
                f"Validating representation '{repre['name']}' "
                f"colorspace '{colorspace}'"
            )
            if colorspace not in config_colorspaces[config_path]:
                message = (
                    f"Representation '{repre['name']}' colorspace "
                    f"'{colorspace}' does not exist in OCIO config: "
                    f"{config_path}"
                )

                raise PublishValidationError(
                    title="Representation colorspace",
                    message=message,
                    description=message
                )
@@ -167,7 +167,7 @@ def filter_profiles(profiles_data, key_values, keys_order=None, logger=None):
        for item in key_values.items()
    ])

-    logger.info(
+    logger.debug(
        "Looking for matching profile for: {}".format(log_parts)
    )

@@ -209,19 +209,19 @@ def filter_profiles(profiles_data, key_values, keys_order=None, logger=None):
        matching_profiles.append((profile, profile_scores))

    if not matching_profiles:
-        logger.info(
+        logger.debug(
            "None of profiles match your setup. {}".format(log_parts)
        )
        return None

    if len(matching_profiles) > 1:
-        logger.info(
+        logger.debug(
            "More than one profile match your setup. {}".format(log_parts)
        )

        profile = _profile_exclusion(matching_profiles, logger)
        if profile:
-            logger.info(
+            logger.debug(
                "Profile selected: {}".format(profile)
            )
            return profile
@@ -71,7 +71,7 @@ class CollectFtrackFamily(pyblish.api.InstancePlugin):
            if "ftrack" not in families:
                families.append("ftrack")

-        self.log.info("{} 'ftrack' family for instance with '{}'".format(
+        self.log.debug("{} 'ftrack' family for instance with '{}'".format(
            result_str, family
        ))
@@ -75,7 +75,7 @@ class CollectFtrackTaskStatuses(pyblish.api.ContextPlugin):
        }
        context.data["ftrackTaskStatuses"] = task_statuses_by_type_id
        context.data["ftrackStatusByTaskId"] = {}
-        self.log.info("Collected ftrack task statuses.")
+        self.log.debug("Collected ftrack task statuses.")


class IntegrateFtrackStatusBase(pyblish.api.InstancePlugin):

@@ -116,7 +116,7 @@ class IntegrateFtrackStatusBase(pyblish.api.InstancePlugin):
        profiles = self.get_status_profiles()
        if not profiles:
            project_name = context.data["projectName"]
-            self.log.info((
+            self.log.debug((
                "Status profiles are not filled for project \"{}\". Skipping"
            ).format(project_name))
            return

@@ -124,7 +124,7 @@ class IntegrateFtrackStatusBase(pyblish.api.InstancePlugin):
        # Task statuses were not collected -> skip
        task_statuses_by_type_id = context.data.get("ftrackTaskStatuses")
        if not task_statuses_by_type_id:
-            self.log.info(
+            self.log.debug(
                "Ftrack task statuses are not collected. Skipping.")
            return

@@ -364,12 +364,12 @@ class IntegrateFtrackTaskStatus(pyblish.api.ContextPlugin):
    def process(self, context):
        task_statuses_by_type_id = context.data.get("ftrackTaskStatuses")
        if not task_statuses_by_type_id:
-            self.log.info("Ftrack task statuses are not collected. Skipping.")
+            self.log.debug("Ftrack task statuses are not collected. Skipping.")
            return

        status_by_task_id = self._get_status_by_task_id(context)
        if not status_by_task_id:
-            self.log.info("No statuses to set. Skipping.")
+            self.log.debug("No statuses to set. Skipping.")
            return

        ftrack_session = context.data["ftrackSession"]
@@ -240,7 +240,7 @@ def get_data_subprocess(config_path, data_type):
def compatible_python():
    """Only 3.9 or higher can directly use PyOpenColorIO in ocio_wrapper"""
    compatible = False
-    if sys.version[0] == 3 and sys.version[1] >= 9:
+    if sys.version_info.major == 3 and sys.version_info.minor >= 9:
        compatible = True
    return compatible
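The hunk above fixes a real bug rather than just style: `sys.version` is a human-readable *string*, so indexing it yields characters, and `sys.version[0] == 3` can never be true. A quick standalone sketch (not repository code) demonstrating the difference:

```python
import sys

# sys.version is a string like "3.11.4 (main, ...)"; sys.version[0] is the
# character "3", which never equals the integer 3, so the old check was
# always False. (The second clause would even raise TypeError comparing
# str to int, but short-circuiting prevents it from being evaluated.)
old_check = sys.version[0] == 3 and sys.version[1] >= 9

# sys.version_info is a named tuple of integers and compares correctly.
new_check = sys.version_info.major == 3 and sys.version_info.minor >= 9

print(old_check, new_check)
```

On any Python 3.9+ interpreter, `old_check` is `False` while `new_check` is `True`, which is exactly why the plugin previously always fell back to the subprocess path.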
@@ -233,36 +233,40 @@ class RepairAction(pyblish.api.Action):
            raise RuntimeError("Plug-in does not have repair method.")

        # Get the errored instances
-        self.log.info("Finding failed instances..")
+        self.log.debug("Finding failed instances..")
        errored_instances = get_errored_instances_from_context(context)

        # Apply pyblish.logic to get the instances for the plug-in
        instances = pyblish.api.instances_by_plugin(errored_instances, plugin)
        for instance in instances:
+            self.log.debug(
+                "Attempting repair for instance: {} ...".format(instance)
+            )
            plugin.repair(instance)


class RepairContextAction(pyblish.api.Action):
    """Repairs the action

-    To process the repairing this requires a static `repair(instance)` method
+    To process the repairing this requires a static `repair(context)` method
    is available on the plugin.
    """

    label = "Repair"
    on = "failed"  # This action is only available on a failed plug-in
    icon = "wrench"  # Icon from Awesome Icon

    def process(self, context, plugin):
        if not hasattr(plugin, "repair"):
            raise RuntimeError("Plug-in does not have repair method.")

        # Get the failed instances
-        self.log.info("Finding failed instances..")
+        self.log.debug("Finding failed plug-ins..")
        failed_plugins = get_errored_plugins_from_context(context)

        # Apply pyblish.logic to get the instances for the plug-in
        if plugin in failed_plugins:
-            self.log.info("Attempting fix ...")
+            self.log.debug("Attempting repair ...")
            plugin.repair(context)
@@ -478,7 +478,9 @@ class AbstractTemplateBuilder(object):
        create_first_version = template_preset["create_first_version"]

        # check if first version is created
-        created_version_workfile = self.create_first_workfile_version()
+        created_version_workfile = False
+        if create_first_version:
+            created_version_workfile = self.create_first_workfile_version()

        # if first version is created, import template
        # and populate placeholders
@@ -1564,7 +1566,16 @@ class PlaceholderLoadMixin(object):
        else:
            failed = False
            self.load_succeed(placeholder, container)
-        self.cleanup_placeholder(placeholder, failed)
+        self.post_placeholder_process(placeholder, failed)
+
+        if failed:
+            self.log.debug(
+                "Placeholder cleanup skipped due to failed placeholder "
+                "population."
+            )
+            return
+        if not placeholder.data.get("keep_placeholder", True):
+            self.delete_placeholder(placeholder)

    def load_failed(self, placeholder, representation):
        if hasattr(placeholder, "load_failed"):
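The new post-population flow above can be sketched in isolation. `Placeholder` below is a hypothetical stand-in for the real placeholder item (only its `data` dict matters here), not the repository class:

```python
# Minimal sketch of the population flow introduced above: failed
# placeholders are left in place for debugging; successful ones are
# deleted only when they opted out of being kept.
class Placeholder:
    def __init__(self, keep=True):
        self.data = {"keep_placeholder": keep}


def finish_population(placeholder, failed, deleted):
    if failed:
        # Skip cleanup entirely when population failed
        return
    if not placeholder.data.get("keep_placeholder", True):
        deleted.append(placeholder)


deleted = []
finish_population(Placeholder(keep=True), failed=False, deleted=deleted)
finish_population(Placeholder(keep=False), failed=False, deleted=deleted)
finish_population(Placeholder(keep=False), failed=True, deleted=deleted)
print(len(deleted))  # only the successful, non-kept placeholder is deleted
```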
@@ -1574,7 +1585,7 @@ class PlaceholderLoadMixin(object):
        if hasattr(placeholder, "load_succeed"):
            placeholder.load_succeed(container)

-    def cleanup_placeholder(self, placeholder, failed):
+    def post_placeholder_process(self, placeholder, failed):
        """Cleanup placeholder after load of single representation.

        Can be called multiple times during placeholder item populating and is
@@ -1588,6 +1599,10 @@ class PlaceholderLoadMixin(object):

        pass

+    def delete_placeholder(self, placeholder, failed):
+        """Called when all item population is done."""
+        self.log.debug("Clean up of placeholder is not implemented.")
+

class PlaceholderCreateMixin(object):
    """Mixin prepared for creating placeholder plugins.
@@ -1673,12 +1688,14 @@ class PlaceholderCreateMixin(object):
            )
        ]

-    def populate_create_placeholder(self, placeholder):
+    def populate_create_placeholder(self, placeholder, pre_create_data=None):
        """Create placeholder is going to create matching publishable instance.

        Args:
            placeholder (PlaceholderItem): Placeholder item with information
                about requested publishable instance.
+            pre_create_data (dict): dictionary of configuration from Creator
+                configuration in UI
        """

        legacy_create = self.builder.use_legacy_creators
@@ -1736,7 +1753,8 @@ class PlaceholderCreateMixin(object):
                creator_plugin.identifier,
                create_variant,
                asset_doc,
-                task_name=task_name
+                task_name=task_name,
+                pre_create_data=pre_create_data
            )

        except:  # noqa: E722
@@ -1747,7 +1765,7 @@ class PlaceholderCreateMixin(object):
            failed = False
            self.create_succeed(placeholder, creator_instance)

-        self.cleanup_placeholder(placeholder, failed)
+        self.post_placeholder_process(placeholder, failed)

    def create_failed(self, placeholder, creator_data):
        if hasattr(placeholder, "create_failed"):
@@ -1757,7 +1775,7 @@ class PlaceholderCreateMixin(object):
        if hasattr(placeholder, "create_succeed"):
            placeholder.create_succeed(creator_instance)

-    def cleanup_placeholder(self, placeholder, failed):
+    def post_placeholder_process(self, placeholder, failed):
        """Cleanup placeholder after load of single representation.

        Can be called multiple times during placeholder item populating and is
@@ -86,8 +86,8 @@ class CleanUp(pyblish.api.InstancePlugin):
            return

        if not os.path.normpath(staging_dir).startswith(temp_root):
-            self.log.info("Skipping cleanup. Staging directory is not in the "
-                          "temp folder: %s" % staging_dir)
+            self.log.debug("Skipping cleanup. Staging directory is not in the "
+                           "temp folder: %s" % staging_dir)
            return

        if not os.path.exists(staging_dir):
@@ -442,7 +442,7 @@ class ModifiedBurnins(ffmpeg_burnins.Burnins):
        with tempfile.NamedTemporaryFile(mode="w", delete=False) as temp:
            temp.write(filter_string)
            filters_path = temp.name
-        filters = '-filter_script "{}"'.format(filters_path)
+        filters = '-filter_script:v "{}"'.format(filters_path)
        print("Filters:", filter_string)
        self.cleanup_paths.append(filters_path)
@@ -32,10 +32,18 @@
            "skip_timelines_check": [
                ".*"
            ]
        },
+        "ValidateContainers": {
+            "enabled": true,
+            "optional": true,
+            "active": true
+        }
    },
    "workfile_builder": {
        "create_first_version": false,
        "custom_templates": []
    },
+    "templated_workfile_build": {
+        "profiles": []
+    }
}
@@ -107,6 +107,17 @@
                    "label": "Skip Timeline Check for Tasks"
                }
            ]
        },
+        {
+            "type": "schema_template",
+            "name": "template_publish_plugin",
+            "template_data": [
+                {
+                    "docstring": "Check if loaded container in scene are latest versions.",
+                    "key": "ValidateContainers",
+                    "label": "ValidateContainers"
+                }
+            ]
+        }
    ]
},
@@ -117,6 +128,10 @@
            "workfile_builder/builder_on_start",
            "workfile_builder/profiles"
        ]
    },
+    {
+        "type": "schema",
+        "name": "schema_templated_workfile_build"
+    }
]
}
@@ -443,29 +443,29 @@ class PublishPluginsProxy:

    def __init__(self, plugins):
        plugins_by_id = {}
-        actions_by_id = {}
+        actions_by_plugin_id = {}
        action_ids_by_plugin_id = {}
        for plugin in plugins:
            plugin_id = plugin.id
            plugins_by_id[plugin_id] = plugin

            action_ids = []
+            actions_by_id = {}
            action_ids_by_plugin_id[plugin_id] = action_ids
+            actions_by_plugin_id[plugin_id] = actions_by_id

            actions = getattr(plugin, "actions", None) or []
            for action in actions:
                action_id = action.id
                if action_id in actions_by_id:
                    continue
                action_ids.append(action_id)
                actions_by_id[action_id] = action

        self._plugins_by_id = plugins_by_id
-        self._actions_by_id = actions_by_id
+        self._actions_by_plugin_id = actions_by_plugin_id
        self._action_ids_by_plugin_id = action_ids_by_plugin_id

-    def get_action(self, action_id):
-        return self._actions_by_id[action_id]
+    def get_action(self, plugin_id, action_id):
+        return self._actions_by_plugin_id[plugin_id][action_id]

    def get_plugin(self, plugin_id):
        return self._plugins_by_id[plugin_id]
@@ -497,7 +497,9 @@ class PublishPluginsProxy:
        """

        return [
-            self._create_action_item(self._actions_by_id[action_id], plugin_id)
+            self._create_action_item(
+                self.get_action(plugin_id, action_id), plugin_id
+            )
            for action_id in self._action_ids_by_plugin_id[plugin_id]
        ]
@@ -2308,7 +2310,7 @@ class PublisherController(BasePublisherController):
    def run_action(self, plugin_id, action_id):
        # TODO handle result in UI
        plugin = self._publish_plugins_proxy.get_plugin(plugin_id)
-        action = self._publish_plugins_proxy.get_action(action_id)
+        action = self._publish_plugins_proxy.get_action(plugin_id, action_id)

        result = pyblish.plugin.process(
            plugin, self._publish_context, None, action.id
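The re-keyed lookup in `PublishPluginsProxy` matters because pyblish action ids are class-level attributes, so the same id can appear under several plug-ins, and a flat id-to-action dict silently keeps only the last registration. A minimal sketch with hypothetical stand-in classes (not the real pyblish types):

```python
# Two plug-ins each carry their own action class that happens to use the
# same class-level id ("repair").
class RepairA:
    id = "repair"

class RepairB:
    id = "repair"

class PluginA:
    id = "plugin_a"
    actions = [RepairA]

class PluginB:
    id = "plugin_b"
    actions = [RepairB]

# Flat mapping (old behaviour): the second registration overwrites the first.
flat = {}
for plugin in (PluginA, PluginB):
    for action in plugin.actions:
        flat[action.id] = action

# Nested mapping (new behaviour): actions are resolved per plug-in.
nested = {
    plugin.id: {action.id: action for action in plugin.actions}
    for plugin in (PluginA, PluginB)
}

print(flat["repair"] is RepairA)                # False -- overwritten
print(nested["plugin_a"]["repair"] is RepairA)  # True
print(nested["plugin_b"]["repair"] is RepairB)  # True
```

This is why `get_action` now takes both `plugin_id` and `action_id`: only the pair identifies the action unambiguously.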
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
-__version__ = "3.15.11-nightly.4"
+__version__ = "3.15.12-nightly.1"
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
-version = "3.15.10" # OpenPype
+version = "3.15.11" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"
@@ -37,3 +37,18 @@ Set regex pattern(s) to look for in a Task name to skip `frameStart`, `frameEnd`
* `Frames Per Task` - number of sequence divisions between individual tasks (chunks),
  making one job on the farm.

## Workfile Builder

An obsolete way to present artists with a template when they start working on a new task.

## Templated Workfile Build Settings

This more advanced approach allows creating elaborate workfile templates with placeholders for loaded items, or for creating a publishable item via a Creator.

The workfile template must be prepared separately via the Tray and in the host; its location can then be set for a combination of:
- `Task types` (a specific template for `animation`, a different one for `layout`, etc.)
- `Task names` (regex supported)

Additional options:
- `Keep placeholders` - should placeholders be deleted once the template gets populated? (In most cases, yes.)
- `Create first version` - whether the template should be used and populated automatically for version `v001` of a workfile
@@ -11,7 +11,6 @@ sidebar_label: AfterEffects
- [Load](artist_tools_loader)
- [Publish](artist_tools_publisher)
- [Manage](artist_tools_inventory)
-- [Subset Manager](artist_tools_subset_manager)

## Setup
@@ -176,3 +175,43 @@ Both previous settings are triggered at same time.
### Experimental tools

Currently empty. Could contain special tools available only for specific hosts for early access testing.


### Workfile builder section

The next three menu items handle creation and usage of the advanced workfile builder. It can be used to prepare a workfile template with placeholders for loaded items and publishable items.
This allows building a template with layers for guides, any text layers, and a layer for image content, which gets dynamically populated when the template is used and populated by an artist.

#### Create placeholder

Load or Create placeholders can be used to provide dynamic content or preparation steps for publishing.

##### Load Placeholder

This provides a way to load a particular representation of a particular subset, for a particular asset and task.
E.g. whenever artists start an `animation` task, they want to load the `png` representation of the `imageMain` subset of the current context's asset.

![Load placeholder](assets/aftereffects_load_placeholder.png)

##### Create Placeholder

This allows creating a new composition; when populated, it is enhanced with metadata for the publish instance that will be created.

![Create placeholder](assets/aftereffects_create_placeholder.png)


### Example

This is how it looks when `Load placeholder` was used to create a `LOADERPLACEHOLDER` item, which is added as a layer into a `CREATEPLACEHOLDER` composition created by `Create placeholder`.

![Prepared template](assets/aftereffects_prepared_template.png)

The load placeholder was configured to load the `image` family, the subset named `imageMain`, and the `png` representation.

Any additional layers can be added into the composition; when `Build Workfile from template` is used by an artist, load placeholders are replaced by the loaded item(s).

![Populated template](assets/aftereffects_populated_template.png)

The same prepared template can be used for any asset; in each case the correct asset according to context is used automatically.
BIN  website/docs/assets/aftereffects_create_placeholder.png  (new file, binary not shown, 20 KiB)
BIN  website/docs/assets/aftereffects_load_placeholder.png  (new file, binary not shown, 51 KiB)
BIN  website/docs/assets/aftereffects_populated_template.png  (new file, binary not shown, 33 KiB)
BIN  website/docs/assets/aftereffects_prepared_template.png  (new file, binary not shown, 31 KiB)