Merge develop

This commit is contained in:
Petr Kalis 2023-05-30 11:45:58 +02:00
commit 81cca7fc76
290 changed files with 5398 additions and 7582 deletions

View file

@ -35,6 +35,11 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
- 3.15.9-nightly.1
- 3.15.8
- 3.15.8-nightly.3
- 3.15.8-nightly.2
- 3.15.8-nightly.1
- 3.15.7
- 3.15.7-nightly.3
- 3.15.7-nightly.2
@ -130,11 +135,6 @@ body:
- 3.14.2-nightly.5
- 3.14.2-nightly.4
- 3.14.2-nightly.3
- 3.14.2-nightly.2
- 3.14.2-nightly.1
- 3.14.1
- 3.14.1-nightly.4
- 3.14.1-nightly.3
validations:
required: true
- type: dropdown

.gitmodules (vendored): 5 changed lines
View file

@ -4,4 +4,7 @@
[submodule "tools/modules/powershell/PSWriteColor"]
path = tools/modules/powershell/PSWriteColor
url = https://github.com/EvotecIT/PSWriteColor.git
url = https://github.com/EvotecIT/PSWriteColor.git
[submodule "openpype/hosts/unreal/integration"]
path = openpype/hosts/unreal/integration
url = https://github.com/ynput/ayon-unreal-plugin.git

View file

@ -1,6 +1,304 @@
# Changelog
## [3.15.8](https://github.com/ynput/OpenPype/tree/3.15.8)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.7...3.15.8)
### **🆕 New features**
<details>
<summary>Publisher: Show instances in report page <a href="https://github.com/ynput/OpenPype/pull/4915">#4915</a></summary>
Show publish instances in the report page. Also added a basic log view with logs grouped by instance. The validation error detail now has two columns: one with error details, the second with logs. The crashed state shows fast access to report action buttons. Success shows only logs. The publish frame is shrunk automatically when publishing stops.
___
</details>
<details>
<summary>Fusion - Loader plugins updates <a href="https://github.com/ynput/OpenPype/pull/4920">#4920</a></summary>
Updates to some Fusion loader plugins: the sequence loader can now load footage from the image and online families; the FBX loader can now import all formats Fusion's FBX node can read; and you can now import the content of another workfile into your current comp with the workfile loader.
___
</details>
<details>
<summary>Fusion: deadline farm rendering <a href="https://github.com/ynput/OpenPype/pull/4955">#4955</a></summary>
Enables Deadline farm rendering for Fusion.
___
</details>
<details>
<summary>AfterEffects: set frame range and resolution <a href="https://github.com/ynput/OpenPype/pull/4983">#4983</a></summary>
Frame information (frame start, duration, fps) and resolution (width and height) are applied to the selected composition from the asset management system (Ftrack or DB) automatically when a published instance is created. It is also possible to explicitly propagate both values from the DB to the selected composition via newly added menu buttons.
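
A minimal sketch of the value mapping this feature performs, mirroring the `get_asset_settings`/`set_settings` changes further down in this diff; the asset-document keys are the ones shown there, the rest is illustrative.

```python
# Illustrative only: mirrors the mapping used by set_settings() in this diff.
def comp_properties_from_asset(asset_data):
    """Derive composition frame range and resolution from asset data."""
    frame_start = asset_data.get("frameStart", 0)
    frame_end = asset_data.get("frameEnd", 0)
    handle_start = asset_data.get("handleStart", 0)
    handle_end = asset_data.get("handleEnd", 0)

    # Duration includes both handles, as in get_asset_settings().
    duration = (frame_end - frame_start + 1) + handle_start + handle_end

    return {
        "frameStart": frame_start - handle_start,  # start shifted by handle
        "framesDuration": duration,
        "fps": asset_data.get("fps", 0),
        "width": asset_data.get("resolutionWidth", 0),
        "height": asset_data.get("resolutionHeight", 0),
    }


print(comp_properties_from_asset(
    {"frameStart": 1001, "frameEnd": 1100, "handleStart": 10,
     "handleEnd": 10, "fps": 25,
     "resolutionWidth": 1920, "resolutionHeight": 1080}))
```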
___
</details>
<details>
<summary>Publish: Enhance automated publish plugin settings <a href="https://github.com/ynput/OpenPype/pull/4986">#4986</a></summary>
Added a plugin option to define the settings category where a plugin's settings are looked up, and added public helper functions to apply settings: `get_plugin_settings` and `apply_plugin_settings_automatically`.
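
A rough illustration of what "apply plugin settings automatically" means in practice; the helper names come from the PR text, but the attribute lookup below is a simplified assumption, not the actual OpenPype implementation.

```python
# Simplified stand-in for apply_plugin_settings_automatically(); assumption only.
def apply_settings_to_plugin(plugin_cls, plugin_settings):
    """Copy matching keys from a settings dict onto plugin class attributes."""
    for key, value in plugin_settings.items():
        if hasattr(plugin_cls, key):
            setattr(plugin_cls, key, value)


class ValidateFrameRange:  # hypothetical publish plugin
    enabled = True
    optional = False
    active = True


# Settings would be looked up by the plugin's settings category and class name.
apply_settings_to_plugin(ValidateFrameRange, {"enabled": False, "optional": True})
print(ValidateFrameRange.enabled, ValidateFrameRange.optional)  # False True
```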
___
</details>
### **🚀 Enhancements**
<details>
<summary>Load Rig References - Change Rig to Animation in Animation instance <a href="https://github.com/ynput/OpenPype/pull/4877">#4877</a></summary>
We are using the template builder to build an animation scene. All the rig placeholders are imported correctly, but the automatically created animation instances retain the rig family in their names and subsets. In our example, we need animationMain instead of rigMain, because this name will be used in the following steps like lighting. Here is the result we need. I checked, and it's not a template builder problem, because even if I load a rig as a reference, the result is the same. For me, since we are in the animation instance, it makes more sense to have animation instead of rig in the name. The naming is just fine if we use create from the OpenPype menu.
___
</details>
<details>
<summary>Enhancement: Resolve prelaunch code refactoring and update defaults <a href="https://github.com/ynput/OpenPype/pull/4916">#4916</a></summary>
The main reason for this PR is the wrong default settings in `openpype/settings/defaults/system_settings/applications.json` for the Resolve host. The `bin` folder should not be part of the macOS and Linux `RESOLVE_PYTHON3_PATH` variable. The rest of this PR is some code cleanup of the Resolve prelaunch hook to simplify further development. Also added a .gitignore entry for VS Code workspace files.
___
</details>
<details>
<summary>Unreal: 🚚 move Unreal plugin to separate repository <a href="https://github.com/ynput/OpenPype/pull/4980">#4980</a></summary>
To support the Epic Marketplace, the AYON Unreal integration plugins have to move to a separate repository. This replaces the current files with a git submodule, so the change should have no functional impact. The new repository lives here: https://github.com/ynput/ayon-unreal-plugin
___
</details>
<details>
<summary>General: Lib code cleanup <a href="https://github.com/ynput/OpenPype/pull/5003">#5003</a></summary>
Small cleanup of the lib files in openpype.
___
</details>
<details>
<summary>Allow to open with djv by extension instead of representation name <a href="https://github.com/ynput/OpenPype/pull/5004">#5004</a></summary>
Filter the "open in DJV" action by file extension instead of representation name.
___
</details>
<details>
<summary>DJV open action `extensions` as `set` <a href="https://github.com/ynput/OpenPype/pull/5005">#5005</a></summary>
Change `extensions` attribute to `set`.
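
A tiny example of why a `set` fits extension-based filtering; the file names below are made up.

```python
import os

# Membership tests against a set are O(1) and read clearly.
extensions = {".exr", ".dpx", ".mov", ".mp4"}

paths = ["/tmp/shot010_beauty.1001.exr", "/tmp/report.pdf"]  # made-up paths
openable = [p for p in paths if os.path.splitext(p)[1].lower() in extensions]
print(openable)  # only the .exr file remains
```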
___
</details>
<details>
<summary>Nuke: extract thumbnail with multiple reposition nodes <a href="https://github.com/ynput/OpenPype/pull/5011">#5011</a></summary>
Added support for multiple reposition nodes.
___
</details>
<details>
<summary>Enhancement: Improve logging levels and messages for artist facing publish reports <a href="https://github.com/ynput/OpenPype/pull/5018">#5018</a></summary>
Tweak the logging levels and messages to try and only show those logs that an artist should see and could understand. Move anything that's slightly more involved into a "debug" message instead.
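
A small sketch of the intended split between artist-facing and technical messages; the plugin context is implied, and plain `logging` is used here only for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)  # artists typically see INFO and above
log = logging.getLogger("publish.extract_review")

# Artist-facing: short, actionable, understandable without reading code.
log.info("Extracting review for 'renderMain' (frames 1001-1100).")

# Technical detail moves to DEBUG so it only shows up in verbose reports.
log.debug("ffmpeg args: %s", ["-i", "input.%04d.exr", "-c:v", "libx264"])
```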
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Bugfix/frame variable fix <a href="https://github.com/ynput/OpenPype/pull/4978">#4978</a></summary>
Renamed variables to match OpenPype terminology to reduce confusion and add consistency.
___
</details>
<details>
<summary>Global: plugins cleanup plugin will leave beauty rendered files <a href="https://github.com/ynput/OpenPype/pull/4790">#4790</a></summary>
Attempts to explicitly mark more files for cleanup in the intermediate `renders` folder in the work area for farm jobs.
___
</details>
<details>
<summary>Fix: Download last workfile doesn't work if not already downloaded <a href="https://github.com/ynput/OpenPype/pull/4942">#4942</a></summary>
Some optimization condition is messing with the feature: if the published workfile is not already downloaded, it won't download it...
___
</details>
<details>
<summary>Unreal: Fix transform when loading layout to match existing assets <a href="https://github.com/ynput/OpenPype/pull/4972">#4972</a></summary>
Fixed transform when loading layout to match existing assets.
___
</details>
<details>
<summary>fix the bug of fbx loaders in Max <a href="https://github.com/ynput/OpenPype/pull/4977">#4977</a></summary>
Bug fix for the FBX loaders not being able to parent to the CON instances while importing cameras (and models) published from other DCCs such as Maya.
___
</details>
<details>
<summary>AfterEffects: allow returning stub with not saved workfile <a href="https://github.com/ynput/OpenPype/pull/4984">#4984</a></summary>
Allows using the Workfile app to save the first empty workfile.
___
</details>
<details>
<summary>Blender: Fix Alembic loading <a href="https://github.com/ynput/OpenPype/pull/4985">#4985</a></summary>
Fixed problem occurring when trying to load an Alembic model in Blender.
___
</details>
<details>
<summary>Unreal: Addon Py2 compatibility <a href="https://github.com/ynput/OpenPype/pull/4994">#4994</a></summary>
Fixed Python 2 compatibility of the Unreal addon.
___
</details>
<details>
<summary>Nuke: fixed missing files key in representation <a href="https://github.com/ynput/OpenPype/pull/4999">#4999</a></summary>
Fixed an issue with missing keys when the rendering target is set to existing frames. The instance has to be evaluated during validation for missing files.
___
</details>
<details>
<summary>Unreal: Fix the frame range when loading camera <a href="https://github.com/ynput/OpenPype/pull/5002">#5002</a></summary>
The keyframes of the camera, when loaded, were not using the correct frame range.
___
</details>
<details>
<summary>Fusion: fixing frame range targeting <a href="https://github.com/ynput/OpenPype/pull/5013">#5013</a></summary>
Frame range targeting on render instances now follows the configured options.
___
</details>
<details>
<summary>Deadline: fix selection from multiple webservices <a href="https://github.com/ynput/OpenPype/pull/5015">#5015</a></summary>
Multiple different Deadline webservices can be configured. First they must be configured in System Settings, then they can be enabled per project in `project_settings/deadline/deadline_servers`. Only a single webservice can be the target of a publish, though.
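
A hypothetical sketch of resolving the single target webservice from the two settings levels described above; only the `deadline_servers` key comes from the PR text, everything else (names, URLs, structure) is assumed.

```python
# Assumed settings structure, for illustration only.
system_settings = {
    "deadline_urls": {
        "default": "http://deadline-main:8082",
        "remote": "http://deadline-remote:8082",
    }
}
project_settings = {
    "deadline": {"deadline_servers": ["remote"]}  # enabled per project
}


def resolve_deadline_webservice(system_settings, project_settings):
    """Pick the single webservice a publish should target."""
    urls = system_settings["deadline_urls"]
    enabled = project_settings["deadline"]["deadline_servers"]
    # Only one webservice can be the publish target; take the first enabled one.
    name = enabled[0] if enabled else "default"
    return urls.get(name, urls["default"])


print(resolve_deadline_webservice(system_settings, project_settings))
```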
___
</details>
### **Merged pull requests**
<details>
<summary>3dsmax: Refactored publish plugins to use proper implementation of pymxs <a href="https://github.com/ynput/OpenPype/pull/4988">#4988</a></summary>
___
</details>
## [3.15.7](https://github.com/ynput/OpenPype/tree/3.15.7)

View file

@ -14,10 +14,10 @@ AppId={{B9E9DF6A-5BDA-42DD-9F35-C09D564C4D93}
AppName={#MyAppName}
AppVersion={#AppVer}
AppVerName={#MyAppName} version {#AppVer}
AppPublisher=Orbi Tools s.r.o
AppPublisherURL=http://pype.club
AppSupportURL=http://pype.club
AppUpdatesURL=http://pype.club
AppPublisher=Ynput s.r.o
AppPublisherURL=https://ynput.io
AppSupportURL=https://ynput.io
AppUpdatesURL=https://ynput.io
DefaultDirName={autopf}\{#MyAppName}\{#AppVer}
UsePreviousAppDir=no
DisableProgramGroupPage=yes

View file

@ -4,9 +4,8 @@ Anything that isn't defined here is INTERNAL and unreliable for external use.
"""
from .launch_logic import (
from .ws_stub import (
get_stub,
stub,
)
from .pipeline import (
@ -18,7 +17,8 @@ from .pipeline import (
from .lib import (
maintained_selection,
get_extension_manifest_path,
get_asset_settings
get_asset_settings,
set_settings
)
from .plugin import (
@ -27,9 +27,8 @@ from .plugin import (
__all__ = [
# launch_logic
# ws_stub
"get_stub",
"stub",
# pipeline
"ls",
@ -39,6 +38,7 @@ __all__ = [
"maintained_selection",
"get_extension_manifest_path",
"get_asset_settings",
"set_settings",
# plugin
"AfterEffectsLoader"

View file

@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.24"
ExtensionBundleName="openpype" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.25"
ExtensionBundleName="com.openpype.AE.panel" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ExtensionList>
<Extension Id="com.openpype.AE.panel" Version="1.0" />
</ExtensionList>

View file

@ -2,7 +2,7 @@
<html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" href="css/topcoat-desktop-dark.min.css"/>
<link id="hostStyle" rel="stylesheet" href="css/styles.css"/>
@ -25,11 +25,11 @@
<title></title>
<script src="js/libs/jquery-2.0.2.min.js"></script>
<script type=text/javascript>
$(function() {
$("a#workfiles-button").bind("click", function() {
RPC.call('AfterEffects.workfiles_route').then(function (data) {
}, function (error) {
alert(error);
@ -37,7 +37,7 @@
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#loader-button").bind("click", function() {
@ -48,7 +48,7 @@
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#publish-button").bind("click", function() {
@ -59,7 +59,7 @@
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#sceneinventory-button").bind("click", function() {
@ -70,7 +70,40 @@
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#setresolution-button").bind("click", function() {
RPC.call('AfterEffects.setresolution_route').then(function (data) {
}, function (error) {
alert(error);
});
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#setframes-button").bind("click", function() {
RPC.call('AfterEffects.setframes_route').then(function (data) {
}, function (error) {
alert(error);
});
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#setall-button").bind("click", function() {
RPC.call('AfterEffects.setall_route').then(function (data) {
}, function (error) {
alert(error);
});
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#experimental-button").bind("click", function() {
@ -80,25 +113,28 @@
});
});
});
</script>
</script>
</head>
<body class="hostElt">
<div id="content">
<div>
<div id="content">
<div>
<div></div><a href=# id=workfiles-button><button class="hostFontSize">Workfiles...</button></a></div>
<div><a href=# id=loader-button><button class="hostFontSize">Load...</button></a></div>
<div><a href=# id=publish-button><button class="hostFontSize">Publish...</button></a></div>
<div><a href=# id=sceneinventory-button><button class="hostFontSize">Manage...</button></a></div>
<div><a href=# id=setresolution-button><button class="hostFontSize">Set Resolution</button></a></div>
<div><a href=# id=setframes-button><button class="hostFontSize">Set Frame Range</button></a></div>
<div><a href=# id=setall-button><button class="hostFontSize">Apply All Settings</button></a></div>
<div><a href=# id=experimental-button><button class="hostFontSize">Experimental Tools...</button></a></div>
</div>
</div>
</div>
<!-- <script src="js/libs/PlayerDebugMode"></script> -->
<script src="js/libs/wsrpc.js"></script>
<script src="js/libs/loglevel.min.js"></script>
@ -107,6 +143,6 @@
<script src="js/themeManager.js"></script>
<script src="js/main.js"></script>
</body>
</html>
</html>

View file

@ -4,7 +4,7 @@ indent: 4, maxerr: 50 */
var csInterface = new CSInterface();
log.warn("script start");
WSRPC.DEBUG = false;
@ -14,7 +14,7 @@ WSRPC.TRACE = false;
async function startUp(url){
promis = runEvalScript("getEnv('" + url + "')");
var res = await promis;
var res = await promis;
log.warn("res: " + res);
promis = runEvalScript("getEnv('OPENPYPE_DEBUG')");
@ -56,7 +56,7 @@ function get_extension_version(){
}
function main(websocket_url){
// creates connection to 'websocket_url', registers routes
// creates connection to 'websocket_url', registers routes
var default_url = 'ws://localhost:8099/ws/';
if (websocket_url == ''){
@ -66,7 +66,7 @@ function main(websocket_url){
RPC.connect();
log.warn("connected");
log.warn("connected");
RPC.addRoute('AfterEffects.open', function (data) {
log.warn('Server called client route "open":', data);
@ -88,7 +88,7 @@ function main(websocket_url){
});
RPC.addRoute('AfterEffects.get_active_document_name', function (data) {
log.warn('Server called client route ' +
log.warn('Server called client route ' +
'"get_active_document_name":', data);
return runEvalScript("getActiveDocumentName()")
.then(function(result){
@ -98,7 +98,7 @@ function main(websocket_url){
});
RPC.addRoute('AfterEffects.get_active_document_full_name', function (data){
log.warn('Server called client route ' +
log.warn('Server called client route ' +
'"get_active_document_full_name":', data);
return runEvalScript("getActiveDocumentFullName()")
.then(function(result){
@ -118,7 +118,7 @@ function main(websocket_url){
});
});
RPC.addRoute('AfterEffects.get_selected_items', function (data) {
log.warn('Server called client route "get_selected_items":', data);
return runEvalScript("getSelectedItems(" + data.comps + "," +
@ -194,23 +194,25 @@ function main(websocket_url){
});
});
RPC.addRoute('AfterEffects.get_work_area', function (data) {
log.warn('Server called client route "get_work_area":', data);
return runEvalScript("getWorkArea(" + data.item_id + ")")
RPC.addRoute('AfterEffects.get_comp_properties', function (data) {
log.warn('Server called client route "get_comp_properties":', data);
return runEvalScript("getCompProperties(" + data.item_id + ")")
.then(function(result){
log.warn("getWorkArea: " + result);
log.warn("get_comp_properties: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.set_work_area', function (data) {
RPC.addRoute('AfterEffects.set_comp_properties', function (data) {
log.warn('Server called client route "set_work_area":', data);
return runEvalScript("setWorkArea(" + data.item_id + ',' +
return runEvalScript("setCompProperties(" + data.item_id + ',' +
data.start + ',' +
data.duration + ',' +
data.frame_rate + ")")
data.frame_rate + ',' +
data.width + ',' +
data.height + ")")
.then(function(result){
log.warn("getWorkArea: " + result);
log.warn("set_comp_properties: " + result);
return result;
});
});
@ -255,7 +257,7 @@ function main(websocket_url){
RPC.addRoute('AfterEffects.import_background', function (data) {
log.warn('Server called client route "import_background":', data);
return runEvalScript("importBackground(" + data.comp_id + ", " +
return runEvalScript("importBackground(" + data.comp_id + ", " +
"'" + data.comp_name + "', " +
JSON.stringify(data.files) + ")")
.then(function(result){
@ -266,7 +268,7 @@ function main(websocket_url){
RPC.addRoute('AfterEffects.reload_background', function (data) {
log.warn('Server called client route "reload_background":', data);
return runEvalScript("reloadBackground(" + data.comp_id + ", " +
return runEvalScript("reloadBackground(" + data.comp_id + ", " +
"'" + data.comp_name + "', " +
JSON.stringify(data.files) + ")")
.then(function(result){
@ -314,6 +316,16 @@ function main(websocket_url){
log.warn('Server called client route "close":', data);
return runEvalScript("close()");
});
RPC.addRoute('AfterEffects.print_msg', function (data) {
log.warn('Server called client route "print_msg":', data);
var escaped_msg = EscapeStringForJSX(data.msg);
return runEvalScript("printMsg('" + escaped_msg +"')")
.then(function(result){
log.warn("print_msg: " + result);
return result;
});
});
}
/** main entry point **/
@ -323,17 +335,17 @@ startUp("WEBSOCKET_URL");
'use strict';
var csInterface = new CSInterface();
function init() {
themeManager.init();
$("#btn_test").click(function () {
csInterface.evalScript('sayHello()');
});
}
init();
}());

View file

@ -1,7 +1,7 @@
/*jslint vars: true, plusplus: true, devel: true, nomen: true, regexp: true,
indent: 4, maxerr: 50 */
/*global $, Folder*/
#include "../js/libs/json.js";
//@include "../js/libs/json.js"
/* All public API function should return JSON! */
@ -29,13 +29,13 @@ function getEnv(variable){
function getMetadata(){
/**
* Returns payload in 'Label' field of project's metadata
*
*
**/
if (ExternalObject.AdobeXMPScript === undefined){
ExternalObject.AdobeXMPScript =
new ExternalObject('lib:AdobeXMPScript');
}
var proj = app.project;
var meta = new XMPMeta(app.project.xmpPacket);
var schemaNS = XMPMeta.getNamespaceURI("xmp");
@ -53,7 +53,7 @@ function getMetadata(){
function imprint(payload){
/**
* Stores payload in 'Label' field of project's metadata
*
*
* Args:
* payload (string): json content
*/
@ -61,14 +61,14 @@ function imprint(payload){
ExternalObject.AdobeXMPScript =
new ExternalObject('lib:AdobeXMPScript');
}
var proj = app.project;
var meta = new XMPMeta(app.project.xmpPacket);
var schemaNS = XMPMeta.getNamespaceURI("xmp");
var label = "xmp:Label";
meta.setProperty(schemaNS, label, payload);
app.project.xmpPacket = meta.serialize();
}
@ -116,14 +116,14 @@ function getItems(comps, folders, footages){
/**
* Returns JSON representation of compositions and
* if 'collectLayers' then layers in comps too.
*
*
* Args:
* comps (bool): return selected compositions
* folders (bool): return folders
* footages (bool): return FootageItem
* Returns:
* (list) of JSON items
*/
*/
var items = []
for (i = 1; i <= app.project.items.length; ++i){
var item = app.project.items[i];
@ -142,14 +142,14 @@ function getItems(comps, folders, footages){
function getSelectedItems(comps, folders, footages){
/**
* Returns list of selected items from Project menu
*
*
* Args:
* comps (bool): return selected compositions
* folders (bool): return folders
* footages (bool): return FootageItem
* Returns:
* (list) of JSON items
*/
*/
var items = []
for (i = 0; i < app.project.selection.length; ++i){
var item = app.project.selection[i];
@ -166,9 +166,9 @@ function getSelectedItems(comps, folders, footages){
function _getItem(item, comps, folders, footages){
/**
* Auxiliary function as project items and selections
* Auxiliary function as project items and selections
* are indexed in different way :/
* Refactor
* Refactor
*/
var item_type = '';
if (item instanceof FolderItem){
@ -189,7 +189,7 @@ function _getItem(item, comps, folders, footages){
return "{}";
}
}
var item = {"name": item.name,
"id": item.id,
"type": item_type};
@ -200,7 +200,7 @@ function importFile(path, item_name, import_options){
/**
* Imports file (image tested for now) as a FootageItem.
* Creates new composition
*
*
* Args:
* path (string): absolute path to image file
* item_name (string): label for composition
@ -218,7 +218,7 @@ function importFile(path, item_name, import_options){
app.beginUndoGroup("Import File");
fp = new File(path);
if (fp.exists){
try {
try {
im_opt = new ImportOptions(fp);
importAsType = import_options["ImportAsType"];
@ -234,18 +234,18 @@ function importFile(path, item_name, import_options){
}
if (importAsType.indexOf('PROJECT') > 0){
im_opt.importAs = ImportAsType.PROJECT;
}
}
}
if ('sequence' in import_options){
im_opt.sequence = true;
}
comp = app.project.importFile(im_opt);
if (app.project.selection.length == 2 &&
app.project.selection[0] instanceof FolderItem){
comp.parentFolder = app.project.selection[0]
comp.parentFolder = app.project.selection[0]
}
} catch (error) {
return _prepareError(error.toString() + importOptions.file.fsName);
@ -283,14 +283,14 @@ function setLabelColor(comp_id, color_idx){
function replaceItem(comp_id, path, item_name){
/**
* Replaces loaded file with new file and updates name
*
*
* Args:
* comp_id (int): id of composition, not a index!
* path (string): absolute path to new file
* item_name (string): new composition name
*/
app.beginUndoGroup("Replace File");
fp = new File(path);
if (!fp.exists){
return _prepareError("File " + path + " not found.");
@ -303,7 +303,7 @@ function replaceItem(comp_id, path, item_name){
}else{
item.replace(fp);
}
item.name = item_name;
} catch (error) {
return _prepareError(error.toString() + path);
@ -319,7 +319,7 @@ function replaceItem(comp_id, path, item_name){
function renameItem(item_id, new_name){
/**
* Renames item with 'item_id' to 'new_name'
*
*
* Args:
* item_id (int): id to search item
* new_name (str)
@ -335,7 +335,7 @@ function renameItem(item_id, new_name){
function deleteItem(item_id){
/**
* Delete any 'item_id'
*
*
* Not restricted only to comp, it could delete
* any item with 'id'
*/
@ -347,38 +347,76 @@ function deleteItem(item_id){
}
}
function getWorkArea(comp_id){
function getCompProperties(comp_id){
/**
* Returns information about workarea - are that will be
* rendered. All calculation will be done in OpenPype,
* easier to modify without redeploy of extension.
*
* Returns information about composition - area that will be
* rendered.
*
* Returns
* (dict)
*/
var item = app.project.itemByID(comp_id);
if (item){
return JSON.stringify({
"workAreaStart": item.displayStartFrame,
"workAreaDuration": item.duration,
"frameRate": item.frameRate});
}else{
var comp = app.project.itemByID(comp_id);
if (!comp){
return _prepareError("There is no composition with "+ comp_id);
}
return JSON.stringify({
"id": comp.id,
"name": comp.name,
"frameStart": comp.displayStartFrame,
"framesDuration": comp.duration * comp.frameRate,
"frameRate": comp.frameRate,
"width": comp.width,
"height": comp.height});
}
function setWorkArea(comp_id, workAreaStart, workAreaDuration, frameRate){
function setCompProperties(comp_id, frameStart, framesCount, frameRate,
width, height){
/**
* Sets work area info from outside (from Ftrack via OpenPype)
*/
var item = app.project.itemByID(comp_id);
if (item){
item.displayStartTime = workAreaStart;
item.duration = workAreaDuration;
item.frameRate = frameRate;
}else{
var comp = app.project.itemByID(comp_id);
if (!comp){
return _prepareError("There is no composition with "+ comp_id);
}
app.beginUndoGroup('change comp properties');
if (frameStart && framesCount && frameRate){
comp.displayStartFrame = frameStart;
comp.duration = framesCount / frameRate;
comp.frameRate = frameRate;
}
if (width && height){
var widthOld = comp.width;
var widthNew = width;
var widthDelta = widthNew - widthOld;
var heightOld = comp.height;
var heightNew = height;
var heightDelta = heightNew - heightOld;
var offset = [widthDelta / 2, heightDelta / 2];
comp.width = widthNew;
comp.height = heightNew;
for (var i = 1, il = comp.numLayers; i <= il; i++) {
var layer = comp.layer(i);
var positionProperty = layer.property('ADBE Transform Group').property('ADBE Position');
if (positionProperty.numKeys > 0) {
for (var j = 1, jl = positionProperty.numKeys; j <= jl; j++) {
var keyValue = positionProperty.keyValue(j);
positionProperty.setValueAtKey(j, keyValue + offset);
}
} else {
var positionValue = positionProperty.value;
positionProperty.setValue(positionValue + offset);
}
}
}
app.endUndoGroup();
}
function save(){
@ -504,7 +542,7 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){
* Args:
* comp_id (int): id of target composition
* item_id (int): FootageItem.id
* found_comp (CompItem, optional): to limit querying if
* found_comp (CompItem, optional): to limit quering if
* comp already found previously
*/
var comp = found_comp || app.project.itemByID(comp_id);
@ -749,7 +787,7 @@ function render(target_folder, comp_id){
var om1 = app.project.renderQueue.item(i).outputModule(1);
var file_name = File.decode( om1.file.name ).replace('℗', ''); // Name contains special character, space?
var omItem1_settable_str = app.project.renderQueue.item(i).outputModule(1).getSettings( GetSettingsFormat.STRING_SETTABLE );
var targetFolder = new Folder(target_folder);
@ -763,7 +801,7 @@ function render(target_folder, comp_id){
render_item.render = false;
}
}
}
app.beginSuppressDialogs();
app.project.renderQueue.render();
@ -779,6 +817,10 @@ function getAppVersion(){
return _prepareSingleValue(app.version);
}
function printMsg(msg){
alert(msg);
}
function _prepareSingleValue(value){
return JSON.stringify({"result": value})
}

View file

@ -1,49 +1,77 @@
import os
import sys
import subprocess
import collections
import logging
import asyncio
import functools
import traceback
from wsrpc_aiohttp import (
WebSocketRoute,
WebSocketAsync
)
from qtpy import QtCore
from qtpy import QtCore, QtWidgets
from openpype.lib import Logger
from openpype.pipeline import legacy_io
from openpype.tools.utils import host_tools
from openpype.tests.lib import is_in_tests
from openpype.pipeline import install_host, legacy_io
from openpype.modules import ModulesManager
from openpype.tools.adobe_webserver.app import WebServerTool
from .ws_stub import AfterEffectsServerStub
from .ws_stub import get_stub
from .lib import set_settings
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
class ConnectionNotEstablishedYet(Exception):
pass
def safe_excepthook(*args):
traceback.print_exception(*args)
def get_stub():
"""
Convenience function to get server RPC stub to call methods directed
for host (Photoshop).
It expects already created connection, started from client.
Currently created when panel is opened (PS: Window>Extensions>Avalon)
:return: <PhotoshopClientStub> where functions could be called from
"""
ae_stub = AfterEffectsServerStub()
if not ae_stub.client:
raise ConnectionNotEstablishedYet("Connection is not created yet")
def main(*subprocess_args):
"""Main entrypoint to AE launching, called from pre hook."""
sys.excepthook = safe_excepthook
return ae_stub
from openpype.hosts.aftereffects.api import AfterEffectsHost
host = AfterEffectsHost()
install_host(host)
def stub():
return get_stub()
os.environ["OPENPYPE_LOG_NO_COLORS"] = "False"
app = QtWidgets.QApplication([])
app.setQuitOnLastWindowClosed(False)
launcher = ProcessLauncher(subprocess_args)
launcher.start()
if os.environ.get("HEADLESS_PUBLISH"):
manager = ModulesManager()
webpublisher_addon = manager["webpublisher"]
launcher.execute_in_main_thread(
functools.partial(
webpublisher_addon.headless_publish,
log,
"CloseAE",
is_in_tests()
)
)
elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True):
save = False
if os.getenv("WORKFILES_SAVE_AS"):
save = True
launcher.execute_in_main_thread(
lambda: host_tools.show_tool_by_name("workfiles", save=save)
)
sys.exit(app.exec_())
def show_tool_by_name(tool_name):
@ -55,6 +83,7 @@ def show_tool_by_name(tool_name):
class ProcessLauncher(QtCore.QObject):
"""Launches webserver, connects to it, runs main thread."""
route_name = "AfterEffects"
_main_thread_callbacks = collections.deque()
@ -296,6 +325,15 @@ class AfterEffectsRoute(WebSocketRoute):
async def sceneinventory_route(self):
self._tool_route("sceneinventory")
async def setresolution_route(self):
self._settings_route(False, True)
async def setframes_route(self):
self._settings_route(True, False)
async def setall_route(self):
self._settings_route(True, True)
async def experimental_tools_route(self):
self._tool_route("experimental_tools")
@ -309,3 +347,13 @@ class AfterEffectsRoute(WebSocketRoute):
# Required return statement.
return "nothing"
def _settings_route(self, frames, resolution):
partial_method = functools.partial(set_settings,
frames,
resolution)
ProcessLauncher.execute_in_main_thread(partial_method)
# Required return statement.
return "nothing"

View file

@ -1,69 +1,17 @@
import os
import sys
import re
import json
import contextlib
import traceback
import logging
from functools import partial
from qtpy import QtWidgets
from openpype.pipeline import install_host
from openpype.modules import ModulesManager
from openpype.tools.utils import host_tools
from openpype.tests.lib import is_in_tests
from .launch_logic import ProcessLauncher, get_stub
from openpype.pipeline.context_tools import get_current_context
from openpype.client import get_asset_by_name
from .ws_stub import get_stub
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
def safe_excepthook(*args):
traceback.print_exception(*args)
def main(*subprocess_args):
sys.excepthook = safe_excepthook
from openpype.hosts.aftereffects.api import AfterEffectsHost
host = AfterEffectsHost()
install_host(host)
os.environ["OPENPYPE_LOG_NO_COLORS"] = "False"
app = QtWidgets.QApplication([])
app.setQuitOnLastWindowClosed(False)
launcher = ProcessLauncher(subprocess_args)
launcher.start()
if os.environ.get("HEADLESS_PUBLISH"):
manager = ModulesManager()
webpublisher_addon = manager["webpublisher"]
launcher.execute_in_main_thread(
partial(
webpublisher_addon.headless_publish,
log,
"CloseAE",
is_in_tests()
)
)
elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True):
save = False
if os.getenv("WORKFILES_SAVE_AS"):
save = True
launcher.execute_in_main_thread(
lambda: host_tools.show_tool_by_name("workfiles", save=save)
)
sys.exit(app.exec_())
@contextlib.contextmanager
def maintained_selection():
"""Maintain selection during context."""
@ -145,13 +93,13 @@ def get_asset_settings(asset_doc):
"""
asset_data = asset_doc["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
handle_start = asset_data.get("handleStart")
handle_end = asset_data.get("handleEnd")
resolution_width = asset_data.get("resolutionWidth")
resolution_height = asset_data.get("resolutionHeight")
fps = asset_data.get("fps", 0)
frame_start = asset_data.get("frameStart", 0)
frame_end = asset_data.get("frameEnd", 0)
handle_start = asset_data.get("handleStart", 0)
handle_end = asset_data.get("handleEnd", 0)
resolution_width = asset_data.get("resolutionWidth", 0)
resolution_height = asset_data.get("resolutionHeight", 0)
duration = (frame_end - frame_start + 1) + handle_start + handle_end
return {
@ -164,3 +112,49 @@ def get_asset_settings(asset_doc):
"resolutionHeight": resolution_height,
"duration": duration
}
def set_settings(frames, resolution, comp_ids=None, print_msg=True):
"""Sets number of frames and resolution to selected comps.
Args:
frames (bool): True if set frame info
resolution (bool): True if set resolution
comp_ids (list): specific composition ids, if empty
it tries to look for currently selected
print_msg (bool): True throw JS alert with msg
"""
frame_start = frames_duration = fps = width = height = None
current_context = get_current_context()
asset_doc = get_asset_by_name(current_context["project_name"],
current_context["asset_name"])
settings = get_asset_settings(asset_doc)
msg = ''
if frames:
frame_start = settings["frameStart"] - settings["handleStart"]
frames_duration = settings["duration"]
fps = settings["fps"]
msg += f"frame start:{frame_start}, duration:{frames_duration}, "\
f"fps:{fps}"
if resolution:
width = settings["resolutionWidth"]
height = settings["resolutionHeight"]
msg += f"width:{width} and height:{height}"
stub = get_stub()
if not comp_ids:
comps = stub.get_selected_items(True, False, False)
comp_ids = [comp.id for comp in comps]
if not comp_ids:
stub.print_msg("Select at least one composition to apply settings.")
return
for comp_id in comp_ids:
msg = f"Setting for comp {comp_id} " + msg
log.debug(msg)
stub.set_comp_properties(comp_id, frame_start, frames_duration,
fps, width, height)
if print_msg:
stub.print_msg(msg)

View file

@ -8,10 +8,7 @@ from openpype.lib import Logger, register_event_callback
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
AVALON_CONTAINER_ID,
legacy_io,
)
from openpype.pipeline.load import any_outdated_containers
import openpype.hosts.aftereffects
@ -23,7 +20,8 @@ from openpype.host import (
IPublishHost
)
from .launch_logic import get_stub, ConnectionNotEstablishedYet
from .launch_logic import get_stub
from .ws_stub import ConnectionNotEstablishedYet
log = Logger.get_logger(__name__)
@ -60,9 +58,6 @@ class AfterEffectsHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
print("Not connected yet, ignoring")
return
if not stub.get_active_document_name():
return
self._stub = stub
return self._stub

View file

@ -11,6 +11,10 @@ from wsrpc_aiohttp import WebSocketAsync
from openpype.tools.adobe_webserver.app import WebServerTool
class ConnectionNotEstablishedYet(Exception):
pass
@attr.s
class AEItem(object):
"""
@ -24,8 +28,8 @@ class AEItem(object):
# all imported elements, single for
# regular image, array for Backgrounds
members = attr.ib(factory=list)
workAreaStart = attr.ib(default=None)
workAreaDuration = attr.ib(default=None)
frameStart = attr.ib(default=None)
framesDuration = attr.ib(default=None)
frameRate = attr.ib(default=None)
file_name = attr.ib(default=None)
instance_id = attr.ib(default=None) # New Publisher
@ -355,42 +359,50 @@ class AfterEffectsServerStub():
return self._handle_return(res)
def get_work_area(self, item_id):
""" Get work are information for render purposes
def get_comp_properties(self, comp_id):
""" Get composition information for render purposes
Returns startFrame, frameDuration, fps, width, height.
Args:
item_id (int):
comp_id (int):
Returns:
(AEItem)
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_work_area',
item_id=item_id
('AfterEffects.get_comp_properties',
item_id=comp_id
))
records = self._to_records(self._handle_return(res))
if records:
return records.pop()
def set_work_area(self, item, start, duration, frame_rate):
def set_comp_properties(self, comp_id, start, duration, frame_rate,
width, height):
"""
Set work area to predefined values (from Ftrack).
Work area directs what gets rendered.
Beware of rounding, AE expects seconds, not frames directly.
Args:
item (dict):
start (float): workAreaStart in seconds
duration (float): in seconds
comp_id (int):
start (int): workAreaStart in frames
duration (int): in frames
frame_rate (float): frames in seconds
width (int): resolution width
height (int): resolution height
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.set_work_area',
item_id=item.id,
('AfterEffects.set_comp_properties',
item_id=comp_id,
start=start,
duration=duration,
frame_rate=frame_rate))
frame_rate=frame_rate,
width=width,
height=height))
return self._handle_return(res)
def save(self):
@ -554,6 +566,12 @@ class AfterEffectsServerStub():
return self._handle_return(res)
def print_msg(self, msg):
"""Triggers Javascript alert dialog."""
self.websocketserver.call(self.client.call
('AfterEffects.print_msg',
msg=msg))
def _handle_return(self, res):
"""Wraps return, throws ValueError if 'error' key is present."""
if res and isinstance(res, str) and res != "undefined":
@ -608,8 +626,8 @@ class AfterEffectsServerStub():
d.get('name'),
d.get('type'),
d.get('members'),
d.get('workAreaStart'),
d.get('workAreaDuration'),
d.get('frameStart'),
d.get('framesDuration'),
d.get('frameRate'),
d.get('file_name'),
d.get("instance_id"),
@ -618,3 +636,18 @@ class AfterEffectsServerStub():
ret.append(item)
return ret
def get_stub():
"""
Convenience function to get server RPC stub to call methods directed
for host (Photoshop).
It expects already created connection, started from client.
Currently created when panel is opened (PS: Window>Extensions>Avalon)
:return: <PhotoshopClientStub> where functions could be called from
"""
ae_stub = AfterEffectsServerStub()
if not ae_stub.client:
raise ConnectionNotEstablishedYet("Connection is not created yet")
return ae_stub

View file

@ -9,6 +9,7 @@ from openpype.pipeline import (
CreatorError
)
from openpype.hosts.aftereffects.api.pipeline import cache_and_get_instances
from openpype.hosts.aftereffects.api.lib import set_settings
from openpype.lib import prepare_template_data
from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
@ -32,6 +33,14 @@ class RenderCreator(Creator):
def create(self, subset_name_from_ui, data, pre_create_data):
stub = api.get_stub() # only after After Effects is up
try:
_ = stub.get_active_document_full_name()
except ValueError:
raise CreatorError(
"Please save workfile via Workfile app first!"
)
if pre_create_data.get("use_selection"):
comps = stub.get_selected_items(
comps=True, folders=False, footages=False
@ -41,8 +50,8 @@ class RenderCreator(Creator):
if not comps:
raise CreatorError(
"Nothing to create. Select composition "
"if 'useSelection' or create at least "
"Nothing to create. Select composition in Project Bin if "
"'Use selection' is toggled or create at least "
"one composition."
)
use_composition_name = (pre_create_data.get("use_composition_name") or
@ -87,10 +96,14 @@ class RenderCreator(Creator):
self._add_instance_to_context(new_instance)
stub.rename_item(comp.id, subset_name)
set_settings(True, True, [comp.id], print_msg=False)
def get_pre_create_attr_defs(self):
output = [
BoolDef("use_selection", default=True, label="Use selection"),
BoolDef("use_selection",
tooltip="Composition for publishable instance should be "
"selected by default.",
default=True, label="Use selection"),
BoolDef("use_composition_name",
label="Use composition name in subset"),
UISeparatorDef(),

View file

@ -66,19 +66,19 @@ class CollectAERender(publish.AbstractCollectRender):
comp_id = int(inst.data["members"][0])
work_area_info = CollectAERender.get_stub().get_work_area(comp_id)
comp_info = CollectAERender.get_stub().get_comp_properties(
comp_id)
if not work_area_info:
if not comp_info:
self.log.warning("Orphaned instance, deleting metadata")
inst_id = inst.get("instance_id") or str(comp_id)
inst_id = inst.data.get("instance_id") or str(comp_id)
CollectAERender.get_stub().remove_instance(inst_id)
continue
frame_start = work_area_info.workAreaStart
frame_end = round(work_area_info.workAreaStart +
float(work_area_info.workAreaDuration) *
float(work_area_info.frameRate)) - 1
fps = work_area_info.frameRate
frame_start = comp_info.frameStart
frame_end = round(comp_info.frameStart +
comp_info.framesDuration) - 1
fps = comp_info.frameRate
# TODO add resolution when supported by extension
task_name = inst.data.get("task") # legacy

View file

@ -26,6 +26,8 @@ from openpype.lib import (
emit_event
)
import openpype.hosts.blender
from openpype.settings import get_project_settings
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.blender.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
@ -83,6 +85,31 @@ def uninstall():
ops.unregister()
def show_message(title, message):
from openpype.widgets.message_window import Window
from .ops import BlenderApplication
BlenderApplication.get_app()
Window(
parent=None,
title=title,
message=message,
level="warning")
def message_window(title, message):
from .ops import (
MainThreadItem,
execute_in_main_thread,
_process_app_events
)
mti = MainThreadItem(show_message, title, message)
execute_in_main_thread(mti)
_process_app_events()
def set_start_end_frames():
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
@ -125,10 +152,36 @@ def set_start_end_frames():
def on_new():
set_start_end_frames()
project = os.environ.get("AVALON_PROJECT")
settings = get_project_settings(project)
unit_scale_settings = settings.get("blender").get("unit_scale_settings")
unit_scale_enabled = unit_scale_settings.get("enabled")
if unit_scale_enabled:
unit_scale = unit_scale_settings.get("base_file_unit_scale")
bpy.context.scene.unit_settings.scale_length = unit_scale
def on_open():
set_start_end_frames()
project = os.environ.get("AVALON_PROJECT")
settings = get_project_settings(project)
unit_scale_settings = settings.get("blender").get("unit_scale_settings")
unit_scale_enabled = unit_scale_settings.get("enabled")
apply_on_opening = unit_scale_settings.get("apply_on_opening")
if unit_scale_enabled and apply_on_opening:
unit_scale = unit_scale_settings.get("base_file_unit_scale")
prev_unit_scale = bpy.context.scene.unit_settings.scale_length
if unit_scale != prev_unit_scale:
bpy.context.scene.unit_settings.scale_length = unit_scale
message_window(
"Base file unit scale changed",
"Base file unit scale changed to match the project settings.")
@bpy.app.handlers.persistent
def _on_save_pre(*args):

View file

@ -65,37 +65,19 @@ class CacheModelLoader(plugin.AssetLoader):
imported = lib.get_selection()
empties = [obj for obj in imported if obj.type == 'EMPTY']
container = None
for empty in empties:
if not empty.parent:
container = empty
break
assert container, "No asset group found"
# Children must be linked before parents,
# otherwise the hierarchy will break
objects = []
nodes = list(container.children)
for obj in nodes:
for obj in imported:
obj.parent = asset_group
bpy.data.objects.remove(container)
for obj in nodes:
for obj in imported:
objects.append(obj)
nodes.extend(list(obj.children))
imported.extend(list(obj.children))
objects.reverse()
for obj in objects:
parent.objects.link(obj)
collection.objects.unlink(obj)
for obj in objects:
name = obj.name
obj.name = f"{group_name}:{name}"
@ -138,13 +120,14 @@ class CacheModelLoader(plugin.AssetLoader):
group_name = plugin.asset_name(asset, subset, unique_number)
namespace = namespace or f"{asset}_{unique_number}"
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
if not avalon_container:
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
bpy.context.scene.collection.children.link(avalon_container)
avalon_containers = bpy.data.collections.get(AVALON_CONTAINERS)
if not avalon_containers:
avalon_containers = bpy.data.collections.new(
name=AVALON_CONTAINERS)
bpy.context.scene.collection.children.link(avalon_containers)
asset_group = bpy.data.objects.new(group_name, object_data=None)
avalon_container.objects.link(asset_group)
avalon_containers.objects.link(asset_group)
objects = self._process(libpath, asset_group, group_name)

View file

@ -0,0 +1,209 @@
"""Load an asset in Blender from an Alembic file."""
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional
import bpy
from openpype.pipeline import (
get_representation_path,
AVALON_CONTAINER_ID,
)
from openpype.hosts.blender.api import plugin, lib
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
)
class AbcCameraLoader(plugin.AssetLoader):
"""Load a camera from Alembic file.
Stores the imported asset in an empty named after the asset.
"""
families = ["camera"]
representations = ["abc"]
label = "Load Camera (ABC)"
icon = "code-fork"
color = "orange"
def _remove(self, asset_group):
objects = list(asset_group.children)
for obj in objects:
if obj.type == "CAMERA":
bpy.data.cameras.remove(obj.data)
elif obj.type == "EMPTY":
objects.extend(obj.children)
bpy.data.objects.remove(obj)
def _process(self, libpath, asset_group, group_name):
plugin.deselect_all()
bpy.ops.wm.alembic_import(filepath=libpath)
objects = lib.get_selection()
for obj in objects:
obj.parent = asset_group
for obj in objects:
name = obj.name
obj.name = f"{group_name}:{name}"
if obj.type != "EMPTY":
name_data = obj.data.name
obj.data.name = f"{group_name}:{name_data}"
if not obj.get(AVALON_PROPERTY):
obj[AVALON_PROPERTY] = dict()
avalon_info = obj[AVALON_PROPERTY]
avalon_info.update({"container_name": group_name})
plugin.deselect_all()
return objects
def process_asset(
self,
context: dict,
name: str,
namespace: Optional[str] = None,
options: Optional[Dict] = None,
) -> Optional[List]:
"""
Arguments:
name: Use pre-defined name
namespace: Use pre-defined namespace
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
asset_name = plugin.asset_name(asset, subset)
unique_number = plugin.get_unique_number(asset, subset)
group_name = plugin.asset_name(asset, subset, unique_number)
namespace = namespace or f"{asset}_{unique_number}"
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
if not avalon_container:
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
bpy.context.scene.collection.children.link(avalon_container)
asset_group = bpy.data.objects.new(group_name, object_data=None)
avalon_container.objects.link(asset_group)
objects = self._process(libpath, asset_group, group_name)
objects = []
nodes = list(asset_group.children)
for obj in nodes:
objects.append(obj)
nodes.extend(list(obj.children))
bpy.context.scene.collection.objects.link(asset_group)
asset_group[AVALON_PROPERTY] = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace or "",
"loader": str(self.__class__.__name__),
"representation": str(context["representation"]["_id"]),
"libpath": libpath,
"asset_name": asset_name,
"parent": str(context["representation"]["parent"]),
"family": context["representation"]["context"]["family"],
"objectName": group_name,
}
self[:] = objects
return objects
def exec_update(self, container: Dict, representation: Dict):
"""Update the loaded asset.
This will remove all objects of the current collection, load the new
ones and add them to the collection.
If the objects of the collection are used in another collection they
will not be removed, only unlinked. Normally this should not be the
case though.
Warning:
No nested collections are supported at the moment!
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
libpath = Path(get_representation_path(representation))
extension = libpath.suffix.lower()
self.log.info(
"Container: %s\nRepresentation: %s",
pformat(container, indent=2),
pformat(representation, indent=2),
)
assert asset_group, (
f"The asset is not loaded: {container['objectName']}")
assert libpath, (
f"No existing library file found for {container['objectName']}")
assert libpath.is_file(), f"The file doesn't exist: {libpath}"
assert extension in plugin.VALID_EXTENSIONS, (
f"Unsupported file: {libpath}")
metadata = asset_group.get(AVALON_PROPERTY)
group_libpath = metadata["libpath"]
normalized_group_libpath = str(
Path(bpy.path.abspath(group_libpath)).resolve())
normalized_libpath = str(
Path(bpy.path.abspath(str(libpath))).resolve())
self.log.debug(
"normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
normalized_group_libpath,
normalized_libpath,
)
if normalized_group_libpath == normalized_libpath:
self.log.info("Library already loaded, not updating...")
return
mat = asset_group.matrix_basis.copy()
self._remove(asset_group)
self._process(str(libpath), asset_group, object_name)
asset_group.matrix_basis = mat
metadata["libpath"] = str(libpath)
metadata["representation"] = str(representation["_id"])
def exec_remove(self, container: Dict) -> bool:
"""Remove an existing container from a Blender scene.
Arguments:
container (openpype:container-1.0): Container to remove,
from `host.ls()`.
Returns:
bool: Whether the container was deleted.
Warning:
No nested collections are supported at the moment!
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
if not asset_group:
return False
self._remove(asset_group)
bpy.data.objects.remove(asset_group)
return True

View file

@ -13,6 +13,7 @@ from .lib import (
update_frame_range,
set_asset_framerange,
get_current_comp,
get_bmd_library,
comp_lock_and_undo_chunk
)

View file

@ -256,8 +256,11 @@ def switch_item(container,
@contextlib.contextmanager
def maintained_selection():
comp = get_current_comp()
def maintained_selection(comp=None):
"""Reset comp selection from before the context after the context"""
if comp is None:
comp = get_current_comp()
previous_selection = comp.GetToolList(True).values()
try:
yield
@ -269,6 +272,33 @@ def maintained_selection():
flow.Select(tool, True)
@contextlib.contextmanager
def maintained_comp_range(comp=None,
global_start=True,
global_end=True,
render_start=True,
render_end=True):
"""Reset comp frame ranges from before the context after the context"""
if comp is None:
comp = get_current_comp()
comp_attrs = comp.GetAttrs()
preserve_attrs = {}
if global_start:
preserve_attrs["COMPN_GlobalStart"] = comp_attrs["COMPN_GlobalStart"]
if global_end:
preserve_attrs["COMPN_GlobalEnd"] = comp_attrs["COMPN_GlobalEnd"]
if render_start:
preserve_attrs["COMPN_RenderStart"] = comp_attrs["COMPN_RenderStart"]
if render_end:
preserve_attrs["COMPN_RenderEnd"] = comp_attrs["COMPN_RenderEnd"]
try:
yield
finally:
comp.SetAttrs(preserve_attrs)
def get_frame_path(path):
"""Get filename for the Fusion Saver with padded number as '#'
@ -309,6 +339,12 @@ def get_fusion_module():
return fusion
def get_bmd_library():
"""Get bmd library"""
bmd = getattr(sys.modules["__main__"], "bmd", None)
return bmd
def get_current_comp():
"""Get current comp in this session"""
fusion = get_fusion_module()

View file

@ -40,6 +40,11 @@ class CreateSaver(NewCreator):
"{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}")
def create(self, subset_name, instance_data, pre_create_data):
self.pass_pre_attributes_to_instance(
instance_data,
pre_create_data
)
instance_data.update({
"id": "pyblish.avalon.instance",
"subset": subset_name
@ -194,20 +199,25 @@ class CreateSaver(NewCreator):
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool(),
self._get_frame_range_enum()
]
return attr_defs
def get_instance_attr_defs(self):
"""Settings for publish page"""
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool(),
]
return attr_defs
return self.get_pre_create_attr_defs()
def pass_pre_attributes_to_instance(
self,
instance_data,
pre_create_data
):
creator_attrs = instance_data["creator_attributes"] = {}
for pass_key in pre_create_data.keys():
creator_attrs[pass_key] = pre_create_data[pass_key]
# These functions below should be moved to another file
# so it can be used by other plugins. plugin.py ?
def _get_render_target_enum(self):
rendering_targets = {
"local": "Local machine rendering",
@ -220,6 +230,19 @@ class CreateSaver(NewCreator):
"render_target", items=rendering_targets, label="Render target"
)
def _get_frame_range_enum(self):
frame_range_options = {
"asset_db": "Current asset context",
"render_range": "From render in/out",
"comp_range": "From composition timeline"
}
return EnumDef(
"frame_range_source",
items=frame_range_options,
label="Frame range source"
)
def _get_reviewable_bool(self):
return BoolDef(
"review",

View file

@ -1,4 +1,3 @@
from openpype.pipeline import (
load,
get_representation_path,
@ -6,7 +5,7 @@ from openpype.pipeline import (
from openpype.hosts.fusion.api import (
imprint_container,
get_current_comp,
comp_lock_and_undo_chunk
comp_lock_and_undo_chunk,
)
@ -15,7 +14,21 @@ class FusionLoadFBXMesh(load.LoaderPlugin):
families = ["*"]
representations = ["*"]
extensions = {"fbx"}
extensions = {
"3ds",
"amc",
"aoa",
"asf",
"bvh",
"c3d",
"dae",
"dxf",
"fbx",
"htr",
"mcd",
"obj",
"trc",
}
label = "Load FBX mesh"
order = -10
@ -27,23 +40,24 @@ class FusionLoadFBXMesh(load.LoaderPlugin):
def load(self, context, name, namespace, data):
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
namespace = context["asset"]["name"]
# Create the Loader with the filename path set
comp = get_current_comp()
with comp_lock_and_undo_chunk(comp, "Create tool"):
path = self.fname
args = (-32768, -32768)
tool = comp.AddTool(self.tool_type, *args)
tool["ImportFile"] = path
imprint_container(tool,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__)
imprint_container(
tool,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__,
)
def switch(self, container, representation):
self.update(container, representation)

View file

@ -3,17 +3,14 @@ import contextlib
import openpype.pipeline.load as load
from openpype.pipeline.load import (
get_representation_context,
get_representation_path_from_context
get_representation_path_from_context,
)
from openpype.hosts.fusion.api import (
imprint_container,
get_current_comp,
comp_lock_and_undo_chunk
)
from openpype.lib.transcoding import (
IMAGE_EXTENSIONS,
VIDEO_EXTENSIONS
comp_lock_and_undo_chunk,
)
from openpype.lib.transcoding import IMAGE_EXTENSIONS, VIDEO_EXTENSIONS
comp = get_current_comp()
@ -57,20 +54,23 @@ def preserve_trim(loader, log=None):
try:
yield
finally:
length = loader.GetAttrs()["TOOLIT_Clip_Length"][1] - 1
if trim_from_start > length:
trim_from_start = length
if log:
log.warning("Reducing trim in to %d "
"(because of less frames)" % trim_from_start)
log.warning(
"Reducing trim in to %d "
"(because of less frames)" % trim_from_start
)
remainder = length - trim_from_start
if trim_from_end > remainder:
trim_from_end = remainder
if log:
log.warning("Reducing trim in to %d "
"(because of less frames)" % trim_from_end)
log.warning(
"Reducing trim in to %d "
"(because of less frames)" % trim_from_end
)
loader["ClipTimeStart"][time] = trim_from_start
loader["ClipTimeEnd"][time] = length - trim_from_end
@ -109,11 +109,15 @@ def loader_shift(loader, frame, relative=True):
# Shifting global in will try to automatically compensate for the change
# in the "ClipTimeStart" and "HoldFirstFrame" inputs, so we preserve those
# input values to "just shift" the clip
with preserve_inputs(loader, inputs=["ClipTimeStart",
"ClipTimeEnd",
"HoldFirstFrame",
"HoldLastFrame"]):
with preserve_inputs(
loader,
inputs=[
"ClipTimeStart",
"ClipTimeEnd",
"HoldFirstFrame",
"HoldLastFrame",
],
):
# GlobalIn cannot be set past GlobalOut or vice versa
# so we must apply them in the order of the shift.
if shift > 0:
@ -129,7 +133,14 @@ def loader_shift(loader, frame, relative=True):
class FusionLoadSequence(load.LoaderPlugin):
"""Load image sequence into Fusion"""
families = ["imagesequence", "review", "render", "plate"]
families = [
"imagesequence",
"review",
"render",
"plate",
"image",
"onilne",
]
representations = ["*"]
extensions = set(
ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS)
@ -143,7 +154,7 @@ class FusionLoadSequence(load.LoaderPlugin):
def load(self, context, name, namespace, data):
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
namespace = context["asset"]["name"]
# Use the first file for now
path = get_representation_path_from_context(context)
@ -151,7 +162,6 @@ class FusionLoadSequence(load.LoaderPlugin):
# Create the Loader with the filename path set
comp = get_current_comp()
with comp_lock_and_undo_chunk(comp, "Create Loader"):
args = (-32768, -32768)
tool = comp.AddTool("Loader", *args)
tool["Clip"] = path
@ -160,11 +170,13 @@ class FusionLoadSequence(load.LoaderPlugin):
start = self._get_start(context["version"], tool)
loader_shift(tool, start, relative=False)
imprint_container(tool,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__)
imprint_container(
tool,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__,
)
def switch(self, container, representation):
self.update(container, representation)
@ -222,24 +234,28 @@ class FusionLoadSequence(load.LoaderPlugin):
start = self._get_start(context["version"], tool)
with comp_lock_and_undo_chunk(comp, "Update Loader"):
# Update the loader's path whilst preserving some values
with preserve_trim(tool, log=self.log):
with preserve_inputs(tool,
inputs=("HoldFirstFrame",
"HoldLastFrame",
"Reverse",
"Depth",
"KeyCode",
"TimeCodeOffset")):
with preserve_inputs(
tool,
inputs=(
"HoldFirstFrame",
"HoldLastFrame",
"Reverse",
"Depth",
"KeyCode",
"TimeCodeOffset",
),
):
tool["Clip"] = path
# Set the global in to the start frame of the sequence
global_in_changed = loader_shift(tool, start, relative=False)
if global_in_changed:
# Log this change to the user
self.log.debug("Changed '%s' global in: %d" % (tool.Name,
start))
self.log.debug(
"Changed '%s' global in: %d" % (tool.Name, start)
)
# Update the imprinted representation
tool.SetData("avalon.representation", str(representation["_id"]))
@ -264,9 +280,11 @@ class FusionLoadSequence(load.LoaderPlugin):
# Get frame start without handles
start = data.get("frameStart")
if start is None:
self.log.warning("Missing start frame for version "
"assuming starts at frame 0 for: "
"{}".format(tool.Name))
self.log.warning(
"Missing start frame for version "
"assuming starts at frame 0 for: "
"{}".format(tool.Name)
)
return 0
# Use `handleStart` if the data is available
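A note on the preserve_trim helper above: it clamps the restored trim values so they can never exceed the length of the newly loaded clip. A minimal standalone sketch of that clamping arithmetic, with function and argument names invented here purely for illustration:

def clamp_trims(trim_in, trim_out, clip_length):
    """Clamp trim in/out so they always fit inside a clip of `clip_length` frames."""
    # Trim-in can never go past the end of the clip
    trim_in = min(trim_in, clip_length)
    # Whatever remains after trim-in is the most that can be trimmed from the end
    trim_out = min(trim_out, clip_length - trim_in)
    return trim_in, trim_out

# A 10 frame clip updated from a longer clip that had larger trims
assert clamp_trims(trim_in=14, trim_out=3, clip_length=10) == (10, 0)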

View file

@ -0,0 +1,32 @@
"""Import workfiles into your current comp.
As all imported nodes are free floating and will probably be changed, there
is no update or reload function added for this plugin.
"""
from openpype.pipeline import load
from openpype.hosts.fusion.api import (
get_current_comp,
get_bmd_library,
)
class FusionLoadWorkfile(load.LoaderPlugin):
"""Load the content of a workfile into Fusion"""
families = ["workfile"]
representations = ["*"]
extensions = {"comp"}
label = "Load Workfile"
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, data):
# Get needed elements
bmd = get_bmd_library()
comp = get_current_comp()
# Paste the content of the file into the current comp
comp.Paste(bmd.readfile(self.fname))

View file

@ -35,9 +35,10 @@ class CollectFusionCompFrameRanges(pyblish.api.ContextPlugin):
# Store comp render ranges
start, end, global_start, global_end = get_comp_render_range(comp)
context.data["frameStart"] = int(start)
context.data["frameEnd"] = int(end)
context.data["frameStartHandle"] = int(global_start)
context.data["frameEndHandle"] = int(global_end)
context.data["handleStart"] = int(start) - int(global_start)
context.data["handleEnd"] = int(global_end) - int(end)
context.data.update({
"renderFrameStart": int(start),
"renderFrameEnd": int(end),
"compFrameStart": int(global_start),
"compFrameEnd": int(global_end)
})

View file

@ -1,50 +0,0 @@
import pyblish.api
from openpype.pipeline import publish
import os
class CollectFusionExpectedFrames(
pyblish.api.InstancePlugin, publish.ColormanagedPyblishPluginMixin
):
"""Collect all frames needed to publish expected frames"""
order = pyblish.api.CollectorOrder + 0.5
label = "Collect Expected Frames"
hosts = ["fusion"]
families = ["render"]
def process(self, instance):
context = instance.context
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
basename = os.path.basename(path)
head, ext = os.path.splitext(basename)
files = [
f"{head}{str(frame).zfill(4)}{ext}"
for frame in range(frame_start, frame_end + 1)
]
repre = {
"name": ext[1:],
"ext": ext[1:],
"frameStart": f"%0{len(str(frame_end))}d" % frame_start,
"files": files,
"stagingDir": output_dir,
}
self.set_representation_colorspace(
representation=repre,
context=context,
)
# review representation
if instance.data.get("review", False):
repre["tags"] = ["review"]
# add the repre to the instance
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(repre)

View file

@ -1,22 +0,0 @@
import pyblish.api
class CollectFusionVersion(pyblish.api.ContextPlugin):
"""Collect current comp"""
order = pyblish.api.CollectorOrder
label = "Collect Fusion Version"
hosts = ["fusion"]
def process(self, context):
"""Collect all image sequence tools"""
comp = context.data.get("currentComp")
if not comp:
raise RuntimeError("No comp previously collected, unable to "
"retrieve Fusion version.")
version = comp.GetApp().Version
context.data["fusionVersion"] = version
self.log.info("Fusion version: %s" % version)

View file

@ -113,4 +113,4 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
inputs = [c["representation"] for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)
self.log.debug("Collected inputs: %s" % inputs)

View file

@ -1,5 +1,3 @@
import os
import pyblish.api
@ -24,23 +22,63 @@ class CollectInstanceData(pyblish.api.InstancePlugin):
creator_attributes = instance.data["creator_attributes"]
instance.data.update(creator_attributes)
# Include start and end render frame in label
subset = instance.data["subset"]
frame_range_source = creator_attributes.get("frame_range_source")
instance.data["frame_range_source"] = frame_range_source
# get asset frame ranges to all instances
# render family instances `asset_db` render target
start = context.data["frameStart"]
end = context.data["frameEnd"]
label = "{subset} ({start}-{end})".format(subset=subset,
start=int(start),
end=int(end))
handle_start = context.data["handleStart"]
handle_end = context.data["handleEnd"]
start_with_handle = start - handle_start
end_with_handle = end + handle_end
# conditions for render family instances
if frame_range_source == "render_range":
# set comp render frame ranges
start = context.data["renderFrameStart"]
end = context.data["renderFrameEnd"]
handle_start = 0
handle_end = 0
start_with_handle = start
end_with_handle = end
if frame_range_source == "comp_range":
comp_start = context.data["compFrameStart"]
comp_end = context.data["compFrameEnd"]
render_start = context.data["renderFrameStart"]
render_end = context.data["renderFrameEnd"]
# set comp frame ranges
start = render_start
end = render_end
handle_start = render_start - comp_start
handle_end = comp_end - render_end
start_with_handle = comp_start
end_with_handle = comp_end
# Include start and end render frame in label
subset = instance.data["subset"]
label = (
"{subset} ({start}-{end}) [{handle_start}-{handle_end}]"
).format(
subset=subset,
start=int(start),
end=int(end),
handle_start=int(handle_start),
handle_end=int(handle_end)
)
instance.data.update({
"label": label,
# todo: Allow custom frame range per instance
"frameStart": context.data["frameStart"],
"frameEnd": context.data["frameEnd"],
"frameStartHandle": context.data["frameStartHandle"],
"frameEndHandle": context.data["frameStartHandle"],
"handleStart": context.data["handleStart"],
"handleEnd": context.data["handleEnd"],
"frameStart": start,
"frameEnd": end,
"frameStartHandle": start_with_handle,
"frameEndHandle": end_with_handle,
"handleStart": handle_start,
"handleEnd": handle_end,
"fps": context.data["fps"],
})
@ -49,31 +87,3 @@ class CollectInstanceData(pyblish.api.InstancePlugin):
if instance.data.get("review", False):
self.log.info("Adding review family..")
instance.data["families"].append("review")
if instance.data["family"] == "render":
# TODO: This should probably move into a collector of
# its own for the "render" family
from openpype.hosts.fusion.api.lib import get_frame_path
comp = context.data["currentComp"]
# This is only the case for savers currently but not
# for workfile instances. So we assume saver here.
tool = instance.data["transientData"]["tool"]
path = tool["Clip"][comp.TIME_UNDEFINED]
filename = os.path.basename(path)
head, padding, tail = get_frame_path(filename)
ext = os.path.splitext(path)[1]
assert tail == ext, ("Tail does not match %s" % ext)
instance.data.update({
"path": path,
"outputDir": os.path.dirname(path),
"ext": ext, # todo: should be redundant?
# Backwards compatibility: embed tool in instance.data
"tool": tool
})
# Add tool itself as member
instance.append(tool)
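The reworked collector above picks the instance frame range based on the creator's frame_range_source attribute. A self-contained sketch of the three branches, using invented context values only to show the resulting range and handles (an illustration of the logic, not the plugin itself):

def resolve_frame_range(source, context):
    """Return (start, end, handle_start, handle_end) for a frame range source."""
    if source == "render_range":
        # Saver render range only, published without handles
        return context["renderFrameStart"], context["renderFrameEnd"], 0, 0
    if source == "comp_range":
        # Render range is the publish range, the comp range supplies the handles
        handle_start = context["renderFrameStart"] - context["compFrameStart"]
        handle_end = context["compFrameEnd"] - context["renderFrameEnd"]
        return (context["renderFrameStart"], context["renderFrameEnd"],
                handle_start, handle_end)
    # Default: asset frame range from the database with its own handles
    return (context["frameStart"], context["frameEnd"],
            context["handleStart"], context["handleEnd"])

context = {
    "frameStart": 1001, "frameEnd": 1100, "handleStart": 10, "handleEnd": 10,
    "renderFrameStart": 1001, "renderFrameEnd": 1100,
    "compFrameStart": 991, "compFrameEnd": 1110,
}
print(resolve_frame_range("comp_range", context))  # (1001, 1100, 10, 10)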

View file

@ -0,0 +1,209 @@
import os
import attr
import pyblish.api
from openpype.pipeline import publish
from openpype.pipeline.publish import RenderInstance
from openpype.hosts.fusion.api.lib import get_frame_path
@attr.s
class FusionRenderInstance(RenderInstance):
# extend generic, composition name is needed
fps = attr.ib(default=None)
projectEntity = attr.ib(default=None)
stagingDir = attr.ib(default=None)
app_version = attr.ib(default=None)
tool = attr.ib(default=None)
workfileComp = attr.ib(default=None)
publish_attributes = attr.ib(default={})
frameStartHandle = attr.ib(default=None)
frameEndHandle = attr.ib(default=None)
class CollectFusionRender(
publish.AbstractCollectRender,
publish.ColormanagedPyblishPluginMixin
):
order = pyblish.api.CollectorOrder + 0.09
label = "Collect Fusion Render"
hosts = ["fusion"]
def get_instances(self, context):
comp = context.data.get("currentComp")
comp_frame_format_prefs = comp.GetPrefs("Comp.FrameFormat")
aspect_x = comp_frame_format_prefs["AspectX"]
aspect_y = comp_frame_format_prefs["AspectY"]
instances = []
instances_to_remove = []
current_file = context.data["currentFile"]
version = context.data["version"]
project_entity = context.data["projectEntity"]
for inst in context:
if not inst.data.get("active", True):
continue
family = inst.data["family"]
if family != "render":
continue
task_name = context.data["task"]
tool = inst.data["transientData"]["tool"]
instance_families = inst.data.get("families", [])
subset_name = inst.data["subset"]
instance = FusionRenderInstance(
family="render",
tool=tool,
workfileComp=comp,
families=instance_families,
version=version,
time="",
source=current_file,
label=inst.data["label"],
subset=subset_name,
asset=inst.data["asset"],
task=task_name,
attachTo=False,
setMembers='',
publish=True,
name=subset_name,
resolutionWidth=comp_frame_format_prefs.get("Width"),
resolutionHeight=comp_frame_format_prefs.get("Height"),
pixelAspect=aspect_x / aspect_y,
tileRendering=False,
tilesX=0,
tilesY=0,
review="review" in instance_families,
frameStart=inst.data["frameStart"],
frameEnd=inst.data["frameEnd"],
handleStart=inst.data["handleStart"],
handleEnd=inst.data["handleEnd"],
frameStartHandle=inst.data["frameStartHandle"],
frameEndHandle=inst.data["frameEndHandle"],
frameStep=1,
fps=comp_frame_format_prefs.get("Rate"),
app_version=comp.GetApp().Version,
publish_attributes=inst.data.get("publish_attributes", {})
)
render_target = inst.data["creator_attributes"]["render_target"]
# Add render target family
render_target_family = f"render.{render_target}"
if render_target_family not in instance.families:
instance.families.append(render_target_family)
# Add render target specific data
if render_target in {"local", "frames"}:
instance.projectEntity = project_entity
if render_target == "farm":
fam = "render.farm"
if fam not in instance.families:
instance.families.append(fam)
instance.toBeRenderedOn = "deadline"
instance.farm = True # to skip integrate
if "review" in instance.families:
# to skip ExtractReview locally
instance.families.remove("review")
# add new instance to the list and remove the original
# instance since it is not needed anymore
instances.append(instance)
instances_to_remove.append(inst)
for instance in instances_to_remove:
context.remove(instance)
return instances
def post_collecting_action(self):
for instance in self._context:
if "render.frames" in instance.data.get("families", []):
# adding representation data to the instance
self._update_for_frames(instance)
def get_expected_files(self, render_instance):
"""
Returns list of rendered files that should be created by
Deadline. These are not published directly, they are source
for later 'submit_publish_job'.
Args:
render_instance (RenderInstance): to pull anatomy and parts used
in url
Returns:
(list) of absolute urls to rendered file
"""
start = render_instance.frameStart - render_instance.handleStart
end = render_instance.frameEnd + render_instance.handleEnd
path = (
render_instance.tool["Clip"]
[render_instance.workfileComp.TIME_UNDEFINED]
)
output_dir = os.path.dirname(path)
render_instance.outputDir = output_dir
basename = os.path.basename(path)
head, padding, ext = get_frame_path(basename)
expected_files = []
for frame in range(start, end + 1):
expected_files.append(
os.path.join(
output_dir,
f"{head}{str(frame).zfill(padding)}{ext}"
)
)
return expected_files
def _update_for_frames(self, instance):
"""Updating instance for render.frames family
Adding representation data to the instance. Also setting
colorspaceData to the representation based on file rules.
"""
expected_files = instance.data["expectedFiles"]
start = instance.data["frameStart"] - instance.data["handleStart"]
path = expected_files[0]
basename = os.path.basename(path)
staging_dir = os.path.dirname(path)
_, padding, ext = get_frame_path(basename)
repre = {
"name": ext[1:],
"ext": ext[1:],
"frameStart": f"%0{padding}d" % start,
"files": [os.path.basename(f) for f in expected_files],
"stagingDir": staging_dir,
}
self.set_representation_colorspace(
representation=repre,
context=instance.context,
)
# review representation
if instance.data.get("review", False):
repre["tags"] = ["review"]
# add the repre to the instance
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(repre)
return instance
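get_expected_files above derives one absolute path per frame from the saver's clip path; judging from its usage, get_frame_path returns a (head, padding, ext) triple where padding is the number of frame digits. A hedged, self-contained sketch of the same naming logic using only the standard library (the path and padding below are invented for the example):

import os

def expected_frame_files(clip_path, start, end, padding=4):
    """List the per-frame files a saver writing to `clip_path` should produce."""
    output_dir = os.path.dirname(clip_path)
    head, ext = os.path.splitext(os.path.basename(clip_path))
    # Strip the frame digits from the tail, e.g. "beauty.0000" -> "beauty."
    head = head.rstrip("0123456789")
    return [
        os.path.join(output_dir, f"{head}{frame:0{padding}d}{ext}")
        for frame in range(start, end + 1)
    ]

print(expected_frame_files("/renders/shot010/beauty.0000.exr", 1001, 1003))
# ['/renders/shot010/beauty.1001.exr', ..., '/renders/shot010/beauty.1003.exr']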

View file

@ -1,25 +0,0 @@
import pyblish.api
class CollectFusionRenders(pyblish.api.InstancePlugin):
"""Collect current saver node's render Mode
Options:
local (Render locally)
frames (Use existing frames)
"""
order = pyblish.api.CollectorOrder + 0.4
label = "Collect Renders"
hosts = ["fusion"]
families = ["render"]
def process(self, instance):
render_target = instance.data["render_target"]
family = instance.data["family"]
# add targeted family to families
instance.data["families"].append(
"{}.{}".format(family, render_target)
)

View file

@ -1,8 +1,12 @@
import os
import logging
import contextlib
import collections
import pyblish.api
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
from openpype.pipeline import publish
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
from openpype.hosts.fusion.api.lib import get_frame_path, maintained_comp_range
log = logging.getLogger(__name__)
@ -38,7 +42,10 @@ def enabled_savers(comp, savers):
saver.SetAttrs({"TOOLB_PassThrough": original_state})
class FusionRenderLocal(pyblish.api.InstancePlugin):
class FusionRenderLocal(
pyblish.api.InstancePlugin,
publish.ColormanagedPyblishPluginMixin
):
"""Render the current Fusion composition locally."""
order = pyblish.api.ExtractorOrder - 0.2
@ -46,11 +53,16 @@ class FusionRenderLocal(pyblish.api.InstancePlugin):
hosts = ["fusion"]
families = ["render.local"]
is_rendered_key = "_fusionrenderlocal_has_rendered"
def process(self, instance):
context = instance.context
# Start render
self.render_once(context)
result = self.render(instance)
if result is False:
raise RuntimeError(f"Comp render failed for {instance}")
self._add_representation(instance)
# Log render status
self.log.info(
@ -61,39 +73,48 @@ class FusionRenderLocal(pyblish.api.InstancePlugin):
)
)
def render_once(self, context):
"""Render context comp only once, even with more render instances"""
def render(self, instance):
"""Render instance.
# This plug-in assumes all render nodes get rendered at the same time
# to speed up the rendering. The check below makes sure that we only
# execute the rendering once and not for each instance.
key = f"__hasRun{self.__class__.__name__}"
We try to render the minimal amount of times by combining the instances
that have a matching frame range in one Fusion render. Then for the
batch of instances we store whether the render succeeded or failed.
savers_to_render = [
# Get the saver tool from the instance
instance[0] for instance in context if
# Only active instances
instance.data.get("publish", True) and
# Only render.local instances
"render.local" in instance.data["families"]
]
"""
if key not in context.data:
# We initialize as false to indicate it wasn't successful yet
# so we can keep track of whether Fusion succeeded
context.data[key] = False
if self.is_rendered_key in instance.data:
# This instance was already processed in batch with another
# instance, so we just return the render result directly
self.log.debug(f"Instance {instance} was already rendered")
return instance.data[self.is_rendered_key]
current_comp = context.data["currentComp"]
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
instances_by_frame_range = self.get_render_instances_by_frame_range(
instance.context
)
self.log.info("Starting Fusion render")
self.log.info(f"Start frame: {frame_start}")
self.log.info(f"End frame: {frame_end}")
saver_names = ", ".join(saver.Name for saver in savers_to_render)
self.log.info(f"Rendering tools: {saver_names}")
# Render matching batch of instances that share the same frame range
frame_range = self.get_instance_render_frame_range(instance)
render_instances = instances_by_frame_range[frame_range]
with comp_lock_and_undo_chunk(current_comp):
# We initialize render state false to indicate it wasn't successful
# yet to keep track of whether Fusion succeeded. This is for cases
# where an error below this might cause the comp render result not
# to be stored for the instances of this batch
for render_instance in render_instances:
render_instance.data[self.is_rendered_key] = False
savers_to_render = [inst.data["tool"] for inst in render_instances]
current_comp = instance.context.data["currentComp"]
frame_start, frame_end = frame_range
self.log.info(
f"Starting Fusion render frame range {frame_start}-{frame_end}"
)
saver_names = ", ".join(saver.Name for saver in savers_to_render)
self.log.info(f"Rendering tools: {saver_names}")
with comp_lock_and_undo_chunk(current_comp):
with maintained_comp_range(current_comp):
with enabled_savers(current_comp, savers_to_render):
result = current_comp.Render(
{
@ -103,7 +124,76 @@ class FusionRenderLocal(pyblish.api.InstancePlugin):
}
)
context.data[key] = bool(result)
# Store the render state for all the rendered instances
for render_instance in render_instances:
render_instance.data[self.is_rendered_key] = bool(result)
if context.data[key] is False:
raise RuntimeError("Comp render failed")
return result
def _add_representation(self, instance):
"""Add representation to instance"""
expected_files = instance.data["expectedFiles"]
start = instance.data["frameStart"] - instance.data["handleStart"]
path = expected_files[0]
_, padding, ext = get_frame_path(path)
staging_dir = os.path.dirname(path)
repre = {
"name": ext[1:],
"ext": ext[1:],
"frameStart": f"%0{padding}d" % start,
"files": [os.path.basename(f) for f in expected_files],
"stagingDir": staging_dir,
}
self.set_representation_colorspace(
representation=repre,
context=instance.context,
)
# review representation
if instance.data.get("review", False):
repre["tags"] = ["review"]
# add the repre to the instance
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(repre)
return instance
def get_render_instances_by_frame_range(self, context):
"""Return enabled render.local instances grouped by their frame range.
Arguments:
context (pyblish.Context): The pyblish context
Returns:
dict: (start, end): instances mapping
"""
instances_to_render = [
instance for instance in context if
# Only active instances
instance.data.get("publish", True) and
# Only render.local instances
"render.local" in instance.data.get("families", [])
]
# Instances by frame ranges
instances_by_frame_range = collections.defaultdict(list)
for instance in instances_to_render:
start, end = self.get_instance_render_frame_range(instance)
instances_by_frame_range[(start, end)].append(instance)
return dict(instances_by_frame_range)
def get_instance_render_frame_range(self, instance):
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
return start, end
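The render method above batches all active render.local instances that share a frame range so the comp is rendered once per unique range rather than once per instance. A minimal standalone sketch of that grouping step, with plain dictionaries standing in for pyblish instances:

import collections

instances = [
    {"name": "saverMain", "frameStartHandle": 1001, "frameEndHandle": 1100},
    {"name": "saverCrypto", "frameStartHandle": 1001, "frameEndHandle": 1100},
    {"name": "saverPreview", "frameStartHandle": 1001, "frameEndHandle": 1010},
]

by_range = collections.defaultdict(list)
for inst in instances:
    key = (inst["frameStartHandle"], inst["frameEndHandle"])
    by_range[key].append(inst["name"])

for (start, end), names in by_range.items():
    # One Fusion render per unique range, with only these savers enabled
    print(f"Render {start}-{end}: {', '.join(names)}")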

View file

@ -17,5 +17,5 @@ class FusionSaveComp(pyblish.api.ContextPlugin):
current = comp.GetAttrs().get("COMPS_FileName", "")
assert context.data['currentFile'] == current
self.log.info("Saving current file..")
self.log.info("Saving current file: {}".format(current))
comp.Save()

View file

@ -21,7 +21,7 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
tool = instance[0]
tool = instance.data["tool"]
create_dir = tool.GetInput("CreateDir")
if create_dir == 0.0:
cls.log.error(

View file

@ -14,7 +14,7 @@ class ValidateLocalFramesExistence(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Expected Frames Exists"
families = ["render"]
families = ["render.frames"]
hosts = ["fusion"]
actions = [RepairAction, SelectInvalidAction]
@ -23,31 +23,20 @@ class ValidateLocalFramesExistence(pyblish.api.InstancePlugin):
if non_existing_frames is None:
non_existing_frames = []
if instance.data.get("render_target") == "frames":
tool = instance[0]
tool = instance.data["tool"]
frame_start = instance.data["frameStart"]
frame_end = instance.data["frameEnd"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
expected_files = instance.data["expectedFiles"]
basename = os.path.basename(path)
head, ext = os.path.splitext(basename)
files = [
f"{head}{str(frame).zfill(4)}{ext}"
for frame in range(frame_start, frame_end + 1)
]
for file in expected_files:
if not os.path.exists(file):
cls.log.error(
f"Missing file: {file}"
)
non_existing_frames.append(file)
for file in files:
if not os.path.exists(os.path.join(output_dir, file)):
cls.log.error(
f"Missing file: {os.path.join(output_dir, file)}"
)
non_existing_frames.append(file)
if len(non_existing_frames) > 0:
cls.log.error(f"Some of {tool.Name}'s files does not exist")
return [tool]
if len(non_existing_frames) > 0:
cls.log.error(f"Some of {tool.Name}'s files does not exist")
return [tool]
def process(self, instance):
non_existing_frames = []
@ -67,8 +56,7 @@ class ValidateLocalFramesExistence(pyblish.api.InstancePlugin):
def repair(cls, instance):
invalid = cls.get_invalid(instance)
if invalid:
tool = invalid[0]
tool = instance.data["tool"]
# Change render target to local to render locally
tool.SetData("openpype.creator_attributes.render_target", "local")

View file

@ -30,11 +30,11 @@ class ValidateFilenameHasExtension(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
path = instance.data["path"]
path = instance.data["expectedFiles"][0]
fname, ext = os.path.splitext(path)
if not ext:
tool = instance[0]
tool = instance.data["tool"]
cls.log.error("%s has no extension specified" % tool.Name)
return [tool]

View file

@ -0,0 +1,41 @@
import pyblish.api
from openpype.pipeline import PublishValidationError
class ValidateInstanceFrameRange(pyblish.api.InstancePlugin):
"""Validate instance frame range is within comp's global render range."""
order = pyblish.api.ValidatorOrder
label = "Validate Filename Has Extension"
families = ["render"]
hosts = ["fusion"]
def process(self, instance):
context = instance.context
global_start = context.data["compFrameStart"]
global_end = context.data["compFrameEnd"]
render_start = instance.data["frameStartHandle"]
render_end = instance.data["frameEndHandle"]
if render_start < global_start or render_end > global_end:
message = (
f"Instance {instance} render frame range "
f"({render_start}-{render_end}) is outside of the comp's "
f"global render range ({global_start}-{global_end}) and thus "
f"can't be rendered. "
)
description = (
f"{message}\n\n"
f"Either update the comp's global range or the instance's "
f"frame range to ensure the comp's frame range includes the "
f"to render frame range for the instance."
)
raise PublishValidationError(
title="Frame range outside of comp range",
message=message,
description=description
)
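The validation itself boils down to a containment test of the instance range inside the comp's global range; as a tiny illustration with made-up frame numbers:

def range_is_renderable(comp_start, comp_end, render_start, render_end):
    """True when the instance frame range lies fully inside the comp's range."""
    return comp_start <= render_start and render_end <= comp_end

assert range_is_renderable(991, 1110, 1001, 1100)
assert not range_is_renderable(1001, 1100, 995, 1100)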

View file

@ -20,7 +20,7 @@ class ValidateSaverHasInput(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
saver = instance[0]
saver = instance.data["tool"]
if not saver.Input.GetConnectedOutput():
return [saver]

View file

@ -37,7 +37,7 @@ class ValidateSaverPassthrough(pyblish.api.ContextPlugin):
def is_invalid(self, instance):
saver = instance[0]
saver = instance.data["tool"]
attr = saver.GetAttrs()
active = not attr["TOOLB_PassThrough"]

View file

@ -8,7 +8,6 @@ import pyblish.api
from openpype.hosts.houdini.api import lib
class CollectFrames(pyblish.api.InstancePlugin):
"""Collect all frames which would be saved from the ROP nodes"""
@ -34,8 +33,10 @@ class CollectFrames(pyblish.api.InstancePlugin):
self.log.warning("Using current frame: {}".format(hou.frame()))
output = output_parm.eval()
_, ext = lib.splitext(output,
allowed_multidot_extensions=[".ass.gz"])
_, ext = lib.splitext(
output,
allowed_multidot_extensions=[".ass.gz"]
)
file_name = os.path.basename(output)
result = file_name

View file

@ -117,4 +117,4 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
inputs = [c["representation"] for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)
self.log.debug("Collected inputs: %s" % inputs)

View file

@ -55,7 +55,9 @@ class CollectInstances(pyblish.api.ContextPlugin):
has_family = node.evalParm("family")
assert has_family, "'%s' is missing 'family'" % node.name()
self.log.info("processing {}".format(node))
self.log.info(
"Processing legacy instance node {}".format(node.path())
)
data = lib.read(node)
# Check bypass state and reverse

View file

@ -32,5 +32,4 @@ class CollectWorkfile(pyblish.api.InstancePlugin):
"stagingDir": folder,
}]
self.log.info('Collected instance: {}'.format(file))
self.log.info('staging Dir: {}'.format(folder))
self.log.debug('Collected workfile instance: {}'.format(file))

View file

@ -20,7 +20,7 @@ class SaveCurrentScene(pyblish.api.ContextPlugin):
)
if host.has_unsaved_changes():
self.log.info("Saving current file {}...".format(current_file))
self.log.info("Saving current file: {}".format(current_file))
host.save_workfile(current_file)
else:
self.log.debug("No unsaved changes, skipping file save..")

View file

@ -28,18 +28,37 @@ class ValidateWorkfilePaths(
if not self.is_active(instance.data):
return
invalid = self.get_invalid()
self.log.info(
"node types to check: {}".format(", ".join(self.node_types)))
self.log.info(
"prohibited vars: {}".format(", ".join(self.prohibited_vars))
self.log.debug(
"Checking node types: {}".format(", ".join(self.node_types)))
self.log.debug(
"Searching prohibited vars: {}".format(
", ".join(self.prohibited_vars)
)
)
if invalid:
for param in invalid:
self.log.error(
"{}: {}".format(param.path(), param.unexpandedString()))
raise PublishValidationError(
"Invalid paths found", title=self.label)
if invalid:
all_container_vars = set()
for param in invalid:
value = param.unexpandedString()
contained_vars = [
var for var in self.prohibited_vars
if var in value
]
all_container_vars.update(contained_vars)
self.log.error(
"Parm {} contains prohibited vars {}: {}".format(
param.path(),
", ".join(contained_vars),
value)
)
message = (
"Prohibited vars {} found in parameter values".format(
", ".join(all_container_vars)
)
)
raise PublishValidationError(message, title=self.label)
@classmethod
def get_invalid(cls):
@ -63,7 +82,7 @@ class ValidateWorkfilePaths(
def repair(cls, instance):
invalid = cls.get_invalid()
for param in invalid:
cls.log.info("processing: {}".format(param.path()))
cls.log.info("Processing: {}".format(param.path()))
cls.log.info("Replacing {} for {}".format(
param.unexpandedString(),
hou.text.expandString(param.unexpandedString())))
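The reworked report above lists, per parameter, which of the configured prohibited variables appear in its unexpanded value. A standalone sketch of that scan; the variable names and parameter values below are made up for illustration:

prohibited_vars = ["$HIP", "$JOB"]  # hypothetical settings value
parm_values = {
    "/out/geometry1/sopoutput": "$HIP/geo/$OS.bgeo.sc",
    "/out/karma1/picture": "/projects/show/renders/$OS.$F4.exr",
}

for path, value in parm_values.items():
    contained = [var for var in prohibited_vars if var in value]
    if contained:
        print("Parm {} contains prohibited vars {}: {}".format(
            path, ", ".join(contained), value))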

View file

@ -245,11 +245,15 @@ def reset_frame_range(fps: bool = True):
fps_number = float(data_fps["data"]["fps"])
rt.frameRate = fps_number
frame_range = get_frame_range()
frame_start = frame_range["frameStart"] - int(frame_range["handleStart"])
frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"])
frange_cmd = f"animationRange = interval {frame_start} {frame_end}"
frame_start_handle = frame_range["frameStart"] - int(
frame_range["handleStart"]
)
frame_end_handle = frame_range["frameEnd"] + int(frame_range["handleEnd"])
frange_cmd = (
f"animationRange = interval {frame_start_handle} {frame_end_handle}"
)
rt.execute(frange_cmd)
set_render_frame_range(frame_start, frame_end)
set_render_frame_range(frame_start_handle, frame_end_handle)
def set_context_setting():

View file

@ -20,28 +20,25 @@ class FbxLoader(load.LoaderPlugin):
from pymxs import runtime as rt
filepath = os.path.normpath(self.fname)
rt.FBXImporterSetParam("Animation", True)
rt.FBXImporterSetParam("Camera", True)
rt.FBXImporterSetParam("AxisConversionMethod", True)
rt.FBXImporterSetParam("Preserveinstances", True)
rt.importFile(
filepath,
rt.name("noPrompt"),
using=rt.FBXIMP)
fbx_import_cmd = (
f"""
container = rt.getNodeByName(f"{name}")
if not container:
container = rt.container()
container.name = f"{name}"
FBXImporterSetParam "Animation" true
FBXImporterSetParam "Cameras" true
FBXImporterSetParam "AxisConversionMethod" true
FbxExporterSetParam "UpAxis" "Y"
FbxExporterSetParam "Preserveinstances" true
importFile @"{filepath}" #noPrompt using:FBXIMP
""")
self.log.debug(f"Executing command: {fbx_import_cmd}")
rt.execute(fbx_import_cmd)
container_name = f"{name}_CON"
asset = rt.getNodeByName(f"{name}")
for selection in rt.getCurrentSelection():
selection.Parent = container
return containerise(
name, [asset], context, loader=self.__class__.__name__)
name, [container], context, loader=self.__class__.__name__)
def update(self, container, representation):
from pymxs import runtime as rt

View file

@ -21,26 +21,24 @@ class FbxModelLoader(load.LoaderPlugin):
from pymxs import runtime as rt
filepath = os.path.normpath(self.fname)
rt.FBXImporterSetParam("Animation", False)
rt.FBXImporterSetParam("Cameras", False)
rt.FBXImporterSetParam("Preserveinstances", True)
rt.importFile(
filepath,
rt.name("noPrompt"),
using=rt.FBXIMP)
fbx_import_cmd = (
f"""
container = rt.getNodeByName(f"{name}")
if not container:
container = rt.container()
container.name = f"{name}"
FBXImporterSetParam "Animation" false
FBXImporterSetParam "Cameras" false
FBXImporterSetParam "AxisConversionMethod" true
FbxExporterSetParam "UpAxis" "Y"
FbxExporterSetParam "Preserveinstances" true
importFile @"{filepath}" #noPrompt using:FBXIMP
""")
self.log.debug(f"Executing command: {fbx_import_cmd}")
rt.execute(fbx_import_cmd)
asset = rt.getNodeByName(f"{name}")
for selection in rt.getCurrentSelection():
selection.Parent = container
return containerise(
name, [asset], context, loader=self.__class__.__name__)
name, [container], context, loader=self.__class__.__name__)
def update(self, container, representation):
from pymxs import runtime as rt

View file

@ -1,18 +1,11 @@
import os
import pyblish.api
from openpype.pipeline import (
publish,
OptionalPyblishPluginMixin
)
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import (
maintained_selection,
get_all_children
)
from openpype.hosts.max.api import maintained_selection, get_all_children
class ExtractCameraAlembic(publish.Extractor,
OptionalPyblishPluginMixin):
class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
"""
Extract Camera with AlembicExport
"""
@ -38,38 +31,33 @@ class ExtractCameraAlembic(publish.Extractor,
path = os.path.join(stagingdir, filename)
# We run the render
self.log.info("Writing alembic '%s' to '%s'" % (filename,
stagingdir))
self.log.info("Writing alembic '%s' to '%s'" % (filename, stagingdir))
export_cmd = (
f"""
AlembicExport.ArchiveType = #ogawa
AlembicExport.CoordinateSystem = #maya
AlembicExport.StartFrame = {start}
AlembicExport.EndFrame = {end}
AlembicExport.CustomAttributes = true
exportFile @"{path}" #noPrompt selectedOnly:on using:AlembicExport
""")
self.log.debug(f"Executing command: {export_cmd}")
rt.AlembicExport.ArchiveType = rt.name("ogawa")
rt.AlembicExport.CoordinateSystem = rt.name("maya")
rt.AlembicExport.StartFrame = start
rt.AlembicExport.EndFrame = end
rt.AlembicExport.CustomAttributes = True
with maintained_selection():
# select and export
rt.select(get_all_children(rt.getNodeByName(container)))
rt.execute(export_cmd)
rt.exportFile(
path,
rt.name("noPrompt"),
selectedOnly=True,
using=rt.AlembicExport,
)
self.log.info("Performing Extraction ...")
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'abc',
'ext': 'abc',
'files': filename,
"name": "abc",
"ext": "abc",
"files": filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s" % (instance.name,
path))
self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))

View file

@ -1,18 +1,11 @@
import os
import pyblish.api
from openpype.pipeline import (
publish,
OptionalPyblishPluginMixin
)
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import (
maintained_selection,
get_all_children
)
from openpype.hosts.max.api import maintained_selection, get_all_children
class ExtractCameraFbx(publish.Extractor,
OptionalPyblishPluginMixin):
class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
"""
Extract Camera with FbxExporter
"""
@ -33,43 +26,35 @@ class ExtractCameraFbx(publish.Extractor,
filename = "{name}.fbx".format(**instance.data)
filepath = os.path.join(stagingdir, filename)
self.log.info("Writing fbx file '%s' to '%s'" % (filename,
filepath))
self.log.info("Writing fbx file '%s' to '%s'" % (filename, filepath))
# Need to export:
# Animation = True
# Cameras = True
# AxisConversionMethod
fbx_export_cmd = (
f"""
FBXExporterSetParam "Animation" true
FBXExporterSetParam "Cameras" true
FBXExporterSetParam "AxisConversionMethod" "Animation"
FbxExporterSetParam "UpAxis" "Y"
FbxExporterSetParam "Preserveinstances" true
exportFile @"{filepath}" #noPrompt selectedOnly:true using:FBXEXP
""")
self.log.debug(f"Executing command: {fbx_export_cmd}")
rt.FBXExporterSetParam("Animation", True)
rt.FBXExporterSetParam("Cameras", True)
rt.FBXExporterSetParam("AxisConversionMethod", "Animation")
rt.FBXExporterSetParam("UpAxis", "Y")
rt.FBXExporterSetParam("Preserveinstances", True)
with maintained_selection():
# select and export
rt.select(get_all_children(rt.getNodeByName(container)))
rt.execute(fbx_export_cmd)
rt.exportFile(
filepath,
rt.name("noPrompt"),
selectedOnly=True,
using=rt.FBXEXP,
)
self.log.info("Performing Extraction ...")
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'fbx',
'ext': 'fbx',
'files': filename,
"name": "fbx",
"ext": "fbx",
"files": filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s" % (instance.name,
filepath))
self.log.info(
"Extracted instance '%s' to: %s" % (instance.name, filepath)
)

View file

@ -1,18 +1,11 @@
import os
import pyblish.api
from openpype.pipeline import (
publish,
OptionalPyblishPluginMixin
)
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import (
maintained_selection,
get_all_children
)
from openpype.hosts.max.api import get_all_children
class ExtractMaxSceneRaw(publish.Extractor,
OptionalPyblishPluginMixin):
class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin):
"""
Extract Raw Max Scene with SaveSelected
"""
@ -20,9 +13,7 @@ class ExtractMaxSceneRaw(publish.Extractor,
order = pyblish.api.ExtractorOrder - 0.2
label = "Extract Max Scene (Raw)"
hosts = ["max"]
families = ["camera",
"maxScene",
"model"]
families = ["camera", "maxScene", "model"]
optional = True
def process(self, instance):
@ -37,26 +28,23 @@ class ExtractMaxSceneRaw(publish.Extractor,
filename = "{name}.max".format(**instance.data)
max_path = os.path.join(stagingdir, filename)
self.log.info("Writing max file '%s' to '%s'" % (filename,
max_path))
self.log.info("Writing max file '%s' to '%s'" % (filename, max_path))
if "representations" not in instance.data:
instance.data["representations"] = []
# saving max scene
with maintained_selection():
# need to figure out how to select the camera
rt.select(get_all_children(rt.getNodeByName(container)))
rt.execute(f'saveNodes selection "{max_path}" quiet:true')
nodes = get_all_children(rt.getNodeByName(container))
rt.saveNodes(nodes, max_path, quiet=True)
self.log.info("Performing Extraction ...")
representation = {
'name': 'max',
'ext': 'max',
'files': filename,
"name": "max",
"ext": "max",
"files": filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s" % (instance.name,
max_path))
self.log.info(
"Extracted instance '%s' to: %s" % (instance.name, max_path)
)

View file

@ -1,18 +1,11 @@
import os
import pyblish.api
from openpype.pipeline import (
publish,
OptionalPyblishPluginMixin
)
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import (
maintained_selection,
get_all_children
)
from openpype.hosts.max.api import maintained_selection, get_all_children
class ExtractModel(publish.Extractor,
OptionalPyblishPluginMixin):
class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin):
"""
Extract Geometry in Alembic Format
"""
@ -36,39 +29,36 @@ class ExtractModel(publish.Extractor,
filepath = os.path.join(stagingdir, filename)
# We run the render
self.log.info("Writing alembic '%s' to '%s'" % (filename,
stagingdir))
self.log.info("Writing alembic '%s' to '%s'" % (filename, stagingdir))
export_cmd = (
f"""
AlembicExport.ArchiveType = #ogawa
AlembicExport.CoordinateSystem = #maya
AlembicExport.CustomAttributes = true
AlembicExport.UVs = true
AlembicExport.VertexColors = true
AlembicExport.PreserveInstances = true
exportFile @"{filepath}" #noPrompt selectedOnly:on using:AlembicExport
""")
self.log.debug(f"Executing command: {export_cmd}")
rt.AlembicExport.ArchiveType = rt.name("ogawa")
rt.AlembicExport.CoordinateSystem = rt.name("maya")
rt.AlembicExport.CustomAttributes = True
rt.AlembicExport.UVs = True
rt.AlembicExport.VertexColors = True
rt.AlembicExport.PreserveInstances = True
with maintained_selection():
# select and export
rt.select(get_all_children(rt.getNodeByName(container)))
rt.execute(export_cmd)
rt.exportFile(
filepath,
rt.name("noPrompt"),
selectedOnly=True,
using=rt.AlembicExport,
)
self.log.info("Performing Extraction ...")
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'abc',
'ext': 'abc',
'files': filename,
"name": "abc",
"ext": "abc",
"files": filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s" % (instance.name,
filepath))
self.log.info(
"Extracted instance '%s' to: %s" % (instance.name, filepath)
)

View file

@ -1,18 +1,11 @@
import os
import pyblish.api
from openpype.pipeline import (
publish,
OptionalPyblishPluginMixin
)
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import (
maintained_selection,
get_all_children
)
from openpype.hosts.max.api import maintained_selection, get_all_children
class ExtractModelFbx(publish.Extractor,
OptionalPyblishPluginMixin):
class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
"""
Extract Geometry in FBX Format
"""
@ -33,42 +26,38 @@ class ExtractModelFbx(publish.Extractor,
stagingdir = self.staging_dir(instance)
filename = "{name}.fbx".format(**instance.data)
filepath = os.path.join(stagingdir,
filename)
self.log.info("Writing FBX '%s' to '%s'" % (filepath,
stagingdir))
filepath = os.path.join(stagingdir, filename)
self.log.info("Writing FBX '%s' to '%s'" % (filepath, stagingdir))
export_fbx_cmd = (
f"""
FBXExporterSetParam "Animation" false
FBXExporterSetParam "Cameras" false
FBXExporterSetParam "Lights" false
FBXExporterSetParam "PointCache" false
FBXExporterSetParam "AxisConversionMethod" "Animation"
FbxExporterSetParam "UpAxis" "Y"
FbxExporterSetParam "Preserveinstances" true
exportFile @"{filepath}" #noPrompt selectedOnly:true using:FBXEXP
""")
self.log.debug(f"Executing command: {export_fbx_cmd}")
rt.FBXExporterSetParam("Animation", False)
rt.FBXExporterSetParam("Cameras", False)
rt.FBXExporterSetParam("Lights", False)
rt.FBXExporterSetParam("PointCache", False)
rt.FBXExporterSetParam("AxisConversionMethod", "Animation")
rt.FBXExporterSetParam("UpAxis", "Y")
rt.FBXExporterSetParam("Preserveinstances", True)
with maintained_selection():
# select and export
rt.select(get_all_children(rt.getNodeByName(container)))
rt.execute(export_fbx_cmd)
rt.exportFile(
filepath,
rt.name("noPrompt"),
selectedOnly=True,
using=rt.FBXEXP,
)
self.log.info("Performing Extraction ...")
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'fbx',
'ext': 'fbx',
'files': filename,
"name": "fbx",
"ext": "fbx",
"files": filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s" % (instance.name,
filepath))
self.log.info(
"Extracted instance '%s' to: %s" % (instance.name, filepath)
)

View file

@ -1,18 +1,11 @@
import os
import pyblish.api
from openpype.pipeline import (
publish,
OptionalPyblishPluginMixin
)
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import (
maintained_selection,
get_all_children
)
from openpype.hosts.max.api import maintained_selection, get_all_children
class ExtractModelObj(publish.Extractor,
OptionalPyblishPluginMixin):
class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin):
"""
Extract Geometry in OBJ Format
"""
@ -33,27 +26,31 @@ class ExtractModelObj(publish.Extractor,
stagingdir = self.staging_dir(instance)
filename = "{name}.obj".format(**instance.data)
filepath = os.path.join(stagingdir,
filename)
self.log.info("Writing OBJ '%s' to '%s'" % (filepath,
stagingdir))
filepath = os.path.join(stagingdir, filename)
self.log.info("Writing OBJ '%s' to '%s'" % (filepath, stagingdir))
with maintained_selection():
# select and export
rt.select(get_all_children(rt.getNodeByName(container)))
rt.execute(f'exportFile @"{filepath}" #noPrompt selectedOnly:true using:ObjExp') # noqa
rt.exportFile(
filepath,
rt.name("noPrompt"),
selectedOnly=True,
using=rt.ObjExp,
)
self.log.info("Performing Extraction ...")
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'obj',
'ext': 'obj',
'files': filename,
"name": "obj",
"ext": "obj",
"files": filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s" % (instance.name,
filepath))
self.log.info(
"Extracted instance '%s' to: %s" % (instance.name, filepath)
)

View file

@ -41,10 +41,7 @@ import os
import pyblish.api
from openpype.pipeline import publish
from pymxs import runtime as rt
from openpype.hosts.max.api import (
maintained_selection,
get_all_children
)
from openpype.hosts.max.api import maintained_selection, get_all_children
class ExtractAlembic(publish.Extractor):
@ -66,35 +63,30 @@ class ExtractAlembic(publish.Extractor):
path = os.path.join(parent_dir, file_name)
# We run the render
self.log.info("Writing alembic '%s' to '%s'" % (file_name,
parent_dir))
self.log.info("Writing alembic '%s' to '%s'" % (file_name, parent_dir))
abc_export_cmd = (
f"""
AlembicExport.ArchiveType = #ogawa
AlembicExport.CoordinateSystem = #maya
AlembicExport.StartFrame = {start}
AlembicExport.EndFrame = {end}
exportFile @"{path}" #noPrompt selectedOnly:on using:AlembicExport
""")
self.log.debug(f"Executing command: {abc_export_cmd}")
rt.AlembicExport.ArchiveType = rt.name("ogawa")
rt.AlembicExport.CoordinateSystem = rt.name("maya")
rt.AlembicExport.StartFrame = start
rt.AlembicExport.EndFrame = end
with maintained_selection():
# select and export
rt.select(get_all_children(rt.getNodeByName(container)))
rt.execute(abc_export_cmd)
rt.exportFile(
path,
rt.name("noPrompt"),
selectedOnly=True,
using=rt.AlembicExport,
)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'abc',
'ext': 'abc',
'files': file_name,
"name": "abc",
"ext": "abc",
"files": file_name,
"stagingDir": parent_dir,
}
instance.data["representations"].append(representation)

View file

@ -190,6 +190,44 @@ def maintained_selection():
cmds.select(clear=True)
def get_custom_namespace(custom_namespace):
"""Return unique namespace.
The input namespace can contain a single group
of '#' number tokens to indicate where the namespace's
unique index should go. The amount of tokens defines
the zero padding of the number, e.g ### turns into 001.
Warning: Note that a namespace will always be
prefixed with a _ if it starts with a digit
Example:
>>> get_custom_namespace("myspace_##_")
# myspace_01_
>>> get_custom_namespace("##_myspace")
# _01_myspace
>>> get_custom_namespace("myspace##")
# myspace01
"""
split = re.split("([#]+)", custom_namespace, 1)
if len(split) == 3:
base, padding, suffix = split
padding = "%0{}d".format(len(padding))
else:
base = split[0]
padding = "%02d" # default padding
suffix = ""
return unique_namespace(
base,
format=padding,
prefix="_" if not base or base[0].isdigit() else "",
suffix=suffix
)
def unique_namespace(namespace, format="%02d", prefix="", suffix=""):
"""Return unique namespace
@ -316,11 +354,13 @@ def collect_animation_data(fps=False):
# get scene values as defaults
frame_start = cmds.playbackOptions(query=True, minTime=True)
frame_end = cmds.playbackOptions(query=True, maxTime=True)
handle_start = cmds.playbackOptions(query=True, animationStartTime=True)
handle_end = cmds.playbackOptions(query=True, animationEndTime=True)
frame_start_handle = cmds.playbackOptions(
query=True, animationStartTime=True
)
frame_end_handle = cmds.playbackOptions(query=True, animationEndTime=True)
handle_start = frame_start - handle_start
handle_end = handle_end - frame_end
handle_start = frame_start - frame_start_handle
handle_end = frame_end_handle - frame_end
# build attributes
data = OrderedDict()
@ -3937,7 +3977,9 @@ def get_capture_preset(task_name, task_type, subset, project_settings, log):
return capture_preset or {}
def create_rig_animation_instance(nodes, context, namespace, log=None):
def create_rig_animation_instance(
nodes, context, namespace, options=None, log=None
):
"""Create an animation publish instance for loaded rigs.
See the RecreateRigAnimationInstance inventory action on how to use this
@ -3947,12 +3989,16 @@ def create_rig_animation_instance(nodes, context, namespace, log=None):
nodes (list): Member nodes of the rig instance.
context (dict): Representation context of the rig container
namespace (str): Namespace of the rig container
options (dict, optional): Additional loader data
log (logging.Logger, optional): Logger to log to if provided
Returns:
None
"""
if options is None:
options = {}
output = next((node for node in nodes if
node.endswith("out_SET")), None)
controls = next((node for node in nodes if
@ -3971,6 +4017,23 @@ def create_rig_animation_instance(nodes, context, namespace, log=None):
asset = legacy_io.Session["AVALON_ASSET"]
dependency = str(context["representation"]["_id"])
custom_subset = options.get("animationSubsetName")
if custom_subset:
formatting_data = {
"asset_name": context['asset']['name'],
"asset_type": context['asset']['type'],
"subset": context['subset']['name'],
"family": (
context['subset']['data'].get('family') or
context['subset']['data']['families'][0]
)
}
namespace = get_custom_namespace(
custom_subset.format(
**formatting_data
)
)
if log:
log.info("Creating subset: {}".format(namespace))

View file

@ -84,44 +84,6 @@ def get_reference_node_parents(ref):
return parents
def get_custom_namespace(custom_namespace):
"""Return unique namespace.
The input namespace can contain a single group
of '#' number tokens to indicate where the namespace's
unique index should go. The amount of tokens defines
the zero padding of the number, e.g ### turns into 001.
Warning: Note that a namespace will always be
prefixed with a _ if it starts with a digit
Example:
>>> get_custom_namespace("myspace_##_")
# myspace_01_
>>> get_custom_namespace("##_myspace")
# _01_myspace
>>> get_custom_namespace("myspace##")
# myspace01
"""
split = re.split("([#]+)", custom_namespace, 1)
if len(split) == 3:
base, padding, suffix = split
padding = "%0{}d".format(len(padding))
else:
base = split[0]
padding = "%02d" # default padding
suffix = ""
return lib.unique_namespace(
base,
format=padding,
prefix="_" if not base or base[0].isdigit() else "",
suffix=suffix
)
class Creator(LegacyCreator):
defaults = ['Main']
@ -216,7 +178,7 @@ class ReferenceLoader(Loader):
count = options.get("count") or 1
for c in range(0, count):
namespace = get_custom_namespace(custom_namespace)
namespace = lib.get_custom_namespace(custom_namespace)
group_name = "{}:{}".format(
namespace,
custom_group_name

View file

@ -223,7 +223,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
def _post_process_rig(self, name, namespace, context, options):
nodes = self[:]
create_rig_animation_instance(
nodes, context, namespace, log=self.log
nodes, context, namespace, options=options, log=self.log
)
def _lock_camera_transforms(self, nodes):

View file

@ -166,7 +166,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
inputs = [c["representation"] for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)
self.log.debug("Collected inputs: %s" % inputs)
def _collect_renderlayer_inputs(self, scene_containers, instance):
"""Collects inputs from nodes in renderlayer, incl. shaders + camera"""

View file

@ -31,5 +31,5 @@ class SaveCurrentScene(pyblish.api.ContextPlugin):
# remove lockfile before saving
if is_workfile_lock_enabled("maya", project_name, project_settings):
remove_workfile_lock(current)
self.log.info("Saving current file..")
self.log.info("Saving current file: {}".format(current))
cmds.file(save=True, force=True)

View file

@ -13,7 +13,6 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
invalid = list()
if not instance.data["setMembers"]:
objectset_name = instance.data['name']
@ -22,6 +21,10 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
return invalid
def process(self, instance):
# Allow renderlayer and workfile to be empty
skip_families = ["workfile", "renderlayer", "rendersetup"]
if instance.data.get("family") in skip_families:
return
invalid = self.get_invalid(instance)
if invalid:

View file

@ -2239,13 +2239,13 @@ class WorkfileSettings(object):
handle_end = data["handleEnd"]
fps = float(data["fps"])
frame_start = int(data["frameStart"]) - handle_start
frame_end = int(data["frameEnd"]) + handle_end
frame_start_handle = int(data["frameStart"]) - handle_start
frame_end_handle = int(data["frameEnd"]) + handle_end
self._root_node["lock_range"].setValue(False)
self._root_node["fps"].setValue(fps)
self._root_node["first_frame"].setValue(frame_start)
self._root_node["last_frame"].setValue(frame_end)
self._root_node["first_frame"].setValue(frame_start_handle)
self._root_node["last_frame"].setValue(frame_end_handle)
self._root_node["lock_range"].setValue(True)
# setting active viewers

View file

@ -151,6 +151,7 @@ class NukeHost(
def add_nuke_callbacks():
""" Adding all available nuke callbacks
"""
nuke_settings = get_current_project_settings()["nuke"]
workfile_settings = WorkfileSettings()
# Set context settings.
nuke.addOnCreate(
@ -169,7 +170,10 @@ def add_nuke_callbacks():
# # set apply all workfile settings on script load and save
nuke.addOnScriptLoad(WorkfileSettings().set_context_settings)
nuke.addFilenameFilter(dirmap_file_name_filter)
if nuke_settings["nuke-dirmap"]["enabled"]:
log.info("Added Nuke's dirmaping callback ...")
# Add dirmap for file paths.
nuke.addFilenameFilter(dirmap_file_name_filter)
log.info("Added Nuke callbacks ...")

View file

@ -133,11 +133,11 @@ class CollectNukeWrites(pyblish.api.InstancePlugin,
else:
representation['files'] = collected_frames
# inject colorspace data
self.set_representation_colorspace(
representation, instance.context,
colorspace=colorspace
)
# inject colorspace data
self.set_representation_colorspace(
representation, instance.context,
colorspace=colorspace
)
instance.data["representations"].append(representation)
self.log.info("Publishing rendered frames ...")

View file

@ -5,6 +5,8 @@ import pyblish.api
from openpype.pipeline import publish
from openpype.hosts.nuke import api as napi
from openpype.hosts.nuke.api.lib import set_node_knobs_from_settings
if sys.version_info[0] >= 3:
unicode = str
@ -28,7 +30,7 @@ class ExtractThumbnail(publish.Extractor):
bake_viewer_process = True
bake_viewer_input_process = True
nodes = {}
reposition_nodes = None
def process(self, instance):
if instance.data.get("farm"):
@ -123,18 +125,32 @@ class ExtractThumbnail(publish.Extractor):
temporary_nodes.append(rnode)
previous_node = rnode
reformat_node = nuke.createNode("Reformat")
ref_node = self.nodes.get("Reformat", None)
if ref_node:
for k, v in ref_node:
self.log.debug("k, v: {0}:{1}".format(k, v))
if isinstance(v, unicode):
v = str(v)
reformat_node[k].setValue(v)
if self.reposition_nodes is None:
# [deprecated] create reformat node old way
reformat_node = nuke.createNode("Reformat")
ref_node = self.nodes.get("Reformat", None)
if ref_node:
for k, v in ref_node:
self.log.debug("k, v: {0}:{1}".format(k, v))
if isinstance(v, unicode):
v = str(v)
reformat_node[k].setValue(v)
reformat_node.setInput(0, previous_node)
previous_node = reformat_node
temporary_nodes.append(reformat_node)
reformat_node.setInput(0, previous_node)
previous_node = reformat_node
temporary_nodes.append(reformat_node)
else:
# create reformat node new way
for repo_node in self.reposition_nodes:
node_class = repo_node["node_class"]
knobs = repo_node["knobs"]
node = nuke.createNode(node_class)
set_node_knobs_from_settings(node, knobs)
# connect in order
node.setInput(0, previous_node)
previous_node = node
temporary_nodes.append(node)
# only create colorspace baking if toggled on
if bake_viewer_process:

View file

@ -24,6 +24,8 @@ from .lib import (
get_project_manager,
get_current_project,
get_current_timeline,
get_any_timeline,
get_new_timeline,
create_bin,
get_media_pool_item,
create_media_pool_item,
@ -95,6 +97,8 @@ __all__ = [
"get_project_manager",
"get_current_project",
"get_current_timeline",
"get_any_timeline",
"get_new_timeline",
"create_bin",
"get_media_pool_item",
"create_media_pool_item",

View file

@ -15,6 +15,7 @@ log = Logger.get_logger(__name__)
self = sys.modules[__name__]
self.project_manager = None
self.media_storage = None
self.current_project = None
# OpenPype sequential rename variables
self.rename_index = 0
@ -85,22 +86,60 @@ def get_media_storage():
def get_current_project():
# initialize project manager
get_project_manager()
"""Get current project object.
"""
if not self.current_project:
self.current_project = get_project_manager().GetCurrentProject()
return self.project_manager.GetCurrentProject()
return self.current_project
def get_current_timeline(new=False):
# get current project
"""Get current timeline object.
Args:
new (bool)[optional]: [DEPRECATED] if True it will create
new timeline if none exists
Returns:
TODO: will need to reflect future `None`
object: resolve.Timeline
"""
project = get_current_project()
timeline = project.GetCurrentTimeline()
# return current timeline if any
if timeline:
return timeline
# TODO: [deprecated] and will be removed in future
if new:
media_pool = project.GetMediaPool()
new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name)
project.SetCurrentTimeline(new_timeline)
return get_new_timeline()
return project.GetCurrentTimeline()
def get_any_timeline():
"""Get any timeline object.
Returns:
object | None: resolve.Timeline
"""
project = get_current_project()
timeline_count = project.GetTimelineCount()
if timeline_count > 0:
return project.GetTimelineByIndex(1)
def get_new_timeline():
"""Get new timeline object.
Returns:
object: resolve.Timeline
"""
project = get_current_project()
media_pool = project.GetMediaPool()
new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name)
project.SetCurrentTimeline(new_timeline)
return new_timeline
def create_bin(name: str, root: object = None) -> object:
@@ -312,7 +351,13 @@ def get_current_timeline_items(
track_type = track_type or "video"
selecting_color = selecting_color or "Chocolate"
project = get_current_project()
timeline = get_current_timeline()
# get timeline anyhow
timeline = (
get_current_timeline() or
get_any_timeline() or
get_new_timeline()
)
selected_clips = []
# get all tracks count filtered by track type
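A minimal usage sketch of the timeline helpers introduced above, mirroring the fallback chain now used by get_current_timeline_items (assumes a running Resolve session with the OpenPype Resolve API importable):

from openpype.hosts.resolve.api.lib import (
    get_current_timeline,
    get_any_timeline,
    get_new_timeline,
)

# Prefer the active timeline, fall back to any existing timeline,
# and only create a fresh one as a last resort.
timeline = (
    get_current_timeline()
    or get_any_timeline()
    or get_new_timeline()
)
print(timeline.GetName())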

View file

@@ -327,7 +327,10 @@ class ClipLoader:
self.active_timeline = options["timeline"]
else:
# create new sequence
self.active_timeline = lib.get_current_timeline(new=True)
self.active_timeline = (
lib.get_current_timeline() or
lib.get_new_timeline()
)
else:
self.active_timeline = lib.get_current_timeline()

View file

@@ -1,4 +1,5 @@
import os
from pathlib import Path
import platform
from openpype.lib import PreLaunchHook
from openpype.hosts.resolve.utils import setup
@@ -6,33 +7,57 @@ from openpype.hosts.resolve.utils import setup
class ResolvePrelaunch(PreLaunchHook):
"""
This hook will check if current workfile path has Resolve
project inside. IF not, it initialize it and finally it pass
path to the project by environment variable to Premiere launcher
shell script.
This hook will set up the Resolve scripting environment as described in
Resolve's documentation found with the installed application at
{resolve}/Support/Developer/Scripting/README.txt
Prepares the following environment variables:
- `RESOLVE_SCRIPT_API`
- `RESOLVE_SCRIPT_LIB`
It adds $RESOLVE_SCRIPT_API/Modules to PYTHONPATH.
Additionally it sets up the Python home for Python 3 based on the
RESOLVE_PYTHON3_HOME in the environment (usually defined in OpenPype's
Application environment for Resolve by the admin). For this it sets
PYTHONHOME and PATH variables.
It also defines:
- `RESOLVE_UTILITY_SCRIPTS_DIR`: Destination directory for OpenPype
Fusion scripts to be copied to for Resolve to pick them up.
- `OPENPYPE_LOG_NO_COLORS` to True to ensure OP doesn't try to
use logging with terminal colors as it fails in Resolve.
"""
app_groups = ["resolve"]
def execute(self):
current_platform = platform.system().lower()
PROGRAMDATA = self.launch_context.env.get("PROGRAMDATA", "")
RESOLVE_SCRIPT_API_ = {
programdata = self.launch_context.env.get("PROGRAMDATA", "")
resolve_script_api_locations = {
"windows": (
f"{PROGRAMDATA}/Blackmagic Design/"
f"{programdata}/Blackmagic Design/"
"DaVinci Resolve/Support/Developer/Scripting"
),
"darwin": (
"/Library/Application Support/Blackmagic Design"
"/DaVinci Resolve/Developer/Scripting"
),
"linux": "/opt/resolve/Developer/Scripting"
"linux": "/opt/resolve/Developer/Scripting",
}
RESOLVE_SCRIPT_API = os.path.normpath(
RESOLVE_SCRIPT_API_[current_platform])
self.launch_context.env["RESOLVE_SCRIPT_API"] = RESOLVE_SCRIPT_API
resolve_script_api = Path(
resolve_script_api_locations[current_platform]
)
self.log.info(
f"setting RESOLVE_SCRIPT_API variable to {resolve_script_api}"
)
self.launch_context.env[
"RESOLVE_SCRIPT_API"
] = resolve_script_api.as_posix()
RESOLVE_SCRIPT_LIB_ = {
resolve_script_lib_dirs = {
"windows": (
"C:/Program Files/Blackmagic Design"
"/DaVinci Resolve/fusionscript.dll"
@@ -41,61 +66,69 @@ class ResolvePrelaunch(PreLaunchHook):
"/Applications/DaVinci Resolve/DaVinci Resolve.app"
"/Contents/Libraries/Fusion/fusionscript.so"
),
"linux": "/opt/resolve/libs/Fusion/fusionscript.so"
"linux": "/opt/resolve/libs/Fusion/fusionscript.so",
}
RESOLVE_SCRIPT_LIB = os.path.normpath(
RESOLVE_SCRIPT_LIB_[current_platform])
self.launch_context.env["RESOLVE_SCRIPT_LIB"] = RESOLVE_SCRIPT_LIB
resolve_script_lib = Path(resolve_script_lib_dirs[current_platform])
self.launch_context.env[
"RESOLVE_SCRIPT_LIB"
] = resolve_script_lib.as_posix()
self.log.info(
f"setting RESOLVE_SCRIPT_LIB variable to {resolve_script_lib}"
)
# TODO: add OTIO installation from `openpype/requirements.py`
# TODO: add OTIO installation from `openpype/requirements.py`
# making sure python <3.9.* is installed at provided path
python3_home = os.path.normpath(
self.launch_context.env.get("RESOLVE_PYTHON3_HOME", ""))
python3_home = Path(
self.launch_context.env.get("RESOLVE_PYTHON3_HOME", "")
)
assert os.path.isdir(python3_home), (
assert python3_home.is_dir(), (
"Python 3 is not installed at the provided folder path. Either "
"make sure the `environments\resolve.json` is having correctly "
"set `RESOLVE_PYTHON3_HOME` or make sure Python 3 is installed "
f"in given path. \nRESOLVE_PYTHON3_HOME: `{python3_home}`"
)
self.launch_context.env["PYTHONHOME"] = python3_home
self.log.info(f"Path to Resolve Python folder: `{python3_home}`...")
# add to the python path to path
env_path = self.launch_context.env["PATH"]
self.launch_context.env["PATH"] = os.pathsep.join([
python3_home,
os.path.join(python3_home, "Scripts")
] + env_path.split(os.pathsep))
self.log.debug(f"PATH: {self.launch_context.env['PATH']}")
python3_home_str = python3_home.as_posix()
self.launch_context.env["PYTHONHOME"] = python3_home_str
self.log.info(f"Path to Resolve Python folder: `{python3_home_str}`")
# add to the PYTHONPATH
env_pythonpath = self.launch_context.env["PYTHONPATH"]
self.launch_context.env["PYTHONPATH"] = os.pathsep.join([
os.path.join(python3_home, "Lib", "site-packages"),
os.path.join(RESOLVE_SCRIPT_API, "Modules"),
] + env_pythonpath.split(os.pathsep))
modules_path = Path(resolve_script_api, "Modules").as_posix()
self.launch_context.env[
"PYTHONPATH"
] = f"{modules_path}{os.pathsep}{env_pythonpath}"
self.log.debug(f"PYTHONPATH: {self.launch_context.env['PYTHONPATH']}")
RESOLVE_UTILITY_SCRIPTS_DIR_ = {
# add the pythonhome folder to PATH because on Windows
# this is needed for Py3 to be correctly detected within Resolve
env_path = self.launch_context.env["PATH"]
self.log.info(f"Adding `{python3_home_str}` to the PATH variable")
self.launch_context.env[
"PATH"
] = f"{python3_home_str}{os.pathsep}{env_path}"
self.log.debug(f"PATH: {self.launch_context.env['PATH']}")
resolve_utility_scripts_dirs = {
"windows": (
f"{PROGRAMDATA}/Blackmagic Design"
f"{programdata}/Blackmagic Design"
"/DaVinci Resolve/Fusion/Scripts/Comp"
),
"darwin": (
"/Library/Application Support/Blackmagic Design"
"/DaVinci Resolve/Fusion/Scripts/Comp"
),
"linux": "/opt/resolve/Fusion/Scripts/Comp"
"linux": "/opt/resolve/Fusion/Scripts/Comp",
}
RESOLVE_UTILITY_SCRIPTS_DIR = os.path.normpath(
RESOLVE_UTILITY_SCRIPTS_DIR_[current_platform]
resolve_utility_scripts_dir = Path(
resolve_utility_scripts_dirs[current_platform]
)
# setting utility scripts dir for scripts syncing
self.launch_context.env["RESOLVE_UTILITY_SCRIPTS_DIR"] = (
RESOLVE_UTILITY_SCRIPTS_DIR)
self.launch_context.env[
"RESOLVE_UTILITY_SCRIPTS_DIR"
] = resolve_utility_scripts_dir.as_posix()
# remove terminal coloring tags
self.launch_context.env["OPENPYPE_LOG_NO_COLORS"] = "True"

View file

@@ -19,6 +19,7 @@ from openpype.lib.transcoding import (
IMAGE_EXTENSIONS
)
class LoadClip(plugin.TimelineItemLoader):
"""Load a subset to timeline as clip

View file

@@ -0,0 +1,13 @@
#! python3
from openpype.pipeline import install_host
from openpype.hosts.resolve import api as bmdvr
from openpype.hosts.resolve.api.lib import get_current_project
if __name__ == "__main__":
install_host(bmdvr)
project = get_current_project()
timeline_count = project.GetTimelineCount()
print(f"Timeline count: {timeline_count}")
timeline = project.GetTimelineByIndex(timeline_count)
print(f"Timeline name: {timeline.GetName()}")
print(timeline.GetTrackCount("video"))

View file

@@ -1,6 +1,6 @@
import os
import shutil
from openpype.lib import Logger
from openpype.lib import Logger, is_running_from_build
RESOLVE_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -8,30 +8,30 @@ RESOLVE_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
def setup(env):
log = Logger.get_logger("ResolveSetup")
scripts = {}
us_env = env.get("RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR")
us_dir = env["RESOLVE_UTILITY_SCRIPTS_DIR"]
util_scripts_env = env.get("RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR")
util_scripts_dir = env["RESOLVE_UTILITY_SCRIPTS_DIR"]
us_paths = [os.path.join(
util_scripts_paths = [os.path.join(
RESOLVE_ROOT_DIR,
"utility_scripts"
)]
# collect script dirs
if us_env:
log.info("Utility Scripts Env: `{}`".format(us_env))
us_paths = us_env.split(
os.pathsep) + us_paths
if util_scripts_env:
log.info("Utility Scripts Env: `{}`".format(util_scripts_env))
util_scripts_paths = util_scripts_env.split(
os.pathsep) + util_scripts_paths
# collect scripts from dirs
for path in us_paths:
for path in util_scripts_paths:
scripts.update({path: os.listdir(path)})
log.info("Utility Scripts Dir: `{}`".format(us_paths))
log.info("Utility Scripts Dir: `{}`".format(util_scripts_paths))
log.info("Utility Scripts: `{}`".format(scripts))
# make sure no script file is in folder
for s in os.listdir(us_dir):
path = os.path.join(us_dir, s)
for script in os.listdir(util_scripts_dir):
path = os.path.join(util_scripts_dir, script)
log.info("Removing `{}`...".format(path))
if os.path.isdir(path):
shutil.rmtree(path, onerror=None)
@@ -39,12 +39,17 @@ def setup(env):
os.remove(path)
# copy scripts into Resolve's utility scripts dir
for d, sl in scripts.items():
# directory and scripts list
for s in sl:
# script in script list
src = os.path.join(d, s)
dst = os.path.join(us_dir, s)
for directory, scripts in scripts.items():
for script in scripts:
if (
is_running_from_build() and
script in ["tests", "develop"]
):
# only copy those if started from build
continue
src = os.path.join(directory, script)
dst = os.path.join(util_scripts_dir, script)
log.info("Copying `{}` to `{}`...".format(src, dst))
if os.path.isdir(src):
shutil.copytree(
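A rough sketch of how this setup() could be driven, assuming the environment variables it reads are already resolved; both paths below are placeholders, not values from the repository:

import os
from openpype.hosts.resolve.utils import setup

env = dict(os.environ)
# Destination Resolve picks its Comp scripts up from (normally set by the prelaunch hook).
env.setdefault(
    "RESOLVE_UTILITY_SCRIPTS_DIR",
    "C:/ProgramData/Blackmagic Design/DaVinci Resolve/Fusion/Scripts/Comp",
)
# Optional extra directories with studio-specific utility scripts.
env.setdefault("RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR", "C:/pipeline/resolve_scripts")

setup(env)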

View file

@@ -16,11 +16,12 @@ class SaveCurrentWorkfile(pyblish.api.ContextPlugin):
def process(self, context):
host = registered_host()
if context.data["currentFile"] != host.get_current_workfile():
current = host.get_current_workfile()
if context.data["currentFile"] != current:
raise KnownPublishError("Workfile has changed during publishing!")
if host.has_unsaved_changes():
self.log.info("Saving current file..")
self.log.info("Saving current file: {}".format(current))
host.save_workfile()
else:
self.log.debug("Skipping workfile save because there are no "

View file

@@ -1,8 +1,5 @@
import os
from pathlib import Path
from openpype.modules import IHostAddon, OpenPypeModule
from .lib import get_compatible_integration
UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -17,10 +14,14 @@ class UnrealAddon(OpenPypeModule, IHostAddon):
def add_implementation_envs(self, env, app):
"""Modify environments to contain all required for implementation."""
# Set AYON_UNREAL_PLUGIN required for Unreal implementation
# Imports are in this method for Python 2 compatibility of an addon
from pathlib import Path
from .lib import get_compatible_integration
ue_version = app.name.replace("-", ".")
unreal_plugin_path = os.path.join(
UNREAL_ROOT_DIR, "integration", f"UE_{ue_version}", "Ayon"
UNREAL_ROOT_DIR, "integration", "UE_{}".format(ue_version), "Ayon"
)
if not Path(unreal_plugin_path).exists():
compatible_versions = get_compatible_integration(

View file

@@ -1,10 +0,0 @@
# Building the plugin
In order to successfully build the plugin, make sure that the path to the UnrealBuildTool.exe is specified correctly.
After the UBT path, specify for which platform it will be compiled. In the -Project parameter, specify the path to the
CommandletProject.uproject file. Next, the build type has to be specified (DebugGame, Development, Package, etc.) and then the -TargetType (Editor, Runtime, etc.).
`BuildPlugin_[Ver].bat` runs the building process in the background. If you want to show the progress inside the
command prompt, use the `BuildPlugin_[Ver]_Window.bat` file.
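Purely to illustrate the parameters the removed batch files wired together, a hedged Python sketch that assembles such an UnrealBuildTool call; every path and value below is a placeholder assumption, not something taken from the repository:

import subprocess

# Hypothetical paths/values; adjust to the local engine and project layout.
ubt = "C:/UE_4.27/Engine/Binaries/DotNET/UnrealBuildTool.exe"
cmd = [
    ubt,
    "Ayon",          # target to build
    "Win64",         # platform it will be compiled for
    "Development",   # build type (DebugGame, Development, Package, ...)
    "-Project=C:/work/CommandletProject/CommandletProject.uproject",
    "-TargetType=Editor",  # Editor, Runtime, ...
]
subprocess.run(cmd, check=True)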

View file

@@ -1,35 +0,0 @@
# Prerequisites
*.d
# Compiled Object files
*.slo
*.lo
*.o
*.obj
# Precompiled Headers
*.gch
*.pch
# Compiled Dynamic libraries
*.so
*.dylib
*.dll
# Fortran module files
*.mod
*.smod
# Compiled Static libraries
*.lai
*.la
*.a
*.lib
# Executables
*.exe
*.out
*.app
/Binaries
/Intermediate

View file

@@ -1,23 +0,0 @@
{
"FileVersion": 3,
"Version": 1,
"VersionName": "1.0",
"FriendlyName": "Ayon",
"Description": "Ayon Integration",
"Category": "Ayon.Integration",
"CreatedBy": "Ondrej Samohel",
"CreatedByURL": "https://ayon.ynput.io",
"DocsURL": "https://ayon.ynput.io/docs/artist_hosts_unreal",
"MarketplaceURL": "",
"SupportURL": "https://ynput.io/",
"EngineVersion": "4.27",
"CanContainContent": true,
"Installed": true,
"Modules": [
{
"Name": "Ayon",
"Type": "Editor",
"LoadingPhase": "Default"
}
]
}

View file

@@ -1,2 +0,0 @@
[/Script/Ayon.AyonSettings]
FolderColor=(R=91,G=197,B=220,A=255)

View file

@@ -1,8 +0,0 @@
[FilterPlugin]
; This section lists additional files which will be packaged along with your plugin. Paths should be listed relative to the root plugin directory, and
; may include "...", "*", and "?" wildcards to match directories, files, and individual characters respectively.
;
; Examples:
; /README.txt
; /Extras/...
; /Binaries/ThirdParty/*.dll

View file

@@ -1,30 +0,0 @@
import unreal
ayon_detected = True
try:
from openpype.pipeline import install_host
from openpype.hosts.unreal.api import UnrealHost
ayon_host = UnrealHost()
except ImportError as exc:
ayon_host = None
ayon_detected = False
unreal.log_error(f"OpenPype: cannot load Ayon [ {exc} ]")
if ayon_detected:
install_host(ayon_host)
@unreal.uclass()
class AyonIntegration(unreal.AyonPythonBridge):
@unreal.ufunction(override=True)
def RunInPython_Popup(self):
unreal.log_warning("Ayon: showing tools popup")
if ayon_detected:
ayon_host.show_tools_popup()
@unreal.ufunction(override=True)
def RunInPython_Dialog(self):
unreal.log_warning("Ayon: showing tools dialog")
if ayon_detected:
ayon_host.show_tools_dialog()

View file

@@ -1,3 +0,0 @@
# Ayon Unreal Integration plugin - UE 4.x
This is a plugin for Unreal Editor, creating a menu for [Ayon](https://github.com/ynput/OpenPype) tools to run.

Binary file not shown. (Before: 2.3 KiB)

Binary file not shown. (Before: 721 B)

Binary file not shown. (Before: 16 KiB)

View file

@@ -1,61 +0,0 @@
// Copyright 2023, Ayon, All rights reserved.
using UnrealBuildTool;
public class Ayon : ModuleRules
{
public Ayon(ReadOnlyTargetRules Target) : base(Target)
{
PCHUsage = PCHUsageMode.UseExplicitOrSharedPCHs;
PublicIncludePaths.AddRange(
new string[]
{
// ... add public include paths required here ...
}
);
PrivateIncludePaths.AddRange(
new string[]
{
// ... add other private include paths required here ...
}
);
PublicDependencyModuleNames.AddRange(
new string[]
{
"Core",
// ... add other public dependencies that you statically link with here ...
}
);
PrivateDependencyModuleNames.AddRange(
new string[]
{
"GameProjectGeneration",
"Projects",
"InputCore",
"UnrealEd",
"LevelEditor",
"CoreUObject",
"Engine",
"Slate",
"SlateCore",
"AssetTools"
// ... add private dependencies that you statically link with here ...
}
);
DynamicallyLoadedModuleNames.AddRange(
new string[]
{
// ... add any modules that your module loads dynamically here ...
}
);
}
}

View file

@@ -1,156 +0,0 @@
// Copyright 2023, Ayon, All rights reserved.
#include "Ayon.h"
#include "ISettingsContainer.h"
#include "ISettingsModule.h"
#include "ISettingsSection.h"
#include "LevelEditor.h"
#include "AyonPythonBridge.h"
#include "AyonSettings.h"
#include "AyonStyle.h"
#include "Modules/ModuleManager.h"
static const FName AyonTabName("Ayon");
#define LOCTEXT_NAMESPACE "FAyonModule"
// This function is triggered when the plugin is starting up
void FAyonModule::StartupModule()
{
if (!IsRunningCommandlet()) {
FAyonStyle::Initialize();
FAyonStyle::SetIcon("Logo", "ayon40");
// Create the Extender that will add content to the menu
FLevelEditorModule& LevelEditorModule = FModuleManager::LoadModuleChecked<FLevelEditorModule>("LevelEditor");
TSharedPtr<FExtender> MenuExtender = MakeShareable(new FExtender());
TSharedPtr<FExtender> ToolbarExtender = MakeShareable(new FExtender());
MenuExtender->AddMenuExtension(
"LevelEditor",
EExtensionHook::After,
NULL,
FMenuExtensionDelegate::CreateRaw(this, &FAyonModule::AddMenuEntry)
);
ToolbarExtender->AddToolBarExtension(
"Settings",
EExtensionHook::After,
NULL,
FToolBarExtensionDelegate::CreateRaw(this, &FAyonModule::AddToobarEntry));
LevelEditorModule.GetMenuExtensibilityManager()->AddExtender(MenuExtender);
LevelEditorModule.GetToolBarExtensibilityManager()->AddExtender(ToolbarExtender);
RegisterSettings();
}
}
void FAyonModule::ShutdownModule()
{
FAyonStyle::Shutdown();
}
void FAyonModule::AddMenuEntry(FMenuBuilder& MenuBuilder)
{
// Create Section
MenuBuilder.BeginSection("Ayon", TAttribute<FText>(FText::FromString("Ayon")));
{
// Create a Submenu inside of the Section
MenuBuilder.AddMenuEntry(
FText::FromString("Tools..."),
FText::FromString("Pipeline tools"),
FSlateIcon(FAyonStyle::GetStyleSetName(), "Ayon.Logo"),
FUIAction(FExecuteAction::CreateRaw(this, &FAyonModule::MenuPopup))
);
MenuBuilder.AddMenuEntry(
FText::FromString("Tools dialog..."),
FText::FromString("Pipeline tools dialog"),
FSlateIcon(FAyonStyle::GetStyleSetName(), "Ayon.Logo"),
FUIAction(FExecuteAction::CreateRaw(this, &FAyonModule::MenuDialog))
);
}
MenuBuilder.EndSection();
}
void FAyonModule::AddToobarEntry(FToolBarBuilder& ToolbarBuilder)
{
ToolbarBuilder.BeginSection(TEXT("Ayon"));
{
ToolbarBuilder.AddToolBarButton(
FUIAction(
FExecuteAction::CreateRaw(this, &FAyonModule::MenuPopup),
NULL,
FIsActionChecked()
),
NAME_None,
LOCTEXT("Ayon_label", "Ayon"),
LOCTEXT("Ayon_tooltip", "Ayon Tools"),
FSlateIcon(FAyonStyle::GetStyleSetName(), "Ayon.Logo")
);
}
ToolbarBuilder.EndSection();
}
void FAyonModule::RegisterSettings()
{
ISettingsModule& SettingsModule = FModuleManager::LoadModuleChecked<ISettingsModule>("Settings");
// Create the new category
// TODO: After the movement of the plugin from the game to editor, it might be necessary to move this!
ISettingsContainerPtr SettingsContainer = SettingsModule.GetContainer("Project");
UAyonSettings* Settings = GetMutableDefault<UAyonSettings>();
// Register the settings
ISettingsSectionPtr SettingsSection = SettingsModule.RegisterSettings("Project", "Ayon", "General",
LOCTEXT("RuntimeGeneralSettingsName",
"General"),
LOCTEXT("RuntimeGeneralSettingsDescription",
"Base configuration for Open Pype Module"),
Settings
);
// Register the save handler to your settings, you might want to use it to
// validate those or just act to settings changes.
if (SettingsSection.IsValid())
{
SettingsSection->OnModified().BindRaw(this, &FAyonModule::HandleSettingsSaved);
}
}
bool FAyonModule::HandleSettingsSaved()
{
UAyonSettings* Settings = GetMutableDefault<UAyonSettings>();
bool ResaveSettings = false;
// You can put any validation code in here and resave the settings in case an invalid
// value has been entered
if (ResaveSettings)
{
Settings->SaveConfig();
}
return true;
}
void FAyonModule::MenuPopup()
{
UAyonPythonBridge* bridge = UAyonPythonBridge::Get();
bridge->RunInPython_Popup();
}
void FAyonModule::MenuDialog()
{
UAyonPythonBridge* bridge = UAyonPythonBridge::Get();
bridge->RunInPython_Dialog();
}
IMPLEMENT_MODULE(FAyonModule, Ayon)

View file

@@ -1,114 +0,0 @@
// Fill out your copyright notice in the Description page of Project Settings.
#include "AyonAssetContainer.h"
#include "AssetRegistryModule.h"
#include "Misc/PackageName.h"
#include "Containers/UnrealString.h"
UAyonAssetContainer::UAyonAssetContainer(const FObjectInitializer& ObjectInitializer)
: UAssetUserData(ObjectInitializer)
{
FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked<FAssetRegistryModule>("AssetRegistry");
FString path = UAyonAssetContainer::GetPathName();
UE_LOG(LogTemp, Warning, TEXT("UAyonAssetContainer %s"), *path);
FARFilter Filter;
Filter.PackagePaths.Add(FName(*path));
AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UAyonAssetContainer::OnAssetAdded);
AssetRegistryModule.Get().OnAssetRemoved().AddUObject(this, &UAyonAssetContainer::OnAssetRemoved);
AssetRegistryModule.Get().OnAssetRenamed().AddUObject(this, &UAyonAssetContainer::OnAssetRenamed);
}
void UAyonAssetContainer::OnAssetAdded(const FAssetData& AssetData)
{
TArray<FString> split;
// get directory of current container
FString selfFullPath = UAyonAssetContainer::GetPathName();
FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath);
// get asset path and class
FString assetPath = AssetData.GetFullName();
FString assetFName = AssetData.AssetClass.ToString();
// split path
assetPath.ParseIntoArray(split, TEXT(" "), true);
FString assetDir = FPackageName::GetLongPackagePath(*split[1]);
// take interest only in paths starting with path of current container
if (assetDir.StartsWith(*selfDir))
{
// exclude self
if (assetFName != "AyonAssetContainer")
{
assets.Add(assetPath);
assetsData.Add(AssetData);
UE_LOG(LogTemp, Log, TEXT("%s: asset added to %s"), *selfFullPath, *selfDir);
}
}
}
void UAyonAssetContainer::OnAssetRemoved(const FAssetData& AssetData)
{
TArray<FString> split;
// get directory of current container
FString selfFullPath = UAyonAssetContainer::GetPathName();
FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath);
// get asset path and class
FString assetPath = AssetData.GetFullName();
FString assetFName = AssetData.AssetClass.ToString();
// split path
assetPath.ParseIntoArray(split, TEXT(" "), true);
FString assetDir = FPackageName::GetLongPackagePath(*split[1]);
// take interest only in paths starting with path of current container
FString path = UAyonAssetContainer::GetPathName();
FString lpp = FPackageName::GetLongPackagePath(*path);
if (assetDir.StartsWith(*selfDir))
{
// exclude self
if (assetFName != "AyonAssetContainer")
{
// UE_LOG(LogTemp, Warning, TEXT("%s: asset removed"), *lpp);
assets.Remove(assetPath);
assetsData.Remove(AssetData);
}
}
}
void UAyonAssetContainer::OnAssetRenamed(const FAssetData& AssetData, const FString& str)
{
TArray<FString> split;
// get directory of current container
FString selfFullPath = UAyonAssetContainer::GetPathName();
FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath);
// get asset path and class
FString assetPath = AssetData.GetFullName();
FString assetFName = AssetData.AssetClass.ToString();
// split path
assetPath.ParseIntoArray(split, TEXT(" "), true);
FString assetDir = FPackageName::GetLongPackagePath(*split[1]);
if (assetDir.StartsWith(*selfDir))
{
// exclude self
if (assetFName != "AyonAssetContainer")
{
assets.Remove(str);
assets.Add(assetPath);
assetsData.Remove(AssetData);
// UE_LOG(LogTemp, Warning, TEXT("%s: asset renamed %s"), *lpp, *str);
}
}
}

View file

@@ -1,20 +0,0 @@
#include "AyonAssetContainerFactory.h"
#include "AyonAssetContainer.h"
UAyonAssetContainerFactory::UAyonAssetContainerFactory(const FObjectInitializer& ObjectInitializer)
: UFactory(ObjectInitializer)
{
SupportedClass = UAyonAssetContainer::StaticClass();
bCreateNew = false;
bEditorImport = true;
}
UObject* UAyonAssetContainerFactory::FactoryCreateNew(UClass* Class, UObject* InParent, FName Name, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn)
{
UAyonAssetContainer* AssetContainer = NewObject<UAyonAssetContainer>(InParent, Class, Name, Flags);
return AssetContainer;
}
bool UAyonAssetContainerFactory::ShouldShowInNewMenu() const {
return false;
}

View file

@@ -1,53 +0,0 @@
// Copyright 2023, Ayon, All rights reserved.
#include "AyonLib.h"
#include "AssetViewUtils.h"
#include "Misc/Paths.h"
#include "Misc/ConfigCacheIni.h"
#include "UObject/UnrealType.h"
/**
* Sets color on folder icon on given path
* @param InPath - path to folder
* @param InFolderColor - color of the folder
* @warning This color will appear only after Editor restart. Is there a better way?
*/
bool UAyonLib::SetFolderColor(const FString& FolderPath, const FLinearColor& FolderColor, const bool& bForceAdd)
{
if (AssetViewUtils::DoesFolderExist(FolderPath))
{
const TSharedPtr<FLinearColor> LinearColor = MakeShared<FLinearColor>(FolderColor);
AssetViewUtils::SaveColor(FolderPath, LinearColor, true);
UE_LOG(LogAssetData, Display, TEXT("A color {%s} has been set to folder \"%s\""), *LinearColor->ToString(),
*FolderPath)
return true;
}
UE_LOG(LogAssetData, Display, TEXT("Setting a color {%s} to folder \"%s\" has failed! Directory doesn't exist!"),
*FolderColor.ToString(), *FolderPath)
return false;
}
/**
* Returns all properties on a given object
* @param cls - class
* @return TArray of properties
*/
TArray<FString> UAyonLib::GetAllProperties(UClass* cls)
{
TArray<FString> Ret;
if (cls != nullptr)
{
for (TFieldIterator<FProperty> It(cls); It; ++It)
{
FProperty* Property = *It;
if (Property->HasAnyPropertyFlags(EPropertyFlags::CPF_Edit))
{
Ret.Add(Property->GetName());
}
}
}
return Ret;
}

View file

@@ -1,203 +0,0 @@
// Copyright 2023, Ayon, All rights reserved.
// Deprecation warning: this is left here just for backwards compatibility
// and will be removed in next versions of Ayon.
#pragma once
#include "AyonPublishInstance.h"
#include "AssetRegistryModule.h"
#include "AyonLib.h"
#include "AyonSettings.h"
#include "Framework/Notifications/NotificationManager.h"
#include "Widgets/Notifications/SNotificationList.h"
//Moves all the invalid pointers to the end to prepare them for the shrinking
#define REMOVE_INVALID_ENTRIES(VAR) VAR.CompactStable(); \
VAR.Shrink();
UAyonPublishInstance::UAyonPublishInstance(const FObjectInitializer& ObjectInitializer)
: UPrimaryDataAsset(ObjectInitializer)
{
const FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked<
FAssetRegistryModule>("AssetRegistry");
const FPropertyEditorModule& PropertyEditorModule = FModuleManager::LoadModuleChecked<FPropertyEditorModule>(
"PropertyEditor");
FString Left, Right;
GetPathName().Split("/" + GetName(), &Left, &Right);
FARFilter Filter;
Filter.PackagePaths.Emplace(FName(Left));
TArray<FAssetData> FoundAssets;
AssetRegistryModule.GetRegistry().GetAssets(Filter, FoundAssets);
for (const FAssetData& AssetData : FoundAssets)
OnAssetCreated(AssetData);
REMOVE_INVALID_ENTRIES(AssetDataInternal)
REMOVE_INVALID_ENTRIES(AssetDataExternal)
AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UAyonPublishInstance::OnAssetCreated);
AssetRegistryModule.Get().OnAssetRemoved().AddUObject(this, &UAyonPublishInstance::OnAssetRemoved);
AssetRegistryModule.Get().OnAssetUpdated().AddUObject(this, &UAyonPublishInstance::OnAssetUpdated);
#ifdef WITH_EDITOR
ColorAyonDirs();
#endif
}
void UAyonPublishInstance::OnAssetCreated(const FAssetData& InAssetData)
{
TArray<FString> split;
UObject* Asset = InAssetData.GetAsset();
if (!IsValid(Asset))
{
UE_LOG(LogAssetData, Warning, TEXT("Asset \"%s\" is not valid! Skipping the addition."),
*InAssetData.ObjectPath.ToString());
return;
}
const bool result = IsUnderSameDir(Asset) && Cast<UAyonPublishInstance>(Asset) == nullptr;
if (result)
{
if (AssetDataInternal.Emplace(Asset).IsValidId())
{
UE_LOG(LogTemp, Log, TEXT("Added an Asset to PublishInstance - Publish Instance: %s, Asset %s"),
*this->GetName(), *Asset->GetName());
}
}
}
void UAyonPublishInstance::OnAssetRemoved(const FAssetData& InAssetData)
{
if (Cast<UAyonPublishInstance>(InAssetData.GetAsset()) == nullptr)
{
if (AssetDataInternal.Contains(nullptr))
{
AssetDataInternal.Remove(nullptr);
REMOVE_INVALID_ENTRIES(AssetDataInternal)
}
else
{
AssetDataExternal.Remove(nullptr);
REMOVE_INVALID_ENTRIES(AssetDataExternal)
}
}
}
void UAyonPublishInstance::OnAssetUpdated(const FAssetData& InAssetData)
{
REMOVE_INVALID_ENTRIES(AssetDataInternal);
REMOVE_INVALID_ENTRIES(AssetDataExternal);
}
bool UAyonPublishInstance::IsUnderSameDir(const UObject* InAsset) const
{
FString ThisLeft, ThisRight;
this->GetPathName().Split(this->GetName(), &ThisLeft, &ThisRight);
return InAsset->GetPathName().StartsWith(ThisLeft);
}
#ifdef WITH_EDITOR
void UAyonPublishInstance::ColorAyonDirs()
{
FString PathName = this->GetPathName();
//Check whether the path contains the defined Ayon folder
if (!PathName.Contains(TEXT("Ayon"))) return;
//Get the base path for open pype
FString PathLeft, PathRight;
PathName.Split(FString("Ayon"), &PathLeft, &PathRight);
if (PathLeft.IsEmpty() || PathRight.IsEmpty())
{
UE_LOG(LogAssetData, Error, TEXT("Failed to retrieve the base Ayon directory!"))
return;
}
PathName.RemoveFromEnd(PathRight, ESearchCase::CaseSensitive);
//Get the current settings
const UAyonSettings* Settings = GetMutableDefault<UAyonSettings>();
//Color the base folder
UAyonLib::SetFolderColor(PathName, Settings->GetFolderFColor(), false);
//Get Sub paths, iterate through them and color them according to the folder color in UAyonSettings
const FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked<FAssetRegistryModule>(
"AssetRegistry");
TArray<FString> PathList;
AssetRegistryModule.Get().GetSubPaths(PathName, PathList, true);
if (PathList.Num() > 0)
{
for (const FString& Path : PathList)
{
UAyonLib::SetFolderColor(Path, Settings->GetFolderFColor(), false);
}
}
}
void UAyonPublishInstance::SendNotification(const FString& Text) const
{
FNotificationInfo Info{FText::FromString(Text)};
Info.bFireAndForget = true;
Info.bUseLargeFont = false;
Info.bUseThrobber = false;
Info.bUseSuccessFailIcons = false;
Info.ExpireDuration = 4.f;
Info.FadeOutDuration = 2.f;
FSlateNotificationManager::Get().AddNotification(Info);
UE_LOG(LogAssetData, Warning,
TEXT(
"Removed duplicated asset from the AssetsDataExternal in Container \"%s\", Asset is already included in the AssetDataInternal!"
), *GetName()
)
}
void UAyonPublishInstance::PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent)
{
Super::PostEditChangeProperty(PropertyChangedEvent);
if (PropertyChangedEvent.ChangeType == EPropertyChangeType::ValueSet &&
PropertyChangedEvent.Property->GetFName() == GET_MEMBER_NAME_CHECKED(
UAyonPublishInstance, AssetDataExternal))
{
// Check for duplicated assets
for (const auto& Asset : AssetDataInternal)
{
if (AssetDataExternal.Contains(Asset))
{
AssetDataExternal.Remove(Asset);
return SendNotification(
"You are not allowed to add assets into AssetDataExternal which are already included in AssetDataInternal!");
}
}
// Check if no UAyonPublishInstance type assets are included
for (const auto& Asset : AssetDataExternal)
{
if (Cast<UAyonPublishInstance>(Asset.Get()) != nullptr)
{
AssetDataExternal.Remove(Asset);
return SendNotification("You are not allowed to add publish instances!");
}
}
}
}
#endif

View file

@@ -1,23 +0,0 @@
// Copyright 2023, Ayon, All rights reserved.
// Deprecation warning: this is left here just for backwards compatibility
// and will be removed in next versions of Ayon.
#include "AyonPublishInstanceFactory.h"
#include "AyonPublishInstance.h"
UAyonPublishInstanceFactory::UAyonPublishInstanceFactory(const FObjectInitializer& ObjectInitializer)
: UFactory(ObjectInitializer)
{
SupportedClass = UAyonPublishInstance::StaticClass();
bCreateNew = false;
bEditorImport = true;
}
UObject* UAyonPublishInstanceFactory::FactoryCreateNew(UClass* InClass, UObject* InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn)
{
check(InClass->IsChildOf(UAyonPublishInstance::StaticClass()));
return NewObject<UAyonPublishInstance>(InParent, InClass, InName, Flags);
}
bool UAyonPublishInstanceFactory::ShouldShowInNewMenu() const {
return false;
}

View file

@@ -1,14 +0,0 @@
// Copyright 2023, Ayon, All rights reserved.
#include "AyonPythonBridge.h"
UAyonPythonBridge* UAyonPythonBridge::Get()
{
TArray<UClass*> AyonPythonBridgeClasses;
GetDerivedClasses(UAyonPythonBridge::StaticClass(), AyonPythonBridgeClasses);
int32 NumClasses = AyonPythonBridgeClasses.Num();
if (NumClasses > 0)
{
return Cast<UAyonPythonBridge>(AyonPythonBridgeClasses[NumClasses - 1]->GetDefaultObject());
}
return nullptr;
};

View file

@@ -1,20 +0,0 @@
// Copyright 2023, Ayon, All rights reserved.
#include "AyonSettings.h"
#include "Interfaces/IPluginManager.h"
/**
* Mainly used for initializing default values if the DefaultAyonSettings.ini file does not exist in the saved config
*/
UAyonSettings::UAyonSettings(const FObjectInitializer& ObjectInitializer)
{
const FString ConfigFilePath = AYON_SETTINGS_FILEPATH;
// This has to be probably in the future set using the UE Reflection system
FColor Color;
GConfig->GetColor(TEXT("/Script/Ayon.AyonSettings"), TEXT("FolderColor"), Color, ConfigFilePath);
FolderColor = Color;
}

View file

@@ -1,70 +0,0 @@
// Copyright 2023, Ayon, All rights reserved.
#include "AyonStyle.h"
#include "Framework/Application/SlateApplication.h"
#include "Styling/SlateStyle.h"
#include "Styling/SlateStyleRegistry.h"
TUniquePtr< FSlateStyleSet > FAyonStyle::AyonStyleInstance = nullptr;
void FAyonStyle::Initialize()
{
if (!AyonStyleInstance.IsValid())
{
AyonStyleInstance = Create();
FSlateStyleRegistry::RegisterSlateStyle(*AyonStyleInstance);
}
}
void FAyonStyle::Shutdown()
{
if (AyonStyleInstance.IsValid())
{
FSlateStyleRegistry::UnRegisterSlateStyle(*AyonStyleInstance);
AyonStyleInstance.Reset();
}
}
FName FAyonStyle::GetStyleSetName()
{
static FName StyleSetName(TEXT("AyonStyle"));
return StyleSetName;
}
FName FAyonStyle::GetContextName()
{
static FName ContextName(TEXT("Ayon"));
return ContextName;
}
#define IMAGE_BRUSH(RelativePath, ...) FSlateImageBrush( Style->RootToContentDir( RelativePath, TEXT(".png") ), __VA_ARGS__ )
const FVector2D Icon40x40(40.0f, 40.0f);
TUniquePtr< FSlateStyleSet > FAyonStyle::Create()
{
TUniquePtr< FSlateStyleSet > Style = MakeUnique<FSlateStyleSet>(GetStyleSetName());
Style->SetContentRoot(FPaths::EnginePluginsDir() / TEXT("Marketplace/Ayon/Resources"));
return Style;
}
void FAyonStyle::SetIcon(const FString& StyleName, const FString& ResourcePath)
{
FSlateStyleSet* Style = AyonStyleInstance.Get();
FString Name(GetContextName().ToString());
Name = Name + "." + StyleName;
Style->Set(*Name, new FSlateImageBrush(Style->RootToContentDir(ResourcePath, TEXT(".png")), Icon40x40));
FSlateApplication::Get().GetRenderer()->ReloadTextureResources();
}
#undef IMAGE_BRUSH
const ISlateStyle& FAyonStyle::Get()
{
check(AyonStyleInstance);
return *AyonStyleInstance;
}

Some files were not shown because too many files have changed in this diff.