Merge branch 'release/3.15.x' into feature/OP-3426_Add-support-for-Deadline-for-automatic-tests

This commit is contained in:
Petr Kalis 2022-12-02 15:35:00 +01:00 committed by GitHub
commit 7959bfb958
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
82 changed files with 3122 additions and 609 deletions


@ -1,8 +1,66 @@
# Changelog
## [3.14.7](https://github.com/pypeclub/OpenPype/tree/3.14.7)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.6...3.14.7)
**🆕 New features**
- Hiero: loading effect family to timeline [\#4055](https://github.com/pypeclub/OpenPype/pull/4055)
**🚀 Enhancements**
- Photoshop: bug with pop-up window on Instance Creator [\#4121](https://github.com/pypeclub/OpenPype/pull/4121)
- Publisher: Open on specific tab [\#4120](https://github.com/pypeclub/OpenPype/pull/4120)
- Publisher: Hide unknown publish values [\#4116](https://github.com/pypeclub/OpenPype/pull/4116)
- Ftrack: Event server status give more information about version locations [\#4112](https://github.com/pypeclub/OpenPype/pull/4112)
- General: Allow higher numbers in frames and clips [\#4101](https://github.com/pypeclub/OpenPype/pull/4101)
- Publisher: Settings for validate frame range [\#4097](https://github.com/pypeclub/OpenPype/pull/4097)
- Publisher: Ignore escape button [\#4090](https://github.com/pypeclub/OpenPype/pull/4090)
- Flame: Loading clip with native colorspace resolved from mapping [\#4079](https://github.com/pypeclub/OpenPype/pull/4079)
- General: Extract review single frame output [\#4064](https://github.com/pypeclub/OpenPype/pull/4064)
- Publisher: Prepared common function for instance data cache [\#4063](https://github.com/pypeclub/OpenPype/pull/4063)
- Publisher: Easy access to publish page from create page [\#4058](https://github.com/pypeclub/OpenPype/pull/4058)
- General/TVPaint: Attribute defs dialog [\#4052](https://github.com/pypeclub/OpenPype/pull/4052)
- Publisher: Better reset defer [\#4048](https://github.com/pypeclub/OpenPype/pull/4048)
- Publisher: Add thumbnail sources [\#4042](https://github.com/pypeclub/OpenPype/pull/4042)
**🐛 Bug fixes**
- General: Move default settings for template name [\#4119](https://github.com/pypeclub/OpenPype/pull/4119)
- Slack: notification fail in new tray publisher [\#4118](https://github.com/pypeclub/OpenPype/pull/4118)
- Nuke: loaded nodes set to first tab [\#4114](https://github.com/pypeclub/OpenPype/pull/4114)
- Nuke: load image first frame [\#4113](https://github.com/pypeclub/OpenPype/pull/4113)
- Files Widget: Ignore case sensitivity of extensions [\#4096](https://github.com/pypeclub/OpenPype/pull/4096)
- Webpublisher: extension is lowercased in Setting and in uploaded files [\#4095](https://github.com/pypeclub/OpenPype/pull/4095)
- Publish Report Viewer: Fix small bugs [\#4086](https://github.com/pypeclub/OpenPype/pull/4086)
- Igniter: fix regex to match semver better [\#4085](https://github.com/pypeclub/OpenPype/pull/4085)
- Maya: aov filtering [\#4083](https://github.com/pypeclub/OpenPype/pull/4083)
- Flame/Flare: Loading to multiple batches [\#4080](https://github.com/pypeclub/OpenPype/pull/4080)
- hiero: creator from settings with set maximum [\#4077](https://github.com/pypeclub/OpenPype/pull/4077)
- Nuke: resolve hashes in file name only for frame token [\#4074](https://github.com/pypeclub/OpenPype/pull/4074)
- Publisher: Fix cache of asset docs [\#4070](https://github.com/pypeclub/OpenPype/pull/4070)
- Webpublisher: cleanup wp extract thumbnail [\#4067](https://github.com/pypeclub/OpenPype/pull/4067)
- Settings UI: Locked setting can't bypass lock [\#4066](https://github.com/pypeclub/OpenPype/pull/4066)
- Loader: Fix comparison of repre name [\#4053](https://github.com/pypeclub/OpenPype/pull/4053)
- Deadline: Extract environment subprocess failure [\#4050](https://github.com/pypeclub/OpenPype/pull/4050)
**🔀 Refactored code**
- General: Collect entities plugin minor changes [\#4089](https://github.com/pypeclub/OpenPype/pull/4089)
- General: Direct interfaces import [\#4065](https://github.com/pypeclub/OpenPype/pull/4065)
**Merged pull requests:**
- Bump loader-utils from 1.4.1 to 1.4.2 in /website [\#4100](https://github.com/pypeclub/OpenPype/pull/4100)
- Online family for Tray Publisher [\#4093](https://github.com/pypeclub/OpenPype/pull/4093)
- Bump loader-utils from 1.4.0 to 1.4.1 in /website [\#4081](https://github.com/pypeclub/OpenPype/pull/4081)
- remove underscore from subset name [\#4059](https://github.com/pypeclub/OpenPype/pull/4059)
- Alembic Loader as Arnold Standin [\#4047](https://github.com/pypeclub/OpenPype/pull/4047)
## [3.14.6](https://github.com/pypeclub/OpenPype/tree/3.14.6)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.5...3.14.6)
### 📖 Documentation

- Documentation: Minor updates to dev\_requirements.md [\#4025](https://github.com/pypeclub/OpenPype/pull/4025)
**🆕 New features**
- Nuke: add 13.2 variant [\#4041](https://github.com/pypeclub/OpenPype/pull/4041)
**🚀 Enhancements**
- Publish Report Viewer: Store reports locally on machine [\#4040](https://github.com/pypeclub/OpenPype/pull/4040)
- General: More specific error in burnins script [\#4026](https://github.com/pypeclub/OpenPype/pull/4026)
- General: Extract review does not crash with old settings overrides [\#4023](https://github.com/pypeclub/OpenPype/pull/4023)
- Publisher: Convertors for legacy instances [\#4020](https://github.com/pypeclub/OpenPype/pull/4020)
- workflows: adding milestone creator and assigner [\#4018](https://github.com/pypeclub/OpenPype/pull/4018)
- Publisher: Catch creator errors [\#4015](https://github.com/pypeclub/OpenPype/pull/4015)
**🐛 Bug fixes**
- Hiero - effect collection fixes [\#4038](https://github.com/pypeclub/OpenPype/pull/4038)
- Nuke - loader clip correct hash conversion in path [\#4037](https://github.com/pypeclub/OpenPype/pull/4037)
- Maya: Soft fail when applying capture preset [\#4034](https://github.com/pypeclub/OpenPype/pull/4034)
- Igniter: handle missing directory [\#4032](https://github.com/pypeclub/OpenPype/pull/4032)
- StandalonePublisher: Fix thumbnail publishing [\#4029](https://github.com/pypeclub/OpenPype/pull/4029)
- Experimental Tools: Fix publisher import [\#4027](https://github.com/pypeclub/OpenPype/pull/4027)
- Houdini: fix wrong path in ASS loader [\#4016](https://github.com/pypeclub/OpenPype/pull/4016)
**🔀 Refactored code**
- General: Import lib functions from lib [\#4017](https://github.com/pypeclub/OpenPype/pull/4017)
## [3.14.5](https://github.com/pypeclub/OpenPype/tree/3.14.5) (2022-10-24)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.4...3.14.5)


@ -389,10 +389,11 @@ def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
returned if 'None' is passed.
Returns:
    Union[None, Dict[str, Any]]: None if subset with specified filters was
        not found or dict subset document which can be reduced to
        specified 'fields'.
"""
if not subset_name:
    return None
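For callers, the updated return annotation means a `None` check is needed before the document is used. A small illustrative sketch (the helper and subset name here are invented, not from the codebase):

```python
# Hypothetical caller of the Union[None, Dict[str, Any]] return value;
# describe_subset and "workfileMain" are made up for illustration.
def describe_subset(subset_doc):
    if subset_doc is None:
        # no subset matched the filters
        return "subset not found"
    return "subset: {}".format(subset_doc["name"])

print(describe_subset(None))                      # subset not found
print(describe_subset({"name": "workfileMain"}))  # subset: workfileMain
```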


@ -0,0 +1,177 @@
import os
import shutil
from time import sleep

from openpype.client.entities import (
    get_last_version_by_subset_id,
    get_representations,
    get_subsets,
)
from openpype.lib import PreLaunchHook
from openpype.lib.local_settings import get_local_site_id
from openpype.lib.profiles_filtering import filter_profiles
from openpype.pipeline.load.utils import get_representation_path
from openpype.settings.lib import get_project_settings


class CopyLastPublishedWorkfile(PreLaunchHook):
    """Copy the last published workfile as the first workfile.

    The prelaunch hook runs only if the last workfile path points to a file
    that does not exist yet - which is possible only for a first version.
    """

    # Before `AddLastWorkfileToLaunchArgs`
    order = -1
    app_groups = ["blender", "photoshop", "tvpaint", "aftereffects"]

    def execute(self):
        """Copy the last published workfile if no local workfile exists.

        1. Check if the setting for this feature is enabled.
        2. Check that the workfile in the work area does not exist yet.
        3. Check that a published workfile exists and copy it locally.
        4. Substitute the copied published workfile as the first workfile.

        Returns:
            None: This is a void method.
        """
        sync_server = self.modules_manager.get("sync_server")
        if not sync_server or not sync_server.enabled:
            self.log.debug("Sync server module is not enabled or available")
            return

        # Check there is no workfile available
        last_workfile = self.data.get("last_workfile_path")
        if os.path.exists(last_workfile):
            self.log.debug(
                "Last workfile exists. Skipping {} process.".format(
                    self.__class__.__name__
                )
            )
            return

        # Get data
        project_name = self.data["project_name"]
        task_name = self.data["task_name"]
        task_type = self.data["task_type"]
        host_name = self.application.host_name

        # Check settings have enabled it
        project_settings = get_project_settings(project_name)
        profiles = project_settings["global"]["tools"]["Workfiles"][
            "last_workfile_on_startup"
        ]
        filter_data = {
            "tasks": task_name,
            "task_types": task_type,
            "hosts": host_name,
        }
        last_workfile_settings = filter_profiles(profiles, filter_data)
        use_last_published_workfile = last_workfile_settings.get(
            "use_last_published_workfile"
        )
        if use_last_published_workfile is None:
            self.log.info(
                (
                    "Seems like an old version of settings is used."
                    ' Can\'t access custom templates in host "{}".'.format(
                        host_name
                    )
                )
            )
            return
        elif use_last_published_workfile is False:
            self.log.info(
                (
                    'Project "{}" has turned off using last published'
                    ' workfile as first workfile for host "{}"'.format(
                        project_name, host_name
                    )
                )
            )
            return

        self.log.info("Trying to fetch last published workfile...")

        project_doc = self.data.get("project_doc")
        asset_doc = self.data.get("asset_doc")
        anatomy = self.data.get("anatomy")

        # Check it can proceed
        if not project_doc and not asset_doc:
            return

        # Get subset id
        subset_id = next(
            (
                subset["_id"]
                for subset in get_subsets(
                    project_name,
                    asset_ids=[asset_doc["_id"]],
                    fields=["_id", "data.family", "data.families"],
                )
                if subset["data"].get("family") == "workfile"
                # Legacy compatibility
                or "workfile" in subset["data"].get("families", {})
            ),
            None,
        )
        if not subset_id:
            self.log.debug(
                'No workfile subset for asset "{}".'.format(asset_doc["name"])
            )
            return

        # Get workfile representation
        last_version_doc = get_last_version_by_subset_id(
            project_name, subset_id, fields=["_id"]
        )
        if not last_version_doc:
            self.log.debug("Subset does not have any versions")
            return

        workfile_representation = next(
            (
                representation
                for representation in get_representations(
                    project_name, version_ids=[last_version_doc["_id"]]
                )
                if representation["context"]["task"]["name"] == task_name
            ),
            None,
        )
        if not workfile_representation:
            self.log.debug(
                'No published workfile for task "{}" and host "{}".'.format(
                    task_name, host_name
                )
            )
            return

        local_site_id = get_local_site_id()
        sync_server.add_site(
            project_name,
            workfile_representation["_id"],
            local_site_id,
            force=True,
            priority=99,
            reset_timer=True,
        )

        # Wait until the representation is downloaded to the local site
        while not sync_server.is_representation_on_site(
            project_name, workfile_representation["_id"], local_site_id
        ):
            sleep(5)

        # Get paths
        published_workfile_path = get_representation_path(
            workfile_representation, root=anatomy.roots
        )
        local_workfile_dir = os.path.dirname(last_workfile)

        # Copy file and substitute path
        self.data["last_workfile_path"] = shutil.copy(
            published_workfile_path, local_workfile_dir
        )
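The hook's settings lookup relies on profile filtering. A simplified sketch of how such matching could work follows; the real `filter_profiles` in `openpype.lib.profiles_filtering` is more involved (fuzzy matching and scoring), so this is only an approximation:

```python
# Simplified approximation of profile matching: a profile matches when each
# of its non-empty filter lists contains the queried value.
def pick_profile(profiles, filter_data):
    for profile in profiles:
        if all(
            not profile.get(key) or value in profile[key]
            for key, value in filter_data.items()
        ):
            return profile
    return None

profiles = [
    {"tasks": [], "task_types": ["Animation"], "hosts": ["blender"],
     "use_last_published_workfile": True},
]
match = pick_profile(
    profiles,
    {"tasks": "animLayout", "task_types": "Animation", "hosts": "blender"},
)
print(match["use_last_published_workfile"])  # True
```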


@ -237,7 +237,7 @@ function main(websocket_url){
RPC.addRoute('AfterEffects.get_render_info', function (data) {
log.warn('Server called client route "get_render_info":', data);
return runEvalScript("getRenderInfo(" + data.comp_id + ")")
.then(function(result){
log.warn("get_render_info: " + result);
return result;
@ -289,7 +289,7 @@ function main(websocket_url){
RPC.addRoute('AfterEffects.render', function (data) {
log.warn('Server called client route "render":', data);
var escapedPath = EscapeStringForJSX(data.folder_url);
return runEvalScript("render('" + escapedPath + "', " + data.comp_id + ")")
.then(function(result){
log.warn("render: " + result);
return result;


@ -395,41 +395,84 @@ function saveAs(path){
app.project.save(fp = new File(path));
}
function getRenderInfo(comp_id){
    /***
        Get info from render queue.
        Currently pulls only file name to parse extension and
        if it is sequence in Python

        Args:
            comp_id (int): id of composition
        Return:
            (list) [{file_name:"xx.png", width:00, height:00}]
    **/
    var item = app.project.itemByID(comp_id);
    if (!item){
        return _prepareError("Composition with '" + comp_id + "' wasn't found! Recreate publishable instance(s)");
    }
    var comp_name = item.name;
    var output_metadata = [];
    try{
        // render_item.duplicate() should create a new item on renderQueue,
        // BUT it works only sometimes, there is some weird synchronization issue.
        // This method is always called before render, so prepare the items here
        // for render to spare the hassle.
        for (i = 1; i <= app.project.renderQueue.numItems; ++i){
            var render_item = app.project.renderQueue.item(i);
            if (render_item.comp.id != comp_id){
                continue;
            }
            if (render_item.status == RQItemStatus.DONE){
                render_item.duplicate(); // create new, cannot change status if DONE
                render_item.remove();    // remove existing to limit duplications
                continue;
            }
        }
        // validate properly as `numItems` won't change magically
        var comp_id_count = 0;
        for (i = 1; i <= app.project.renderQueue.numItems; ++i){
            var render_item = app.project.renderQueue.item(i);
            if (render_item.comp.id != comp_id){
                continue;
            }
            comp_id_count += 1;
            render_item.render = true; // always set render queue item to render
            var item = render_item.outputModule(1);
            for (j = 1; j <= render_item.numOutputModules; ++j){
                var file_url = item.file.toString();
                output_metadata.push(
                    JSON.stringify({
                        "file_name": file_url,
                        "width": render_item.comp.width,
                        "height": render_item.comp.height
                    })
                );
            }
        }
    } catch (error) {
        return _prepareError("There is no render queue, create one");
    }
    if (comp_id_count > 1){
        return _prepareError("There cannot be more items in Render Queue for '" + comp_name + "'!");
    }
    if (comp_id_count == 0){
        return _prepareError("There is no item in Render Queue for '" + comp_name + "'! Add composition to Render Queue.");
    }
    return '[' + output_metadata.join() + ']';
}
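`getRenderInfo` now returns a JSON array string, one record per output module, which the Python side can decode directly. A quick sketch with a made-up payload:

```python
import json

# Made-up payload in the shape produced by getRenderInfo above.
payload = '[{"file_name": "renderMain.png", "width": 1920, "height": 1080}]'
records = json.loads(payload)
print(records[0]["file_name"])  # renderMain.png
```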
function getAudioUrlForComp(comp_id){
/**
* Searches composition for audio layer
*
* Only single AVLayer is expected!
* Used for collecting Audio
*
* Args:
* comp_id (int): id of composition
* Return:
@ -457,7 +500,7 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){
/**
* Adds already imported FootageItem ('item_id') as a new
* layer to composition ('comp_id').
*
* Args:
* comp_id (int): id of target composition
* item_id (int): FootageItem.id
@ -480,17 +523,17 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){
function importBackground(comp_id, composition_name, files_to_import){
/**
* Imports backgrounds images to existing or new composition.
*
* If comp_id is not provided, new composition is created; basic
* values (width, height, frameRatio) are taken from the first imported
* image.
*
* Args:
* comp_id (int): id of existing composition (null if new)
* composition_name (str): used when new composition
* files_to_import (list): list of absolute paths to import and
* add as layers
*
* Returns:
* (str): json representation (id, name, members)
*/
@ -512,7 +555,7 @@ function importBackground(comp_id, composition_name, files_to_import){
}
}
if (files_to_import){
for (i = 0; i < files_to_import.length; ++i){
item = _importItem(files_to_import[i]);
@ -524,8 +567,8 @@ function importBackground(comp_id, composition_name, files_to_import){
if (!comp){
folder = app.project.items.addFolder(composition_name);
imported_ids.push(folder.id);
comp = app.project.items.addComp(composition_name, item.width,
item.height, item.pixelAspect,
1, 26.7); // hardcode defaults
imported_ids.push(comp.id);
comp.parentFolder = folder;
@ -534,7 +577,7 @@ function importBackground(comp_id, composition_name, files_to_import){
item.parentFolder = folder;
addItemAsLayerToComp(comp.id, item.id, comp);
}
}
var item = {"name": comp.name,
"id": folder.id,
@ -545,19 +588,19 @@ function importBackground(comp_id, composition_name, files_to_import){
function reloadBackground(comp_id, composition_name, files_to_import){
/**
* Reloads existing composition.
*
* It deletes complete composition with encompassing folder, recreates
* from scratch via 'importBackground' functionality.
*
* Args:
* comp_id (int): id of existing composition (null if new)
* composition_name (str): used when new composition
* files_to_import (list): list of absolute paths to import and
* add as layers
*
* Returns:
* (str): json representation (id, name, members)
*
*/
var imported_ids = []; // keep track of members of composition
comp = app.project.itemByID(comp_id);
@ -620,7 +663,7 @@ function reloadBackground(comp_id, composition_name, files_to_import){
function _get_file_name(file_url){
/**
* Returns file name without extension from 'file_url'
*
* Args:
* file_url (str): full absolute url
* Returns:
@ -635,7 +678,7 @@ function _delete_obsolete_items(folder, new_filenames){
/***
* Goes through 'folder' and removes layers not in new
* background
*
* Args:
* folder (FolderItem)
* new_filenames (array): list of layer names in new bg
@ -660,14 +703,14 @@ function _delete_obsolete_items(folder, new_filenames){
function _importItem(file_url){
/**
* Imports 'file_url' as new FootageItem
*
* Args:
* file_url (str): file url with content
* Returns:
* (FootageItem)
*/
file_name = _get_file_name(file_url);
//importFile prepared previously to return json
item_json = importFile(file_url, file_name, JSON.stringify({"ImportAsType":"FOOTAGE"}));
item_json = JSON.parse(item_json);
@ -689,30 +732,42 @@ function isFileSequence (item){
return false;
}
function render(target_folder, comp_id){
    var out_dir = new Folder(target_folder);
    out_dir = out_dir.fsName;
    for (i = 1; i <= app.project.renderQueue.numItems; ++i){
        var render_item = app.project.renderQueue.item(i);
        var composition = render_item.comp;
        if (composition.id == comp_id){
            if (render_item.status == RQItemStatus.DONE){
                var new_item = render_item.duplicate();
                render_item.remove();
                render_item = new_item;
            }
            render_item.render = true;
            var om1 = app.project.renderQueue.item(i).outputModule(1);
            var file_name = File.decode( om1.file.name ).replace('℗', ''); // Name contains special character, space?
            var omItem1_settable_str = app.project.renderQueue.item(i).outputModule(1).getSettings( GetSettingsFormat.STRING_SETTABLE );
            var targetFolder = new Folder(target_folder);
            if (!targetFolder.exists) {
                targetFolder.create();
            }
            om1.file = new File(targetFolder.fsName + '/' + file_name);
        }else{
            if (render_item.status != RQItemStatus.DONE){
                render_item.render = false;
            }
        }
    }
app.beginSuppressDialogs();
app.project.renderQueue.render();
app.endSuppressDialogs(false);
}
function close(){


@ -418,18 +418,18 @@ class AfterEffectsServerStub():
return self._handle_return(res)

def get_render_info(self, comp_id):
    """ Get render queue info for render purposes

        Returns:
            (list) of (AEItem): with 'file_name' field
    """
    res = self.websocketserver.call(self.client.call
                                    ('AfterEffects.get_render_info',
                                     comp_id=comp_id))

    records = self._to_records(self._handle_return(res))
    return records
def get_audio_url(self, item_id):
""" Get audio layer absolute url for comp
@ -522,7 +522,7 @@ class AfterEffectsServerStub():
if records:
return records.pop()
def render(self, folder_url, comp_id):
    """
        Render all renderqueueitem to 'folder_url'
        Args:
@ -531,7 +531,8 @@ class AfterEffectsServerStub():
    """
    res = self.websocketserver.call(self.client.call
                                    ('AfterEffects.render',
                                     folder_url=folder_url,
                                     comp_id=comp_id))

    return self._handle_return(res)
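Both stub methods now take a `comp_id`. A minimal mock sketch of the updated call shapes (`FakeStub` and its return values are invented; the real `AfterEffectsServerStub` talks to After Effects over the websocket server):

```python
# Invented mock mirroring the updated AfterEffectsServerStub signatures.
class FakeStub:
    def get_render_info(self, comp_id):
        # now returns a list of records, one per output module
        return [{"file_name": "main.png", "width": 1920, "height": 1080}]

    def render(self, folder_url, comp_id):
        return "rendering comp {} into {}".format(comp_id, folder_url)

stub = FakeStub()
records = stub.get_render_info(comp_id=42)
print(len(records), records[0]["file_name"])  # 1 main.png
print(stub.render("/tmp/renders", comp_id=42))
```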
def get_extension_version(self):


@ -1,3 +1,5 @@
import re
from openpype import resources
from openpype.lib import BoolDef, UISeparatorDef
from openpype.hosts.aftereffects import api
@ -8,6 +10,7 @@ from openpype.pipeline import (
legacy_io,
)
from openpype.hosts.aftereffects.api.pipeline import cache_and_get_instances
from openpype.lib import prepare_template_data
class RenderCreator(Creator):
@ -44,46 +47,71 @@ class RenderCreator(Creator):
for created_inst, _changes in update_list:
    api.get_stub().imprint(created_inst.get("instance_id"),
                           created_inst.data_to_store())
    subset_change = _changes.get("subset")
    if subset_change:
        api.get_stub().rename_item(created_inst.data["members"][0],
                                   subset_change[1])

def remove_instances(self, instances):
    for instance in instances:
        self._remove_instance_from_context(instance)
        self.host.remove_instance(instance)

        subset = instance.data["subset"]
        comp_id = instance.data["members"][0]
        comp = api.get_stub().get_item(comp_id)
        if comp:
            new_comp_name = comp.name.replace(subset, '')
            if not new_comp_name:
                new_comp_name = "dummyCompName"
            api.get_stub().rename_item(comp_id,
                                       new_comp_name)

def create(self, subset_name_from_ui, data, pre_create_data):
    stub = api.get_stub()  # only after After Effects is up
    if pre_create_data.get("use_selection"):
        comps = stub.get_selected_items(
            comps=True, folders=False, footages=False
        )
    else:
        comps = stub.get_items(comps=True, folders=False, footages=False)

    if not comps:
        raise CreatorError(
            "Nothing to create. Select composition "
            "if 'useSelection' or create at least "
            "one composition."
        )

    for comp in comps:
        if pre_create_data.get("use_composition_name"):
            composition_name = comp.name
            dynamic_fill = prepare_template_data({"composition":
                                                  composition_name})
            subset_name = subset_name_from_ui.format(**dynamic_fill)
            data["composition_name"] = composition_name
        else:
            subset_name = subset_name_from_ui
            subset_name = re.sub(r"\{composition\}", '', subset_name,
                                 flags=re.IGNORECASE)

        for inst in self.create_context.instances:
            if subset_name == inst.subset_name:
                raise CreatorError("{} already exists".format(
                    inst.subset_name))

        data["members"] = [comp.id]
        new_instance = CreatedInstance(self.family, subset_name, data,
                                       self)
        if "farm" in pre_create_data:
            use_farm = pre_create_data["farm"]
            new_instance.creator_attributes["farm"] = use_farm
        api.get_stub().imprint(new_instance.id,
                               new_instance.data_to_store())
        self._add_instance_to_context(new_instance)
        stub.rename_item(comp.id, subset_name)
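The `{composition}` token handling can be sketched in isolation. `prepare_template_data` is assumed here to expose each key in lowercase and capitalized forms, so this helper is a simplified stand-in, not the real implementation:

```python
import re

# Simplified stand-in for prepare_template_data: it is assumed to expose the
# key in lowercase and capitalized forms for str.format().
def fill_subset_name(subset_name_from_ui, composition_name=None):
    if composition_name:
        fill = {
            "composition": composition_name,
            "Composition": composition_name.capitalize(),
        }
        return subset_name_from_ui.format(**fill)
    # without a composition name the token is stripped entirely
    return re.sub(r"\{composition\}", "", subset_name_from_ui,
                  flags=re.IGNORECASE)

print(fill_subset_name("render{Composition}Main", "intro"))  # renderIntroMain
print(fill_subset_name("render{composition}Main"))           # renderMain
```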
def get_default_variants(self):
return self._default_variants
@ -94,6 +122,8 @@ class RenderCreator(Creator):
def get_pre_create_attr_defs(self):
    output = [
        BoolDef("use_selection", default=True, label="Use selection"),
        BoolDef("use_composition_name",
                label="Use composition name in subset"),
        UISeparatorDef(),
        BoolDef("farm", label="Render on farm")
    ]
@ -102,6 +132,18 @@ class RenderCreator(Creator):
def get_detail_description(self):
return """Creator for Render instances"""
def get_dynamic_data(self, variant, task_name, asset_doc,
                     project_name, host_name, instance):
    dynamic_data = {}
    if instance is not None:
        composition_name = instance.get("composition_name")
        if composition_name:
            dynamic_data["composition"] = composition_name
    else:
        dynamic_data["composition"] = "{composition}"

    return dynamic_data
def _handle_legacy(self, instance_data):
"""Converts old instances to new format."""
if not instance_data.get("members"):


@ -22,7 +22,7 @@ class AERenderInstance(RenderInstance):
stagingDir = attr.ib(default=None)
app_version = attr.ib(default=None)
publish_attributes = attr.ib(default={})
file_names = attr.ib(default=[])
class CollectAERender(publish.AbstractCollectRender):
@ -64,14 +64,13 @@ class CollectAERender(publish.AbstractCollectRender):
if family not in ["render", "renderLocal"]: # legacy
continue
comp_id = int(inst.data["members"][0])

work_area_info = CollectAERender.get_stub().get_work_area(comp_id)

if not work_area_info:
    self.log.warning("Orphaned instance, deleting metadata")
    inst_id = inst.get("instance_id") or str(comp_id)
    CollectAERender.get_stub().remove_instance(inst_id)
    continue
@ -84,9 +83,10 @@ class CollectAERender(publish.AbstractCollectRender):
task_name = inst.data.get("task") # legacy
render_q = CollectAERender.get_stub().get_render_info(comp_id)
if not render_q:
raise ValueError("No file extension set in Render Queue")
render_item = render_q[0]
subset_name = inst.data["subset"]
instance = AERenderInstance(
@ -103,8 +103,8 @@ class CollectAERender(publish.AbstractCollectRender):
setMembers='',
publish=True,
name=subset_name,
resolutionWidth=render_item.width,
resolutionHeight=render_item.height,
pixelAspect=1,
tileRendering=False,
tilesX=0,
@ -115,16 +115,16 @@ class CollectAERender(publish.AbstractCollectRender):
fps=fps,
app_version=app_version,
publish_attributes=inst.data.get("publish_attributes", {}),
file_names=[item.file_name for item in render_q]
)
comp = compositions_by_id.get(comp_id)
if not comp:
raise ValueError("There is no composition for item {}".
format(item_id))
format(comp_id))
instance.outputDir = self._get_output_dir(instance)
instance.comp_name = comp.name
instance.comp_id = item_id
instance.comp_id = comp_id
is_local = "renderLocal" in inst.data["family"] # legacy
if inst.data.get("creator_attributes"):
@@ -163,28 +163,30 @@ class CollectAERender(publish.AbstractCollectRender):
start = render_instance.frameStart
end = render_instance.frameEnd
_, ext = os.path.splitext(os.path.basename(render_instance.file_name))
base_dir = self._get_output_dir(render_instance)
expected_files = []
if "#" not in render_instance.file_name:  # single frame (mov)
path = os.path.join(base_dir, "{}_{}_{}.{}".format(
render_instance.asset,
render_instance.subset,
"v{:03d}".format(render_instance.version),
ext.replace('.', '')
))
expected_files.append(path)
else:
for frame in range(start, end + 1):
path = os.path.join(base_dir, "{}_{}_{}.{}.{}".format(
for file_name in render_instance.file_names:
_, ext = os.path.splitext(os.path.basename(file_name))
ext = ext.replace('.', '')
version_str = "v{:03d}".format(render_instance.version)
if "#" not in file_name:  # single frame (mov)
path = os.path.join(base_dir, "{}_{}_{}.{}".format(
render_instance.asset,
render_instance.subset,
"v{:03d}".format(render_instance.version),
str(frame).zfill(self.padding_width),
ext.replace('.', '')
version_str,
ext
))
expected_files.append(path)
else:
for frame in range(start, end + 1):
path = os.path.join(base_dir, "{}_{}_{}.{}.{}".format(
render_instance.asset,
render_instance.subset,
version_str,
str(frame).zfill(self.padding_width),
ext
))
expected_files.append(path)
return expected_files
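For reference, the expected-files branch above can be exercised as a standalone sketch; the asset, subset, and path values below are hypothetical examples, not taken from the commit:

```python
import os

def expected_render_files(base_dir, asset, subset, version, file_names,
                          frame_start, frame_end, padding_width=4):
    # One path per movie output; one path per frame for "#"-padded
    # image-sequence outputs (mirrors the collector's loop above).
    expected = []
    version_str = "v{:03d}".format(version)
    for file_name in file_names:
        ext = os.path.splitext(file_name)[1].lstrip(".")
        if "#" not in file_name:  # single-file output (e.g. mov)
            expected.append(os.path.join(base_dir, "{}_{}_{}.{}".format(
                asset, subset, version_str, ext)))
        else:
            for frame in range(frame_start, frame_end + 1):
                expected.append(os.path.join(
                    base_dir, "{}_{}_{}.{}.{}".format(
                        asset, subset, version_str,
                        str(frame).zfill(padding_width), ext)))
    return expected

files = expected_render_files(
    "/tmp/render", "sh010", "renderMain", 1,
    ["render.mov", "render.#.png"], 1001, 1003)
print(len(files))  # 4: one mov plus three png frames
```

This is the behaviour change the hunk introduces: iterating `file_names` instead of a single `file_name`, so one render instance can expect both a movie and an image sequence.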
def _get_output_dir(self, render_instance):


@@ -21,41 +21,55 @@ class ExtractLocalRender(publish.Extractor):
def process(self, instance):
stub = get_stub()
staging_dir = instance.data["stagingDir"]
self.log.info("staging_dir::{}".format(staging_dir))
self.log.debug("staging_dir::{}".format(staging_dir))
# pull file name from Render Queue Output module
render_q = stub.get_render_info()
stub.render(staging_dir)
if not render_q:
# pull file name collected value from Render Queue Output module
if not instance.data["file_names"]:
raise ValueError("No file extension set in Render Queue")
_, ext = os.path.splitext(os.path.basename(render_q.file_name))
ext = ext[1:]
first_file_path = None
files = []
self.log.info("files::{}".format(os.listdir(staging_dir)))
for file_name in os.listdir(staging_dir):
files.append(file_name)
if first_file_path is None:
first_file_path = os.path.join(staging_dir,
file_name)
comp_id = instance.data['comp_id']
stub.render(staging_dir, comp_id)
resulting_files = files
if len(files) == 1:
resulting_files = files[0]
representations = []
for file_name in instance.data["file_names"]:
_, ext = os.path.splitext(os.path.basename(file_name))
ext = ext[1:]
repre_data = {
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"name": ext,
"ext": ext,
"files": resulting_files,
"stagingDir": staging_dir
}
if instance.data["review"]:
repre_data["tags"] = ["review"]
first_file_path = None
files = []
for found_file_name in os.listdir(staging_dir):
if not found_file_name.endswith(ext):
continue
instance.data["representations"] = [repre_data]
files.append(found_file_name)
if first_file_path is None:
first_file_path = os.path.join(staging_dir,
found_file_name)
if not files:
self.log.info("no files")
return
# single file cannot be wrapped in array
resulting_files = files
if len(files) == 1:
resulting_files = files[0]
repre_data = {
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"name": ext,
"ext": ext,
"files": resulting_files,
"stagingDir": staging_dir
}
first_repre = not representations
if instance.data["review"] and first_repre:
repre_data["tags"] = ["review"]
representations.append(repre_data)
instance.data["representations"] = representations
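A standalone sketch of the representation building above, assuming a hypothetical staging-directory listing: a single matching file is passed bare, multiple files as a list, and only the first representation receives the "review" tag.

```python
import os

def build_representations(file_names, staging_files, frame_start, frame_end,
                          staging_dir, review=True):
    # One representation per Render Queue output extension; files are
    # matched against the staging listing by extension.
    representations = []
    for file_name in file_names:
        ext = os.path.splitext(file_name)[1][1:]
        files = [f for f in staging_files if f.endswith(ext)]
        if not files:
            continue
        # single file cannot be wrapped in array
        resulting_files = files if len(files) > 1 else files[0]
        repre = {
            "frameStart": frame_start,
            "frameEnd": frame_end,
            "name": ext,
            "ext": ext,
            "files": resulting_files,
            "stagingDir": staging_dir,
        }
        if review and not representations:
            repre["tags"] = ["review"]
        representations.append(repre)
    return representations

repres = build_representations(
    ["out.mov", "out.#.png"],
    ["out.mov", "out.1001.png", "out.1002.png"],
    1001, 1002, "/tmp/staging")
print(repres[0]["files"])  # out.mov
print(repres[1]["files"])  # ['out.1001.png', 'out.1002.png']
```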
ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
# Generate thumbnail.


@@ -4,13 +4,13 @@ import shutil
from copy import deepcopy
from xml.etree import ElementTree as ET
import qargparse
from Qt import QtCore, QtWidgets
import qargparse
from openpype import style
from openpype.settings import get_current_project_settings
from openpype.lib import Logger
from openpype.pipeline import LegacyCreator, LoaderPlugin
from openpype.settings import get_current_project_settings
from . import constants
from . import lib as flib
@@ -690,6 +690,54 @@ class ClipLoader(LoaderPlugin):
)
]
_mapping = None
def get_colorspace(self, context):
"""Get colorspace name
Look either to version data or representation data.
Args:
context (dict): version context data
Returns:
str: colorspace name or None
"""
version = context['version']
version_data = version.get("data", {})
colorspace = version_data.get(
"colorspace", None
)
if (
not colorspace
or colorspace == "Unknown"
):
colorspace = context["representation"]["data"].get(
"colorspace", None)
return colorspace
@classmethod
def get_native_colorspace(cls, input_colorspace):
"""Return native colorspace name.
Args:
input_colorspace (str | None): colorspace name
Returns:
str: native colorspace name defined in mapping or None
"""
if not cls._mapping:
settings = get_current_project_settings()["flame"]
mapping = settings["imageio"]["profilesMapping"]["inputs"]
cls._mapping = {
input["ocioName"]: input["flameName"]
for input in mapping
}
return cls._mapping.get(input_colorspace)
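Taken together, the two helpers above resolve a clip's colorspace and map it to a Flame-native name. A minimal standalone sketch, where the settings dict is a hypothetical example of the flame `imageio.profilesMapping.inputs` structure:

```python
def get_colorspace(context):
    # Prefer the version's colorspace; fall back to the representation
    # when it is missing or "Unknown".
    version_data = context["version"].get("data", {})
    colorspace = version_data.get("colorspace")
    if not colorspace or colorspace == "Unknown":
        colorspace = context["representation"]["data"].get("colorspace")
    return colorspace

def get_native_colorspace(input_colorspace, settings):
    # Build the OCIO-name -> Flame-name lookup from the profiles mapping.
    mapping = settings["imageio"]["profilesMapping"]["inputs"]
    lookup = {item["ocioName"]: item["flameName"] for item in mapping}
    return lookup.get(input_colorspace)

flame_settings = {
    "imageio": {"profilesMapping": {"inputs": [
        {"ocioName": "ACES - ACEScg", "flameName": "ACEScg"},
        {"ocioName": "Output - Rec.709", "flameName": "Rec.709 video"},
    ]}}
}
context = {
    "version": {"data": {"colorspace": "Unknown"}},
    "representation": {"data": {"colorspace": "ACES - ACEScg"}},
}
print(get_native_colorspace(get_colorspace(context), flame_settings))  # ACEScg
```

Unknown OCIO names simply fall through to `None`, which is why the loaders log the resolved value before building the clip.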
class OpenClipSolver(flib.MediaInfoFile):
create_new_clip = False


@@ -36,14 +36,15 @@ class LoadClip(opfapi.ClipLoader):
version = context['version']
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
colorspace = self.get_colorspace(context)
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping
# in imageio flame section
colorspace = colorspace
colorspace = self.get_native_colorspace(colorspace)
self.log.info("Loading with colorspace: `{}`".format(colorspace))
# create workfile path
workfile_dir = os.environ["AVALON_WORKDIR"]


@@ -35,7 +35,7 @@ class LoadClipBatch(opfapi.ClipLoader):
version = context['version']
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
colorspace = self.get_colorspace(context)
# in case output is not in context replace key to representation
if not context["representation"]["context"].get("output"):
@@ -47,10 +47,10 @@ class LoadClipBatch(opfapi.ClipLoader):
clip_name = StringTemplate(self.clip_name_template).format(
formating_data)
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping
# in imageio flame section
colorspace = colorspace
colorspace = self.get_native_colorspace(colorspace)
self.log.info("Loading with colorspace: `{}`".format(colorspace))
# create workfile path
workfile_dir = options.get("workdir") or os.environ["AVALON_WORKDIR"]


@@ -30,9 +30,15 @@ from .lib import (
get_timeline_selection,
get_current_track,
get_track_item_tags,
get_track_openpype_tag,
set_track_openpype_tag,
get_track_openpype_data,
get_track_item_pype_tag,
set_track_item_pype_tag,
get_track_item_pype_data,
get_trackitem_openpype_tag,
set_trackitem_openpype_tag,
get_trackitem_openpype_data,
set_publish_attribute,
get_publish_attribute,
imprint,
@@ -85,9 +91,12 @@ __all__ = [
"get_timeline_selection",
"get_current_track",
"get_track_item_tags",
"get_track_item_pype_tag",
"set_track_item_pype_tag",
"get_track_item_pype_data",
"get_track_openpype_tag",
"set_track_openpype_tag",
"get_track_openpype_data",
"get_trackitem_openpype_tag",
"set_trackitem_openpype_tag",
"get_trackitem_openpype_data",
"set_publish_attribute",
"get_publish_attribute",
"imprint",
@@ -99,6 +108,10 @@ __all__ = [
"apply_colorspace_project",
"apply_colorspace_clips",
"get_sequence_pattern_and_padding",
# deprecated
"get_track_item_pype_tag",
"set_track_item_pype_tag",
"get_track_item_pype_data",
# plugins
"CreatorWidget",


@@ -7,11 +7,15 @@ import os
import re
import sys
import platform
import functools
import warnings
import json
import ast
import secrets
import shutil
import hiero
from Qt import QtWidgets
from Qt import QtWidgets, QtCore, QtXml
from openpype.client import get_project
from openpype.settings import get_project_settings
@@ -20,15 +24,51 @@ from openpype.pipeline.load import filter_containers
from openpype.lib import Logger
from . import tags
try:
from PySide.QtCore import QFile, QTextStream
from PySide.QtXml import QDomDocument
except ImportError:
from PySide2.QtCore import QFile, QTextStream
from PySide2.QtXml import QDomDocument
# from opentimelineio import opentime
# from pprint import pformat
class DeprecatedWarning(DeprecationWarning):
pass
def deprecated(new_destination):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
func = None
if callable(new_destination):
func = new_destination
new_destination = None
def _decorator(decorated_func):
if new_destination is None:
warning_message = (
" Please check content of deprecated function to figure out"
" possible replacement."
)
else:
warning_message = " Please replace your usage with '{}'.".format(
new_destination
)
@functools.wraps(decorated_func)
def wrapper(*args, **kwargs):
warnings.simplefilter("always", DeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'"
"\nFunction was moved or removed.{}"
).format(decorated_func.__name__, warning_message),
category=DeprecatedWarning,
stacklevel=4
)
return decorated_func(*args, **kwargs)
return wrapper
if func is None:
return _decorator
return _decorator(func)
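A self-contained sketch of how the decorator above behaves; the replacement path and the decorated function name are hypothetical:

```python
import functools
import warnings

class DeprecatedWarning(DeprecationWarning):
    pass

def deprecated(new_destination):
    # Accept bare usage (@deprecated) or usage with a replacement path.
    func = None
    if callable(new_destination):
        func = new_destination
        new_destination = None

    def _decorator(decorated_func):
        if new_destination is None:
            warning_message = (
                " Please check content of deprecated function to figure out"
                " possible replacement."
            )
        else:
            warning_message = " Please replace your usage with '{}'.".format(
                new_destination)

        @functools.wraps(decorated_func)
        def wrapper(*args, **kwargs):
            warnings.simplefilter("always", DeprecatedWarning)
            warnings.warn(
                "Call to deprecated function '{}'.{}".format(
                    decorated_func.__name__, warning_message),
                category=DeprecatedWarning,
                stacklevel=2,
            )
            # the wrapped function still runs and returns normally
            return decorated_func(*args, **kwargs)
        return wrapper

    if func is None:
        return _decorator
    return _decorator(func)

@deprecated("new_module.new_tag_getter")
def old_tag_getter(value):
    return value

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_tag_getter(42)

print(result)  # 42
print(caught[0].category.__name__)  # DeprecatedWarning
```

This is the mechanism the commit uses to keep the old `*_pype_*` helper names working while steering callers to the renamed `*_openpype_*` functions.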
log = Logger.get_logger(__name__)
@@ -301,7 +341,124 @@ def get_track_item_tags(track_item):
return returning_tag_data
def _get_tag_unique_hash():
# sourcery skip: avoid-builtin-shadow
return secrets.token_hex(nbytes=4)
def set_track_openpype_tag(track, data=None):
"""
Set openpype track tag to input track object.
Attributes:
track (hiero.core.VideoTrack): hiero object
Returns:
hiero.core.Tag
"""
data = data or {}
# basic Tag's attribute
tag_data = {
"editable": "0",
"note": "OpenPype data container",
"icon": "openpype_icon.png",
"metadata": dict(data.items())
}
# get available pype tag if any
_tag = get_track_openpype_tag(track)
if _tag:
# if tag already exists, update it with input data
tag = tags.update_tag(_tag, tag_data)
else:
# if no tag exists, create one
tag = tags.create_tag(
"{}_{}".format(
self.pype_tag_name,
_get_tag_unique_hash()
),
tag_data
)
# add it to the input track item
track.addTag(tag)
return tag
def get_track_openpype_tag(track):
"""
Get openpype track tag created by creator or loader plugin.
Attributes:
track (hiero.core.VideoTrack): hiero object
Returns:
hiero.core.Tag: hierarchy, orig clip attributes
"""
# get all tags from track item
_tags = track.tags()
if not _tags:
return None
for tag in _tags:
# return only correct tag defined by global name
if self.pype_tag_name in tag.name():
return tag
def get_track_openpype_data(track, container_name=None):
"""
Get track's openpype tag data.
Attributes:
track (hiero.core.VideoTrack): hiero object
Returns:
dict: data found on pype tag
"""
return_data = {}
# get pype data tag from track item
tag = get_track_openpype_tag(track)
if not tag:
return None
# get tag metadata attribute
tag_data = deepcopy(dict(tag.metadata()))
for obj_name, obj_data in tag_data.items():
obj_name = obj_name.replace("tag.", "")
if obj_name in ["applieswhole", "note", "label"]:
continue
return_data[obj_name] = json.loads(obj_data)
return (
return_data[container_name]
if container_name
else return_data
)
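The metadata walk above can be sketched in isolation. Hiero prefixes every metadata key with "tag." and the container payloads are stored as JSON strings; the tag keys below are hypothetical examples:

```python
import json
from copy import deepcopy

def parse_tag_metadata(tag_metadata):
    # Strip the "tag." prefix, skip Hiero's own bookkeeping keys and
    # decode each container payload (sketch of get_track_openpype_data).
    return_data = {}
    for obj_name, obj_data in deepcopy(dict(tag_metadata)).items():
        obj_name = obj_name.replace("tag.", "")
        if obj_name in ["applieswhole", "note", "label"]:
            continue
        return_data[obj_name] = json.loads(obj_data)
    return return_data

raw = {
    "tag.note": "OpenPype data container",
    "tag.effectMain": json.dumps(
        {"name": "effectMain", "loader": "LoadEffects"}),
}
print(parse_tag_metadata(raw))
# {'effectMain': {'name': 'effectMain', 'loader': 'LoadEffects'}}
```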
@deprecated("openpype.hosts.hiero.api.lib.get_trackitem_openpype_tag")
def get_track_item_pype_tag(track_item):
# backward compatibility alias
return get_trackitem_openpype_tag(track_item)
@deprecated("openpype.hosts.hiero.api.lib.set_trackitem_openpype_tag")
def set_track_item_pype_tag(track_item, data=None):
# backward compatibility alias
return set_trackitem_openpype_tag(track_item, data)
@deprecated("openpype.hosts.hiero.api.lib.get_trackitem_openpype_data")
def get_track_item_pype_data(track_item):
# backward compatibility alias
return get_trackitem_openpype_data(track_item)
def get_trackitem_openpype_tag(track_item):
"""
Get pype track item tag created by creator or loader plugin.
@@ -317,16 +474,16 @@ def get_track_item_pype_tag(track_item):
return None
for tag in _tags:
# return only correct tag defined by global name
if tag.name() == self.pype_tag_name:
if self.pype_tag_name in tag.name():
return tag
def set_track_item_pype_tag(track_item, data=None):
def set_trackitem_openpype_tag(track_item, data=None):
"""
Set pype track item tag to input track_item.
Set openpype tag to input track item.
Attributes:
trackItem (hiero.core.TrackItem): hiero object
track_item (hiero.core.TrackItem): hiero object
Returns:
hiero.core.Tag
@@ -341,21 +498,26 @@ def set_track_item_pype_tag(track_item, data=None):
"metadata": dict(data.items())
}
# get available pype tag if any
_tag = get_track_item_pype_tag(track_item)
_tag = get_trackitem_openpype_tag(track_item)
if _tag:
# if tag already exists, update it with input data
tag = tags.update_tag(_tag, tag_data)
else:
# if no tag exists, create one
tag = tags.create_tag(self.pype_tag_name, tag_data)
tag = tags.create_tag(
"{}_{}".format(
self.pype_tag_name,
_get_tag_unique_hash()
),
tag_data
)
# add it to the input track item
track_item.addTag(tag)
return tag
def get_track_item_pype_data(track_item):
def get_trackitem_openpype_data(track_item):
"""
Get track item's pype tag data.
@@ -367,7 +529,7 @@ def get_track_item_pype_data(track_item):
"""
data = {}
# get pype data tag from track item
tag = get_track_item_pype_tag(track_item)
tag = get_trackitem_openpype_tag(track_item)
if not tag:
return None
@@ -420,7 +582,7 @@ def imprint(track_item, data=None):
"""
data = data or {}
tag = set_track_item_pype_tag(track_item, data)
tag = set_trackitem_openpype_tag(track_item, data)
# add publish attribute
set_publish_attribute(tag, True)
@@ -832,22 +994,22 @@ def set_selected_track_items(track_items_list, sequence=None):
def _read_doc_from_path(path):
# reading QDomDocument from HROX path
hrox_file = QFile(path)
if not hrox_file.open(QFile.ReadOnly):
# reading QtXml.QDomDocument from HROX path
hrox_file = QtCore.QFile(path)
if not hrox_file.open(QtCore.QFile.ReadOnly):
raise RuntimeError("Failed to open file for reading")
doc = QDomDocument()
doc = QtXml.QDomDocument()
doc.setContent(hrox_file)
hrox_file.close()
return doc
def _write_doc_to_path(doc, path):
# write QDomDocument to path as HROX
hrox_file = QFile(path)
if not hrox_file.open(QFile.WriteOnly):
# write QtXml.QDomDocument to path as HROX
hrox_file = QtCore.QFile(path)
if not hrox_file.open(QtCore.QFile.WriteOnly):
raise RuntimeError("Failed to open file for writing")
stream = QTextStream(hrox_file)
stream = QtCore.QTextStream(hrox_file)
doc.save(stream, 1)
hrox_file.close()
@@ -1030,7 +1192,7 @@ def sync_clip_name_to_data_asset(track_items_list):
# get name and data
ti_name = track_item.name()
data = get_track_item_pype_data(track_item)
data = get_trackitem_openpype_data(track_item)
# ignore if no data on the clip or not publish instance
if not data:
@@ -1042,10 +1204,10 @@
if data["asset"] != ti_name:
data["asset"] = ti_name
# remove the original tag
tag = get_track_item_pype_tag(track_item)
tag = get_trackitem_openpype_tag(track_item)
track_item.removeTag(tag)
# create new tag with updated data
set_track_item_pype_tag(track_item, data)
set_trackitem_openpype_tag(track_item, data)
print("asset was changed in clip: {}".format(ti_name))
@@ -1083,10 +1245,10 @@ def check_inventory_versions(track_items=None):
project_name = legacy_io.active_project()
filter_result = filter_containers(containers, project_name)
for container in filter_result.latest:
set_track_color(container["_track_item"], clip_color)
set_track_color(container["_item"], clip_color)
for container in filter_result.outdated:
set_track_color(container["_track_item"], clip_color_last)
set_track_color(container["_item"], clip_color_last)
def selection_changed_timeline(event):


@@ -1,6 +1,7 @@
"""
Basic avalon integration
"""
from copy import deepcopy
import os
import contextlib
from collections import OrderedDict
@@ -17,6 +18,7 @@ from openpype.pipeline import (
)
from openpype.tools.utils import host_tools
from . import lib, menu, events
import hiero
log = Logger.get_logger(__name__)
@@ -106,7 +108,7 @@ def containerise(track_item,
data_imprint.update({k: v})
log.debug("_ data_imprint: {}".format(data_imprint))
lib.set_track_item_pype_tag(track_item, data_imprint)
lib.set_trackitem_openpype_tag(track_item, data_imprint)
return track_item
@@ -123,79 +125,131 @@ def ls():
"""
# get all track items from current timeline
all_track_items = lib.get_track_items()
all_items = lib.get_track_items()
for track_item in all_track_items:
container = parse_container(track_item)
if container:
yield container
# append all video tracks
for track in lib.get_current_sequence():
if type(track) != hiero.core.VideoTrack:
continue
all_items.append(track)
for item in all_items:
container_data = parse_container(item)
if isinstance(container_data, list):
for _c in container_data:
yield _c
elif container_data:
yield container_data
def parse_container(track_item, validate=True):
def parse_container(item, validate=True):
"""Return container data from track_item's pype tag.
Args:
track_item (hiero.core.TrackItem): A containerised track item.
item (hiero.core.TrackItem or hiero.core.VideoTrack):
A containerised track item.
validate (bool)[optional]: validating with avalon scheme
Returns:
dict: The container schema data for input containerized track item.
"""
def data_to_container(item, data):
if (
not data
or data.get("id") != "pyblish.avalon.container"
):
return
if validate and data and data.get("schema"):
schema.validate(data)
if not isinstance(data, dict):
return
# If not all required data return the empty container
required = ['schema', 'id', 'name',
'namespace', 'loader', 'representation']
if any(key not in data for key in required):
return
container = {key: data[key] for key in required}
container["objectName"] = item.name()
# Store reference to the node object
container["_item"] = item
return container
# convert tag metadata to normal keys names
data = lib.get_track_item_pype_data(track_item)
if (
not data
or data.get("id") != "pyblish.avalon.container"
):
return
if type(item) == hiero.core.VideoTrack:
return_list = []
_data = lib.get_track_openpype_data(item)
if validate and data and data.get("schema"):
schema.validate(data)
if not _data:
return
# convert the data to list and validate them
for _, obj_data in _data.items():
container = data_to_container(item, obj_data)
return_list.append(container)
return return_list
else:
_data = lib.get_trackitem_openpype_data(item)
return data_to_container(item, _data)
if not isinstance(data, dict):
return
# If not all required data return the empty container
required = ['schema', 'id', 'name',
'namespace', 'loader', 'representation']
if not all(key in data for key in required):
return
container = {key: data[key] for key in required}
container["objectName"] = track_item.name()
# Store reference to the node object
container["_track_item"] = track_item
def _update_container_data(container, data):
for key in container:
try:
container[key] = data[key]
except KeyError:
pass
return container
def update_container(track_item, data=None):
"""Update container data to input track_item's pype tag.
def update_container(item, data=None):
"""Update container data to input track_item or track's
openpype tag.
Args:
track_item (hiero.core.TrackItem): A containerised track item.
item (hiero.core.TrackItem or hiero.core.VideoTrack):
A containerised track item.
data (dict)[optional]: dictionary with data to be updated
Returns:
bool: True if container was updated correctly
"""
data = data or dict()
container = lib.get_track_item_pype_data(track_item)
data = data or {}
data = deepcopy(data)
for _key, _value in container.items():
try:
container[_key] = data[_key]
except KeyError:
pass
if type(item) == hiero.core.VideoTrack:
# form object data for test
object_name = data["objectName"]
log.info("Updating container: `{}`".format(track_item.name()))
return bool(lib.set_track_item_pype_tag(track_item, container))
# get all available containers
containers = lib.get_track_openpype_data(item)
container = lib.get_track_openpype_data(item, object_name)
containers = deepcopy(containers)
container = deepcopy(container)
# update data in container
updated_container = _update_container_data(container, data)
# merge updated container back to containers
containers.update({object_name: updated_container})
return bool(lib.set_track_openpype_tag(item, containers))
else:
container = lib.get_trackitem_openpype_data(item)
updated_container = _update_container_data(container, data)
log.info("Updating container: `{}`".format(item.name()))
return bool(lib.set_trackitem_openpype_tag(item, updated_container))
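A minimal sketch of the VideoTrack branch above: the track tag stores one container per objectName, and only keys already present in the matching container are updated. The container data below is hypothetical:

```python
from copy import deepcopy

def update_track_containers(containers, object_name, data):
    # Deep-copy so the stored tag data is never mutated in place, update
    # only the matching entry's known keys, then merge it back.
    containers = deepcopy(containers)
    container = deepcopy(containers[object_name])
    for key in container:
        if key in data:
            container[key] = data[key]
    containers[object_name] = container
    return containers

tracks = {"effectMain": {"representation": "old", "loader": "LoadEffects"}}
updated = update_track_containers(
    tracks, "effectMain", {"representation": "new", "extra": "ignored"})
print(updated["effectMain"]["representation"])  # new
print("extra" in updated["effectMain"])  # False
```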
def launch_workfiles_app(*args):
@@ -272,11 +326,11 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
instance, old_value, new_value))
from openpype.hosts.hiero.api import (
get_track_item_pype_tag,
get_trackitem_openpype_tag,
set_publish_attribute
)
# Whether instances should be passthrough based on new value
track_item = instance.data["item"]
tag = get_track_item_pype_tag(track_item)
tag = get_trackitem_openpype_tag(track_item)
set_publish_attribute(tag, new_value)

View file

@@ -1,3 +1,4 @@
import json
import re
import os
import hiero
@@ -85,17 +86,16 @@ def update_tag(tag, data):
# get metadata key from data
data_mtd = data.get("metadata", {})
# due to a hiero bug we have to clear the value of any key that does
# not exist in data by setting it to str(None)
for _mk in mtd.dict().keys():
if _mk.replace("tag.", "") not in data_mtd.keys():
mtd.setValue(_mk, str(None))
# set all data metadata to tag metadata
for k, v in data_mtd.items():
for _k, _v in data_mtd.items():
value = str(_v)
if type(_v) == dict:
value = json.dumps(_v)
# set the value
mtd.setValue(
"tag.{}".format(str(k)),
str(v)
"tag.{}".format(str(_k)),
value
)
# set note description of tag


@@ -0,0 +1,308 @@
import json
from collections import OrderedDict
import six
from openpype.client import (
get_version_by_id
)
from openpype.pipeline import (
AVALON_CONTAINER_ID,
load,
legacy_io,
get_representation_path
)
from openpype.hosts.hiero import api as phiero
from openpype.lib import Logger
class LoadEffects(load.LoaderPlugin):
"""Loading colorspace soft effect exported from nukestudio"""
representations = ["effectJson"]
families = ["effect"]
label = "Load Effects"
order = 0
icon = "cc"
color = "white"
log = Logger.get_logger(__name__)
def load(self, context, name, namespace, data):
"""
Loading function to get the soft effects to particular read node
Arguments:
context (dict): context of version
name (str): name of the version
namespace (str): asset name
data (dict): compulsory attribute > not used
Returns:
nuke node: containerised nuke node object
"""
active_sequence = phiero.get_current_sequence()
active_track = phiero.get_current_track(
active_sequence, "Loaded_{}".format(name))
# get main variables
namespace = namespace or context["asset"]["name"]
object_name = "{}_{}".format(name, namespace)
clip_in = context["asset"]["data"]["clipIn"]
clip_out = context["asset"]["data"]["clipOut"]
data_imprint = {
"objectName": object_name,
"children_names": []
}
# getting file path
file = self.fname.replace("\\", "/")
if self._shared_loading(
file,
active_track,
clip_in,
clip_out,
data_imprint
):
self.containerise(
active_track,
name=name,
namespace=namespace,
object_name=object_name,
context=context,
loader=self.__class__.__name__,
data=data_imprint)
def _shared_loading(
self,
file,
active_track,
clip_in,
clip_out,
data_imprint,
update=False
):
# getting data from json file with unicode conversion
with open(file, "r") as f:
json_f = {self.byteify(key): self.byteify(value)
for key, value in json.load(f).items()}
# get correct order of nodes by positions on track and subtrack
nodes_order = self.reorder_nodes(json_f)
used_subtracks = {
stitem.name(): stitem
for stitem in phiero.flatten(active_track.subTrackItems())
}
loaded = False
for index_order, (ef_name, ef_val) in enumerate(nodes_order.items()):
new_name = "{}_loaded".format(ef_name)
if new_name not in used_subtracks:
effect_track_item = active_track.createEffect(
effectType=ef_val["class"],
timelineIn=clip_in,
timelineOut=clip_out,
subTrackIndex=index_order
)
effect_track_item.setName(new_name)
else:
effect_track_item = used_subtracks[new_name]
node = effect_track_item.node()
for knob_name, knob_value in ef_val["node"].items():
if (
not knob_value
or knob_name == "name"
):
continue
try:
# assume list means animation
# except 4 values could be RGBA or vector
if isinstance(knob_value, list) and len(knob_value) > 4:
node[knob_name].setAnimated()
for i, value in enumerate(knob_value):
if isinstance(value, list):
# list can have vector animation
for ci, cv in enumerate(value):
node[knob_name].setValueAt(
cv,
(clip_in + i),
ci
)
else:
# list is single values
node[knob_name].setValueAt(
value,
(clip_in + i)
)
else:
node[knob_name].setValue(knob_value)
except NameError:
self.log.warning("Knob: {} cannot be set".format(
knob_name))
# register all loaded children
data_imprint["children_names"].append(new_name)
# make sure containerisation will happen
loaded = True
return loaded
def update(self, container, representation):
""" Updating previously loaded effects
"""
active_track = container["_item"]
file = get_representation_path(representation).replace("\\", "/")
# get main variables
name = container['name']
namespace = container['namespace']
# get timeline in out data
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
version_data = version_doc["data"]
clip_in = version_data["clipIn"]
clip_out = version_data["clipOut"]
object_name = "{}_{}".format(name, namespace)
# Disable previously created nodes
used_subtracks = {
stitem.name(): stitem
for stitem in phiero.flatten(active_track.subTrackItems())
}
container = phiero.get_track_openpype_data(
active_track, object_name
)
loaded_subtrack_items = container["children_names"]
for loaded_stitem in loaded_subtrack_items:
if loaded_stitem not in used_subtracks:
continue
item_to_remove = used_subtracks.pop(loaded_stitem)
# TODO: find a way to erase nodes
self.log.debug(
"This node needs to be removed: {}".format(item_to_remove))
data_imprint = {
"objectName": object_name,
"name": name,
"representation": str(representation["_id"]),
"children_names": []
}
if self._shared_loading(
file,
active_track,
clip_in,
clip_out,
data_imprint,
update=True
):
return phiero.update_container(active_track, data_imprint)
def reorder_nodes(self, data):
new_order = OrderedDict()
trackNums = [v["trackIndex"] for k, v in data.items()
if isinstance(v, dict)]
subTrackNums = [v["subTrackIndex"] for k, v in data.items()
if isinstance(v, dict)]
for trackIndex in range(
min(trackNums), max(trackNums) + 1):
for subTrackIndex in range(
min(subTrackNums), max(subTrackNums) + 1):
item = self.get_item(data, trackIndex, subTrackIndex)
if item:
new_order.update(item)
return new_order
def get_item(self, data, trackIndex, subTrackIndex):
return {key: val for key, val in data.items()
if isinstance(val, dict)
if subTrackIndex == val["subTrackIndex"]
if trackIndex == val["trackIndex"]}
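The reorder_nodes/get_item pair above can be condensed into a single sorted pass. This standalone sketch (with hypothetical effect data) keeps the same trackIndex-then-subTrackIndex ordering while skipping non-dict entries:

```python
from collections import OrderedDict

def reorder_nodes(data):
    # Order effect nodes by trackIndex, then subTrackIndex, so effects
    # are recreated on the timeline in their original stacking order.
    new_order = OrderedDict()
    nodes = {k: v for k, v in data.items() if isinstance(v, dict)}
    for key, val in sorted(
        nodes.items(),
        key=lambda kv: (kv[1]["trackIndex"], kv[1]["subTrackIndex"]),
    ):
        new_order[key] = val
    return new_order

effects = {
    "Grade2": {"trackIndex": 1, "subTrackIndex": 1, "class": "Grade"},
    "Grade1": {"trackIndex": 0, "subTrackIndex": 0, "class": "Grade"},
    "assetData": "not-a-node",  # non-dict entries are ignored
}
print(list(reorder_nodes(effects)))  # ['Grade1', 'Grade2']
```

The `sorted` call is a simplification of the nested range loops in the plugin; the resulting order is the same for well-formed input.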
def byteify(self, input):
"""
Converts unicode strings to regular strings.
Recursively walks through dictionaries and lists.
Arguments:
input (dict/str): input
Returns:
dict: with fixed values and keys
"""
if isinstance(input, dict):
return {self.byteify(key): self.byteify(value)
for key, value in input.items()}
elif isinstance(input, list):
return [self.byteify(element) for element in input]
elif isinstance(input, six.text_type):
return str(input)
else:
return input
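On Python 3, `six.text_type` is simply `str`, so byteify reduces to a recursive identity walk over dicts, lists and text. A minimal standalone sketch:

```python
def byteify(value):
    # Recursively normalise dict keys/values, list elements and text
    # (on Python 3 the str() call is effectively a no-op).
    if isinstance(value, dict):
        return {byteify(k): byteify(v) for k, v in value.items()}
    if isinstance(value, list):
        return [byteify(item) for item in value]
    if isinstance(value, str):
        return str(value)
    return value

print(byteify({u"name": [u"Grade", 1]}))  # {'name': ['Grade', 1]}
```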
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
pass
def containerise(
self,
track,
name,
namespace,
object_name,
context,
loader=None,
data=None
):
"""Bundle Hiero's object into an assembly and imprint it with metadata
Containerisation enables tracking of version, author and origin
for loaded assets.
Arguments:
track (hiero.core.VideoTrack): object to imprint as container
name (str): Name of resulting assembly
namespace (str): Namespace under which to host container
object_name (str): name of container
context (dict): Asset information
loader (str, optional): Name of node used to produce this
container.
Returns:
track_item (hiero.core.TrackItem): containerised object
"""
data_imprint = {
object_name: {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": str(name),
"namespace": str(namespace),
"loader": str(loader),
"representation": str(context["representation"]["_id"]),
}
}
if data:
for k, v in data.items():
data_imprint[object_name].update({k: v})
self.log.debug("_ data_imprint: {}".format(data_imprint))
phiero.set_track_openpype_tag(track, data_imprint)


@@ -16,6 +16,9 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
review_track_index = instance.context.data.get("reviewTrackIndex")
item = instance.data["item"]
if "audio" in instance.data["family"]:
return
# frame range
self.handle_start = instance.data["handleStart"]
self.handle_end = instance.data["handleEnd"]


@@ -48,7 +48,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
self.log.debug("clip_name: {}".format(clip_name))
# get openpype tag data
tag_data = phiero.get_track_item_pype_data(track_item)
tag_data = phiero.get_trackitem_openpype_data(track_item)
self.log.debug("__ tag_data: {}".format(pformat(tag_data)))
if not tag_data:


@@ -536,6 +536,11 @@ class RenderProductsArnold(ARenderProducts):
products = []
aov_name = self._get_attr(aov, "name")
multipart = False
multilayer = bool(self._get_attr("defaultArnoldDriver.multipart"))
merge_AOVs = bool(self._get_attr("defaultArnoldDriver.mergeAOVs"))
if multilayer or merge_AOVs:
multipart = True
ai_drivers = cmds.listConnections("{}.outputs".format(aov),
source=True,
destination=False,
@@ -589,6 +594,7 @@
ext=ext,
aov=aov_name,
driver=ai_driver,
multipart=multipart,
camera=camera)
products.append(product)
@@ -1016,7 +1022,11 @@ class RenderProductsRedshift(ARenderProducts):
# due to some AOVs still being written into separate files,
# like Cryptomatte.
# AOVs are merged in multi-channel file
multipart = bool(self._get_attr("redshiftOptions.exrForceMultilayer"))
multipart = False
force_layer = bool(self._get_attr("redshiftOptions.exrForceMultilayer")) # noqa
exMultipart = bool(self._get_attr("redshiftOptions.exrMultipart"))
if exMultipart or force_layer:
multipart = True
# Get Redshift Extension from image format
image_format = self._get_attr("redshiftOptions.imageFormat") # integer
@@ -1044,7 +1054,6 @@
# Any AOVs that still get processed, like Cryptomatte
# by themselves are not multipart files.
aov_multipart = not multipart
# Redshift skips rendering of masterlayer without AOV suffix
# when a Beauty AOV is rendered. It overrides the main layer.
@@ -1075,7 +1084,7 @@
productName=aov_light_group_name,
aov=aov_name,
ext=ext,
multipart=aov_multipart,
multipart=multipart,
camera=camera)
products.append(product)
@@ -1089,7 +1098,7 @@
product = RenderProduct(productName=aov_name,
aov=aov_name,
ext=ext,
multipart=aov_multipart,
multipart=multipart,
camera=camera)
products.append(product)
@@ -1100,7 +1109,7 @@
if light_groups_enabled:
return products
beauty_name = "Beauty_other" if has_beauty_aov else ""
beauty_name = "BeautyAux" if has_beauty_aov else ""
for camera in cameras:
products.insert(0,
RenderProduct(productName=beauty_name,

View file

@ -0,0 +1,132 @@
import os
from openpype.pipeline import (
legacy_io,
load,
get_representation_path
)
from openpype.settings import get_project_settings
class AlembicStandinLoader(load.LoaderPlugin):
"""Load Alembic as Arnold Standin"""
families = ["animation", "model", "pointcache"]
representations = ["abc"]
label = "Import Alembic as Arnold Standin"
order = -5
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, options):
import maya.cmds as cmds
import mtoa.ui.arnoldmenu
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
version = context["version"]
version_data = version.get("data", {})
family = version["data"]["families"]
self.log.info("version_data: {}\n".format(version_data))
self.log.info("family: {}\n".format(family))
frameStart = version_data.get("frameStart", None)
asset = context["asset"]["name"]
namespace = namespace or unique_namespace(
asset + "_",
prefix="_" if asset[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings["maya"]["load"]["colors"]
fps = legacy_io.Session["AVALON_FPS"]
c = colors.get(family[0])
if c is not None:
r = (float(c[0]) / 255)
g = (float(c[1]) / 255)
b = (float(c[2]) / 255)
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
r, g, b)
transform_name = label + "_ABC"
standinShape = cmds.ls(mtoa.ui.arnoldmenu.createStandIn())[0]
standin = cmds.listRelatives(standinShape, parent=True,
typ="transform")
standin = cmds.rename(standin, transform_name)
standinShape = cmds.listRelatives(standin, children=True)[0]
cmds.parent(standin, root)
# Set the standin filepath
cmds.setAttr(standinShape + ".dso", self.fname, type="string")
cmds.setAttr(standinShape + ".abcFPS", float(fps))
if frameStart is None:
cmds.setAttr(standinShape + ".useFrameExtension", 0)
elif "model" in family:
cmds.setAttr(standinShape + ".useFrameExtension", 0)
else:
cmds.setAttr(standinShape + ".useFrameExtension", 1)
nodes = [root, standin]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
import pymel.core as pm
path = get_representation_path(representation)
fps = legacy_io.Session["AVALON_FPS"]
# Update the standin
standins = list()
members = pm.sets(container['objectName'], query=True)
self.log.info("container:{}".format(container))
for member in members:
shape = member.getShape()
if (shape and shape.type() == "aiStandIn"):
standins.append(shape)
for standin in standins:
standin.dso.set(path)
standin.abcFPS.set(float(fps))
if "modelMain" in container['objectName']:
standin.useFrameExtension.set(0)
else:
standin.useFrameExtension.set(1)
container = pm.PyNode(container["objectName"])
container.representation.set(str(representation["_id"]))
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
import maya.cmds as cmds
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass

View file

@ -73,8 +73,8 @@ class YetiCacheLoader(load.LoaderPlugin):
c = colors.get(family)
if c is not None:
cmds.setAttr(group_name + ".useOutlinerColor", 1)
cmds.setAttr(group_name + ".outlinerColor",
cmds.setAttr(group_node + ".useOutlinerColor", 1)
cmds.setAttr(group_node + ".outlinerColor",
(float(c[0])/255),
(float(c[1])/255),
(float(c[2])/255)

View file

@ -364,6 +364,9 @@ def containerise(node,
set_avalon_knob_data(node, data)
# set tab to first native
node.setTab(0)
return node

View file

@ -65,6 +65,9 @@ class AlembicCameraLoader(load.LoaderPlugin):
object_name, file),
inpanel=False
)
# hide property panel
camera_node.hideControlPanel()
camera_node.forceValidate()
camera_node["frame_rate"].setValue(float(fps))

View file

@ -145,6 +145,9 @@ class LoadClip(plugin.NukeLoader):
"Read",
"name {}".format(read_name))
# hide property panel
read_node.hideControlPanel()
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
with viewer_update_and_undo_stop():

View file

@ -89,6 +89,9 @@ class LoadEffects(load.LoaderPlugin):
"Group",
"name {}_1".format(object_name))
# hide property panel
GN.hideControlPanel()
# adding content to the group node
with GN:
pre_node = nuke.createNode("Input")

View file

@ -90,6 +90,9 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
"Group",
"name {}_1".format(object_name))
# hide property panel
GN.hideControlPanel()
# adding content to the group node
with GN:
pre_node = nuke.createNode("Input")

View file

@ -62,7 +62,9 @@ class LoadImage(load.LoaderPlugin):
def load(self, context, name, namespace, options):
self.log.info("__ options: `{}`".format(options))
frame_number = options.get("frame_number", 1)
frame_number = options.get(
"frame_number", int(nuke.root()["first_frame"].getValue())
)
version = context['version']
version_data = version.get("data", {})
@ -112,6 +114,10 @@ class LoadImage(load.LoaderPlugin):
r = nuke.createNode(
"Read",
"name {}".format(read_name))
# hide property panel
r.hideControlPanel()
r["file"].setValue(file)
# Set colorspace defined in version data

View file

@ -63,6 +63,10 @@ class AlembicModelLoader(load.LoaderPlugin):
object_name, file),
inpanel=False
)
# hide property panel
model_node.hideControlPanel()
model_node.forceValidate()
# Ensure all items are imported and selected.

View file

@ -71,6 +71,9 @@ class LinkAsGroup(load.LoaderPlugin):
"Precomp",
"file {}".format(file))
# hide property panel
P.hideControlPanel()
# Set colorspace defined in version data
colorspace = context["version"]["data"].get("colorspace", None)
self.log.info("colorspace: {}\n".format(colorspace))

View file

@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
"""Creator of online files.
Online files retain their original name and use it as the subset name. To
avoid conflicts, this creator checks whether a subset with this name
already exists under the selected asset.
"""
from pathlib import Path
from openpype.client import get_subset_by_name, get_asset_by_name
from openpype.lib.attribute_definitions import FileDef
from openpype.pipeline import (
CreatedInstance,
CreatorError
)
from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
class OnlineCreator(TrayPublishCreator):
"""Creates instance from file and retains its original name."""
identifier = "io.openpype.creators.traypublisher.online"
label = "Online"
family = "online"
description = "Publish file retaining its original file name"
extensions = [".mov", ".mp4", ".mxf", ".m4v", ".mpg"]
def get_detail_description(self):
return """# Create file retaining its original file name.
This will publish the file using a template that helps retain the
original file name, and that file name is used as the subset name.
By default it tries to guard against multiple publishes of the same
file."""
def get_icon(self):
return "fa.file"
def create(self, subset_name, instance_data, pre_create_data):
repr_file = pre_create_data.get("representation_file")
if not repr_file:
raise CreatorError("No files specified")
files = repr_file.get("filenames")
if not files:
# this should never happen
raise CreatorError("Missing files from representation")
origin_basename = Path(files[0]).stem
asset = get_asset_by_name(
self.project_name, instance_data["asset"], fields=["_id"])
if get_subset_by_name(
self.project_name, origin_basename, asset["_id"],
fields=["_id"]):
raise CreatorError(f"Subset \"{origin_basename}\" already "
"exists in the selected asset")
instance_data["originalBasename"] = origin_basename
subset_name = origin_basename
instance_data["creator_attributes"] = {
"path": (Path(repr_file["directory"]) / files[0]).as_posix()
}
# Create new instance
new_instance = CreatedInstance(self.family, subset_name,
instance_data, self)
self._store_new_instance(new_instance)
def get_pre_create_attr_defs(self):
return [
FileDef(
"representation_file",
folders=False,
extensions=self.extensions,
allow_sequences=False,
single_item=True,
label="Representation",
)
]
def get_subset_name(
self,
variant,
task_name,
asset_doc,
project_name,
host_name=None,
instance=None
):
if instance is None:
return "{originalBasename}"
return instance.data["subset"]
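The subset-name rule above can be sketched standalone: when no instance exists yet, the `{originalBasename}` template resolves to the file stem. The helper name below is hypothetical, for illustration only:

```python
from pathlib import Path

def subset_name_for(filepath):
    # The creator above uses the original file stem as the subset name;
    # this standalone helper (hypothetical name) mirrors that rule.
    return Path(filepath).stem
```

For example, `subset_name_for("/in/sh010_plate_v001.mov")` yields `"sh010_plate_v001"`.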

View file

@ -0,0 +1,23 @@
# -*- coding: utf-8 -*-
import pyblish.api
from pathlib import Path
class CollectOnlineFile(pyblish.api.InstancePlugin):
"""Collect online file and retain its file name."""
label = "Collect Online File"
order = pyblish.api.CollectorOrder
families = ["online"]
hosts = ["traypublisher"]
def process(self, instance):
file = Path(instance.data["creator_attributes"]["path"])
instance.data["representations"].append(
{
"name": file.suffix.lstrip("."),
"ext": file.suffix.lstrip("."),
"files": file.name,
"stagingDir": file.parent.as_posix()
}
)
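The representation dict built above can be sketched as a standalone helper (hypothetical function name; the real collector appends this dict to `instance.data["representations"]`):

```python
from pathlib import Path

def build_representation(path_str):
    # Mirror of the collector's dict: name/ext come from the suffix,
    # the file name becomes "files", the parent dir the staging dir.
    file = Path(path_str)
    ext = file.suffix.lstrip(".")
    return {
        "name": ext,
        "ext": ext,
        "files": file.name,
        "stagingDir": file.parent.as_posix(),
    }
```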

View file

@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishValidationError,
OptionalPyblishPluginMixin,
)
from openpype.client import get_subset_by_name
class ValidateOnlineFile(OptionalPyblishPluginMixin,
pyblish.api.InstancePlugin):
"""Validate that subset doesn't exist yet."""
label = "Validate Existing Online Files"
hosts = ["traypublisher"]
families = ["online"]
order = ValidateContentsOrder
optional = True
def process(self, instance):
project_name = instance.context.data["projectName"]
asset_id = instance.data["assetEntity"]["_id"]
subset = get_subset_by_name(
project_name, instance.data["subset"], asset_id)
if subset:
raise PublishValidationError(
"Subset to be published already exists.",
title=self.label
)

View file

@ -25,6 +25,7 @@ class ExtractSequence(pyblish.api.Extractor):
label = "Extract Sequence"
hosts = ["tvpaint"]
families = ["review", "renderPass", "renderLayer", "renderScene"]
families_to_review = ["review"]
# Modifiable with settings
review_bg = [255, 255, 255, 255]
@ -133,9 +134,9 @@ class ExtractSequence(pyblish.api.Extractor):
output_frame_start
)
# Fill tags and new families
# Fill tags and new families from project settings
tags = []
if family_lowered in ("review", "renderlayer", "renderscene"):
if family_lowered in self.families_to_review:
tags.append("review")
# Sequence of one frame

View file

@ -86,6 +86,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
first_file = task_data["files"][0]
_, extension = os.path.splitext(first_file)
extension = extension.lower()
family, families, tags = self._get_family(
self.task_type_to_family,
task_type,
@ -180,6 +181,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
def _get_single_repre(self, task_dir, files, tags):
_, ext = os.path.splitext(files[0])
ext = ext.lower()
repre_data = {
"name": ext[1:],
"ext": ext[1:],
@ -199,6 +201,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
frame_start = list(collections[0].indexes)[0]
frame_end = list(collections[0].indexes)[-1]
ext = collections[0].tail
ext = ext.lower()
repre_data = {
"frameStart": frame_start,
"frameEnd": frame_end,
@ -244,8 +247,17 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
for config in families_config:
if is_sequence != config["is_sequence"]:
continue
if (extension in config["extensions"] or
'' in config["extensions"]): # all extensions setting
extensions = config.get("extensions") or []
lower_extensions = set()
for ext in extensions:
if ext:
ext = ext.lower()
if ext.startswith("."):
ext = ext[1:]
lower_extensions.add(ext)
# all extensions setting
if not lower_extensions or extension in lower_extensions:
found_family = config["result_family"]
break
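The normalization above (lowercase, strip the leading dot, empty config means "match everything") can be sketched standalone, under the assumption that the incoming `extension` is already lowercased without a dot, as in the collector:

```python
def normalize_extensions(extensions):
    # Lowercase, drop the leading dot, skip empty entries.
    lowered = set()
    for ext in extensions or []:
        if not ext:
            continue
        ext = ext.lower()
        if ext.startswith("."):
            ext = ext[1:]
        lowered.add(ext)
    return lowered

def extension_matches(extension, config_extensions):
    lowered = normalize_extensions(config_extensions)
    # An empty set means the config accepts all extensions.
    return not lowered or extension in lowered
```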

View file

@ -105,11 +105,14 @@ class AbtractAttrDef(object):
How to force to set `key` attribute?
Args:
key(str): Under which key will be attribute value stored.
label(str): Attribute label.
tooltip(str): Attribute tooltip.
is_label_horizontal(bool): UI specific argument. Specify if label is
key (str): Under which key will be attribute value stored.
default (Any): Default value of an attribute.
label (str): Attribute label.
tooltip (str): Attribute tooltip.
is_label_horizontal (bool): UI specific argument. Specify if label is
next to value input or ahead.
hidden (bool): Will be item hidden (for UI purposes).
disabled (bool): Item will be visible but disabled (for UI purposes).
"""
type_attributes = []
@ -117,16 +120,29 @@ class AbtractAttrDef(object):
is_value_def = True
def __init__(
self, key, default, label=None, tooltip=None, is_label_horizontal=None
self,
key,
default,
label=None,
tooltip=None,
is_label_horizontal=None,
hidden=False,
disabled=False
):
if is_label_horizontal is None:
is_label_horizontal = True
if hidden is None:
hidden = False
self.key = key
self.label = label
self.tooltip = tooltip
self.default = default
self.is_label_horizontal = is_label_horizontal
self._id = uuid.uuid4()
self.hidden = hidden
self.disabled = disabled
self._id = uuid.uuid4().hex
self.__init__class__ = AbtractAttrDef
@ -173,7 +189,9 @@ class AbtractAttrDef(object):
"label": self.label,
"tooltip": self.tooltip,
"default": self.default,
"is_label_horizontal": self.is_label_horizontal
"is_label_horizontal": self.is_label_horizontal,
"hidden": self.hidden,
"disabled": self.disabled
}
for attr in self.type_attributes:
data[attr] = getattr(self, attr)
@ -235,6 +253,26 @@ class UnknownDef(AbtractAttrDef):
return value
class HiddenDef(AbtractAttrDef):
"""Hidden value of Any type.
This attribute can be used for UI purposes to pass values related
to other attributes (e.g. in multi-page UIs).
Keep in mind the value should be possible to parse by json parser.
"""
type = "hidden"
def __init__(self, key, default=None, **kwargs):
kwargs["default"] = default
kwargs["hidden"] = True
super(HiddenDef, self).__init__(key, **kwargs)
def convert_value(self, value):
return value
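The new `hidden`/`disabled` flags travel with the serialized definition data. A minimal stand-in class (not the real `AbtractAttrDef`, purely illustrative) shows the intended round-trip:

```python
import uuid

class AttrDefSketch:
    # Minimal stand-in for AbtractAttrDef showing the new
    # 'hidden'/'disabled' flags; hypothetical, for illustration only.
    def __init__(self, key, default, label=None, tooltip=None,
                 is_label_horizontal=True, hidden=False, disabled=False):
        self.key = key
        self.default = default
        self.label = label
        self.tooltip = tooltip
        self.is_label_horizontal = is_label_horizontal
        self.hidden = hidden
        self.disabled = disabled
        # Hex string id, matching the change from uuid.uuid4() above.
        self._id = uuid.uuid4().hex

    def serialize(self):
        return {
            "key": self.key,
            "default": self.default,
            "label": self.label,
            "tooltip": self.tooltip,
            "is_label_horizontal": self.is_label_horizontal,
            "hidden": self.hidden,
            "disabled": self.disabled,
        }
```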
class NumberDef(AbtractAttrDef):
"""Number definition.
@ -541,6 +579,13 @@ class FileDefItem(object):
return ext
return None
@property
def lower_ext(self):
ext = self.ext
if ext is not None:
return ext.lower()
return ext
@property
def is_dir(self):
if self.is_empty:

View file

@ -507,12 +507,13 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
else:
render_file_name = os.path.basename(col)
aov_patterns = self.aov_filter
preview = match_aov_pattern(app, aov_patterns, render_file_name)
preview = match_aov_pattern(app, aov_patterns, render_file_name)
# toggle preview on if multipart is on
if instance_data.get("multipartExr"):
preview = True
self.log.debug("preview:{}".format(preview))
new_instance = deepcopy(instance_data)
new_instance["subset"] = subset_name
new_instance["subsetGroup"] = group_name
@ -555,7 +556,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
if new_instance.get("extendFrames", False):
self._copy_extend_frames(new_instance, rep)
instances.append(new_instance)
self.log.debug("instances:{}".format(instances))
return instances
def _get_representations(self, instance, exp_files):

View file

@ -14,6 +14,137 @@ from Deadline.Scripting import (
ProcessUtils,
)
VERSION_REGEX = re.compile(
r"(?P<major>0|[1-9]\d*)"
r"\.(?P<minor>0|[1-9]\d*)"
r"\.(?P<patch>0|[1-9]\d*)"
r"(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?"
r"(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?"
)
class OpenPypeVersion:
"""Fake semver version class for OpenPype version purposes.
The version is parsed from a string and supports the release-compatibility
check and ordering needed to pick a matching build.
"""
def __init__(self, major, minor, patch, prerelease, origin=None):
self.major = major
self.minor = minor
self.patch = patch
self.prerelease = prerelease
is_valid = True
if major is None or minor is None or patch is None:
is_valid = False
self.is_valid = is_valid
if origin is None:
base = "{}.{}.{}".format(str(major), str(minor), str(patch))
if not prerelease:
origin = base
else:
origin = "{}-{}".format(base, str(prerelease))
self.origin = origin
@classmethod
def from_string(cls, version):
"""Create an object of version from string.
Args:
version (str): Version as a string.
Returns:
Union[OpenPypeVersion, None]: Version object if input is nonempty
string otherwise None.
"""
if not version:
return None
valid_parts = VERSION_REGEX.findall(version)
if len(valid_parts) != 1:
# Return invalid version with filled 'origin' attribute
return cls(None, None, None, None, origin=str(version))
# Unpack found version
major, minor, patch, pre, post = valid_parts[0]
prerelease = pre
# Post release is not important anymore and should be considered as
# part of prerelease
# - comparison is implemented to find suitable build and builds should
# never contain prerelease part so "not proper" parsing is
# acceptable for this use case.
if post:
prerelease = "{}+{}".format(pre, post)
return cls(
int(major), int(minor), int(patch), prerelease, origin=version
)
def has_compatible_release(self, other):
"""Version has compatible release as other version.
Both major and minor versions must be exactly the same. In that case
a build can be considered as release compatible with any version.
Args:
other (OpenPypeVersion): Other version.
Returns:
bool: Version is release compatible with other version.
"""
if self.is_valid and other.is_valid:
return self.major == other.major and self.minor == other.minor
return False
def __bool__(self):
return self.is_valid
def __repr__(self):
return "<{} {}>".format(self.__class__.__name__, self.origin)
def __eq__(self, other):
if not isinstance(other, self.__class__):
return self.origin == other
return self.origin == other.origin
def __lt__(self, other):
if not isinstance(other, self.__class__):
return None
if not self.is_valid:
return True
if not other.is_valid:
return False
if self.origin == other.origin:
return None
same_major = self.major == other.major
if not same_major:
return self.major < other.major
same_minor = self.minor == other.minor
if not same_minor:
return self.minor < other.minor
same_patch = self.patch == other.patch
if not same_patch:
return self.patch < other.patch
if not self.prerelease:
return False
if not other.prerelease:
return True
pres = [self.prerelease, other.prerelease]
pres.sort()
return pres[0] == self.prerelease
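The parsing and ordering rules above can be exercised with a small standalone sketch (a simplified reimplementation for illustration; the function names are hypothetical and plain tuples stand in for the class):

```python
import re

# Simplified copy of VERSION_REGEX above (build metadata omitted).
_VERSION_RE = re.compile(
    r"(?P<major>0|[1-9]\d*)"
    r"\.(?P<minor>0|[1-9]\d*)"
    r"\.(?P<patch>0|[1-9]\d*)"
    r"(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?"
)

def parse(version):
    # Return (major, minor, patch, prerelease) or None when invalid.
    match = _VERSION_RE.match(version or "")
    if not match:
        return None
    major, minor, patch, pre = match.group(
        "major", "minor", "patch", "prerelease")
    return int(major), int(minor), int(patch), pre or ""

def has_compatible_release(a, b):
    # Release-compatible when major and minor match exactly.
    return a[:2] == b[:2]

def sort_key(parsed):
    # A release (no prerelease part) sorts after its prereleases,
    # mirroring the __lt__ logic above.
    major, minor, patch, pre = parsed
    return (major, minor, patch, pre == "", pre)
```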
def get_openpype_version_from_path(path, build=True):
"""Get OpenPype version from provided path.
@ -21,9 +152,9 @@ def get_openpype_version_from_path(path, build=True):
build (bool, optional): Get only builds, not sources
Returns:
str or None: version of OpenPype if found.
Union[OpenPypeVersion, None]: version of OpenPype if found.
"""
# fix path for application bundle on macos
if platform.system().lower() == "darwin":
path = os.path.join(path, "Contents", "MacOS", "lib", "Python")
@ -46,8 +177,10 @@ def get_openpype_version_from_path(path, build=True):
with open(version_file, "r") as vf:
exec(vf.read(), version)
version_match = re.search(r"(\d+\.\d+.\d+).*", version["__version__"])
return version_match[1]
version_str = version.get("__version__")
if version_str:
return OpenPypeVersion.from_string(version_str)
return None
def get_openpype_executable():
@ -59,6 +192,91 @@ def get_openpype_executable():
return exe_list, dir_list
def get_openpype_versions(dir_list):
print(">>> Getting OpenPype versions ...")
openpype_versions = []
install_dir = DirectoryUtils.SearchDirectoryList(dir_list)
if install_dir:
print("--- Looking for OpenPype at: {}".format(install_dir))
sub_dirs = [
f.path for f in os.scandir(install_dir)
if f.is_dir()
]
for subdir in sub_dirs:
version = get_openpype_version_from_path(subdir)
if not version:
continue
print(" - found: {} - {}".format(version, subdir))
openpype_versions.append((version, subdir))
return openpype_versions
def get_requested_openpype_executable(
exe, dir_list, requested_version
):
requested_version_obj = OpenPypeVersion.from_string(requested_version)
if not requested_version_obj:
print((
">>> Requested version does not match version regex \"{}\""
).format(VERSION_REGEX))
return None
print((
">>> Scanning for compatible requested version {}"
).format(requested_version))
openpype_versions = get_openpype_versions(dir_list)
if not openpype_versions:
return None
# When looking for a compatible version, add the version of the
# implicitly found executable to the list too.
if exe:
exe_dir = os.path.dirname(exe)
print("Looking for OpenPype at: {}".format(exe_dir))
version = get_openpype_version_from_path(exe_dir)
if version:
print(" - found: {} - {}".format(version, exe_dir))
openpype_versions.append((version, exe_dir))
matching_item = None
compatible_versions = []
for version_item in openpype_versions:
version, version_dir = version_item
if requested_version_obj.has_compatible_release(version):
compatible_versions.append(version_item)
if version == requested_version_obj:
# Store version item if version match exactly
# - break if is found matching version
matching_item = version_item
break
if not compatible_versions:
return None
compatible_versions.sort(key=lambda item: item[0])
if matching_item:
version, version_dir = matching_item
print((
"*** Found exact match build version {} in {}"
).format(version, version_dir))
else:
version, version_dir = compatible_versions[-1]
print((
"*** Latest compatible version found is {} in {}"
).format(version, version_dir))
# create list of executables for different platform and let
# Deadline decide.
exe_list = [
os.path.join(version_dir, "openpype_console.exe"),
os.path.join(version_dir, "openpype_console")
]
return FileUtils.SearchFileList(";".join(exe_list))
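The selection policy above — an exact match wins, otherwise the latest release-compatible build is taken — can be sketched without the Deadline API (hypothetical helper; versions reduced to (major, minor, patch) tuples):

```python
def pick_build(requested, available):
    # Exact match wins; otherwise take the highest build that shares
    # major and minor with the requested version; None when nothing fits.
    compatible = [v for v in available if v[:2] == requested[:2]]
    if requested in compatible:
        return requested
    return max(compatible) if compatible else None
```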
def inject_openpype_environment(deadlinePlugin):
""" Pull env vars from OpenPype and push them to rendering process.
@ -68,93 +286,29 @@ def inject_openpype_environment(deadlinePlugin):
print(">>> Injecting OpenPype environments ...")
try:
print(">>> Getting OpenPype executable ...")
exe_list, dir_list = get_openpype_executable()
openpype_versions = []
# if the job requires specific OpenPype version,
# lets go over all available and find compatible build.
exe = FileUtils.SearchFileList(exe_list)
requested_version = job.GetJobEnvironmentKeyValue("OPENPYPE_VERSION")
if requested_version:
print((
">>> Scanning for compatible requested version {}"
).format(requested_version))
install_dir = DirectoryUtils.SearchDirectoryList(dir_list)
if install_dir:
print("--- Looking for OpenPype at: {}".format(install_dir))
sub_dirs = [
f.path for f in os.scandir(install_dir)
if f.is_dir()
]
for subdir in sub_dirs:
version = get_openpype_version_from_path(subdir)
if not version:
continue
print(" - found: {} - {}".format(version, subdir))
openpype_versions.append((version, subdir))
exe = get_requested_openpype_executable(
exe, dir_list, requested_version
)
if exe is None:
raise RuntimeError((
"Cannot find compatible version available for version {}"
" requested by the job. Please add it through plugin"
" configuration in Deadline or install it to configured"
" directory."
).format(requested_version))
exe = FileUtils.SearchFileList(exe_list)
if openpype_versions:
# if looking for requested compatible version,
# add the implicitly specified to the list too.
print("Looking for OpenPype at: {}".format(os.path.dirname(exe)))
version = get_openpype_version_from_path(
os.path.dirname(exe))
if version:
print(" - found: {} - {}".format(
version, os.path.dirname(exe)
))
openpype_versions.append((version, os.path.dirname(exe)))
if requested_version:
# sort detected versions
if openpype_versions:
# use natural sorting
openpype_versions.sort(
key=lambda ver: [
int(t) if t.isdigit() else t.lower()
for t in re.split(r"(\d+)", ver[0])
])
print((
"*** Latest available version found is {}"
).format(openpype_versions[-1][0]))
requested_major, requested_minor, _ = requested_version.split(".")[:3] # noqa: E501
compatible_versions = []
for version in openpype_versions:
v = version[0].split(".")[:3]
if v[0] == requested_major and v[1] == requested_minor:
compatible_versions.append(version)
if not compatible_versions:
raise RuntimeError(
("Cannot find compatible version available "
"for version {} requested by the job. "
"Please add it through plugin configuration "
"in Deadline or install it to configured "
"directory.").format(requested_version))
# sort compatible versions and pick the last one
compatible_versions.sort(
key=lambda ver: [
int(t) if t.isdigit() else t.lower()
for t in re.split(r"(\d+)", ver[0])
])
print((
"*** Latest compatible version found is {}"
).format(compatible_versions[-1][0]))
# create list of executables for different platform and let
# Deadline decide.
exe_list = [
os.path.join(
compatible_versions[-1][1], "openpype_console.exe"),
os.path.join(
compatible_versions[-1][1], "openpype_console")
]
exe = FileUtils.SearchFileList(";".join(exe_list))
if exe == "":
raise RuntimeError(
"OpenPype executable was not found " +
"in the semicolon separated list " +
"\"" + ";".join(exe_list) + "\". " +
"The path to the render executable can be configured " +
"from the Plugin Configuration in the Deadline Monitor.")
if not exe:
raise RuntimeError((
"OpenPype executable was not found in the semicolon "
"separated list \"{}\"."
"The path to the render executable can be configured"
" from the Plugin Configuration in the Deadline Monitor."
).format(";".join(exe_list)))
print("--- OpenPype executable: {}".format(exe))
@ -172,25 +326,25 @@ def inject_openpype_environment(deadlinePlugin):
export_url
]
add_args = {}
add_args['project'] = \
job.GetJobEnvironmentKeyValue('AVALON_PROJECT')
add_args['asset'] = job.GetJobEnvironmentKeyValue('AVALON_ASSET')
add_args['task'] = job.GetJobEnvironmentKeyValue('AVALON_TASK')
add_args['app'] = job.GetJobEnvironmentKeyValue('AVALON_APP_NAME')
add_args["envgroup"] = "farm"
add_kwargs = {
"project": job.GetJobEnvironmentKeyValue("AVALON_PROJECT"),
"asset": job.GetJobEnvironmentKeyValue("AVALON_ASSET"),
"task": job.GetJobEnvironmentKeyValue("AVALON_TASK"),
"app": job.GetJobEnvironmentKeyValue("AVALON_APP_NAME"),
"envgroup": "farm"
}
if job.GetJobEnvironmentKeyValue('IS_TEST'):
args.append("--automatic-tests")
if all(add_args.values()):
for key, value in add_args.items():
args.append("--{}".format(key))
args.append(value)
if all(add_kwargs.values()):
for key, value in add_kwargs.items():
args.extend(["--{}".format(key), value])
else:
msg = "Required env vars: AVALON_PROJECT, AVALON_ASSET, " + \
"AVALON_TASK, AVALON_APP_NAME"
raise RuntimeError(msg)
raise RuntimeError((
"Missing required env vars: AVALON_PROJECT, AVALON_ASSET,"
" AVALON_TASK, AVALON_APP_NAME"
))
if not os.environ.get("OPENPYPE_MONGO"):
print(">>> Missing OPENPYPE_MONGO env var, process won't work")
@ -211,12 +365,12 @@ def inject_openpype_environment(deadlinePlugin):
print(">>> Loading file ...")
with open(export_url) as fp:
contents = json.load(fp)
for key, value in contents.items():
deadlinePlugin.SetProcessEnvironmentVariable(key, value)
for key, value in contents.items():
deadlinePlugin.SetProcessEnvironmentVariable(key, value)
script_url = job.GetJobPluginInfoKeyValue("ScriptFilename")
if script_url:
script_url = script_url.format(**contents).replace("\\", "/")
print(">>> Setting script path {}".format(script_url))
job.SetJobPluginInfoKeyValue("ScriptFilename", script_url)

View file

@ -7,10 +7,8 @@ import pyblish.api
from openpype.client import get_asset_by_id
from openpype.lib import filter_profiles
from openpype.pipeline import KnownPublishError
# Copy of constant `openpype_modules.ftrack.lib.avalon_sync.CUST_ATTR_AUTO_SYNC`
CUST_ATTR_AUTO_SYNC = "avalon_auto_sync"
CUST_ATTR_GROUP = "openpype"
@ -19,7 +17,6 @@ CUST_ATTR_GROUP = "openpype"
def get_pype_attr(session, split_hierarchical=True):
custom_attributes = []
hier_custom_attributes = []
# TODO remove deprecated "avalon" group from query
cust_attrs_query = (
"select id, entity_type, object_type_id, is_hierarchical, default"
" from CustomAttributeConfiguration"
@ -79,120 +76,284 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
create_task_status_profiles = []
def process(self, context):
self.context = context
if "hierarchyContext" not in self.context.data:
if "hierarchyContext" not in context.data:
return
hierarchy_context = self._get_active_assets(context)
self.log.debug("__ hierarchy_context: {}".format(hierarchy_context))
session = self.context.data["ftrackSession"]
project_name = self.context.data["projectEntity"]["name"]
query = 'Project where full_name is "{}"'.format(project_name)
project = session.query(query).one()
auto_sync_state = project["custom_attributes"][CUST_ATTR_AUTO_SYNC]
session = context.data["ftrackSession"]
project_name = context.data["projectName"]
project = session.query(
'select id, full_name from Project where full_name is "{}"'.format(
project_name
)
).first()
if not project:
raise KnownPublishError(
"Project \"{}\" was not found on ftrack.".format(project_name)
)
self.context = context
self.session = session
self.ft_project = project
self.task_types = self.get_all_task_types(project)
self.task_statuses = self.get_task_statuses(project)
# temporarily disable ftrack project's auto-sync
if auto_sync_state:
self.auto_sync_off(project)
# import ftrack hierarchy
self.import_to_ftrack(project_name, hierarchy_context)
try:
# import ftrack hierarchy
self.import_to_ftrack(project_name, hierarchy_context)
except Exception:
raise
finally:
if auto_sync_state:
self.auto_sync_on(project)
def query_ftrack_entitites(self, session, ft_project):
project_id = ft_project["id"]
entities = session.query((
"select id, name, parent_id"
" from TypedContext where project_id is \"{}\""
).format(project_id)).all()
def import_to_ftrack(self, project_name, input_data, parent=None):
entities_by_id = {}
entities_by_parent_id = collections.defaultdict(list)
for entity in entities:
entities_by_id[entity["id"]] = entity
parent_id = entity["parent_id"]
entities_by_parent_id[parent_id].append(entity)
ftrack_hierarchy = []
ftrack_id_queue = collections.deque()
ftrack_id_queue.append((project_id, ftrack_hierarchy))
while ftrack_id_queue:
item = ftrack_id_queue.popleft()
ftrack_id, parent_list = item
if ftrack_id == project_id:
entity = ft_project
name = entity["full_name"]
else:
entity = entities_by_id[ftrack_id]
name = entity["name"]
children = []
parent_list.append({
"name": name,
"low_name": name.lower(),
"entity": entity,
"children": children,
})
for child in entities_by_parent_id[ftrack_id]:
ftrack_id_queue.append((child["id"], children))
return ftrack_hierarchy
def find_matching_ftrack_entities(
self, hierarchy_context, ftrack_hierarchy
):
walk_queue = collections.deque()
for entity_name, entity_data in hierarchy_context.items():
walk_queue.append(
(entity_name, entity_data, ftrack_hierarchy)
)
matching_ftrack_entities = []
while walk_queue:
item = walk_queue.popleft()
entity_name, entity_data, ft_children = item
matching_ft_child = None
for ft_child in ft_children:
if ft_child["low_name"] == entity_name.lower():
matching_ft_child = ft_child
break
if matching_ft_child is None:
continue
entity = matching_ft_child["entity"]
entity_data["ft_entity"] = entity
matching_ftrack_entities.append(entity)
hierarchy_children = entity_data.get("childs")
if not hierarchy_children:
continue
for child_name, child_data in hierarchy_children.items():
walk_queue.append(
(child_name, child_data, matching_ft_child["children"])
)
return matching_ftrack_entities
def query_custom_attribute_values(self, session, entities, hier_attrs):
attr_ids = {
attr["id"]
for attr in hier_attrs
}
entity_ids = {
entity["id"]
for entity in entities
}
output = {
entity_id: {}
for entity_id in entity_ids
}
if not attr_ids or not entity_ids:
return {}
joined_attr_ids = ",".join(
['"{}"'.format(attr_id) for attr_id in attr_ids]
)
# Query values in chunks
chunk_size = int(5000 / len(attr_ids))
# Make sure entity_ids is `list` for chunk selection
entity_ids = list(entity_ids)
results = []
for idx in range(0, len(entity_ids), chunk_size):
joined_entity_ids = ",".join([
'"{}"'.format(entity_id)
for entity_id in entity_ids[idx:idx + chunk_size]
])
results.extend(
session.query(
(
"select value, entity_id, configuration_id"
" from CustomAttributeValue"
" where entity_id in ({}) and configuration_id in ({})"
).format(
joined_entity_ids,
joined_attr_ids
)
).all()
)
for result in results:
attr_id = result["configuration_id"]
entity_id = result["entity_id"]
output[entity_id][attr_id] = result["value"]
return output
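The chunking arithmetic keeps each generated `entity_id in (...) and configuration_id in (...)` query bounded: the more attribute ids, the fewer entity ids per query. A sketch of just the slicing, with the same budget of 5000 used above:

```python
# Split entity ids into chunks sized by the attribute count, mirroring
# the chunked querying above.
def chunk_entity_ids(entity_ids, attr_count, budget=5000):
    chunk_size = int(budget / attr_count)
    # Make sure it is a list for slice-based chunk selection
    entity_ids = list(entity_ids)
    return [
        entity_ids[idx:idx + chunk_size]
        for idx in range(0, len(entity_ids), chunk_size)
    ]

# 2500 attributes -> chunk size of 2 entity ids per query
chunks = chunk_entity_ids(range(10), attr_count=2500)
```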
def import_to_ftrack(self, project_name, hierarchy_context):
# Pre-query hierarchical custom attributes
hier_attrs = get_pype_attr(self.session)[1]
hier_attr_by_key = {
attr["key"]: attr
for attr in hier_attrs
}
# Query user entity (for comments)
user = self.session.query(
"User where username is \"{}\"".format(self.session.api_user)
).first()
if not user:
self.log.warning(
"Was not able to query current User {}".format(
self.session.api_user
)
)
# Query ftrack hierarchy with parenting
ftrack_hierarchy = self.query_ftrack_entitites(
self.session, self.ft_project)
# Fill ftrack entities to hierarchy context
# - there is no need to query entities again
matching_entities = self.find_matching_ftrack_entities(
hierarchy_context, ftrack_hierarchy)
# Query custom attribute values of each entity
custom_attr_values_by_id = self.query_custom_attribute_values(
self.session, matching_entities, hier_attrs)
# Get ftrack api module (as they are different per python version)
ftrack_api = self.context.data["ftrackPythonModule"]
# Use queue of hierarchy items to process
import_queue = collections.deque()
for entity_name, entity_data in hierarchy_context.items():
import_queue.append(
(entity_name, entity_data, None)
)
while import_queue:
item = import_queue.popleft()
entity_name, entity_data, parent = item
entity_type = entity_data['entity_type']
self.log.debug(entity_data)
self.log.debug(entity_type)
entity = entity_data.get("ft_entity")
if entity is None and entity_type.lower() == "project":
raise AssertionError(
"Collected items are not in right order!"
)
# Create entity if not exists
if entity is None:
entity = self.session.create(entity_type, {
"name": entity_name,
"parent": parent
})
entity_data["ft_entity"] = entity
# self.log.info('entity: {}'.format(dict(entity)))
# CUSTOM ATTRIBUTES
custom_attributes = entity_data.get('custom_attributes', {})
instances = []
for instance in self.context:
instance_asset_name = instance.data.get("asset")
if (
instance_asset_name
and instance_asset_name.lower() == entity["name"].lower()
):
instances.append(instance)
for instance in instances:
instance.data["ftrackEntity"] = entity
for key, cust_attr_value in custom_attributes.items():
if cust_attr_value is None:
continue
hier_attr = hier_attr_by_key.get(key)
# Use simple method if key is not hierarchical
if not hier_attr:
if key not in entity["custom_attributes"]:
raise KnownPublishError((
"Missing custom attribute in ftrack with name '{}'"
).format(key))
entity['custom_attributes'][key] = cust_attr_value
continue
attr_id = hier_attr["id"]
entity_values = custom_attr_values_by_id.get(entity["id"], {})
# New value is defined by having id in values
# - it can be set to 'None' (ftrack allows that using API)
is_new_value = attr_id not in entity_values
attr_value = entity_values.get(attr_id)
# Use ftrack operations method to set hierarchical
# attribute value.
# - this is because there may be non-hierarchical custom
# attributes with different properties
entity_key = collections.OrderedDict((
("configuration_id", hier_attr["id"]),
("entity_id", entity["id"])
))
op = None
if is_new_value:
op = ftrack_api.operation.CreateEntityOperation(
"CustomAttributeValue",
entity_key,
{"value": cust_attr_value}
)
elif attr_value != cust_attr_value:
op = ftrack_api.operation.UpdateEntityOperation(
"CustomAttributeValue",
entity_key,
"value",
attr_value,
cust_attr_value
)
if op is not None:
self.session.recorded_operations.push(op)
if self.session.recorded_operations:
try:
self.session.commit()
except Exception:
@ -206,7 +367,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
for instance in instances:
task_name = instance.data.get("task")
if task_name:
instances_by_task_name[task_name.lower()].append(instance)
tasks = entity_data.get('tasks', [])
existing_tasks = []
@ -247,30 +408,28 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
six.reraise(tp, value, tb)
# Create notes.
entity_comments = entity_data.get("comments")
if user and entity_comments:
for comment in entity_comments:
entity.create_note(comment, user)
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
# Import children.
children = entity_data.get("childs")
if not children:
continue
for entity_name, entity_data in children.items():
import_queue.append(
(entity_name, entity_data, entity)
)
def create_links(self, project_name, entity_data, entity):
# Clear existing links.
@ -366,48 +525,6 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
return task
def create_entity(self, name, type, parent):
entity = self.session.create(type, {
'name': name,
'parent': parent
})
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
return entity
def auto_sync_off(self, project):
project["custom_attributes"][CUST_ATTR_AUTO_SYNC] = False
self.log.info("Ftrack autosync switched off")
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
def auto_sync_on(self, project):
project["custom_attributes"][CUST_ATTR_AUTO_SYNC] = True
self.log.info("Ftrack autosync switched on")
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
def _get_active_assets(self, context):
""" Returns only asset dictionary.
Usually the last part of deep dictionary which
@ -429,19 +546,17 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
hierarchy_context = context.data["hierarchyContext"]
active_assets = set()
# filter only the active publishing instances
for instance in context:
if instance.data.get("publish") is False:
continue
asset_name = instance.data.get("asset")
if asset_name:
active_assets.add(asset_name)
self.log.debug("__ active_assets: {}".format(list(active_assets)))
return get_pure_hierarchy_data(hierarchy_context)


@ -7,6 +7,8 @@ import signal
import socket
import datetime
import appdirs
import ftrack_api
from openpype_modules.ftrack.ftrack_server.ftrack_server import FtrackServer
from openpype_modules.ftrack.ftrack_server.lib import (
@ -253,6 +255,15 @@ class StatusFactory:
)
})
items.append({
"type": "label",
"value": (
"Local versions dir: {}<br/>Version repository path: {}"
).format(
appdirs.user_data_dir("openpype", "pypeclub"),
os.environ.get("OPENPYPE_PATH")
)
})
items.append({"type": "label", "value": "---"})
return items


@ -31,7 +31,6 @@ class IntegrateKitsuReview(pyblish.api.InstancePlugin):
continue
review_path = representation.get("published_path")
self.log.debug("Found review at: {}".format(review_path))
gazu.task.add_preview(


@ -18,15 +18,15 @@ class CollectSlackFamilies(pyblish.api.InstancePlugin):
profiles = None
def process(self, instance):
task_data = instance.data["anatomyData"].get("task", {})
family = self.main_family_from_instance(instance)
key_values = {
"families": family,
"tasks": task_data.get("name"),
"task_types": task_data.get("type"),
"hosts": instance.data["anatomyData"]["app"],
"subsets": instance.data["subset"]
}
profile = filter_profiles(self.profiles, key_values,
logger=self.log)

View file

@ -112,7 +112,13 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
if review_path:
fill_pairs.append(("review_filepath", review_path))
task_data = (
copy.deepcopy(instance.data.get("anatomyData", {})).get("task")
or fill_data.get("task")
)
if not isinstance(task_data, dict):
# fallback for legacy - if task_data is only task name
task_data = {"name": task_data}
if task_data:
if (
"{task}" in message_templ
@ -142,13 +148,17 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
def _get_thumbnail_path(self, instance):
"""Returns abs url for thumbnail if present in instance repres"""
thumbnail_path = None
for repre in instance.data.get("representations", []):
if repre.get('thumbnail') or "thumbnail" in repre.get('tags', []):
repre_thumbnail_path = (
repre.get("published_path") or
os.path.join(repre["stagingDir"], repre["files"])
)
if os.path.exists(repre_thumbnail_path):
thumbnail_path = repre_thumbnail_path
break
return thumbnail_path
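The fallback resolution prefers "published_path" and otherwise joins the staging dir with the file name. A reduced sketch of that `or`-chain, with invented paths:

```python
import os

# Prefer the published path; fall back to staging dir + file name.
# The dictionaries and paths here are invented for illustration.
def resolve_thumbnail(repre):
    return (
        repre.get("published_path")
        or os.path.join(repre["stagingDir"], repre["files"])
    )

a = resolve_thumbnail({"published_path": "/pub/thumb.jpg"})
b = resolve_thumbnail({"stagingDir": "/tmp/stage", "files": "thumb.jpg"})
```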
def _get_review_path(self, instance):
"""Returns abs url for review if present in instance repres"""
@ -178,10 +188,17 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
channel=channel,
title=os.path.basename(p_file)
)
if response.get("error"):
error_str = self._enrich_error(
str(response.get("error")),
channel)
self.log.warning(
"Error happened: {}".format(error_str))
else:
attachment_str += "\n<{}|{}>".format(
response["file"]["permalink"],
os.path.basename(p_file))
file_ids.append(response["file"]["id"])
if publish_files:
message += attachment_str


@ -0,0 +1,37 @@
from aiohttp.web_response import Response
from openpype.lib import Logger
class SyncServerModuleRestApi:
"""
REST API endpoint used for calling from hosts when context change
happens in Workfile app.
"""
def __init__(self, user_module, server_manager):
self._log = None
self.module = user_module
self.server_manager = server_manager
self.prefix = "/sync_server"
self.register()
@property
def log(self):
if self._log is None:
self._log = Logger.get_logger(self.__class__.__name__)
return self._log
def register(self):
self.server_manager.add_route(
"POST",
self.prefix + "/reset_timer",
self.reset_timer,
)
async def reset_timer(self, _request):
"""Force timer to run immediately."""
self.module.reset_timer()
return Response(status=200)


@ -236,6 +236,7 @@ class SyncServerThread(threading.Thread):
"""
def __init__(self, module):
self.log = Logger.get_logger(self.__class__.__name__)
super(SyncServerThread, self).__init__()
self.module = module
self.loop = None


@ -136,14 +136,14 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
""" Start of Public API """
def add_site(self, project_name, representation_id, site_name=None,
force=False, priority=None, reset_timer=False):
"""
Adds new site to representation to be synced.
'project_name' must have synchronization enabled (globally or
project only)
Used as an API endpoint from outside applications (Loader etc).
Use 'force' to reset existing site.
@ -152,6 +152,9 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
representation_id (string): MongoDB _id value
site_name (string): name of configured and active site
force (bool): reset site if exists
priority (int): set priority
reset_timer (bool): if delay timer should be reset, eg. user mark
some representation to be synced manually
Throws:
SiteAlreadyPresentError - if adding already existing site and
@ -167,7 +170,11 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
self.reset_site_on_representation(project_name,
representation_id,
site_name=site_name,
force=force,
priority=priority)
if reset_timer:
self.reset_timer()
def remove_site(self, project_name, representation_id, site_name,
remove_local_files=False):
@ -911,7 +918,59 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
In case of user's involvement (reset site), start that right away.
"""
if not self.enabled:
return
if self.sync_server_thread is None:
self._reset_timer_with_rest_api()
else:
self.sync_server_thread.reset_timer()
def is_representation_on_site(
self, project_name, representation_id, site_name
):
"""Checks if 'representation_id' has all files avail. on 'site_name'"""
representation = get_representation_by_id(project_name,
representation_id,
fields=["_id", "files"])
if not representation:
return False
on_site = False
for file_info in representation.get("files", []):
for site in file_info.get("sites", []):
if site["name"] != site_name:
continue
if (site.get("progress") or site.get("error") or
not site.get("created_dt")):
return False
on_site = True
return on_site
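The availability check above can be exercised standalone; the dictionaries below only assume the keys read by the loop ("sites", "name", "progress", "error", "created_dt"), not a full representation document:

```python
# A file is on the site only when its site record has 'created_dt' set
# and carries no in-flight 'progress' and no 'error'.
def is_on_site(files, site_name):
    on_site = False
    for file_info in files:
        for site in file_info.get("sites", []):
            if site["name"] != site_name:
                continue
            if (site.get("progress") or site.get("error")
                    or not site.get("created_dt")):
                return False
            on_site = True
    return on_site

# Invented example data
files = [
    {"sites": [{"name": "gdrive", "created_dt": "2022-12-02"}]},
    {"sites": [{"name": "gdrive", "created_dt": "2022-12-02"}]},
]
```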
def _reset_timer_with_rest_api(self):
# POST to webserver sites to add to representations
webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
if not webserver_url:
self.log.warning("Couldn't find webserver url")
return
rest_api_url = "{}/sync_server/reset_timer".format(
webserver_url
)
try:
import requests
except Exception:
self.log.warning(
"Couldn't add sites to representations "
"('requests' is not available)"
)
return
requests.post(rest_api_url)
def get_enabled_projects(self):
"""Returns list of projects which have SyncServer enabled."""
@ -1544,12 +1603,12 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
Args:
project_name (string): name of project - force to db connection as
each file might come from different collection
new_file_id (string): only present if file synced successfully
file (dictionary): info about processed file (pulled from DB)
representation (dictionary): parent repr of file (from DB)
site (string): label ('gdrive', 'S3')
error (string): exception message
progress (float): 0-0.99 of progress of upload/download
priority (int): 0-100 set priority
Returns:
@ -1655,7 +1714,8 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
def reset_site_on_representation(self, project_name, representation_id,
side=None, file_id=None, site_name=None,
remove=False, pause=None, force=False,
priority=None):
"""
Reset information about synchronization for particular 'file_id'
and provider.
@ -1678,6 +1738,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
remove (bool): if True remove site altogether
pause (bool or None): if True - pause, False - unpause
force (bool): hard reset - currently only for add_site
priority (int): set priority
Raises:
SiteAlreadyPresentError - if adding already existing site and
@ -1705,6 +1766,10 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
elem = {"name": site_name}
# Add priority
if priority:
elem["priority"] = priority
if file_id: # reset site for particular file
self._reset_site_for_file(project_name, representation_id,
elem, file_id, site_name)
@ -2089,6 +2154,15 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
def cli(self, click_group):
click_group.add_command(cli_main)
# Webserver module implementation
def webserver_initialization(self, server_manager):
"""Add routes for syncs."""
if self.tray_initialized:
from .rest_api import SyncServerModuleRestApi
self.rest_api_obj = SyncServerModuleRestApi(
self, server_manager
)
@click.group(SyncServerModule.name, help="SyncServer module related commands.")
def cli_main():


@ -21,7 +21,7 @@ class TimersManagerModuleRestApi:
@property
def log(self):
if self._log is None:
self._log = Logger.get_logger(self.__class__.__name__)
return self._log
def register(self):


@ -393,8 +393,9 @@ class BaseCreator:
asset_doc(dict): Asset document for which subset is created.
project_name(str): Project name.
host_name(str): Which host creates subset.
instance(CreatedInstance|None): Object of 'CreatedInstance' for
which is subset name updated. Passed only on subset name
update.
"""
dynamic_data = self.get_dynamic_data(


@ -188,7 +188,7 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
for subset_doc in subset_docs:
subset_id = subset_doc["_id"]
last_version_doc = last_version_docs_by_subset_id.get(subset_id)
if last_version_doc is None:
continue
asset_id = subset_doc["parent"]


@ -179,7 +179,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
single_frame_image = False
if len(input_filepaths) == 1:
ext = os.path.splitext(input_filepaths[0])[-1]
single_frame_image = ext.lower() in IMAGE_EXTENSIONS
filtered_defs = []
for output_def in output_defs:
@ -501,7 +501,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
first_sequence_frame += handle_start
ext = os.path.splitext(repre["files"][0])[1].replace(".", "")
if ext.lower() in self.alpha_exts:
input_allow_bg = True
return {
@ -934,6 +934,8 @@ class ExtractReview(pyblish.api.InstancePlugin):
if output_ext.startswith("."):
output_ext = output_ext[1:]
output_ext = output_ext.lower()
# Store extension to representation
new_repre["ext"] = output_ext


@ -129,7 +129,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
"mvUsd",
"mvUsdComposition",
"mvUsdOverride",
"simpleUnrealTexture",
"online"
]
default_template_name = "publish"


@ -48,10 +48,16 @@
"file": "{originalBasename}_{@version}.{ext}",
"path": "{@folder}/{@file}"
},
"online": {
"folder": "{root[work]}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}/{@version}",
"file": "{originalBasename}<.{@frame}><_{udim}>.{ext}",
"path": "{@folder}/{@file}"
},
"__dynamic_keys_labels__": {
"maya2unreal": "Maya to Unreal",
"simpleUnrealTextureHero": "Simple Unreal Texture - Hero",
"simpleUnrealTexture": "Simple Unreal Texture",
"online": "online"
}
}
}


@ -288,6 +288,17 @@
"task_types": [],
"tasks": [],
"template_name": "maya2unreal"
},
{
"families": [
"online"
],
"hosts": [
"traypublisher"
],
"task_types": [],
"tasks": [],
"template_name": "online"
}
]
},
@ -404,15 +415,13 @@
"template": "{family}{Task}"
},
{
"families": ["render"],
"hosts": [
"aftereffects"
],
"task_types": [],
"tasks": [],
"template": "{family}{Task}{Composition}{Variant}"
},
{
"families": [
@ -458,7 +467,8 @@
"hosts": [],
"task_types": [],
"tasks": [],
"enabled": true,
"use_last_published_workfile": false
}
],
"open_workfile_tool_on_startup": [


@ -303,5 +303,12 @@
"extensions": [
".mov"
]
},
"publish": {
"ValidateFrameRange": {
"enabled": true,
"optional": true,
"active": true
}
}
}


@ -11,6 +11,11 @@
255,
255,
255
],
"families_to_review": [
"review",
"renderlayer",
"renderscene"
]
},
"ValidateProjectSettings": {


@ -311,6 +311,24 @@
"object_type": "text"
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "publish",
"label": "Publish plugins",
"children": [
{
"type": "schema_template",
"name": "template_validate_plugin",
"template_data": [
{
"key": "ValidateFrameRange",
"label": "Validate frame range"
}
]
}
]
}
]
}


@ -56,6 +56,18 @@
"key": "review_bg",
"label": "Review BG color",
"use_alpha": false
},
{
"type": "enum",
"key": "families_to_review",
"label": "Families to review",
"multiselection": true,
"enum_items": [
{"review": "review"},
{"renderpass": "renderPass"},
{"renderlayer": "renderLayer"},
{"renderscene": "renderScene"}
]
}
]
},


@ -16,22 +16,26 @@
{
"type": "number",
"key": "frameStart",
"label": "Frame Start",
"maximum": 999999999
},
{
"type": "number",
"key": "frameEnd",
"label": "Frame End",
"maximum": 999999999
},
{
"type": "number",
"key": "clipIn",
"label": "Clip In",
"maximum": 999999999
},
{
"type": "number",
"key": "clipOut",
"label": "Clip Out",
"maximum": 999999999
},
{
"type": "number",


@ -149,6 +149,11 @@
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "boolean",
"key": "use_last_published_workfile",
"label": "Use last published workfile"
}
]
}


@ -0,0 +1,26 @@
[
{
"type": "dict",
"collapsible": true,
"key": "{key}",
"label": "{label}",
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "boolean",
"key": "optional",
"label": "Optional"
},
{
"type": "boolean",
"key": "active",
"label": "Active"
}
]
}
]


@ -1126,6 +1126,10 @@ ValidationArtistMessage QLabel {
background: transparent;
}
CreateNextPageOverlay {
font-size: 32pt;
}
/* Settings - NOT USED YET
- we need to define font family for settings UI */


@ -349,7 +349,7 @@ class FilesModel(QtGui.QStandardItemModel):
item.setData(file_item.filenames, FILENAMES_ROLE)
item.setData(file_item.directory, DIRPATH_ROLE)
item.setData(icon_pixmap, ITEM_ICON_ROLE)
item.setData(file_item.lower_ext, EXT_ROLE)
item.setData(file_item.is_dir, IS_DIR_ROLE)
item.setData(file_item.is_sequence, IS_SEQUENCE_ROLE)
@ -463,7 +463,7 @@ class FilesProxyModel(QtCore.QSortFilterProxyModel):
for filepath in filepaths:
if os.path.isfile(filepath):
_, ext = os.path.splitext(filepath)
if ext.lower() in self._allowed_extensions:
return True
elif self._allow_folders:
@ -475,7 +475,7 @@ class FilesProxyModel(QtCore.QSortFilterProxyModel):
for filepath in filepaths:
if os.path.isfile(filepath):
_, ext = os.path.splitext(filepath)
if ext.lower() in self._allowed_extensions:
filtered_paths.append(filepath)
elif self._allow_folders:


@ -6,6 +6,7 @@ from Qt import QtWidgets, QtCore
from openpype.lib.attribute_definitions import (
AbtractAttrDef,
UnknownDef,
HiddenDef,
NumberDef,
TextDef,
EnumDef,
@ -22,6 +23,16 @@ from .files_widget import FilesWidget
def create_widget_for_attr_def(attr_def, parent=None):
widget = _create_widget_for_attr_def(attr_def, parent)
if attr_def.hidden:
widget.setVisible(False)
if attr_def.disabled:
widget.setEnabled(False)
return widget
def _create_widget_for_attr_def(attr_def, parent=None):
if not isinstance(attr_def, AbtractAttrDef):
raise TypeError("Unexpected type \"{}\" expected \"{}\"".format(
str(type(attr_def)), AbtractAttrDef
@ -42,6 +53,9 @@ def create_widget_for_attr_def(attr_def, parent=None):
if isinstance(attr_def, UnknownDef):
return UnknownAttrWidget(attr_def, parent)
if isinstance(attr_def, HiddenDef):
return HiddenAttrWidget(attr_def, parent)
if isinstance(attr_def, FileDef):
return FileAttrWidget(attr_def, parent)
@ -115,6 +129,10 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget):
self._current_keys.add(attr_def.key)
widget = create_widget_for_attr_def(attr_def, self)
self._widgets.append(widget)
if attr_def.hidden:
continue
expand_cols = 2
if attr_def.is_value_def and attr_def.is_label_horizontal:
@ -133,7 +151,6 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget):
layout.addWidget(
widget, row, col_num, 1, expand_cols
)
self._widgets.append(widget)
row += 1
def set_value(self, value):
@ -459,6 +476,29 @@ class UnknownAttrWidget(_BaseAttrDefWidget):
self._input_widget.setText(str_value)
class HiddenAttrWidget(_BaseAttrDefWidget):
def _ui_init(self):
self.setVisible(False)
self._value = None
self._multivalue = False
def setVisible(self, visible):
if visible:
visible = False
super(HiddenAttrWidget, self).setVisible(visible)
def current_value(self):
if self._multivalue:
raise ValueError("{} can't output for multivalue.".format(
self.__class__.__name__
))
return self._value
def set_value(self, value, multivalue=False):
self._value = copy.deepcopy(value)
self._multivalue = multivalue
class FileAttrWidget(_BaseAttrDefWidget):
def _ui_init(self):
input_widget = FilesWidget(


@ -28,7 +28,7 @@ class NameDef:
class NumberDef:
def __init__(self, minimum=None, maximum=None, decimals=None):
self.minimum = 0 if minimum is None else minimum
self.maximum = 999999 if maximum is None else maximum
self.maximum = 999999999 if maximum is None else maximum
self.decimals = 0 if decimals is None else decimals


@ -8,6 +8,7 @@ from .widgets import (
ResetBtn,
ValidateBtn,
PublishBtn,
CreateNextPageOverlay,
)
from .help_widget import (
HelpButton,
@ -28,6 +29,7 @@ __all__ = (
"ResetBtn",
"ValidateBtn",
"PublishBtn",
"CreateNextPageOverlay",
"HelpButton",
"HelpDialog",


@ -674,9 +674,16 @@ class InstanceCardView(AbstractInstanceView):
instances_by_group[group_name]
)
self._update_ordered_group_names()
def has_items(self):
if self._convertor_items_group is not None:
return True
if self._widgets_by_group:
return True
return False
def _update_ordered_group_names(self):
ordered_group_names = [CONTEXT_GROUP]
for idx in range(self._content_layout.count()):
if idx > 0:


@ -912,6 +912,13 @@ class InstanceListView(AbstractInstanceView):
if not self._instance_view.isExpanded(proxy_index):
self._instance_view.expand(proxy_index)
def has_items(self):
if self._convertor_group_widget is not None:
return True
if self._group_items:
return True
return False
def get_selected_items(self):
"""Get selected instance ids and context selection.


@ -195,6 +195,20 @@ class OverviewWidget(QtWidgets.QFrame):
self._subset_views_widget.setMaximumWidth(view_width)
self._change_anim.start()
def get_subset_views_geo(self):
parent = self._subset_views_widget.parent()
global_pos = parent.mapToGlobal(self._subset_views_widget.pos())
return QtCore.QRect(
global_pos.x(),
global_pos.y(),
self._subset_views_widget.width(),
self._subset_views_widget.height()
)
def has_items(self):
view = self._subset_views_layout.currentWidget()
return view.has_items()
def _on_create_clicked(self):
"""Pass signal to parent widget which should care about changing state.


@ -54,6 +54,9 @@ class PublisherTabsWidget(QtWidgets.QFrame):
self._buttons_by_identifier = {}
def is_current_tab(self, identifier):
if isinstance(identifier, int):
identifier = self.get_tab_by_index(identifier)
if isinstance(identifier, PublisherTabBtn):
identifier = identifier.identifier
return self._current_identifier == identifier
@ -68,7 +71,16 @@ class PublisherTabsWidget(QtWidgets.QFrame):
self.set_current_tab(identifier)
return button
def get_tab_by_index(self, index):
if 0 <= index < self._btns_layout.count():
item = self._btns_layout.itemAt(index)
return item.widget()
return None
def set_current_tab(self, identifier):
if isinstance(identifier, int):
identifier = self.get_tab_by_index(identifier)
if isinstance(identifier, PublisherTabBtn):
identifier = identifier.identifier


@ -511,7 +511,7 @@ class ValidationsWidget(QtWidgets.QFrame):
)
# After success publishing
publish_started_widget = ValidationArtistMessage(
"So far so good", self
)
# After success publishing
publish_stop_ok_widget = ValidationArtistMessage(


@ -9,6 +9,7 @@ import collections
from Qt import QtWidgets, QtCore, QtGui
import qtawesome
from openpype.lib.attribute_definitions import UnknownDef
from openpype.tools.attribute_defs import create_widget_for_attr_def
from openpype.tools import resources
from openpype.tools.flickcharm import FlickCharm
@ -305,6 +306,20 @@ class AbstractInstanceView(QtWidgets.QWidget):
"{} Method 'refresh' is not implemented."
).format(self.__class__.__name__))
def has_items(self):
"""View has at least one item.
This is more a question for the controller but is called from a widget
which probably should not use the controller directly.
Returns:
bool: There is at least one instance or conversion item.
"""
raise NotImplementedError((
"{} Method 'has_items' is not implemented."
).format(self.__class__.__name__))
def get_selected_items(self):
"""Selected instances required for callbacks.
@ -578,6 +593,11 @@ class TasksCombobox(QtWidgets.QComboBox):
self._text = None
# Make sure combobox is extended horizontally
size_policy = self.sizePolicy()
size_policy.setHorizontalPolicy(size_policy.MinimumExpanding)
self.setSizePolicy(size_policy)
def set_invalid_empty_task(self, invalid=True):
self._proxy_model.set_filter_empty(invalid)
if invalid:
@ -1180,7 +1200,7 @@ class GlobalAttrsWidget(QtWidgets.QWidget):
"""Set currently selected instances.
Args:
instances(List[CreatedInstance]): List of selected instances.
Empty instances tells that nothing or context is selected.
"""
self._set_btns_visible(False)
@ -1303,6 +1323,13 @@ class CreatorAttrsWidget(QtWidgets.QWidget):
else:
widget.set_value(values, True)
widget.value_changed.connect(self._input_value_changed)
self._attr_def_id_to_instances[attr_def.id] = attr_instances
self._attr_def_id_to_attr_def[attr_def.id] = attr_def
if attr_def.hidden:
continue
expand_cols = 2
if attr_def.is_value_def and attr_def.is_label_horizontal:
expand_cols = 1
@ -1321,13 +1348,8 @@ class CreatorAttrsWidget(QtWidgets.QWidget):
content_layout.addWidget(
widget, row, col_num, 1, expand_cols
)
row += 1
self._scroll_area.setWidget(content_widget)
self._content_widget = content_widget
@ -1421,8 +1443,17 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget):
widget = create_widget_for_attr_def(
attr_def, content_widget
)
label = attr_def.label or attr_def.key
content_layout.addRow(label, widget)
hidden_widget = attr_def.hidden
# Hide unknown values of publish plugins
# - The keys in most cases do not represent what the label
#   would represent
if isinstance(attr_def, UnknownDef):
widget.setVisible(False)
hidden_widget = True
if not hidden_widget:
label = attr_def.label or attr_def.key
content_layout.addRow(label, widget)
widget.value_changed.connect(self._input_value_changed)
@ -1614,6 +1645,7 @@ class SubsetAttributesWidget(QtWidgets.QWidget):
instances(List[CreatedInstance]): List of currently selected
instances.
context_selected(bool): Is context selected.
convertor_identifiers(List[str]): Identifiers of convert items.
"""
all_valid = True
@ -1708,3 +1740,159 @@ class SubsetAttributesWidget(QtWidgets.QWidget):
self._thumbnail_widget.setVisible(True)
self._thumbnail_widget.set_current_thumbnails(thumbnail_paths)
class CreateNextPageOverlay(QtWidgets.QWidget):
clicked = QtCore.Signal()
def __init__(self, parent):
super(CreateNextPageOverlay, self).__init__(parent)
self.setCursor(QtCore.Qt.PointingHandCursor)
self._arrow_color = (
get_objected_colors("font").get_qcolor()
)
self._bg_color = (
get_objected_colors("bg-buttons").get_qcolor()
)
change_anim = QtCore.QVariantAnimation()
change_anim.setStartValue(0.0)
change_anim.setEndValue(1.0)
change_anim.setDuration(200)
change_anim.setEasingCurve(QtCore.QEasingCurve.OutCubic)
change_anim.valueChanged.connect(self._on_anim)
self._change_anim = change_anim
self._is_visible = None
self._anim_value = 0.0
self._increasing = False
self._under_mouse = None
self._handle_show_on_own = True
self._mouse_pressed = False
self.set_visible(True)
def set_increasing(self, increasing):
if self._increasing is increasing:
return
self._increasing = increasing
if increasing:
self._change_anim.setDirection(self._change_anim.Forward)
else:
self._change_anim.setDirection(self._change_anim.Backward)
if self._change_anim.state() != self._change_anim.Running:
self._change_anim.start()
def set_visible(self, visible):
if self._is_visible is visible:
return
self._is_visible = visible
if not visible:
self.set_increasing(False)
if not self._is_anim_finished():
return
self.setVisible(visible)
self._check_anim_timer()
def _is_anim_finished(self):
if self._increasing:
return self._anim_value == 1.0
return self._anim_value == 0.0
def _on_anim(self, value):
self._check_anim_timer()
self._anim_value = value
self.update()
if not self._is_anim_finished():
return
if not self._is_visible:
self.setVisible(False)
def set_under_mouse(self, under_mouse):
if self._under_mouse is under_mouse:
return
self._under_mouse = under_mouse
self.set_increasing(under_mouse)
def _is_under_mouse(self):
mouse_pos = self.mapFromGlobal(QtGui.QCursor.pos())
under_mouse = self.rect().contains(mouse_pos)
return under_mouse
def _check_anim_timer(self):
if not self.isVisible():
return
self.set_increasing(self._under_mouse)
def mousePressEvent(self, event):
if event.button() == QtCore.Qt.LeftButton:
self._mouse_pressed = True
super(CreateNextPageOverlay, self).mousePressEvent(event)
def mouseReleaseEvent(self, event):
if self._mouse_pressed:
self._mouse_pressed = False
if self.rect().contains(event.pos()):
self.clicked.emit()
super(CreateNextPageOverlay, self).mouseReleaseEvent(event)
def paintEvent(self, event):
painter = QtGui.QPainter()
painter.begin(self)
if self._anim_value == 0.0:
painter.end()
return
painter.setClipRect(event.rect())
painter.setRenderHints(
painter.Antialiasing
| painter.SmoothPixmapTransform
)
painter.setPen(QtCore.Qt.NoPen)
rect = QtCore.QRect(self.rect())
rect_width = rect.width()
rect_height = rect.height()
radius = rect_width * 0.2
x_offset = 0
y_offset = 0
if self._anim_value != 1.0:
x_offset += rect_width - (rect_width * self._anim_value)
arrow_height = rect_height * 0.4
arrow_half_height = arrow_height * 0.5
arrow_x_start = x_offset + ((rect_width - arrow_half_height) * 0.5)
arrow_x_end = arrow_x_start + arrow_half_height
center_y = rect.center().y()
painter.setBrush(self._bg_color)
painter.drawRoundedRect(
x_offset, y_offset,
rect_width + radius, rect_height,
radius, radius
)
src_arrow_path = QtGui.QPainterPath()
src_arrow_path.moveTo(arrow_x_start, center_y - arrow_half_height)
src_arrow_path.lineTo(arrow_x_end, center_y)
src_arrow_path.lineTo(arrow_x_start, center_y + arrow_half_height)
arrow_stroker = QtGui.QPainterPathStroker()
arrow_stroker.setWidth(min(4, arrow_half_height * 0.2))
arrow_path = arrow_stroker.createStroke(src_arrow_path)
painter.fillPath(arrow_path, self._arrow_color)
painter.end()

@ -29,6 +29,8 @@ from .widgets import (
HelpButton,
HelpDialog,
CreateNextPageOverlay,
)
@ -154,7 +156,7 @@ class PublisherWindow(QtWidgets.QDialog):
footer_layout.addWidget(footer_bottom_widget, 0)
# Content
# - wrap stacked widget under one more widget to be able propagate
# - wrap stacked widget under one more widget to be able to propagate
# margins (QStackedLayout can't have margins)
content_widget = QtWidgets.QWidget(under_publish_widget)
@ -225,8 +227,8 @@ class PublisherWindow(QtWidgets.QDialog):
# Floating publish frame
publish_frame = PublishFrame(controller, self.footer_border, self)
# Timer started on show -> connected to timer counter
# - helps to defer on-show logic by 3 event loops
create_overlay_button = CreateNextPageOverlay(self)
show_timer = QtCore.QTimer()
show_timer.setInterval(1)
show_timer.timeout.connect(self._on_show_timer)
@ -255,6 +257,9 @@ class PublisherWindow(QtWidgets.QDialog):
publish_btn.clicked.connect(self._on_publish_clicked)
publish_frame.details_page_requested.connect(self._go_to_details_tab)
create_overlay_button.clicked.connect(
self._on_create_overlay_button_click
)
controller.event_system.add_callback(
"instances.refresh.finished", self._on_instances_refresh
@ -262,6 +267,9 @@ class PublisherWindow(QtWidgets.QDialog):
controller.event_system.add_callback(
"publish.reset.finished", self._on_publish_reset
)
controller.event_system.add_callback(
"controller.reset.finished", self._on_controller_reset
)
controller.event_system.add_callback(
"publish.process.started", self._on_publish_start
)
@ -310,6 +318,7 @@ class PublisherWindow(QtWidgets.QDialog):
self._publish_overlay = publish_overlay
self._publish_frame = publish_frame
self._content_widget = content_widget
self._content_stacked_layout = content_stacked_layout
self._overview_widget = overview_widget
@ -331,25 +340,39 @@ class PublisherWindow(QtWidgets.QDialog):
self._controller = controller
self._first_show = True
self._first_reset = True
# This is a little bit confusing but 'reset_on_first_show' is too long
# forin init
# for init
self._reset_on_first_show = reset_on_show
self._reset_on_show = True
self._publish_frame_visible = None
self._tab_on_reset = None
self._error_messages_to_show = collections.deque()
self._errors_dialog_message_timer = errors_dialog_message_timer
self._set_publish_visibility(False)
self._create_overlay_button = create_overlay_button
self._app_event_listener_installed = False
self._show_timer = show_timer
self._show_counter = 0
self._window_is_visible = False
@property
def controller(self):
return self._controller
def make_sure_is_visible(self):
if self._window_is_visible:
self.setWindowState(QtCore.Qt.ActiveWindow)
else:
self.show()
def showEvent(self, event):
self._window_is_visible = True
super(PublisherWindow, self).showEvent(event)
if self._first_show:
self._first_show = False
@ -360,6 +383,38 @@ class PublisherWindow(QtWidgets.QDialog):
def resizeEvent(self, event):
super(PublisherWindow, self).resizeEvent(event)
self._update_publish_frame_rect()
self._update_create_overlay_size()
def closeEvent(self, event):
self._window_is_visible = False
self._uninstall_app_event_listener()
self.save_changes()
self._reset_on_show = True
self._controller.clear_thumbnail_temp_dir_path()
super(PublisherWindow, self).closeEvent(event)
def leaveEvent(self, event):
super(PublisherWindow, self).leaveEvent(event)
self._update_create_overlay_visibility()
def eventFilter(self, obj, event):
if event.type() == QtCore.QEvent.MouseMove:
self._update_create_overlay_visibility(event.globalPos())
return super(PublisherWindow, self).eventFilter(obj, event)
def _install_app_event_listener(self):
if self._app_event_listener_installed:
return
self._app_event_listener_installed = True
app = QtWidgets.QApplication.instance()
app.installEventFilter(self)
def _uninstall_app_event_listener(self):
if not self._app_event_listener_installed:
return
self._app_event_listener_installed = False
app = QtWidgets.QApplication.instance()
app.removeEventFilter(self)
def keyPressEvent(self, event):
# Ignore escape button to close window
@ -390,17 +445,16 @@ class PublisherWindow(QtWidgets.QDialog):
# Reset counter when done for next show event
self._show_counter = 0
self._update_create_overlay_size()
self._update_create_overlay_visibility()
if self._is_on_create_tab():
self._install_app_event_listener()
# Reset if requested
if self._reset_on_show:
self._reset_on_show = False
self.reset()
def closeEvent(self, event):
self.save_changes()
self._reset_on_show = True
self._controller.clear_thumbnail_temp_dir_path()
super(PublisherWindow, self).closeEvent(event)
def save_changes(self):
self._controller.save_changes()
@ -410,8 +464,21 @@ class PublisherWindow(QtWidgets.QDialog):
def set_context_label(self, label):
self._context_label.setText(label)
def set_tab_on_reset(self, tab):
"""Define tab that will be selected on window show.
This is a single-use method; when the publisher window is shown the
value is unset and not used on the next show.
Args:
tab (Union[int, Literal["create", "publish", "details", "report"]]):
    Index or name of the tab which will be selected on show (after reset).
"""
self._tab_on_reset = tab
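The single-use semantics described above can be sketched without Qt; the class and attribute names below are illustrative, not the real publisher API:

```python
class TabOnResetSketch:
    """Minimal sketch of the single-use 'tab on reset' pattern."""

    def __init__(self):
        self._tab_on_reset = None
        self.current_tab = "create"

    def set_tab_on_reset(self, tab):
        # Remember which tab to select on the next reset only
        self._tab_on_reset = tab

    def on_reset(self):
        if self._tab_on_reset is not None:
            # Consume the stored value so it is not reused on later resets
            self._tab_on_reset, tab = None, self._tab_on_reset
            self.current_tab = tab


window = TabOnResetSketch()
window.set_tab_on_reset("publish")
window.on_reset()
print(window.current_tab)  # publish
window.on_reset()  # no stored tab -> current tab is unchanged
```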
def _update_publish_details_widget(self, force=False):
if not force and self._tabs_widget.current_tab() != "details":
if not force and not self._is_on_details_tab():
return
report_data = self.controller.get_publish_report()
@ -441,6 +508,10 @@ class PublisherWindow(QtWidgets.QDialog):
self._help_dialog.width(), self._help_dialog.height()
)
def _on_create_overlay_button_click(self):
self._create_overlay_button.set_under_mouse(False)
self._go_to_publish_tab()
def _on_tab_change(self, old_tab, new_tab):
if old_tab == "details":
self._publish_details_widget.close_details_popup()
@ -465,20 +536,53 @@ class PublisherWindow(QtWidgets.QDialog):
self._report_widget
)
is_create = new_tab == "create"
if is_create:
self._install_app_event_listener()
else:
self._uninstall_app_event_listener()
self._create_overlay_button.set_visible(is_create)
def _on_context_or_active_change(self):
self._validate_create_instances()
def _on_create_request(self):
self._go_to_create_tab()
def _set_current_tab(self, identifier):
self._tabs_widget.set_current_tab(identifier)
def set_current_tab(self, tab):
self._set_current_tab(tab)
if not self._window_is_visible:
self.set_tab_on_reset(tab)
def _is_current_tab(self, identifier):
return self._tabs_widget.is_current_tab(identifier)
def _go_to_create_tab(self):
self._tabs_widget.set_current_tab("create")
self._set_current_tab("create")
def _go_to_publish_tab(self):
self._set_current_tab("publish")
def _go_to_details_tab(self):
self._tabs_widget.set_current_tab("details")
self._set_current_tab("details")
def _go_to_report_tab(self):
self._tabs_widget.set_current_tab("report")
self._set_current_tab("report")
def _is_on_create_tab(self):
return self._is_current_tab("create")
def _is_on_publish_tab(self):
return self._is_current_tab("publish")
def _is_on_details_tab(self):
return self._is_current_tab("details")
def _is_on_report_tab(self):
return self._is_current_tab("report")
def _set_publish_overlay_visibility(self, visible):
if visible:
@ -530,11 +634,33 @@ class PublisherWindow(QtWidgets.QDialog):
self._set_publish_visibility(False)
self._set_footer_enabled(False)
self._update_publish_details_widget()
if (
not self._tabs_widget.is_current_tab("create")
and not self._tabs_widget.is_current_tab("publish")
def _on_controller_reset(self):
self._first_reset, first_reset = False, self._first_reset
if self._tab_on_reset is not None:
self._tab_on_reset, new_tab = None, self._tab_on_reset
self._set_current_tab(new_tab)
return
# On first reset change tab based on available items
# - if there is at least one instance the tab is changed to 'publish'
# otherwise 'create' is used
# - this happens only on first show
if first_reset:
if self._overview_widget.has_items():
self._go_to_publish_tab()
else:
self._go_to_create_tab()
elif (
not self._is_on_create_tab()
and not self._is_on_publish_tab()
):
self._tabs_widget.set_current_tab("publish")
# If current tab is not 'Create' or 'Publish' go to 'Publish'
# - this can happen when publishing started and was reset;
#   at that moment it doesn't make sense to stay on
#   publish-specific tabs.
self._go_to_publish_tab()
def _on_publish_start(self):
self._create_tab.setEnabled(False)
@ -550,8 +676,8 @@ class PublisherWindow(QtWidgets.QDialog):
self._publish_details_widget.close_details_popup()
if self._tabs_widget.is_current_tab(self._create_tab):
self._tabs_widget.set_current_tab("publish")
if self._is_on_create_tab():
self._go_to_publish_tab()
def _on_publish_validated_change(self, event):
if event["value"]:
@ -564,7 +690,7 @@ class PublisherWindow(QtWidgets.QDialog):
publish_has_crashed = self._controller.publish_has_crashed
validate_enabled = not publish_has_crashed
publish_enabled = not publish_has_crashed
if self._tabs_widget.is_current_tab("publish"):
if self._is_on_publish_tab():
self._go_to_report_tab()
if validate_enabled:
@ -668,6 +794,36 @@ class PublisherWindow(QtWidgets.QDialog):
event["title"], new_failed_info, "Convertor:"
)
def _update_create_overlay_size(self):
metrics = self._create_overlay_button.fontMetrics()
height = int(metrics.height())
width = int(height * 0.7)
end_pos_x = self.width()
start_pos_x = end_pos_x - width
center = self._content_widget.parent().mapTo(
self,
self._content_widget.rect().center()
)
pos_y = center.y() - (height * 0.5)
self._create_overlay_button.setGeometry(
start_pos_x, pos_y,
width, height
)
def _update_create_overlay_visibility(self, global_pos=None):
if global_pos is None:
global_pos = QtGui.QCursor.pos()
under_mouse = False
my_pos = self.mapFromGlobal(global_pos)
if self.rect().contains(my_pos):
widget_geo = self._overview_widget.get_subset_views_geo()
widget_x = widget_geo.left() + (widget_geo.width() * 0.5)
under_mouse = widget_x < global_pos.x()
self._create_overlay_button.set_under_mouse(under_mouse)
class ErrorsMessageBox(ErrorMessageBox):
def __init__(self, error_title, failed_info, message_start, parent):

@ -24,7 +24,6 @@ __all__ = (
"SETTINGS_PATH_KEY",
"ROOT_KEY",
"SETTINGS_PATH_KEY",
"VALUE_KEY",
"SAVE_TIME_KEY",
"PROJECT_NAME_KEY",

@ -285,14 +285,12 @@ class HostToolsHelper:
return self._publisher_tool
def show_publisher_tool(self, parent=None, controller=None):
def show_publisher_tool(self, parent=None, controller=None, tab=None):
with qt_app_context():
dialog = self.get_publisher_tool(parent, controller)
dialog.show()
dialog.raise_()
dialog.activateWindow()
dialog.showNormal()
window = self.get_publisher_tool(parent, controller)
if tab:
window.set_current_tab(tab)
window.make_sure_is_visible()
def get_tool_by_name(self, tool_name, parent=None, *args, **kwargs):
"""Show tool by it's name.
@ -446,8 +444,8 @@ def show_publish(parent=None):
_SingletonPoint.show_tool_by_name("publish", parent)
def show_publisher(parent=None):
_SingletonPoint.show_tool_by_name("publisher", parent)
def show_publisher(parent=None, **kwargs):
_SingletonPoint.show_tool_by_name("publisher", parent, **kwargs)
def show_experimental_tools_dialog(parent=None):

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2019 Scaleway
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@ -0,0 +1,16 @@
# -*- coding: utf-8 -*-
__version__ = "1.0.6"
# Emulates __all__ for Python2
from .secrets import (
choice,
randbelow,
randbits,
SystemRandom,
token_bytes,
token_hex,
token_urlsafe,
compare_digest
)

@ -0,0 +1,132 @@
# -*- coding: utf-8 -*-
"""Generate cryptographically strong pseudo-random numbers suitable for
managing secrets such as account authentication, tokens, and similar.
See PEP 506 for more information.
https://www.python.org/dev/peps/pep-0506/
"""
__all__ = ['choice', 'randbelow', 'randbits', 'SystemRandom',
'token_bytes', 'token_hex', 'token_urlsafe',
'compare_digest',
]
import os
import sys
from random import SystemRandom
import base64
import binascii
# hmac.compare_digest first appeared in Python 2.7.7
if sys.version_info >= (2, 7, 7):
from hmac import compare_digest
else:
# If we use an older python version, we will define an equivalent method
def compare_digest(a, b):
"""Compatibility compare_digest method for python < 2.7.
This method is NOT cryptographically secure and may be subject to
timing attacks, see https://docs.python.org/2/library/hmac.html
"""
return a == b
_sysrand = SystemRandom()
randbits = _sysrand.getrandbits
choice = _sysrand.choice
def randbelow(exclusive_upper_bound):
"""Return a random int in the range [0, n)."""
if exclusive_upper_bound <= 0:
raise ValueError("Upper bound must be positive.")
return _sysrand._randbelow(exclusive_upper_bound)
DEFAULT_ENTROPY = 32 # number of bytes to return by default
def token_bytes(nbytes=None):
"""Return a random byte string containing *nbytes* bytes.
If *nbytes* is ``None`` or not supplied, a reasonable
default is used.
>>> token_bytes(16) #doctest:+SKIP
b'\\xebr\\x17D*t\\xae\\xd4\\xe3S\\xb6\\xe2\\xebP1\\x8b'
"""
if nbytes is None:
nbytes = DEFAULT_ENTROPY
return os.urandom(nbytes)
def token_hex(nbytes=None):
"""Return a random text string, in hexadecimal.
The string has *nbytes* random bytes, each byte converted to two
hex digits. If *nbytes* is ``None`` or not supplied, a reasonable
default is used.
>>> token_hex(16) #doctest:+SKIP
'f9bf78b9a18ce6d46a0cd2b0b86df9da'
"""
return binascii.hexlify(token_bytes(nbytes)).decode('ascii')
def token_urlsafe(nbytes=None):
"""Return a random URL-safe text string, in Base64 encoding.
The string has *nbytes* random bytes. If *nbytes* is ``None``
or not supplied, a reasonable default is used.
>>> token_urlsafe(16) #doctest:+SKIP
'Drmhze6EPcv0fN_81Bj-nA'
"""
tok = token_bytes(nbytes)
return base64.urlsafe_b64encode(tok).rstrip(b'=').decode('ascii')
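This vendored backport mirrors the `secrets` module from the Python 3 standard library (PEP 506); since the doctests above are skipped, a quick usage sketch against the stdlib version:

```python
import secrets

# 16 random bytes rendered as 32 hex characters
token = secrets.token_hex(16)
print(len(token))  # 32

# URL-safe Base64 text with '=' padding stripped
url_token = secrets.token_urlsafe(16)

# Random int in the range [0, 10)
n = secrets.randbelow(10)
```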

@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.14.7-nightly.4"
__version__ = "3.14.7"

@ -38,8 +38,6 @@ In AfterEffects you'll find the tools in the `OpenPype` extension:
You can show the extension panel by going to `Window` > `Extensions` > `OpenPype`.
Because of current rendering limitations, it is expected that only single composition will be marked for publishing!
### Publish
When you are ready to share some work, you will need to publish it. This is done by opening the `Publisher` through the `Publish...` button.
@ -69,7 +67,9 @@ Publisher allows publishing into different context, just click on any instance,
#### RenderQueue
AE's Render Queue is required for publishing locally or on a farm. Artist needs to configure expected result format (extension, resolution) in the Render Queue in an Output module. Currently its expected to have only single render item and single output module in the Render Queue.
AE's Render Queue is required for publishing locally or on a farm. The artist needs to configure the expected result format (extension, resolution) in an Output module in the Render Queue.
Currently it is expected to have only a single render item per composition in the Render Queue.
AE might show some warning windows during local publishing, so please pay attention to them in case publishing seems to be stuck in `Extract Local Render`.

@ -4812,9 +4812,9 @@ loader-runner@^4.2.0:
integrity sha512-92+huvxMvYlMzMt0iIOukcwYBFpkYJdpl2xsZ7LrlayO7E8SOv+JJUEK17B/dJIHAOLMfh2dZZ/Y18WgmGtYNw==
loader-utils@^1.4.0:
version "1.4.1"
resolved "https://registry.yarnpkg.com/loader-utils/-/loader-utils-1.4.1.tgz#278ad7006660bccc4d2c0c1578e17c5c78d5c0e0"
integrity sha512-1Qo97Y2oKaU+Ro2xnDMR26g1BwMT29jNbem1EvcujW2jqt+j5COXyscjM7bLQkM9HaxI7pkWeW7gnI072yMI9Q==
version "1.4.2"
resolved "https://registry.yarnpkg.com/loader-utils/-/loader-utils-1.4.2.tgz#29a957f3a63973883eb684f10ffd3d151fec01a3"
integrity sha512-I5d00Pd/jwMD2QCduo657+YM/6L3KZu++pmX9VFncxaxvHcru9jx1lBaFft+r4Mt2jK0Yhp41XlRAihzPxHNCg==
dependencies:
big.js "^5.2.2"
emojis-list "^3.0.0"