Merge branch 'develop' of github.com:pypeclub/OpenPype into bugfix/adobe_product_show_issue

Petr Kalis 2021-12-01 12:56:53 +01:00
commit 3f97436cc4
18 changed files with 730 additions and 421 deletions


@ -1,6 +1,6 @@
# Changelog
## [3.7.0-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.7.0-nightly.4](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...HEAD)
@ -10,26 +10,37 @@
**🆕 New features**
- Store typed version dependencies for workfiles [\#2192](https://github.com/pypeclub/OpenPype/pull/2192)
- Settings UI use OpenPype styles [\#2296](https://github.com/pypeclub/OpenPype/pull/2296)
**🚀 Enhancements**
- Assets Widget: Clear model on project change [\#2345](https://github.com/pypeclub/OpenPype/pull/2345)
- General: OpenPype default modules hierarchy [\#2338](https://github.com/pypeclub/OpenPype/pull/2338)
- General: FFprobe error exception contain original error message [\#2328](https://github.com/pypeclub/OpenPype/pull/2328)
- Resolve: Add experimental button to menu [\#2325](https://github.com/pypeclub/OpenPype/pull/2325)
- Hiero: Add experimental tools action [\#2323](https://github.com/pypeclub/OpenPype/pull/2323)
- Input links: Cleanup and unification of differences [\#2322](https://github.com/pypeclub/OpenPype/pull/2322)
- General: Don't validate vendor bin with executing them [\#2317](https://github.com/pypeclub/OpenPype/pull/2317)
- General: Run process log stderr as info log level [\#2309](https://github.com/pypeclub/OpenPype/pull/2309)
- General: Reduce vendor imports [\#2305](https://github.com/pypeclub/OpenPype/pull/2305)
- Tools: Cleanup of unused classes [\#2304](https://github.com/pypeclub/OpenPype/pull/2304)
- Project Manager: Added ability to delete project [\#2298](https://github.com/pypeclub/OpenPype/pull/2298)
- Ftrack: Synchronize input links [\#2287](https://github.com/pypeclub/OpenPype/pull/2287)
- StandalonePublisher: Remove unused plugin ExtractHarmonyZip [\#2277](https://github.com/pypeclub/OpenPype/pull/2277)
- Ftrack: Support multiple reviews [\#2271](https://github.com/pypeclub/OpenPype/pull/2271)
- Ftrack: Remove unused clean component plugin [\#2269](https://github.com/pypeclub/OpenPype/pull/2269)
- Royal Render: Support for rr channels in separate dirs [\#2268](https://github.com/pypeclub/OpenPype/pull/2268)
- Houdini: Add experimental tools action [\#2267](https://github.com/pypeclub/OpenPype/pull/2267)
- Tools: Assets widget [\#2265](https://github.com/pypeclub/OpenPype/pull/2265)
- Nuke: extract baked review videos presets [\#2248](https://github.com/pypeclub/OpenPype/pull/2248)
- TVPaint: Workers rendering [\#2209](https://github.com/pypeclub/OpenPype/pull/2209)
**🐛 Bug fixes**
- Maya Look Assigner: Fix Python 3 compatibility [\#2343](https://github.com/pypeclub/OpenPype/pull/2343)
- Tools: Use Qt context on tools show [\#2340](https://github.com/pypeclub/OpenPype/pull/2340)
- Timers Manager: Disable auto stop timer on linux platform [\#2334](https://github.com/pypeclub/OpenPype/pull/2334)
- nuke: bake preset single input exception [\#2331](https://github.com/pypeclub/OpenPype/pull/2331)
- Hiero: fixing multiple templates at a hierarchy parent [\#2330](https://github.com/pypeclub/OpenPype/pull/2330)
- Fix - provider icons are pulled from a folder [\#2326](https://github.com/pypeclub/OpenPype/pull/2326)
- InputLinks: Typo in "inputLinks" key [\#2314](https://github.com/pypeclub/OpenPype/pull/2314)
- Deadline timeout and logging [\#2312](https://github.com/pypeclub/OpenPype/pull/2312)
@ -44,6 +55,10 @@
- Bug: fix variable name \_asset\_id in workfiles application [\#2274](https://github.com/pypeclub/OpenPype/pull/2274)
- Version handling fixes [\#2272](https://github.com/pypeclub/OpenPype/pull/2272)
**Merged pull requests:**
- Maya: configurable model top level validation [\#2321](https://github.com/pypeclub/OpenPype/pull/2321)
## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.1...3.6.4)
@ -66,11 +81,12 @@
**🚀 Enhancements**
- Royal Render: Support for rr channels in separate dirs [\#2268](https://github.com/pypeclub/OpenPype/pull/2268)
- Tools: Assets widget [\#2265](https://github.com/pypeclub/OpenPype/pull/2265)
- SceneInventory: Choose loader in asset switcher [\#2262](https://github.com/pypeclub/OpenPype/pull/2262)
- Style: New fonts in OpenPype style [\#2256](https://github.com/pypeclub/OpenPype/pull/2256)
- Tools: SceneInventory in OpenPype [\#2255](https://github.com/pypeclub/OpenPype/pull/2255)
- Tools: Tasks widget [\#2251](https://github.com/pypeclub/OpenPype/pull/2251)
- Tools: Creator in OpenPype [\#2244](https://github.com/pypeclub/OpenPype/pull/2244)
- Added endpoint for configured extensions [\#2221](https://github.com/pypeclub/OpenPype/pull/2221)
**🐛 Bug fixes**
@ -81,15 +97,16 @@
- Maya: Render publishing fails on linux [\#2260](https://github.com/pypeclub/OpenPype/pull/2260)
- LookAssigner: Fix tool reopen [\#2259](https://github.com/pypeclub/OpenPype/pull/2259)
- Standalone: editorial not publishing thumbnails on all subsets [\#2258](https://github.com/pypeclub/OpenPype/pull/2258)
- Loader doesn't allow changing of version before loading [\#2254](https://github.com/pypeclub/OpenPype/pull/2254)
- Burnins: Support mxf metadata [\#2247](https://github.com/pypeclub/OpenPype/pull/2247)
- Maya: Support for configurable AOV separator characters [\#2197](https://github.com/pypeclub/OpenPype/pull/2197)
- Maya: texture colorspace modes in looks [\#2195](https://github.com/pypeclub/OpenPype/pull/2195)
## [3.6.1](https://github.com/pypeclub/OpenPype/tree/3.6.1) (2021-11-16)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.1-nightly.1...3.6.1)
**🐛 Bug fixes**
- Loader doesn't allow changing of version before loading [\#2254](https://github.com/pypeclub/OpenPype/pull/2254)
## [3.6.0](https://github.com/pypeclub/OpenPype/tree/3.6.0) (2021-11-15)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.0-nightly.6...3.6.0)
@ -97,11 +114,9 @@
### 📖 Documentation
- Add alternative sites for Site Sync [\#2206](https://github.com/pypeclub/OpenPype/pull/2206)
- Add command line way of running site sync server [\#2188](https://github.com/pypeclub/OpenPype/pull/2188)
**🚀 Enhancements**
- Tools: Creator in OpenPype [\#2244](https://github.com/pypeclub/OpenPype/pull/2244)
- Tools: Subset manager in OpenPype [\#2243](https://github.com/pypeclub/OpenPype/pull/2243)
- General: Skip module directories without init file [\#2239](https://github.com/pypeclub/OpenPype/pull/2239)
- General: Static interfaces [\#2238](https://github.com/pypeclub/OpenPype/pull/2238)
@ -114,13 +129,6 @@
- Maya : Validate shape zero [\#2212](https://github.com/pypeclub/OpenPype/pull/2212)
- Maya : validate unique names [\#2211](https://github.com/pypeclub/OpenPype/pull/2211)
- Tools: OpenPype stylesheet in workfiles tool [\#2208](https://github.com/pypeclub/OpenPype/pull/2208)
- Ftrack: Replace Queue with deque in event handlers logic [\#2204](https://github.com/pypeclub/OpenPype/pull/2204)
- Tools: New select context dialog [\#2200](https://github.com/pypeclub/OpenPype/pull/2200)
- Maya : Validate mesh ngons [\#2199](https://github.com/pypeclub/OpenPype/pull/2199)
- Dirmap in Nuke [\#2198](https://github.com/pypeclub/OpenPype/pull/2198)
- Delivery: Check 'frame' key in template for sequence delivery [\#2196](https://github.com/pypeclub/OpenPype/pull/2196)
- Settings: Site sync project settings improvement [\#2193](https://github.com/pypeclub/OpenPype/pull/2193)
- Usage of tools code [\#2185](https://github.com/pypeclub/OpenPype/pull/2185)
**🐛 Bug fixes**
@ -133,9 +141,6 @@
- Ftrack: Base event fix of 'get\_project\_from\_entity' method [\#2214](https://github.com/pypeclub/OpenPype/pull/2214)
- Maya : multiple subsets review broken [\#2210](https://github.com/pypeclub/OpenPype/pull/2210)
- Fix - different command used for Linux and Mac OS [\#2207](https://github.com/pypeclub/OpenPype/pull/2207)
- Tools: Workfiles tool don't use avalon widgets [\#2205](https://github.com/pypeclub/OpenPype/pull/2205)
- Ftrack: Fill missing ftrack id on mongo project [\#2203](https://github.com/pypeclub/OpenPype/pull/2203)
- Project Manager: Fix copying of tasks [\#2191](https://github.com/pypeclub/OpenPype/pull/2191)
## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)


@ -13,7 +13,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook):
# Should be as last hook because must change launch arguments to string
order = 1000
app_groups = ["nuke", "nukex", "hiero", "nukestudio", "photoshop"]
app_groups = ["nuke", "nukex", "hiero", "nukestudio"]
platforms = ["windows"]
def execute(self):


@ -192,7 +192,10 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
render_products = layer_render_products.layer_data.products
assert render_products, "no render products generated"
exp_files = []
multipart = False
for product in render_products:
if product.multipart:
multipart = True
product_name = product.productName
if product.camera and layer_render_products.has_camera_token():
product_name = "{}{}".format(
@ -205,7 +208,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
})
self.log.info("multipart: {}".format(
layer_render_products.multipart))
multipart))
assert exp_files, "no file names were generated, this is bug"
self.log.info(exp_files)
@ -300,7 +303,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"subset": expected_layer_name,
"attachTo": attach_to,
"setMembers": layer_name,
"multipartExr": layer_render_products.multipart,
"multipartExr": multipart,
"review": render_instance.data.get("review") or False,
"publish": True,
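The change above replaces reading `layer_render_products.multipart` with a per-layer flag aggregated while looping over the products. That aggregation reduces to a single `any()` over the render products; a minimal sketch, where `Product` is a hypothetical stand-in for the real render-product objects (only the attributes used by the collector are modeled):

```python
from collections import namedtuple

# Hypothetical stand-in for the render-product objects the collector iterates.
Product = namedtuple("Product", ["productName", "multipart"])

render_products = [
    Product("beauty", False),
    Product("cryptomatte", True),
]

# The loop added to the collector is equivalent to one any() over products.
multipart = any(product.multipart for product in render_products)
print(multipart)  # True
```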


@ -49,7 +49,8 @@ from .vendor_bin_utils import (
get_vendor_bin_path,
get_oiio_tools_path,
get_ffmpeg_tool_path,
ffprobe_streams
ffprobe_streams,
is_oiio_supported
)
from .python_module_tools import (
@ -65,6 +66,11 @@ from .profiles_filtering import (
filter_profiles
)
from .transcoding import (
get_transcode_temp_directory,
should_convert_for_ffmpeg,
convert_for_ffmpeg
)
from .avalon_context import (
CURRENT_DOC_SCHEMAS,
PROJECT_NAME_ALLOWED_SYMBOLS,
@ -137,10 +143,6 @@ from .plugin_tools import (
source_hash,
get_unique_layer_name,
get_background_layers,
oiio_supported,
decompress,
get_decompress_dir,
should_decompress
)
from .path_tools import (
@ -185,6 +187,7 @@ __all__ = [
"get_oiio_tools_path",
"get_ffmpeg_tool_path",
"ffprobe_streams",
"is_oiio_supported",
"import_filepath",
"modules_from_path",
@ -192,6 +195,10 @@ __all__ = [
"classes_from_module",
"import_module_from_dirpath",
"get_transcode_temp_directory",
"should_convert_for_ffmpeg",
"convert_for_ffmpeg",
"CURRENT_DOC_SCHEMAS",
"PROJECT_NAME_ALLOWED_SYMBOLS",
"PROJECT_NAME_REGEX",
@ -256,10 +263,6 @@ __all__ = [
"source_hash",
"get_unique_layer_name",
"get_background_layers",
"oiio_supported",
"decompress",
"get_decompress_dir",
"should_decompress",
"version_up",
"get_version_from_path",


@ -5,12 +5,8 @@ import inspect
import logging
import re
import json
import tempfile
import distutils
from .execute import run_subprocess
from .profiles_filtering import filter_profiles
from .vendor_bin_utils import get_oiio_tools_path
from openpype.settings import get_project_settings
@ -425,129 +421,6 @@ def get_background_layers(file_url):
return layers
def oiio_supported():
"""
Checks if oiiotool is configured for this platform.
Triggers simple subprocess, handles exception if fails.
'should_decompress' will throw exception if configured,
but not present or not working.
Returns:
(bool)
"""
oiio_path = get_oiio_tools_path()
if oiio_path:
oiio_path = distutils.spawn.find_executable(oiio_path)
if not oiio_path:
log.debug("OIIOTool is not configured or not present at {}".
format(oiio_path))
return False
return True
def decompress(target_dir, file_url,
input_frame_start=None, input_frame_end=None, log=None):
"""
Decompresses DWAA 'file_url' .exr to 'target_dir'.
Creates uncompressed files in 'target_dir', they need to be cleaned.
File url could be for single file or for a sequence, in that case
%0Xd will be as a placeholder for frame number AND input_frame* will
be filled.
In that case single oiio command with '--frames' will be triggered for
all frames, this should be faster then looping and running sequentially
Args:
target_dir (str): extended from stagingDir
file_url (str): full urls to source file (with or without %0Xd)
input_frame_start (int) (optional): first frame
input_frame_end (int) (optional): last frame
log (Logger) (optional): pype logger
"""
is_sequence = input_frame_start is not None and \
input_frame_end is not None and \
(int(input_frame_end) > int(input_frame_start))
oiio_cmd = []
oiio_cmd.append(get_oiio_tools_path())
oiio_cmd.append("--compression none")
base_file_name = os.path.basename(file_url)
oiio_cmd.append(file_url)
if is_sequence:
oiio_cmd.append("--frames {}-{}".format(input_frame_start,
input_frame_end))
oiio_cmd.append("-o")
oiio_cmd.append(os.path.join(target_dir, base_file_name))
subprocess_exr = " ".join(oiio_cmd)
if not log:
log = logging.getLogger(__name__)
log.debug("Decompressing {}".format(subprocess_exr))
run_subprocess(
subprocess_exr, shell=True, logger=log
)
def get_decompress_dir():
"""
Creates temporary folder for decompressing.
Its local, in case of farm it is 'local' to the farm machine.
Should be much faster, needs to be cleaned up later.
"""
return os.path.normpath(
tempfile.mkdtemp(prefix="pyblish_tmp_")
)
def should_decompress(file_url):
"""
Tests that 'file_url' is compressed with DWAA.
Uses 'oiio_supported' to check that OIIO tool is available for this
platform.
Shouldn't throw exception as oiiotool is guarded by check function.
Currently implemented this way as there is no support for Mac and Linux
In the future, it should be more strict and throws exception on
misconfiguration.
Args:
file_url (str): path to rendered file (in sequence it would be
first file, if that compressed it is expected that whole seq
will be too)
Returns:
(bool): 'file_url' is DWAA compressed and should be decompressed
and we can decompress (oiiotool supported)
"""
if oiio_supported():
try:
output = run_subprocess([
get_oiio_tools_path(),
"--info", "-v", file_url])
return "compression: \"dwaa\"" in output or \
"compression: \"dwab\"" in output
except RuntimeError:
_name, ext = os.path.splitext(file_url)
# TODO: should't the list of allowed extensions be
# taken from an OIIO variable of supported formats
if ext not in [".mxf"]:
# Reraise exception
raise
return False
return False
def parse_json(path):
"""Parses json file at 'path' location

openpype/lib/transcoding.py (new file, 266 lines)

@ -0,0 +1,266 @@
import os
import re
import logging
import collections
import tempfile
from .execute import run_subprocess
from .vendor_bin_utils import (
get_oiio_tools_path,
is_oiio_supported
)
def get_transcode_temp_directory():
"""Creates temporary folder for transcoding.
It is local; in case of a farm it is 'local' to the farm machine.
Should be much faster, but needs to be cleaned up later.
"""
return os.path.normpath(
tempfile.mkdtemp(prefix="op_transcoding_")
)
def get_oiio_info_for_input(filepath, logger=None):
"""Call oiiotool to get information about input and return stdout."""
args = [
get_oiio_tools_path(), "--info", "-v", filepath
]
return run_subprocess(args, logger=logger)
def parse_oiio_info(oiio_info):
"""Create an object based on output from oiiotool.
Removes quotation marks from the compression value. Parses channels into
a dictionary where the key is the channel name and the value is the
detected channel type (e.g. 'uint', 'float').
Args:
oiio_info (str): Output of calling "oiiotool --info -v <path>"
Returns:
dict: Loaded data from output.
"""
lines = [
line.strip()
for line in oiio_info.split("\n")
]
# Each line should contain information about one key
# key - value are separated with ": "
oiio_sep = ": "
data_map = {}
for line in lines:
parts = line.split(oiio_sep)
if len(parts) < 2:
continue
key = parts.pop(0)
value = oiio_sep.join(parts)
data_map[key] = value
if "compression" in data_map:
value = data_map["compression"]
data_map["compression"] = value.replace("\"", "")
channels_info = {}
channels_value = data_map.get("channel list") or ""
if channels_value:
channels = channels_value.split(", ")
type_regex = re.compile(r"(?P<name>[^\(]+) \((?P<type>[^\)]+)\)")
for channel in channels:
match = type_regex.search(channel)
if not match:
channel_name = channel
channel_type = "uint"
else:
channel_name = match.group("name")
channel_type = match.group("type")
channels_info[channel_name] = channel_type
data_map["channels_info"] = channels_info
return data_map
def get_convert_rgb_channels(channels_info):
"""Get first available RGB(A) group from channels info.
## Examples
```
# Ideal situation
channels_info: {
"R": ...,
"G": ...,
"B": ...,
"A": ...
}
```
Result will be `("R", "G", "B", "A")`
```
# Not ideal situation
channels_info: {
"beauty.red": ...,
"beauty.green": ...,
"beauty.blue": ...,
"depth.Z": ...
}
```
Result will be `("beauty.red", "beauty.green", "beauty.blue", None)`
Returns:
NoneType: No channel combination matching RGB was found.
tuple: Tuple of 4 channel names defining R, G, B, A, where A can
be None.
"""
rgb_by_main_name = collections.defaultdict(dict)
main_name_order = [""]
for channel_name in channels_info.keys():
name_parts = channel_name.split(".")
rgb_part = name_parts.pop(-1).lower()
main_name = ".".join(name_parts)
if rgb_part in ("r", "red"):
rgb_by_main_name[main_name]["R"] = channel_name
elif rgb_part in ("g", "green"):
rgb_by_main_name[main_name]["G"] = channel_name
elif rgb_part in ("b", "blue"):
rgb_by_main_name[main_name]["B"] = channel_name
elif rgb_part in ("a", "alpha"):
rgb_by_main_name[main_name]["A"] = channel_name
else:
continue
if main_name not in main_name_order:
main_name_order.append(main_name)
output = None
for main_name in main_name_order:
colors = rgb_by_main_name.get(main_name) or {}
red = colors.get("R")
green = colors.get("G")
blue = colors.get("B")
alpha = colors.get("A")
if red is not None and green is not None and blue is not None:
output = (red, green, blue, alpha)
break
return output
def should_convert_for_ffmpeg(src_filepath):
"""Find out if input should be converted for ffmpeg.
Currently cares only about exr inputs and is based on OpenImageIO.
Returns:
bool/NoneType: True if should be converted, False if should not and
None if can't determine.
"""
# Care only about exr at this moment
ext = os.path.splitext(src_filepath)[-1].lower()
if ext != ".exr":
return False
# Can't determine if should convert or not without oiio_tool
if not is_oiio_supported():
return None
# Load info about the input from oiio tool
oiio_info = get_oiio_info_for_input(src_filepath)
input_info = parse_oiio_info(oiio_info)
# Check compression
compression = input_info["compression"]
if compression in ("dwaa", "dwab"):
return True
# Check channels
channels_info = input_info["channels_info"]
review_channels = get_convert_rgb_channels(channels_info)
if review_channels is None:
return None
return False
def convert_for_ffmpeg(
first_input_path,
output_dir,
input_frame_start,
input_frame_end,
logger=None
):
"""Convert source file to a format supported by ffmpeg.
Currently can convert only exrs.
Args:
first_input_path (str): Path to first file of a sequence or a single
file path for non-sequential input.
output_dir (str): Path to directory where output will be rendered.
Must not be same as input's directory.
input_frame_start (int): Frame start of input.
input_frame_end (int): Frame end of input.
logger (logging.Logger): Logger used for logging.
Raises:
ValueError: If the input filepath has an extension not supported by
this function. Currently only the ".exr" extension is supported.
"""
if logger is None:
logger = logging.getLogger(__name__)
ext = os.path.splitext(first_input_path)[1].lower()
if ext != ".exr":
raise ValueError((
"Function 'convert_for_ffmpeg' currently supports only"
" \".exr\" extension. Got \"{}\"."
).format(ext))
is_sequence = False
if input_frame_start is not None and input_frame_end is not None:
is_sequence = int(input_frame_end) != int(input_frame_start)
oiio_info = get_oiio_info_for_input(first_input_path)
input_info = parse_oiio_info(oiio_info)
# Change compression only if source compression is "dwaa" or "dwab"
# - they're not supported in ffmpeg
compression = input_info["compression"]
if compression in ("dwaa", "dwab"):
compression = "none"
# Prepare subprocess arguments
oiio_cmd = [
get_oiio_tools_path(),
"--compression", compression,
first_input_path
]
channels_info = input_info["channels_info"]
review_channels = get_convert_rgb_channels(channels_info)
if review_channels is None:
raise ValueError(
"Couldn't find channels that can be used for conversion."
)
red, green, blue, alpha = review_channels
channels_arg = "R={},G={},B={}".format(red, green, blue)
if alpha is not None:
channels_arg += ",A={}".format(alpha)
oiio_cmd.append("--ch")
oiio_cmd.append(channels_arg)
# Add frame definitions to arguments
if is_sequence:
oiio_cmd.append("--frames")
oiio_cmd.append("{}-{}".format(input_frame_start, input_frame_end))
# Add last argument - path to output
base_file_name = os.path.basename(first_input_path)
output_path = os.path.join(output_dir, base_file_name)
oiio_cmd.append("-o")
oiio_cmd.append(output_path)
logger.debug("Conversion command: {}".format(" ".join(oiio_cmd)))
run_subprocess(oiio_cmd, logger=logger)
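The parsing and channel-grouping helpers in the new transcoding.py can be exercised without oiiotool installed. Below is a condensed re-implementation sketch (the mapping dict replaces the if/elif chain; behavior is intended to be equivalent), fed with an illustrative, hand-written snippet in the shape of `oiiotool --info -v` output rather than a real capture:

```python
import collections
import re


def parse_oiio_info(oiio_info):
    """Condensed copy of the parser added in transcoding.py."""
    oiio_sep = ": "
    data_map = {}
    for line in oiio_info.split("\n"):
        parts = line.strip().split(oiio_sep)
        if len(parts) < 2:
            continue
        key = parts.pop(0)
        data_map[key] = oiio_sep.join(parts)

    # Strip quotation marks around the compression value.
    if "compression" in data_map:
        data_map["compression"] = data_map["compression"].replace("\"", "")

    # Parse "channel list" into {channel name: channel type}.
    channels_info = {}
    channels_value = data_map.get("channel list") or ""
    if channels_value:
        type_regex = re.compile(r"(?P<name>[^\(]+) \((?P<type>[^\)]+)\)")
        for channel in channels_value.split(", "):
            match = type_regex.search(channel)
            if match:
                channels_info[match.group("name")] = match.group("type")
            else:
                channels_info[channel] = "uint"
    data_map["channels_info"] = channels_info
    return data_map


def get_convert_rgb_channels(channels_info):
    """Condensed copy of the grouping: first complete RGB(A) set wins."""
    mapping = {"r": "R", "red": "R", "g": "G", "green": "G",
               "b": "B", "blue": "B", "a": "A", "alpha": "A"}
    rgb_by_main_name = collections.defaultdict(dict)
    main_name_order = [""]
    for channel_name in channels_info:
        name_parts = channel_name.split(".")
        rgb_part = name_parts.pop(-1).lower()
        main_name = ".".join(name_parts)
        if rgb_part not in mapping:
            continue
        rgb_by_main_name[main_name][mapping[rgb_part]] = channel_name
        if main_name not in main_name_order:
            main_name_order.append(main_name)

    for main_name in main_name_order:
        colors = rgb_by_main_name.get(main_name) or {}
        if all(key in colors for key in "RGB"):
            return (colors["R"], colors["G"], colors["B"], colors.get("A"))
    return None


# Illustrative snippet shaped like "oiiotool --info -v" output (not a real capture).
sample = (
    "render.0001.exr : 1920 x 1080, 4 channel, half openexr\n"
    "    channel list: beauty.red (half), beauty.green (half),"
    " beauty.blue (half), depth.Z (float)\n"
    "    compression: \"dwaa\"\n"
)
info = parse_oiio_info(sample)
print(info["compression"])  # dwaa -> would trigger conversion for ffmpeg
print(get_convert_rgb_channels(info["channels_info"]))
# ('beauty.red', 'beauty.green', 'beauty.blue', None)
```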


@ -3,6 +3,7 @@ import logging
import json
import platform
import subprocess
import distutils.spawn
log = logging.getLogger("FFmpeg utils")
@ -105,3 +106,21 @@ def ffprobe_streams(path_to_file, logger=None):
))
return json.loads(popen_stdout)["streams"]
def is_oiio_supported():
"""Checks if oiiotool is configured for this platform.
Returns:
bool: OIIO tool executable is available.
"""
loaded_path = oiio_path = get_oiio_tools_path()
if oiio_path:
oiio_path = distutils.spawn.find_executable(oiio_path)
if not oiio_path:
log.debug("OIIOTool is not configured or not present at {}".format(
loaded_path
))
return False
return True
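For comparison, the same availability check can be written with the stdlib `shutil.which`, which replaces the deprecated `distutils.spawn.find_executable` and also resolves absolute paths. A sketch; `is_tool_available` is a hypothetical name, not part of the codebase:

```python
import logging
import shutil
import sys

log = logging.getLogger("FFmpeg utils")


def is_tool_available(tool_path):
    """Return True when the configured binary resolves to an executable.

    Mirrors is_oiio_supported, using shutil.which instead of the
    deprecated distutils.spawn.find_executable.
    """
    if tool_path and shutil.which(tool_path):
        return True
    log.debug("Tool is not configured or not present at {}".format(tool_path))
    return False


print(is_tool_available(sys.executable))        # True
print(is_tool_available("no-such-binary-xyz"))  # False
```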


@ -445,9 +445,14 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
preview = True
break
if instance_data.get("multipartExr"):
preview = True
new_instance = copy(instance_data)
new_instance["subset"] = subset_name
new_instance["subsetGroup"] = group_name
if preview:
new_instance["review"] = True
# create represenation
if isinstance(col, (list, tuple)):
@ -527,6 +532,10 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
if bake_renders:
preview = False
# toggle preview on if multipart is on
if instance.get("multipartExr", False):
preview = True
staging = os.path.dirname(list(collection)[0])
success, rootless_staging_dir = (
self.anatomy.find_root_template_from_path(staging)


@ -14,9 +14,11 @@ import openpype
import openpype.api
from openpype.lib import (
get_pype_execute_args,
should_decompress,
get_decompress_dir,
decompress,
get_transcode_temp_directory,
convert_for_ffmpeg,
should_convert_for_ffmpeg,
CREATE_NO_WINDOW
)
@ -70,18 +72,6 @@ class ExtractBurnin(openpype.api.Extractor):
options = None
def process(self, instance):
# ffmpeg doesn't support multipart exrs
if instance.data.get("multipartExr") is True:
instance_label = (
getattr(instance, "label", None)
or instance.data.get("label")
or instance.data.get("name")
)
self.log.info((
"Instance \"{}\" contain \"multipartExr\". Skipped."
).format(instance_label))
return
# QUESTION what is this for and should we raise an exception?
if "representations" not in instance.data:
raise RuntimeError("Burnin needs already created mov to work on.")
@ -95,6 +85,55 @@ class ExtractBurnin(openpype.api.Extractor):
self.log.debug("Removing representation: {}".format(repre))
instance.data["representations"].remove(repre)
def _get_burnins_per_representations(self, instance, src_burnin_defs):
self.log.debug("Filtering of representations and their burnins starts")
filtered_repres = []
repres = instance.data.get("representations") or []
for idx, repre in enumerate(repres):
self.log.debug("repre ({}): `{}`".format(idx + 1, repre["name"]))
if not self.repres_is_valid(repre):
continue
repre_burnin_links = repre.get("burnins", [])
self.log.debug(
"repre_burnin_links: {}".format(repre_burnin_links)
)
burnin_defs = copy.deepcopy(src_burnin_defs)
self.log.debug(
"burnin_defs.keys(): {}".format(burnin_defs.keys())
)
# Filter output definitions by `burnin` representation key
repre_linked_burnins = {
name: output
for name, output in burnin_defs.items()
if name in repre_burnin_links
}
self.log.debug(
"repre_linked_burnins: {}".format(repre_linked_burnins)
)
# if any match then replace burnin defs and follow tag filtering
if repre_linked_burnins:
burnin_defs = repre_linked_burnins
# Filter output definition by representation tags (optional)
repre_burnin_defs = self.filter_burnins_by_tags(
burnin_defs, repre["tags"]
)
if not repre_burnin_defs:
self.log.info((
"Skipped representation. None of the burnin definitions from"
" the selected profile match the representation's"
" tags. \"{}\""
).format(str(repre["tags"])))
continue
filtered_repres.append((repre, repre_burnin_defs))
return filtered_repres
def main_process(self, instance):
# TODO get these data from context
host_name = instance.context.data["hostName"]
@ -110,8 +149,7 @@ class ExtractBurnin(openpype.api.Extractor):
).format(host_name, family, task_name))
return
self.log.debug("profile: {}".format(
profile))
self.log.debug("profile: {}".format(profile))
# Pre-filter burnin definitions by instance families
burnin_defs = self.filter_burnins_defs(profile, instance)
@ -133,46 +171,10 @@ class ExtractBurnin(openpype.api.Extractor):
# Executable args that will execute the script
# [pype executable, *pype script, "run"]
executable_args = get_pype_execute_args("run", scriptpath)
for idx, repre in enumerate(tuple(instance.data["representations"])):
self.log.debug("repre ({}): `{}`".format(idx + 1, repre["name"]))
repre_burnin_links = repre.get("burnins", [])
if not self.repres_is_valid(repre):
continue
self.log.debug("repre_burnin_links: {}".format(
repre_burnin_links))
self.log.debug("burnin_defs.keys(): {}".format(
burnin_defs.keys()))
# Filter output definition by `burnin` represetation key
repre_linked_burnins = {
name: output for name, output in burnin_defs.items()
if name in repre_burnin_links
}
self.log.debug("repre_linked_burnins: {}".format(
repre_linked_burnins))
# if any match then replace burnin defs and follow tag filtering
_burnin_defs = copy.deepcopy(burnin_defs)
if repre_linked_burnins:
_burnin_defs = repre_linked_burnins
# Filter output definition by representation tags (optional)
repre_burnin_defs = self.filter_burnins_by_tags(
_burnin_defs, repre["tags"]
)
if not repre_burnin_defs:
self.log.info((
"Skipped representation. All burnin definitions from"
" selected profile does not match to representation's"
" tags. \"{}\""
).format(str(repre["tags"])))
continue
burnins_per_repres = self._get_burnins_per_representations(
instance, burnin_defs
)
for repre, repre_burnin_defs in burnins_per_repres:
# Create copy of `_burnin_data` and `_temp_data` for repre.
burnin_data = copy.deepcopy(_burnin_data)
temp_data = copy.deepcopy(_temp_data)
@ -180,6 +182,41 @@ class ExtractBurnin(openpype.api.Extractor):
# Prepare representation based data.
self.prepare_repre_data(instance, repre, burnin_data, temp_data)
src_repre_staging_dir = repre["stagingDir"]
# Should convert representation source files before processing?
repre_files = repre["files"]
if isinstance(repre_files, (tuple, list)):
filename = repre_files[0]
else:
filename = repre_files
first_input_path = os.path.join(src_repre_staging_dir, filename)
# Determine if representation requires pre conversion for ffmpeg
do_convert = should_convert_for_ffmpeg(first_input_path)
# If result is None the requirement of conversion can't be
# determined
if do_convert is None:
self.log.info((
"Can't determine if representation requires conversion."
" Skipped."
))
continue
# Do conversion if needed
# - change staging dir of source representation
# - must be set back after output definitions processing
if do_convert:
new_staging_dir = get_transcode_temp_directory()
repre["stagingDir"] = new_staging_dir
convert_for_ffmpeg(
first_input_path,
new_staging_dir,
_temp_data["frameStart"],
_temp_data["frameEnd"],
self.log
)
# Add anatomy keys to burnin_data.
filled_anatomy = anatomy.format_all(burnin_data)
burnin_data["anatomy"] = filled_anatomy.get_solved()
@ -199,6 +236,7 @@ class ExtractBurnin(openpype.api.Extractor):
files_to_delete = []
for filename_suffix, burnin_def in repre_burnin_defs.items():
new_repre = copy.deepcopy(repre)
new_repre["stagingDir"] = src_repre_staging_dir
# Keep "ftrackreview" tag only on first output
if first_output:
@ -229,27 +267,9 @@ class ExtractBurnin(openpype.api.Extractor):
new_repre["outputName"] = new_name
# Prepare paths and files for process.
self.input_output_paths(new_repre, temp_data, filename_suffix)
decompressed_dir = ''
full_input_path = temp_data["full_input_path"]
do_decompress = should_decompress(full_input_path)
if do_decompress:
decompressed_dir = get_decompress_dir()
decompress(
decompressed_dir,
full_input_path,
temp_data["frame_start"],
temp_data["frame_end"],
self.log
)
# input path changed, 'decompressed' added
input_file = os.path.basename(full_input_path)
temp_data["full_input_path"] = os.path.join(
decompressed_dir,
input_file)
self.input_output_paths(
repre, new_repre, temp_data, filename_suffix
)
# Data for burnin script
script_data = {
@ -305,6 +325,14 @@ class ExtractBurnin(openpype.api.Extractor):
# Add new representation to instance
instance.data["representations"].append(new_repre)
# Cleanup temp staging dir after processing of output definitions
if do_convert:
temp_dir = repre["stagingDir"]
shutil.rmtree(temp_dir)
# Set staging dir of source representation back to previous
# value
repre["stagingDir"] = src_repre_staging_dir
# Remove source representation
# NOTE we maybe can keep source representation if necessary
instance.data["representations"].remove(repre)
@ -317,9 +345,6 @@ class ExtractBurnin(openpype.api.Extractor):
os.remove(filepath)
self.log.debug("Removed: \"{}\"".format(filepath))
if do_decompress and os.path.exists(decompressed_dir):
shutil.rmtree(decompressed_dir)
def _get_burnin_options(self):
# Prepare burnin options
burnin_options = copy.deepcopy(self.default_options)
@ -474,6 +499,12 @@ class ExtractBurnin(openpype.api.Extractor):
"Representation \"{}\" don't have \"burnin\" tag. Skipped."
).format(repre["name"]))
return False
if not repre.get("files"):
self.log.warning((
"Representation \"{}\" has empty files. Skipped."
).format(repre["name"]))
return False
return True
def filter_burnins_by_tags(self, burnin_defs, tags):
@ -504,7 +535,9 @@ class ExtractBurnin(openpype.api.Extractor):
return filtered_burnins
def input_output_paths(self, new_repre, temp_data, filename_suffix):
def input_output_paths(
self, src_repre, new_repre, temp_data, filename_suffix
):
"""Prepare input and output paths for representation.
Store data to `temp_data` for keys "full_input_path" which is full path
@ -565,12 +598,13 @@ class ExtractBurnin(openpype.api.Extractor):
repre_files = output_filename
stagingdir = new_repre["stagingDir"]
src_stagingdir = src_repre["stagingDir"]
dst_stagingdir = new_repre["stagingDir"]
full_input_path = os.path.join(
os.path.normpath(stagingdir), input_filename
os.path.normpath(src_stagingdir), input_filename
).replace("\\", "/")
full_output_path = os.path.join(
os.path.normpath(stagingdir), output_filename
os.path.normpath(dst_stagingdir), output_filename
).replace("\\", "/")
temp_data["full_input_path"] = full_input_path
@ -587,7 +621,7 @@ class ExtractBurnin(openpype.api.Extractor):
if is_sequence:
for filename in input_filenames:
filepath = os.path.join(
os.path.normpath(stagingdir), filename
os.path.normpath(src_stagingdir), filename
).replace("\\", "/")
full_input_paths.append(filepath)
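The ExtractBurnin change above splits the single `stagingDir` into explicit source and destination staging dirs when building input and output paths. A minimal sketch of that path-building logic in isolation (the helper name is hypothetical, not part of the plugin):

```python
import os


def build_io_paths(src_staging_dir, dst_staging_dir, input_filename, output_filename):
    """Join filenames to their staging dirs, normalized with forward slashes.

    Mirrors the full_input_path/full_output_path construction in the diff,
    where input comes from the source representation's staging dir and
    output goes to the new representation's staging dir.
    """
    full_input_path = os.path.join(
        os.path.normpath(src_staging_dir), input_filename
    ).replace("\\", "/")
    full_output_path = os.path.join(
        os.path.normpath(dst_staging_dir), output_filename
    ).replace("\\", "/")
    return full_input_path, full_output_path
```

Separating the two dirs is what allows the source representation to be temporarily redirected (e.g. for ffmpeg pre-conversion) without affecting where burnin outputs are written.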


@@ -7,10 +7,11 @@ from openpype.lib import (
run_subprocess,
path_to_subprocess_arg,
should_decompress,
get_decompress_dir,
decompress
get_transcode_temp_directory,
convert_for_ffmpeg,
should_convert_for_ffmpeg
)
import shutil
@@ -31,57 +32,56 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
def process(self, instance):
self.log.info("subset {}".format(instance.data['subset']))
# skip crypto passes.
if 'crypto' in instance.data['subset']:
self.log.info("Skipping crypto passes.")
return
do_decompress = False
# ffmpeg doesn't support multipart exrs, use oiiotool if available
if instance.data.get("multipartExr") is True:
return
# Skip review when requested.
# Skip if review not set.
if not instance.data.get("review", True):
self.log.info("Skipping - no review set on instance.")
return
# get representation and loop them
representations = instance.data["representations"]
# filter out mov and img sequences
representations_new = representations[:]
for repre in representations:
tags = repre.get("tags", [])
self.log.debug(repre)
valid = 'review' in tags or "thumb-nuke" in tags
if not valid:
continue
if not isinstance(repre['files'], (list, tuple)):
input_file = repre['files']
filtered_repres = self._get_filtered_repres(instance)
for repre in filtered_repres:
repre_files = repre["files"]
if not isinstance(repre_files, (list, tuple)):
input_file = repre_files
else:
file_index = int(float(len(repre['files'])) * 0.5)
input_file = repre['files'][file_index]
file_index = int(float(len(repre_files)) * 0.5)
input_file = repre_files[file_index]
stagingdir = os.path.normpath(repre.get("stagingDir"))
stagingdir = os.path.normpath(repre["stagingDir"])
# input_file = (
# collections[0].format('{head}{padding}{tail}') % start
# )
full_input_path = os.path.join(stagingdir, input_file)
self.log.info("input {}".format(full_input_path))
decompressed_dir = ''
do_decompress = should_decompress(full_input_path)
if do_decompress:
decompressed_dir = get_decompress_dir()
do_convert = should_convert_for_ffmpeg(full_input_path)
# If result is None, whether conversion is required can't be
# determined
if do_convert is None:
self.log.info((
"Can't determine if representation requires conversion."
" Skipped."
))
continue
decompress(
decompressed_dir,
full_input_path)
# input path changed, 'decompressed' added
full_input_path = os.path.join(
decompressed_dir,
input_file)
# Do conversion if needed
# - change staging dir of source representation
# - must be set back after output definitions processing
convert_dir = None
if do_convert:
convert_dir = get_transcode_temp_directory()
filename = os.path.basename(full_input_path)
convert_for_ffmpeg(
full_input_path,
convert_dir,
None,
None,
self.log
)
full_input_path = os.path.join(convert_dir, filename)
filename = os.path.splitext(input_file)[0]
if not filename.endswith('.'):
@@ -124,29 +124,45 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
)
except RuntimeError as exp:
if "Compression" in str(exp):
self.log.debug("Unsupported compression on input files. " +
"Skipping!!!")
self.log.debug(
"Unsupported compression on input files. Skipping!!!"
)
return
self.log.warning("Conversion crashed", exc_info=True)
raise
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'thumbnail',
'ext': 'jpg',
'files': jpeg_file,
new_repre = {
"name": "thumbnail",
"ext": "jpg",
"files": jpeg_file,
"stagingDir": stagingdir,
"thumbnail": True,
"tags": ['thumbnail']
"tags": ["thumbnail"]
}
# adding representation
self.log.debug("Adding: {}".format(representation))
representations_new.append(representation)
self.log.debug("Adding: {}".format(new_repre))
instance.data["representations"].append(new_repre)
if do_decompress and os.path.exists(decompressed_dir):
shutil.rmtree(decompressed_dir)
# Cleanup temp folder
if convert_dir is not None and os.path.exists(convert_dir):
shutil.rmtree(convert_dir)
instance.data["representations"] = representations_new
def _get_filtered_repres(self, instance):
filtered_repres = []
src_repres = instance.data.get("representations") or []
for repre in src_repres:
self.log.debug(repre)
tags = repre.get("tags") or []
valid = "review" in tags or "thumb-nuke" in tags
if not valid:
continue
if not repre.get("files"):
self.log.info((
"Representation \"{}\" doesn't have files. Skipping."
).format(repre["name"]))
continue
filtered_repres.append(repre)
return filtered_repres
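For the thumbnail, ExtractJpegEXR takes either the single input file or roughly the middle frame of a sequence. The selection logic from the diff, extracted into a standalone function (the name is hypothetical):

```python
def pick_thumbnail_source(repre_files):
    """Return a single file to render the thumbnail from.

    If the representation holds one file, use it; for a sequence,
    pick the frame at the midpoint (same index math as the plugin).
    """
    if not isinstance(repre_files, (list, tuple)):
        return repre_files
    # Midpoint index, e.g. 4 files -> index 2
    file_index = int(float(len(repre_files)) * 0.5)
    return repre_files[file_index]
```

Using the middle frame rather than the first tends to give a more representative thumbnail for shots that fade in from black.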


@@ -2,6 +2,7 @@ import os
import re
import copy
import json
import shutil
from abc import ABCMeta, abstractmethod
import six
@@ -16,9 +17,10 @@ from openpype.lib import (
path_to_subprocess_arg,
should_decompress,
get_decompress_dir,
decompress
should_convert_for_ffmpeg,
convert_for_ffmpeg,
get_transcode_temp_directory,
get_transcode_temp_directory
)
import speedcopy
@@ -71,18 +73,6 @@ class ExtractReview(pyblish.api.InstancePlugin):
if not instance.data.get("review", True):
return
# ffmpeg doesn't support multipart exrs
if instance.data.get("multipartExr") is True:
instance_label = (
getattr(instance, "label", None)
or instance.data.get("label")
or instance.data.get("name")
)
self.log.info((
"Instance \"{}\" contain \"multipartExr\". Skipped."
).format(instance_label))
return
# Run processing
self.main_process(instance)
@@ -92,7 +82,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
if "delete" in tags and "thumbnail" not in tags:
instance.data["representations"].remove(repre)
def main_process(self, instance):
def _get_outputs_for_instance(self, instance):
host_name = instance.context.data["hostName"]
task_name = os.environ["AVALON_TASK"]
family = self.main_family_from_instance(instance)
@@ -114,24 +104,25 @@ class ExtractReview(pyblish.api.InstancePlugin):
self.log.debug("Matching profile: \"{}\"".format(json.dumps(profile)))
instance_families = self.families_from_instance(instance)
_profile_outputs = self.filter_outputs_by_families(
filtered_outputs = self.filter_outputs_by_families(
profile, instance_families
)
if not _profile_outputs:
# Store `filename_suffix` to save arguments
profile_outputs = []
for filename_suffix, definition in filtered_outputs.items():
definition["filename_suffix"] = filename_suffix
profile_outputs.append(definition)
if not filtered_outputs:
self.log.info((
"Skipped instance. All output definitions from selected"
" profile do not match instance families. \"{}\""
).format(str(instance_families)))
return
return profile_outputs
# Store `filename_suffix` to save arguments
profile_outputs = []
for filename_suffix, definition in _profile_outputs.items():
definition["filename_suffix"] = filename_suffix
profile_outputs.append(definition)
# Loop through representations
for repre in tuple(instance.data["representations"]):
def _get_outputs_per_representations(self, instance, profile_outputs):
outputs_per_representations = []
for repre in instance.data["representations"]:
repre_name = str(repre.get("name"))
tags = repre.get("tags") or []
if "review" not in tags:
@@ -173,6 +164,80 @@ class ExtractReview(pyblish.api.InstancePlugin):
" tags. \"{}\""
).format(str(tags)))
continue
outputs_per_representations.append((repre, outputs))
return outputs_per_representations
@staticmethod
def get_instance_label(instance):
return (
getattr(instance, "label", None)
or instance.data.get("label")
or instance.data.get("name")
or str(instance)
)
def main_process(self, instance):
instance_label = self.get_instance_label(instance)
self.log.debug("Processing instance \"{}\"".format(instance_label))
profile_outputs = self._get_outputs_for_instance(instance)
if not profile_outputs:
return
# Loop through representations
outputs_per_repres = self._get_outputs_per_representations(
instance, profile_outputs
)
for repre, outputs in outputs_per_repres:
# Check if input should be preconverted before processing
# Store original staging dir (its value may change)
src_repre_staging_dir = repre["stagingDir"]
# Receive filepath to first file in representation
first_input_path = None
if not self.input_is_sequence(repre):
first_input_path = os.path.join(
src_repre_staging_dir, repre["files"]
)
else:
for filename in repre["files"]:
first_input_path = os.path.join(
src_repre_staging_dir, filename
)
break
# Skip if file is not set
if first_input_path is None:
self.log.warning((
"Representation \"{}\" has empty files. Skipped."
).format(repre["name"]))
continue
# Determine if representation requires pre-conversion for ffmpeg
do_convert = should_convert_for_ffmpeg(first_input_path)
# If result is None, whether conversion is required can't be
# determined
if do_convert is None:
self.log.info((
"Can't determine if representation requires conversion."
" Skipped."
))
continue
# Do conversion if needed
# - change staging dir of source representation
# - must be set back after output definitions processing
if do_convert:
new_staging_dir = get_transcode_temp_directory()
repre["stagingDir"] = new_staging_dir
frame_start = instance.data["frameStart"]
frame_end = instance.data["frameEnd"]
convert_for_ffmpeg(
first_input_path,
new_staging_dir,
frame_start,
frame_end,
self.log
)
for _output_def in outputs:
output_def = copy.deepcopy(_output_def)
@@ -185,6 +250,10 @@ class ExtractReview(pyblish.api.InstancePlugin):
# Create copy of representation
new_repre = copy.deepcopy(repre)
# Make sure new representation has origin staging dir
# - this is because source representation may change
# its staging dir because of ffmpeg conversion
new_repre["stagingDir"] = src_repre_staging_dir
# Remove "delete" tag from new repre if there is
if "delete" in new_repre["tags"]:
@@ -276,6 +345,14 @@ class ExtractReview(pyblish.api.InstancePlugin):
)
instance.data["representations"].append(new_repre)
# Cleanup temp staging dir after processing of output definitions
if do_convert:
temp_dir = repre["stagingDir"]
shutil.rmtree(temp_dir)
# Set staging dir of source representation back to previous
# value
repre["stagingDir"] = src_repre_staging_dir
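The flow above (save original staging dir, swap in a transcode temp dir, process, remove the temp dir, restore) can be sketched as a small standalone helper. The helper name and the `convert`/`process` callables are placeholders standing in for `convert_for_ffmpeg` and the output-definition loop; this sketch also wraps the cleanup in `try/finally` for robustness, which the plugin itself does not:

```python
import shutil
import tempfile


def convert_with_temp_staging(repre, convert, process):
    """Redirect stagingDir to a temp dir for pre-conversion, run processing,
    then clean up the temp dir and restore the original stagingDir."""
    src_staging_dir = repre["stagingDir"]
    temp_dir = tempfile.mkdtemp(prefix="op_transcode_")
    repre["stagingDir"] = temp_dir
    try:
        # Pre-convert inputs into the temp dir (stands in for convert_for_ffmpeg)
        convert(src_staging_dir, temp_dir)
        # Process output definitions against the swapped staging dir
        process(repre)
    finally:
        # Cleanup temp staging dir and restore the previous value
        shutil.rmtree(temp_dir)
        repre["stagingDir"] = src_staging_dir
```

Restoring `stagingDir` is essential: later plugins (e.g. the integrator) read it from the same representation dict.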
def input_is_sequence(self, repre):
"""Deduce from representation data if input is sequence."""
# TODO GLOBAL ISSUE - Find a better way to find out if input
@@ -405,35 +482,9 @@ class ExtractReview(pyblish.api.InstancePlugin):
value for value in _ffmpeg_audio_filters if value.strip()
]
if isinstance(new_repre['files'], list):
input_files_urls = [os.path.join(new_repre["stagingDir"], f) for f
in new_repre['files']]
test_path = input_files_urls[0]
else:
test_path = os.path.join(
new_repre["stagingDir"], new_repre['files'])
do_decompress = should_decompress(test_path)
if do_decompress:
# change stagingDir, decompress first
# calculate all paths with modified directory, used on too many
# places
# will be purged by cleanup.py automatically
orig_staging_dir = new_repre["stagingDir"]
new_repre["stagingDir"] = get_decompress_dir()
# Prepare input and output filepaths
self.input_output_paths(new_repre, output_def, temp_data)
if do_decompress:
input_file = temp_data["full_input_path"].\
replace(new_repre["stagingDir"], orig_staging_dir)
decompress(new_repre["stagingDir"], input_file,
temp_data["frame_start"],
temp_data["frame_end"],
self.log)
# Set output frames len to 1 when output is single image
if (
temp_data["output_ext_is_image"]
@@ -744,13 +795,14 @@ class ExtractReview(pyblish.api.InstancePlugin):
"sequence_file" (if output is sequence) keys to new representation.
"""
staging_dir = new_repre["stagingDir"]
repre = temp_data["origin_repre"]
src_staging_dir = repre["stagingDir"]
dst_staging_dir = new_repre["stagingDir"]
if temp_data["input_is_sequence"]:
collections = clique.assemble(repre["files"])[0]
full_input_path = os.path.join(
staging_dir,
src_staging_dir,
collections[0].format("{head}{padding}{tail}")
)
@@ -760,12 +812,12 @@ class ExtractReview(pyblish.api.InstancePlugin):
# Make sure to have full path to one input file
full_input_path_single_file = os.path.join(
staging_dir, repre["files"][0]
src_staging_dir, repre["files"][0]
)
else:
full_input_path = os.path.join(
staging_dir, repre["files"]
src_staging_dir, repre["files"]
)
filename = os.path.splitext(repre["files"])[0]
@@ -811,27 +863,27 @@ class ExtractReview(pyblish.api.InstancePlugin):
new_repre["sequence_file"] = repr_file
full_output_path = os.path.join(
staging_dir, filename_base, repr_file
dst_staging_dir, filename_base, repr_file
)
else:
repr_file = "{}_{}.{}".format(
filename, filename_suffix, output_ext
)
full_output_path = os.path.join(staging_dir, repr_file)
full_output_path = os.path.join(dst_staging_dir, repr_file)
new_repre_files = repr_file
# Store files to representation
new_repre["files"] = new_repre_files
# Make sure stagingDir exists
staging_dir = os.path.normpath(os.path.dirname(full_output_path))
if not os.path.exists(staging_dir):
self.log.debug("Creating dir: {}".format(staging_dir))
os.makedirs(staging_dir)
dst_staging_dir = os.path.normpath(os.path.dirname(full_output_path))
if not os.path.exists(dst_staging_dir):
self.log.debug("Creating dir: {}".format(dst_staging_dir))
os.makedirs(dst_staging_dir)
# Store stagingDir to representation
new_repre["stagingDir"] = staging_dir
new_repre["stagingDir"] = dst_staging_dir
# Store paths to temp data
temp_data["full_input_path"] = full_input_path
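For sequence inputs, `input_output_paths` uses `clique` to collapse the file list into a `{head}{padding}{tail}` pattern that ffmpeg understands (e.g. `render.%04d.exr`). A simplified stand-in for that formatting, without the `clique` dependency (the function name and regex are assumptions, not clique's implementation):

```python
import re


def sequence_pattern(filenames):
    """Derive an ffmpeg-style printf pattern from frame-numbered files.

    Simplified stand-in for clique's "{head}{padding}{tail}" formatting:
    splits "<head><frame><ext>" and replaces the frame with %0Nd.
    """
    match = re.match(r"^(.*?)(\d+)(\.\w+)$", sorted(filenames)[0])
    if not match:
        raise ValueError("Not a frame sequence: {}".format(filenames))
    head, frame, tail = match.groups()
    return "{}%0{}d{}".format(head, len(frame), tail)
```

The real `clique.assemble()` additionally validates that all files belong to one collection and handles inconsistent padding, which this sketch does not.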


@@ -1194,3 +1194,17 @@ QScrollBar::add-page:vertical, QScrollBar::sub-page:vertical {
#ImageButton:disabled {
background: {color:bg-buttons-disabled};
}
/* Input field that looks disabled
- QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit
- usage: QLineEdit that is not editable but has selectable color
*/
#LikeDisabledInput {
background: {color:bg-inputs-disabled};
}
#LikeDisabledInput:hover {
border-color: {color:border};
}
#LikeDisabledInput:focus {
border-color: {color:border};
}


@@ -101,7 +101,8 @@ class LookModel(models.TreeModel):
for look in asset_item["looks"]:
look_subsets[look["name"]].append(asset)
for subset, assets in sorted(look_subsets.iteritems()):
for subset in sorted(look_subsets.keys()):
assets = look_subsets[subset]
# Define nice label without "look" prefix for readability
label = subset if not subset.startswith("look") else subset[4:]
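The LookModel fix replaces Python 2-only `dict.iteritems()` with sorting the keys and indexing the dict, which behaves identically on Python 2 and 3. The pattern in isolation (sample data is illustrative):

```python
# Sample mapping of look subset name -> assets using it
look_subsets = {"lookMain": ["assetA"], "lookRim": ["assetB"]}

# Python 2 only:  for subset, assets in sorted(look_subsets.iteritems()):
# Portable form:  sort the keys, then look up each value
ordered = []
for subset in sorted(look_subsets.keys()):
    assets = look_subsets[subset]
    ordered.append((subset, assets))
```

`sorted(d.items())` would also work on both versions; sorting the keys keeps the diff minimal against the original loop body.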


@@ -64,8 +64,9 @@ class AppVariantWidget(QtWidgets.QWidget):
for item in studio_executables:
path_widget = QtWidgets.QLineEdit(content_widget)
path_widget.setObjectName("LikeDisabledInput")
path_widget.setText(item.value)
path_widget.setEnabled(False)
path_widget.setReadOnly(True)
content_layout.addWidget(path_widget)
def update_local_settings(self, value):


@@ -370,8 +370,12 @@ class PypeTrayStarter(QtCore.QObject):
splash = self._get_splash()
splash.show()
self._tray_widget.show()
# Make sure tray and splash are painted out
QtWidgets.QApplication.processEvents()
elif self._timer_counter == 1:
# Second processing of events to make sure splash is painted
QtWidgets.QApplication.processEvents()
self._timer_counter += 1
self._tray_widget.initialize_modules()


@@ -5,6 +5,7 @@ use singleton approach with global functions (using helper anyway).
"""
import avalon.api
from .lib import qt_app_context
class HostToolsHelper:
@@ -61,22 +62,23 @@ class HostToolsHelper:
if save is None:
save = True
workfiles_tool = self.get_workfiles_tool(parent)
workfiles_tool.set_save_enabled(save)
with qt_app_context():
workfiles_tool = self.get_workfiles_tool(parent)
workfiles_tool.set_save_enabled(save)
if not workfiles_tool.isVisible():
workfiles_tool.show()
if not workfiles_tool.isVisible():
workfiles_tool.show()
if use_context:
context = {
"asset": avalon.api.Session["AVALON_ASSET"],
"task": avalon.api.Session["AVALON_TASK"]
}
workfiles_tool.set_context(context)
if use_context:
context = {
"asset": avalon.api.Session["AVALON_ASSET"],
"task": avalon.api.Session["AVALON_TASK"]
}
workfiles_tool.set_context(context)
# Pull window to the front.
workfiles_tool.raise_()
workfiles_tool.activateWindow()
# Pull window to the front.
workfiles_tool.raise_()
workfiles_tool.activateWindow()
def get_loader_tool(self, parent):
"""Create, cache and return loader tool window."""
@@ -90,20 +92,21 @@ class HostToolsHelper:
def show_loader(self, parent=None, use_context=None):
"""Loader tool for loading representations."""
loader_tool = self.get_loader_tool(parent)
with qt_app_context():
loader_tool = self.get_loader_tool(parent)
loader_tool.show()
loader_tool.raise_()
loader_tool.activateWindow()
loader_tool.show()
loader_tool.raise_()
loader_tool.activateWindow()
if use_context is None:
use_context = False
if use_context is None:
use_context = False
if use_context:
context = {"asset": avalon.api.Session["AVALON_ASSET"]}
loader_tool.set_context(context, refresh=True)
else:
loader_tool.refresh()
if use_context:
context = {"asset": avalon.api.Session["AVALON_ASSET"]}
loader_tool.set_context(context, refresh=True)
else:
loader_tool.refresh()
def get_creator_tool(self, parent):
"""Create, cache and return creator tool window."""
@@ -117,13 +120,14 @@ class HostToolsHelper:
def show_creator(self, parent=None):
"""Show tool to create new instances for publishing."""
creator_tool = self.get_creator_tool(parent)
creator_tool.refresh()
creator_tool.show()
with qt_app_context():
creator_tool = self.get_creator_tool(parent)
creator_tool.refresh()
creator_tool.show()
# Pull window to the front.
creator_tool.raise_()
creator_tool.activateWindow()
# Pull window to the front.
creator_tool.raise_()
creator_tool.activateWindow()
def get_subset_manager_tool(self, parent):
"""Create, cache and return subset manager tool window."""
@@ -139,12 +143,13 @@ class HostToolsHelper:
def show_subset_manager(self, parent=None):
"""Show tool to display/remove existing created instances."""
subset_manager_tool = self.get_subset_manager_tool(parent)
subset_manager_tool.show()
with qt_app_context():
subset_manager_tool = self.get_subset_manager_tool(parent)
subset_manager_tool.show()
# Pull window to the front.
subset_manager_tool.raise_()
subset_manager_tool.activateWindow()
# Pull window to the front.
subset_manager_tool.raise_()
subset_manager_tool.activateWindow()
def get_scene_inventory_tool(self, parent):
"""Create, cache and return scene inventory tool window."""
@@ -160,13 +165,14 @@ class HostToolsHelper:
def show_scene_inventory(self, parent=None):
"""Show tool to maintain loaded containers."""
scene_inventory_tool = self.get_scene_inventory_tool(parent)
scene_inventory_tool.show()
scene_inventory_tool.refresh()
with qt_app_context():
scene_inventory_tool = self.get_scene_inventory_tool(parent)
scene_inventory_tool.show()
scene_inventory_tool.refresh()
# Pull window to the front.
scene_inventory_tool.raise_()
scene_inventory_tool.activateWindow()
# Pull window to the front.
scene_inventory_tool.raise_()
scene_inventory_tool.activateWindow()
def get_library_loader_tool(self, parent):
"""Create, cache and return library loader tool window."""
@@ -182,11 +188,12 @@ class HostToolsHelper:
def show_library_loader(self, parent=None):
"""Loader tool for loading representations from a library project."""
library_loader_tool = self.get_library_loader_tool(parent)
library_loader_tool.show()
library_loader_tool.raise_()
library_loader_tool.activateWindow()
library_loader_tool.refresh()
with qt_app_context():
library_loader_tool = self.get_library_loader_tool(parent)
library_loader_tool.show()
library_loader_tool.raise_()
library_loader_tool.activateWindow()
library_loader_tool.refresh()
def show_publish(self, parent=None):
"""Publish UI."""
@@ -207,9 +214,10 @@ class HostToolsHelper:
"""Look manager is a Maya-specific tool for look management."""
from avalon import style
look_assigner_tool = self.get_look_assigner_tool(parent)
look_assigner_tool.show()
look_assigner_tool.setStyleSheet(style.load_stylesheet())
with qt_app_context():
look_assigner_tool = self.get_look_assigner_tool(parent)
look_assigner_tool.show()
look_assigner_tool.setStyleSheet(style.load_stylesheet())
def get_experimental_tools_dialog(self, parent=None):
"""Dialog of experimental tools.
@@ -232,11 +240,12 @@ class HostToolsHelper:
def show_experimental_tools_dialog(self, parent=None):
"""Show dialog with experimental tools."""
dialog = self.get_experimental_tools_dialog(parent)
with qt_app_context():
dialog = self.get_experimental_tools_dialog(parent)
dialog.show()
dialog.raise_()
dialog.activateWindow()
dialog.show()
dialog.raise_()
dialog.activateWindow()
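The HostToolsHelper refactor above moves each tool's lazy creation and show calls inside `qt_app_context()`, so a QApplication exists before any widget is constructed. The lazy-cache-inside-context pattern, sketched without Qt (both the context manager body and the cached object are placeholders for `qt_app_context` and the real tool window):

```python
from contextlib import contextmanager


@contextmanager
def app_context():
    # Placeholder for qt_app_context(): would ensure a QApplication
    # exists before widgets are created, and run its event loop if needed
    yield


class ToolsHelper:
    def __init__(self):
        self._loader_tool = None

    def get_loader_tool(self, parent=None):
        """Create, cache and return the tool window (created once, lazily)."""
        if self._loader_tool is None:
            self._loader_tool = object()  # stands in for the Qt window
        return self._loader_tool

    def show_loader(self, parent=None):
        # Creation happens inside the context so the app exists first
        with app_context():
            tool = self.get_loader_tool(parent)
            return tool
```

The key ordering change in the diff is that `get_*_tool()` is now called inside the `with` block rather than before it, since the getter is what instantiates the widget on first use.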
def get_tool_by_name(self, tool_name, parent=None, *args, **kwargs):
"""Show tool by its name.


@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.7.0-nightly.3"
__version__ = "3.7.0-nightly.4"


@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
version = "3.7.0-nightly.3" # OpenPype
version = "3.7.0-nightly.4" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"