Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 12:54:40 +01:00)

commit a8ac7cd538 (parent f82ed5e127)

    merge develop into main

418 changed files with 26017 additions and 4665 deletions
7	.github/weekly-digest.yml (vendored, deleted)

@@ -1,7 +0,0 @@
-# Configuration for weekly-digest - https://github.com/apps/weekly-digest
-publishDay: sun
-canPublishIssues: true
-canPublishPullRequests: true
-canPublishContributors: true
-canPublishStargazers: true
-canPublishCommits: true
1	.gitignore (vendored)

@@ -64,7 +64,6 @@ coverage.xml
 .hypothesis/
 .pytest_cache/
 
-
 # Node JS packages
 ##################
 node_modules
56	CHANGELOG.md

@@ -1,5 +1,61 @@
 # Changelog
 
+## [2.17.1](https://github.com/pypeclub/openpype/tree/2.17.1) (2021-04-30)
+
+[Full Changelog](https://github.com/pypeclub/openpype/compare/2.17.0...2.17.1)
+
+**Enhancements:**
+
+- TVPaint frame range definition [\#1424](https://github.com/pypeclub/OpenPype/pull/1424)
+- PS - group all published instances [\#1415](https://github.com/pypeclub/OpenPype/pull/1415)
+- Nuke: deadline submission with gpu [\#1414](https://github.com/pypeclub/OpenPype/pull/1414)
+- Add task name to context pop up. [\#1383](https://github.com/pypeclub/OpenPype/pull/1383)
+- AE add duration validation [\#1363](https://github.com/pypeclub/OpenPype/pull/1363)
+- Maya: Support for Redshift proxies [\#1360](https://github.com/pypeclub/OpenPype/pull/1360)
+
+**Fixed bugs:**
+
+- Nuke: fixing undo for loaded mov and sequence [\#1433](https://github.com/pypeclub/OpenPype/pull/1433)
+- AE - validation for duration was 1 frame shorter [\#1426](https://github.com/pypeclub/OpenPype/pull/1426)
+- Houdini menu filename [\#1417](https://github.com/pypeclub/OpenPype/pull/1417)
+- Maya: Vray - problem getting all file nodes for look publishing [\#1399](https://github.com/pypeclub/OpenPype/pull/1399)
+
+## [2.17.0](https://github.com/pypeclub/openpype/tree/2.17.0) (2021-04-20)
+
+[Full Changelog](https://github.com/pypeclub/openpype/compare/3.0.0-beta2...2.17.0)
+
+**Enhancements:**
+
+- Forward compatible ftrack group [\#1243](https://github.com/pypeclub/OpenPype/pull/1243)
+- Maya: Make tx option configurable with presets [\#1328](https://github.com/pypeclub/OpenPype/pull/1328)
+- TVPaint asset name validation [\#1302](https://github.com/pypeclub/OpenPype/pull/1302)
+- TV Paint: Set initial project settings. [\#1299](https://github.com/pypeclub/OpenPype/pull/1299)
+- TV Paint: Validate mark in and out. [\#1298](https://github.com/pypeclub/OpenPype/pull/1298)
+- Validate project settings [\#1297](https://github.com/pypeclub/OpenPype/pull/1297)
+- After Effects: added SubsetManager [\#1234](https://github.com/pypeclub/OpenPype/pull/1234)
+- Show error message in pyblish UI [\#1206](https://github.com/pypeclub/OpenPype/pull/1206)
+
+**Fixed bugs:**
+
+- Hiero: fixing source frame from correct object [\#1362](https://github.com/pypeclub/OpenPype/pull/1362)
+- Nuke: fix colourspace, prerenders and nuke panes opening [\#1308](https://github.com/pypeclub/OpenPype/pull/1308)
+- AE remove orphaned instance from workfile - fix self.stub [\#1282](https://github.com/pypeclub/OpenPype/pull/1282)
+- Nuke: deadline submission with search replaced env values from preset [\#1194](https://github.com/pypeclub/OpenPype/pull/1194)
+- Ftrack custom attributes in bulks [\#1312](https://github.com/pypeclub/OpenPype/pull/1312)
+- Ftrack optional pypclub role [\#1303](https://github.com/pypeclub/OpenPype/pull/1303)
+- After Effects: remove orphaned instances [\#1275](https://github.com/pypeclub/OpenPype/pull/1275)
+- Avalon schema names [\#1242](https://github.com/pypeclub/OpenPype/pull/1242)
+- Handle duplication of Task name [\#1226](https://github.com/pypeclub/OpenPype/pull/1226)
+- Modified path of plugin loads for Harmony and TVPaint [\#1217](https://github.com/pypeclub/OpenPype/pull/1217)
+- Regex checks in profiles filtering [\#1214](https://github.com/pypeclub/OpenPype/pull/1214)
+- Bulk mov strict task [\#1204](https://github.com/pypeclub/OpenPype/pull/1204)
+- Update custom ftrack session attributes [\#1202](https://github.com/pypeclub/OpenPype/pull/1202)
+- Nuke: write node colorspace ignore `default\(\)` label [\#1199](https://github.com/pypeclub/OpenPype/pull/1199)
+- Nuke: reverse search to make it more versatile [\#1178](https://github.com/pypeclub/OpenPype/pull/1178)
+
 ## [2.16.1](https://github.com/pypeclub/pype/tree/2.16.1) (2021-04-13)
 
 [Full Changelog](https://github.com/pypeclub/pype/compare/2.16.0...2.16.1)
11	README.md

@@ -2,6 +2,10 @@
 OpenPype
 ====
 
+[](https://github.com/pypeclub/pype/actions/workflows/documentation.yml)  
+
+
+
 Introduction
 ------------
 
@@ -61,7 +65,8 @@ git clone --recurse-submodules git@github.com:Pypeclub/OpenPype.git
 #### To build OpenPype:
 
 1) Run `.\tools\create_env.ps1` to create virtual environment in `.\venv`
-2) Run `.\tools\build.ps1` to build OpenPype executables in `.\build\`
+2) Run `.\tools\fetch_thirdparty_libs.ps1` to download third-party dependencies like ffmpeg and oiio. Those will be included in build.
+3) Run `.\tools\build.ps1` to build OpenPype executables in `.\build\`
 
 To create distributable OpenPype versions, run `./tools/create_zip.ps1` - that will
 create zip file with name `openpype-vx.x.x.zip` parsed from current OpenPype repository and
 
@@ -116,8 +121,8 @@ pyenv local 3.7.9
 #### To build OpenPype:
 
 1) Run `.\tools\create_env.sh` to create virtual environment in `.\venv`
-2) Run `.\tools\build.sh` to build OpenPype executables in `.\build\`
-
+2) Run `.\tools\fetch_thirdparty_libs.sh` to download third-party dependencies like ffmpeg and oiio. Those will be included in build.
+3) Run `.\tools\build.sh` to build OpenPype executables in `.\build\`
 
 ### Linux
 
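The reordered build steps above (create the environment, fetch third-party libraries, then build) can be listed programmatically; the helper below is purely illustrative and not part of the repository tooling:

```python
from pathlib import Path


def windows_build_steps(repo_root: str):
    """Return the PowerShell scripts from the Windows steps above, in order.

    Hypothetical dry-run helper: it only builds the paths, it does not run them.
    """
    tools = Path(repo_root) / "tools"
    return [
        tools / "create_env.ps1",
        tools / "fetch_thirdparty_libs.ps1",
        tools / "build.ps1",
    ]


for script in windows_build_steps("."):
    print(script.name)
```

The same order applies on macOS and Linux with the `.sh` counterparts.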
93	igniter/Poppins/OFL.txt (new file)

@@ -0,0 +1,93 @@
+Copyright 2020 The Poppins Project Authors (https://github.com/itfoundry/Poppins)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+http://scripts.sil.org/OFL
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
BIN	igniter/Poppins/Poppins-Black.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-BlackItalic.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-Bold.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-BoldItalic.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-ExtraBold.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-ExtraBoldItalic.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-ExtraLight.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-ExtraLightItalic.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-Italic.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-Light.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-LightItalic.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-Medium.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-MediumItalic.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-Regular.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-SemiBold.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-SemiBoldItalic.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-Thin.ttf (new file, binary not shown)
BIN	igniter/Poppins/Poppins-ThinItalic.ttf (new file, binary not shown)
@@ -10,29 +10,22 @@ from .bootstrap_repos import BootstrapRepos
 from .version import __version__ as version
 
 
-RESULT = 0
-
-
-def get_result(res: int):
-    """Sets result returned from dialog."""
-    global RESULT
-    RESULT = res
-
-
 def open_dialog():
     """Show Igniter dialog."""
-    from Qt import QtWidgets
+    from Qt import QtWidgets, QtCore
     from .install_dialog import InstallDialog
 
+    scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)
+    if scale_attr is not None:
+        QtWidgets.QApplication.setAttribute(scale_attr)
+
     app = QtWidgets.QApplication(sys.argv)
 
     d = InstallDialog()
-    d.finished.connect(get_result)
    d.open()
 
-    app.exec()
-
-    return RESULT
+    app.exec_()
+    return d.result()
 
 
 __all__ = [
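The `getattr` guard added to `open_dialog` (so bindings whose `QtCore.Qt` lacks `AA_EnableHighDpiScaling` do not raise) can be illustrated without Qt; the two classes below are hypothetical stand-ins for old and new bindings:

```python
class FakeQtOld:
    """Hypothetical stand-in for QtCore.Qt from a binding without the flag."""


class FakeQtNew:
    """Hypothetical stand-in for a binding that has the flag."""
    AA_EnableHighDpiScaling = 20  # arbitrary value for this sketch


def scaling_available(qt_namespace) -> bool:
    # Mirror of the guard: only act when the attribute actually exists.
    scale_attr = getattr(qt_namespace, "AA_EnableHighDpiScaling", None)
    return scale_attr is not None


print(scaling_available(FakeQtOld), scaling_available(FakeQtNew))
```

In the real code, `setAttribute` is only called when the lookup succeeds, so the dialog still opens on bindings that predate the attribute.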
@@ -223,7 +223,7 @@ class BootstrapRepos:
             otherwise `None`.
         registry (OpenPypeSettingsRegistry): OpenPype registry object.
         zip_filter (list): List of files to exclude from zip
-        openpype_filter (list): list of top level directories not to
+        openpype_filter (list): list of top level directories to
             include in zip in OpenPype repository.
 
     """
@@ -246,7 +246,7 @@ class BootstrapRepos:
         self.registry = OpenPypeSettingsRegistry()
         self.zip_filter = [".pyc", "__pycache__"]
         self.openpype_filter = [
-            "build", "docs", "tests", "tools", "venv", "coverage"
+            "openpype", "repos", "schema", "LICENSE"
         ]
         self._message = message
 
@@ -285,7 +285,7 @@ class BootstrapRepos:
         """Get version of local OpenPype."""
 
         version = {}
-        path = Path(os.path.dirname(__file__)).parent / "openpype" / "version.py"
+        path = Path(os.environ["OPENPYPE_ROOT"]) / "openpype" / "version.py"
         with open(path, "r") as fp:
             exec(fp.read(), version)
         return version["__version__"]
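The hunk above reads `__version__` by exec'ing `version.py` into a dict instead of importing the module. A minimal self-contained sketch of that pattern (the file content here is made up, not the real `openpype/version.py`):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # Stand-in version.py with hypothetical content.
    path = Path(tmp) / "version.py"
    path.write_text('__version__ = "3.0.0-rc1"\n')

    version = {}
    with open(path, "r") as fp:
        # exec populates the dict without importing the module,
        # so no package machinery or sys.path changes are needed.
        exec(fp.read(), version)

print(version["__version__"])
```

This avoids import caching: rereading the file always reflects the version on disk.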
@@ -423,18 +423,13 @@ class BootstrapRepos:
         """
         frozen_root = Path(sys.executable).parent
 
-        # from frozen code we need igniter, openpype, schema vendor
-        openpype_list = self._filter_dir(
-            frozen_root / "openpype", self.zip_filter)
-        openpype_list += self._filter_dir(
-            frozen_root / "igniter", self.zip_filter)
-        openpype_list += self._filter_dir(
-            frozen_root / "repos", self.zip_filter)
-        openpype_list += self._filter_dir(
-            frozen_root / "schema", self.zip_filter)
-        openpype_list += self._filter_dir(
-            frozen_root / "vendor", self.zip_filter)
-        openpype_list.append(frozen_root / "LICENSE")
+        openpype_list = []
+        for f in self.openpype_filter:
+            if (frozen_root / f).is_dir():
+                openpype_list += self._filter_dir(
+                    frozen_root / f, self.zip_filter)
+            else:
+                openpype_list.append(frozen_root / f)
 
         version = self.get_version(frozen_root)
 
@@ -477,11 +472,16 @@ class BootstrapRepos:
             openpype_path (Path): Path to OpenPype sources.
 
         """
         openpype_list = []
         openpype_inc = 0
 
         # get filtered list of file in Pype repository
-        openpype_list = self._filter_dir(openpype_path, self.zip_filter)
+        # openpype_list = self._filter_dir(openpype_path, self.zip_filter)
+        openpype_list = []
+        for f in self.openpype_filter:
+            if (openpype_path / f).is_dir():
+                openpype_list += self._filter_dir(
+                    openpype_path / f, self.zip_filter)
+            else:
+                openpype_list.append(openpype_path / f)
 
         openpype_files = len(openpype_list)
 
         openpype_inc = 98.0 / float(openpype_files)
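Both hunks above switch from filtering the whole tree to an include-list walk: each entry of `openpype_filter` is filtered recursively when it is a directory and appended as-is when it is a plain file (like `LICENSE`). A standalone sketch with a simplified stand-in for `_filter_dir` (the real method applies `zip_filter` with its own rules):

```python
import tempfile
from pathlib import Path


def _filter_dir(root: Path, zip_filter):
    """Simplified stand-in: list files under root, skipping filtered names."""
    return [
        p for p in root.rglob("*")
        if p.is_file() and p.suffix not in zip_filter and p.name not in zip_filter
    ]


def collect(root: Path, openpype_filter, zip_filter):
    # Mirror of the include-list walk in the hunks above.
    openpype_list = []
    for f in openpype_filter:
        if (root / f).is_dir():
            openpype_list += _filter_dir(root / f, zip_filter)
        else:
            openpype_list.append(root / f)
    return openpype_list


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "openpype").mkdir()
    (root / "openpype" / "cli.py").write_text("")
    (root / "openpype" / "cache.pyc").write_text("")  # excluded by zip_filter
    (root / "LICENSE").write_text("")

    files = collect(root, ["openpype", "LICENSE"], [".pyc", "__pycache__"])

print(sorted(p.name for p in files))
```

Only the listed top-level entries end up in the zip, which is why the commit also rewrote `openpype_filter` as an include list.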
@@ -506,7 +506,7 @@ class BootstrapRepos:
             except ValueError:
                 pass
 
-            if is_inside:
+            if not is_inside:
                 continue
 
             processed_path = file
@@ -575,7 +575,7 @@ class BootstrapRepos:
 
         """
         sys.path.insert(0, directory.as_posix())
-        directory = directory / "repos"
+        directory /= "repos"
         if not directory.exists() and not directory.is_dir():
             raise ValueError("directory is invalid")
 
@@ -681,7 +681,7 @@ class BootstrapRepos:
         openpype_path = None
         # try to get OpenPype path from mongo.
         if location.startswith("mongodb"):
-            pype_path = get_openpype_path_from_db(location)
+            openpype_path = get_openpype_path_from_db(location)
             if not openpype_path:
                 self._print("cannot find OPENPYPE_PATH in settings.")
                 return None
@@ -808,7 +808,7 @@ class BootstrapRepos:
         """Install OpenPype version to user data directory.
 
         Args:
-            oepnpype_version (OpenPypeVersion): OpenPype version to install.
+            openpype_version (OpenPypeVersion): OpenPype version to install.
             force (bool, optional): Force overwrite existing version.
 
         Returns:

(File diff suppressed because it is too large.)
@@ -17,12 +17,6 @@ from .bootstrap_repos import (
 from .tools import validate_mongo_connection
 
 
-class InstallResult(QObject):
-    """Used to pass results back."""
-    def __init__(self, value):
-        self.status = value
-
-
 class InstallThread(QThread):
     """Install Worker thread.
 
@@ -36,15 +30,22 @@ class InstallThread(QThread):
     """
     progress = Signal(int)
     message = Signal((str, bool))
-    finished = Signal(object)
 
-    def __init__(self, callback, parent=None,):
+    def __init__(self, parent=None,):
         self._mongo = None
         self._path = None
-        self.result_callback = callback
+        self._result = None
 
         QThread.__init__(self, parent)
-        self.finished.connect(callback)
+
+    def result(self):
+        """Result of finished installation."""
+        return self._result
+
+    def _set_result(self, value):
+        if self._result is not None:
+            raise AssertionError("BUG: Result was set more than once!")
+        self._result = value
 
     def run(self):
         """Thread entry point.
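The `result()`/`_set_result()` pair replaces the removed `InstallResult` wrapper: the thread stores a single exit status and the caller reads it after the built-in `finished` signal fires. The set-once guard can be shown without Qt; this class is a minimal sketch of just that storage pattern:

```python
class ResultHolder:
    """Minimal sketch of the InstallThread result pattern (no Qt involved)."""

    def __init__(self):
        self._result = None

    def result(self):
        """Result of finished installation."""
        return self._result

    def _set_result(self, value):
        # Each code path in run() must set the result exactly once;
        # a second set indicates two paths fired, which is a bug.
        if self._result is not None:
            raise AssertionError("BUG: Result was set more than once!")
        self._result = value


holder = ResultHolder()
holder._set_result(1)
print(holder.result())
```

The guard turns a silent double-assignment into a loud failure during development.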
@@ -76,7 +77,7 @@ class InstallThread(QThread):
             except ValueError:
                 self.message.emit(
                     "!!! We need MongoDB URL to proceed.", True)
-                self.finished.emit(InstallResult(-1))
+                self._set_result(-1)
                 return
         else:
             self._mongo = os.getenv("OPENPYPE_MONGO")
@@ -101,7 +102,7 @@ class InstallThread(QThread):
                 self.message.emit("Skipping OpenPype install ...", False)
                 if detected[-1].path.suffix.lower() == ".zip":
                     bs.extract_openpype(detected[-1])
-                self.finished.emit(InstallResult(0))
+                self._set_result(0)
                 return
 
             if OpenPypeVersion(version=local_version).get_main_version() == detected[-1].get_main_version():  # noqa
@@ -110,7 +111,7 @@ class InstallThread(QThread):
                     f"currently running {local_version}"
                 ), False)
                 self.message.emit("Skipping OpenPype install ...", False)
-                self.finished.emit(InstallResult(0))
+                self._set_result(0)
                 return
 
             self.message.emit((
@@ -126,13 +127,13 @@ class InstallThread(QThread):
                 if not openpype_version:
                     self.message.emit(
                         f"!!! Install failed - {openpype_version}", True)
-                    self.finished.emit(InstallResult(-1))
+                    self._set_result(-1)
                     return
                 self.message.emit(f"Using: {openpype_version}", False)
                 bs.install_version(openpype_version)
                 self.message.emit(f"Installed as {openpype_version}", False)
                 self.progress.emit(100)
-                self.finished.emit(InstallResult(1))
+                self._set_result(1)
                 return
             else:
                 self.message.emit("None detected.", False)
@@ -144,7 +145,7 @@ class InstallThread(QThread):
                 if not local_openpype:
                     self.message.emit(
                         f"!!! Install failed - {local_openpype}", True)
-                    self.finished.emit(InstallResult(-1))
+                    self._set_result(-1)
                     return
 
                 try:
@@ -154,11 +155,12 @@ class InstallThread(QThread):
                         OpenPypeVersionIOError) as e:
                     self.message.emit(f"Installed failed: ", True)
                     self.message.emit(str(e), True)
-                    self.finished.emit(InstallResult(-1))
+                    self._set_result(-1)
                     return
 
                 self.message.emit(f"Installed as {local_openpype}", False)
                 self.progress.emit(100)
+                self._set_result(1)
                 return
         else:
             # if we have mongo connection string, validate it, set it to
@@ -167,7 +169,7 @@ class InstallThread(QThread):
             if not validate_mongo_connection(self._mongo):
                 self.message.emit(
                     f"!!! invalid mongo url {self._mongo}", True)
-                self.finished.emit(InstallResult(-1))
+                self._set_result(-1)
                 return
             bs.secure_registry.set_item("openPypeMongo", self._mongo)
             os.environ["OPENPYPE_MONGO"] = self._mongo
@@ -177,11 +179,11 @@ class InstallThread(QThread):
 
         if not repo_file:
             self.message.emit("!!! Cannot install", True)
-            self.finished.emit(InstallResult(-1))
+            self._set_result(-1)
             return
 
         self.progress.emit(100)
-        self.finished.emit(InstallResult(1))
+        self._set_result(1)
         return
 
     def set_path(self, path: str) -> None:
BIN	igniter/openpype.icns (new file, binary not shown)

280	igniter/stylesheet.css (new file)
@@ -0,0 +1,280 @@
+*{
+    font-size: 10pt;
+    font-family: "Poppins";
+}
+
+QWidget {
+    color: #bfccd6;
+    background-color: #282C34;
+    border-radius: 0px;
+}
+
+QMenu {
+    border: 1px solid #555555;
+    background-color: #21252B;
+}
+
+QMenu::item {
+    padding: 5px 10px 5px 10px;
+    border-left: 5px solid #313741;;
+}
+
+QMenu::item:selected {
+    border-left-color: rgb(84, 209, 178);
+    background-color: #222d37;
+}
+
+QLineEdit, QPlainTextEdit {
+    border: 1px solid #464b54;
+    border-radius: 3px;
+    background-color: #21252B;
+    padding: 0.5em;
+}
+
+QLineEdit[state="valid"] {
+    background-color: rgb(19, 19, 19);
+    color: rgb(64, 230, 132);
+    border-color: rgb(32, 64, 32);
+}
+
+QLineEdit[state="invalid"] {
+    background-color: rgb(32, 19, 19);
+    color: rgb(255, 69, 0);
+    border-color: rgb(64, 32, 32);
+}
+
+QLabel {
+    background: transparent;
+    color: #969b9e;
+}
+
+QLabel:hover {color: #b8c1c5;}
+
+QPushButton {
+    border: 1px solid #aaaaaa;
+    border-radius: 3px;
+    padding: 5px;
+}
+
+QPushButton:hover {
+    background-color: #333840;
+    border: 1px solid #fff;
+    color: #fff;
+}
+
+QTableView {
+    border: 1px solid #444;
+    gridline-color: #6c6c6c;
+    background-color: #201F1F;
+    alternate-background-color:#21252B;
+}
+
+QTableView::item:pressed, QListView::item:pressed, QTreeView::item:pressed {
+    background: #78879b;
+    color: #FFFFFF;
+}
+
+QTableView::item:selected:active, QTreeView::item:selected:active, QListView::item:selected:active {
+    background: #3d8ec9;
+}
+
+QProgressBar {
+    border: 1px solid grey;
+    border-radius: 10px;
+    color: #222222;
+    font-weight: bold;
+}
+QProgressBar:horizontal {
+    height: 20px;
+}
+
+QProgressBar::chunk {
+    border-radius: 10px;
+    background-color: qlineargradient(
+        x1: 0,
+        y1: 0.5,
+        x2: 1,
+        y2: 0.5,
+        stop: 0 rgb(72, 200, 150),
+        stop: 1 rgb(82, 172, 215)
+    );
+}
+
+
+QScrollBar:horizontal {
+    height: 15px;
+    margin: 3px 15px 3px 15px;
+    border: 1px transparent #21252B;
+    border-radius: 4px;
+    background-color: #21252B;
+}
+
+QScrollBar::handle:horizontal {
+    background-color: #4B5362;
+    min-width: 5px;
+    border-radius: 4px;
+}
+
+QScrollBar::add-line:horizontal {
+    margin: 0px 3px 0px 3px;
+    border-image: url(:/qss_icons/rc/right_arrow_disabled.png);
+    width: 10px;
+    height: 10px;
+    subcontrol-position: right;
+    subcontrol-origin: margin;
+}
+
+QScrollBar::sub-line:horizontal {
+    margin: 0px 3px 0px 3px;
+    border-image: url(:/qss_icons/rc/left_arrow_disabled.png);
+    height: 10px;
+    width: 10px;
+    subcontrol-position: left;
+    subcontrol-origin: margin;
+}
+
+QScrollBar::add-line:horizontal:hover,QScrollBar::add-line:horizontal:on {
+    border-image: url(:/qss_icons/rc/right_arrow.png);
+    height: 10px;
+    width: 10px;
+    subcontrol-position: right;
+    subcontrol-origin: margin;
+}
+
+QScrollBar::sub-line:horizontal:hover, QScrollBar::sub-line:horizontal:on {
+    border-image: url(:/qss_icons/rc/left_arrow.png);
+    height: 10px;
+    width: 10px;
+    subcontrol-position: left;
+    subcontrol-origin: margin;
+}
+
+QScrollBar::up-arrow:horizontal, QScrollBar::down-arrow:horizontal {
+    background: none;
+}
+
+QScrollBar::add-page:horizontal, QScrollBar::sub-page:horizontal {
+    background: none;
+}
+
+QScrollBar:vertical {
+    background-color: #21252B;
+    width: 15px;
+    margin: 15px 3px 15px 3px;
+    border: 1px transparent #21252B;
+    border-radius: 4px;
+}
+
+QScrollBar::handle:vertical {
+    background-color: #4B5362;
+    min-height: 5px;
+    border-radius: 4px;
+}
+
+QScrollBar::sub-line:vertical {
+    margin: 3px 0px 3px 0px;
+    border-image: url(:/qss_icons/rc/up_arrow_disabled.png);
+    height: 10px;
+    width: 10px;
+    subcontrol-position: top;
+    subcontrol-origin: margin;
+}
+
+QScrollBar::add-line:vertical {
+    margin: 3px 0px 3px 0px;
+    border-image: url(:/qss_icons/rc/down_arrow_disabled.png);
+    height: 10px;
+    width: 10px;
+    subcontrol-position: bottom;
+    subcontrol-origin: margin;
+}
+
+QScrollBar::sub-line:vertical:hover,QScrollBar::sub-line:vertical:on {
+
+    border-image: url(:/qss_icons/rc/up_arrow.png);
+    height: 10px;
+    width: 10px;
+    subcontrol-position: top;
+    subcontrol-origin: margin;
+}
+
+
+QScrollBar::add-line:vertical:hover, QScrollBar::add-line:vertical:on {
+    border-image: url(:/qss_icons/rc/down_arrow.png);
+    height: 10px;
+    width: 10px;
+    subcontrol-position: bottom;
+    subcontrol-origin: margin;
+}
+
+QScrollBar::up-arrow:vertical, QScrollBar::down-arrow:vertical {
+    background: none;
+}
+
+
+QScrollBar::add-page:vertical, QScrollBar::sub-page:vertical {
+    background: none;
+}
+
+#MainLabel {
+    color: rgb(200, 200, 200);
+    font-size: 12pt;
+}
+
+#Console {
+    background-color: #21252B;
+    color: rgb(72, 200, 150);
+    font-family: "Roboto Mono";
+    font-size: 8pt;
+}
+
+#ExitBtn {
+    /* `border` must be set to background of flat button is painted .*/
+    border: none;
+    color: rgb(39, 39, 39);
+    background-color: #828a97;
+    padding: 0.5em;
+    font-weight: 400;
+}
+
+#ExitBtn:hover{
+    background-color: #b2bece
+}
+#ExitBtn:disabled {
+    background-color: rgba(185, 185, 185, 31);
+    color: rgba(64, 64, 64, 63);
+}
+
+#ButtonWithOptions QPushButton{
+    border-top-right-radius: 0px;
+    border-bottom-right-radius: 0px;
+    border: none;
+    background-color: rgb(84, 209, 178);
+    color: rgb(39, 39, 39);
+    font-weight: 400;
+    padding: 0.5em;
+}
+#ButtonWithOptions QPushButton:hover{
+    background-color: rgb(85, 224, 189)
+}
+#ButtonWithOptions QPushButton:disabled {
+    background-color: rgba(72, 200, 150, 31);
+    color: rgba(64, 64, 64, 63);
+}
+
+#ButtonWithOptions QToolButton{
+    border: none;
+    border-top-left-radius: 0px;
+    border-bottom-left-radius: 0px;
+    border-top-right-radius: 3px;
+    border-bottom-right-radius: 3px;
+    background-color: rgb(84, 209, 178);
+    color: rgb(39, 39, 39);
+}
+#ButtonWithOptions QToolButton:hover{
+    background-color: rgb(85, 224, 189)
+}
+#ButtonWithOptions QToolButton:disabled {
+    background-color: rgba(72, 200, 150, 31);
+    color: rgba(64, 64, 64, 63);
+}
@@ -14,7 +14,12 @@ from pathlib import Path
 import platform
 
 from pymongo import MongoClient
-from pymongo.errors import ServerSelectionTimeoutError, InvalidURI
+from pymongo.errors import (
+    ServerSelectionTimeoutError,
+    InvalidURI,
+    ConfigurationError,
+    OperationFailure
+)
 
 
 def decompose_url(url: str) -> Dict:
@ -115,30 +120,20 @@ def validate_mongo_connection(cnx: str) -> (bool, str):
|
|||
parsed = urlparse(cnx)
|
||||
if parsed.scheme not in ["mongodb", "mongodb+srv"]:
|
||||
return False, "Not mongodb schema"
|
||||
# we have mongo connection string. Let's try if we can connect.
|
||||
try:
|
||||
components = decompose_url(cnx)
|
||||
except RuntimeError:
|
||||
return False, f"Invalid port specified."
|
||||
|
||||
mongo_args = {
|
||||
"host": compose_url(**components),
|
||||
"serverSelectionTimeoutMS": 2000
|
||||
}
|
||||
port = components.get("port")
|
||||
if port is not None:
|
||||
mongo_args["port"] = int(port)
|
||||
|
||||
try:
|
||||
client = MongoClient(**mongo_args)
|
||||
client = MongoClient(
|
||||
cnx,
|
||||
serverSelectionTimeoutMS=2000
|
||||
)
|
||||
client.server_info()
|
||||
client.close()
|
||||
except ServerSelectionTimeoutError as e:
|
||||
return False, f"Cannot connect to server {cnx} - {e}"
|
||||
except ValueError:
|
||||
return False, f"Invalid port specified {parsed.port}"
|
||||
except InvalidURI as e:
|
||||
return False, str(e)
|
||||
except (ConfigurationError, OperationFailure, InvalidURI) as exc:
|
||||
return False, str(exc)
|
||||
else:
|
||||
return True, "Connection is successful"
|
||||
|
||||
|
|
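The first guard of `validate_mongo_connection` in the hunk above is plain URL parsing and needs no running server. A minimal sketch of just that scheme check (the function name `check_mongo_scheme` is mine, not from the commit):

```python
from urllib.parse import urlparse


def check_mongo_scheme(cnx):
    # Mirrors the first guard in validate_mongo_connection above:
    # only mongodb:// and mongodb+srv:// connection strings pass.
    parsed = urlparse(cnx)
    if parsed.scheme not in ["mongodb", "mongodb+srv"]:
        return False, "Not mongodb schema"
    return True, "Scheme is valid"


print(check_mongo_scheme("mongodb://localhost:27017"))  # (True, 'Scheme is valid')
print(check_mongo_scheme("http://localhost:27017"))     # (False, 'Not mongodb schema')
```

The rest of the validation (exception mapping for `ConfigurationError`, `OperationFailure`, `InvalidURI`) only triggers against a live `MongoClient`, which is why the commit hands the whole connection string to pymongo instead of decomposing it first.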
@@ -1,4 +1,4 @@
 # -*- coding: utf-8 -*-
 """Definition of Igniter version."""

-__version__ = "1.0.0-beta"
+__version__ = "1.0.0-rc1"
50 inno_setup.iss Normal file
@@ -0,0 +1,50 @@
+; Script generated by the Inno Setup Script Wizard.
+; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!
+
+
+#define MyAppName "OpenPype"
+#define Build GetEnv("BUILD_DIR")
+#define AppVer GetEnv("BUILD_VERSION")
+
+
+[Setup]
+; NOTE: The value of AppId uniquely identifies this application. Do not use the same AppId value in installers for other applications.
+; (To generate a new GUID, click Tools | Generate GUID inside the IDE.)
+AppId={{B9E9DF6A-5BDA-42DD-9F35-C09D564C4D93}
+AppName={#MyAppName}
+AppVersion={#AppVer}
+AppVerName={#MyAppName} version {#AppVer}
+AppPublisher=Orbi Tools s.r.o
+AppPublisherURL=http://pype.club
+AppSupportURL=http://pype.club
+AppUpdatesURL=http://pype.club
+DefaultDirName={autopf}\{#MyAppName}
+DisableProgramGroupPage=yes
+OutputBaseFilename={#MyAppName}-{#AppVer}-install
+AllowCancelDuringInstall=yes
+; Uncomment the following line to run in non administrative install mode (install for current user only.)
+;PrivilegesRequired=lowest
+PrivilegesRequiredOverridesAllowed=dialog
+SetupIconFile=igniter\openpype.ico
+OutputDir=build\
+Compression=lzma
+SolidCompression=yes
+WizardStyle=modern
+
+[Languages]
+Name: "english"; MessagesFile: "compiler:Default.isl"
+
+[Tasks]
+Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
+
+[Files]
+Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs
+; NOTE: Don't use "Flags: ignoreversion" on any shared system files
+
+[Icons]
+Name: "{autoprograms}\{#MyAppName}"; Filename: "{app}\openpype_gui.exe"
+Name: "{autodesktop}\{#MyAppName}"; Filename: "{app}\openpype_gui.exe"; Tasks: desktopicon
+
+[Run]
+Filename: "{app}\openpype_gui.exe"; Description: "{cm:LaunchProgram,OpenPype}"; Flags: nowait postinstall skipifsilent
@@ -9,6 +9,7 @@ from .settings import get_project_settings
 from .lib import (
     Anatomy,
     filter_pyblish_plugins,
+    set_plugin_attributes_from_settings,
     change_timer_to_current_context
 )

@@ -58,38 +59,8 @@ def patched_discover(superclass):
     # run original discover and get plugins
     plugins = _original_discover(superclass)

-    # determine host application to use for finding presets
-    if avalon.registered_host() is None:
-        return plugins
-    host = avalon.registered_host().__name__.split(".")[-1]
+    set_plugin_attributes_from_settings(plugins, superclass)

-    # map plugin superclass to preset json. Currenly suppoted is load and
-    # create (avalon.api.Loader and avalon.api.Creator)
-    plugin_type = "undefined"
-    if superclass.__name__.split(".")[-1] == "Loader":
-        plugin_type = "load"
-    elif superclass.__name__.split(".")[-1] == "Creator":
-        plugin_type = "create"
-
-    print(">>> Finding presets for {}:{} ...".format(host, plugin_type))
-    try:
-        settings = (
-            get_project_settings(os.environ['AVALON_PROJECT'])
-            [host][plugin_type]
-        )
-    except KeyError:
-        print("*** no presets found.")
-    else:
-        for plugin in plugins:
-            if plugin.__name__ in settings:
-                print(">>> We have preset for {}".format(plugin.__name__))
-                for option, value in settings[plugin.__name__].items():
-                    if option == "enabled" and value is False:
-                        setattr(plugin, "active", False)
-                        print(" - is disabled by preset")
-                    else:
-                        setattr(plugin, option, value)
-                        print(" - setting `{}`: `{}`".format(option, value))
     return plugins
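The inlined preset loop removed in the hunk above (and folded into `set_plugin_attributes_from_settings`) is a simple setattr-from-settings pattern. A self-contained sketch of that pattern, with hypothetical plugin and settings names of my own:

```python
# Hypothetical plugin class and settings dict illustrating the preset
# loop that the commit moves into set_plugin_attributes_from_settings.
class MyLoader:
    active = True
    color = "blue"


settings = {"MyLoader": {"enabled": False, "color": "red"}}
plugins = [MyLoader]

for plugin in plugins:
    if plugin.__name__ in settings:
        for option, value in settings[plugin.__name__].items():
            if option == "enabled" and value is False:
                # "enabled" is translated to the pyblish `active` flag
                setattr(plugin, "active", False)
            else:
                setattr(plugin, option, value)

print(MyLoader.active, MyLoader.color)  # False red
```

The key design point is that presets are applied to the plugin *class*, not instances, so every later discovery sees the overridden attributes.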
@@ -224,17 +224,6 @@ def launch(app, project, asset, task,
     PypeCommands().run_application(app, project, asset, task, tools, arguments)


-@main.command()
-@click.option("-p", "--path", help="Path to zip file", default=None)
-def generate_zip(path):
-    """Generate Pype zip from current sources.
-
-    If PATH is not provided, it will create zip file in user data dir.
-
-    """
-    PypeCommands().generate_zip(path)
-
-
 @main.command(
     context_settings=dict(
         ignore_unknown_options=True,
@@ -4,14 +4,15 @@ from openpype.lib import PreLaunchHook

 class PrePython2Vendor(PreLaunchHook):
     """Prepend python 2 dependencies for py2 hosts."""
+    # WARNING This hook will probably be deprecated in OpenPype 3 - kept for test
     order = 10
     app_groups = ["hiero", "nuke", "nukex", "unreal"]

     def execute(self):
         if not self.application.use_python_2:
             return

         # Prepare vendor dir path
         self.log.info("adding global python 2 vendor")
-        pype_root = os.getenv("OPENPYPE_ROOT")
+        pype_root = os.getenv("OPENPYPE_REPOS_ROOT")
         python_2_vendor = os.path.join(
             pype_root,
             "openpype",
@@ -5,7 +5,7 @@ import logging
 from avalon import io
 from avalon import api as avalon
 from avalon.vendor import Qt
-from openpype import lib
+from openpype import lib, api
 import pyblish.api as pyblish
 import openpype.hosts.aftereffects
@ -81,3 +81,69 @@ def uninstall():
|
|||
def on_pyblish_instance_toggled(instance, old_value, new_value):
|
||||
"""Toggle layer visibility on instance toggles."""
|
||||
instance[0].Visible = new_value
|
||||
|
||||
|
||||
def get_asset_settings():
|
||||
"""Get settings on current asset from database.
|
||||
|
||||
Returns:
|
||||
dict: Scene data.
|
||||
|
||||
"""
|
||||
asset_data = lib.get_asset()["data"]
|
||||
fps = asset_data.get("fps")
|
||||
frame_start = asset_data.get("frameStart")
|
||||
frame_end = asset_data.get("frameEnd")
|
||||
handle_start = asset_data.get("handleStart")
|
||||
handle_end = asset_data.get("handleEnd")
|
||||
resolution_width = asset_data.get("resolutionWidth")
|
||||
resolution_height = asset_data.get("resolutionHeight")
|
||||
duration = (frame_end - frame_start + 1) + handle_start + handle_end
|
||||
entity_type = asset_data.get("entityType")
|
||||
|
||||
scene_data = {
|
||||
"fps": fps,
|
||||
"frameStart": frame_start,
|
||||
"frameEnd": frame_end,
|
||||
"handleStart": handle_start,
|
||||
"handleEnd": handle_end,
|
||||
"resolutionWidth": resolution_width,
|
||||
"resolutionHeight": resolution_height,
|
||||
"duration": duration
|
||||
}
|
||||
|
||||
try:
|
||||
# temporary, in pype3 replace with api.get_current_project_settings
|
||||
skip_resolution_check = (
|
||||
api.get_current_project_settings()
|
||||
["plugins"]
|
||||
["aftereffects"]
|
||||
["publish"]
|
||||
["ValidateSceneSettings"]
|
||||
["skip_resolution_check"]
|
||||
)
|
||||
skip_timelines_check = (
|
||||
api.get_current_project_settings()
|
||||
["plugins"]
|
||||
["aftereffects"]
|
||||
["publish"]
|
||||
["ValidateSceneSettings"]
|
||||
["skip_timelines_check"]
|
||||
)
|
||||
except KeyError:
|
||||
skip_resolution_check = ['*']
|
||||
skip_timelines_check = ['*']
|
||||
|
||||
if os.getenv('AVALON_TASK') in skip_resolution_check or \
|
||||
'*' in skip_timelines_check:
|
||||
scene_data.pop("resolutionWidth")
|
||||
scene_data.pop("resolutionHeight")
|
||||
|
||||
if entity_type in skip_timelines_check or '*' in skip_timelines_check:
|
||||
scene_data.pop('fps', None)
|
||||
scene_data.pop('frameStart', None)
|
||||
scene_data.pop('frameEnd', None)
|
||||
scene_data.pop('handleStart', None)
|
||||
scene_data.pop('handleEnd', None)
|
||||
|
||||
return scene_data
|
||||
|
|
|
|||
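The `duration` arithmetic in `get_asset_settings` above is worth spelling out: the frame range is inclusive, so one extra frame is added, and handles extend both ends. A minimal sketch of the same formula (function name is mine):

```python
def clip_duration(frame_start, frame_end, handle_start, handle_end):
    # Same arithmetic as `duration` in get_asset_settings above:
    # an inclusive frame range plus handles on both sides.
    return (frame_end - frame_start + 1) + handle_start + handle_end


# e.g. frames 1001-1100 with 10-frame handles on each side
print(clip_duration(1001, 1100, 10, 10))  # 120
```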
@@ -12,6 +12,7 @@ class AERenderInstance(RenderInstance):
     # extend generic, composition name is needed
     comp_name = attr.ib(default=None)
     comp_id = attr.ib(default=None)
+    fps = attr.ib(default=None)


 class CollectAERender(abstract_collect_render.AbstractCollectRender):

@@ -45,6 +46,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
             raise ValueError("Couldn't find id, unable to publish. " +
                              "Please recreate instance.")
         item_id = inst["members"][0]
+
         work_area_info = self.stub.get_work_area(int(item_id))

         if not work_area_info:

@@ -57,6 +59,8 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
         frameEnd = round(work_area_info.workAreaStart +
                          float(work_area_info.workAreaDuration) *
                          float(work_area_info.frameRate)) - 1
+        fps = work_area_info.frameRate
+        # TODO add resolution when supported by extension

         if inst["family"] == "render" and inst["active"]:
             instance = AERenderInstance(

@@ -86,7 +90,8 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
                 frameStart=frameStart,
                 frameEnd=frameEnd,
                 frameStep=1,
-                toBeRenderedOn='deadline'
+                toBeRenderedOn='deadline',
+                fps=fps
             )

             comp = compositions_by_id.get(int(item_id))

@@ -102,7 +107,6 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):

             instances.append(instance)

-        self.log.debug("instances::{}".format(instances))
         return instances

     def get_expected_files(self, render_instance):
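The `frameEnd` expression in the hunk above converts After Effects' work area (a start frame plus a duration in seconds) into an inclusive end frame. A small sketch of that conversion, with my own function name:

```python
def work_area_frame_end(work_area_start, duration_seconds, frame_rate):
    # frameEnd computation from the hunk above: start frame plus the
    # work-area duration converted to frames, minus one (inclusive end).
    return round(work_area_start
                 + float(duration_seconds) * float(frame_rate)) - 1


# a 4-second work area at 25 fps starting on frame 1001 ends on 1100
print(work_area_frame_end(1001, 4.0, 25.0))  # 1100
```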
@@ -0,0 +1,110 @@
+# -*- coding: utf-8 -*-
+"""Validate scene settings."""
+import os
+
+import pyblish.api
+
+from avalon import aftereffects
+
+import openpype.hosts.aftereffects.api as api
+
+stub = aftereffects.stub()
+
+
+class ValidateSceneSettings(pyblish.api.InstancePlugin):
+    """
+    Ensures that Composition Settings (right mouse on comp) are same as
+    in FTrack on task.
+
+    By default checks only duration - how many frames should be rendered.
+    Compares:
+        Frame start - Frame end + 1 from FTrack
+            against
+        Duration in Composition Settings.
+
+    If this complains:
+        Check error message where is discrepancy.
+        Check FTrack task 'pype' section of task attributes for expected
+        values.
+        Check/modify rendered Composition Settings.
+
+    If you know what you are doing run publishing again, uncheck this
+    validation before Validation phase.
+    """
+
+    """
+    Dev docu:
+    Could be configured by 'presets/plugins/aftereffects/publish'
+
+    skip_timelines_check - fill task name for which skip validation of
+        frameStart
+        frameEnd
+        fps
+        handleStart
+        handleEnd
+    skip_resolution_check - fill entity type ('asset') to skip validation
+        resolutionWidth
+        resolutionHeight
+        TODO support in extension is missing for now
+
+    By defaults validates duration (how many frames should be published)
+    """
+
+    order = pyblish.api.ValidatorOrder
+    label = "Validate Scene Settings"
+    families = ["render.farm"]
+    hosts = ["aftereffects"]
+    optional = True
+
+    skip_timelines_check = ["*"]  # * >> skip for all
+    skip_resolution_check = ["*"]
+
+    def process(self, instance):
+        """Plugin entry point."""
+        expected_settings = api.get_asset_settings()
+        self.log.info("expected_settings::{}".format(expected_settings))
+
+        # handle case where ftrack uses only two decimal places
+        # 23.976023976023978 vs. 23.98
+        fps = instance.data.get("fps")
+        if fps:
+            if isinstance(fps, float):
+                fps = float(
+                    "{:.2f}".format(fps))
+            expected_settings["fps"] = fps
+
+        duration = instance.data.get("frameEndHandle") - \
+            instance.data.get("frameStartHandle") + 1
+
+        current_settings = {
+            "fps": fps,
+            "frameStartHandle": instance.data.get("frameStartHandle"),
+            "frameEndHandle": instance.data.get("frameEndHandle"),
+            "resolutionWidth": instance.data.get("resolutionWidth"),
+            "resolutionHeight": instance.data.get("resolutionHeight"),
+            "duration": duration
+        }
+        self.log.info("current_settings:: {}".format(current_settings))
+
+        invalid_settings = []
+        for key, value in expected_settings.items():
+            if value != current_settings[key]:
+                invalid_settings.append(
+                    "{} expected: {} found: {}".format(key, value,
+                                                       current_settings[key])
+                )
+
+        if ((expected_settings.get("handleStart")
+                or expected_settings.get("handleEnd"))
+                and invalid_settings):
+            msg = "Handles included in calculation. Remove handles in DB " +\
+                  "or extend frame range in Composition Setting."
+            invalid_settings[-1]["reason"] = msg
+
+        msg = "Found invalid settings:\n{}".format(
+            "\n".join(invalid_settings)
+        )
+        assert not invalid_settings, msg
+        assert os.path.exists(instance.data.get("source")), (
+            "Scene file not found (saved under wrong name)"
+        )
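The fps normalisation in `ValidateSceneSettings` above exists because FTrack stores fps with two decimals while the host reports the full float (23.976023976023978 vs. 23.98). The `float("{:.2f}".format(...))` round-trip makes both sides comparable:

```python
fps = 23.976023976023978   # fps as reported by the host application
ftrack_fps = 23.98         # value stored by FTrack with two decimals

# Same normalisation as in ValidateSceneSettings above: collapse the
# host's float fps to two decimal places before comparing.
rounded = float("{:.2f}".format(fps))
print(rounded == ftrack_fps)  # True
```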
@@ -9,7 +9,7 @@ from avalon import api
 import avalon.blender
 from openpype.api import PypeCreatorMixin

-VALID_EXTENSIONS = [".blend", ".json"]
+VALID_EXTENSIONS = [".blend", ".json", ".abc"]


 def asset_name(
@@ -1,4 +1,5 @@
 import os
+import re
 import subprocess
 from openpype.lib import PreLaunchHook

@@ -31,10 +32,46 @@ class InstallPySideToBlender(PreLaunchHook):

     def inner_execute(self):
         # Get blender's python directory
+        version_regex = re.compile(r"^2\.[0-9]{2}$")
+
         executable = self.launch_context.executable.executable_path
-        # Blender installation contain subfolder named with it's version where
-        # python binaries are stored.
-        version_subfolder = self.launch_context.app_name.split("_")[1]
+        if os.path.basename(executable).lower() != "blender.exe":
+            self.log.info((
+                "Executable does not lead to blender.exe file. Can't determine"
+                " blender's python to check/install PySide2."
+            ))
+            return
+
+        executable_dir = os.path.dirname(executable)
+        version_subfolders = []
+        for name in os.listdir(executable_dir):
+            fullpath = os.path.join(name, executable_dir)
+            if not os.path.isdir(fullpath):
+                continue
+
+            if not version_regex.match(name):
+                continue
+
+            version_subfolders.append(name)
+
+        if not version_subfolders:
+            self.log.info(
+                "Didn't find version subfolder next to Blender executable"
+            )
+            return
+
+        if len(version_subfolders) > 1:
+            self.log.info((
+                "Found more than one version subfolder next"
+                " to blender executable. {}"
+            ).format(", ".join([
+                '"./{}"'.format(name)
+                for name in version_subfolders
+            ])))
+            return
+
+        version_subfolder = version_subfolders[0]
+
         pythond_dir = os.path.join(
             os.path.dirname(executable),
             version_subfolder,

@@ -65,6 +102,7 @@ class InstallPySideToBlender(PreLaunchHook):

         # Check if PySide2 is installed and skip if yes
         if self.is_pyside_installed(python_executable):
+            self.log.debug("Blender has already installed PySide2.")
             return

         # Install PySide2 in blender's python
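The hook above replaces parsing the app name with scanning for Blender's version folder on disk. The regex it uses accepts exactly a `2.` followed by two digits, which covers the Blender 2.8x/2.9x series this code targets:

```python
import re

# The version-folder pattern from the hook above: matches Blender 2.xx
# directory names like "2.83" or "2.91", and nothing else.
version_regex = re.compile(r"^2\.[0-9]{2}$")

for name in ("2.83", "2.91", "2.8", "3.0", "config"):
    print(name, bool(version_regex.match(name)))
```

Note it deliberately rejects `2.8` (one digit) and anything outside the 2.xx series; a Blender 3.x layout would need a different pattern.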
35 openpype/hosts/blender/plugins/create/create_pointcache.py Normal file
@@ -0,0 +1,35 @@
+"""Create a pointcache asset."""
+
+import bpy
+
+from avalon import api
+from avalon.blender import lib
+import openpype.hosts.blender.api.plugin
+
+
+class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):
+    """Polygonal static geometry"""
+
+    name = "pointcacheMain"
+    label = "Point Cache"
+    family = "pointcache"
+    icon = "gears"
+
+    def process(self):
+
+        asset = self.data["asset"]
+        subset = self.data["subset"]
+        name = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
+        collection = bpy.data.collections.new(name=name)
+        bpy.context.scene.collection.children.link(collection)
+        self.data['task'] = api.Session.get('AVALON_TASK')
+        lib.imprint(collection, self.data)
+
+        if (self.options or {}).get("useSelection"):
+            objects = lib.get_selection()
+            for obj in objects:
+                collection.objects.link(obj)
+                if obj.type == 'EMPTY':
+                    objects.extend(obj.children)
+
+        return collection
246 openpype/hosts/blender/plugins/load/load_abc.py Normal file
@@ -0,0 +1,246 @@
+"""Load an asset in Blender from an Alembic file."""
+
+from pathlib import Path
+from pprint import pformat
+from typing import Dict, List, Optional
+
+from avalon import api, blender
+import bpy
+import openpype.hosts.blender.api.plugin as plugin
+
+
+class CacheModelLoader(plugin.AssetLoader):
+    """Load cache models.
+
+    Stores the imported asset in a collection named after the asset.
+
+    Note:
+        At least for now it only supports Alembic files.
+    """
+
+    families = ["model", "pointcache"]
+    representations = ["abc"]
+
+    label = "Link Alembic"
+    icon = "code-fork"
+    color = "orange"
+
+    def _remove(self, objects, container):
+        for obj in list(objects):
+            if obj.type == 'MESH':
+                bpy.data.meshes.remove(obj.data)
+            elif obj.type == 'EMPTY':
+                bpy.data.objects.remove(obj)
+
+        bpy.data.collections.remove(container)
+
+    def _process(self, libpath, container_name, parent_collection):
+        bpy.ops.object.select_all(action='DESELECT')
+
+        view_layer = bpy.context.view_layer
+        view_layer_collection = view_layer.active_layer_collection.collection
+
+        relative = bpy.context.preferences.filepaths.use_relative_paths
+        bpy.ops.wm.alembic_import(
+            filepath=libpath,
+            relative_path=relative
+        )
+
+        parent = parent_collection
+
+        if parent is None:
+            parent = bpy.context.scene.collection
+
+        model_container = bpy.data.collections.new(container_name)
+        parent.children.link(model_container)
+        for obj in bpy.context.selected_objects:
+            model_container.objects.link(obj)
+            view_layer_collection.objects.unlink(obj)
+
+            name = obj.name
+            obj.name = f"{name}:{container_name}"
+
+            # Groups are imported as Empty objects in Blender
+            if obj.type == 'MESH':
+                data_name = obj.data.name
+                obj.data.name = f"{data_name}:{container_name}"
+
+            if not obj.get(blender.pipeline.AVALON_PROPERTY):
+                obj[blender.pipeline.AVALON_PROPERTY] = dict()
+
+            avalon_info = obj[blender.pipeline.AVALON_PROPERTY]
+            avalon_info.update({"container_name": container_name})
+
+        bpy.ops.object.select_all(action='DESELECT')
+
+        return model_container
+
+    def process_asset(
+        self, context: dict, name: str, namespace: Optional[str] = None,
+        options: Optional[Dict] = None
+    ) -> Optional[List]:
+        """
+        Arguments:
+            name: Use pre-defined name
+            namespace: Use pre-defined namespace
+            context: Full parenthood of representation to load
+            options: Additional settings dictionary
+        """
+
+        libpath = self.fname
+        asset = context["asset"]["name"]
+        subset = context["subset"]["name"]
+
+        lib_container = plugin.asset_name(
+            asset, subset
+        )
+        unique_number = plugin.get_unique_number(
+            asset, subset
+        )
+        namespace = namespace or f"{asset}_{unique_number}"
+        container_name = plugin.asset_name(
+            asset, subset, unique_number
+        )
+
+        container = bpy.data.collections.new(lib_container)
+        container.name = container_name
+        blender.pipeline.containerise_existing(
+            container,
+            name,
+            namespace,
+            context,
+            self.__class__.__name__,
+        )
+
+        container_metadata = container.get(
+            blender.pipeline.AVALON_PROPERTY)
+
+        container_metadata["libpath"] = libpath
+        container_metadata["lib_container"] = lib_container
+
+        obj_container = self._process(
+            libpath, container_name, None)
+
+        container_metadata["obj_container"] = obj_container
+
+        # Save the list of objects in the metadata container
+        container_metadata["objects"] = obj_container.all_objects
+
+        nodes = list(container.objects)
+        nodes.append(container)
+        self[:] = nodes
+        return nodes
+
+    def update(self, container: Dict, representation: Dict):
+        """Update the loaded asset.
+
+        This will remove all objects of the current collection, load the new
+        ones and add them to the collection.
+        If the objects of the collection are used in another collection they
+        will not be removed, only unlinked. Normally this should not be the
+        case though.
+
+        Warning:
+            No nested collections are supported at the moment!
+        """
+        collection = bpy.data.collections.get(
+            container["objectName"]
+        )
+        libpath = Path(api.get_representation_path(representation))
+        extension = libpath.suffix.lower()
+
+        self.log.info(
+            "Container: %s\nRepresentation: %s",
+            pformat(container, indent=2),
+            pformat(representation, indent=2),
+        )
+
+        assert collection, (
+            f"The asset is not loaded: {container['objectName']}"
+        )
+        assert not (collection.children), (
+            "Nested collections are not supported."
+        )
+        assert libpath, (
+            "No existing library file found for {container['objectName']}"
+        )
+        assert libpath.is_file(), (
+            f"The file doesn't exist: {libpath}"
+        )
+        assert extension in plugin.VALID_EXTENSIONS, (
+            f"Unsupported file: {libpath}"
+        )
+
+        collection_metadata = collection.get(
+            blender.pipeline.AVALON_PROPERTY)
+        collection_libpath = collection_metadata["libpath"]
+
+        obj_container = plugin.get_local_collection_with_name(
+            collection_metadata["obj_container"].name
+        )
+        objects = obj_container.all_objects
+
+        container_name = obj_container.name
+
+        normalized_collection_libpath = (
+            str(Path(bpy.path.abspath(collection_libpath)).resolve())
+        )
+        normalized_libpath = (
+            str(Path(bpy.path.abspath(str(libpath))).resolve())
+        )
+        self.log.debug(
+            "normalized_collection_libpath:\n %s\nnormalized_libpath:\n %s",
+            normalized_collection_libpath,
+            normalized_libpath,
+        )
+        if normalized_collection_libpath == normalized_libpath:
+            self.log.info("Library already loaded, not updating...")
+            return
+
+        parent = plugin.get_parent_collection(obj_container)
+
+        self._remove(objects, obj_container)
+
+        obj_container = self._process(
+            str(libpath), container_name, parent)
+
+        collection_metadata["obj_container"] = obj_container
+        collection_metadata["objects"] = obj_container.all_objects
+        collection_metadata["libpath"] = str(libpath)
+        collection_metadata["representation"] = str(representation["_id"])
+
+    def remove(self, container: Dict) -> bool:
+        """Remove an existing container from a Blender scene.
+
+        Arguments:
+            container (openpype:container-1.0): Container to remove,
+                from `host.ls()`.
+
+        Returns:
+            bool: Whether the container was deleted.
+
+        Warning:
+            No nested collections are supported at the moment!
+        """
+        collection = bpy.data.collections.get(
+            container["objectName"]
+        )
+        if not collection:
+            return False
+        assert not (collection.children), (
+            "Nested collections are not supported."
+        )
+
+        collection_metadata = collection.get(
+            blender.pipeline.AVALON_PROPERTY)
+
+        obj_container = plugin.get_local_collection_with_name(
+            collection_metadata["obj_container"].name
+        )
+        objects = obj_container.all_objects
+
+        self._remove(objects, obj_container)
+
+        bpy.data.collections.remove(collection)
+
+        return True
@@ -242,65 +242,3 @@ class BlendModelLoader(plugin.AssetLoader):
         bpy.data.collections.remove(collection)

         return True
-
-
-class CacheModelLoader(plugin.AssetLoader):
-    """Load cache models.
-
-    Stores the imported asset in a collection named after the asset.
-
-    Note:
-        At least for now it only supports Alembic files.
-    """
-
-    families = ["model"]
-    representations = ["abc"]
-
-    label = "Link Model"
-    icon = "code-fork"
-    color = "orange"
-
-    def process_asset(
-        self, context: dict, name: str, namespace: Optional[str] = None,
-        options: Optional[Dict] = None
-    ) -> Optional[List]:
-        """
-        Arguments:
-            name: Use pre-defined name
-            namespace: Use pre-defined namespace
-            context: Full parenthood of representation to load
-            options: Additional settings dictionary
-        """
-        raise NotImplementedError(
-            "Loading of Alembic files is not yet implemented.")
-        # TODO (jasper): implement Alembic import.
-
-        libpath = self.fname
-        asset = context["asset"]["name"]
-        subset = context["subset"]["name"]
-        # TODO (jasper): evaluate use of namespace which is 'alien' to Blender.
-        lib_container = container_name = (
-            plugin.asset_name(asset, subset, namespace)
-        )
-        relative = bpy.context.preferences.filepaths.use_relative_paths
-
-        with bpy.data.libraries.load(
-            libpath, link=True, relative=relative
-        ) as (data_from, data_to):
-            data_to.collections = [lib_container]
-
-        scene = bpy.context.scene
-        instance_empty = bpy.data.objects.new(
-            container_name, None
-        )
-        scene.collection.objects.link(instance_empty)
-        instance_empty.instance_type = 'COLLECTION'
-        collection = bpy.data.collections[lib_container]
-        collection.name = container_name
-        instance_empty.instance_collection = collection
-
-        nodes = list(collection.objects)
-        nodes.append(collection)
-        nodes.append(instance_empty)
-        self[:] = nodes
-        return nodes
@@ -11,14 +11,14 @@ class ExtractABC(openpype.api.Extractor):

     label = "Extract ABC"
     hosts = ["blender"]
-    families = ["model"]
+    families = ["model", "pointcache"]
     optional = True

     def process(self, instance):
         # Define extract output file path

         stagingdir = self.staging_dir(instance)
-        filename = f"{instance.name}.fbx"
+        filename = f"{instance.name}.abc"
         filepath = os.path.join(stagingdir, filename)

         context = bpy.context

@@ -52,6 +52,8 @@ class ExtractABC(openpype.api.Extractor):

         old_scale = scene.unit_settings.scale_length

+        bpy.ops.object.select_all(action='DESELECT')
+
         selected = list()

         for obj in instance:

@@ -67,14 +69,11 @@ class ExtractABC(openpype.api.Extractor):
         # We set the scale of the scene for the export
         scene.unit_settings.scale_length = 0.01

-        self.log.info(new_context)
-
         # We export the abc
         bpy.ops.wm.alembic_export(
             new_context,
             filepath=filepath,
-            start=1,
-            end=1
+            selected=True
         )

         view_layer.active_layer_collection = old_active_layer_collection
@@ -22,6 +22,7 @@ from .pipeline import (
 )

 from .lib import (
+    pype_tag_name,
     get_track_items,
     get_current_project,
     get_current_sequence,

@@ -73,6 +74,7 @@ __all__ = [
     "work_root",

     # Lib functions
+    "pype_tag_name",
     "get_track_items",
     "get_current_project",
     "get_current_sequence",
@@ -2,7 +2,12 @@ import os
 import hiero.core.events
 import avalon.api as avalon
 from openpype.api import Logger
-from .lib import sync_avalon_data_to_workfile, launch_workfiles_app
+from .lib import (
+    sync_avalon_data_to_workfile,
+    launch_workfiles_app,
+    selection_changed_timeline,
+    before_project_save
+)
 from .tags import add_tags_to_workfile
 from .menu import update_menu_task_label

@@ -78,7 +83,7 @@ def register_hiero_events():
         "Registering events for: kBeforeNewProjectCreated, "
         "kAfterNewProjectCreated, kBeforeProjectLoad, kAfterProjectLoad, "
         "kBeforeProjectSave, kAfterProjectSave, kBeforeProjectClose, "
-        "kAfterProjectClose, kShutdown, kStartup"
+        "kAfterProjectClose, kShutdown, kStartup, kSelectionChanged"
     )

     # hiero.core.events.registerInterest(

@@ -91,8 +96,8 @@ def register_hiero_events():
     hiero.core.events.registerInterest(
         "kAfterProjectLoad", afterProjectLoad)

-    # hiero.core.events.registerInterest(
-    #     "kBeforeProjectSave", beforeProjectSaved)
+    hiero.core.events.registerInterest(
+        "kBeforeProjectSave", before_project_save)
     # hiero.core.events.registerInterest(
     #     "kAfterProjectSave", afterProjectSaved)
     #

@@ -104,10 +109,16 @@ def register_hiero_events():
     # hiero.core.events.registerInterest("kShutdown", shutDown)
     # hiero.core.events.registerInterest("kStartup", startupCompleted)

-    # workfiles
-    hiero.core.events.registerEventType("kStartWorkfiles")
-    hiero.core.events.registerInterest("kStartWorkfiles", launch_workfiles_app)
+    hiero.core.events.registerInterest(
+        ("kSelectionChanged", "kTimeline"), selection_changed_timeline)
+
+    # workfiles
+    try:
+        hiero.core.events.registerEventType("kStartWorkfiles")
+        hiero.core.events.registerInterest(
+            "kStartWorkfiles", launch_workfiles_app)
+    except RuntimeError:
+        pass
+

 def register_events():
     """
|||
|
|
@@ -9,7 +9,7 @@ import hiero
 import avalon.api as avalon
 import avalon.io
 from avalon.vendor.Qt import QtWidgets
-from openpype.api import (Logger, Anatomy, config)
+from openpype.api import (Logger, Anatomy, get_anatomy_settings)
 from . import tags
 import shutil
 from compiler.ast import flatten
@@ -30,9 +30,9 @@ self = sys.modules[__name__]
 self._has_been_setup = False
 self._has_menu = False
 self._registered_gui = None
-self.pype_tag_name = "Pype Data"
-self.default_sequence_name = "PypeSequence"
-self.default_bin_name = "PypeBin"
+self.pype_tag_name = "openpypeData"
+self.default_sequence_name = "openpypeSequence"
+self.default_bin_name = "openpypeBin"

 AVALON_CONFIG = os.getenv("AVALON_CONFIG", "pype")

@@ -150,15 +150,27 @@ def get_track_items(

     # get selected track items or all in active sequence
     if selected:
-        selected_items = list(hiero.selection)
-        for item in selected_items:
-            if track_name and track_name in item.parent().name():
-                # filter only items fitting input track name
-                track_items.append(item)
-            elif not track_name:
-                # or add all if no track_name was defined
-                track_items.append(item)
-    else:
+        try:
+            selected_items = list(hiero.selection)
+            for item in selected_items:
+                if track_name and track_name in item.parent().name():
+                    # filter only items fitting input track name
+                    track_items.append(item)
+                elif not track_name:
+                    # or add all if no track_name was defined
+                    track_items.append(item)
+        except AttributeError:
+            pass
+
+    # check if any collected track items are
+    # `core.Hiero.Python.TrackItem` instance
+    if track_items:
+        any_track_item = track_items[0]
+        if not isinstance(any_track_item, hiero.core.TrackItem):
+            selected_items = []
+
+    # collect all available active sequence track items
+    if not track_items:
         sequence = get_current_sequence(name=sequence_name)
         # get all available tracks from sequence
         tracks = list(sequence.audioTracks()) + list(sequence.videoTracks())
@@ -240,7 +252,7 @@ def set_track_item_pype_tag(track_item, data=None):
     # basic Tag's attribute
     tag_data = {
         "editable": "0",
-        "note": "Pype data holder",
+        "note": "OpenPype data container",
         "icon": "openpype_icon.png",
         "metadata": {k: v for k, v in data.items()}
     }
@@ -744,10 +756,13 @@ def _set_hrox_project_knobs(doc, **knobs):
     # set attributes to Project Tag
     proj_elem = doc.documentElement().firstChildElement("Project")
     for k, v in knobs.items():
-        proj_elem.setAttribute(k, v)
+        if isinstance(v, dict):
+            continue
+        proj_elem.setAttribute(str(k), v)


 def apply_colorspace_project():
+    project_name = os.getenv("AVALON_PROJECT")
     # get path the the active projects
     project = get_current_project(remove_untitled=True)
     current_file = project.path()
@@ -756,9 +771,9 @@ def apply_colorspace_project():
     project.close()

     # get presets for hiero
-    presets = config.get_init_presets()
-    colorspace = presets["colorspace"]
-    hiero_project_clrs = colorspace.get("hiero", {}).get("project", {})
+    imageio = get_anatomy_settings(
+        project_name)["imageio"].get("hiero", None)
+    presets = imageio.get("workfile")

     # save the workfile as subversion "comment:_colorspaceChange"
     split_current_file = os.path.splitext(current_file)
@@ -789,13 +804,13 @@ def apply_colorspace_project():
     os.remove(copy_current_file_tmp)

     # use the code from bellow for changing xml hrox Attributes
-    hiero_project_clrs.update({"name": os.path.basename(copy_current_file)})
+    presets.update({"name": os.path.basename(copy_current_file)})

     # read HROX in as QDomSocument
     doc = _read_doc_from_path(copy_current_file)

     # apply project colorspace properties
-    _set_hrox_project_knobs(doc, **hiero_project_clrs)
+    _set_hrox_project_knobs(doc, **presets)

     # write QDomSocument back as HROX
     _write_doc_to_path(doc, copy_current_file)
@@ -805,14 +820,17 @@ def apply_colorspace_project():


 def apply_colorspace_clips():
+    project_name = os.getenv("AVALON_PROJECT")
     project = get_current_project(remove_untitled=True)
     clips = project.clips()

     # get presets for hiero
-    presets = config.get_init_presets()
-    colorspace = presets["colorspace"]
-    hiero_clips_clrs = colorspace.get("hiero", {}).get("clips", {})
+    imageio = get_anatomy_settings(
+        project_name)["imageio"].get("hiero", None)
+    from pprint import pprint
+
+    presets = imageio.get("regexInputs", {}).get("inputs", {})
+    pprint(presets)
     for clip in clips:
         clip_media_source_path = clip.mediaSource().firstpath()
         clip_name = clip.name()
@@ -822,10 +840,11 @@ def apply_colorspace_clips():
             continue

         # check if any colorspace presets for read is mathing
-        preset_clrsp = next((hiero_clips_clrs[k]
-                             for k in hiero_clips_clrs
-                             if bool(re.search(k, clip_media_source_path))),
-                            None)
+        preset_clrsp = None
+        for k in presets:
+            if not bool(re.search(k["regex"], clip_media_source_path)):
+                continue
+            preset_clrsp = k["colorspace"]

         if preset_clrsp:
             log.debug("Changing clip.path: {}".format(clip_media_source_path))
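The rewritten lookup above iterates a list of `{"regex": ..., "colorspace": ...}` preset entries instead of a dict keyed by regex. A minimal standalone sketch of that matching logic (the file paths and preset values below are illustrative, not taken from the settings schema):

```python
import re


def match_colorspace(path, presets):
    """Return the colorspace of the first preset whose regex matches path,
    or None when nothing matches."""
    for preset in presets:
        if re.search(preset["regex"], path):
            return preset["colorspace"]
    return None


presets = [
    {"regex": r"\.exr$", "colorspace": "linear"},
    {"regex": r"\.mov$", "colorspace": "Output - Rec.709"},
]
print(match_colorspace("/shots/sh010/plate.exr", presets))  # linear
```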
@@ -893,3 +912,61 @@ def get_sequence_pattern_and_padding(file):
         return found, padding
     else:
         return None, None
+
+
+def sync_clip_name_to_data_asset(track_items_list):
+    # loop trough all selected clips
+    for track_item in track_items_list:
+        # ignore if parent track is locked or disabled
+        if track_item.parent().isLocked():
+            continue
+        if not track_item.parent().isEnabled():
+            continue
+        # ignore if the track item is disabled
+        if not track_item.isEnabled():
+            continue
+
+        # get name and data
+        ti_name = track_item.name()
+        data = get_track_item_pype_data(track_item)
+
+        # ignore if no data on the clip or not publish instance
+        if not data:
+            continue
+        if data.get("id") != "pyblish.avalon.instance":
+            continue
+
+        # fix data if wrong name
+        if data["asset"] != ti_name:
+            data["asset"] = ti_name
+            # remove the original tag
+            tag = get_track_item_pype_tag(track_item)
+            track_item.removeTag(tag)
+            # create new tag with updated data
+            set_track_item_pype_tag(track_item, data)
+            print("asset was changed in clip: {}".format(ti_name))
+
+
+def selection_changed_timeline(event):
+    """Callback on timeline to check if asset in data is the same as clip name.
+
+    Args:
+        event (hiero.core.Event): timeline event
+    """
+    timeline_editor = event.sender
+    selection = timeline_editor.selection()
+
+    # run checking function
+    sync_clip_name_to_data_asset(selection)
+
+
+def before_project_save(event):
+    track_items = get_track_items(
+        selected=False,
+        track_type="video",
+        check_enabled=True,
+        check_locked=True,
+        check_tagged=True)
+
+    # run checking function
+    sync_clip_name_to_data_asset(track_items)
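The new `sync_clip_name_to_data_asset` above refreshes the `asset` field in a clip's publish tag whenever the clip was renamed after the tag was attached. The core decision can be sketched on plain dicts, independent of Hiero objects (function name and dict shape are illustrative):

```python
def sync_name_to_data(clip_name, data):
    """Return updated tag data when the clip name no longer matches the
    stored asset; None means nothing needs to change."""
    # only publish instances carry an asset that must track the clip name
    if not data or data.get("id") != "pyblish.avalon.instance":
        return None
    if data.get("asset") == clip_name:
        return None
    updated = dict(data)
    updated["asset"] = clip_name
    return updated


data = {"id": "pyblish.avalon.instance", "asset": "sh010"}
print(sync_name_to_data("sh010_v2", data))
```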
@@ -68,50 +68,45 @@ def menu_install():

     menu.addSeparator()

-    workfiles_action = menu.addAction("Work Files...")
+    workfiles_action = menu.addAction("Work Files ...")
     workfiles_action.setIcon(QtGui.QIcon("icons:Position.png"))
     workfiles_action.triggered.connect(launch_workfiles_app)

-    default_tags_action = menu.addAction("Create Default Tags...")
+    default_tags_action = menu.addAction("Create Default Tags")
     default_tags_action.setIcon(QtGui.QIcon("icons:Position.png"))
     default_tags_action.triggered.connect(tags.add_tags_to_workfile)

     menu.addSeparator()

-    publish_action = menu.addAction("Publish...")
+    publish_action = menu.addAction("Publish ...")
     publish_action.setIcon(QtGui.QIcon("icons:Output.png"))
     publish_action.triggered.connect(
         lambda *args: publish(hiero.ui.mainWindow())
     )

-    creator_action = menu.addAction("Create...")
+    creator_action = menu.addAction("Create ...")
     creator_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
     creator_action.triggered.connect(creator.show)

-    loader_action = menu.addAction("Load...")
+    loader_action = menu.addAction("Load ...")
     loader_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
     loader_action.triggered.connect(cbloader.show)

-    sceneinventory_action = menu.addAction("Manage...")
+    sceneinventory_action = menu.addAction("Manage ...")
     sceneinventory_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
     sceneinventory_action.triggered.connect(sceneinventory.show)
     menu.addSeparator()

-    reload_action = menu.addAction("Reload pipeline...")
-    reload_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
-    reload_action.triggered.connect(reload_config)
+    if os.getenv("OPENPYPE_DEVELOP"):
+        reload_action = menu.addAction("Reload pipeline")
+        reload_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
+        reload_action.triggered.connect(reload_config)

     menu.addSeparator()
-    apply_colorspace_p_action = menu.addAction("Apply Colorspace Project...")
+    apply_colorspace_p_action = menu.addAction("Apply Colorspace Project")
     apply_colorspace_p_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
     apply_colorspace_p_action.triggered.connect(apply_colorspace_project)

-    apply_colorspace_c_action = menu.addAction("Apply Colorspace Clips...")
+    apply_colorspace_c_action = menu.addAction("Apply Colorspace Clips")
     apply_colorspace_c_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
     apply_colorspace_c_action.triggered.connect(apply_colorspace_clips)

     self.context_label_action = context_label_action
     self.workfile_actions = workfiles_action
     self.default_tags_action = default_tags_action
     self.publish_action = publish_action
     self.reload_action = reload_action

@@ -4,10 +4,10 @@ import hiero
 from Qt import QtWidgets, QtCore
 from avalon.vendor import qargparse
 import avalon.api as avalon
-import openpype.api as pype
+import openpype.api as openpype
 from . import lib

-log = pype.Logger().get_logger(__name__)
+log = openpype.Logger().get_logger(__name__)


 def load_stylesheet():
@@ -266,7 +266,8 @@ class CreatorWidget(QtWidgets.QDialog):
             elif v["type"] == "QSpinBox":
                 data[k]["value"] = self.create_row(
                     content_layout, "QSpinBox", v["label"],
-                    setValue=v["value"], setMaximum=10000, setToolTip=tool_tip)
+                    setValue=v["value"], setMinimum=0,
+                    setMaximum=100000, setToolTip=tool_tip)
         return data
@@ -387,7 +388,8 @@ class ClipLoader:
         # try to get value from options or evaluate key value for `load_to`
         self.new_sequence = options.get("newSequence") or bool(
             "New timeline" in options.get("load_to", ""))

+        self.clip_name_template = options.get(
+            "clipNameTemplate") or "{asset}_{subset}_{representation}"
         assert self._populate_data(), str(
             "Cannot Load selected data, look into database "
             "or call your supervisor")
@@ -432,7 +434,7 @@ class ClipLoader:
         asset = str(repr_cntx["asset"])
         subset = str(repr_cntx["subset"])
         representation = str(repr_cntx["representation"])
-        self.data["clip_name"] = "_".join([asset, subset, representation])
+        self.data["clip_name"] = self.clip_name_template.format(**repr_cntx)
         self.data["track_name"] = "_".join([subset, representation])
         self.data["versionData"] = self.context["version"]["data"]
         # gets file path
@@ -476,7 +478,7 @@ class ClipLoader:

         """
         asset_name = self.context["representation"]["context"]["asset"]
-        self.data["assetData"] = pype.get_asset(asset_name)["data"]
+        self.data["assetData"] = openpype.get_asset(asset_name)["data"]

     def _make_track_item(self, source_bin_item, audio=False):
         """ Create track item with """
@@ -543,15 +545,9 @@ class ClipLoader:
                 if "slate" in f),
            # if nothing was found then use default None
            # so other bool could be used
-            None) or bool(((
-                # put together duration of clip attributes
-                self.timeline_out - self.timeline_in + 1) \
-                + self.handle_start \
-                + self.handle_end
-                # and compare it with meda duration
-            ) > self.media_duration)
-
-        print("__ slate_on: `{}`".format(slate_on))
+            None) or bool(int(
+                (self.timeline_out - self.timeline_in + 1)
+                + self.handle_start + self.handle_end) < self.media_duration)

         # if slate is on then remove the slate frame from begining
         if slate_on:
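The corrected comparison above flags a slate when the cut length plus handles is shorter than the available media (the old code compared with `>`, the inverse). A standalone sketch of that duration check (function name and frame values are illustrative):

```python
def has_slate(timeline_in, timeline_out, handle_start, handle_end,
              media_duration):
    """A slate frame is assumed when the clip's cut length plus handles
    needs fewer frames than the media actually provides."""
    clip_duration = (timeline_out - timeline_in + 1) + handle_start + handle_end
    return clip_duration < media_duration


# 24-frame cut with 10+10 handles against 45 frames of media:
# one frame is left over, so it is treated as a slate
print(has_slate(1001, 1024, 10, 10, 45))  # True
```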
@@ -592,7 +588,7 @@ class ClipLoader:
         return track_item


-class Creator(pype.Creator):
+class Creator(openpype.Creator):
     """Creator class wrapper
     """
     clip_color = "Purple"
@@ -601,7 +597,7 @@ class Creator(pype.Creator):
     def __init__(self, *args, **kwargs):
         import openpype.hosts.hiero.api as phiero
         super(Creator, self).__init__(*args, **kwargs)
-        self.presets = pype.get_current_project_settings()[
+        self.presets = openpype.get_current_project_settings()[
             "hiero"]["create"].get(self.__class__.__name__, {})

         # adding basic current context resolve objects
@@ -674,6 +670,9 @@ class PublishClip:
         if kwargs.get("avalon"):
             self.tag_data.update(kwargs["avalon"])

+        # add publish attribute to tag data
+        self.tag_data.update({"publish": True})
+
         # adding ui inputs if any
         self.ui_inputs = kwargs.get("ui_inputs", {})
@@ -687,6 +686,7 @@ class PublishClip:
         self._create_parents()

     def convert(self):
+        # solve track item data and add them to tag data
         self._convert_to_tag_data()

@@ -705,6 +705,12 @@ class PublishClip:
             self.tag_data["asset"] = new_name
         else:
             self.tag_data["asset"] = self.ti_name
+            self.tag_data["hierarchyData"]["shot"] = self.ti_name
+
+        if self.tag_data["heroTrack"] and self.review_layer:
+            self.tag_data.update({"reviewTrack": self.review_layer})
+        else:
+            self.tag_data.update({"reviewTrack": None})

         # create pype tag on track_item and add data
         lib.imprint(self.track_item, self.tag_data)
@@ -773,8 +779,8 @@ class PublishClip:
         _spl = text.split("#")
         _len = (len(_spl) - 1)
         _repl = "{{{0}:0>{1}}}".format(name, _len)
-        new_text = text.replace(("#" * _len), _repl)
-        return new_text
+        return text.replace(("#" * _len), _repl)

     def _convert_to_tag_data(self):
         """ Convert internal data to tag data.
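The helper simplified above turns a run of `#` characters into a zero-padded `str.format` token (e.g. `sh###` becomes `sh{shot:0>3}`). A standalone sketch of the same transformation (function name is illustrative):

```python
def hash_to_template(text, name):
    """Replace a run of '#' with a zero-padded format token,
    e.g. hash_to_template('sh###', 'shot') -> 'sh{shot:0>3}'."""
    # number of '#' characters equals number of splits minus one
    _len = len(text.split("#")) - 1
    _repl = "{{{0}:0>{1}}}".format(name, _len)
    return text.replace("#" * _len, _repl)


template = hash_to_template("sh###", "shot")
print(template)                   # sh{shot:0>3}
print(template.format(shot=10))   # sh010
```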
@@ -782,13 +788,13 @@ class PublishClip:
         Populating the tag data into internal variable self.tag_data
         """
         # define vertical sync attributes
-        master_layer = True
+        hero_track = True
         self.review_layer = ""
         if self.vertical_sync:
             # check if track name is not in driving layer
             if self.track_name not in self.driving_layer:
                 # if it is not then define vertical sync as None
-                master_layer = False
+                hero_track = False

         # increasing steps by index of rename iteration
         self.count_steps *= self.rename_index
@@ -802,7 +808,7 @@ class PublishClip:
             self.tag_data[_k] = _v["value"]

         # driving layer is set as positive match
-        if master_layer or self.vertical_sync:
+        if hero_track or self.vertical_sync:
             # mark review layer
             if self.review_track and (
                     self.review_track not in self.review_track_default):
@@ -836,40 +842,40 @@ class PublishClip:
                 hierarchy_formating_data
             )

-            tag_hierarchy_data.update({"masterLayer": True})
-            if master_layer and self.vertical_sync:
+            tag_hierarchy_data.update({"heroTrack": True})
+            if hero_track and self.vertical_sync:
                 self.vertical_clip_match.update({
                     (self.clip_in, self.clip_out): tag_hierarchy_data
                 })

-        if not master_layer and self.vertical_sync:
+        if not hero_track and self.vertical_sync:
             # driving layer is set as negative match
-            for (_in, _out), master_data in self.vertical_clip_match.items():
-                master_data.update({"masterLayer": False})
+            for (_in, _out), hero_data in self.vertical_clip_match.items():
+                hero_data.update({"heroTrack": False})
                 if _in == self.clip_in and _out == self.clip_out:
-                    data_subset = master_data["subset"]
-                    # add track index in case duplicity of names in master data
+                    data_subset = hero_data["subset"]
+                    # add track index in case duplicity of names in hero data
                     if self.subset in data_subset:
-                        master_data["subset"] = self.subset + str(
+                        hero_data["subset"] = self.subset + str(
                             self.track_index)
                     # in case track name and subset name is the same then add
                     if self.subset_name == self.track_name:
-                        master_data["subset"] = self.subset
+                        hero_data["subset"] = self.subset
                     # assing data to return hierarchy data to tag
-                    tag_hierarchy_data = master_data
+                    tag_hierarchy_data = hero_data

         # add data to return data dict
         self.tag_data.update(tag_hierarchy_data)

-        if master_layer and self.review_layer:
-            self.tag_data.update({"reviewTrack": self.review_layer})
-
     def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
         """ Solve tag data from hierarchy data and templates. """
         # fill up clip name and hierarchy keys
         hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
         clip_name_filled = self.clip_name.format(**hierarchy_formating_data)

         # remove shot from hierarchy data: is not needed anymore
         hierarchy_formating_data.pop("shot")

         return {
             "newClipName": clip_name_filled,
             "hierarchy": hierarchy_filled,

@@ -84,6 +84,13 @@ def update_tag(tag, data):
     mtd = tag.metadata()
+    # get metadata key from data
+    data_mtd = data.get("metadata", {})
+
+    # due to hiero bug we have to make sure keys which are not existent in
+    # data are cleared of value by `None`
+    for _mk in mtd.keys():
+        if _mk.replace("tag.", "") not in data_mtd.keys():
+            mtd.setValue(_mk, str(None))
+
     # set all data metadata to tag metadata
     for k, v in data_mtd.items():
         mtd.setValue(
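The `update_tag` change above works around a Hiero quirk: metadata keys absent from the new data are explicitly reset to the string `"None"` before the new values are written. The same behavior sketched on a plain dict standing in for Hiero's metadata object (the stand-in and its key prefix handling are illustrative):

```python
def update_metadata(mtd, data_mtd):
    """Mimic the tag update: keys missing from the new data are reset to
    the string 'None', then new values are written with the 'tag.' prefix."""
    # clear stale keys that the new data no longer carries
    for _mk in list(mtd):
        if _mk.replace("tag.", "") not in data_mtd:
            mtd[_mk] = str(None)
    # write the new values
    for k, v in data_mtd.items():
        mtd["tag.{}".format(k)] = v
    return mtd


mtd = {"tag.asset": "sh010", "tag.obsolete": "x"}
print(update_metadata(mtd, {"asset": "sh020"}))
```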
openpype/hosts/hiero/otio/__init__.py (new file, 0 lines)
openpype/hosts/hiero/otio/hiero_export.py (new file, 366 lines)
@@ -0,0 +1,366 @@
""" compatibility OpenTimelineIO 0.12.0 and newer
"""

import os
import re
import sys
import ast
from compiler.ast import flatten
import opentimelineio as otio
from . import utils
import hiero.core
import hiero.ui

self = sys.modules[__name__]
self.track_types = {
    hiero.core.VideoTrack: otio.schema.TrackKind.Video,
    hiero.core.AudioTrack: otio.schema.TrackKind.Audio
}
self.project_fps = None
self.marker_color_map = {
    "magenta": otio.schema.MarkerColor.MAGENTA,
    "red": otio.schema.MarkerColor.RED,
    "yellow": otio.schema.MarkerColor.YELLOW,
    "green": otio.schema.MarkerColor.GREEN,
    "cyan": otio.schema.MarkerColor.CYAN,
    "blue": otio.schema.MarkerColor.BLUE,
}
self.timeline = None
self.include_tags = True


def get_current_hiero_project(remove_untitled=False):
    projects = flatten(hiero.core.projects())
    if not remove_untitled:
        return next(iter(projects))

    # if remove_untitled
    for proj in projects:
        if "Untitled" in proj.name():
            proj.close()
        else:
            return proj


def create_otio_rational_time(frame, fps):
    return otio.opentime.RationalTime(
        float(frame),
        float(fps)
    )


def create_otio_time_range(start_frame, frame_duration, fps):
    return otio.opentime.TimeRange(
        start_time=create_otio_rational_time(start_frame, fps),
        duration=create_otio_rational_time(frame_duration, fps)
    )


def _get_metadata(item):
    if hasattr(item, 'metadata'):
        return {key: value for key, value in dict(item.metadata()).items()}
    return {}


def create_otio_reference(clip):
    metadata = _get_metadata(clip)
    media_source = clip.mediaSource()

    # get file info for path and start frame
    file_info = media_source.fileinfos().pop()
    frame_start = file_info.startFrame()
    path = file_info.filename()

    # get padding and other file infos
    padding = media_source.filenamePadding()
    file_head = media_source.filenameHead()
    is_sequence = not media_source.singleFile()
    frame_duration = media_source.duration()
    fps = utils.get_rate(clip) or self.project_fps
    extension = os.path.splitext(path)[-1]

    if is_sequence:
        metadata.update({
            "isSequence": True,
            "padding": padding
        })

    # add resolution metadata
    metadata.update({
        "openpype.source.colourtransform": clip.sourceMediaColourTransform(),
        "openpype.source.width": int(media_source.width()),
        "openpype.source.height": int(media_source.height()),
        "openpype.source.pixelAspect": float(media_source.pixelAspect())
    })

    otio_ex_ref_item = None

    if is_sequence:
        # if it is file sequence try to create `ImageSequenceReference`
        # the OTIO might not be compatible so return nothing and do it old way
        try:
            dirname = os.path.dirname(path)
            otio_ex_ref_item = otio.schema.ImageSequenceReference(
                target_url_base=dirname + os.sep,
                name_prefix=file_head,
                name_suffix=extension,
                start_frame=frame_start,
                frame_zero_padding=padding,
                rate=fps,
                available_range=create_otio_time_range(
                    frame_start,
                    frame_duration,
                    fps
                )
            )
        except AttributeError:
            pass

    if not otio_ex_ref_item:
        reformat_path = utils.get_reformated_path(path, padded=False)
        # in case old OTIO or video file create `ExternalReference`
        otio_ex_ref_item = otio.schema.ExternalReference(
            target_url=reformat_path,
            available_range=create_otio_time_range(
                frame_start,
                frame_duration,
                fps
            )
        )

    # add metadata to otio item
    add_otio_metadata(otio_ex_ref_item, media_source, **metadata)

    return otio_ex_ref_item


def get_marker_color(tag):
    icon = tag.icon()
    pat = r'icons:Tag(?P<color>\w+)\.\w+'

    res = re.search(pat, icon)
    if res:
        color = res.groupdict().get('color')
        if color.lower() in self.marker_color_map:
            return self.marker_color_map[color.lower()]

    return otio.schema.MarkerColor.RED


def create_otio_markers(otio_item, item):
    for tag in item.tags():
        if not tag.visible():
            continue

        if tag.name() == 'Copy':
            # Hiero adds this tag to a lot of clips
            continue

        frame_rate = utils.get_rate(item) or self.project_fps

        marked_range = otio.opentime.TimeRange(
            start_time=otio.opentime.RationalTime(
                tag.inTime(),
                frame_rate
            ),
            duration=otio.opentime.RationalTime(
                int(tag.metadata().dict().get('tag.length', '0')),
                frame_rate
            )
        )
        # add tag metadata but remove "tag." string
        metadata = {}

        for key, value in tag.metadata().dict().items():
            _key = key.replace("tag.", "")

            try:
                # capture exceptions which are related to strings only
                _value = ast.literal_eval(value)
            except (ValueError, SyntaxError):
                _value = value

            metadata.update({_key: _value})

        # Store the source item for future import assignment
        metadata['hiero_source_type'] = item.__class__.__name__

        marker = otio.schema.Marker(
            name=tag.name(),
            color=get_marker_color(tag),
            marked_range=marked_range,
            metadata=metadata
        )

        otio_item.markers.append(marker)


def create_otio_clip(track_item):
    clip = track_item.source()
    source_in = track_item.sourceIn()
    duration = track_item.sourceDuration()
    fps = utils.get_rate(track_item) or self.project_fps
    name = track_item.name()

    media_reference = create_otio_reference(clip)
    source_range = create_otio_time_range(
        int(source_in),
        int(duration),
        fps
    )

    otio_clip = otio.schema.Clip(
        name=name,
        source_range=source_range,
        media_reference=media_reference
    )

    # Add tags as markers
    if self.include_tags:
        create_otio_markers(otio_clip, track_item)
        create_otio_markers(otio_clip, track_item.source())

    return otio_clip


def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):
    return otio.schema.Gap(
        source_range=create_otio_time_range(
            gap_start,
            (clip_start - tl_start_frame) - gap_start,
            fps
        )
    )


def _create_otio_timeline():
    project = get_current_hiero_project(remove_untitled=False)
    metadata = _get_metadata(self.timeline)

    metadata.update({
        "openpype.timeline.width": int(self.timeline.format().width()),
        "openpype.timeline.height": int(self.timeline.format().height()),
        "openpype.timeline.pixelAspect": int(self.timeline.format().pixelAspect()),  # noqa
        "openpype.project.useOCIOEnvironmentOverride": project.useOCIOEnvironmentOverride(),  # noqa
        "openpype.project.lutSetting16Bit": project.lutSetting16Bit(),
        "openpype.project.lutSetting8Bit": project.lutSetting8Bit(),
        "openpype.project.lutSettingFloat": project.lutSettingFloat(),
        "openpype.project.lutSettingLog": project.lutSettingLog(),
        "openpype.project.lutSettingViewer": project.lutSettingViewer(),
        "openpype.project.lutSettingWorkingSpace": project.lutSettingWorkingSpace(),  # noqa
        "openpype.project.lutUseOCIOForExport": project.lutUseOCIOForExport(),
        "openpype.project.ocioConfigName": project.ocioConfigName(),
        "openpype.project.ocioConfigPath": project.ocioConfigPath()
    })

    start_time = create_otio_rational_time(
        self.timeline.timecodeStart(), self.project_fps)

    return otio.schema.Timeline(
        name=self.timeline.name(),
        global_start_time=start_time,
        metadata=metadata
    )


def create_otio_track(track_type, track_name):
    return otio.schema.Track(
        name=track_name,
        kind=self.track_types[track_type]
    )


def add_otio_gap(track_item, otio_track, prev_out):
    gap_length = track_item.timelineIn() - prev_out
    if prev_out != 0:
        gap_length -= 1

    gap = otio.opentime.TimeRange(
        duration=otio.opentime.RationalTime(
            gap_length,
            self.project_fps
        )
    )
    otio_gap = otio.schema.Gap(source_range=gap)
    otio_track.append(otio_gap)


def add_otio_metadata(otio_item, media_source, **kwargs):
    metadata = _get_metadata(media_source)

    # add additional metadata from kwargs
    if kwargs:
        metadata.update(kwargs)

    # add metadata to otio item metadata
    for key, value in metadata.items():
        otio_item.metadata.update({key: value})


def create_otio_timeline():

    # get current timeline
    self.timeline = hiero.ui.activeSequence()
    self.project_fps = self.timeline.framerate().toFloat()

    # convert timeline to otio
    otio_timeline = _create_otio_timeline()

    # loop all defined track types
    for track in self.timeline.items():
        # skip if track is disabled
        if not track.isEnabled():
            continue

        # convert track to otio
        otio_track = create_otio_track(
            type(track), track.name())

        for itemindex, track_item in enumerate(track):
            # skip offline track items
            if not track_item.isMediaPresent():
                continue

            # skip if track item is disabled
            if not track_item.isEnabled():
                continue

            # Add Gap if needed
            if itemindex == 0:
                # if it is first track item at track then add
                # it to previouse item
                prev_item = track_item

            else:
                # get previouse item
                prev_item = track_item.parent().items()[itemindex - 1]

            # calculate clip frame range difference from each other
            clip_diff = track_item.timelineIn() - prev_item.timelineOut()

            # add gap if first track item is not starting
            # at first timeline frame
            if itemindex == 0 and track_item.timelineIn() > 0:
                add_otio_gap(track_item, otio_track, 0)

            # or add gap if following track items are having
            # frame range differences from each other
            elif itemindex and clip_diff != 1:
                add_otio_gap(track_item, otio_track, prev_item.timelineOut())

            # create otio clip and add it to track
            otio_clip = create_otio_clip(track_item)
            otio_track.append(otio_clip)

        # Add tags as markers
        if self.include_tags:
            create_otio_markers(otio_track, track)

        # add track to otio timeline
        otio_timeline.tracks.append(otio_track)

    return otio_timeline


def write_to_file(otio_timeline, path):
    otio.adapters.write_to_file(otio_timeline, path)
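`add_otio_gap` in the new exporter derives the gap's frame count from the previous item's inclusive out point: at the head of the timeline (`prev_out == 0`) the in point itself is the gap length, otherwise one frame is subtracted. That arithmetic, isolated from OTIO and Hiero (function name is illustrative):

```python
def gap_length(timeline_in, prev_out):
    """Frames of gap between the previous item's inclusive out point and
    the next item's in point; no off-by-one correction at timeline start."""
    length = timeline_in - prev_out
    if prev_out != 0:
        # prev_out is the last frame *of* the previous clip, not one past it
        length -= 1
    return length


print(gap_length(10, 0))   # 10 leading frames before the first clip
print(gap_length(25, 20))  # 4 frames between two clips
```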
openpype/hosts/hiero/otio/hiero_import.py (new file, 545 lines)
@ -0,0 +1,545 @@
|
|||
#!/usr/bin/env python
# -*- coding: utf-8 -*-

__author__ = "Daniel Flehner Heen"
__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]


import os
import hiero.core
import hiero.ui

import PySide2.QtWidgets as qw

try:
    from urllib import unquote

except ImportError:
    from urllib.parse import unquote  # lint:ok

import opentimelineio as otio

_otio_old = False


def inform(messages):
    if isinstance(messages, type('')):
        messages = [messages]

    qw.QMessageBox.information(
        hiero.ui.mainWindow(),
        'OTIO Import',
        '\n'.join(messages),
        qw.QMessageBox.StandardButton.Ok
    )


def get_transition_type(otio_item, otio_track):
    _in, _out = otio_track.neighbors_of(otio_item)

    if isinstance(_in, otio.schema.Gap):
        _in = None

    if isinstance(_out, otio.schema.Gap):
        _out = None

    if _in and _out:
        return 'dissolve'

    elif _in and not _out:
        return 'fade_out'

    elif not _in and _out:
        return 'fade_in'

    else:
        return 'unknown'


def find_trackitem(otio_clip, hiero_track):
    for item in hiero_track.items():
        if item.timelineIn() == otio_clip.range_in_parent().start_time.value:
            if item.name() == otio_clip.name:
                return item

    return None


def get_neighboring_trackitems(otio_item, otio_track, hiero_track):
    _in, _out = otio_track.neighbors_of(otio_item)
    trackitem_in = None
    trackitem_out = None

    if _in:
        trackitem_in = find_trackitem(_in, hiero_track)

    if _out:
        trackitem_out = find_trackitem(_out, hiero_track)

    return trackitem_in, trackitem_out


def apply_transition(otio_track, otio_item, track):
    warning = None

    # Figure out type of transition
    transition_type = get_transition_type(otio_item, otio_track)

    # Figure out track kind for getattr below
    kind = ''
    if isinstance(track, hiero.core.AudioTrack):
        kind = 'Audio'

    # Gather TrackItems involved in transition
    item_in, item_out = get_neighboring_trackitems(
        otio_item,
        otio_track,
        track
    )

    # Create transition object
    if transition_type == 'dissolve':
        transition_func = getattr(
            hiero.core.Transition,
            'create{kind}DissolveTransition'.format(kind=kind)
        )

        try:
            transition = transition_func(
                item_in,
                item_out,
                otio_item.in_offset.value,
                otio_item.out_offset.value
            )

        # Catch error raised if transition is bigger than TrackItem source
        except RuntimeError as e:
            transition = None
            warning = (
                "Unable to apply transition \"{t.name}\": {e} "
                "Ignoring the transition.").format(t=otio_item, e=str(e))

    elif transition_type == 'fade_in':
        transition_func = getattr(
            hiero.core.Transition,
            'create{kind}FadeInTransition'.format(kind=kind)
        )

        # Warn user if part of fade is outside of clip
        if otio_item.in_offset.value:
            warning = \
                'First half of transition "{t.name}" is outside of clip and ' \
                'not valid in Hiero. Only applied second half.' \
                .format(t=otio_item)

        transition = transition_func(
            item_out,
            otio_item.out_offset.value
        )

    elif transition_type == 'fade_out':
        transition_func = getattr(
            hiero.core.Transition,
            'create{kind}FadeOutTransition'.format(kind=kind)
        )
        transition = transition_func(
            item_in,
            otio_item.in_offset.value
        )

        # Warn user if part of fade is outside of clip
        if otio_item.out_offset.value:
            warning = \
                'Second half of transition "{t.name}" is outside of clip ' \
                'and not valid in Hiero. Only applied first half.' \
                .format(t=otio_item)

    else:
        # Unknown transition
        return

    # Apply transition to track
    if transition:
        track.addTransition(transition)

    # Inform user about missing or adjusted transitions
    return warning


def prep_url(url_in):
    url = unquote(url_in)

    if url.startswith('file://localhost/'):
        return url

    url = 'file://localhost{sep}{url}'.format(
        sep=url.startswith(os.sep) and '' or os.sep,
        url=url.startswith(os.sep) and url[1:] or url
    )

    return url


def create_offline_mediasource(otio_clip, path=None):
    global _otio_old

    hiero_rate = hiero.core.TimeBase(
        otio_clip.source_range.start_time.rate
    )

    try:
        legal_media_refs = (
            otio.schema.ExternalReference,
            otio.schema.ImageSequenceReference
        )
    except AttributeError:
        _otio_old = True
        legal_media_refs = (
            otio.schema.ExternalReference
        )

    if isinstance(otio_clip.media_reference, legal_media_refs):
        source_range = otio_clip.available_range()

    else:
        source_range = otio_clip.source_range

    if path is None:
        path = otio_clip.name

    media = hiero.core.MediaSource.createOfflineVideoMediaSource(
        prep_url(path),
        source_range.start_time.value,
        source_range.duration.value,
        hiero_rate,
        source_range.start_time.value
    )

    return media


def load_otio(otio_file, project=None, sequence=None):
    otio_timeline = otio.adapters.read_from_file(otio_file)
    build_sequence(otio_timeline, project=project, sequence=sequence)


marker_color_map = {
    "PINK": "Magenta",
    "RED": "Red",
    "ORANGE": "Yellow",
    "YELLOW": "Yellow",
    "GREEN": "Green",
    "CYAN": "Cyan",
    "BLUE": "Blue",
    "PURPLE": "Magenta",
    "MAGENTA": "Magenta",
    "BLACK": "Blue",
    "WHITE": "Green"
}


def get_tag(tagname, tagsbin):
    for tag in tagsbin.items():
        if tag.name() == tagname:
            return tag

        if isinstance(tag, hiero.core.Bin):
            tag = get_tag(tagname, tag)

            if tag is not None:
                return tag

    return None


def add_metadata(metadata, hiero_item):
    for key, value in metadata.get('Hiero', dict()).items():
        if key == 'source_type':
            # Only used internally to reassign tag to correct Hiero item
            continue

        if isinstance(value, dict):
            add_metadata(value, hiero_item)
            continue

        if value is not None:
            if not key.startswith('tag.'):
                key = 'tag.' + key

            hiero_item.metadata().setValue(key, str(value))


def add_markers(otio_item, hiero_item, tagsbin):
    if isinstance(otio_item, (otio.schema.Stack, otio.schema.Clip)):
        markers = otio_item.markers

    elif isinstance(otio_item, otio.schema.Timeline):
        markers = otio_item.tracks.markers

    else:
        markers = []

    for marker in markers:
        meta = marker.metadata.get('Hiero', dict())
        if 'source_type' in meta:
            if hiero_item.__class__.__name__ != meta.get('source_type'):
                continue

        marker_color = marker.color

        _tag = get_tag(marker.name, tagsbin)
        if _tag is None:
            _tag = get_tag(marker_color_map[marker_color], tagsbin)

        if _tag is None:
            _tag = hiero.core.Tag(marker_color_map[marker.color])

        start = marker.marked_range.start_time.value
        end = (
            marker.marked_range.start_time.value +
            marker.marked_range.duration.value
        )

        if hasattr(hiero_item, 'addTagToRange'):
            tag = hiero_item.addTagToRange(_tag, start, end)

        else:
            tag = hiero_item.addTag(_tag)

        tag.setName(marker.name or marker_color_map[marker_color])
        # tag.setNote(meta.get('tag.note', ''))

        # Add metadata
        add_metadata(marker.metadata, tag)


def create_track(otio_track, tracknum, track_kind):
    if track_kind is None and hasattr(otio_track, 'kind'):
        track_kind = otio_track.kind

    # Create a Track
    if track_kind == otio.schema.TrackKind.Video:
        track = hiero.core.VideoTrack(
            otio_track.name or 'Video{n}'.format(n=tracknum)
        )

    else:
        track = hiero.core.AudioTrack(
            otio_track.name or 'Audio{n}'.format(n=tracknum)
        )

    return track


def create_clip(otio_clip, tagsbin, sequencebin):
    # Create MediaSource
    url = None
    media = None
    otio_media = otio_clip.media_reference

    if isinstance(otio_media, otio.schema.ExternalReference):
        url = prep_url(otio_media.target_url)
        media = hiero.core.MediaSource(url)

    elif not _otio_old:
        if isinstance(otio_media, otio.schema.ImageSequenceReference):
            url = prep_url(otio_media.abstract_target_url('#'))
            media = hiero.core.MediaSource(url)

    if media is None or media.isOffline():
        media = create_offline_mediasource(otio_clip, url)

    # Reuse previous clip if possible
    clip = None
    for item in sequencebin.clips():
        if item.activeItem().mediaSource() == media:
            clip = item.activeItem()
            break

    if not clip:
        # Create new Clip
        clip = hiero.core.Clip(media)

        # Add Clip to a Bin
        sequencebin.addItem(hiero.core.BinItem(clip))

    # Add markers
    add_markers(otio_clip, clip, tagsbin)

    return clip


def create_trackitem(playhead, track, otio_clip, clip):
    source_range = otio_clip.source_range

    trackitem = track.createTrackItem(otio_clip.name)
    trackitem.setPlaybackSpeed(source_range.start_time.rate)
    trackitem.setSource(clip)

    time_scalar = 1.

    # Check for speed effects and adjust playback speed accordingly
    for effect in otio_clip.effects:
        if isinstance(effect, otio.schema.LinearTimeWarp):
            time_scalar = effect.time_scalar
            # Only reverse effect can be applied here
            if abs(time_scalar) == 1.:
                trackitem.setPlaybackSpeed(
                    trackitem.playbackSpeed() * time_scalar)

        elif isinstance(effect, otio.schema.FreezeFrame):
            # For freeze frame, playback speed must be set after range
            time_scalar = 0.

    # If reverse playback speed swap source in and out
    if trackitem.playbackSpeed() < 0:
        source_out = source_range.start_time.value
        source_in = source_range.end_time_inclusive().value

        timeline_in = playhead + source_out
        timeline_out = (
            timeline_in +
            source_range.duration.value
        ) - 1
    else:
        # Normal playback speed
        source_in = source_range.start_time.value
        source_out = source_range.end_time_inclusive().value

        timeline_in = playhead
        timeline_out = (
            timeline_in +
            source_range.duration.value
        ) - 1

    # Set source and timeline in/out points
    trackitem.setTimes(
        timeline_in,
        timeline_out,
        source_in,
        source_out
    )

    # Apply playback speed for freeze frames
    if abs(time_scalar) != 1.:
        trackitem.setPlaybackSpeed(trackitem.playbackSpeed() * time_scalar)

    # Link audio to video when possible
    if isinstance(track, hiero.core.AudioTrack):
        for other in track.parent().trackItemsAt(playhead):
            if other.source() == clip:
                trackitem.link(other)

    return trackitem


def build_sequence(
        otio_timeline, project=None, sequence=None, track_kind=None):
    if project is None:
        if sequence:
            project = sequence.project()

        else:
            # Per version 12.1v2 there is no way of getting active project
            project = hiero.core.projects(hiero.core.Project.kUserProjects)[-1]

    projectbin = project.clipsBin()

    if not sequence:
        # Create a Sequence
        sequence = hiero.core.Sequence(otio_timeline.name or 'OTIOSequence')

        # Set sequence settings from otio timeline if available
        if (
            hasattr(otio_timeline, 'global_start_time')
            and otio_timeline.global_start_time
        ):
            start_time = otio_timeline.global_start_time
            sequence.setFramerate(start_time.rate)
            sequence.setTimecodeStart(start_time.value)

        # Create a Bin to hold clips
        projectbin.addItem(hiero.core.BinItem(sequence))

        sequencebin = hiero.core.Bin(sequence.name())
        projectbin.addItem(sequencebin)

    else:
        sequencebin = projectbin

    # Get tagsBin
    tagsbin = hiero.core.project("Tag Presets").tagsBin()

    # Add timeline markers
    add_markers(otio_timeline, sequence, tagsbin)

    if isinstance(otio_timeline, otio.schema.Timeline):
        tracks = otio_timeline.tracks

    else:
        tracks = [otio_timeline]

    for tracknum, otio_track in enumerate(tracks):
        playhead = 0
        _transitions = []

        # Add track to sequence
        track = create_track(otio_track, tracknum, track_kind)
        sequence.addTrack(track)

        # iterate over items in track
        for _itemnum, otio_clip in enumerate(otio_track):
            if isinstance(otio_clip, (otio.schema.Track, otio.schema.Stack)):
                inform('Nested sequences/tracks are created separately.')

                # Add gap where the nested sequence would have been
                playhead += otio_clip.source_range.duration.value

                # Process nested sequence
                build_sequence(
                    otio_clip,
                    project=project,
                    track_kind=otio_track.kind
                )

            elif isinstance(otio_clip, otio.schema.Clip):
                # Create a Clip
                clip = create_clip(otio_clip, tagsbin, sequencebin)

                # Create TrackItem
                trackitem = create_trackitem(
                    playhead,
                    track,
                    otio_clip,
                    clip
                )

                # Add markers
                add_markers(otio_clip, trackitem, tagsbin)

                # Add trackitem to track
                track.addTrackItem(trackitem)

                # Update playhead
                playhead = trackitem.timelineOut() + 1

            elif isinstance(otio_clip, otio.schema.Transition):
                # Store transitions for when all clips in the track are created
                _transitions.append((otio_track, otio_clip))

            elif isinstance(otio_clip, otio.schema.Gap):
                # Hiero has no fillers, slugs or blanks at the moment
                playhead += otio_clip.source_range.duration.value

        # Apply transitions we stored earlier now that all clips are present
        warnings = []
        for otio_track, otio_item in _transitions:
            # Catch warnings from transitions in case
            # of unsupported transitions
            warning = apply_transition(otio_track, otio_item, track)
            if warning:
                warnings.append(warning)

        if warnings:
            inform(warnings)
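The transition classification in `get_transition_type` above depends only on whether the transition's neighbours are gaps. A minimal standalone sketch of that decision, with hypothetical `Gap`/`Clip` stand-ins instead of the real OTIO schema classes:

```python
class Gap:
    """Stand-in for otio.schema.Gap (illustrative only)."""


class Clip:
    """Stand-in for a real clip item (illustrative only)."""


def classify_transition(prev_item, next_item):
    # Mirror get_transition_type(): flanked by two real clips it is a
    # dissolve; a gap on either side turns it into a fade.
    _in = None if isinstance(prev_item, Gap) else prev_item
    _out = None if isinstance(next_item, Gap) else next_item
    if _in and _out:
        return 'dissolve'
    if _in:
        return 'fade_out'
    if _out:
        return 'fade_in'
    return 'unknown'


print(classify_transition(Clip(), Clip()))  # dissolve
print(classify_transition(Gap(), Clip()))   # fade_in
```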
76  openpype/hosts/hiero/otio/utils.py  Normal file
@@ -0,0 +1,76 @@
import re
import opentimelineio as otio


def timecode_to_frames(timecode, framerate):
    rt = otio.opentime.from_timecode(timecode, framerate)
    return int(otio.opentime.to_frames(rt))


def frames_to_timecode(frames, framerate):
    rt = otio.opentime.from_frames(frames, framerate)
    return otio.opentime.to_timecode(rt)


def frames_to_secons(frames, framerate):
    rt = otio.opentime.from_frames(frames, framerate)
    return otio.opentime.to_seconds(rt)


def get_reformated_path(path, padded=True):
    """
    Return path with its frame expression normalized

    Args:
        path (str): path url or simple file name

    Returns:
        str: string with reformatted path

    Example:
        get_reformated_path("plate.%4d.exr") > plate.%04d.exr

    """
    if "%" in path:
        padding_pattern = r"(\d+)"
        padding = int(re.findall(padding_pattern, path).pop())
        num_pattern = r"(%\d+d)"
        if padded:
            path = re.sub(num_pattern, "%0{}d".format(padding), path)
        else:
            path = re.sub(num_pattern, "%d", path)
    return path


def get_padding_from_path(path):
    """
    Return padding number from DaVinci Resolve sequence path style

    Args:
        path (str): path url or simple file name

    Returns:
        int: padding number

    Example:
        get_padding_from_path("plate.[0001-1008].exr") > 4

    """
    padding_pattern = "(\\d+)(?=-)"
    if "[" in path:
        return len(re.findall(padding_pattern, path).pop())

    return None


def get_rate(item):
    if not hasattr(item, 'framerate'):
        return None

    num, den = item.framerate().toRational()
    rate = float(num) / float(den)

    if rate.is_integer():
        return rate

    return round(rate, 4)
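The two path helpers above are plain regular-expression rewrites. A standalone sketch of the same logic (reimplemented here for illustration, hypothetical function names, no Hiero or OTIO required):

```python
import re


def reformat_percent_path(path, padded=True):
    # Rewrite a printf-style token such as %04d to a padded or
    # unpadded frame expression, mirroring get_reformated_path().
    if "%" in path:
        padding = int(re.findall(r"(\d+)", path).pop())
        if padded:
            return re.sub(r"(%\d+d)", "%0{}d".format(padding), path)
        return re.sub(r"(%\d+d)", "%d", path)
    return path


def padding_from_bracket_path(path):
    # DaVinci Resolve style bracket range: "plate.[0001-1008].exr" -> 4,
    # mirroring get_padding_from_path().
    if "[" in path:
        return len(re.findall(r"(\d+)(?=-)", path).pop())
    return None


print(reformat_percent_path("plate.%04d.exr", padded=False))  # plate.%d.exr
print(padding_from_bracket_path("plate.[0001-1008].exr"))     # 4
```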
@@ -120,9 +120,9 @@ class CreateShotClip(phiero.Creator):
            "vSyncTrack": {
                "value": gui_tracks,  # noqa
                "type": "QComboBox",
                "label": "Master track",
                "label": "Hero track",
                "target": "ui",
                "toolTip": "Select driving track name which should be mastering all others",  # noqa
                "toolTip": "Select driving track name which should be hero for all others",  # noqa
                "order": 1}
        }
    },
@@ -29,13 +29,19 @@ class LoadClip(phiero.SequenceLoader):
    clip_color_last = "green"
    clip_color = "red"

    def load(self, context, name, namespace, options):
    clip_name_template = "{asset}_{subset}_{representation}"

    def load(self, context, name, namespace, options):
        # add clip name template to options
        options.update({
            "clipNameTemplate": self.clip_name_template
        })
        # in case loader uses multiselection
        if self.track and self.sequence:
            options.update({
                "sequence": self.sequence,
                "track": self.track
                "track": self.track,
                "clipNameTemplate": self.clip_name_template
            })

        # load clip to timeline and get main variables
@@ -45,7 +51,8 @@ class LoadClip(phiero.SequenceLoader):
        version_data = version.get("data", {})
        version_name = version.get("name", None)
        colorspace = version_data.get("colorspace", None)
        object_name = "{}_{}".format(name, namespace)
        object_name = self.clip_name_template.format(
            **context["representation"]["context"])

        # add additional metadata from the version to imprint Avalon knob
        add_keys = [
59  openpype/hosts/hiero/plugins/publish/extract_thumbnail.py  Normal file
@@ -0,0 +1,59 @@
import os
import pyblish.api
import openpype.api


class ExtractThumnail(openpype.api.Extractor):
    """
    Extractor for track item's thumbnails
    """

    label = "Extract Thumbnail"
    order = pyblish.api.ExtractorOrder
    families = ["plate", "take"]
    hosts = ["hiero"]

    def process(self, instance):
        # create representation data
        if "representations" not in instance.data:
            instance.data["representations"] = []

        staging_dir = self.staging_dir(instance)

        self.create_thumbnail(staging_dir, instance)

    def create_thumbnail(self, staging_dir, instance):
        track_item = instance.data["item"]
        track_item_name = track_item.name()

        # frames
        duration = track_item.sourceDuration()
        frame_start = track_item.sourceIn()
        self.log.debug(
            "__ frame_start: `{}`, duration: `{}`".format(
                frame_start, duration))

        # get thumbnail frame from the middle
        thumb_frame = int(frame_start + (duration / 2))

        thumb_file = "{}thumbnail{}{}".format(
            track_item_name, thumb_frame, ".png")
        thumb_path = os.path.join(staging_dir, thumb_file)

        thumbnail = track_item.thumbnail(thumb_frame).save(
            thumb_path,
            format='png'
        )
        self.log.debug(
            "__ thumb_path: `{}`, frame: `{}`".format(thumbnail, thumb_frame))

        self.log.info("Thumbnail was generated to: {}".format(thumb_path))
        thumb_representation = {
            'files': thumb_file,
            'stagingDir': staging_dir,
            'name': "thumbnail",
            'thumbnail': True,
            'ext': "png"
        }
        instance.data["representations"].append(
            thumb_representation)
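The thumbnail frame choice above is simply the midpoint of the track item's source range, and the file name is built from the item name and that frame. A minimal sketch of those two steps (hypothetical helper name, no Hiero required):

```python
import os


def thumbnail_target(name, source_in, duration, staging_dir):
    # Pick the middle frame of the source range and build the PNG path,
    # mirroring create_thumbnail() above.
    thumb_frame = int(source_in + (duration / 2))
    thumb_file = "{}thumbnail{}{}".format(name, thumb_frame, ".png")
    return thumb_frame, os.path.join(staging_dir, thumb_file)


frame, path = thumbnail_target("clip01", 100, 50, "/tmp/stage")
print(frame)  # 125
print(path)
```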
@@ -2,7 +2,7 @@ from pyblish import api
import openpype.api as pype


class VersionUpWorkfile(api.ContextPlugin):
class IntegrateVersionUpWorkfile(api.ContextPlugin):
    """Save as new workfile version"""

    order = api.IntegratorOrder + 10.1
@@ -1,221 +1,204 @@
from compiler.ast import flatten
from pyblish import api
import pyblish
import openpype
from openpype.hosts.hiero import api as phiero
import hiero
# from openpype.hosts.hiero.api import lib
# reload(lib)
# reload(phiero)
from openpype.hosts.hiero.otio import hiero_export

# # developer reload modules
from pprint import pformat


class PreCollectInstances(api.ContextPlugin):
class PrecollectInstances(pyblish.api.ContextPlugin):
    """Collect all Track items selection."""

    order = api.CollectorOrder - 0.509
    label = "Pre-collect Instances"
    order = pyblish.api.CollectorOrder - 0.59
    label = "Precollect Instances"
    hosts = ["hiero"]

    def process(self, context):
        track_items = phiero.get_track_items(
            selected=True, check_tagged=True, check_enabled=True)
        # only return enabled track items
        if not track_items:
            track_items = phiero.get_track_items(
                check_enabled=True, check_tagged=True)
        # get sequence and video tracks
        sequence = context.data["activeSequence"]
        tracks = sequence.videoTracks()

        # add collection to context
        tracks_effect_items = self.collect_sub_track_items(tracks)

        context.data["tracksEffectItems"] = tracks_effect_items

        otio_timeline = context.data["otioTimeline"]
        selected_timeline_items = phiero.get_track_items(
            selected=True, check_enabled=True, check_tagged=True)
        self.log.info(
            "Processing enabled track items: {}".format(len(track_items)))
            "Processing enabled track items: {}".format(
                selected_timeline_items))

        for track_item in selected_timeline_items:

        for _ti in track_items:
            data = dict()
            clip = _ti.source()
            clip_name = track_item.name()

            # get clips subtracks and annotations
            annotations = self.clip_annotations(clip)
            subtracks = self.clip_subtrack(_ti)
            self.log.debug("Annotations: {}".format(annotations))
            self.log.debug(">> Subtracks: {}".format(subtracks))
            # get openpype tag data
            tag_data = phiero.get_track_item_pype_data(track_item)
            self.log.debug("__ tag_data: {}".format(pformat(tag_data)))

            # get pype tag data
            tag_parsed_data = phiero.get_track_item_pype_data(_ti)
            # self.log.debug(pformat(tag_parsed_data))

            if not tag_parsed_data:
            if not tag_data:
                continue

            if tag_parsed_data.get("id") != "pyblish.avalon.instance":
            if tag_data.get("id") != "pyblish.avalon.instance":
                continue

            # solve handles length
            tag_data["handleStart"] = min(
                tag_data["handleStart"], int(track_item.handleInLength()))
            tag_data["handleEnd"] = min(
                tag_data["handleEnd"], int(track_item.handleOutLength()))

            # add tag data to instance data
            data.update({
                k: v for k, v in tag_parsed_data.items()
                k: v for k, v in tag_data.items()
                if k not in ("id", "applieswhole", "label")
            })

            asset = tag_parsed_data["asset"]
            subset = tag_parsed_data["subset"]
            review = tag_parsed_data.get("review")
            audio = tag_parsed_data.get("audio")

            # remove audio attribute from data
            data.pop("audio")
            asset = tag_data["asset"]
            subset = tag_data["subset"]

            # insert family into families
            family = tag_parsed_data["family"]
            families = [str(f) for f in tag_parsed_data["families"]]
            family = tag_data["family"]
            families = [str(f) for f in tag_data["families"]]
            families.insert(0, str(family))

            track = _ti.parent()
            media_source = _ti.source().mediaSource()
            source_path = media_source.firstpath()
            file_head = media_source.filenameHead()
            file_info = media_source.fileinfos().pop()
            source_first_frame = int(file_info.startFrame())

            # apply only for review and master track instance
            if review:
                families += ["review", "ftrack"]
            # form label
            label = asset
            if asset != clip_name:
                label += " ({})".format(clip_name)
            label += " {}".format(subset)
            label += " {}".format("[" + ", ".join(families) + "]")

            data.update({
                "name": "{} {} {}".format(asset, subset, families),
                "name": "{}_{}".format(asset, subset),
                "label": label,
                "asset": asset,
                "item": _ti,
                "item": track_item,
                "families": families,

                # tags
                "tags": _ti.tags(),

                # track item attributes
                "track": track.name(),
                "trackItem": track,

                # version data
                "versionData": {
                    "colorspace": _ti.sourceMediaColourTransform()
                },

                # source attribute
                "source": source_path,
                "sourceMedia": media_source,
                "sourcePath": source_path,
                "sourceFileHead": file_head,
                "sourceFirst": source_first_frame,

                # clip's effect
                "clipEffectItems": subtracks
                "publish": tag_data["publish"],
                "fps": context.data["fps"]
            })

            # otio clip data
            otio_data = self.get_otio_clip_instance_data(
                otio_timeline, track_item) or {}
            self.log.debug("__ otio_data: {}".format(pformat(otio_data)))
            data.update(otio_data)
            self.log.debug("__ data: {}".format(pformat(data)))

            # add resolution
            self.get_resolution_to_data(data, context)

            # create instance
            instance = context.create_instance(**data)

            # create shot instance for shot attributes create/update
            self.create_shot_instance(context, **data)

            self.log.info("Creating instance: {}".format(instance))
            self.log.debug(
                "_ instance.data: {}".format(pformat(instance.data)))

            if audio:
                a_data = dict()
    def get_resolution_to_data(self, data, context):
        assert data.get("otioClip"), "Missing `otioClip` data"

                # add tag data to instance data
                a_data.update({
                    k: v for k, v in tag_parsed_data.items()
                    if k not in ("id", "applieswhole", "label")
                })
        # solve source resolution option
        if data.get("sourceResolution", None):
            otio_clip_metadata = data[
                "otioClip"].media_reference.metadata
            data.update({
                "resolutionWidth": otio_clip_metadata[
                    "openpype.source.width"],
                "resolutionHeight": otio_clip_metadata[
                    "openpype.source.height"],
                "pixelAspect": otio_clip_metadata[
                    "openpype.source.pixelAspect"]
            })
        else:
            otio_tl_metadata = context.data["otioTimeline"].metadata
            data.update({
                "resolutionWidth": otio_tl_metadata["openpype.timeline.width"],
                "resolutionHeight": otio_tl_metadata[
                    "openpype.timeline.height"],
                "pixelAspect": otio_tl_metadata[
                    "openpype.timeline.pixelAspect"]
            })

                # create main attributes
                subset = "audioMain"
                family = "audio"
                families = ["clip", "ftrack"]
                families.insert(0, str(family))
    def create_shot_instance(self, context, **data):
        master_layer = data.get("heroTrack")
        hierarchy_data = data.get("hierarchyData")
        asset = data.get("asset")
        item = data.get("item")
        clip_name = item.name()

                name = "{} {} {}".format(asset, subset, families)
        if not master_layer:
            return

                a_data.update({
                    "name": name,
                    "subset": subset,
                    "asset": asset,
                    "family": family,
                    "families": families,
                    "item": _ti,
        if not hierarchy_data:
            return

                    # tags
                    "tags": _ti.tags(),
                })
        asset = data["asset"]
        subset = "shotMain"

                a_instance = context.create_instance(**a_data)
                self.log.info("Creating audio instance: {}".format(a_instance))
        # insert family into families
        family = "shot"

        # form label
        label = asset
        if asset != clip_name:
            label += " ({}) ".format(clip_name)
        label += " {}".format(subset)
        label += " [{}]".format(family)

        data.update({
            "name": "{}_{}".format(asset, subset),
            "label": label,
            "subset": subset,
            "asset": asset,
            "family": family,
            "families": []
        })

        instance = context.create_instance(**data)
        self.log.info("Creating instance: {}".format(instance))
        self.log.debug(
            "_ instance.data: {}".format(pformat(instance.data)))

    def get_otio_clip_instance_data(self, otio_timeline, track_item):
        """
        Return otio objects for timeline, track and clip

        Args:
            timeline_item_data (dict): timeline_item_data from list returned by
                resolve.get_current_timeline_items()
            otio_timeline (otio.schema.Timeline): otio object

        Returns:
            dict: otio clip object

        """
        ti_track_name = track_item.parent().name()
        timeline_range = self.create_otio_time_range_from_timeline_item_data(
            track_item)
        for otio_clip in otio_timeline.each_clip():
            track_name = otio_clip.parent().name
            parent_range = otio_clip.range_in_parent()
            if ti_track_name not in track_name:
                continue
            if otio_clip.name not in track_item.name():
                continue
            if openpype.lib.is_overlapping_otio_ranges(
                    parent_range, timeline_range, strict=True):

                # add pypedata marker to otio_clip metadata
                for marker in otio_clip.markers:
                    if phiero.pype_tag_name in marker.name:
                        otio_clip.metadata.update(marker.metadata)
                return {"otioClip": otio_clip}

        return None

    @staticmethod
    def clip_annotations(clip):
        """
        Returns list of Clip's hiero.core.Annotation
        """
        annotations = []
        subTrackItems = flatten(clip.subTrackItems())
        annotations += [item for item in subTrackItems if isinstance(
            item, hiero.core.Annotation)]
        return annotations
    def create_otio_time_range_from_timeline_item_data(track_item):
        timeline = phiero.get_current_sequence()
        frame_start = int(track_item.timelineIn())
        frame_duration = int(track_item.sourceDuration())
        fps = timeline.framerate().toFloat()

    @staticmethod
    def clip_subtrack(clip):
        """
        Returns list of Clip's hiero.core.SubTrackItem
        """
        subtracks = []
        subTrackItems = flatten(clip.parent().subTrackItems())
        for item in subTrackItems:
            # avoid all annotations
            if isinstance(item, hiero.core.Annotation):
                continue
            # # avoid all not enabled
            if not item.isEnabled():
                continue
            subtracks.append(item)
        return subtracks

    @staticmethod
    def collect_sub_track_items(tracks):
        """
        Returns dictionary with track index as key and list of subtracks
        """
        # collect all subtrack items
        sub_track_items = dict()
        for track in tracks:
            items = track.items()

            # skip if no clips on track > need track with effect only
            if items:
                continue

            # skip all disabled tracks
            if not track.isEnabled():
                continue

            track_index = track.trackIndex()
            _sub_track_items = flatten(track.subTrackItems())

            # continue only if any subtrack items are collected
            if len(_sub_track_items) < 1:
                continue

            enabled_sti = list()
            # loop all found subtrack items and check if they are enabled
            for _sti in _sub_track_items:
                # checking if not enabled
                if not _sti.isEnabled():
                    continue
                if isinstance(_sti, hiero.core.Annotation):
                    continue
                # collect the subtrack item
                enabled_sti.append(_sti)

            # continue only if any subtrack items are collected
            if len(enabled_sti) < 1:
                continue

            # add collection of subtrackitems to dict
            sub_track_items[track_index] = enabled_sti

        return sub_track_items
        return hiero_export.create_otio_time_range(
            frame_start, frame_duration, fps)
|
|
|
|||
|
|
@@ -1,52 +1,57 @@
import os
import pyblish.api
import hiero.ui
from openpype.hosts.hiero import api as phiero
from avalon import api as avalon
from pprint import pformat
from openpype.hosts.hiero.otio import hiero_export
from Qt.QtGui import QPixmap
import tempfile


class PreCollectWorkfile(pyblish.api.ContextPlugin):
class PrecollectWorkfile(pyblish.api.ContextPlugin):
    """Inject the current working file into context"""

    label = "Pre-collect Workfile"
    order = pyblish.api.CollectorOrder - 0.51
    label = "Precollect Workfile"
    order = pyblish.api.CollectorOrder - 0.6

    def process(self, context):

        asset = avalon.Session["AVALON_ASSET"]
        subset = "workfile"

        project = phiero.get_current_project()
        active_sequence = phiero.get_current_sequence()
        video_tracks = active_sequence.videoTracks()
        audio_tracks = active_sequence.audioTracks()
        current_file = project.path()
        staging_dir = os.path.dirname(current_file)
        base_name = os.path.basename(current_file)
        active_timeline = hiero.ui.activeSequence()
        fps = active_timeline.framerate().toFloat()

        # get workfile's colorspace properties
        _clrs = {}
        _clrs["useOCIOEnvironmentOverride"] = project.useOCIOEnvironmentOverride()  # noqa
        _clrs["lutSetting16Bit"] = project.lutSetting16Bit()
        _clrs["lutSetting8Bit"] = project.lutSetting8Bit()
        _clrs["lutSettingFloat"] = project.lutSettingFloat()
        _clrs["lutSettingLog"] = project.lutSettingLog()
        _clrs["lutSettingViewer"] = project.lutSettingViewer()
        _clrs["lutSettingWorkingSpace"] = project.lutSettingWorkingSpace()
        _clrs["lutUseOCIOForExport"] = project.lutUseOCIOForExport()
        _clrs["ocioConfigName"] = project.ocioConfigName()
        _clrs["ocioConfigPath"] = project.ocioConfigPath()

        # adding otio timeline to context
        otio_timeline = hiero_export.create_otio_timeline()

        # set main project attributes to context
        context.data["activeProject"] = project
        context.data["activeSequence"] = active_sequence
        context.data["videoTracks"] = video_tracks
        context.data["audioTracks"] = audio_tracks
        context.data["currentFile"] = current_file
        context.data["colorspace"] = _clrs

        # get workfile thumbnail paths
        tmp_staging = tempfile.mkdtemp(prefix="pyblish_tmp_")
        thumbnail_name = "workfile_thumbnail.png"
        thumbnail_path = os.path.join(tmp_staging, thumbnail_name)

        self.log.info("currentFile: {}".format(current_file))

        # search for all windows with name of actual sequence
        _windows = [w for w in hiero.ui.windowManager().windows()
                    if active_timeline.name() in w.windowTitle()]

        # export window to thumbnail path
        QPixmap.grabWidget(_windows[-1]).save(thumbnail_path, 'png')

        # thumbnail
        thumb_representation = {
            'files': thumbnail_name,
            'stagingDir': tmp_staging,
            'name': "thumbnail",
            'thumbnail': True,
            'ext': "png"
        }

        # get workfile paths
        curent_file = project.path()
        staging_dir, base_name = os.path.split(curent_file)

        # creating workfile representation
        representation = {
        workfile_representation = {
            'name': 'hrox',
            'ext': 'hrox',
            'files': base_name,

@@ -59,16 +64,21 @@ class PreCollectWorkfile(pyblish.api.ContextPlugin):
            "subset": "{}{}".format(asset, subset.capitalize()),
            "item": project,
            "family": "workfile",

            # version data
            "versionData": {
                "colorspace": _clrs
            },

            # source attribute
            "sourcePath": current_file,
            "representations": [representation]
            "representations": [workfile_representation, thumb_representation]
        }

        # create instance with workfile
        instance = context.create_instance(**instance_data)

        # update context with main project attributes
        context_data = {
            "activeProject": project,
            "otioTimeline": otio_timeline,
            "currentFile": curent_file,
            "fps": fps,
        }
        context.data.update(context_data)

        self.log.info("Creating instance: {}".format(instance))
        self.log.debug("__ instance.data: {}".format(pformat(instance.data)))
        self.log.debug("__ context_data: {}".format(pformat(context_data)))
@@ -5,7 +5,7 @@ class CollectFrameRanges(pyblish.api.InstancePlugin):
    """ Collect all frame ranges.
    """

    order = pyblish.api.CollectorOrder
    order = pyblish.api.CollectorOrder - 0.1
    label = "Collect Frame Ranges"
    hosts = ["hiero"]
    families = ["clip", "effect"]
@@ -39,8 +39,8 @@ class CollectHierarchy(pyblish.api.ContextPlugin):
            if not set(self.families).intersection(families):
                continue

            # exclude if not masterLayer True
            if not instance.data.get("masterLayer"):
            # exclude if not heroTrack True
            if not instance.data.get("heroTrack"):
                continue

            # update families to include `shot` for hierarchy integration
@@ -29,7 +29,7 @@ class CollectReview(api.InstancePlugin):
            Exception: description

        """
        review_track = instance.data.get("review")
        review_track = instance.data.get("reviewTrack")
        video_tracks = instance.context.data["videoTracks"]
        for track in video_tracks:
            if review_track not in track.name():
@@ -132,7 +132,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):
        ).format(**locals())

        self.log.debug("ffprob_cmd: {}".format(ffprob_cmd))
        audio_check_output = openpype.api.subprocess(ffprob_cmd)
        audio_check_output = openpype.api.run_subprocess(ffprob_cmd)
        self.log.debug(
            "audio_check_output: {}".format(audio_check_output))

@@ -167,7 +167,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):

        # try to get video native resolution data
        try:
            resolution_output = openpype.api.subprocess((
            resolution_output = openpype.api.run_subprocess((
                "\"{ffprobe_path}\" -i \"{full_input_path}\""
                " -v error "
                "-select_streams v:0 -show_entries "
@@ -280,7 +280,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):

        # run subprocess
        self.log.debug("Executing: {}".format(subprcs_cmd))
        output = openpype.api.subprocess(subprcs_cmd)
        output = openpype.api.run_subprocess(subprcs_cmd)
        self.log.debug("Output: {}".format(output))

        repre_new = {
@@ -0,0 +1,223 @@
from compiler.ast import flatten
from pyblish import api
from openpype.hosts.hiero import api as phiero
import hiero
# from openpype.hosts.hiero.api import lib
# reload(lib)
# reload(phiero)


class PreCollectInstances(api.ContextPlugin):
    """Collect all Track items selection."""

    order = api.CollectorOrder - 0.509
    label = "Pre-collect Instances"
    hosts = ["hiero"]

    def process(self, context):
        track_items = phiero.get_track_items(
            selected=True, check_tagged=True, check_enabled=True)
        # only return enabled track items
        if not track_items:
            track_items = phiero.get_track_items(
                check_enabled=True, check_tagged=True)
        # get sequence and video tracks
        sequence = context.data["activeSequence"]
        tracks = sequence.videoTracks()

        # add collection to context
        tracks_effect_items = self.collect_sub_track_items(tracks)

        context.data["tracksEffectItems"] = tracks_effect_items

        self.log.info(
            "Processing enabled track items: {}".format(len(track_items)))

        for _ti in track_items:
            data = {}
            clip = _ti.source()

            # get clip's subtracks and annotations
            annotations = self.clip_annotations(clip)
            subtracks = self.clip_subtrack(_ti)
            self.log.debug("Annotations: {}".format(annotations))
            self.log.debug(">> Subtracks: {}".format(subtracks))

            # get pype tag data
            tag_parsed_data = phiero.get_track_item_pype_data(_ti)
            # self.log.debug(pformat(tag_parsed_data))

            if not tag_parsed_data:
                continue

            if tag_parsed_data.get("id") != "pyblish.avalon.instance":
                continue
            # add tag data to instance data
            data.update({
                k: v for k, v in tag_parsed_data.items()
                if k not in ("id", "applieswhole", "label")
            })

            asset = tag_parsed_data["asset"]
            subset = tag_parsed_data["subset"]
            review_track = tag_parsed_data.get("reviewTrack")
            hiero_track = tag_parsed_data.get("heroTrack")
            audio = tag_parsed_data.get("audio")

            # remove audio attribute from data
            data.pop("audio")

            # insert family into families
            family = tag_parsed_data["family"]
            families = [str(f) for f in tag_parsed_data["families"]]
            families.insert(0, str(family))

            track = _ti.parent()
            media_source = _ti.source().mediaSource()
            source_path = media_source.firstpath()
            file_head = media_source.filenameHead()
            file_info = media_source.fileinfos().pop()
            source_first_frame = int(file_info.startFrame())

            # apply only for review and hero track instance
            if review_track and hiero_track:
                families += ["review", "ftrack"]

            data.update({
                "name": "{} {} {}".format(asset, subset, families),
                "asset": asset,
                "item": _ti,
                "families": families,

                # tags
                "tags": _ti.tags(),

                # track item attributes
                "track": track.name(),
                "trackItem": track,
                "reviewTrack": review_track,

                # version data
                "versionData": {
                    "colorspace": _ti.sourceMediaColourTransform()
                },

                # source attribute
                "source": source_path,
                "sourceMedia": media_source,
                "sourcePath": source_path,
                "sourceFileHead": file_head,
                "sourceFirst": source_first_frame,

                # clip's effect
                "clipEffectItems": subtracks
            })

            instance = context.create_instance(**data)

            self.log.info("Creating instance.data: {}".format(instance.data))

            if audio:
                a_data = dict()

                # add tag data to instance data
                a_data.update({
                    k: v for k, v in tag_parsed_data.items()
                    if k not in ("id", "applieswhole", "label")
                })

                # create main attributes
                subset = "audioMain"
                family = "audio"
                families = ["clip", "ftrack"]
                families.insert(0, str(family))

                name = "{} {} {}".format(asset, subset, families)

                a_data.update({
                    "name": name,
                    "subset": subset,
                    "asset": asset,
                    "family": family,
                    "families": families,
                    "item": _ti,

                    # tags
                    "tags": _ti.tags(),
                })

                a_instance = context.create_instance(**a_data)
                self.log.info("Creating audio instance: {}".format(a_instance))

    @staticmethod
    def clip_annotations(clip):
        """
        Return a list of the Clip's hiero.core.Annotation items
        """
        annotations = []
        subTrackItems = flatten(clip.subTrackItems())
        annotations += [item for item in subTrackItems if isinstance(
            item, hiero.core.Annotation)]
        return annotations

    @staticmethod
    def clip_subtrack(clip):
        """
        Return a list of the Clip's hiero.core.SubTrackItem items
        """
        subtracks = []
        subTrackItems = flatten(clip.parent().subTrackItems())
        for item in subTrackItems:
            # skip all annotations
            if isinstance(item, hiero.core.Annotation):
                continue
            # skip all disabled items
            if not item.isEnabled():
                continue
            subtracks.append(item)
        return subtracks

    @staticmethod
    def collect_sub_track_items(tracks):
        """
        Return a dictionary with track index as key and list of subtracks
        """
        # collect all subtrack items
        sub_track_items = dict()
        for track in tracks:
            items = track.items()

            # skip tracks that contain clips - only effect-only tracks are needed
            if items:
                continue

            # skip all disabled tracks
            if not track.isEnabled():
                continue

            track_index = track.trackIndex()
            _sub_track_items = flatten(track.subTrackItems())

            # continue only if any subtrack items were collected
            if len(_sub_track_items) < 1:
                continue

            enabled_sti = list()
            # loop all found subtrack items and check if they are enabled
            for _sti in _sub_track_items:
                # skip disabled items
                if not _sti.isEnabled():
                    continue
                if isinstance(_sti, hiero.core.Annotation):
                    continue
                # collect the subtrack item
                enabled_sti.append(_sti)

            # continue only if any enabled subtrack items were collected
            if len(enabled_sti) < 1:
                continue

            # add collection of subtrack items to dict
            sub_track_items[track_index] = enabled_sti

        return sub_track_items
@@ -0,0 +1,74 @@
import os
import pyblish.api
from openpype.hosts.hiero import api as phiero
from avalon import api as avalon


class PreCollectWorkfile(pyblish.api.ContextPlugin):
    """Inject the current working file into context"""

    label = "Pre-collect Workfile"
    order = pyblish.api.CollectorOrder - 0.51

    def process(self, context):
        asset = avalon.Session["AVALON_ASSET"]
        subset = "workfile"

        project = phiero.get_current_project()
        active_sequence = phiero.get_current_sequence()
        video_tracks = active_sequence.videoTracks()
        audio_tracks = active_sequence.audioTracks()
        current_file = project.path()
        staging_dir = os.path.dirname(current_file)
        base_name = os.path.basename(current_file)

        # get workfile's colorspace properties
        _clrs = {}
        _clrs["useOCIOEnvironmentOverride"] = project.useOCIOEnvironmentOverride()  # noqa
        _clrs["lutSetting16Bit"] = project.lutSetting16Bit()
        _clrs["lutSetting8Bit"] = project.lutSetting8Bit()
        _clrs["lutSettingFloat"] = project.lutSettingFloat()
        _clrs["lutSettingLog"] = project.lutSettingLog()
        _clrs["lutSettingViewer"] = project.lutSettingViewer()
        _clrs["lutSettingWorkingSpace"] = project.lutSettingWorkingSpace()
        _clrs["lutUseOCIOForExport"] = project.lutUseOCIOForExport()
        _clrs["ocioConfigName"] = project.ocioConfigName()
        _clrs["ocioConfigPath"] = project.ocioConfigPath()

        # set main project attributes to context
        context.data["activeProject"] = project
        context.data["activeSequence"] = active_sequence
        context.data["videoTracks"] = video_tracks
        context.data["audioTracks"] = audio_tracks
        context.data["currentFile"] = current_file
        context.data["colorspace"] = _clrs

        self.log.info("currentFile: {}".format(current_file))

        # creating workfile representation
        representation = {
            'name': 'hrox',
            'ext': 'hrox',
            'files': base_name,
            "stagingDir": staging_dir,
        }

        instance_data = {
            "name": "{}_{}".format(asset, subset),
            "asset": asset,
            "subset": "{}{}".format(asset, subset.capitalize()),
            "item": project,
            "family": "workfile",

            # version data
            "versionData": {
                "colorspace": _clrs
            },

            # source attribute
            "sourcePath": current_file,
            "representations": [representation]
        }

        instance = context.create_instance(**instance_data)
        self.log.info("Creating instance: {}".format(instance))
@@ -1,338 +1,28 @@
# MIT License
#
# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

__author__ = "Daniel Flehner Heen"
__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]

import os
import re
import hiero.core
from hiero.core import util

import opentimelineio as otio


marker_color_map = {
    "magenta": otio.schema.MarkerColor.MAGENTA,
    "red": otio.schema.MarkerColor.RED,
    "yellow": otio.schema.MarkerColor.YELLOW,
    "green": otio.schema.MarkerColor.GREEN,
    "cyan": otio.schema.MarkerColor.CYAN,
    "blue": otio.schema.MarkerColor.BLUE,
}

from openpype.hosts.hiero.otio import hiero_export


class OTIOExportTask(hiero.core.TaskBase):

    def __init__(self, initDict):
        """Initialize"""
        hiero.core.TaskBase.__init__(self, initDict)
        self.otio_timeline = None

    def name(self):
        return str(type(self))

    def get_rate(self, item):
        if not hasattr(item, 'framerate'):
            item = item.sequence()

        num, den = item.framerate().toRational()
        rate = float(num) / float(den)

        if rate.is_integer():
            return rate

        return round(rate, 2)

    def get_clip_ranges(self, trackitem):
        # Get rate from source or sequence
        if trackitem.source().mediaSource().hasVideo():
            rate_item = trackitem.source()

        else:
            rate_item = trackitem.sequence()

        source_rate = self.get_rate(rate_item)

        # Reversed video/audio
        if trackitem.playbackSpeed() < 0:
            start = trackitem.sourceOut()

        else:
            start = trackitem.sourceIn()

        source_start_time = otio.opentime.RationalTime(
            start,
            source_rate
        )
        source_duration = otio.opentime.RationalTime(
            trackitem.duration(),
            source_rate
        )

        source_range = otio.opentime.TimeRange(
            start_time=source_start_time,
            duration=source_duration
        )

        hiero_clip = trackitem.source()

        available_range = None
        if hiero_clip.mediaSource().isMediaPresent():
            start_time = otio.opentime.RationalTime(
                hiero_clip.mediaSource().startTime(),
                source_rate
            )
            duration = otio.opentime.RationalTime(
                hiero_clip.mediaSource().duration(),
                source_rate
            )
            available_range = otio.opentime.TimeRange(
                start_time=start_time,
                duration=duration
            )

        return source_range, available_range

    def add_gap(self, trackitem, otio_track, prev_out):
        gap_length = trackitem.timelineIn() - prev_out
        if prev_out != 0:
            gap_length -= 1

        rate = self.get_rate(trackitem.sequence())
        gap = otio.opentime.TimeRange(
            duration=otio.opentime.RationalTime(
                gap_length,
                rate
            )
        )
        otio_gap = otio.schema.Gap(source_range=gap)
        otio_track.append(otio_gap)

    def get_marker_color(self, tag):
        icon = tag.icon()
        pat = r'icons:Tag(?P<color>\w+)\.\w+'

        res = re.search(pat, icon)
        if res:
            color = res.groupdict().get('color')
            if color.lower() in marker_color_map:
                return marker_color_map[color.lower()]

        return otio.schema.MarkerColor.RED

    def add_markers(self, hiero_item, otio_item):
        for tag in hiero_item.tags():
            if not tag.visible():
                continue

            if tag.name() == 'Copy':
                # Hiero adds this tag to a lot of clips
                continue

            frame_rate = self.get_rate(hiero_item)

            marked_range = otio.opentime.TimeRange(
                start_time=otio.opentime.RationalTime(
                    tag.inTime(),
                    frame_rate
                ),
                duration=otio.opentime.RationalTime(
                    int(tag.metadata().dict().get('tag.length', '0')),
                    frame_rate
                )
            )

            metadata = dict(
                Hiero=tag.metadata().dict()
            )
            # Store the source item for future import assignment
            metadata['Hiero']['source_type'] = hiero_item.__class__.__name__

            marker = otio.schema.Marker(
                name=tag.name(),
                color=self.get_marker_color(tag),
                marked_range=marked_range,
                metadata=metadata
            )

            otio_item.markers.append(marker)

    def add_clip(self, trackitem, otio_track, itemindex):
        hiero_clip = trackitem.source()

        # Add Gap if needed
        if itemindex == 0:
            prev_item = trackitem

        else:
            prev_item = trackitem.parent().items()[itemindex - 1]

        clip_diff = trackitem.timelineIn() - prev_item.timelineOut()

        if itemindex == 0 and trackitem.timelineIn() > 0:
            self.add_gap(trackitem, otio_track, 0)

        elif itemindex and clip_diff != 1:
            self.add_gap(trackitem, otio_track, prev_item.timelineOut())

        # Create Clip
        source_range, available_range = self.get_clip_ranges(trackitem)

        otio_clip = otio.schema.Clip(
            name=trackitem.name(),
            source_range=source_range
        )

        # Add media reference
        media_reference = otio.schema.MissingReference()
        if hiero_clip.mediaSource().isMediaPresent():
            source = hiero_clip.mediaSource()
            first_file = source.fileinfos()[0]
            path = first_file.filename()

            if "%" in path:
                path = re.sub(r"%\d+d", "%d", path)
            if "#" in path:
                path = re.sub(r"#+", "%d", path)

            media_reference = otio.schema.ExternalReference(
                target_url=u'{}'.format(path),
                available_range=available_range
            )

        otio_clip.media_reference = media_reference

        # Add Time Effects
        playbackspeed = trackitem.playbackSpeed()
        if playbackspeed != 1:
            if playbackspeed == 0:
                time_effect = otio.schema.FreezeFrame()

            else:
                time_effect = otio.schema.LinearTimeWarp(
                    time_scalar=playbackspeed
                )
            otio_clip.effects.append(time_effect)

        # Add tags as markers
        if self._preset.properties()["includeTags"]:
            self.add_markers(trackitem, otio_clip)
            self.add_markers(trackitem.source(), otio_clip)

        otio_track.append(otio_clip)

        # Add Transition if needed
        if trackitem.inTransition() or trackitem.outTransition():
            self.add_transition(trackitem, otio_track)

    def add_transition(self, trackitem, otio_track):
        transitions = []

        if trackitem.inTransition():
            if trackitem.inTransition().alignment().name == 'kFadeIn':
                transitions.append(trackitem.inTransition())

        if trackitem.outTransition():
            transitions.append(trackitem.outTransition())

        for transition in transitions:
            alignment = transition.alignment().name

            if alignment == 'kFadeIn':
                in_offset_frames = 0
                out_offset_frames = (
                    transition.timelineOut() - transition.timelineIn()
                ) + 1

            elif alignment == 'kFadeOut':
                in_offset_frames = (
                    trackitem.timelineOut() - transition.timelineIn()
                ) + 1
                out_offset_frames = 0

            elif alignment == 'kDissolve':
                in_offset_frames = (
                    transition.inTrackItem().timelineOut() -
                    transition.timelineIn()
                )
                out_offset_frames = (
                    transition.timelineOut() -
                    transition.outTrackItem().timelineIn()
                )

            else:
                # kUnknown transition is ignored
                continue

            rate = trackitem.source().framerate().toFloat()
            in_time = otio.opentime.RationalTime(in_offset_frames, rate)
            out_time = otio.opentime.RationalTime(out_offset_frames, rate)

            otio_transition = otio.schema.Transition(
                name=alignment,  # Consider placing Hiero name in metadata
                transition_type=otio.schema.TransitionTypes.SMPTE_Dissolve,
                in_offset=in_time,
                out_offset=out_time
            )

            if alignment == 'kFadeIn':
                otio_track.insert(-1, otio_transition)

            else:
                otio_track.append(otio_transition)

    def add_tracks(self):
        for track in self._sequence.items():
            if isinstance(track, hiero.core.AudioTrack):
                kind = otio.schema.TrackKind.Audio

            else:
                kind = otio.schema.TrackKind.Video

            otio_track = otio.schema.Track(name=track.name(), kind=kind)

            for itemindex, trackitem in enumerate(track):
                if isinstance(trackitem.source(), hiero.core.Clip):
                    self.add_clip(trackitem, otio_track, itemindex)

            self.otio_timeline.tracks.append(otio_track)

        # Add tags as markers
        if self._preset.properties()["includeTags"]:
            self.add_markers(self._sequence, self.otio_timeline.tracks)

    def create_OTIO(self):
        self.otio_timeline = otio.schema.Timeline()

        # Set global start time based on sequence
        self.otio_timeline.global_start_time = otio.opentime.RationalTime(
            self._sequence.timecodeStart(),
            self._sequence.framerate().toFloat()
        )
        self.otio_timeline.name = self._sequence.name()

        self.add_tracks()

    def startTask(self):
        self.create_OTIO()
        self.otio_timeline = hiero_export.create_otio_timeline()

    def taskStep(self):
        return False
@@ -350,7 +40,7 @@ class OTIOExportTask(hiero.core.TaskBase):
            util.filesystem.makeDirs(dirname)

            # write otio file
            otio.adapters.write_to_file(self.otio_timeline, exportPath)
            hiero_export.write_to_file(self.otio_timeline, exportPath)

        # Catch all exceptions and log error
        except Exception as e:
@@ -370,7 +60,7 @@ class OTIOExportPreset(hiero.core.TaskPresetBase):
        """Initialise presets to default values"""
        hiero.core.TaskPresetBase.__init__(self, OTIOExportTask, name)

        self.properties()["includeTags"] = True
        self.properties()["includeTags"] = hiero_export.include_tags = True
        self.properties().update(properties)

    def supportedItems(self):
@@ -1,3 +1,9 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

__author__ = "Daniel Flehner Heen"
__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]

import hiero.ui
import OTIOExportTask
@@ -14,6 +20,7 @@ except ImportError:

    FormLayout = QFormLayout  # lint:ok

from openpype.hosts.hiero.otio import hiero_export


class OTIOExportUI(hiero.ui.TaskUIBase):
    def __init__(self, preset):
@@ -27,7 +34,7 @@ class OTIOExportUI(hiero.ui.TaskUIBase):

    def includeMarkersCheckboxChanged(self, state):
        # Slot to handle change of checkbox state
        self._preset.properties()["includeTags"] = state == QtCore.Qt.Checked
        hiero_export.include_tags = state == QtCore.Qt.Checked

    def populateUI(self, widget, exportTemplate):
        layout = widget.layout()
@@ -1,25 +1,3 @@
# MIT License
#
# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from OTIOExportTask import OTIOExportTask
from OTIOExportUI import OTIOExportUI
@@ -1,42 +1,91 @@
-# MIT License
-#
-# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
 #!/usr/bin/env python
 # -*- coding: utf-8 -*-

 __author__ = "Daniel Flehner Heen"
 __credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]

 import hiero.ui
 import hiero.core

-from otioimporter.OTIOImport import load_otio
+import PySide2.QtWidgets as qw
+
+from openpype.hosts.hiero.otio.hiero_import import load_otio
+
+
+class OTIOProjectSelect(qw.QDialog):
+
+    def __init__(self, projects, *args, **kwargs):
+        super(OTIOProjectSelect, self).__init__(*args, **kwargs)
+        self.setWindowTitle('Please select active project')
+        self.layout = qw.QVBoxLayout()
+
+        self.label = qw.QLabel(
+            'Unable to determine which project to import sequence to.\n'
+            'Please select one.'
+        )
+        self.layout.addWidget(self.label)
+
+        self.projects = qw.QComboBox()
+        self.projects.addItems(map(lambda p: p.name(), projects))
+        self.layout.addWidget(self.projects)
+
+        QBtn = qw.QDialogButtonBox.Ok | qw.QDialogButtonBox.Cancel
+        self.buttonBox = qw.QDialogButtonBox(QBtn)
+        self.buttonBox.accepted.connect(self.accept)
+        self.buttonBox.rejected.connect(self.reject)
+
+        self.layout.addWidget(self.buttonBox)
+        self.setLayout(self.layout)
+
+
+def get_sequence(view):
+    sequence = None
+    if isinstance(view, hiero.ui.TimelineEditor):
+        sequence = view.sequence()
+
+    elif isinstance(view, hiero.ui.BinView):
+        for item in view.selection():
+            if not hasattr(item, 'activeItem'):
+                continue
+
+            if isinstance(item.activeItem(), hiero.core.Sequence):
+                sequence = item.activeItem()
+
+    return sequence


 def OTIO_menu_action(event):
-    otio_action = hiero.ui.createMenuAction(
-        'Import OTIO',
+    # Menu actions
+    otio_import_action = hiero.ui.createMenuAction(
+        'Import OTIO...',
         open_otio_file,
         icon=None
     )
-    hiero.ui.registerAction(otio_action)
+
+    otio_add_track_action = hiero.ui.createMenuAction(
+        'New Track(s) from OTIO...',
+        open_otio_file,
+        icon=None
+    )
+    otio_add_track_action.setEnabled(False)
+
+    hiero.ui.registerAction(otio_import_action)
+    hiero.ui.registerAction(otio_add_track_action)
+
+    view = hiero.ui.currentContextMenuView()
+
+    if view:
+        sequence = get_sequence(view)
+        if sequence:
+            otio_add_track_action.setEnabled(True)

     for action in event.menu.actions():
         if action.text() == 'Import':
-            action.menu().addAction(otio_action)
-            break
+            action.menu().addAction(otio_import_action)
+            action.menu().addAction(otio_add_track_action)
+
+        elif action.text() == 'New Track':
+            action.menu().addAction(otio_add_track_action)


 def open_otio_file():
@@ -45,8 +94,39 @@ def open_otio_file():
         pattern='*.otio',
         requiredExtension='.otio'
     )

+    selection = None
+    sequence = None
+
+    view = hiero.ui.currentContextMenuView()
+    if view:
+        sequence = get_sequence(view)
+        selection = view.selection()
+
+    if sequence:
+        project = sequence.project()
+
+    elif selection:
+        project = selection[0].project()
+
+    elif len(hiero.core.projects()) > 1:
+        dialog = OTIOProjectSelect(hiero.core.projects())
+        if dialog.exec_():
+            project = hiero.core.projects()[dialog.projects.currentIndex()]
+
+        else:
+            bar = hiero.ui.mainWindow().statusBar()
+            bar.showMessage(
+                'OTIO Import aborted by user',
+                timeout=3000
+            )
+            return
+
+    else:
+        project = hiero.core.projects()[-1]
+
     for otio_file in files:
-        load_otio(otio_file)
+        load_otio(otio_file, project, sequence)


 # HieroPlayer is quite limited and can't create transitions etc.
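The import entry point above resolves the target project through a chain of fallbacks: the active sequence's project, then the first selected item's project, then a user prompt, and finally the most recent project. A minimal stand-in sketch of that resolution order, with plain dicts in place of Hiero's API (all names here are hypothetical):

```python
def resolve_project(sequence=None, selection=None, projects=(), prompt=None):
    """Pick a project the way open_otio_file() does (sketch)."""
    if sequence is not None:
        return sequence["project"]          # project owning the timeline
    if selection:
        return selection[0]["project"]      # project of first selected item
    if len(projects) > 1 and prompt:
        return prompt(projects)             # ask the user to choose
    return projects[-1] if projects else None  # fall back to last project


projects = [{"name": "A"}, {"name": "B"}]
seq = {"project": projects[0]}
print(resolve_project(sequence=seq, projects=projects)["name"])  # A
print(resolve_project(projects=[projects[1]])["name"])           # B
```

The order matters: an explicit timeline context always wins over the prompt, so the dialog only appears when nothing else disambiguates.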
@@ -55,3 +135,7 @@ if not hiero.core.isHieroPlayer():
         "kShowContextMenu/kBin",
         OTIO_menu_action
     )
+    hiero.core.events.registerInterest(
+        "kShowContextMenu/kTimeline",
+        OTIO_menu_action
+    )
@@ -210,7 +210,7 @@ def validate_fps():

     if current_fps != fps:

-        from ...widgets import popup
+        from openpype.widgets import popup

         # Find main window
         parent = hou.ui.mainQtWindow()
@@ -219,8 +219,8 @@ def validate_fps():
     else:
         dialog = popup.Popup2(parent=parent)
         dialog.setModal(True)
-        dialog.setWindowTitle("Maya scene not in line with project")
-        dialog.setMessage("The FPS is out of sync, please fix")
+        dialog.setWindowTitle("Houdini scene not in line with project")
+        dialog.setMessage("The FPS is out of sync, please fix it")

         # Set new text for button (add optional argument for the popup?)
         toggle = dialog.widgets["toggle"]
@@ -1872,7 +1872,7 @@ def set_context_settings():

     # Set project fps
     fps = asset_data.get("fps", project_data.get("fps", 25))
-    api.Session["AVALON_FPS"] = fps
+    api.Session["AVALON_FPS"] = str(fps)
     set_scene_fps(fps)

     # Set project resolution
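The fps lookup above falls back from the asset data to the project data and finally to a default of 25, and the fix above stores it in the session as a string. A small sketch of that chained `dict.get` fallback (the dicts are hypothetical stand-ins for the asset and project documents):

```python
def get_fps(asset_data, project_data, default=25):
    # Asset-level fps wins, then project-level, then the default.
    return asset_data.get("fps", project_data.get("fps", default))


print(get_fps({"fps": 24}, {"fps": 25}))  # 24
print(get_fps({}, {"fps": 30}))           # 30
print(get_fps({}, {}))                    # 25
# The session stores the value as a string, e.g. str(get_fps({}, {})).
```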
@@ -12,6 +12,7 @@ class CreateLook(plugin.Creator):
     family = "look"
     icon = "paint-brush"
     defaults = ['Main']
+    make_tx = True

     def __init__(self, *args, **kwargs):
         super(CreateLook, self).__init__(*args, **kwargs)
@@ -19,7 +20,7 @@ class CreateLook(plugin.Creator):
         self.data["renderlayer"] = lib.get_current_renderlayer()

         # Whether to automatically convert the textures to .tx upon publish.
-        self.data["maketx"] = True
+        self.data["maketx"] = self.make_tx

         # Enable users to force a copy.
         self.data["forceCopy"] = False
openpype/hosts/maya/plugins/create/create_redshift_proxy.py (new file, +23)
@@ -0,0 +1,23 @@
+# -*- coding: utf-8 -*-
+"""Creator of Redshift proxy subset types."""
+
+from openpype.hosts.maya.api import plugin, lib
+
+
+class CreateRedshiftProxy(plugin.Creator):
+    """Create instance of Redshift Proxy subset."""
+
+    name = "redshiftproxy"
+    label = "Redshift Proxy"
+    family = "redshiftproxy"
+    icon = "gears"
+
+    def __init__(self, *args, **kwargs):
+        super(CreateRedshiftProxy, self).__init__(*args, **kwargs)
+
+        animation_data = lib.collect_animation_data()
+
+        self.data["animation"] = False
+        self.data["proxyFrameStart"] = animation_data["frameStart"]
+        self.data["proxyFrameEnd"] = animation_data["frameEnd"]
+        self.data["proxyFrameStep"] = animation_data["step"]
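The creator above copies the collected animation range into proxy-prefixed instance keys. A sketch of that remapping as a plain dict transform (the input keys follow `collect_animation_data`'s names as shown in the diff; the sample values are illustrative):

```python
def proxy_frame_data(animation_data):
    # Remap frameStart/frameEnd/step to the proxy* keys used by the creator;
    # animation export is off by default.
    return {
        "animation": False,
        "proxyFrameStart": animation_data["frameStart"],
        "proxyFrameEnd": animation_data["frameEnd"],
        "proxyFrameStep": animation_data["step"],
    }


data = proxy_frame_data({"frameStart": 1001, "frameEnd": 1100, "step": 1.0})
print(data["proxyFrameStart"], data["proxyFrameEnd"])  # 1001 1100
```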
@@ -105,7 +105,23 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         # Load relationships
         shader_relation = api.get_representation_path(json_representation)
         with open(shader_relation, "r") as f:
-            relationships = json.load(f)
+            json_data = json.load(f)
+
+        for rel, data in json_data["relationships"].items():
+            # process only non-shading nodes
+            current_node = "{}:{}".format(container["namespace"], rel)
+            if current_node in shader_nodes:
+                continue
+            print("processing {}".format(rel))
+            current_members = set(cmds.ls(
+                cmds.sets(current_node, query=True) or [], long=True))
+            new_members = {"{}".format(
+                m["name"]) for m in data["members"] or []}
+            dif = new_members.difference(current_members)
+
+            # add to set
+            cmds.sets(
+                dif, forceElement="{}:{}".format(container["namespace"], rel))

         # update of reference could result in failed edits - material is not
         # present because of renaming etc.
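The update loop above compares the members currently in each object set against the members recorded in the published JSON and only adds the missing ones. The core of that is a plain set difference; a minimal sketch with strings standing in for Maya node names (the names are made up):

```python
def members_to_add(current_members, recorded_names):
    # Only nodes recorded in the publish but absent from the scene set.
    return set(recorded_names) - set(current_members)


current = {"ns:pSphere1", "ns:pCube1"}
recorded = [{"name": "ns:pSphere1"}, {"name": "ns:pPlane1"}]
new_names = [m["name"] for m in recorded]
print(sorted(members_to_add(current, new_names)))  # ['ns:pPlane1']
```

Using the difference rather than reassigning everything keeps existing set memberships untouched, which matters when reference edits are replayed afterwards.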
@@ -120,7 +136,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         cmds.file(cr=reference_node)  # cleanReference

         # reapply shading groups from json representation on orig nodes
-        openpype.hosts.maya.api.lib.apply_shaders(relationships,
+        openpype.hosts.maya.api.lib.apply_shaders(json_data,
                                                   shader_nodes,
                                                   orig_nodes)

@@ -128,12 +144,13 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             "All successful edits were kept intact.\n",
             "Failed and removed edits:"]
         msg.extend(failed_edits)

         msg = ScrollMessageBox(QtWidgets.QMessageBox.Warning,
                                "Some reference edit failed",
                                msg)
         msg.exec_()

-        attributes = relationships.get("attributes", [])
+        attributes = json_data.get("attributes", [])

         # region compute lookup
         nodes_by_id = defaultdict(list)
openpype/hosts/maya/plugins/load/load_redshift_proxy.py (new file, +146)
@@ -0,0 +1,146 @@
+# -*- coding: utf-8 -*-
+"""Loader for Redshift proxy."""
+from avalon.maya import lib
+from avalon import api
+from openpype.api import get_project_settings
+import os
+import maya.cmds as cmds
+import clique
+
+
+class RedshiftProxyLoader(api.Loader):
+    """Load Redshift proxy"""
+
+    families = ["redshiftproxy"]
+    representations = ["rs"]
+
+    label = "Import Redshift Proxy"
+    order = -10
+    icon = "code-fork"
+    color = "orange"
+
+    def load(self, context, name=None, namespace=None, options=None):
+        """Plugin entry point."""
+        from avalon.maya.pipeline import containerise
+        from openpype.hosts.maya.api.lib import namespaced
+
+        try:
+            family = context["representation"]["context"]["family"]
+        except ValueError:
+            family = "redshiftproxy"
+
+        asset_name = context['asset']["name"]
+        namespace = namespace or lib.unique_namespace(
+            asset_name + "_",
+            prefix="_" if asset_name[0].isdigit() else "",
+            suffix="_",
+        )
+
+        # Ensure Redshift for Maya is loaded.
+        cmds.loadPlugin("redshift4maya", quiet=True)
+
+        with lib.maintained_selection():
+            cmds.namespace(addNamespace=namespace)
+            with namespaced(namespace, new=False):
+                nodes, group_node = self.create_rs_proxy(
+                    name, self.fname)
+
+        self[:] = nodes
+        if not nodes:
+            return
+
+        # colour the group node
+        settings = get_project_settings(os.environ['AVALON_PROJECT'])
+        colors = settings['maya']['load']['colors']
+        c = colors.get(family)
+        if c is not None:
+            cmds.setAttr("{0}.useOutlinerColor".format(group_node), 1)
+            cmds.setAttr("{0}.outlinerColor".format(group_node),
+                         c[0], c[1], c[2])
+
+        return containerise(
+            name=name,
+            namespace=namespace,
+            nodes=nodes,
+            context=context,
+            loader=self.__class__.__name__)
+
+    def update(self, container, representation):
+
+        node = container['objectName']
+        assert cmds.objExists(node), "Missing container"
+
+        members = cmds.sets(node, query=True) or []
+        rs_meshes = cmds.ls(members, type="RedshiftProxyMesh")
+        assert rs_meshes, "Cannot find RedshiftProxyMesh in container"
+
+        filename = api.get_representation_path(representation)
+
+        for rs_mesh in rs_meshes:
+            cmds.setAttr("{}.fileName".format(rs_mesh),
+                         filename,
+                         type="string")
+
+        # Update metadata
+        cmds.setAttr("{}.representation".format(node),
+                     str(representation["_id"]),
+                     type="string")
+
+    def remove(self, container):
+
+        # Delete container and its contents
+        if cmds.objExists(container['objectName']):
+            members = cmds.sets(container['objectName'], query=True) or []
+            cmds.delete([container['objectName']] + members)
+
+        # Remove the namespace, if empty
+        namespace = container['namespace']
+        if cmds.namespace(exists=namespace):
+            members = cmds.namespaceInfo(namespace, listNamespace=True)
+            if not members:
+                cmds.namespace(removeNamespace=namespace)
+            else:
+                self.log.warning("Namespace not deleted because it "
+                                 "still has members: %s", namespace)
+
+    def switch(self, container, representation):
+        self.update(container, representation)
+
+    def create_rs_proxy(self, name, path):
+        """Creates Redshift Proxies showing a proxy object.
+
+        Args:
+            name (str): Proxy name.
+            path (str): Path to proxy file.
+
+        Returns:
+            (str, str): Name of mesh with Redshift proxy and its parent
+                transform.
+
+        """
+        rs_mesh = cmds.createNode(
+            'RedshiftProxyMesh', name="{}_RS".format(name))
+        mesh_shape = cmds.createNode("mesh", name="{}_GEOShape".format(name))
+
+        cmds.setAttr("{}.fileName".format(rs_mesh),
+                     path,
+                     type="string")
+
+        cmds.connectAttr("{}.outMesh".format(rs_mesh),
+                         "{}.inMesh".format(mesh_shape))
+
+        group_node = cmds.group(empty=True, name="{}_GRP".format(name))
+        mesh_transform = cmds.listRelatives(mesh_shape,
+                                            parent=True, fullPath=True)
+        cmds.parent(mesh_transform, group_node)
+        nodes = [rs_mesh, mesh_shape, group_node]
+
+        # determine if we need to enable animation support
+        files_in_folder = os.listdir(os.path.dirname(path))
+        collections, remainder = clique.assemble(files_in_folder)
+
+        if collections:
+            cmds.setAttr("{}.useFrameExtension".format(rs_mesh), 1)
+
+        return nodes, group_node
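`create_rs_proxy` above decides whether to enable `useFrameExtension` by checking if the proxy file sits in a frame sequence, via `clique.assemble` on the folder listing. A rough stdlib stand-in for that check, grouping file names that differ only by a trailing frame number (the regex and grouping rule are simplifications of what clique actually does):

```python
import re
from collections import defaultdict


def has_frame_sequence(filenames):
    # Group names by (head, extension) around a trailing frame number;
    # two or more frames in one group means a sequence.
    groups = defaultdict(set)
    for name in filenames:
        m = re.match(r"^(.*?)(\d+)(\.\w+)$", name)
        if m:
            groups[(m.group(1), m.group(3))].add(int(m.group(2)))
    return any(len(frames) > 1 for frames in groups.values())


print(has_frame_sequence(["proxy.1001.rs", "proxy.1002.rs"]))  # True
print(has_frame_sequence(["proxy.rs"]))                        # False
```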
@@ -1,8 +1,10 @@
+# -*- coding: utf-8 -*-
+"""Maya look collector."""
 import re
 import os
 import glob

-from maya import cmds
+from maya import cmds  # noqa
 import pyblish.api
 from openpype.hosts.maya.api import lib

@@ -16,6 +18,11 @@ SHAPE_ATTRS = ["castsShadows",
                "doubleSided",
                "opposite"]

+RENDERER_NODE_TYPES = [
+    # redshift
+    "RedshiftMeshParameters"
+]
+
 SHAPE_ATTRS = set(SHAPE_ATTRS)

@@ -29,7 +36,6 @@ def get_look_attrs(node):
         list: Attribute names to extract

     """
-
     # When referenced get only attributes that are "changed since file open"
     # which includes any reference edits, otherwise take *all* user defined
     # attributes
@@ -219,9 +225,13 @@ class CollectLook(pyblish.api.InstancePlugin):
         with lib.renderlayer(instance.data["renderlayer"]):
             self.collect(instance)

     def collect(self, instance):
+        """Collect looks.
+
+        Args:
+            instance: Instance to collect.
+
+        """
         self.log.info("Looking for look associations "
                       "for %s" % instance.data['name'])

@@ -235,48 +245,91 @@ class CollectLook(pyblish.api.InstancePlugin):
         self.log.info("Gathering set relations..")
         # Ensure iteration happen in a list so we can remove keys from the
         # dict within the loop
-        for objset in list(sets):
-            self.log.debug("From %s.." % objset)
+
+        # skipped types of attribute on render specific nodes
+        disabled_types = ["message", "TdataCompound"]
+
+        for obj_set in list(sets):
+            self.log.debug("From {}".format(obj_set))
+
+            # if node is specified as renderer node type, it will be
+            # serialized with its attributes.
+            if cmds.nodeType(obj_set) in RENDERER_NODE_TYPES:
+                self.log.info("- {} is {}".format(
+                    obj_set, cmds.nodeType(obj_set)))
+
+                node_attrs = []
+
+                # serialize its attributes so they can be recreated on look
+                # load.
+                for attr in cmds.listAttr(obj_set):
+                    # skip publishedNodeInfo attributes as they break
+                    # getAttr() and we don't need them anyway
+                    if attr.startswith("publishedNodeInfo"):
+                        continue
+
+                    # skip attributes types defined in 'disabled_type' list
+                    if cmds.getAttr("{}.{}".format(obj_set, attr), type=True) in disabled_types:  # noqa
+                        continue
+
+                    node_attrs.append((
+                        attr,
+                        cmds.getAttr("{}.{}".format(obj_set, attr)),
+                        cmds.getAttr(
+                            "{}.{}".format(obj_set, attr), type=True)
+                    ))
+
+                for member in cmds.ls(
+                        cmds.sets(obj_set, query=True), long=True):
+                    member_data = self.collect_member_data(member,
+                                                           instance_lookup)
+                    if not member_data:
+                        continue
+
+                    # Add information of the node to the members list
+                    sets[obj_set]["members"].append(member_data)
+
             # Get all nodes of the current objectSet (shadingEngine)
-            for member in cmds.ls(cmds.sets(objset, query=True), long=True):
+            for member in cmds.ls(cmds.sets(obj_set, query=True), long=True):
                 member_data = self.collect_member_data(member,
                                                        instance_lookup)
                 if not member_data:
                     continue

                 # Add information of the node to the members list
-                sets[objset]["members"].append(member_data)
+                sets[obj_set]["members"].append(member_data)

             # Remove sets that didn't have any members assigned in the end
             # Thus the data will be limited to only what we need.
-            self.log.info("objset {}".format(sets[objset]))
-            if not sets[objset]["members"] or (not objset.endswith("SG")):
-                self.log.info("Removing redundant set information: "
-                              "%s" % objset)
-                sets.pop(objset, None)
+            self.log.info("obj_set {}".format(sets[obj_set]))
+            if not sets[obj_set]["members"]:
+                self.log.info(
+                    "Removing redundant set information: {}".format(obj_set))
+                sets.pop(obj_set, None)

         self.log.info("Gathering attribute changes to instance members..")
         attributes = self.collect_attributes_changed(instance)

         # Store data on the instance
-        instance.data["lookData"] = {"attributes": attributes,
-                                     "relationships": sets}
+        instance.data["lookData"] = {
+            "attributes": attributes,
+            "relationships": sets
+        }

         # Collect file nodes used by shading engines (if we have any)
-        files = list()
-        looksets = sets.keys()
-        shaderAttrs = [
-            "surfaceShader",
-            "volumeShader",
-            "displacementShader",
-            "aiSurfaceShader",
-            "aiVolumeShader"]
-        materials = list()
+        files = []
+        look_sets = sets.keys()
+        shader_attrs = [
+            "surfaceShader",
+            "volumeShader",
+            "displacementShader",
+            "aiSurfaceShader",
+            "aiVolumeShader"]
+        if look_sets:
+            materials = []

-        if looksets:
-            for look in looksets:
-                for at in shaderAttrs:
+            for look in look_sets:
+                for at in shader_attrs:
                     try:
                         con = cmds.listConnections("{}.{}".format(look, at))
                     except ValueError:
@@ -289,12 +342,19 @@ class CollectLook(pyblish.api.InstancePlugin):

         self.log.info("Found materials:\n{}".format(materials))

-        self.log.info("Found the following sets:\n{}".format(looksets))
+        self.log.info("Found the following sets:\n{}".format(look_sets))
         # Get the entire node chain of the look sets
-        # history = cmds.listHistory(looksets)
-        history = list()
+        # history = cmds.listHistory(look_sets)
+        history = []
         for material in materials:
             history.extend(cmds.listHistory(material))

+        # handle VrayPluginNodeMtl node - see #1397
+        vray_plugin_nodes = cmds.ls(
+            history, type="VRayPluginNodeMtl", long=True)
+        for vray_node in vray_plugin_nodes:
+            history.extend(cmds.listHistory(vray_node))
+
         files = cmds.ls(history, type="file", long=True)
         files.extend(cmds.ls(history, type="aiImage", long=True))

@@ -313,7 +373,7 @@ class CollectLook(pyblish.api.InstancePlugin):

         # Ensure unique shader sets
         # Add shader sets to the instance for unify ID validation
-        instance.extend(shader for shader in looksets if shader
+        instance.extend(shader for shader in look_sets if shader
                         not in instance_lookup)

         self.log.info("Collected look for %s" % instance)
@@ -331,7 +391,7 @@ class CollectLook(pyblish.api.InstancePlugin):
             dict
         """

-        sets = dict()
+        sets = {}
         for node in instance:
             related_sets = lib.get_related_sets(node)
             if not related_sets:
@@ -427,6 +487,11 @@ class CollectLook(pyblish.api.InstancePlugin):
         """

         self.log.debug("processing: {}".format(node))
+        if cmds.nodeType(node) not in ["file", "aiImage"]:
+            self.log.error(
+                "Unsupported file node: {}".format(cmds.nodeType(node)))
+            raise AssertionError("Unsupported file node")
+
         if cmds.nodeType(node) == 'file':
             self.log.debug(" - file node")
             attribute = "{}.fileTextureName".format(node)
@@ -435,6 +500,7 @@ class CollectLook(pyblish.api.InstancePlugin):
             self.log.debug("aiImage node")
             attribute = "{}.filename".format(node)
+            computed_attribute = attribute

         source = cmds.getAttr(attribute)
         self.log.info(" - file source: {}".format(source))
         color_space_attr = "{}.colorSpace".format(node)
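The collector above packages a `lookData` dict with `attributes` and `relationships`, after dropping object sets that ended up with no members. A sketch of that assembly on plain dicts (the set names and members are made up):

```python
def build_look_data(attributes, sets):
    # Drop sets that gathered no members, then package the rest,
    # mirroring the lookData structure stored on the instance.
    for name in list(sets):
        if not sets[name]["members"]:
            sets.pop(name, None)
    return {"attributes": attributes, "relationships": sets}


sets = {
    "lambert1SG": {"members": [{"name": "pSphere1"}]},
    "emptySG": {"members": []},
}
look_data = build_look_data([], sets)
print(sorted(look_data["relationships"]))  # ['lambert1SG']
```

Iterating over `list(sets)` rather than `sets` directly is what lets the loop pop keys from the dict it is walking.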
@@ -2,14 +2,9 @@ import pyblish.api


 class CollectRemoveMarked(pyblish.api.ContextPlugin):
-    """Collect model data
+    """Remove marked data

-    Ensures always only a single frame is extracted (current frame).
-
-    Note:
-        This is a workaround so that the `pype.model` family can use the
-        same pointcache extractor implementation as animation and pointcaches.
-        This always enforces the "current" frame to be published.
+    Remove instances that have 'remove' in their instance.data

     """
@@ -358,9 +358,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
         options["extendFrames"] = extend_frames
         options["overrideExistingFrame"] = override_frames

-        maya_render_plugin = "MayaPype"
-        if attributes.get("useMayaBatch", True):
-            maya_render_plugin = "MayaBatch"
+        maya_render_plugin = "MayaBatch"

         options["mayaRenderPlugin"] = maya_render_plugin
@@ -1,13 +1,14 @@
+# -*- coding: utf-8 -*-
+"""Maya look extractor."""
 import os
 import sys
 import json
 import copy
 import tempfile
 import contextlib
 import subprocess
 from collections import OrderedDict

-from maya import cmds
+from maya import cmds  # noqa

 import pyblish.api
 import avalon.maya
@@ -22,23 +23,38 @@ HARDLINK = 2


 def find_paths_by_hash(texture_hash):
-    # Find the texture hash key in the dictionary and all paths that
-    # originate from it.
+    """Find the texture hash key in the dictionary.
+
+    All paths that originate from it.
+
+    Args:
+        texture_hash (str): Hash of the texture.
+
+    Return:
+        str: path to texture if found.
+
+    """
     key = "data.sourceHashes.{0}".format(texture_hash)
     return io.distinct(key, {"type": "version"})


 def maketx(source, destination, *args):
-    """Make .tx using maketx with some default settings.
+    """Make `.tx` using `maketx` with some default settings.

     The settings are based on default as used in Arnold's
     txManager in the scene.
+    This function requires the `maketx` executable to be
+    on the `PATH`.

     Args:
         source (str): Path to source file.
         destination (str): Writing destination path.
-    """
+        *args: Additional arguments for `maketx`.
+
+    Returns:
+        str: Output of `maketx` command.
+
+    """
     cmd = [
         "maketx",
         "-v",  # verbose
@@ -56,7 +72,7 @@ def maketx(source, destination, *args):

     cmd = " ".join(cmd)

-    CREATE_NO_WINDOW = 0x08000000
+    CREATE_NO_WINDOW = 0x08000000  # noqa
     kwargs = dict(args=cmd, stderr=subprocess.STDOUT)

     if sys.platform == "win32":
@@ -118,12 +134,58 @@ class ExtractLook(openpype.api.Extractor):
     hosts = ["maya"]
     families = ["look"]
     order = pyblish.api.ExtractorOrder + 0.2
+    scene_type = "ma"
+
+    @staticmethod
+    def get_renderer_name():
+        """Get renderer name from Maya.
+
+        Returns:
+            str: Renderer name.
+
+        """
+        renderer = cmds.getAttr(
+            "defaultRenderGlobals.currentRenderer"
+        ).lower()
+        # handle various renderman names
+        if renderer.startswith("renderman"):
+            renderer = "renderman"
+        return renderer
+
+    def get_maya_scene_type(self, instance):
+        """Get Maya scene type from settings.
+
+        Args:
+            instance (pyblish.api.Instance): Instance with collected
+                project settings.
+
+        """
+        ext_mapping = (
+            instance.context.data["project_settings"]["maya"]["ext_mapping"]
+        )
+        if ext_mapping:
+            self.log.info("Looking in settings for scene type ...")
+            # use extension mapping for first family found
+            for family in self.families:
+                try:
+                    self.scene_type = ext_mapping[family]
+                    self.log.info(
+                        "Using {} as scene type".format(self.scene_type))
+                    break
+                except KeyError:
+                    # no preset found
+                    pass
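`get_maya_scene_type` above picks the scene extension from a settings mapping, using the first of the plugin's families that has an entry and keeping the class default otherwise. A sketch of that lookup on plain dicts (the mapping contents are illustrative):

```python
def resolve_scene_type(ext_mapping, families, default="ma"):
    # First family with a configured extension wins; otherwise keep default.
    for family in families:
        if family in ext_mapping:
            return ext_mapping[family]
    return default


print(resolve_scene_type({"look": "mb"}, ["look"]))  # mb
print(resolve_scene_type({}, ["look"]))              # ma
```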
     def process(self, instance):
+        """Plugin entry point.
+
+        Args:
+            instance: Instance to process.
+
+        """
         # Define extract output file path
         dir_path = self.staging_dir(instance)
-        maya_fname = "{0}.ma".format(instance.name)
+        maya_fname = "{0}.{1}".format(instance.name, self.scene_type)
         json_fname = "{0}.json".format(instance.name)

         # Make texture dump folder
@@ -148,7 +210,7 @@ class ExtractLook(openpype.api.Extractor):

         # Collect all unique files used in the resources
         files = set()
-        files_metadata = dict()
+        files_metadata = {}
         for resource in resources:
             # Preserve color space values (force value after filepath change)
             # This will also trigger in the same order at end of context to
@@ -162,35 +224,33 @@ class ExtractLook(openpype.api.Extractor):
         # files.update(os.path.normpath(f))

         # Process the resource files
-        transfers = list()
-        hardlinks = list()
-        hashes = dict()
-        forceCopy = instance.data.get("forceCopy", False)
+        transfers = []
+        hardlinks = []
+        hashes = {}
+        force_copy = instance.data.get("forceCopy", False)

         self.log.info(files)
         for filepath in files_metadata:
-
-            cspace = files_metadata[filepath]["color_space"]
-            linearise = False
-            if do_maketx:
-                if cspace == "sRGB":
-                    linearise = True
-                    # set its file node to 'raw' as tx will be linearized
-                    files_metadata[filepath]["color_space"] = "raw"
+            linearize = False
+            if do_maketx and files_metadata[filepath]["color_space"] == "sRGB":  # noqa: E501
+                linearize = True
+                # set its file node to 'raw' as tx will be linearized
+                files_metadata[filepath]["color_space"] = "raw"

-            source, mode, hash = self._process_texture(
+            source, mode, texture_hash = self._process_texture(
                 filepath,
                 do_maketx,
                 staging=dir_path,
-                linearise=linearise,
-                force=forceCopy
+                linearize=linearize,
+                force=force_copy
             )
             destination = self.resource_destination(instance,
                                                     source,
                                                     do_maketx)

             # Force copy is specified.
-            if forceCopy:
+            if force_copy:
                 mode = COPY

             if mode == COPY:
@@ -202,10 +262,10 @@ class ExtractLook(openpype.api.Extractor):

             # Store the hashes from hash to destination to include in the
             # database
-            hashes[hash] = destination
+            hashes[texture_hash] = destination

         # Remap the resources to the destination path (change node attributes)
-        destinations = dict()
+        destinations = {}
         remap = OrderedDict()  # needs to be ordered, see color space values
         for resource in resources:
             source = os.path.normpath(resource["source"])
@@ -222,7 +282,7 @@ class ExtractLook(openpype.api.Extractor):
             color_space_attr = resource["node"] + ".colorSpace"
             color_space = cmds.getAttr(color_space_attr)
             if files_metadata[source]["color_space"] == "raw":
-                # set colorpsace to raw if we linearized it
+                # set color space to raw if we linearized it
                 color_space = "Raw"
             # Remap file node filename to destination
             attr = resource["attribute"]
@@ -267,11 +327,11 @@ class ExtractLook(openpype.api.Extractor):
             json.dump(data, f)

         if "files" not in instance.data:
-            instance.data["files"] = list()
+            instance.data["files"] = []
         if "hardlinks" not in instance.data:
-            instance.data["hardlinks"] = list()
+            instance.data["hardlinks"] = []
         if "transfers" not in instance.data:
-            instance.data["transfers"] = list()
+            instance.data["transfers"] = []

         instance.data["files"].append(maya_fname)
         instance.data["files"].append(json_fname)
@@ -311,14 +371,26 @@ class ExtractLook(openpype.api.Extractor):
                                    maya_path))

     def resource_destination(self, instance, filepath, do_maketx):
-        anatomy = instance.context.data["anatomy"]
+        """Get resource destination path.
+
+        This is utility function to change path if resource file name is
+        changed by some external tool like `maketx`.
+
+        Args:
+            instance: Current Instance.
+            filepath (str): Resource path
+            do_maketx (bool): Flag if resource is processed by `maketx`.
+
+        Returns:
+            str: Path to resource file
+
+        """
         resources_dir = instance.data["resourcesDir"]

         # Compute destination location
         basename, ext = os.path.splitext(os.path.basename(filepath))

-        # If maketx then the texture will always end with .tx
+        # If `maketx` then the texture will always end with .tx
         if do_maketx:
             ext = ".tx"
|
||||
|
||||
|
|
@ -326,7 +398,7 @@ class ExtractLook(openpype.api.Extractor):
|
|||
resources_dir, basename + ext
|
||||
)
|
||||
|
||||
def _process_texture(self, filepath, do_maketx, staging, linearise, force):
|
||||
def _process_texture(self, filepath, do_maketx, staging, linearize, force):
|
||||
"""Process a single texture file on disk for publishing.
|
||||
This will:
|
||||
1. Check whether it's already published, if so it will do hardlink
|
||||
|
|
@@ -363,7 +435,7 @@ class ExtractLook(openpype.api.Extractor):
            # Produce .tx file in staging if source file is not .tx
            converted = os.path.join(staging, "resources", fname + ".tx")

            if linearise:
            if linearize:
                self.log.info("tx: converting sRGB -> linear")
                colorconvert = "--colorconvert sRGB linear"
            else:
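The branch above only picks the sRGB-to-linear conversion flag before the texture is converted. As a standalone illustration, the command line could be assembled like the sketch below; the flag set follows OpenImageIO's `maketx` tool, but treat the exact combination as an assumption rather than the plugin's verbatim invocation:

```python
def build_maketx_args(source, converted, linearize):
    """Assemble a maketx argument list (a sketch, not the plugin's code)."""
    # -v: verbose, -u: update only if stale, --oiio: OIIO-optimized layout
    args = ["maketx", "-v", "-u", "--oiio", source, "-o", converted]
    if linearize:
        # bake an sRGB -> linear conversion into the .tx
        args += ["--colorconvert", "sRGB", "linear"]
    return args

print(build_maketx_args("tex.png", "tex.tx", True))
```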
@@ -0,0 +1,82 @@
# -*- coding: utf-8 -*-
"""Redshift Proxy extractor."""
import os

import avalon.maya
import openpype.api

from maya import cmds


class ExtractRedshiftProxy(openpype.api.Extractor):
    """Extract the content of the instance to a redshift proxy file."""

    label = "Redshift Proxy (.rs)"
    hosts = ["maya"]
    families = ["redshiftproxy"]

    def process(self, instance):
        """Extractor entry point."""

        staging_dir = self.staging_dir(instance)
        file_name = "{}.rs".format(instance.name)
        file_path = os.path.join(staging_dir, file_name)

        anim_on = instance.data["animation"]
        rs_options = "exportConnectivity=0;enableCompression=1;keepUnused=0;"
        repr_files = file_name

        if not anim_on:
            # Remove animation information because it is not required for
            # non-animated subsets
            instance.data.pop("proxyFrameStart", None)
            instance.data.pop("proxyFrameEnd", None)

        else:
            start_frame = instance.data["proxyFrameStart"]
            end_frame = instance.data["proxyFrameEnd"]
            rs_options = "{}startFrame={};endFrame={};frameStep={};".format(
                rs_options, start_frame,
                end_frame, instance.data["proxyFrameStep"]
            )

            root, ext = os.path.splitext(file_path)
            # Padding is taken from number of digits of the end_frame.
            # Not sure where Redshift is taking it.
            repr_files = [
                "{}.{}{}".format(root, str(frame).rjust(4, "0"), ext)  # noqa: E501
                for frame in range(
                    int(start_frame),
                    int(end_frame) + 1,
                    int(instance.data["proxyFrameStep"]),
                )]
        # vertex_colors = instance.data.get("vertexColors", False)

        # Write out rs file
        self.log.info("Writing: '%s'" % file_path)
        with avalon.maya.maintained_selection():
            cmds.select(instance.data["setMembers"], noExpand=True)
            cmds.file(file_path,
                      pr=False,
                      force=True,
                      type="Redshift Proxy",
                      exportSelected=True,
                      options=rs_options)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        self.log.debug("Files: {}".format(repr_files))

        representation = {
            'name': 'rs',
            'ext': 'rs',
            'files': repr_files,
            "stagingDir": staging_dir,
        }
        if anim_on:
            representation["frameStart"] = instance.data["proxyFrameStart"]
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '%s' to: %s"
                      % (instance.name, staging_dir))
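The list comprehension in the extractor above predicts the per-frame file names Redshift writes for an animated proxy, zero-padding the frame number to four digits. A minimal standalone sketch of that naming scheme (the function name and sample path are illustrative, not part of the plugin):

```python
import os

def proxy_frame_files(file_path, start_frame, end_frame, step=1):
    # <root>.<frame zero-padded to 4 digits><ext>, one entry per exported frame
    root, ext = os.path.splitext(file_path)
    return [
        "{}.{}{}".format(root, str(frame).rjust(4, "0"), ext)
        for frame in range(int(start_frame), int(end_frame) + 1, int(step))
    ]

print(proxy_frame_files("/tmp/asset.rs", 1001, 1003))
# -> ['/tmp/asset.1001.rs', '/tmp/asset.1002.rs', '/tmp/asset.1003.rs']
```

Note that `rjust(4, "0")` pads but never truncates, so five-digit frame numbers keep their full width, matching the extractor's behavior.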
@@ -5,7 +5,7 @@ import re

import avalon.maya
import openpype.api
from openpype.hosts.maya.render_setup_tools import export_in_rs_layer
from openpype.hosts.maya.api.render_setup_tools import export_in_rs_layer

from maya import cmds


@@ -73,8 +73,10 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
        # check if any objectSets are not present ion the relationships
        missing_sets = [s for s in sets if s not in relationships]
        if missing_sets:
            for set in missing_sets:
                if '_SET' not in set:
            for missing_set in missing_sets:
                cls.log.debug(missing_set)

                if '_SET' not in missing_set:
                    # A set of this node is not coming along, this is wrong!
                    cls.log.error("Missing sets '{}' for node "
                                  "'{}'".format(missing_sets, node))

@@ -82,8 +84,8 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
                    continue

            # Ensure the node is in the sets that are collected
            for shaderset, data in relationships.items():
                if shaderset not in sets:
            for shader_set, data in relationships.items():
                if shader_set not in sets:
                    # no need to check for a set if the node
                    # isn't in it anyway
                    continue

@@ -94,7 +96,7 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
                    # The node is not found in the collected set
                    # relationships
                    cls.log.error("Missing '{}' in collected set node "
                                  "'{}'".format(node, shaderset))
                                  "'{}'".format(node, shader_set))
                    invalid.append(node)

                    continue


@@ -8,7 +8,7 @@ import openpype.api
class ValidateUnrealMeshTriangulated(pyblish.api.InstancePlugin):
    """Validate if mesh is made of triangles for Unreal Engine"""

    order = openpype.api.ValidateMeshOder
    order = openpype.api.ValidateMeshOrder
    hosts = ["maya"]
    families = ["unrealStaticMesh"]
    category = "geometry"


@@ -10,7 +10,6 @@ print("starting OpenPype usersetup")
settings = get_project_settings(os.environ['AVALON_PROJECT'])
shelf_preset = settings['maya'].get('project_shelf')


if shelf_preset:
    project = os.environ["AVALON_PROJECT"]

@@ -23,7 +22,7 @@ if shelf_preset:
    print(import_string)
    exec(import_string)

    cmds.evalDeferred("mlib.shelf(name=shelf_preset['name'], iconPath=icon_path, preset=shelf_preset)")
    cmds.evalDeferred("mlib.shelf(name=shelf_preset['name'], iconPath=icon_path, preset=shelf_preset)")

print("finished OpenPype usersetup")


@@ -106,7 +106,7 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
    log.info("instance toggle: {}, old_value: {}, new_value:{} ".format(
        instance, old_value, new_value))

    from avalon.api.nuke import (
    from avalon.nuke import (
        viewer_update_and_undo_stop,
        add_publish_knob
    )
@@ -1,6 +1,8 @@
import os
import re
import sys
import six
import platform
from collections import OrderedDict


@@ -19,7 +21,6 @@ from openpype.api import (
    get_hierarchy,
    get_asset,
    get_current_project_settings,
    config,
    ApplicationManager
)

@@ -29,36 +30,34 @@ from .utils import set_context_favorites

log = Logger().get_logger(__name__)

self = sys.modules[__name__]
self._project = None
self.workfiles_launched = False
self._node_tab_name = "{}".format(os.getenv("AVALON_LABEL") or "Avalon")
opnl = sys.modules[__name__]
opnl._project = None
opnl.project_name = os.getenv("AVALON_PROJECT")
opnl.workfiles_launched = False
opnl._node_tab_name = "{}".format(os.getenv("AVALON_LABEL") or "Avalon")


def get_node_imageio_setting(**kwarg):
def get_created_node_imageio_setting(**kwarg):
    ''' Get preset data for dataflow (fileType, compression, bitDepth)
    '''
    log.info(kwarg)
    host = str(kwarg.get("host", "nuke"))
    log.debug(kwarg)
    nodeclass = kwarg.get("nodeclass", None)
    creator = kwarg.get("creator", None)
    project_name = os.getenv("AVALON_PROJECT")

    assert any([host, nodeclass]), nuke.message(
    assert any([creator, nodeclass]), nuke.message(
        "`{}`: Missing mandatory kwargs `host`, `cls`".format(__file__))

    imageio_nodes = (get_anatomy_settings(project_name)
                     ["imageio"]
                     .get(host, None)
                     ["nodes"]
                     ["requiredNodes"]
                     )
    imageio = get_anatomy_settings(opnl.project_name)["imageio"]
    imageio_nodes = imageio["nuke"]["nodes"]["requiredNodes"]

    imageio_node = None
    for node in imageio_nodes:
        log.info(node)
        if node["nukeNodeClass"] == nodeclass:
            if creator in node["plugins"]:
                imageio_node = node
        if (node["nukeNodeClass"] != nodeclass) and (
                creator not in node["plugins"]):
            continue

        imageio_node = node

    log.info("ImageIO node: {}".format(imageio_node))
    return imageio_node

@@ -67,12 +66,9 @@ def get_node_imageio_setting(**kwarg):
def get_imageio_input_colorspace(filename):
    ''' Get input file colorspace based on regex in settings.
    '''
    imageio_regex_inputs = (get_anatomy_settings(os.getenv("AVALON_PROJECT"))
                            ["imageio"]
                            ["nuke"]
                            ["regexInputs"]
                            ["inputs"]
                            )
    imageio_regex_inputs = (
        get_anatomy_settings(opnl.project_name)
        ["imageio"]["nuke"]["regexInputs"]["inputs"])

    preset_clrsp = None
    for regexInput in imageio_regex_inputs:
@@ -104,40 +100,39 @@ def check_inventory_versions():
    """
    # get all Loader nodes by avalon attribute metadata
    for each in nuke.allNodes():
        if each.Class() == 'Read':
            container = avalon.nuke.parse_container(each)
        container = avalon.nuke.parse_container(each)

            if container:
                node = nuke.toNode(container["objectName"])
                avalon_knob_data = avalon.nuke.read(
                    node)
        if container:
            node = nuke.toNode(container["objectName"])
            avalon_knob_data = avalon.nuke.read(
                node)

                # get representation from io
                representation = io.find_one({
                    "type": "representation",
                    "_id": io.ObjectId(avalon_knob_data["representation"])
                })
            # get representation from io
            representation = io.find_one({
                "type": "representation",
                "_id": io.ObjectId(avalon_knob_data["representation"])
            })

                # Get start frame from version data
                version = io.find_one({
                    "type": "version",
                    "_id": representation["parent"]
                })
            # Get start frame from version data
            version = io.find_one({
                "type": "version",
                "_id": representation["parent"]
            })

                # get all versions in list
                versions = io.find({
                    "type": "version",
                    "parent": version["parent"]
                }).distinct('name')
            # get all versions in list
            versions = io.find({
                "type": "version",
                "parent": version["parent"]
            }).distinct('name')

                max_version = max(versions)
            max_version = max(versions)

                # check the available version and do match
                # change color of node if not max verion
                if version.get("name") not in [max_version]:
                    node["tile_color"].setValue(int("0xd84f20ff", 16))
                else:
                    node["tile_color"].setValue(int("0x4ecd25ff", 16))
            # check the available version and do match
            # change color of node if not max verion
            if version.get("name") not in [max_version]:
                node["tile_color"].setValue(int("0xd84f20ff", 16))
            else:
                node["tile_color"].setValue(int("0x4ecd25ff", 16))


def writes_version_sync():
@@ -153,34 +148,33 @@ def writes_version_sync():
    except Exception:
        return

    for each in nuke.allNodes():
        if each.Class() == 'Write':
            # check if the node is avalon tracked
            if self._node_tab_name not in each.knobs():
    for each in nuke.allNodes(filter="Write"):
        # check if the node is avalon tracked
        if opnl._node_tab_name not in each.knobs():
            continue

        avalon_knob_data = avalon.nuke.read(
            each)

        try:
            if avalon_knob_data['families'] not in ["render"]:
                log.debug(avalon_knob_data['families'])
                continue

            avalon_knob_data = avalon.nuke.read(
                each)
            node_file = each['file'].value()

            try:
                if avalon_knob_data['families'] not in ["render"]:
                    log.debug(avalon_knob_data['families'])
                    continue
            node_version = "v" + get_version_from_path(node_file)
            log.debug("node_version: {}".format(node_version))

                node_file = each['file'].value()

                node_version = "v" + get_version_from_path(node_file)
                log.debug("node_version: {}".format(node_version))

                node_new_file = node_file.replace(node_version, new_version)
                each['file'].setValue(node_new_file)
                if not os.path.isdir(os.path.dirname(node_new_file)):
                    log.warning("Path does not exist! I am creating it.")
                    os.makedirs(os.path.dirname(node_new_file))
            except Exception as e:
                log.warning(
                    "Write node: `{}` has no version in path: {}".format(
                        each.name(), e))
            node_new_file = node_file.replace(node_version, new_version)
            each['file'].setValue(node_new_file)
            if not os.path.isdir(os.path.dirname(node_new_file)):
                log.warning("Path does not exist! I am creating it.")
                os.makedirs(os.path.dirname(node_new_file))
        except Exception as e:
            log.warning(
                "Write node: `{}` has no version in path: {}".format(
                    each.name(), e))


def version_up_script():
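The sync loop above swaps the old version token in each Write node's file path for the new one. A reduced sketch of that replacement, with a simplified stand-in for OpenPype's `get_version_from_path` (the real helper lives in `openpype.api`; the regex here is an assumption for illustration):

```python
import re

def get_version_from_path(path):
    # simplified stand-in: digits of the first "v###" token in the path
    match = re.search(r"v(\d+)", path)
    return match.group(1) if match else None

def synced_write_path(node_file, new_version):
    node_version = "v" + get_version_from_path(node_file)
    # str.replace swaps every occurrence, so folder and file stay in sync
    return node_file.replace(node_version, new_version)

print(synced_write_path("/work/render_v003/beauty_v003.exr", "v004"))
# -> /work/render_v004/beauty_v004.exr
```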
@@ -201,24 +195,22 @@ def check_subsetname_exists(nodes, subset_name):
    Returns:
        bool: True of False
    """
    result = next((True for n in nodes
                   if subset_name in avalon.nuke.read(n).get("subset", "")), False)
    return result
    return next((True for n in nodes
                 if subset_name in avalon.nuke.read(n).get("subset", "")),
                False)


def get_render_path(node):
    ''' Generate Render path from presets regarding avalon knob data
    '''
    data = dict()
    data['avalon'] = avalon.nuke.read(
        node)

    data = {'avalon': avalon.nuke.read(node)}
    data_preset = {
        "class": data['avalon']['family'],
        "preset": data['avalon']['families']
        "nodeclass": data['avalon']['family'],
        "families": [data['avalon']['families']],
        "creator": data['avalon']['creator']
    }

    nuke_imageio_writes = get_node_imageio_setting(**data_preset)
    nuke_imageio_writes = get_created_node_imageio_setting(**data_preset)

    application = lib.get_application(os.environ["AVALON_APP_NAME"])
    data.update({
@@ -324,7 +316,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
        node (obj): group node with avalon data as Knobs
    '''

    imageio_writes = get_node_imageio_setting(**data)
    imageio_writes = get_created_node_imageio_setting(**data)
    app_manager = ApplicationManager()
    app_name = os.environ.get("AVALON_APP_NAME")
    if app_name:

@@ -367,8 +359,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
    # adding dataflow template
    log.debug("imageio_writes: `{}`".format(imageio_writes))
    for knob in imageio_writes["knobs"]:
        if knob["name"] not in ["_id", "_previous"]:
            _data.update({knob["name"]: knob["value"]})
        _data.update({knob["name"]: knob["value"]})

    _data = anlib.fix_data_for_node_create(_data)

@@ -390,16 +381,19 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
            "inputName": input.name()})
        prev_node = nuke.createNode(
            "Input", "name {}".format(input.name()))
        prev_node.hideControlPanel()
    else:
        # generic input node connected to nothing
        prev_node = nuke.createNode(
            "Input", "name {}".format("rgba"))
        prev_node.hideControlPanel()

    # creating pre-write nodes `prenodes`
    if prenodes:
        for name, klass, properties, set_output_to in prenodes:
            # create node
            now_node = nuke.createNode(klass, "name {}".format(name))
            now_node.hideControlPanel()

            # add data to knob
            for k, v in properties:

@@ -421,17 +415,21 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
                for i, node_name in enumerate(set_output_to):
                    input_node = nuke.createNode(
                        "Input", "name {}".format(node_name))
                    input_node.hideControlPanel()
                    connections.append({
                        "node": nuke.toNode(node_name),
                        "inputName": node_name})
                    now_node.setInput(1, input_node)

            elif isinstance(set_output_to, str):
                input_node = nuke.createNode(
                    "Input", "name {}".format(node_name))
                input_node.hideControlPanel()
                connections.append({
                    "node": nuke.toNode(set_output_to),
                    "inputName": set_output_to})
                now_node.setInput(0, input_node)

            else:
                now_node.setInput(0, prev_node)

@@ -443,7 +441,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
        "inside_{}".format(name),
        **_data
    )

    write_node.hideControlPanel()
    # connect to previous node
    now_node.setInput(0, prev_node)

@@ -451,6 +449,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
    prev_node = now_node

    now_node = nuke.createNode("Output", "name Output1")
    now_node.hideControlPanel()

    # connect to previous node
    now_node.setInput(0, prev_node)

@@ -498,7 +497,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
    add_deadline_tab(GN)

    # open the our Tab as default
    GN[self._node_tab_name].setFlag(0)
    GN[opnl._node_tab_name].setFlag(0)

    # set tile color
    tile_color = _data.get("tile_color", "0xff0000ff")
@@ -621,7 +620,7 @@ class WorkfileSettings(object):
                 root_node=None,
                 nodes=None,
                 **kwargs):
        self._project = kwargs.get(
        opnl._project = kwargs.get(
            "project") or io.find_one({"type": "project"})
        self._asset = kwargs.get("asset_name") or api.Session["AVALON_ASSET"]
        self._asset_entity = get_asset(self._asset)

@@ -664,8 +663,7 @@ class WorkfileSettings(object):
        ]

        erased_viewers = []
        for v in [n for n in self._nodes
                  if "Viewer" in n.Class()]:
        for v in nuke.allNodes(filter="Viewer"):
            v['viewerProcess'].setValue(str(viewer_dict["viewerProcess"]))
            if str(viewer_dict["viewerProcess"]) \
                    not in v['viewerProcess'].value():

@@ -709,7 +707,7 @@ class WorkfileSettings(object):
            log.error(msg)
            nuke.message(msg)

        log.debug(">> root_dict: {}".format(root_dict))
        log.warning(">> root_dict: {}".format(root_dict))

        # first set OCIO
        if self._root_node["colorManagement"].value() \

@@ -731,41 +729,41 @@ class WorkfileSettings(object):

        # third set ocio custom path
        if root_dict.get("customOCIOConfigPath"):
            self._root_node["customOCIOConfigPath"].setValue(
                str(root_dict["customOCIOConfigPath"]).format(
                    **os.environ
                ).replace("\\", "/")
            )
            log.debug("nuke.root()['{}'] changed to: {}".format(
                "customOCIOConfigPath", root_dict["customOCIOConfigPath"]))
            root_dict.pop("customOCIOConfigPath")
            unresolved_path = root_dict["customOCIOConfigPath"]
            ocio_paths = unresolved_path[platform.system().lower()]

            resolved_path = None
            for ocio_p in ocio_paths:
                resolved_path = str(ocio_p).format(**os.environ)
                if not os.path.exists(resolved_path):
                    continue

            if resolved_path:
                self._root_node["customOCIOConfigPath"].setValue(
                    str(resolved_path).replace("\\", "/")
                )
                log.debug("nuke.root()['{}'] changed to: {}".format(
                    "customOCIOConfigPath", resolved_path))
                root_dict.pop("customOCIOConfigPath")

        # then set the rest
        for knob, value in root_dict.items():
            # skip unfilled ocio config path
            # it will be dict in value
            if isinstance(value, dict):
                continue
            if self._root_node[knob].value() not in value:
                self._root_node[knob].setValue(str(value))
                log.debug("nuke.root()['{}'] changed to: {}".format(
                    knob, value))

    def set_writes_colorspace(self, write_dict):
    def set_writes_colorspace(self):
        ''' Adds correct colorspace to write node dict

        Arguments:
            write_dict (dict): nuke write node as dictionary

        '''
        # scene will have fixed colorspace following presets for the project
        if not isinstance(write_dict, dict):
            msg = "set_root_colorspace(): argument should be dictionary"
            log.error(msg)
            return

        from avalon.nuke import read

        for node in nuke.allNodes():

            if node.Class() in ["Viewer", "Dot"]:
                continue
        for node in nuke.allNodes(filter="Group"):

            # get data from avalon knob
            avalon_knob_data = read(node)
@@ -781,49 +779,63 @@ class WorkfileSettings(object):
            if avalon_knob_data.get("families"):
                families.append(avalon_knob_data.get("families"))

            # except disabled nodes but exclude backdrops in test
            for fmly, knob in write_dict.items():
                write = None
                if (fmly in families):
                    # Add all nodes in group instances.
                    if node.Class() == "Group":
                        node.begin()
                        for x in nuke.allNodes():
                            if x.Class() == "Write":
                                write = x
                        node.end()
                    elif node.Class() == "Write":
                        write = node
                    else:
                        log.warning("Wrong write node Class")
            data_preset = {
                "nodeclass": avalon_knob_data["family"],
                "families": families,
                "creator": avalon_knob_data['creator']
            }

                    write["colorspace"].setValue(str(knob["colorspace"]))
                    log.info(
                        "Setting `{0}` to `{1}`".format(
                            write.name(),
                            knob["colorspace"]))
            nuke_imageio_writes = get_created_node_imageio_setting(
                **data_preset)

    def set_reads_colorspace(self, reads):
            log.debug("nuke_imageio_writes: `{}`".format(nuke_imageio_writes))

            if not nuke_imageio_writes:
                return

            write_node = None

            # get into the group node
            node.begin()
            for x in nuke.allNodes():
                if x.Class() == "Write":
                    write_node = x
            node.end()

            if not write_node:
                return

            # write all knobs to node
            for knob in nuke_imageio_writes["knobs"]:
                value = knob["value"]
                if isinstance(value, six.text_type):
                    value = str(value)
                if str(value).startswith("0x"):
                    value = int(value, 16)

                write_node[knob["name"]].setValue(value)

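The knob-writing loop above coerces each settings value before handing it to Nuke: unicode text becomes `str`, and hex strings such as `'0x4ecd25ff'` become integers so color knobs accept them. The same coercion rules in isolation (a sketch assuming Python 3, where `six.text_type` is simply `str`):

```python
def coerce_knob_value(value):
    """Coerce a settings value the way the loop above does (a sketch)."""
    if isinstance(value, str):
        # "0x..." strings are hex colors; Nuke color knobs want an int
        if value.startswith("0x"):
            return int(value, 16)
        return value
    # ints, floats, bools pass through unchanged
    return value

print(coerce_knob_value("0x4ecd25ff"))
print(coerce_knob_value("sRGB"))
```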
    def set_reads_colorspace(self, read_clrs_inputs):
        """ Setting colorspace to Read nodes

        Looping trought all read nodes and tries to set colorspace based
        on regex rules in presets
        """
        changes = dict()
        changes = {}
        for n in nuke.allNodes():
            file = nuke.filename(n)
            if not n.Class() == "Read":
            if n.Class() != "Read":
                continue

            # load nuke presets for Read's colorspace
            read_clrs_presets = config.get_init_presets()["colorspace"].get(
                "nuke", {}).get("read", {})

            # check if any colorspace presets for read is mathing
            preset_clrsp = next((read_clrs_presets[k]
                                 for k in read_clrs_presets
                                 if bool(re.search(k, file))),
                                None)
            preset_clrsp = None

            for input in read_clrs_inputs:
                if not bool(re.search(input["regex"], file)):
                    continue
                preset_clrsp = input["colorspace"]

            log.debug(preset_clrsp)
            if preset_clrsp is not None:
                current = n["colorspace"].value()
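The Read-node loop above walks the `regexInputs` rules and keeps overwriting `preset_clrsp`, so when several rules match one file path, the last matching rule wins. That resolution order in isolation (the rules list below is hypothetical, not from the project settings):

```python
import re

def resolve_read_colorspace(filepath, read_clrs_inputs):
    preset_clrsp = None
    for rule in read_clrs_inputs:
        if not re.search(rule["regex"], filepath):
            continue
        preset_clrsp = rule["colorspace"]  # later matches overwrite earlier ones
    return preset_clrsp

rules = [
    {"regex": r"\.exr$", "colorspace": "linear"},
    {"regex": r"_plate_", "colorspace": "sRGB"},
]
print(resolve_read_colorspace("/shots/sh010_plate_v001.exr", rules))
# -> sRGB (both rules match; the later rule wins)
```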
@@ -857,13 +869,15 @@ class WorkfileSettings(object):
    def set_colorspace(self):
        ''' Setting colorpace following presets
        '''
        nuke_colorspace = config.get_init_presets(
        )["colorspace"].get("nuke", None)
        # get imageio
        imageio = get_anatomy_settings(opnl.project_name)["imageio"]
        nuke_colorspace = imageio["nuke"]

        try:
            self.set_root_colorspace(nuke_colorspace["root"])
            self.set_root_colorspace(nuke_colorspace["workfile"])
        except AttributeError:
            msg = "set_colorspace(): missing `root` settings in template"
            msg = "set_colorspace(): missing `workfile` settings in template"
            nuke.message(msg)

        try:
            self.set_viewers_colorspace(nuke_colorspace["viewer"])

@@ -873,15 +887,14 @@ class WorkfileSettings(object):
            log.error(msg)

        try:
            self.set_writes_colorspace(nuke_colorspace["write"])
        except AttributeError:
            msg = "set_colorspace(): missing `write` settings in template"
            nuke.message(msg)
            log.error(msg)
            self.set_writes_colorspace()
        except AttributeError as _error:
            nuke.message(_error)
            log.error(_error)

        reads = nuke_colorspace.get("read")
        if reads:
            self.set_reads_colorspace(reads)
        read_clrs_inputs = nuke_colorspace["regexInputs"].get("inputs", [])
        if read_clrs_inputs:
            self.set_reads_colorspace(read_clrs_inputs)

        try:
            for key in nuke_colorspace:

@@ -1063,15 +1076,14 @@ class WorkfileSettings(object):
    def set_favorites(self):
        work_dir = os.getenv("AVALON_WORKDIR")
        asset = os.getenv("AVALON_ASSET")
        project = os.getenv("AVALON_PROJECT")
        favorite_items = OrderedDict()

        # project
        # get project's root and split to parts
        projects_root = os.path.normpath(work_dir.split(
            project)[0])
            opnl.project_name)[0])
        # add project name
        project_dir = os.path.join(projects_root, project) + "/"
        project_dir = os.path.join(projects_root, opnl.project_name) + "/"
        # add to favorites
        favorite_items.update({"Project dir": project_dir.replace("\\", "/")})

@@ -1121,13 +1133,13 @@ def get_write_node_template_attr(node):
    data['avalon'] = avalon.nuke.read(
        node)
    data_preset = {
        "class": data['avalon']['family'],
        "families": data['avalon']['families'],
        "preset": data['avalon']['families']  # omit < 2.0.0v
        "nodeclass": data['avalon']['family'],
        "families": [data['avalon']['families']],
        "creator": data['avalon']['creator']
    }

    # get template data
    nuke_imageio_writes = get_node_imageio_setting(**data_preset)
    nuke_imageio_writes = get_created_node_imageio_setting(**data_preset)

    # collecting correct data
    correct_data = OrderedDict({
@@ -1223,8 +1235,7 @@ class ExporterReview:
        """
        anlib.reset_selection()
        ipn_orig = None
        for v in [n for n in nuke.allNodes()
                  if "Viewer" == n.Class()]:
        for v in nuke.allNodes(filter="Viewer"):
            ip = v['input_process'].getValue()
            ipn = v['input_process_node'].getValue()
            if "VIEWER_INPUT" not in ipn and ip:

@@ -1637,8 +1648,8 @@ def launch_workfiles_app():
    if not open_at_start:
        return

    if not self.workfiles_launched:
        self.workfiles_launched = True
    if not opnl.workfiles_launched:
        opnl.workfiles_launched = True
        workfiles.show(os.environ["AVALON_WORKDIR"])


@@ -26,9 +26,9 @@ def install():
    menu.addCommand(
        name,
        workfiles.show,
        index=(rm_item[0])
        index=2
    )

    menu.addSeparator(index=3)
    # replace reset resolution from avalon core to pype's
    name = "Reset Resolution"
    new_name = "Set Resolution"

@@ -63,16 +63,7 @@ def install():
    # add colorspace menu item
    name = "Set Colorspace"
    menu.addCommand(
        name, lambda: WorkfileSettings().set_colorspace(),
        index=(rm_item[0] + 2)
    )
    log.debug("Adding menu item: {}".format(name))

    # add workfile builder menu item
    name = "Build Workfile"
    menu.addCommand(
        name, lambda: BuildWorkfile().process(),
        index=(rm_item[0] + 7)
        name, lambda: WorkfileSettings().set_colorspace()
    )
    log.debug("Adding menu item: {}".format(name))

@@ -80,11 +71,20 @@ def install():
    name = "Apply All Settings"
    menu.addCommand(
        name,
        lambda: WorkfileSettings().set_context_settings(),
        index=(rm_item[0] + 3)
        lambda: WorkfileSettings().set_context_settings()
    )
    log.debug("Adding menu item: {}".format(name))

    menu.addSeparator()

    # add workfile builder menu item
    name = "Build Workfile"
    menu.addCommand(
        name, lambda: BuildWorkfile().process()
    )
    log.debug("Adding menu item: {}".format(name))


    # adding shortcuts
    add_shortcuts_from_presets()
Some files were not shown because too many files have changed in this diff.