mirror of
https://github.com/ynput/ayon-core.git
synced 2025-12-25 05:14:40 +01:00
Merge branch 'develop' into feature/OP-2065_Validate-third-party-before-build
This commit is contained in:
commit 8a947e708e
221 changed files with 14931 additions and 1669 deletions
93 CHANGELOG.md

@@ -1,44 +1,83 @@
# Changelog

## [3.7.0-nightly.9](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.8.0-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.7.0...HEAD)

**🆕 New features**

- Flame: OpenTimelineIO Export Modul [\#2398](https://github.com/pypeclub/OpenPype/pull/2398)

**🚀 Enhancements**

- Photoshop: Move implementation to OpenPype [\#2510](https://github.com/pypeclub/OpenPype/pull/2510)
- TimersManager: Move module one hierarchy higher [\#2501](https://github.com/pypeclub/OpenPype/pull/2501)
- Ftrack: Event handlers settings [\#2496](https://github.com/pypeclub/OpenPype/pull/2496)
- Tools: Fix style and modality of errors in loader and creator [\#2489](https://github.com/pypeclub/OpenPype/pull/2489)
- Project Manager: Remove project button cleanup [\#2482](https://github.com/pypeclub/OpenPype/pull/2482)
- Tools: Be able to change models of tasks and assets widgets [\#2475](https://github.com/pypeclub/OpenPype/pull/2475)
- Publish pype: Reduce publish process defering [\#2464](https://github.com/pypeclub/OpenPype/pull/2464)
- Maya: Improve speed of Collect History logic [\#2460](https://github.com/pypeclub/OpenPype/pull/2460)
- Maya: Validate Rig Controllers - fix Error: in script editor [\#2459](https://github.com/pypeclub/OpenPype/pull/2459)
- Maya: Optimize Validate Locked Normals speed for dense polymeshes [\#2457](https://github.com/pypeclub/OpenPype/pull/2457)
- Fix \#2453 Refactor missing \_get\_reference\_node method [\#2455](https://github.com/pypeclub/OpenPype/pull/2455)
- Houdini: Remove broken unique name counter [\#2450](https://github.com/pypeclub/OpenPype/pull/2450)
- Maya: Improve lib.polyConstraint performance when Select tool is not the active tool context [\#2447](https://github.com/pypeclub/OpenPype/pull/2447)
- Maya : add option to not group reference in ReferenceLoader [\#2383](https://github.com/pypeclub/OpenPype/pull/2383)

**🐛 Bug fixes**

- General: Settings work if OpenPypeVersion is available [\#2494](https://github.com/pypeclub/OpenPype/pull/2494)
- General: PYTHONPATH may break OpenPype dependencies [\#2493](https://github.com/pypeclub/OpenPype/pull/2493)
- Workfiles tool: Files widget show files on first show [\#2488](https://github.com/pypeclub/OpenPype/pull/2488)
- General: Custom template paths filter fix [\#2483](https://github.com/pypeclub/OpenPype/pull/2483)
- Loader: Remove always on top flag in tray [\#2480](https://github.com/pypeclub/OpenPype/pull/2480)
- General: Anatomy does not return root envs as unicode [\#2465](https://github.com/pypeclub/OpenPype/pull/2465)

**Merged pull requests:**

- General: Modules import function output fix [\#2492](https://github.com/pypeclub/OpenPype/pull/2492)
- AE: fix hiding of alert window below Publish [\#2491](https://github.com/pypeclub/OpenPype/pull/2491)
- Maya: Validate NGONs re-use polyConstraint code from openpype.host.maya.api.lib [\#2458](https://github.com/pypeclub/OpenPype/pull/2458)
- Version handling [\#2363](https://github.com/pypeclub/OpenPype/pull/2363)

## [3.7.0](https://github.com/pypeclub/OpenPype/tree/3.7.0) (2022-01-04)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.14...3.7.0)

**Deprecated:**

- General: Default modules hierarchy n2 [\#2368](https://github.com/pypeclub/OpenPype/pull/2368)

**🆕 New features**

- Settings UI use OpenPype styles [\#2296](https://github.com/pypeclub/OpenPype/pull/2296)

**🚀 Enhancements**

- General: Workdir extra folders [\#2462](https://github.com/pypeclub/OpenPype/pull/2462)
- Photoshop: New style validations for New publisher [\#2429](https://github.com/pypeclub/OpenPype/pull/2429)
- General: Environment variables groups [\#2424](https://github.com/pypeclub/OpenPype/pull/2424)
- Unreal: Dynamic menu created in Python [\#2422](https://github.com/pypeclub/OpenPype/pull/2422)
- Settings UI: Hyperlinks to settings [\#2420](https://github.com/pypeclub/OpenPype/pull/2420)
- Modules: JobQueue module moved one hierarchy level higher [\#2419](https://github.com/pypeclub/OpenPype/pull/2419)
- TimersManager: Start timer post launch hook [\#2418](https://github.com/pypeclub/OpenPype/pull/2418)
- General: Run applications as separate processes under linux [\#2408](https://github.com/pypeclub/OpenPype/pull/2408)
- Ftrack: Check existence of object type on recreation [\#2404](https://github.com/pypeclub/OpenPype/pull/2404)
- Enhancement: Global cleanup plugin that explicitly remove paths from context [\#2402](https://github.com/pypeclub/OpenPype/pull/2402)
- General: MongoDB ability to specify replica set groups [\#2401](https://github.com/pypeclub/OpenPype/pull/2401)
- Flame: moving `utility\_scripts` to api folder also with `scripts` [\#2385](https://github.com/pypeclub/OpenPype/pull/2385)
- Centos 7 dependency compatibility [\#2384](https://github.com/pypeclub/OpenPype/pull/2384)
- Enhancement: Settings: Use project settings values from another project [\#2382](https://github.com/pypeclub/OpenPype/pull/2382)
- Blender 3: Support auto install for new blender version [\#2377](https://github.com/pypeclub/OpenPype/pull/2377)
- Maya add render image path to settings [\#2375](https://github.com/pypeclub/OpenPype/pull/2375)
- Settings: Webpublisher in hosts enum [\#2367](https://github.com/pypeclub/OpenPype/pull/2367)
- Hiero: python3 compatibility [\#2365](https://github.com/pypeclub/OpenPype/pull/2365)
- Burnins: Be able recognize mxf OPAtom format [\#2361](https://github.com/pypeclub/OpenPype/pull/2361)
- Maya: Add is\_static\_image\_plane and is\_in\_all\_views option in imagePlaneLoader [\#2356](https://github.com/pypeclub/OpenPype/pull/2356)
- Local settings: Copyable studio paths [\#2349](https://github.com/pypeclub/OpenPype/pull/2349)
- Assets Widget: Clear model on project change [\#2345](https://github.com/pypeclub/OpenPype/pull/2345)
- General: OpenPype default modules hierarchy [\#2338](https://github.com/pypeclub/OpenPype/pull/2338)
- General: FFprobe error exception contain original error message [\#2328](https://github.com/pypeclub/OpenPype/pull/2328)
- Resolve: Add experimental button to menu [\#2325](https://github.com/pypeclub/OpenPype/pull/2325)
- General: Reduce vendor imports [\#2305](https://github.com/pypeclub/OpenPype/pull/2305)
- Ftrack: Synchronize input links [\#2287](https://github.com/pypeclub/OpenPype/pull/2287)

**🐛 Bug fixes**

- TVPaint: Create render layer dialog is in front [\#2471](https://github.com/pypeclub/OpenPype/pull/2471)
- Short Pyblish plugin path [\#2428](https://github.com/pypeclub/OpenPype/pull/2428)
- PS: Introduced settings for invalid characters to use in ValidateNaming plugin [\#2417](https://github.com/pypeclub/OpenPype/pull/2417)
- Settings UI: Breadcrumbs path does not create new entities [\#2416](https://github.com/pypeclub/OpenPype/pull/2416)
- AfterEffects: Variant 2022 is in defaults but missing in schemas [\#2412](https://github.com/pypeclub/OpenPype/pull/2412)
- Nuke: baking representations was not additive [\#2406](https://github.com/pypeclub/OpenPype/pull/2406)
- General: Fix access to environments from default settings [\#2403](https://github.com/pypeclub/OpenPype/pull/2403)
- Fix: Placeholder Input color set fix [\#2399](https://github.com/pypeclub/OpenPype/pull/2399)
- Settings: Fix state change of wrapper label [\#2396](https://github.com/pypeclub/OpenPype/pull/2396)
@@ -48,43 +87,25 @@

- Nuke: fixing menu re-drawing during context change [\#2374](https://github.com/pypeclub/OpenPype/pull/2374)
- Webpublisher: Fix assignment of families of TVpaint instances [\#2373](https://github.com/pypeclub/OpenPype/pull/2373)
- Nuke: fixing node name based on switched asset name [\#2369](https://github.com/pypeclub/OpenPype/pull/2369)
- JobQueue: Fix loading of settings [\#2362](https://github.com/pypeclub/OpenPype/pull/2362)
- Tools: Placeholder color [\#2359](https://github.com/pypeclub/OpenPype/pull/2359)
- Launcher: Minimize button on MacOs [\#2355](https://github.com/pypeclub/OpenPype/pull/2355)
- StandalonePublisher: Fix import of constant [\#2354](https://github.com/pypeclub/OpenPype/pull/2354)
- Adobe products show issue [\#2347](https://github.com/pypeclub/OpenPype/pull/2347)
- Maya Look Assigner: Fix Python 3 compatibility [\#2343](https://github.com/pypeclub/OpenPype/pull/2343)
- Remove wrongly used host for hook [\#2342](https://github.com/pypeclub/OpenPype/pull/2342)
- Tools: Use Qt context on tools show [\#2340](https://github.com/pypeclub/OpenPype/pull/2340)
- Flame: Fix default argument value in custom dictionary [\#2339](https://github.com/pypeclub/OpenPype/pull/2339)
- Timers Manager: Disable auto stop timer on linux platform [\#2334](https://github.com/pypeclub/OpenPype/pull/2334)
- Fix - provider icons are pulled from a folder [\#2326](https://github.com/pypeclub/OpenPype/pull/2326)
- Royal Render: Fix plugin order and OpenPype auto-detection [\#2291](https://github.com/pypeclub/OpenPype/pull/2291)
- Houdini: Fix HDA creation [\#2350](https://github.com/pypeclub/OpenPype/pull/2350)

**Merged pull requests:**

- Forced cx\_freeze to include sqlite3 into build [\#2432](https://github.com/pypeclub/OpenPype/pull/2432)
- Maya: Replaced PATH usage with vendored oiio path for maketx utility [\#2405](https://github.com/pypeclub/OpenPype/pull/2405)
- \[Fix\]\[MAYA\] Handle message type attribute within CollectLook [\#2394](https://github.com/pypeclub/OpenPype/pull/2394)
- Add validator to check correct version of extension for PS and AE [\#2387](https://github.com/pypeclub/OpenPype/pull/2387)
- Linux : flip updating submodules logic [\#2357](https://github.com/pypeclub/OpenPype/pull/2357)
- Update of avalon-core [\#2346](https://github.com/pypeclub/OpenPype/pull/2346)
- Maya: configurable model top level validation [\#2321](https://github.com/pypeclub/OpenPype/pull/2321)

## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.1...3.6.4)

**🐛 Bug fixes**

- Nuke: inventory update removes all loaded read nodes [\#2294](https://github.com/pypeclub/OpenPype/pull/2294)

## [3.6.3](https://github.com/pypeclub/OpenPype/tree/3.6.3) (2021-11-19)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.3-nightly.1...3.6.3)

**🐛 Bug fixes**

- Deadline: Fix publish targets [\#2280](https://github.com/pypeclub/OpenPype/pull/2280)

## [3.6.2](https://github.com/pypeclub/OpenPype/tree/3.6.2) (2021-11-18)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.2-nightly.2...3.6.2)
49 app_launcher.py Normal file

@@ -0,0 +1,49 @@
"""Launch process that is not child process of python or OpenPype.
|
||||
|
||||
This is written for linux distributions where process tree may affect what
|
||||
is when closed or blocked to be closed.
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
import subprocess
|
||||
import json
|
||||
|
||||
|
||||
def main(input_json_path):
|
||||
"""Read launch arguments from json file and launch the process.
|
||||
|
||||
Expected that json contains "args" key with string or list of strings.
|
||||
|
||||
Arguments are converted to string using `list2cmdline`. At the end is added
|
||||
`&` which will cause that launched process is detached and running as
|
||||
"background" process.
|
||||
|
||||
## Notes
|
||||
@iLLiCiT: This should be possible to do with 'disown' or double forking but
|
||||
I didn't find a way how to do it properly. Disown didn't work as
|
||||
expected for me and double forking killed parent process which is
|
||||
unexpected too.
|
||||
"""
|
||||
with open(input_json_path, "r") as stream:
|
||||
data = json.load(stream)
|
||||
|
||||
# Change environment variables
|
||||
env = data.get("env") or {}
|
||||
for key, value in env.items():
|
||||
os.environ[key] = value
|
||||
|
||||
# Prepare launch arguments
|
||||
args = data["args"]
|
||||
if isinstance(args, list):
|
||||
args = subprocess.list2cmdline(args)
|
||||
|
||||
# Run the command as background process
|
||||
shell_cmd = args + " &"
|
||||
os.system(shell_cmd)
|
||||
sys.exit(0)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Expect that last argument is path to a json with launch args information
|
||||
main(sys.argv[-1])
|
||||
|
|
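For context, `app_launcher.py` is driven by writing the JSON payload it expects and passing the file path as the last argument. A minimal sketch, with a made-up blender command and env var as placeholders; nothing is launched here, the snippet only mirrors the argument conversion done in `main()`:

```python
import json
import subprocess
import tempfile

# Payload shaped like what main() expects: "args" plus optional "env".
payload = {
    "args": ["/usr/bin/blender", "--background", "-noaudio"],
    "env": {"OPENPYPE_DEBUG": "1"},
}

# Write it to a temp json file, as a hypothetical caller would.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(payload, f)
    json_path = f.name

# Mirror the conversion done inside main(): list -> single command line.
args = payload["args"]
if isinstance(args, list):
    args = subprocess.list2cmdline(args)
shell_cmd = args + " &"
print(shell_cmd)  # /usr/bin/blender --background -noaudio &
```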
@@ -6,9 +6,18 @@ import sys

```
os.chdir(os.path.dirname(__file__))  # for override sys.path in Deadline

from .bootstrap_repos import BootstrapRepos
from .bootstrap_repos import (
    BootstrapRepos,
    OpenPypeVersion
)
from .version import __version__ as version

# Store OpenPypeVersion to 'sys.modules'
# - this makes it available in OpenPype processes without modifying
#   'sys.path' or 'PYTHONPATH'
if "OpenPypeVersion" not in sys.modules:
    sys.modules["OpenPypeVersion"] = OpenPypeVersion


def open_dialog():
    """Show Igniter dialog."""
```
@@ -22,7 +31,9 @@ def open_dialog():

```
    if scale_attr is not None:
        QtWidgets.QApplication.setAttribute(scale_attr)

    app = QtWidgets.QApplication(sys.argv)
    app = QtWidgets.QApplication.instance()
    if not app:
        app = QtWidgets.QApplication(sys.argv)

    d = InstallDialog()
    d.open()
```
@@ -43,7 +54,9 @@ def open_update_window(openpype_version):

```
    if scale_attr is not None:
        QtWidgets.QApplication.setAttribute(scale_attr)

    app = QtWidgets.QApplication(sys.argv)
    app = QtWidgets.QApplication.instance()
    if not app:
        app = QtWidgets.QApplication(sys.argv)

    d = UpdateWindow(version=openpype_version)
    d.open()
```
@@ -53,9 +66,32 @@ def open_update_window(openpype_version):

```python
    return version_path


def show_message_dialog(title, message):
    """Show dialog with a message and title to user."""
    if os.getenv("OPENPYPE_HEADLESS_MODE"):
        print("!!! Can't open dialog in headless mode. Exiting.")
        sys.exit(1)
    from Qt import QtWidgets, QtCore
    from .message_dialog import MessageDialog

    scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)
    if scale_attr is not None:
        QtWidgets.QApplication.setAttribute(scale_attr)

    app = QtWidgets.QApplication.instance()
    if not app:
        app = QtWidgets.QApplication(sys.argv)

    dialog = MessageDialog(title, message)
    dialog.open()

    app.exec_()


__all__ = [
    "BootstrapRepos",
    "open_dialog",
    "open_update_window",
    "show_message_dialog",
    "version"
]
```
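The repeated edit in the hunks above replaces unconditional `QApplication` construction with a reuse-if-exists guard, since Qt allows only one `QApplication` per process. A toy model of that guard, with a stub class standing in for Qt (this is an illustration of the pattern, not Qt itself):

```python
class App:
    """Stand-in for QtWidgets.QApplication, only to illustrate the guard."""
    _instance = None

    def __init__(self):
        # Registering on construction mimics Qt's process-wide singleton.
        type(self)._instance = self

    @classmethod
    def instance(cls):
        return cls._instance


# First caller: no instance exists yet, so one is created.
app = App.instance()
if not app:
    app = App()
first = app

# Second caller: the existing instance is reused, not replaced.
app = App.instance()
if not app:
    app = App()
print(app is first)  # True
```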
@@ -22,7 +22,10 @@ from .user_settings import (

```
    OpenPypeSecureRegistry,
    OpenPypeSettingsRegistry
)
from .tools import get_openpype_path_from_db
from .tools import (
    get_openpype_path_from_db,
    get_expected_studio_version_str
)


LOG_INFO = 0
```
@@ -60,6 +63,7 @@ class OpenPypeVersion(semver.VersionInfo):

```python
    staging = False
    path = None
    _VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$")  # noqa: E501
    _installed_version = None

    def __init__(self, *args, **kwargs):
        """Create OpenPype version.
```
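`_VERSION_REGEX` follows the grammar from the Semantic Versioning specification. A quick illustration of what it captures, with the pattern copied from the class attribute above:

```python
import re

# Same pattern as _VERSION_REGEX above, split for readability.
VERSION_REGEX = re.compile(
    r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)"
    r"(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

# A nightly build string like those in the changelog above.
m = VERSION_REGEX.match("3.8.0-nightly.3+build.42")
print(m.group("major"), m.group("prerelease"), m.group("buildmetadata"))
# 3 nightly.3 build.42
```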
@@ -232,6 +236,390 @@ class OpenPypeVersion(semver.VersionInfo):

```python
        else:
            return hash(str(self))

    @staticmethod
    def is_version_in_dir(
            dir_item: Path, version: OpenPypeVersion) -> Tuple[bool, str]:
        """Test if path item is OpenPype version matching detected version.

        If item is a directory that might (based on its name) contain an
        OpenPype version, check if it really does contain OpenPype and that
        their versions match.

        Args:
            dir_item (Path): Directory to test.
            version (OpenPypeVersion): OpenPype version detected
                from name.

        Returns:
            Tuple: State and reason, True if it is valid OpenPype version,
                False otherwise.

        """
        try:
            # add one 'openpype' level as inside dir there should
            # be many other repositories.
            version_str = OpenPypeVersion.get_version_string_from_directory(
                dir_item)  # noqa: E501
            version_check = OpenPypeVersion(version=version_str)
        except ValueError:
            return False, f"cannot determine version from {dir_item}"

        version_main = version_check.get_main_version()
        detected_main = version.get_main_version()
        if version_main != detected_main:
            return False, (f"dir version ({version}) and "
                           f"its content version ({version_check}) "
                           "don't match. Skipping.")
        return True, "Versions match"

    @staticmethod
    def is_version_in_zip(
            zip_item: Path, version: OpenPypeVersion) -> Tuple[bool, str]:
        """Test if zip path is OpenPype version matching detected version.

        Open zip file, look inside and parse version from OpenPype
        inside it. If there is none, or it is different from the
        version specified in the file name, skip it.

        Args:
            zip_item (Path): Zip file to test.
            version (OpenPypeVersion): Pype version detected
                from name.

        Returns:
            Tuple: State and reason, True if it is valid OpenPype version,
                False otherwise.

        """
        # skip non-zip files
        if zip_item.suffix.lower() != ".zip":
            return False, "Not a zip"

        try:
            with ZipFile(zip_item, "r") as zip_file:
                with zip_file.open(
                        "openpype/version.py") as version_file:
                    zip_version = {}
                    exec(version_file.read(), zip_version)
                    try:
                        version_check = OpenPypeVersion(
                            version=zip_version["__version__"])
                    except ValueError as e:
                        return False, str(e)

                    version_main = version_check.get_main_version()
                    detected_main = version.get_main_version()

                    if version_main != detected_main:
                        return False, (f"zip version ({version}) "
                                       f"and its content version "
                                       f"({version_check}) "
                                       "don't match. Skipping.")
        except BadZipFile:
            return False, f"{zip_item} is not a zip file"
        except KeyError:
            return False, "Zip does not contain OpenPype"
        return True, "Versions match"
```
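The trick used by `is_version_in_zip` above, executing a `version.py` pulled out of the archive to recover `__version__`, can be exercised standalone. A small sketch with an in-memory zip (the file layout mirrors the `openpype/version.py` path the method reads):

```python
import io
from zipfile import ZipFile

# Build an in-memory zip that mimics a packaged OpenPype version file.
buffer = io.BytesIO()
with ZipFile(buffer, "w") as zf:
    zf.writestr("openpype/version.py", '__version__ = "3.7.0"\n')

# Read it back the same way is_version_in_zip does: exec the file's
# source in an empty namespace and pick __version__ out of it.
with ZipFile(buffer, "r") as zf:
    with zf.open("openpype/version.py") as version_file:
        namespace = {}
        exec(version_file.read(), namespace)

print(namespace["__version__"])  # 3.7.0
```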
```python
    @staticmethod
    def get_version_string_from_directory(repo_dir: Path) -> Union[str, None]:
        """Get version of OpenPype in given directory.

        Note: in frozen OpenPype installed in user data dir, this must point
        one level deeper as it is:
            `openpype-version-v3.0.0/openpype/version.py`

        Args:
            repo_dir (Path): Path to OpenPype repo.

        Returns:
            str: version string.
            None: if OpenPype is not found.

        """
        # try to find version
        version_file = Path(repo_dir) / "openpype" / "version.py"
        if not version_file.exists():
            return None

        version = {}
        with version_file.open("r") as fp:
            exec(fp.read(), version)

        return version['__version__']
```
```python
    @classmethod
    def get_openpype_path(cls):
        """Path to openpype zip directory.

        Path can be set through environment variable 'OPENPYPE_PATH', which
        is set during start of OpenPype if it is not already set.
        """
        return os.getenv("OPENPYPE_PATH")

    @classmethod
    def openpype_path_is_set(cls):
        """Path to OpenPype zip directory is set."""
        if cls.get_openpype_path():
            return True
        return False

    @classmethod
    def openpype_path_is_accessible(cls):
        """Path to OpenPype zip directory is accessible.

        Exists for this machine.
        """
        # First check if is set
        if not cls.openpype_path_is_set():
            return False

        # Validate existence
        if Path(cls.get_openpype_path()).exists():
            return True
        return False
```
```python
    @classmethod
    def get_local_versions(
        cls, production: bool = None, staging: bool = None
    ) -> List:
        """Get all versions available on this machine.

        Arguments give ability to specify if filtering is needed. If both
        arguments are set to None all found versions are returned.

        Args:
            production (bool): Return production versions.
            staging (bool): Return staging versions.
        """
        # Return all local versions if arguments are set to None
        if production is None and staging is None:
            production = True
            staging = True

        elif production is None and not staging:
            production = True

        elif staging is None and not production:
            staging = True

        # Just return empty output if both are disabled
        if not production and not staging:
            return []

        dir_to_search = Path(user_data_dir("openpype", "pypeclub"))
        versions = OpenPypeVersion.get_versions_from_directory(
            dir_to_search
        )
        filtered_versions = []
        for version in versions:
            if version.is_staging():
                if staging:
                    filtered_versions.append(version)
            elif production:
                filtered_versions.append(version)
        return list(sorted(set(filtered_versions)))
```
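The `None` handling at the top of `get_local_versions` (and of `get_remote_versions` below it) implements a small truth table: an unset flag defaults to True unless the other flag was explicitly requested alone. A standalone sketch of just that resolution logic, with names local to the example:

```python
def resolve_flags(production=None, staging=None):
    """Mirror the default resolution used by the version-listing methods."""
    if production is None and staging is None:
        # Nothing requested: return everything.
        production = True
        staging = True
    elif production is None and not staging:
        production = True
    elif staging is None and not production:
        staging = True
    return bool(production), bool(staging)


print(resolve_flags())                  # (True, True)
print(resolve_flags(staging=True))      # (False, True) - staging only
print(resolve_flags(production=True))   # (True, False) - production only
```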
```python
    @classmethod
    def get_remote_versions(
        cls, production: bool = None, staging: bool = None
    ) -> List:
        """Get all versions available in OpenPype Path.

        Arguments give ability to specify if filtering is needed. If both
        arguments are set to None all found versions are returned.

        Args:
            production (bool): Return production versions.
            staging (bool): Return staging versions.
        """
        # Return all remote versions if arguments are set to None
        if production is None and staging is None:
            production = True
            staging = True

        elif production is None and not staging:
            production = True

        elif staging is None and not production:
            staging = True

        # Just return empty output if both are disabled
        if not production and not staging:
            return []

        dir_to_search = None
        if cls.openpype_path_is_accessible():
            dir_to_search = Path(cls.get_openpype_path())
        else:
            registry = OpenPypeSettingsRegistry()
            try:
                registry_dir = Path(str(registry.get_item("openPypePath")))
                if registry_dir.exists():
                    dir_to_search = registry_dir

            except ValueError:
                # nothing found in registry, we'll use data dir
                pass

        if not dir_to_search:
            return []

        versions = cls.get_versions_from_directory(dir_to_search)
        filtered_versions = []
        for version in versions:
            if version.is_staging():
                if staging:
                    filtered_versions.append(version)
            elif production:
                filtered_versions.append(version)
        return list(sorted(set(filtered_versions)))
```
```python
    @staticmethod
    def get_versions_from_directory(openpype_dir: Path) -> List:
        """Get all detected OpenPype versions in directory.

        Args:
            openpype_dir (Path): Directory to scan.

        Returns:
            list of OpenPypeVersion

        Throws:
            ValueError: if invalid path is specified.

        """
        if not openpype_dir.exists() or not openpype_dir.is_dir():
            raise ValueError("specified directory is invalid")

        _openpype_versions = []
        # iterate over directory in first level and find all that might
        # contain OpenPype.
        for item in openpype_dir.iterdir():

            # if file, strip extension; in case of dir do not.
            name = item.name if item.is_dir() else item.stem
            result = OpenPypeVersion.version_in_str(name)

            if result:
                detected_version: OpenPypeVersion
                detected_version = result

                if item.is_dir() and not OpenPypeVersion.is_version_in_dir(
                    item, detected_version
                )[0]:
                    continue

                if item.is_file() and not OpenPypeVersion.is_version_in_zip(
                    item, detected_version
                )[0]:
                    continue

                detected_version.path = item
                _openpype_versions.append(detected_version)

        return sorted(_openpype_versions)
```
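A hedged, self-contained sketch of the same first-level scan: the version-in-name detection below is a simplified stand-in for `version_in_str`, using a plain regex instead of the full `OpenPypeVersion` class, and it skips the content validation steps:

```python
import re
import tempfile
from pathlib import Path

VERSION_RE = re.compile(r"(\d+)\.(\d+)\.(\d+)")


def versions_in_directory(openpype_dir: Path):
    """Collect version strings from first-level item names (simplified)."""
    if not openpype_dir.exists() or not openpype_dir.is_dir():
        raise ValueError("specified directory is invalid")
    found = []
    for item in openpype_dir.iterdir():
        # Files have their extension stripped, directories do not.
        name = item.name if item.is_dir() else item.stem
        match = VERSION_RE.search(name)
        if match:
            found.append(match.group(0))
    return sorted(found)


root = Path(tempfile.mkdtemp())
(root / "openpype-v3.6.4").mkdir()       # unpacked version directory
(root / "openpype-v3.7.0.zip").touch()   # zipped version
(root / "readme.txt").touch()            # ignored: no version in name
print(versions_in_directory(root))  # ['3.6.4', '3.7.0']
```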
```python
    @staticmethod
    def get_installed_version_str() -> str:
        """Get version of local OpenPype."""

        version = {}
        path = Path(os.environ["OPENPYPE_ROOT"]) / "openpype" / "version.py"
        with open(path, "r") as fp:
            exec(fp.read(), version)
        return version["__version__"]

    @classmethod
    def get_installed_version(cls):
        """Get version of OpenPype inside build."""
        if cls._installed_version is None:
            installed_version_str = cls.get_installed_version_str()
            if installed_version_str:
                cls._installed_version = OpenPypeVersion(
                    version=installed_version_str,
                    path=Path(os.environ["OPENPYPE_ROOT"])
                )
        return cls._installed_version
```
```python
    @staticmethod
    def get_latest_version(
        staging: bool = False,
        local: bool = None,
        remote: bool = None
    ) -> OpenPypeVersion:
        """Get latest available version.

        The version does not contain information about path and source.

        This is a utility to get the latest version from all found. The
        build version is not listed if staging is enabled.

        Arguments 'local' and 'remote' define if local and remote repository
        versions are used. All versions are used if both are not set (or set
        to 'None'). If only one of them is set to 'True' the other is
        disabled. It is possible to set both to 'True' (same as both set to
        None) and to 'False'; in that case only the build version can be
        used.

        Args:
            staging (bool, optional): List staging versions if True.
            local (bool, optional): List local versions if True.
            remote (bool, optional): List remote versions if True.
        """
        if local is None and remote is None:
            local = True
            remote = True

        elif local is None and not remote:
            local = True

        elif remote is None and not local:
            remote = True

        installed_version = OpenPypeVersion.get_installed_version()
        local_versions = []
        remote_versions = []
        if local:
            local_versions = OpenPypeVersion.get_local_versions(
                staging=staging
            )
        if remote:
            remote_versions = OpenPypeVersion.get_remote_versions(
                staging=staging
            )
        all_versions = local_versions + remote_versions
        if not staging:
            all_versions.append(installed_version)

        if not all_versions:
            return None

        all_versions.sort()
        return all_versions[-1]
```
```python
    @classmethod
    def get_expected_studio_version(cls, staging=False, global_settings=None):
        """Expected OpenPype version that should be used at the moment.

        If version is not defined in settings the latest found version is
        used.

        Using precached global settings is needed for usage inside OpenPype.

        Args:
            staging (bool): Staging version or production version.
            global_settings (dict): Optional precached global settings.

        Returns:
            OpenPypeVersion: Version that should be used.
        """
        result = get_expected_studio_version_str(staging, global_settings)
        if not result:
            return None
        return OpenPypeVersion(version=result)


class BootstrapRepos:
    """Class for bootstrapping local OpenPype installation.
```
@@ -301,16 +689,6 @@ class BootstrapRepos:

```python
            return v.path
        return None

    @staticmethod
    def get_local_live_version() -> str:
        """Get version of local OpenPype."""

        version = {}
        path = Path(os.environ["OPENPYPE_ROOT"]) / "openpype" / "version.py"
        with open(path, "r") as fp:
            exec(fp.read(), version)
        return version["__version__"]

    @staticmethod
    def get_version(repo_dir: Path) -> Union[str, None]:
        """Get version of OpenPype in given directory.
```
@@ -358,7 +736,7 @@

```
        # version and use it as a source. Otherwise repo_dir is user
        # entered location.
        if not repo_dir:
            version = self.get_local_live_version()
            version = OpenPypeVersion.get_installed_version_str()
            repo_dir = self.live_repo_dir
        else:
            version = self.get_version(repo_dir)
```
@@ -384,7 +762,7 @@

```
        destination = self._move_zip_to_data_dir(temp_zip)

        return OpenPypeVersion(version=version, path=destination)
        return OpenPypeVersion(version=version, path=Path(destination))

    def _move_zip_to_data_dir(self, zip_file) -> Union[None, Path]:
        """Move zip with OpenPype version to user data directory.
```
@@ -734,6 +1112,65 @@

```python
        os.environ["PYTHONPATH"] = os.pathsep.join(paths)

    @staticmethod
    def find_openpype_version(version, staging):
        if isinstance(version, str):
            version = OpenPypeVersion(version=version)

        installed_version = OpenPypeVersion.get_installed_version()
        if installed_version == version:
            return installed_version

        local_versions = OpenPypeVersion.get_local_versions(
            staging=staging, production=not staging
        )
        zip_version = None
        for local_version in local_versions:
            if local_version == version:
                if local_version.path.suffix.lower() == ".zip":
                    zip_version = local_version
                else:
                    return local_version

        if zip_version is not None:
            return zip_version

        remote_versions = OpenPypeVersion.get_remote_versions(
            staging=staging, production=not staging
        )
        for remote_version in remote_versions:
            if remote_version == version:
                return remote_version
        return None

    @staticmethod
    def find_latest_openpype_version(staging):
        installed_version = OpenPypeVersion.get_installed_version()
        local_versions = OpenPypeVersion.get_local_versions(
            staging=staging
        )
        remote_versions = OpenPypeVersion.get_remote_versions(
            staging=staging
        )
        all_versions = local_versions + remote_versions
        if not staging:
            all_versions.append(installed_version)

        if not all_versions:
            return None

        all_versions.sort()
        latest_version = all_versions[-1]
        if latest_version == installed_version:
            return latest_version

        if not latest_version.path.is_dir():
            for version in local_versions:
                if version == latest_version and version.path.is_dir():
                    latest_version = version
                    break
        return latest_version

    def find_openpype(
        self,
        openpype_path: Union[Path, str] = None,
```
@@ -12,7 +12,8 @@ from Qt.QtCore import QTimer  # noqa
from .install_thread import InstallThread
from .tools import (
    validate_mongo_connection,
-    get_openpype_path_from_db
+    get_openpype_path_from_db,
+    get_openpype_icon_path
)

from .nice_progress_bar import NiceProgressBar

@@ -187,7 +188,6 @@ class InstallDialog(QtWidgets.QDialog):
        current_dir = os.path.dirname(os.path.abspath(__file__))
        roboto_font_path = os.path.join(current_dir, "RobotoMono-Regular.ttf")
        poppins_font_path = os.path.join(current_dir, "Poppins")
-        icon_path = os.path.join(current_dir, "openpype_icon.png")

        # Install roboto font
        QtGui.QFontDatabase.addApplicationFont(roboto_font_path)

@@ -196,6 +196,7 @@ class InstallDialog(QtWidgets.QDialog):
            QtGui.QFontDatabase.addApplicationFont(filename)

        # Load logo
+        icon_path = get_openpype_icon_path()
        pixmap_openpype_logo = QtGui.QPixmap(icon_path)
        # Set logo as icon of window
        self.setWindowIcon(QtGui.QIcon(pixmap_openpype_logo))
igniter/message_dialog.py (new file, 44 lines)
@@ -0,0 +1,44 @@
from Qt import QtWidgets, QtGui

from .tools import (
    load_stylesheet,
    get_openpype_icon_path
)


class MessageDialog(QtWidgets.QDialog):
    """Simple message dialog with title, message and OK button."""
    def __init__(self, title, message):
        super(MessageDialog, self).__init__()

        # Set logo as icon of window
        icon_path = get_openpype_icon_path()
        pixmap_openpype_logo = QtGui.QPixmap(icon_path)
        self.setWindowIcon(QtGui.QIcon(pixmap_openpype_logo))

        # Set title
        self.setWindowTitle(title)

        # Set message
        label_widget = QtWidgets.QLabel(message, self)

        ok_btn = QtWidgets.QPushButton("OK", self)
        btns_layout = QtWidgets.QHBoxLayout()
        btns_layout.addStretch(1)
        btns_layout.addWidget(ok_btn, 0)

        layout = QtWidgets.QVBoxLayout(self)
        layout.addWidget(label_widget, 1)
        layout.addLayout(btns_layout, 0)

        ok_btn.clicked.connect(self._on_ok_clicked)

        self._label_widget = label_widget
        self._ok_btn = ok_btn

    def _on_ok_clicked(self):
        self.close()

    def showEvent(self, event):
        super(MessageDialog, self).showEvent(event)
        self.setStyleSheet(load_stylesheet())
@@ -16,6 +16,11 @@ from pymongo.errors import (
)


class OpenPypeVersionNotFound(Exception):
    """OpenPype version was not found in remote and local repository."""
    pass


def should_add_certificate_path_to_mongo_url(mongo_url):
    """Check if should add ca certificate to mongo url.

@@ -182,6 +187,28 @@ def get_openpype_path_from_db(url: str) -> Union[str, None]:
    return None


def get_expected_studio_version_str(
    staging=False, global_settings=None
) -> str:
    """Version that should be currently used in studio.

    Args:
        staging (bool): Get current version for staging.
        global_settings (dict): Optional precached global settings.

    Returns:
        str: OpenPype version which should be used. Empty string means latest.
    """
    mongo_url = os.environ.get("OPENPYPE_MONGO")
    if global_settings is None:
        global_settings = get_openpype_global_settings(mongo_url)
    if staging:
        key = "staging_version"
    else:
        key = "production_version"
    return global_settings.get(key) or ""


def load_stylesheet() -> str:
    """Load css style sheet.

@@ -192,3 +219,11 @@ def load_stylesheet() -> str:
    stylesheet_path = Path(__file__).parent.resolve() / "stylesheet.css"

    return stylesheet_path.read_text()


def get_openpype_icon_path() -> str:
    """Path to OpenPype icon png file."""
    return os.path.join(
        os.path.dirname(os.path.abspath(__file__)),
        "openpype_icon.png"
    )
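`get_expected_studio_version_str` reduces to picking one of two settings keys and falling back to an empty string, which means "use latest". The selection can be sketched in isolation (the `expected_studio_version` name and the example settings dict are hypothetical; only the key names come from the diff):

```python
def expected_studio_version(global_settings, staging=False):
    # Pick the staging or production key; empty string means "use latest".
    key = "staging_version" if staging else "production_version"
    return global_settings.get(key) or ""


settings = {"production_version": "3.7.0", "staging_version": ""}
print(expected_studio_version(settings))                # "3.7.0"
print(expected_studio_version(settings, staging=True))  # "" -> latest
```

Note the `or ""` also normalizes an explicit `None` in the settings to the "latest" sentinel, so callers only need to handle two cases.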
@@ -72,7 +72,7 @@ class RepairContextAction(pyblish.api.Action):
    is available on the plugin.

    """
-    label = "Repair Context"
+    label = "Repair"
    on = "failed"  # This action is only available on a failed plug-in

    def process(self, context, plugin):
@@ -31,8 +29,6 @@ from .lib import (
)

from .lib.mongo import (
    decompose_url,
    compose_url,
    get_default_components
)

@@ -84,8 +82,6 @@ __all__ = [
    "Anatomy",
    "config",
    "execute",
    "decompose_url",
    "compose_url",
    "get_default_components",
    "ApplicationManager",
    "BuildWorkfile",
@@ -138,7 +138,10 @@ def webpublisherwebserver(debug, executable, upload_dir, host=None, port=None):
@click.option("--asset", help="Asset name", default=None)
@click.option("--task", help="Task name", default=None)
@click.option("--app", help="Application name", default=None)
-def extractenvironments(output_json_path, project, asset, task, app):
+@click.option(
+    "--envgroup", help="Environment group (e.g. \"farm\")", default=None
+)
+def extractenvironments(output_json_path, project, asset, task, app, envgroup):
    """Extract environment variables for entered context to a json file.

    Entered output filepath will be created if does not exists.

@@ -149,7 +152,7 @@ def extractenvironments(output_json_path, project, asset, task, app):
    Context options are "project", "asset", "task", "app"
    """
    PypeCommands.extractenvironments(
-        output_json_path, project, asset, task, app
+        output_json_path, project, asset, task, app, envgroup
    )
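The `--envgroup` change is plain option plumbing: an optional flag that defaults to `None` and is passed straight through to the command implementation. The same shape sketched with stdlib argparse instead of click (all names here are hypothetical, this is not the real CLI):

```python
import argparse


def build_parser():
    # Mirrors the click options: every context option is optional and
    # defaults to None, including the new --envgroup flag.
    parser = argparse.ArgumentParser(prog="extractenvironments")
    parser.add_argument("output_json_path")
    for opt in ("--project", "--asset", "--task", "--app", "--envgroup"):
        parser.add_argument(opt, default=None)
    return parser


args = build_parser().parse_args(
    ["/tmp/env.json", "--project", "demo", "--envgroup", "farm"]
)
print(args.envgroup)  # "farm"; omitted options stay None
```

Defaulting to `None` rather than an empty string lets the downstream code distinguish "flag not given" from "flag given empty".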
openpype/hooks/pre_create_extra_workdir_folders.py (new file, 33 lines)
@@ -0,0 +1,33 @@
import os
from openpype.lib import (
    PreLaunchHook,
    create_workdir_extra_folders
)


class AddLastWorkfileToLaunchArgs(PreLaunchHook):
    """Add last workfile path to launch arguments.

    This is not possible to do for all applications the same way.
    """

    # Execute after workfile template copy
    order = 15

    def execute(self):
        if not self.application.is_host:
            return

        env = self.data.get("env") or {}
        workdir = env.get("AVALON_WORKDIR")
        if not workdir or not os.path.exists(workdir):
            return

        host_name = self.application.host_name
        task_type = self.data["task_type"]
        task_name = self.data["task_name"]
        project_name = self.data["project_name"]

        create_workdir_extra_folders(
            workdir, host_name, task_type, task_name, project_name,
        )
@@ -48,7 +48,7 @@ class GlobalHostDataHook(PreLaunchHook):
            "log": self.log
        })

-        prepare_host_environments(temp_data)
+        prepare_host_environments(temp_data, self.launch_context.env_group)
        prepare_context_environments(temp_data)

        temp_data.pop("log")
@@ -3,7 +3,7 @@ import subprocess

from openpype.lib import (
    PreLaunchHook,
-    get_pype_execute_args
+    get_openpype_execute_args
)

from openpype import PACKAGE_DIR as OPENPYPE_DIR

@@ -35,7 +35,7 @@ class NonPythonHostHook(PreLaunchHook):
            "non_python_host_launch.py"
        )

-        new_launch_args = get_pype_execute_args(
+        new_launch_args = get_openpype_execute_args(
            "run", script_path, executable_path
        )
        # Add workfile path if exists

@@ -48,4 +48,3 @@
        if remainders:
            self.launch_context.launch_args.extend(remainders)
-
@@ -1,6 +1,7 @@
-import openpype.api
from Qt import QtWidgets
from avalon import aftereffects
+from avalon.api import CreatorError
+
+import openpype.api

import logging

@@ -27,14 +28,13 @@ class CreateRender(openpype.api.Creator):
            folders=False,
            footages=False)
        if len(items) > 1:
-            self._show_msg("Please select only single composition at time.")
-            return False
+            raise CreatorError("Please select only single "
+                               "composition at time.")

        if not items:
-            self._show_msg("Nothing to create. Select composition " +
-                           "if 'useSelection' or create at least " +
-                           "one composition.")
-            return False
+            raise CreatorError("Nothing to create. Select composition " +
+                               "if 'useSelection' or create at least " +
+                               "one composition.")

        existing_subsets = [instance['subset'].lower()
                            for instance in aftereffects.list_instances()]

@@ -42,8 +42,7 @@ class CreateRender(openpype.api.Creator):
        item = items.pop()
        if self.name.lower() in existing_subsets:
            txt = "Instance with name \"{}\" already exists.".format(self.name)
-            self._show_msg(txt)
-            return False
+            raise CreatorError(txt)

        self.data["members"] = [item.id]
        self.data["uuid"] = item.id  # for SubsetManager

@@ -54,9 +53,3 @@ class CreateRender(openpype.api.Creator):
        stub.imprint(item, self.data)
        stub.set_label_color(item.id, 14)  # Cyan options 0 - 16
        stub.rename_item(item.id, stub.PUBLISH_ICON + self.data["subset"])
-
-    def _show_msg(self, txt):
-        msg = QtWidgets.QMessageBox()
-        msg.setIcon(QtWidgets.QMessageBox.Warning)
-        msg.setText(txt)
-        msg.exec_()
@@ -22,21 +22,23 @@ class BackgroundLoader(api.Loader):

    def load(self, context, name=None, namespace=None, data=None):
        items = stub.get_items(comps=True)
-        existing_items = [layer.name for layer in items]
+        existing_items = [layer.name.replace(stub.LOADED_ICON, '')
+                          for layer in items]

        comp_name = get_unique_layer_name(
            existing_items,
            "{}_{}".format(context["asset"]["name"], name))

        layers = get_background_layers(self.fname)
        if not layers:
            raise ValueError("No layers found in {}".format(self.fname))

        comp = stub.import_background(None, stub.LOADED_ICON + comp_name,
                                      layers)

        if not comp:
-            self.log.warning(
-                "Import background failed.")
-            self.log.warning("Check host app for alert error.")
-            return
+            raise ValueError("Import background failed. "
+                             "Please contact support")

        self[:] = [comp]
        namespace = namespace or comp_name
@@ -1,105 +1,5 @@
from .api.utils import (
    setup
)

from .api.pipeline import (
    install,
    uninstall,
    ls,
    containerise,
    update_container,
    maintained_selection,
    remove_instance,
    list_instances,
    imprint
)

from .api.lib import (
    FlameAppFramework,
    maintain_current_timeline,
    get_project_manager,
    get_current_project,
    get_current_timeline,
    create_bin,
)

from .api.menu import (
    FlameMenuProjectConnect,
    FlameMenuTimeline
)

from .api.workio import (
    open_file,
    save_file,
    current_file,
    has_unsaved_changes,
    file_extensions,
    work_root
)

import os

HOST_DIR = os.path.dirname(
    os.path.abspath(__file__)
)
API_DIR = os.path.join(HOST_DIR, "api")
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")

app_framework = None
apps = []


__all__ = [
    "HOST_DIR",
    "API_DIR",
    "PLUGINS_DIR",
    "PUBLISH_PATH",
    "LOAD_PATH",
    "CREATE_PATH",
    "INVENTORY_PATH",
    "INVENTORY_PATH",

    "app_framework",
    "apps",

    # pipeline
    "install",
    "uninstall",
    "ls",
    "containerise",
    "update_container",
    "reload_pipeline",
    "maintained_selection",
    "remove_instance",
    "list_instances",
    "imprint",

    # utils
    "setup",

    # lib
    "FlameAppFramework",
    "maintain_current_timeline",
    "get_project_manager",
    "get_current_project",
    "get_current_timeline",
    "create_bin",

    # menu
    "FlameMenuProjectConnect",
    "FlameMenuTimeline",

    # plugin

    # workio
    "open_file",
    "save_file",
    "current_file",
    "has_unsaved_changes",
    "file_extensions",
    "work_root"
]
@@ -1,3 +1,115 @@
"""
OpenPype Autodesk Flame api
"""
from .constants import (
    COLOR_MAP,
    MARKER_NAME,
    MARKER_COLOR,
    MARKER_DURATION,
    MARKER_PUBLISH_DEFAULT
)
from .lib import (
    CTX,
    FlameAppFramework,
    get_project_manager,
    get_current_project,
    get_current_sequence,
    create_bin,
    create_segment_data_marker,
    get_segment_data_marker,
    set_segment_data_marker,
    set_publish_attribute,
    get_publish_attribute,
    get_sequence_segments,
    maintained_segment_selection,
    reset_segment_selection,
    get_segment_attributes
)
from .utils import (
    setup
)
from .pipeline import (
    install,
    uninstall,
    ls,
    containerise,
    update_container,
    remove_instance,
    list_instances,
    imprint,
    maintained_selection
)
from .menu import (
    FlameMenuProjectConnect,
    FlameMenuTimeline
)
from .plugin import (
    Creator,
    PublishableClip
)
from .workio import (
    open_file,
    save_file,
    current_file,
    has_unsaved_changes,
    file_extensions,
    work_root
)

__all__ = [
    # constants
    "COLOR_MAP",
    "MARKER_NAME",
    "MARKER_COLOR",
    "MARKER_DURATION",
    "MARKER_PUBLISH_DEFAULT",

    # lib
    "CTX",
    "FlameAppFramework",
    "get_project_manager",
    "get_current_project",
    "get_current_sequence",
    "create_bin",
    "create_segment_data_marker",
    "get_segment_data_marker",
    "set_segment_data_marker",
    "set_publish_attribute",
    "get_publish_attribute",
    "get_sequence_segments",
    "maintained_segment_selection",
    "reset_segment_selection",
    "get_segment_attributes",

    # pipeline
    "install",
    "uninstall",
    "ls",
    "containerise",
    "update_container",
    "reload_pipeline",
    "maintained_selection",
    "remove_instance",
    "list_instances",
    "imprint",
    "maintained_selection",

    # utils
    "setup",

    # menu
    "FlameMenuProjectConnect",
    "FlameMenuTimeline",

    # plugin
    "Creator",
    "PublishableClip",

    # workio
    "open_file",
    "save_file",
    "current_file",
    "has_unsaved_changes",
    "file_extensions",
    "work_root"
]
openpype/hosts/flame/api/constants.py (new file, 24 lines)
@@ -0,0 +1,24 @@
"""
OpenPype Flame api constances
"""
# OpenPype marker workflow variables
MARKER_NAME = "OpenPypeData"
MARKER_DURATION = 0
MARKER_COLOR = "cyan"
MARKER_PUBLISH_DEFAULT = False

# OpenPype color definitions
COLOR_MAP = {
    "red": (1.0, 0.0, 0.0),
    "orange": (1.0, 0.5, 0.0),
    "yellow": (1.0, 1.0, 0.0),
    "pink": (1.0, 0.5, 1.0),
    "white": (1.0, 1.0, 1.0),
    "green": (0.0, 1.0, 0.0),
    "cyan": (0.0, 1.0, 1.0),
    "blue": (0.0, 0.0, 1.0),
    "purple": (0.5, 0.0, 0.5),
    "magenta": (0.5, 0.0, 1.0),
    "black": (0.0, 0.0, 0.0)
}
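`COLOR_MAP` stores colours as normalized float triplets, which is the form the marker `colour` attribute is set to in `create_segment_data_marker`. Converting to the more familiar 8-bit RGB is a one-liner; a small sketch (the `COLOR_MAP` subset is copied from the diff, while `to_rgb8` is a hypothetical helper, not part of the codebase):

```python
# Subset of the COLOR_MAP added in constants.py: normalized 0.0-1.0 floats.
COLOR_MAP = {
    "red": (1.0, 0.0, 0.0),
    "orange": (1.0, 0.5, 0.0),
    "cyan": (0.0, 1.0, 1.0),
    "black": (0.0, 0.0, 0.0),
}


def to_rgb8(name):
    """Convert a named colour to an 8-bit (r, g, b) tuple."""
    return tuple(round(channel * 255) for channel in COLOR_MAP[name])


print(to_rgb8("cyan"))  # (0, 255, 255)
```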
@@ -1,12 +1,27 @@
import sys
import os
import re
import json
import pickle
import contextlib
from pprint import pformat

from .constants import (
    MARKER_COLOR,
    MARKER_DURATION,
    MARKER_NAME,
    COLOR_MAP,
    MARKER_PUBLISH_DEFAULT
)
from openpype.api import Logger

-log = Logger().get_logger(__name__)
+log = Logger.get_logger(__name__)


class CTX:
    # singleton used for passing data between api modules
    app_framework = None
    flame_apps = []
    selection = None


@contextlib.contextmanager

@@ -115,10 +130,13 @@ class FlameAppFramework(object):
        )

        self.log.info("[{}] waking up".format(self.__class__.__name__))
-        self.load_prefs()
+
+        try:
+            self.load_prefs()
+        except RuntimeError:
+            self.save_prefs()

        # menu auto-refresh defaults
        if not self.prefs_global.get("menu_auto_refresh"):
            self.prefs_global["menu_auto_refresh"] = {
                "media_panel": True,

@@ -207,40 +225,6 @@ class FlameAppFramework(object):
        return True


-@contextlib.contextmanager
-def maintain_current_timeline(to_timeline, from_timeline=None):
-    """Maintain current timeline selection during context
-
-    Attributes:
-        from_timeline (resolve.Timeline)[optional]:
-    Example:
-        >>> print(from_timeline.GetName())
-        timeline1
-        >>> print(to_timeline.GetName())
-        timeline2
-
-        >>> with maintain_current_timeline(to_timeline):
-        ...     print(get_current_timeline().GetName())
-        timeline2
-
-        >>> print(get_current_timeline().GetName())
-        timeline1
-    """
-    # todo: this is still Resolve's implementation
-    project = get_current_project()
-    working_timeline = from_timeline or project.GetCurrentTimeline()
-
-    # swith to the input timeline
-    project.SetCurrentTimeline(to_timeline)
-
-    try:
-        # do a work
-        yield
-    finally:
-        # put the original working timeline to context
-        project.SetCurrentTimeline(working_timeline)
-
-
def get_project_manager():
    # TODO: get_project_manager
    return

@@ -252,13 +236,32 @@ def get_media_storage():


def get_current_project():
-    # TODO: get_current_project
-    return
+    import flame
+    return flame.project.current_project


-def get_current_timeline(new=False):
-    # TODO: get_current_timeline
-    return
+def get_current_sequence(selection):
+    import flame
+
+    def segment_to_sequence(_segment):
+        track = _segment.parent
+        version = track.parent
+        return version.parent
+
+    process_timeline = None
+
+    if len(selection) == 1:
+        if isinstance(selection[0], flame.PySequence):
+            process_timeline = selection[0]
+        if isinstance(selection[0], flame.PySegment):
+            process_timeline = segment_to_sequence(selection[0])
+    else:
+        for segment in selection:
+            if isinstance(segment, flame.PySegment):
+                process_timeline = segment_to_sequence(segment)
+                break
+
+    return process_timeline


def create_bin(name, root=None):

@@ -272,3 +275,287 @@ def rescan_hooks():
        flame.execute_shortcut('Rescan Python Hooks')
    except Exception:
        pass


def get_metadata(project_name, _log=None):
    from adsk.libwiretapPythonClientAPI import (
        WireTapClient,
        WireTapServerHandle,
        WireTapNodeHandle,
        WireTapStr
    )

    class GetProjectColorPolicy(object):
        def __init__(self, host_name=None, _log=None):
            # Create a connection to the Backburner manager using the Wiretap
            # python API.
            #
            self.log = _log or log
            self.host_name = host_name or "localhost"
            self._wiretap_client = WireTapClient()
            if not self._wiretap_client.init():
                raise Exception("Could not initialize Wiretap Client")
            self._server = WireTapServerHandle(
                "{}:IFFFS".format(self.host_name))

        def process(self, project_name):
            policy_node_handle = WireTapNodeHandle(
                self._server,
                "/projects/{}/syncolor/policy".format(project_name)
            )
            self.log.info(policy_node_handle)

            policy = WireTapStr()
            if not policy_node_handle.getNodeTypeStr(policy):
                self.log.warning(
                    "Could not retrieve policy of '%s': %s" % (
                        policy_node_handle.getNodeId().id(),
                        policy_node_handle.lastError()
                    )
                )

            return policy.c_str()

    policy_wiretap = GetProjectColorPolicy(_log=_log)
    return policy_wiretap.process(project_name)


def get_segment_data_marker(segment, with_marker=None):
    """
    Get openpype track item tag created by creator or loader plugin.

    Attributes:
        segment (flame.PySegment): flame api object
        with_marker (bool)[optional]: if true it will return also marker object

    Returns:
        dict: openpype tag data

    Returns(with_marker=True):
        flame.PyMarker, dict
    """
    for marker in segment.markers:
        comment = marker.comment.get_value()
        color = marker.colour.get_value()
        name = marker.name.get_value()

        if (name == MARKER_NAME) and (
                color == COLOR_MAP[MARKER_COLOR]):
            if not with_marker:
                return json.loads(comment)
            else:
                return marker, json.loads(comment)


def set_segment_data_marker(segment, data=None):
    """
    Set openpype track item tag to input segment.

    Attributes:
        segment (flame.PySegment): flame api object

    Returns:
        dict: json loaded data
    """
    data = data or dict()

    marker_data = get_segment_data_marker(segment, True)

    if marker_data:
        # get available openpype tag if any
        marker, tag_data = marker_data
        # update tag data with new data
        tag_data.update(data)
        # update marker with tag data
        marker.comment = json.dumps(tag_data)
    else:
        # update tag data with new data
        marker = create_segment_data_marker(segment)
        # add tag data to marker's comment
        marker.comment = json.dumps(data)


def set_publish_attribute(segment, value):
    """ Set Publish attribute in input Tag object

    Attribute:
        segment (flame.PySegment)): flame api object
        value (bool): True or False
    """
    tag_data = get_segment_data_marker(segment)
    tag_data["publish"] = value

    # set data to the publish attribute
    set_segment_data_marker(segment, tag_data)


def get_publish_attribute(segment):
    """ Get Publish attribute from input Tag object

    Attribute:
        segment (flame.PySegment)): flame api object

    Returns:
        bool: True or False
    """
    tag_data = get_segment_data_marker(segment)

    if not tag_data:
        set_publish_attribute(segment, MARKER_PUBLISH_DEFAULT)
        return MARKER_PUBLISH_DEFAULT

    return tag_data["publish"]


def create_segment_data_marker(segment):
    """ Create openpype marker on a segment.

    Attributes:
        segment (flame.PySegment): flame api object

    Returns:
        flame.PyMarker: flame api object
    """
    # get duration of segment
    duration = segment.record_duration.relative_frame
    # calculate start frame of the new marker
    start_frame = int(segment.record_in.relative_frame) + int(duration / 2)
    # create marker
    marker = segment.create_marker(start_frame)
    # set marker name
    marker.name = MARKER_NAME
    # set duration
    marker.duration = MARKER_DURATION
    # set colour
    marker.colour = COLOR_MAP[MARKER_COLOR]  # Red

    return marker


def get_sequence_segments(sequence, selected=False):
    segments = []
    # loop versions in sequence
    for ver in sequence.versions:
        # loop track in versions
        for track in ver.tracks:
            # ignore all empty tracks and hidden too
            if len(track.segments) == 0 and track.hidden:
                continue
            # loop all segment in remaining tracks
            for segment in track.segments:
                if segment.name.get_value() == "":
                    continue
                if (
                    selected is True
                    and segment.selected.get_value() is not True
                ):
                    continue
                # add it to original selection
                segments.append(segment)
    return segments


@contextlib.contextmanager
def maintained_segment_selection(sequence):
    """Maintain selection during context

    Attributes:
        sequence (flame.PySequence): python api object

    Yield:
        list of flame.PySegment

    Example:
        >>> with maintained_segment_selection(sequence) as selected_segments:
        ...     for segment in selected_segments:
        ...         segment.selected = False
        >>> print(segment.selected)
        True
    """
    selected_segments = get_sequence_segments(sequence, True)
    try:
        # do the operation on selected segments
        yield selected_segments
    finally:
        # reset all selected clips
        reset_segment_selection(sequence)
        # select only original selection of segments
        for segment in selected_segments:
            segment.selected = True


def reset_segment_selection(sequence):
    """Deselect all selected nodes
    """
    for ver in sequence.versions:
        for track in ver.tracks:
            if len(track.segments) == 0 and track.hidden:
                continue
            for segment in track.segments:
                segment.selected = False


def _get_shot_tokens_values(clip, tokens):
    old_value = None
    output = {}

    if not clip.shot_name:
        return output

    old_value = clip.shot_name.get_value()

    for token in tokens:
        clip.shot_name.set_value(token)
        _key = str(re.sub("[<>]", "", token)).replace(" ", "_")

        try:
            output[_key] = int(clip.shot_name.get_value())
        except ValueError:
            output[_key] = clip.shot_name.get_value()

    clip.shot_name.set_value(old_value)

    return output


def get_segment_attributes(segment):
    if str(segment.name)[1:-1] == "":
        return None

    # Add timeline segment to tree
    clip_data = {
        "segment_name": segment.name.get_value(),
        "segment_comment": segment.comment.get_value(),
        "tape_name": segment.tape_name,
        "source_name": segment.source_name,
        "fpath": segment.file_path,
        "PySegment": segment
    }

    # add all available shot tokens
    shot_tokens = _get_shot_tokens_values(segment, [
        "<colour space>", "<width>", "<height>", "<depth>", "<segment>",
        "<track>", "<track name>"
    ])
    clip_data.update(shot_tokens)

    # populate shot source metadata
    segment_attrs = [
        "record_duration", "record_in", "record_out",
        "source_duration", "source_in", "source_out"
    ]
    segment_attrs_data = {}
    for attr_name in segment_attrs:
        if not hasattr(segment, attr_name):
            continue
        attr = getattr(segment, attr_name)
        segment_attrs_data[attr] = str(attr).replace("+", ":")

        if attr in ["record_in", "record_out"]:
            clip_data[attr_name] = attr.relative_frame
        else:
            clip_data[attr_name] = attr.frame

    clip_data["segment_timecodes"] = segment_attrs_data

    return clip_data
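`set_segment_data_marker` and `get_segment_data_marker` persist OpenPype data by JSON-encoding it into a marker's comment field and later finding that marker again by its reserved name. The round-trip can be sketched without Flame, using a tiny stand-in marker object (everything here is hypothetical except the JSON-in-comment idea and the reserved marker name):

```python
import json

MARKER_NAME = "OpenPypeData"  # reserved marker name, as in constants.py


class FakeMarker:
    # Minimal stand-in for flame.PyMarker: just a name and a comment.
    def __init__(self, name, comment=""):
        self.name = name
        self.comment = comment


def read_data(markers):
    """Return decoded data from the OpenPype marker, if present."""
    for marker in markers:
        if marker.name == MARKER_NAME:
            return json.loads(marker.comment)
    return None


def write_data(markers, data):
    """Update the OpenPype marker's payload, creating it if missing."""
    for marker in markers:
        if marker.name == MARKER_NAME:
            merged = json.loads(marker.comment)
            merged.update(data)
            marker.comment = json.dumps(merged)
            return
    markers.append(FakeMarker(MARKER_NAME, json.dumps(data)))


markers = []
write_data(markers, {"publish": False})
write_data(markers, {"subset": "shotMain"})
print(read_data(markers))  # both keys merged into one marker payload
```

The update path mirrors the real code: existing payload is decoded, merged with the new keys, and re-serialized, so partial writes (like `set_publish_attribute` flipping one flag) never clobber the rest of the data.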
@@ -1,10 +1,9 @@
import os
from Qt import QtWidgets
from copy import deepcopy

from pprint import pformat
from openpype.tools.utils.host_tools import HostToolsHelper


menu_group_name = 'OpenPype'

default_flame_export_presets = {

@@ -26,6 +25,17 @@
}


def callback_selection(selection, function):
    import openpype.hosts.flame.api as opfapi
    opfapi.CTX.selection = selection
    print("Hook Selection: \n\t{}".format(
        pformat({
            index: (type(item), item.name)
            for index, item in enumerate(opfapi.CTX.selection)})
    ))
    function()


class _FlameMenuApp(object):
    def __init__(self, framework):
        self.name = self.__class__.__name__

@@ -97,23 +107,12 @@ class FlameMenuProjectConnect(_FlameMenuApp):
        if not self.flame:
            return []

-        flame_project_name = self.flame_project_name
-        self.log.info("______ {} ______".format(flame_project_name))
-
        menu = deepcopy(self.menu)

        menu['actions'].append({
            "name": "Workfiles ...",
            "execute": lambda x: self.tools_helper.show_workfiles()
        })
-        menu['actions'].append({
-            "name": "Create ...",
-            "execute": lambda x: self.tools_helper.show_creator()
-        })
-        menu['actions'].append({
-            "name": "Publish ...",
-            "execute": lambda x: self.tools_helper.show_publish()
-        })
        menu['actions'].append({
            "name": "Load ...",
            "execute": lambda x: self.tools_helper.show_loader()

@@ -128,9 +127,6 @@ class FlameMenuProjectConnect(_FlameMenuApp):
        })
        return menu

-    def get_projects(self, *args, **kwargs):
-        pass
-
    def refresh(self, *args, **kwargs):
        self.rescan()

@@ -165,18 +161,17 @@ class FlameMenuTimeline(_FlameMenuApp):
        if not self.flame:
            return []

-        flame_project_name = self.flame_project_name
-        self.log.info("______ {} ______".format(flame_project_name))
-
        menu = deepcopy(self.menu)

        menu['actions'].append({
            "name": "Create ...",
-            "execute": lambda x: self.tools_helper.show_creator()
+            "execute": lambda x: callback_selection(
+                x, self.tools_helper.show_creator)
        })
        menu['actions'].append({
            "name": "Publish ...",
-            "execute": lambda x: self.tools_helper.show_publish()
+            "execute": lambda x: callback_selection(
+                x, self.tools_helper.show_publish)
        })
        menu['actions'].append({
            "name": "Load ...",

@@ -189,9 +184,6 @@ class FlameMenuTimeline(_FlameMenuApp):
        return menu

-    def get_projects(self, *args, **kwargs):
-        pass
-
    def refresh(self, *args, **kwargs):
        self.rescan()
@@ -1,25 +1,33 @@
"""
Basic avalon integration
"""
import os
import contextlib
from avalon import api as avalon
from pyblish import api as pyblish
from openpype.api import Logger
from .lib import (
    set_segment_data_marker,
    set_publish_attribute,
    maintained_segment_selection,
    get_current_sequence,
    reset_segment_selection
)
from .. import HOST_DIR

API_DIR = os.path.join(HOST_DIR, "api")
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")

AVALON_CONTAINERS = "AVALON_CONTAINERS"

-log = Logger().get_logger(__name__)
+log = Logger.get_logger(__name__)


def install():
-    from .. import (
-        PUBLISH_PATH,
-        LOAD_PATH,
-        CREATE_PATH,
-        INVENTORY_PATH
-    )
    # TODO: install

    # Disable all families except for the ones we explicitly want to see
    family_states = [
        "imagesequence",
@@ -32,33 +40,24 @@ def install():
    avalon.data["familiesStateDefault"] = False
    avalon.data["familiesStateToggled"] = family_states

    log.info("openpype.hosts.flame installed")

    pyblish.register_host("flame")
    pyblish.register_plugin_path(PUBLISH_PATH)
    log.info("Registering Flame plug-ins..")

    avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
    avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
    avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
    log.info("OpenPype Flame plug-ins registred ...")

    # register callback for switching publishable
    pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)

    log.info("OpenPype Flame host installed ...")


def uninstall():
-    from .. import (
-        PUBLISH_PATH,
-        LOAD_PATH,
-        CREATE_PATH,
-        INVENTORY_PATH
-    )
-
    # TODO: uninstall
    pyblish.deregister_host("flame")
-    pyblish.deregister_plugin_path(PUBLISH_PATH)
-    log.info("Deregistering DaVinci Resovle plug-ins..")
-
+    log.info("Deregistering Flame plug-ins..")
+    pyblish.deregister_plugin_path(PUBLISH_PATH)
    avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
    avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
    avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
@@ -66,6 +65,8 @@ def uninstall():
    # register callback for switching publishable
    pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled)

+    log.info("OpenPype Flame host uninstalled ...")
+

def containerise(tl_segment,
                 name,
@@ -97,32 +98,6 @@ def update_container(tl_segment, data=None):
    # TODO: update_container
    pass


-@contextlib.contextmanager
-def maintained_selection():
-    """Maintain selection during context
-
-    Example:
-        >>> with maintained_selection():
-        ...     node['selected'].setValue(True)
-        >>> print(node['selected'].value())
-        False
-    """
-    # TODO: maintained_selection + remove undo steps
-
-    try:
-        # do the operation
-        yield
-    finally:
-        pass
-
-
-def reset_selection():
-    """Deselect all selected nodes
-    """
-    pass
-
-
def on_pyblish_instance_toggled(instance, old_value, new_value):
    """Toggle node passthrough states on instance toggles."""

@@ -150,6 +125,46 @@ def list_instances():
    pass


-def imprint(item, data=None):
-    # TODO: imprint
-    pass
+def imprint(segment, data=None):
+    """
+    Adding openpype data to Flame timeline segment.
+
+    Also including publish attribute into tag.
+
+    Arguments:
+        segment (flame.PySegment): flame api object
+        data (dict): Any data which needs to be imprinted
+
+    Examples:
+        data = {
+            'asset': 'sq020sh0280',
+            'family': 'render',
+            'subset': 'subsetMain'
+        }
+    """
+    data = data or {}
+
+    set_segment_data_marker(segment, data)
+
+    # add publish attribute
+    set_publish_attribute(segment, True)
+
+
+@contextlib.contextmanager
+def maintained_selection():
+    import flame
+    from .lib import CTX
+
+    # check if segment is selected
+    if isinstance(CTX.selection[0], flame.PySegment):
+        sequence = get_current_sequence(CTX.selection)
+
+        try:
+            with maintained_segment_selection(sequence) as selected:
+                yield
+        finally:
+            # reset all selected clips
+            reset_segment_selection(sequence)
+            # select only original selection of segments
+            for segment in selected:
+                segment.selected = True
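The new `maintained_selection` context manager above snapshots the current segment selection, lets the wrapped operation run, and restores the original selection in a `finally` block. The general pattern can be sketched with plain dicts standing in for Flame's segment objects (all names here are illustrative, not part of the Flame API):

```python
import contextlib


@contextlib.contextmanager
def maintained_selection(items):
    # remember which items were selected before the operation
    original = [item for item in items if item["selected"]]
    try:
        yield original
    finally:
        # deselect everything, then restore the original selection
        for item in items:
            item["selected"] = False
        for item in original:
            item["selected"] = True


segments = [
    {"name": "seg1", "selected": True},
    {"name": "seg2", "selected": False},
]
with maintained_selection(segments):
    segments[1]["selected"] = True  # temporary change inside the context
print([s["name"] for s in segments if s["selected"]])  # ['seg1']
```

Whatever happens inside the `with` block, the `finally` clause guarantees the original selection is reinstated.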
@@ -1,3 +1,646 @@
# Creator plugin functions
import re
from Qt import QtWidgets, QtCore
import openpype.api as openpype
from openpype import style
from . import (
    lib as flib,
    pipeline as fpipeline,
    constants
)

from copy import deepcopy

log = openpype.Logger.get_logger(__name__)


class CreatorWidget(QtWidgets.QDialog):

    # output items
    items = dict()
    _results_back = None

    def __init__(self, name, info, ui_inputs, parent=None):
        super(CreatorWidget, self).__init__(parent)

        self.setObjectName(name)

        self.setWindowFlags(
            QtCore.Qt.Window
            | QtCore.Qt.CustomizeWindowHint
            | QtCore.Qt.WindowTitleHint
            | QtCore.Qt.WindowCloseButtonHint
            | QtCore.Qt.WindowStaysOnTopHint
        )
        self.setWindowTitle(name or "Pype Creator Input")
        self.resize(500, 700)

        # Where inputs and labels are set
        self.content_widget = [QtWidgets.QWidget(self)]
        top_layout = QtWidgets.QFormLayout(self.content_widget[0])
        top_layout.setObjectName("ContentLayout")
        top_layout.addWidget(Spacer(5, self))

        # first add widget tag line
        top_layout.addWidget(QtWidgets.QLabel(info))

        # main dynamic layout
        self.scroll_area = QtWidgets.QScrollArea(self, widgetResizable=True)
        self.scroll_area.setVerticalScrollBarPolicy(
            QtCore.Qt.ScrollBarAsNeeded)
        self.scroll_area.setVerticalScrollBarPolicy(
            QtCore.Qt.ScrollBarAlwaysOn)
        self.scroll_area.setHorizontalScrollBarPolicy(
            QtCore.Qt.ScrollBarAlwaysOff)
        self.scroll_area.setWidgetResizable(True)

        self.content_widget.append(self.scroll_area)

        scroll_widget = QtWidgets.QWidget(self)
        in_scroll_area = QtWidgets.QVBoxLayout(scroll_widget)
        self.content_layout = [in_scroll_area]

        # add preset data into input widget layout
        self.items = self.populate_widgets(ui_inputs)
        self.scroll_area.setWidget(scroll_widget)

        # Confirmation buttons
        btns_widget = QtWidgets.QWidget(self)
        btns_layout = QtWidgets.QHBoxLayout(btns_widget)

        cancel_btn = QtWidgets.QPushButton("Cancel")
        btns_layout.addWidget(cancel_btn)

        ok_btn = QtWidgets.QPushButton("Ok")
        btns_layout.addWidget(ok_btn)

        # Main layout of the dialog
        main_layout = QtWidgets.QVBoxLayout(self)
        main_layout.setContentsMargins(10, 10, 10, 10)
        main_layout.setSpacing(0)

        # adding content widget
        for w in self.content_widget:
            main_layout.addWidget(w)

        main_layout.addWidget(btns_widget)

        ok_btn.clicked.connect(self._on_ok_clicked)
        cancel_btn.clicked.connect(self._on_cancel_clicked)

        self.setStyleSheet(style.load_stylesheet())

    @classmethod
    def set_results_back(cls, value):
        cls._results_back = value

    @classmethod
    def get_results_back(cls):
        return cls._results_back

    def _on_ok_clicked(self):
        log.debug("ok is clicked: {}".format(self.items))
        results_back = self._values(self.items)
        self.set_results_back(results_back)
        self.close()

    def _on_cancel_clicked(self):
        self.set_results_back(None)
        self.close()

    def showEvent(self, event):
        self.set_results_back(None)
        super(CreatorWidget, self).showEvent(event)

    def _values(self, data, new_data=None):
        new_data = new_data or dict()
        for k, v in data.items():
            new_data[k] = {
                "target": None,
                "value": None
            }
            if v["type"] == "dict":
                new_data[k]["target"] = v["target"]
                new_data[k]["value"] = self._values(v["value"])
            if v["type"] == "section":
                new_data.pop(k)
                new_data = self._values(v["value"], new_data)
            elif getattr(v["value"], "currentText", None):
                new_data[k]["target"] = v["target"]
                new_data[k]["value"] = v["value"].currentText()
            elif getattr(v["value"], "isChecked", None):
                new_data[k]["target"] = v["target"]
                new_data[k]["value"] = v["value"].isChecked()
            elif getattr(v["value"], "value", None):
                new_data[k]["target"] = v["target"]
                new_data[k]["value"] = v["value"].value()
            elif getattr(v["value"], "text", None):
                new_data[k]["target"] = v["target"]
                new_data[k]["value"] = v["value"].text()

        return new_data

    def camel_case_split(self, text):
        matches = re.finditer(
            '.+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)', text)
        return " ".join([str(m.group(0)).capitalize() for m in matches])
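The `camel_case_split` helper above uses lookbehind/lookahead assertions to break camelCase labels into capitalized words while keeping runs of capitals together. The same regex works standalone:

```python
import re


def camel_case_split(text):
    # split before each uppercase letter that starts a new word,
    # keeping runs of capitals (acronyms) together
    matches = re.finditer(
        '.+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)', text)
    return " ".join([str(m.group(0)).capitalize() for m in matches])


print(camel_case_split("clipName"))    # Clip Name
print(camel_case_split("vSyncTrack"))  # V Sync Track
```

This is what turns UI input keys such as `vSyncTrack` into the human-readable row labels of the creator dialog.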
    def create_row(self, layout, type_name, text, **kwargs):
        # get type attribute from qwidgets
        attr = getattr(QtWidgets, type_name)

        # convert label text to normal capitalized text with spaces
        label_text = self.camel_case_split(text)

        # assign the new text to label widget
        label = QtWidgets.QLabel(label_text)
        label.setObjectName("LineLabel")

        # create attribute name text strip of spaces
        attr_name = text.replace(" ", "")

        # create attribute and assign default values
        setattr(
            self,
            attr_name,
            attr(parent=self))

        # assign the created attribute to variable
        item = getattr(self, attr_name)
        for func, val in kwargs.items():
            if getattr(item, func):
                func_attr = getattr(item, func)
                func_attr(val)

        # add to layout
        layout.addRow(label, item)

        return item

    def populate_widgets(self, data, content_layout=None):
        """
        Populate widget from input dict.

        Each plugin has its own set of widget rows defined in dictionary
        each row values should have following keys: `type`, `target`,
        `label`, `order`, `value` and optionally also `toolTip`.

        Args:
            data (dict): widget rows or organized groups defined
                by types `dict` or `section`
            content_layout (QtWidgets.QFormLayout)[optional]: used when
                nesting

        Returns:
            dict: redefined data dict updated with created widgets

        """

        content_layout = content_layout or self.content_layout[-1]
        # fix order of process by defined order value
        ordered_keys = list(data.keys())
        for k, v in data.items():
            try:
                # try removing a key from index which should
                # be filled with new
                ordered_keys.pop(v["order"])
            except IndexError:
                pass
            # add key into correct order
            ordered_keys.insert(v["order"], k)

        # process ordered
        for k in ordered_keys:
            v = data[k]
            tool_tip = v.get("toolTip", "")
            if v["type"] == "dict":
                self.content_layout.append(QtWidgets.QWidget(self))
                content_layout.addWidget(self.content_layout[-1])
                self.content_layout[-1].setObjectName("sectionHeadline")

                headline = QtWidgets.QVBoxLayout(self.content_layout[-1])
                headline.addWidget(Spacer(20, self))
                headline.addWidget(QtWidgets.QLabel(v["label"]))

                # adding nested layout with label
                self.content_layout.append(QtWidgets.QWidget(self))
                self.content_layout[-1].setObjectName("sectionContent")

                nested_content_layout = QtWidgets.QFormLayout(
                    self.content_layout[-1])
                nested_content_layout.setObjectName("NestedContentLayout")
                content_layout.addWidget(self.content_layout[-1])

                # add nested key as label
                data[k]["value"] = self.populate_widgets(
                    v["value"], nested_content_layout)

            if v["type"] == "section":
                self.content_layout.append(QtWidgets.QWidget(self))
                content_layout.addWidget(self.content_layout[-1])
                self.content_layout[-1].setObjectName("sectionHeadline")

                headline = QtWidgets.QVBoxLayout(self.content_layout[-1])
                headline.addWidget(Spacer(20, self))
                headline.addWidget(QtWidgets.QLabel(v["label"]))

                # adding nested layout with label
                self.content_layout.append(QtWidgets.QWidget(self))
                self.content_layout[-1].setObjectName("sectionContent")

                nested_content_layout = QtWidgets.QFormLayout(
                    self.content_layout[-1])
                nested_content_layout.setObjectName("NestedContentLayout")
                content_layout.addWidget(self.content_layout[-1])

                # add nested key as label
                data[k]["value"] = self.populate_widgets(
                    v["value"], nested_content_layout)

            elif v["type"] == "QLineEdit":
                data[k]["value"] = self.create_row(
                    content_layout, "QLineEdit", v["label"],
                    setText=v["value"], setToolTip=tool_tip)
            elif v["type"] == "QComboBox":
                data[k]["value"] = self.create_row(
                    content_layout, "QComboBox", v["label"],
                    addItems=v["value"], setToolTip=tool_tip)
            elif v["type"] == "QCheckBox":
                data[k]["value"] = self.create_row(
                    content_layout, "QCheckBox", v["label"],
                    setChecked=v["value"], setToolTip=tool_tip)
            elif v["type"] == "QSpinBox":
                data[k]["value"] = self.create_row(
                    content_layout, "QSpinBox", v["label"],
                    setValue=v["value"], setMinimum=0,
                    setMaximum=100000, setToolTip=tool_tip)
        return data


class Spacer(QtWidgets.QWidget):
    def __init__(self, height, *args, **kwargs):
        super(self.__class__, self).__init__(*args, **kwargs)

        self.setFixedHeight(height)

        real_spacer = QtWidgets.QWidget(self)
        real_spacer.setObjectName("Spacer")
        real_spacer.setFixedHeight(height)

        layout = QtWidgets.QVBoxLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        layout.addWidget(real_spacer)

        self.setLayout(layout)


class Creator(openpype.Creator):
    """Creator class wrapper
    """
    clip_color = constants.COLOR_MAP["purple"]
    rename_index = None

    def __init__(self, *args, **kwargs):
        super(Creator, self).__init__(*args, **kwargs)
        self.presets = openpype.get_current_project_settings()[
            "flame"]["create"].get(self.__class__.__name__, {})

        # adding basic current context flame objects
        self.project = flib.get_current_project()
        self.sequence = flib.get_current_sequence(flib.CTX.selection)

        if (self.options or {}).get("useSelection"):
            self.selected = flib.get_sequence_segments(self.sequence, True)
        else:
            self.selected = flib.get_sequence_segments(self.sequence)

    def create_widget(self, *args, **kwargs):
        widget = CreatorWidget(*args, **kwargs)
        widget.exec_()
        return widget.get_results_back()


class PublishableClip:
    """
    Convert a segment to publishable instance

    Args:
        segment (flame.PySegment): flame api object
        kwargs (optional): additional data needed for rename=True (presets)

    Returns:
        flame.PySegment: flame api object
    """
    vertical_clip_match = {}
    marker_data = {}
    types = {
        "shot": "shot",
        "folder": "folder",
        "episode": "episode",
        "sequence": "sequence",
        "track": "sequence",
    }

    # parents search pattern
    parents_search_patern = r"\{([a-z]*?)\}"

    # default templates for non-ui use
    rename_default = False
    hierarchy_default = "{_folder_}/{_sequence_}/{_track_}"
    clip_name_default = "shot_{_trackIndex_:0>3}_{_clipIndex_:0>4}"
    subset_name_default = "[ track name ]"
    review_track_default = "[ none ]"
    subset_family_default = "plate"
    count_from_default = 10
    count_steps_default = 10
    vertical_sync_default = False
    driving_layer_default = ""
    index_from_segment_default = False

    def __init__(self, segment, **kwargs):
        self.rename_index = kwargs["rename_index"]
        self.family = kwargs["family"]
        self.log = kwargs["log"]

        # get main parent objects
        self.current_segment = segment
        sequence_name = flib.get_current_sequence([segment]).name.get_value()
        self.sequence_name = str(sequence_name).replace(" ", "_")

        self.clip_data = flib.get_segment_attributes(segment)
        # segment (clip) main attributes
        self.cs_name = self.clip_data["segment_name"]
        self.cs_index = int(self.clip_data["segment"])

        # get track name and index
        self.track_index = int(self.clip_data["track"])
        track_name = self.clip_data["track_name"]
        self.track_name = str(track_name).replace(" ", "_").replace(
            "*", "noname{}".format(self.track_index))

        # adding tag.family into tag
        if kwargs.get("avalon"):
            self.marker_data.update(kwargs["avalon"])

        # add publish attribute to marker data
        self.marker_data.update({"publish": True})

        # adding ui inputs if any
        self.ui_inputs = kwargs.get("ui_inputs", {})

        self.log.info("Inside of plugin: {}".format(
            self.marker_data
        ))
        # populate default data before we get other attributes
        self._populate_segment_default_data()

        # use all populated default data to create all important attributes
        self._populate_attributes()

        # create parents with correct types
        self._create_parents()

    def convert(self):

        # solve segment data and add them to marker data
        self._convert_to_marker_data()

        # if track name is in review track name and also if driving track name
        # is not in review track name: skip tag creation
        if (self.track_name in self.review_layer) and (
                self.driving_layer not in self.review_layer):
            return

        # deal with clip name
        new_name = self.marker_data.pop("newClipName")

        if self.rename:
            # rename segment
            self.current_segment.name = str(new_name)
            self.marker_data["asset"] = str(new_name)
        else:
            self.marker_data["asset"] = self.cs_name
            self.marker_data["hierarchyData"]["shot"] = self.cs_name

        if self.marker_data["heroTrack"] and self.review_layer:
            self.marker_data.update({"reviewTrack": self.review_layer})
        else:
            self.marker_data.update({"reviewTrack": None})

        # create pype tag on track_item and add data
        fpipeline.imprint(self.current_segment, self.marker_data)

        return self.current_segment

    def _populate_segment_default_data(self):
        """ Populate default formatting data from segment. """

        self.current_segment_default_data = {
            "_folder_": "shots",
            "_sequence_": self.sequence_name,
            "_track_": self.track_name,
            "_clip_": self.cs_name,
            "_trackIndex_": self.track_index,
            "_clipIndex_": self.cs_index
        }

    def _populate_attributes(self):
        """ Populate main object attributes. """
        # segment frame range and parent track name for vertical sync check
        self.clip_in = int(self.clip_data["record_in"])
        self.clip_out = int(self.clip_data["record_out"])

        # define ui inputs if non gui mode was used
        self.shot_num = self.cs_index
        self.log.debug(
            "____ self.shot_num: {}".format(self.shot_num))

        # ui_inputs data or default values if gui was not used
        self.rename = self.ui_inputs.get(
            "clipRename", {}).get("value") or self.rename_default
        self.clip_name = self.ui_inputs.get(
            "clipName", {}).get("value") or self.clip_name_default
        self.hierarchy = self.ui_inputs.get(
            "hierarchy", {}).get("value") or self.hierarchy_default
        self.hierarchy_data = self.ui_inputs.get(
            "hierarchyData", {}).get("value") or \
            self.current_segment_default_data.copy()
        self.index_from_segment = self.ui_inputs.get(
            "segmentIndex", {}).get("value") or self.index_from_segment_default
        self.count_from = self.ui_inputs.get(
            "countFrom", {}).get("value") or self.count_from_default
        self.count_steps = self.ui_inputs.get(
            "countSteps", {}).get("value") or self.count_steps_default
        self.subset_name = self.ui_inputs.get(
            "subsetName", {}).get("value") or self.subset_name_default
        self.subset_family = self.ui_inputs.get(
            "subsetFamily", {}).get("value") or self.subset_family_default
        self.vertical_sync = self.ui_inputs.get(
            "vSyncOn", {}).get("value") or self.vertical_sync_default
        self.driving_layer = self.ui_inputs.get(
            "vSyncTrack", {}).get("value") or self.driving_layer_default
        self.review_track = self.ui_inputs.get(
            "reviewTrack", {}).get("value") or self.review_track_default
        self.audio = self.ui_inputs.get(
            "audio", {}).get("value") or False

        # build subset name from layer name
        if self.subset_name == "[ track name ]":
            self.subset_name = self.track_name

        # create subset for publishing
        self.subset = self.subset_family + self.subset_name.capitalize()

    def _replace_hash_to_expression(self, name, text):
        """ Replace hash with number in correct padding. """
        _spl = text.split("#")
        _len = (len(_spl) - 1)
        _repl = "{{{0}:0>{1}}}".format(name, _len)
        return text.replace(("#" * _len), _repl)
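`_replace_hash_to_expression` above turns a run of `#` characters in a hierarchy template into a zero-padded Python format field. A standalone sketch of the same transformation:

```python
def replace_hash_to_expression(name, text):
    # count the '#' characters and build a zero-padded format field
    count = len(text.split("#")) - 1
    repl = "{{{0}:0>{1}}}".format(name, count)
    return text.replace("#" * count, repl)


template = replace_hash_to_expression("shot", "sh####")
print(template)                  # sh{shot:0>4}
print(template.format(shot=28))  # sh0028
```

This is how a user template such as `sh####` becomes a regular `str.format` expression that the shot number can later be substituted into.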
    def _convert_to_marker_data(self):
        """ Convert internal data to marker data.

        Populating the marker data into internal variable self.marker_data
        """
        # define vertical sync attributes
        hero_track = True
        self.review_layer = ""
        if self.vertical_sync and self.track_name not in self.driving_layer:
            # if it is not then define vertical sync as None
            hero_track = False

        # increasing steps by index of rename iteration
        if not self.index_from_segment:
            self.count_steps *= self.rename_index

        hierarchy_formating_data = {}
        hierarchy_data = deepcopy(self.hierarchy_data)
        _data = self.current_segment_default_data.copy()
        if self.ui_inputs:
            # adding tag metadata from ui
            for _k, _v in self.ui_inputs.items():
                if _v["target"] == "tag":
                    self.marker_data[_k] = _v["value"]

            # driving layer is set as positive match
            if hero_track or self.vertical_sync:
                # mark review layer
                if self.review_track and (
                        self.review_track not in self.review_track_default):
                    # if review layer is defined and not the same as default
                    self.review_layer = self.review_track

                # shot num calculate
                if self.index_from_segment:
                    # use clip index from timeline
                    self.shot_num = self.count_steps * self.cs_index
                else:
                    if self.rename_index == 0:
                        self.shot_num = self.count_from
                    else:
                        self.shot_num = self.count_from + self.count_steps

            # clip name sequence number
            _data.update({"shot": self.shot_num})

            # solve # in text to pythonic expression
            for _k, _v in hierarchy_data.items():
                if "#" not in _v["value"]:
                    continue
                hierarchy_data[
                    _k]["value"] = self._replace_hash_to_expression(
                        _k, _v["value"])

            # fill up pythonic expressions in hierarchy data
            for k, _v in hierarchy_data.items():
                hierarchy_formating_data[k] = _v["value"].format(**_data)
        else:
            # if no gui mode then just pass default data
            hierarchy_formating_data = hierarchy_data

        tag_hierarchy_data = self._solve_tag_hierarchy_data(
            hierarchy_formating_data
        )

        tag_hierarchy_data.update({"heroTrack": True})
        if hero_track and self.vertical_sync:
            self.vertical_clip_match.update({
                (self.clip_in, self.clip_out): tag_hierarchy_data
            })

        if not hero_track and self.vertical_sync:
            # driving layer is set as negative match
            for (_in, _out), hero_data in self.vertical_clip_match.items():
                hero_data.update({"heroTrack": False})
                if _in == self.clip_in and _out == self.clip_out:
                    data_subset = hero_data["subset"]
                    # add track index in case duplicity of names in hero data
                    if self.subset in data_subset:
                        hero_data["subset"] = self.subset + str(
                            self.track_index)
                    # in case track name and subset name is the same then add
                    if self.subset_name == self.track_name:
                        hero_data["subset"] = self.subset
                    # assign data to return hierarchy data to tag
                    tag_hierarchy_data = hero_data

        # add data to return data dict
        self.marker_data.update(tag_hierarchy_data)

    def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
        """ Solve marker data from hierarchy data and templates. """
        # fill up clip name and hierarchy keys
        hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
        clip_name_filled = self.clip_name.format(**hierarchy_formating_data)

        # remove shot from hierarchy data: is not needed anymore
        hierarchy_formating_data.pop("shot")

        return {
            "newClipName": clip_name_filled,
            "hierarchy": hierarchy_filled,
            "parents": self.parents,
            "hierarchyData": hierarchy_formating_data,
            "subset": self.subset,
            "family": self.subset_family,
            "families": [self.family]
        }

    def _convert_to_entity(self, type, template):
        """ Converting input key to key with type. """
        # convert to entity type
        entity_type = self.types.get(type, None)

        assert entity_type, "Missing entity type for `{}`".format(
            type
        )

        # first collect formatting data to use for formatting template
        formating_data = {}
        for _k, _v in self.hierarchy_data.items():
            value = _v["value"].format(
                **self.current_segment_default_data)
            formating_data[_k] = value

        return {
            "entity_type": entity_type,
            "entity_name": template.format(
                **formating_data
            )
        }

    def _create_parents(self):
        """ Create parents and return it in list. """
        self.parents = []

        patern = re.compile(self.parents_search_patern)

        par_split = [(patern.findall(t).pop(), t)
                     for t in self.hierarchy.split("/")]

        for type, template in par_split:
            parent = self._convert_to_entity(type, template)
            self.parents.append(parent)


# Publishing plugin functions
# Loader plugin functions
@@ -9,26 +9,14 @@ import json
import xml.dom.minidom as minidom
from copy import deepcopy
import datetime

-try:
-    from libwiretapPythonClientAPI import (
-        WireTapClientInit)
-except ImportError:
-    flame_python_path = "/opt/Autodesk/flame_2021/python"
-    flame_exe_path = (
-        "/opt/Autodesk/flame_2021/bin/flame.app"
-        "/Contents/MacOS/startApp")
-
-    sys.path.append(flame_python_path)
-
-    from libwiretapPythonClientAPI import (
-        WireTapClientInit,
-        WireTapClientUninit,
-        WireTapNodeHandle,
-        WireTapServerHandle,
-        WireTapInt,
-        WireTapStr
-    )
+from libwiretapPythonClientAPI import (  # noqa
+    WireTapClientInit,
+    WireTapClientUninit,
+    WireTapNodeHandle,
+    WireTapServerHandle,
+    WireTapInt,
+    WireTapStr
+)


class WireTapCom(object):
@@ -55,6 +43,9 @@ class WireTapCom(object):
        self.volume_name = volume_name or "stonefs"
        self.group_name = group_name or "staff"

+        # wiretap tools dir path
+        self.wiretap_tools_dir = os.getenv("OPENPYPE_WIRETAP_TOOLS")
+
        # initialize WireTap client
        WireTapClientInit()

@@ -84,9 +75,11 @@ class WireTapCom(object):
        workspace_name = kwargs.get("workspace_name")
        color_policy = kwargs.get("color_policy")

-        self._project_prep(project_name)
-        self._set_project_settings(project_name, project_data)
-        self._set_project_colorspace(project_name, color_policy)
+        project_exists = self._project_prep(project_name)
+        if not project_exists:
+            self._set_project_settings(project_name, project_data)
+            self._set_project_colorspace(project_name, color_policy)

        user_name = self._user_prep(user_name)

        if workspace_name is None:
@@ -169,18 +162,15 @@ class WireTapCom(object):
        # check if volumes exists
        if self.volume_name not in volumes:
            raise AttributeError(
-                ("Volume '{}' does not exist '{}'").format(
+                ("Volume '{}' does not exist in '{}'").format(
                    self.volume_name, volumes)
            )

        # form cmd arguments
        project_create_cmd = [
            os.path.join(
-                "/opt/Autodesk/",
-                "wiretap",
-                "tools",
-                "2021",
-                "wiretap_create_node",
+                self.wiretap_tools_dir,
+                "wiretap_create_node"
            ),
            '-n',
            os.path.join("/volumes", self.volume_name),
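The Wiretap changes above replace hard-coded `/opt/Autodesk/wiretap/tools/2021` paths with a directory read from the `OPENPYPE_WIRETAP_TOOLS` environment variable. A minimal sketch of that lookup pattern (the helper name and fallback path are illustrative, not part of the actual code):

```python
import os


def wiretap_tool_path(tool_name, env_var="OPENPYPE_WIRETAP_TOOLS"):
    # prefer the configured tools directory; fall back to a default
    # Autodesk install location (the fallback path is an assumption)
    tools_dir = os.getenv(env_var) or "/opt/Autodesk/wiretap/tools/2021"
    return os.path.join(tools_dir, tool_name)


os.environ["OPENPYPE_WIRETAP_TOOLS"] = "/studio/wiretap/tools"
print(wiretap_tool_path("wiretap_create_node"))
# /studio/wiretap/tools/wiretap_create_node
```

Moving the version-specific directory into an environment variable means the same code works across Flame releases without editing the hard-coded `2021` path.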
@@ -202,6 +192,7 @@ class WireTapCom(object):
 
         print(
             "A new project '{}' is created.".format(project_name))
+        return project_exists
 
     def _get_all_volumes(self):
         """Request all available volumens from WireTap
@@ -431,11 +422,8 @@ class WireTapCom(object):
         color_policy = color_policy or "Legacy"
         project_colorspace_cmd = [
             os.path.join(
-                "/opt/Autodesk/",
-                "wiretap",
-                "tools",
-                "2021",
-                "wiretap_duplicate_node",
+                self.wiretap_tools_dir,
+                "wiretap_duplicate_node"
             ),
             "-s",
             "/syncolor/policies/Autodesk/{}".format(color_policy),
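The hunks above replace the hardcoded `/opt/Autodesk/wiretap/tools/2021` prefix with a `wiretap_tools_dir` read from the `OPENPYPE_WIRETAP_TOOLS` environment variable. A minimal sketch of that lookup pattern (the env-var name is from the diff; the fallback default and helper name here are illustrative assumptions):

```python
import os


def get_wiretap_tool(tool_name, env=None):
    # resolve the wiretap tools dir from the environment, as the diff does;
    # the hardcoded fallback below is only an illustrative assumption
    env = env or os.environ
    tools_dir = env.get(
        "OPENPYPE_WIRETAP_TOOLS", "/opt/Autodesk/wiretap/tools/2021")
    return os.path.join(tools_dir, tool_name)


tool = get_wiretap_tool(
    "wiretap_create_node", {"OPENPYPE_WIRETAP_TOOLS": "/opt/tools"})
```

This keeps the Flame/WireTap version out of the code, so a studio can point the hook at a different Autodesk install without patching the module.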
@@ -5,17 +5,14 @@ from pprint import pformat
 import atexit
 import openpype
 import avalon
-import openpype.hosts.flame as opflame
-
-flh = sys.modules[__name__]
-flh._project = None
+import openpype.hosts.flame.api as opfapi


 def openpype_install():
     """Registering OpenPype in context
     """
     openpype.install()
-    avalon.api.install(opflame)
+    avalon.api.install(opfapi)
     print("Avalon registred hosts: {}".format(
         avalon.api.registered_host()))
@@ -48,30 +45,34 @@ sys.excepthook = exeption_handler
 def cleanup():
     """Cleaning up Flame framework context
     """
-    if opflame.apps:
-        print('`{}` cleaning up apps:\n {}\n'.format(
-            __file__, pformat(opflame.apps)))
-        while len(opflame.apps):
-            app = opflame.apps.pop()
+    if opfapi.CTX.flame_apps:
+        print('`{}` cleaning up flame_apps:\n {}\n'.format(
+            __file__, pformat(opfapi.CTX.flame_apps)))
+        while len(opfapi.CTX.flame_apps):
+            app = opfapi.CTX.flame_apps.pop()
             print('`{}` removing : {}'.format(__file__, app.name))
             del app
-        opflame.apps = []
+        opfapi.CTX.flame_apps = []
 
-    if opflame.app_framework:
-        print('PYTHON\t: %s cleaning up' % opflame.app_framework.bundle_name)
-        opflame.app_framework.save_prefs()
-        opflame.app_framework = None
+    if opfapi.CTX.app_framework:
+        print('openpype\t: {} cleaning up'.format(
+            opfapi.CTX.app_framework.bundle_name)
+        )
+        opfapi.CTX.app_framework.save_prefs()
+        opfapi.CTX.app_framework = None
 
 
 atexit.register(cleanup)
 
 
 def load_apps():
-    """Load available apps into Flame framework
+    """Load available flame_apps into Flame framework
     """
-    opflame.apps.append(opflame.FlameMenuProjectConnect(opflame.app_framework))
-    opflame.apps.append(opflame.FlameMenuTimeline(opflame.app_framework))
-    opflame.app_framework.log.info("Apps are loaded")
+    opfapi.CTX.flame_apps.append(
+        opfapi.FlameMenuProjectConnect(opfapi.CTX.app_framework))
+    opfapi.CTX.flame_apps.append(
+        opfapi.FlameMenuTimeline(opfapi.CTX.app_framework))
+    opfapi.CTX.app_framework.log.info("Apps are loaded")
 
 
 def project_changed_dict(info):
@@ -89,10 +90,10 @@ def app_initialized(parent=None):
     Args:
         parent (obj, optional): Parent object. Defaults to None.
     """
-    opflame.app_framework = opflame.FlameAppFramework()
+    opfapi.CTX.app_framework = opfapi.FlameAppFramework()
 
     print("{} initializing".format(
-        opflame.app_framework.bundle_name))
+        opfapi.CTX.app_framework.bundle_name))
 
     load_apps()
@@ -103,7 +104,7 @@ Initialisation of the hook is starting from here
 First it needs to test if it can import the flame modul.
 This will happen only in case a project has been loaded.
 Then `app_initialized` will load main Framework which will load
-all menu objects as apps.
+all menu objects as flame_apps.
 """
 
 try:
@@ -131,15 +132,15 @@ def _build_app_menu(app_name):
 
     # first find the relative appname
     app = None
-    for _app in opflame.apps:
+    for _app in opfapi.CTX.flame_apps:
         if _app.__class__.__name__ == app_name:
             app = _app
 
     if app:
         menu.append(app.build_menu())
 
-    if opflame.app_framework:
-        menu_auto_refresh = opflame.app_framework.prefs_global.get(
+    if opfapi.CTX.app_framework:
+        menu_auto_refresh = opfapi.CTX.app_framework.prefs_global.get(
             'menu_auto_refresh', {})
         if menu_auto_refresh.get('timeline_menu', True):
             try:
@@ -163,8 +164,8 @@ def project_saved(project_name, save_time, is_auto_save):
         save_time (str): time when it was saved
         is_auto_save (bool): autosave is on or off
     """
-    if opflame.app_framework:
-        opflame.app_framework.save_prefs()
+    if opfapi.CTX.app_framework:
+        opfapi.CTX.app_framework.save_prefs()
 
 
 def get_main_menu_custom_ui_actions():
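The `cleanup()` handler above is registered with `atexit` so the Flame apps list and framework are torn down when the interpreter exits, and it can also be called directly. A stripped-down sketch of the same register-and-drain pattern (the `apps` list and `DummyApp` class are stand-ins, not the real Flame API):

```python
import atexit

apps = []  # stand-in for opfapi.CTX.flame_apps


class DummyApp(object):
    def __init__(self, name):
        self.name = name


def cleanup():
    # drain the list the same way the hook does
    while len(apps):
        app = apps.pop()
        print("removing: {}".format(app.name))
        del app


atexit.register(cleanup)  # also runs automatically at interpreter exit

apps.append(DummyApp("FlameMenuTimeline"))
cleanup()  # safe to invoke directly; registered call later is a no-op
```

Because the handler is idempotent (it only pops what is there), registering it with `atexit` while also calling it from `project_saved`-style hooks is safe.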
@@ -5,7 +5,7 @@ Flame utils for syncing scripts
 import os
 import shutil
 from openpype.api import Logger
-log = Logger().get_logger(__name__)
+log = Logger.get_logger(__name__)
 
 
 def _sync_utility_scripts(env=None):
@@ -75,10 +75,19 @@ def _sync_utility_scripts(env=None):
 
             path = os.path.join(flame_shared_dir, _itm)
             log.info("Removing `{path}`...".format(**locals()))
-            if os.path.isdir(path):
-                shutil.rmtree(path, onerror=None)
-            else:
-                os.remove(path)
+
+            try:
+                if os.path.isdir(path):
+                    shutil.rmtree(path, onerror=None)
+                else:
+                    os.remove(path)
+            except PermissionError as msg:
+                log.warning(
+                    "Not able to remove: `{}`, Problem with: `{}`".format(
+                        path,
+                        msg
+                    )
+                )
 
     # copy scripts into Resolve's utility scripts dir
     for dirpath, scriptlist in scripts.items():
@@ -88,13 +97,22 @@ def _sync_utility_scripts(env=None):
             src = os.path.join(dirpath, _script)
             dst = os.path.join(flame_shared_dir, _script)
             log.info("Copying `{src}` to `{dst}`...".format(**locals()))
-            if os.path.isdir(src):
-                shutil.copytree(
-                    src, dst, symlinks=False,
-                    ignore=None, ignore_dangling_symlinks=False
-                )
-            else:
-                shutil.copy2(src, dst)
+
+            try:
+                if os.path.isdir(src):
+                    shutil.copytree(
+                        src, dst, symlinks=False,
+                        ignore=None, ignore_dangling_symlinks=False
+                    )
+                else:
+                    shutil.copy2(src, dst)
+            except (PermissionError, FileExistsError) as msg:
+                log.warning(
+                    "Not able to coppy to: `{}`, Problem with: `{}`".format(
+                        dst,
+                        msg
+                    )
+                )
 
 
 def setup(env=None):
@@ -8,7 +8,7 @@ from openpype.api import Logger
 # )
 
 
-log = Logger().get_logger(__name__)
+log = Logger.get_logger(__name__)
 
 exported_projet_ext = ".otoc"
 
@@ -6,6 +6,7 @@ import socket
 from openpype.lib import (
     PreLaunchHook, get_openpype_username)
 from openpype.hosts import flame as opflame
+import openpype.hosts.flame.api as opfapi
 import openpype
 from pprint import pformat
 
@@ -18,18 +19,18 @@ class FlamePrelaunch(PreLaunchHook):
     """
     app_groups = ["flame"]
 
-    # todo: replace version number with avalon launch app version
-    flame_python_exe = "/opt/Autodesk/python/2021/bin/python2.7"
-
     wtc_script_path = os.path.join(
         opflame.HOST_DIR, "api", "scripts", "wiretap_com.py")
 
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
 
         self.signature = "( {} )".format(self.__class__.__name__)
 
     def execute(self):
+        _env = self.launch_context.env
+        self.flame_python_exe = _env["OPENPYPE_FLAME_PYTHON_EXEC"]
+        self.flame_pythonpath = _env["OPENPYPE_FLAME_PYTHONPATH"]
+
         """Hook entry method."""
         project_doc = self.data["project_doc"]
         user_name = get_openpype_username()
@@ -55,12 +56,11 @@ class FlamePrelaunch(PreLaunchHook):
             "FieldDominance": "PROGRESSIVE"
         }
 
         data_to_script = {
             # from settings
-            "host_name": os.getenv("FLAME_WIRETAP_HOSTNAME") or hostname,
-            "volume_name": os.getenv("FLAME_WIRETAP_VOLUME"),
-            "group_name": os.getenv("FLAME_WIRETAP_GROUP"),
+            "host_name": _env.get("FLAME_WIRETAP_HOSTNAME") or hostname,
+            "volume_name": _env.get("FLAME_WIRETAP_VOLUME"),
+            "group_name": _env.get("FLAME_WIRETAP_GROUP"),
             "color_policy": "ACES 1.1",
 
             # from project
@@ -68,14 +68,28 @@ class FlamePrelaunch(PreLaunchHook):
             "user_name": user_name,
             "project_data": project_data
         }
 
+        self.log.info(pformat(dict(_env)))
         self.log.info(pformat(data_to_script))
 
+        # add to python path from settings
+        self._add_pythonpath()
+
         app_arguments = self._get_launch_arguments(data_to_script)
 
-        self.log.info(pformat(dict(self.launch_context.env)))
-
-        opflame.setup(self.launch_context.env)
+        opfapi.setup(self.launch_context.env)
 
         self.launch_context.launch_args.extend(app_arguments)
 
+    def _add_pythonpath(self):
+        pythonpath = self.launch_context.env.get("PYTHONPATH")
+
+        # separate it explicity by `;` that is what we use in settings
+        new_pythonpath = self.flame_pythonpath.split(os.pathsep)
+        new_pythonpath += pythonpath.split(os.pathsep)
+
+        self.launch_context.env["PYTHONPATH"] = os.pathsep.join(new_pythonpath)
+
     def _get_launch_arguments(self, script_data):
         # Dump data to string
         dumped_script_data = json.dumps(script_data)
@@ -83,7 +97,9 @@ class FlamePrelaunch(PreLaunchHook):
         with make_temp_file(dumped_script_data) as tmp_json_path:
             # Prepare subprocess arguments
             args = [
-                self.flame_python_exe,
+                self.flame_python_exe.format(
+                    **self.launch_context.env
+                ),
                 self.wtc_script_path,
                 tmp_json_path
             ]
@@ -91,7 +107,7 @@ class FlamePrelaunch(PreLaunchHook):
 
         process_kwargs = {
             "logger": self.log,
-            "env": {}
+            "env": self.launch_context.env
        }
 
         openpype.api.run_subprocess(args, **process_kwargs)
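The `_add_pythonpath()` method added above prepends the settings-defined paths to the launch environment's `PYTHONPATH`, splitting and rejoining on `os.pathsep`. The same merge isolated as a function (the example paths are made up for illustration):

```python
import os


def merge_pythonpath(settings_paths, current):
    # split both sides on the platform path separator and rejoin,
    # settings-defined paths first, as the prelaunch hook does
    new_pythonpath = settings_paths.split(os.pathsep)
    new_pythonpath += current.split(os.pathsep)
    return os.pathsep.join(new_pythonpath)


merged = merge_pythonpath(
    os.pathsep.join(["/opt/flame/api"]),
    os.pathsep.join(["/usr/lib/py", "/opt/openpype"]),
)
```

Putting the settings paths first means they win when the same module name exists in both locations, which matters when the hook injects its own API modules into Flame's embedded Python.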
openpype/hosts/flame/otio/flame_export.py (new file, 657 lines)
@@ -0,0 +1,657 @@
""" compatibility OpenTimelineIO 0.12.0 and newer
"""

import os
import re
import json
import logging
import opentimelineio as otio
from . import utils

import flame
from pprint import pformat

reload(utils)  # noqa

log = logging.getLogger(__name__)


TRACK_TYPES = {
    "video": otio.schema.TrackKind.Video,
    "audio": otio.schema.TrackKind.Audio
}
MARKERS_COLOR_MAP = {
    (1.0, 0.0, 0.0): otio.schema.MarkerColor.RED,
    (1.0, 0.5, 0.0): otio.schema.MarkerColor.ORANGE,
    (1.0, 1.0, 0.0): otio.schema.MarkerColor.YELLOW,
    (1.0, 0.5, 1.0): otio.schema.MarkerColor.PINK,
    (1.0, 1.0, 1.0): otio.schema.MarkerColor.WHITE,
    (0.0, 1.0, 0.0): otio.schema.MarkerColor.GREEN,
    (0.0, 1.0, 1.0): otio.schema.MarkerColor.CYAN,
    (0.0, 0.0, 1.0): otio.schema.MarkerColor.BLUE,
    (0.5, 0.0, 0.5): otio.schema.MarkerColor.PURPLE,
    (0.5, 0.0, 1.0): otio.schema.MarkerColor.MAGENTA,
    (0.0, 0.0, 0.0): otio.schema.MarkerColor.BLACK
}
MARKERS_INCLUDE = True


class CTX:
    _fps = None
    _tl_start_frame = None
    project = None
    clips = None

    @classmethod
    def set_fps(cls, new_fps):
        if not isinstance(new_fps, float):
            raise TypeError("Invalid fps type {}".format(type(new_fps)))
        if cls._fps != new_fps:
            cls._fps = new_fps

    @classmethod
    def get_fps(cls):
        return cls._fps

    @classmethod
    def set_tl_start_frame(cls, number):
        if not isinstance(number, int):
            raise TypeError("Invalid timeline start frame type {}".format(
                type(number)))
        if cls._tl_start_frame != number:
            cls._tl_start_frame = number

    @classmethod
    def get_tl_start_frame(cls):
        return cls._tl_start_frame


def flatten(_list):
    for item in _list:
        if isinstance(item, (list, tuple)):
            for sub_item in flatten(item):
                yield sub_item
        else:
            yield item


def get_current_flame_project():
    project = flame.project.current_project
    return project


def create_otio_rational_time(frame, fps):
    return otio.opentime.RationalTime(
        float(frame),
        float(fps)
    )


def create_otio_time_range(start_frame, frame_duration, fps):
    return otio.opentime.TimeRange(
        start_time=create_otio_rational_time(start_frame, fps),
        duration=create_otio_rational_time(frame_duration, fps)
    )


def _get_metadata(item):
    if hasattr(item, 'metadata'):
        if not item.metadata:
            return {}
        return {key: value for key, value in dict(item.metadata)}
    return {}


def create_time_effects(otio_clip, item):
    # todo #2426: add retiming effects to export
    # get all subtrack items
    # subTrackItems = flatten(track_item.parent().subTrackItems())
    # speed = track_item.playbackSpeed()

    # otio_effect = None
    # # retime on track item
    # if speed != 1.:
    #     # make effect
    #     otio_effect = otio.schema.LinearTimeWarp()
    #     otio_effect.name = "Speed"
    #     otio_effect.time_scalar = speed
    #     otio_effect.metadata = {}

    # # freeze frame effect
    # if speed == 0.:
    #     otio_effect = otio.schema.FreezeFrame()
    #     otio_effect.name = "FreezeFrame"
    #     otio_effect.metadata = {}

    # if otio_effect:
    #     # add otio effect to clip effects
    #     otio_clip.effects.append(otio_effect)

    # # loop trought and get all Timewarps
    # for effect in subTrackItems:
    #     if ((track_item not in effect.linkedItems())
    #             and (len(effect.linkedItems()) > 0)):
    #         continue
    #     # avoid all effect which are not TimeWarp and disabled
    #     if "TimeWarp" not in effect.name():
    #         continue

    #     if not effect.isEnabled():
    #         continue

    #     node = effect.node()
    #     name = node["name"].value()

    #     # solve effect class as effect name
    #     _name = effect.name()
    #     if "_" in _name:
    #         effect_name = re.sub(r"(?:_)[_0-9]+", "", _name)  # more numbers
    #     else:
    #         effect_name = re.sub(r"\d+", "", _name)  # one number

    #     metadata = {}
    #     # add knob to metadata
    #     for knob in ["lookup", "length"]:
    #         value = node[knob].value()
    #         animated = node[knob].isAnimated()
    #         if animated:
    #             value = [
    #                 ((node[knob].getValueAt(i)) - i)
    #                 for i in range(
    #                     track_item.timelineIn(),
    #                     track_item.timelineOut() + 1)
    #             ]

    #         metadata[knob] = value

    #     # make effect
    #     otio_effect = otio.schema.TimeEffect()
    #     otio_effect.name = name
    #     otio_effect.effect_name = effect_name
    #     otio_effect.metadata = metadata

    #     # add otio effect to clip effects
    #     otio_clip.effects.append(otio_effect)
    pass


def _get_marker_color(flame_colour):
    # clamp colors to closes half numbers
    _flame_colour = [
        (lambda x: round(x * 2) / 2)(c)
        for c in flame_colour]

    for color, otio_color_type in MARKERS_COLOR_MAP.items():
        if _flame_colour == list(color):
            return otio_color_type

    return otio.schema.MarkerColor.RED


def _get_flame_markers(item):
    output_markers = []

    time_in = item.record_in.relative_frame

    for marker in item.markers:
        log.debug(marker)
        start_frame = marker.location.get_value().relative_frame

        start_frame = (start_frame - time_in) + 1

        marker_data = {
            "name": marker.name.get_value(),
            "duration": marker.duration.get_value().relative_frame,
            "comment": marker.comment.get_value(),
            "start_frame": start_frame,
            "colour": marker.colour.get_value()
        }

        output_markers.append(marker_data)

    return output_markers


def create_otio_markers(otio_item, item):
    markers = _get_flame_markers(item)
    for marker in markers:
        frame_rate = CTX.get_fps()

        marked_range = otio.opentime.TimeRange(
            start_time=otio.opentime.RationalTime(
                marker["start_frame"],
                frame_rate
            ),
            duration=otio.opentime.RationalTime(
                marker["duration"],
                frame_rate
            )
        )

        # testing the comment if it is not containing json string
        check_if_json = re.findall(
            re.compile(r"[{:}]"),
            marker["comment"]
        )

        # to identify this as json, at least 3 items in the list should
        # be present ["{", ":", "}"]
        metadata = {}
        if len(check_if_json) >= 3:
            # this is json string
            try:
                # capture exceptions which are related to strings only
                metadata.update(
                    json.loads(marker["comment"])
                )
            except ValueError as msg:
                log.error("Marker json conversion: {}".format(msg))
        else:
            metadata["comment"] = marker["comment"]

        otio_marker = otio.schema.Marker(
            name=marker["name"],
            color=_get_marker_color(
                marker["colour"]),
            marked_range=marked_range,
            metadata=metadata
        )

        otio_item.markers.append(otio_marker)


def create_otio_reference(clip_data):
    metadata = _get_metadata(clip_data)

    # get file info for path and start frame
    frame_start = 0
    fps = CTX.get_fps()

    path = clip_data["fpath"]

    reel_clip = None
    match_reel_clip = [
        clip for clip in CTX.clips
        if clip["fpath"] == path
    ]
    if match_reel_clip:
        reel_clip = match_reel_clip.pop()
        fps = reel_clip["fps"]

    file_name = os.path.basename(path)
    file_head, extension = os.path.splitext(file_name)

    # get padding and other file infos
    log.debug("_ path: {}".format(path))

    is_sequence = padding = utils.get_frame_from_path(path)
    if is_sequence:
        number = utils.get_frame_from_path(path)
        file_head = file_name.split(number)[:-1]
        frame_start = int(number)

    frame_duration = clip_data["source_duration"]

    if is_sequence:
        metadata.update({
            "isSequence": True,
            "padding": padding
        })

    otio_ex_ref_item = None

    if is_sequence:
        # if it is file sequence try to create `ImageSequenceReference`
        # the OTIO might not be compatible so return nothing and do it old way
        try:
            dirname = os.path.dirname(path)
            otio_ex_ref_item = otio.schema.ImageSequenceReference(
                target_url_base=dirname + os.sep,
                name_prefix=file_head,
                name_suffix=extension,
                start_frame=frame_start,
                frame_zero_padding=padding,
                rate=fps,
                available_range=create_otio_time_range(
                    frame_start,
                    frame_duration,
                    fps
                )
            )
        except AttributeError:
            pass

    if not otio_ex_ref_item:
        reformat_path = utils.get_reformated_path(path, padded=False)
        # in case old OTIO or video file create `ExternalReference`
        otio_ex_ref_item = otio.schema.ExternalReference(
            target_url=reformat_path,
            available_range=create_otio_time_range(
                frame_start,
                frame_duration,
                fps
            )
        )

    # add metadata to otio item
    add_otio_metadata(otio_ex_ref_item, clip_data, **metadata)

    return otio_ex_ref_item


def create_otio_clip(clip_data):
    segment = clip_data["PySegment"]

    # create media reference
    media_reference = create_otio_reference(clip_data)

    # calculate source in
    first_frame = utils.get_frame_from_path(clip_data["fpath"]) or 0
    source_in = int(clip_data["source_in"]) - int(first_frame)

    # creatae source range
    source_range = create_otio_time_range(
        source_in,
        clip_data["record_duration"],
        CTX.get_fps()
    )

    otio_clip = otio.schema.Clip(
        name=clip_data["segment_name"],
        source_range=source_range,
        media_reference=media_reference
    )

    # Add markers
    if MARKERS_INCLUDE:
        create_otio_markers(otio_clip, segment)

    return otio_clip


def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):
    return otio.schema.Gap(
        source_range=create_otio_time_range(
            gap_start,
            (clip_start - tl_start_frame) - gap_start,
            fps
        )
    )


def get_clips_in_reels(project):
    output_clips = []
    project_desktop = project.current_workspace.desktop

    for reel_group in project_desktop.reel_groups:
        for reel in reel_group.reels:
            for clip in reel.clips:
                clip_data = {
                    "PyClip": clip,
                    "fps": float(str(clip.frame_rate)[:-4])
                }

                attrs = [
                    "name", "width", "height",
                    "ratio", "sample_rate", "bit_depth"
                ]

                for attr in attrs:
                    val = getattr(clip, attr)
                    clip_data[attr] = val

                version = clip.versions[-1]
                track = version.tracks[-1]
                for segment in track.segments:
                    segment_data = _get_segment_attributes(segment)
                    clip_data.update(segment_data)

                output_clips.append(clip_data)

    return output_clips


def _get_colourspace_policy():

    output = {}
    # get policies project path
    policy_dir = "/opt/Autodesk/project/{}/synColor/policy".format(
        CTX.project.name
    )
    log.debug(policy_dir)
    policy_fp = os.path.join(policy_dir, "policy.cfg")

    if not os.path.exists(policy_fp):
        return output

    with open(policy_fp) as file:
        dict_conf = dict(line.strip().split(' = ', 1) for line in file)
        output.update(
            {"openpype.flame.{}".format(k): v for k, v in dict_conf.items()}
        )
    return output


def _create_otio_timeline(sequence):

    metadata = _get_metadata(sequence)

    # find colour policy files and add them to metadata
    colorspace_policy = _get_colourspace_policy()
    metadata.update(colorspace_policy)

    metadata.update({
        "openpype.timeline.width": int(sequence.width),
        "openpype.timeline.height": int(sequence.height),
        "openpype.timeline.pixelAspect": 1
    })

    rt_start_time = create_otio_rational_time(
        CTX.get_tl_start_frame(), CTX.get_fps())

    return otio.schema.Timeline(
        name=str(sequence.name)[1:-1],
        global_start_time=rt_start_time,
        metadata=metadata
    )


def create_otio_track(track_type, track_name):
    return otio.schema.Track(
        name=track_name,
        kind=TRACK_TYPES[track_type]
    )


def add_otio_gap(clip_data, otio_track, prev_out):
    gap_length = clip_data["record_in"] - prev_out
    if prev_out != 0:
        gap_length -= 1

    gap = otio.opentime.TimeRange(
        duration=otio.opentime.RationalTime(
            gap_length,
            CTX.get_fps()
        )
    )
    otio_gap = otio.schema.Gap(source_range=gap)
    otio_track.append(otio_gap)


def add_otio_metadata(otio_item, item, **kwargs):
    metadata = _get_metadata(item)

    # add additional metadata from kwargs
    if kwargs:
        metadata.update(kwargs)

    # add metadata to otio item metadata
    for key, value in metadata.items():
        otio_item.metadata.update({key: value})


def _get_shot_tokens_values(clip, tokens):
    old_value = None
    output = {}

    if not clip.shot_name:
        return output

    old_value = clip.shot_name.get_value()

    for token in tokens:
        clip.shot_name.set_value(token)
        _key = re.sub("[ <>]", "", token)

        try:
            output[_key] = int(clip.shot_name.get_value())
        except ValueError:
            output[_key] = clip.shot_name.get_value()

    clip.shot_name.set_value(old_value)

    return output


def _get_segment_attributes(segment):
    # log.debug(dir(segment))

    if str(segment.name)[1:-1] == "":
        return None

    # Add timeline segment to tree
    clip_data = {
        "segment_name": segment.name.get_value(),
        "segment_comment": segment.comment.get_value(),
        "tape_name": segment.tape_name,
        "source_name": segment.source_name,
        "fpath": segment.file_path,
        "PySegment": segment
    }

    # add all available shot tokens
    shot_tokens = _get_shot_tokens_values(segment, [
        "<colour space>", "<width>", "<height>", "<depth>",
    ])
    clip_data.update(shot_tokens)

    # populate shot source metadata
    segment_attrs = [
        "record_duration", "record_in", "record_out",
        "source_duration", "source_in", "source_out"
    ]
    segment_attrs_data = {}
    for attr in segment_attrs:
        if not hasattr(segment, attr):
            continue
        _value = getattr(segment, attr)
        segment_attrs_data[attr] = str(_value).replace("+", ":")

        if attr in ["record_in", "record_out"]:
            clip_data[attr] = _value.relative_frame
        else:
            clip_data[attr] = _value.frame

    clip_data["segment_timecodes"] = segment_attrs_data

    return clip_data


def create_otio_timeline(sequence):
    log.info(dir(sequence))
    log.info(sequence.attributes)

    CTX.project = get_current_flame_project()
    CTX.clips = get_clips_in_reels(CTX.project)

    log.debug(pformat(
        CTX.clips
    ))

    # get current timeline
    CTX.set_fps(
        float(str(sequence.frame_rate)[:-4]))

    tl_start_frame = utils.timecode_to_frames(
        str(sequence.start_time).replace("+", ":"),
        CTX.get_fps()
    )
    CTX.set_tl_start_frame(tl_start_frame)

    # convert timeline to otio
    otio_timeline = _create_otio_timeline(sequence)

    # create otio tracks and clips
    for ver in sequence.versions:
        for track in ver.tracks:
            if len(track.segments) == 0 and track.hidden:
                return None

            # convert track to otio
            otio_track = create_otio_track(
                "video", str(track.name)[1:-1])

            all_segments = []
            for segment in track.segments:
                clip_data = _get_segment_attributes(segment)
                if not clip_data:
                    continue
                all_segments.append(clip_data)

            segments_ordered = {
                itemindex: clip_data
                for itemindex, clip_data in enumerate(
                    all_segments)
            }
            log.debug("_ segments_ordered: {}".format(
                pformat(segments_ordered)
            ))
            if not segments_ordered:
                continue

            for itemindex, segment_data in segments_ordered.items():
                log.debug("_ itemindex: {}".format(itemindex))

                # Add Gap if needed
                if itemindex == 0:
                    # if it is first track item at track then add
                    # it to previouse item
                    prev_item = segment_data

                else:
                    # get previouse item
                    prev_item = segments_ordered[itemindex - 1]

                log.debug("_ segment_data: {}".format(segment_data))

                # calculate clip frame range difference from each other
                clip_diff = segment_data["record_in"] - prev_item["record_out"]

                # add gap if first track item is not starting
                # at first timeline frame
                if itemindex == 0 and segment_data["record_in"] > 0:
                    add_otio_gap(segment_data, otio_track, 0)

                # or add gap if following track items are having
                # frame range differences from each other
                elif itemindex and clip_diff != 1:
                    add_otio_gap(
                        segment_data, otio_track, prev_item["record_out"])

                # create otio clip and add it to track
                otio_clip = create_otio_clip(segment_data)
                otio_track.append(otio_clip)

                log.debug("_ otio_clip: {}".format(otio_clip))

                # create otio marker
                # create otio metadata

            # add track to otio timeline
            otio_timeline.tracks.append(otio_track)

    return otio_timeline


def write_to_file(otio_timeline, path):
    otio.adapters.write_to_file(otio_timeline, path)
openpype/hosts/flame/otio/utils.py (new file, 95 lines)
@ -0,0 +1,95 @@
|
|||
import re
|
||||
import opentimelineio as otio
|
||||
import logging
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def timecode_to_frames(timecode, framerate):
|
||||
    rt = otio.opentime.from_timecode(timecode, framerate)
    return int(otio.opentime.to_frames(rt))


def frames_to_timecode(frames, framerate):
    rt = otio.opentime.from_frames(frames, framerate)
    return otio.opentime.to_timecode(rt)


def frames_to_seconds(frames, framerate):
    rt = otio.opentime.from_frames(frames, framerate)
    return otio.opentime.to_seconds(rt)


def get_reformated_path(path, padded=True):
    """Return fixed python expression path.

    Args:
        path (str): path url or simple file name

    Returns:
        str: string with reformatted path

    Example:
        get_reformated_path("plate.1001.exr") > plate.%04d.exr

    """
    padding = get_padding_from_path(path)
    found = get_frame_from_path(path)

    if not found:
        log.info("Path is not sequence: {}".format(path))
        return path

    if padded:
        path = path.replace(found, "%0{}d".format(padding))
    else:
        path = path.replace(found, "%d")

    return path


def get_padding_from_path(path):
    """Return padding number from Flame path style.

    Args:
        path (str): path url or simple file name

    Returns:
        int: padding number

    Example:
        get_padding_from_path("plate.0001.exr") > 4

    """
    found = get_frame_from_path(path)

    if found:
        return len(found)
    return None


def get_frame_from_path(path):
    """Return sequence frame number from Flame path style.

    Args:
        path (str): path url or simple file name

    Returns:
        str: sequence frame number

    Example:
        get_frame_from_path("plate.0001.exr") > "0001"

    """
    frame_pattern = re.compile(r"[._](\d+)[.]")

    found = re.findall(frame_pattern, path)

    if found:
        return found.pop()
    return None
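The sequence-path helpers above can be exercised outside Flame with plain `re`. A minimal standalone sketch (hypothetical function names, same `[._]<digits>[.]` frame convention):

```python
import re

# Mirrors get_frame_from_path / get_reformated_path on plain strings.
FRAME_PATTERN = re.compile(r"[._](\d+)[.]")


def get_frame(path):
    # last frame-like token in the path, or None for non-sequences
    found = FRAME_PATTERN.findall(path)
    return found.pop() if found else None


def reformat(path, padded=True):
    frame = get_frame(path)
    if frame is None:
        return path  # not a sequence, leave untouched
    token = "%0{}d".format(len(frame)) if padded else "%d"
    return path.replace(frame, token)


print(reformat("plate.1001.exr"))         # plate.%04d.exr
print(reformat("plate.1001.exr", False))  # plate.%d.exr
```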
275 openpype/hosts/flame/plugins/create/create_shot_clip.py Normal file
@@ -0,0 +1,275 @@
from copy import deepcopy
import openpype.hosts.flame.api as opfapi


class CreateShotClip(opfapi.Creator):
    """Publishable clip"""

    label = "Create Publishable Clip"
    family = "clip"
    icon = "film"
    defaults = ["Main"]

    presets = None

    def process(self):
        # Create a copy of object attributes that are modified during `process`
        presets = deepcopy(self.presets)
        gui_inputs = self.get_gui_inputs()

        # get key pairs from presets and match them on ui inputs
        for k, v in gui_inputs.items():
            if v["type"] in ("dict", "section"):
                # nested dictionary (only one level allowed
                # for sections and dict)
                for _k, _v in v["value"].items():
                    if presets.get(_k):
                        gui_inputs[k][
                            "value"][_k]["value"] = presets[_k]
            if presets.get(k):
                gui_inputs[k]["value"] = presets[k]

        # open widget for plugins inputs
        results_back = self.create_widget(
            "Pype publish attributes creator",
            "Define sequential rename and fill hierarchy data.",
            gui_inputs
        )

        if len(self.selected) < 1:
            return

        if not results_back:
            print("Operation aborted")
            return

        # get ui output for track name for vertical sync
        v_sync_track = results_back["vSyncTrack"]["value"]

        # sort selected track items, hero track first
        sorted_selected_segments = []
        unsorted_selected_segments = []
        for _segment in self.selected:
            if _segment.parent.name.get_value() in v_sync_track:
                sorted_selected_segments.append(_segment)
            else:
                unsorted_selected_segments.append(_segment)

        sorted_selected_segments.extend(unsorted_selected_segments)

        kwargs = {
            "log": self.log,
            "ui_inputs": results_back,
            "avalon": self.data,
            "family": self.data["family"]
        }

        for i, segment in enumerate(sorted_selected_segments):
            kwargs["rename_index"] = i
            # convert track item to timeline media pool item
            opfapi.PublishableClip(segment, **kwargs).convert()

    def get_gui_inputs(self):
        gui_tracks = self._get_video_track_names(
            opfapi.get_current_sequence(opfapi.CTX.selection)
        )
        return deepcopy({
            "renameHierarchy": {
                "type": "section",
                "label": "Shot Hierarchy And Rename Settings",
                "target": "ui",
                "order": 0,
                "value": {
                    "hierarchy": {
                        "value": "{folder}/{sequence}",
                        "type": "QLineEdit",
                        "label": "Shot Parent Hierarchy",
                        "target": "tag",
                        "toolTip": "Parent folder for shot root folder, template filled with `Hierarchy Data` section",  # noqa
                        "order": 0},
                    "clipRename": {
                        "value": False,
                        "type": "QCheckBox",
                        "label": "Rename clips",
                        "target": "ui",
                        "toolTip": "Rename selected clips on the fly",  # noqa
                        "order": 1},
                    "clipName": {
                        "value": "{sequence}{shot}",
                        "type": "QLineEdit",
                        "label": "Clip Name Template",
                        "target": "ui",
                        "toolTip": "Template for creating shot names used for renaming (use rename: on)",  # noqa
                        "order": 2},
                    "segmentIndex": {
                        "value": True,
                        "type": "QCheckBox",
                        "label": "Segment index",
                        "target": "ui",
                        "toolTip": "Take number from segment index",  # noqa
                        "order": 3},
                    "countFrom": {
                        "value": 10,
                        "type": "QSpinBox",
                        "label": "Count sequence from",
                        "target": "ui",
                        "toolTip": "Set the number the sequence starts from",  # noqa
                        "order": 4},
                    "countSteps": {
                        "value": 10,
                        "type": "QSpinBox",
                        "label": "Stepping number",
                        "target": "ui",
                        "toolTip": "Number added with every new step",  # noqa
                        "order": 5},
                }
            },
            "hierarchyData": {
                "type": "dict",
                "label": "Shot Template Keywords",
                "target": "tag",
                "order": 1,
                "value": {
                    "folder": {
                        "value": "shots",
                        "type": "QLineEdit",
                        "label": "{folder}",
                        "target": "tag",
                        "toolTip": "Name of folder used for root of generated shots.\nUsable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)",  # noqa
                        "order": 0},
                    "episode": {
                        "value": "ep01",
                        "type": "QLineEdit",
                        "label": "{episode}",
                        "target": "tag",
                        "toolTip": "Name of episode.\nUsable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)",  # noqa
                        "order": 1},
                    "sequence": {
                        "value": "sq01",
                        "type": "QLineEdit",
                        "label": "{sequence}",
                        "target": "tag",
                        "toolTip": "Name of sequence of shots.\nUsable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)",  # noqa
                        "order": 2},
                    "track": {
                        "value": "{_track_}",
                        "type": "QLineEdit",
                        "label": "{track}",
                        "target": "tag",
                        "toolTip": "Name of track layer.\nUsable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)",  # noqa
                        "order": 3},
                    "shot": {
                        "value": "sh###",
                        "type": "QLineEdit",
                        "label": "{shot}",
                        "target": "tag",
                        "toolTip": "Name of shot. `#` is converted to padded number.\nAlso could be used with usable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)",  # noqa
                        "order": 4}
                }
            },
            "verticalSync": {
                "type": "section",
                "label": "Vertical Synchronization Of Attributes",
                "target": "ui",
                "order": 2,
                "value": {
                    "vSyncOn": {
                        "value": True,
                        "type": "QCheckBox",
                        "label": "Enable Vertical Sync",
                        "target": "ui",
                        "toolTip": "Switch on if you want clips above each other to share their attributes",  # noqa
                        "order": 0},
                    "vSyncTrack": {
                        "value": gui_tracks,  # noqa
                        "type": "QComboBox",
                        "label": "Hero track",
                        "target": "ui",
                        "toolTip": "Select the driving track name which should be hero for all others",  # noqa
                        "order": 1}
                }
            },
            "publishSettings": {
                "type": "section",
                "label": "Publish Settings",
                "target": "ui",
                "order": 3,
                "value": {
                    "subsetName": {
                        "value": ["[ track name ]", "main", "bg", "fg", "bg",
                                  "animatic"],
                        "type": "QComboBox",
                        "label": "Subset Name",
                        "target": "ui",
                        "toolTip": "Choose subset name pattern; if [ track name ] is selected, the name of the track layer will be used",  # noqa
                        "order": 0},
                    "subsetFamily": {
                        "value": ["plate", "take"],
                        "type": "QComboBox",
                        "label": "Subset Family",
                        "target": "ui", "toolTip": "What this subset is used for",  # noqa
                        "order": 1},
                    "reviewTrack": {
                        "value": ["< none >"] + gui_tracks,
                        "type": "QComboBox",
                        "label": "Use Review Track",
                        "target": "ui",
                        "toolTip": "Generate preview videos on the fly; if `< none >` is defined nothing will be generated.",  # noqa
                        "order": 2},
                    "audio": {
                        "value": False,
                        "type": "QCheckBox",
                        "label": "Include audio",
                        "target": "tag",
                        "toolTip": "Process subsets with corresponding audio",  # noqa
                        "order": 3},
                    "sourceResolution": {
                        "value": False,
                        "type": "QCheckBox",
                        "label": "Source resolution",
                        "target": "tag",
                        "toolTip": "Is resolution taken from timeline or source?",  # noqa
                        "order": 4},
                }
            },
            "frameRangeAttr": {
                "type": "section",
                "label": "Shot Attributes",
                "target": "ui",
                "order": 4,
                "value": {
                    "workfileFrameStart": {
                        "value": 1001,
                        "type": "QSpinBox",
                        "label": "Workfiles Start Frame",
                        "target": "tag",
                        "toolTip": "Set workfile starting frame number",  # noqa
                        "order": 0
                    },
                    "handleStart": {
                        "value": 0,
                        "type": "QSpinBox",
                        "label": "Handle Start",
                        "target": "tag",
                        "toolTip": "Handle at start of clip",  # noqa
                        "order": 1
                    },
                    "handleEnd": {
                        "value": 0,
                        "type": "QSpinBox",
                        "label": "Handle End",
                        "target": "tag",
                        "toolTip": "Handle at end of clip",  # noqa
                        "order": 2
                    }
                }
            }
        })

    def _get_video_track_names(self, sequence):
        track_names = []
        for ver in sequence.versions:
            for track in ver.tracks:
                track_names.append(track.name.get_value())

        return track_names
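The preset-matching loop in `process` above can be sketched on plain dicts; `apply_presets` is a hypothetical stand-in that mirrors the one-nesting-level override logic:

```python
# Values from `presets` override defaults in the (possibly nested)
# gui_inputs structure; only one level of nesting is handled, matching
# the "dict"/"section" branch above.
def apply_presets(gui_inputs, presets):
    for key, item in gui_inputs.items():
        if item["type"] in ("dict", "section"):
            for sub_key, sub_item in item["value"].items():
                if presets.get(sub_key):
                    sub_item["value"] = presets[sub_key]
        if presets.get(key):
            gui_inputs[key]["value"] = presets[key]
    return gui_inputs


inputs = {
    "renameHierarchy": {
        "type": "section",
        "value": {"countFrom": {"type": "QSpinBox", "value": 10}},
    },
}
apply_presets(inputs, {"countFrom": 20})
print(inputs["renameHierarchy"]["value"]["countFrom"]["value"])  # 20
```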
@@ -0,0 +1,63 @@
import os
import pyblish.api
import tempfile
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export as otio_export
import opentimelineio as otio
from pprint import pformat
reload(otio_export)  # noqa


@pyblish.api.log
class CollectTestSelection(pyblish.api.ContextPlugin):
    """testing selection sharing
    """

    order = pyblish.api.CollectorOrder
    label = "test selection"
    hosts = ["flame"]

    def process(self, context):
        self.log.info(
            "Active Selection: {}".format(opfapi.CTX.selection))

        sequence = opfapi.get_current_sequence(opfapi.CTX.selection)

        self.test_imprint_data(sequence)
        self.test_otio_export(sequence)

    def test_otio_export(self, sequence):
        test_dir = os.path.normpath(
            tempfile.mkdtemp(prefix="test_pyblish_tmp_")
        )
        export_path = os.path.normpath(
            os.path.join(
                test_dir, "otio_timeline_export.otio"
            )
        )
        otio_timeline = otio_export.create_otio_timeline(sequence)
        otio_export.write_to_file(
            otio_timeline, export_path
        )
        read_timeline_otio = otio.adapters.read_from_file(export_path)

        if otio_timeline != read_timeline_otio:
            raise Exception("Exported timeline is different from original")

        self.log.info(pformat(otio_timeline))
        self.log.info("Otio exported to: {}".format(export_path))

    def test_imprint_data(self, sequence):
        with opfapi.maintained_segment_selection(sequence) as sel_segments:
            for segment in sel_segments:
                if str(segment.name)[1:-1] == "":
                    continue

                self.log.debug("Segment with OpenPypeData: {}".format(
                    segment.name))

                opfapi.imprint(segment, {
                    'asset': segment.name.get_value(),
                    'family': 'render',
                    'subset': 'subsetMain'
                })
@@ -45,19 +45,14 @@ class CreateHDA(plugin.Creator):
         if (self.options or {}).get("useSelection") and self.nodes:
             # if we have `use selection` enabled and we have some
             # selected nodes ...
-            to_hda = self.nodes[0]
-            if len(self.nodes) > 1:
-                # if there is more then one node, create subnet first
-                subnet = out.createNode(
-                    "subnet", node_name="{}_subnet".format(self.name))
-                to_hda = subnet
-        else:
-            # in case of no selection, just create subnet node
-            subnet = out.createNode(
-                "subnet", node_name="{}_subnet".format(self.name))
-            to_hda = subnet
+            subnet = out.collapseIntoSubnet(
+                self.nodes,
+                subnet_name="{}_subnet".format(self.name))
+            subnet.moveToGoodPosition()
+            to_hda = subnet
+        else:
+            to_hda = out.createNode(
+                "subnet", node_name="{}_subnet".format(self.name))
         if not to_hda.type().definition():
             # if node type has not its definition, it is not user
             # created hda. We test if hda can be created from the node.
@@ -69,13 +64,12 @@ class CreateHDA(plugin.Creator):
                 name=subset_name,
                 hda_file_name="$HIP/{}.hda".format(subset_name)
             )
             hou.moveNodesTo(self.nodes, hda_node)
             hda_node.layoutChildren()
-        elif self._check_existing(subset_name):
-            raise plugin.OpenPypeCreatorError(
-                ("subset {} is already published with different HDA"
-                 "definition.").format(subset_name))
         else:
+            if self._check_existing(subset_name):
+                raise plugin.OpenPypeCreatorError(
+                    ("subset {} is already published with different HDA"
+                     "definition.").format(subset_name))
             hda_node = to_hda
 
         hda_node.setName(subset_name)
@@ -1,6 +1,6 @@
 from avalon import api
 
-from avalon.houdini import pipeline, lib
+from avalon.houdini import pipeline
 
 
 class AbcLoader(api.Loader):
@@ -25,16 +25,9 @@ class AbcLoader(api.Loader):
         # Get the root node
         obj = hou.node("/obj")
 
-        # Create a unique name
-        counter = 1
-        namespace = namespace if namespace else context["asset"]["name"]
-        formatted = "{}_{}".format(namespace, name) if namespace else name
-        node_name = "{0}_{1:03d}".format(formatted, counter)
-
-        children = lib.children_as_string(hou.node("/obj"))
-        while node_name in children:
-            counter += 1
-            node_name = "{0}_{1:03d}".format(formatted, counter)
+        # Define node name
+        namespace = namespace if namespace else context["asset"]["name"]
+        node_name = "{}_{}".format(namespace, name) if namespace else name
 
         # Create a new geo node
         container = obj.createNode("geo", node_name=node_name)
@@ -1,5 +1,5 @@
 from avalon import api
-from avalon.houdini import pipeline, lib
+from avalon.houdini import pipeline
 
 
 ARCHIVE_EXPRESSION = ('__import__("_alembic_hom_extensions")'
@@ -97,18 +97,9 @@ class CameraLoader(api.Loader):
         # Get the root node
         obj = hou.node("/obj")
 
-        # Create a unique name
-        counter = 1
-        asset_name = context["asset"]["name"]
-
-        namespace = namespace or asset_name
-        formatted = "{}_{}".format(namespace, name) if namespace else name
-        node_name = "{0}_{1:03d}".format(formatted, counter)
-
-        children = lib.children_as_string(hou.node("/obj"))
-        while node_name in children:
-            counter += 1
-            node_name = "{0}_{1:03d}".format(formatted, counter)
+        # Define node name
+        namespace = namespace if namespace else context["asset"]["name"]
+        node_name = "{}_{}".format(namespace, name) if namespace else name
 
         # Create a archive node
         container = self.create_and_connect(obj, "alembicarchive", node_name)
@@ -37,7 +37,7 @@ class CollectFrames(pyblish.api.InstancePlugin):
 
         # Check if frames are bigger than 1 (file collection)
         # override the result
-        if end_frame - start_frame > 1:
+        if end_frame - start_frame > 0:
             result = self.create_file_list(
                 match, int(start_frame), int(end_frame)
             )
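With the condition relaxed from `> 1` to `> 0`, a span of even two frames now counts as a file collection. A hedged sketch of what a `create_file_list`-style expansion does (the real method's signature may differ; names here are illustrative):

```python
# Expand a frame range into explicit per-frame file names, the way a
# collected sequence is materialized from start/end frame numbers.
def create_file_list(basename, padding, start, end):
    """Return file names for every frame in the inclusive range."""
    return [
        "{}.{:0{}d}.exr".format(basename, frame, padding)
        for frame in range(start, end + 1)
    ]


print(create_file_list("plate", 4, 1001, 1003))
# ['plate.1001.exr', 'plate.1002.exr', 'plate.1003.exr']
```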
@@ -313,13 +313,7 @@ def attribute_values(attr_values):
 
     """
-    # NOTE(antirotor): this didn't work for some reason for Yeti attributes
-    # original = [(attr, cmds.getAttr(attr)) for attr in attr_values]
-    original = []
-    for attr in attr_values:
-        type = cmds.getAttr(attr, type=True)
-        value = cmds.getAttr(attr)
-        original.append((attr, str(value) if type == "string" else value))
+    original = [(attr, cmds.getAttr(attr)) for attr in attr_values]
     try:
         for attr, value in attr_values.items():
             if isinstance(value, string_types):
@@ -331,6 +325,12 @@ def attribute_values(attr_values):
         for attr, value in original:
             if isinstance(value, string_types):
                 cmds.setAttr(attr, value, type="string")
+            elif value is None and cmds.getAttr(attr, type=True) == "string":
+                # In some cases the maya.cmds.getAttr command returns None
+                # for string attributes but this value cannot be assigned.
+                # Note: After setting it once to "" it will then return ""
+                # instead of None. So this would only happen once.
+                cmds.setAttr(attr, "", type="string")
             else:
                 cmds.setAttr(attr, value)
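The restore logic above (including the None-for-string fallback) follows a generic set-and-restore pattern. Here it is on a plain dict instead of Maya attributes, so it can run anywhere; the helper name is hypothetical:

```python
import contextlib


# Capture originals, apply temporary values, restore in `finally`.
@contextlib.contextmanager
def temporary_values(store, updates):
    original = [(key, store.get(key)) for key in updates]
    try:
        store.update(updates)
        yield store
    finally:
        for key, value in original:
            if value is None:
                # mirror the string-attribute fallback above:
                # restore an empty value instead of None
                store[key] = ""
            else:
                store[key] = value


settings = {"mode": "fast"}
with temporary_values(settings, {"mode": "safe", "debug": True}):
    print(settings["mode"])  # safe
print(settings["mode"])  # fast
```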
@@ -745,6 +745,33 @@ def namespaced(namespace, new=True):
         cmds.namespace(set=original)
 
 
+@contextlib.contextmanager
+def maintained_selection_api():
+    """Maintain selection using the Maya Python API.
+
+    Warning: This is *not* added to the undo stack.
+
+    """
+    original = om.MGlobal.getActiveSelectionList()
+    try:
+        yield
+    finally:
+        om.MGlobal.setActiveSelectionList(original)
+
+
+@contextlib.contextmanager
+def tool(context):
+    """Set a tool context during the context manager.
+
+    """
+    original = cmds.currentCtx()
+    try:
+        cmds.setToolTo(context)
+        yield
+    finally:
+        cmds.setToolTo(original)
+
+
 def polyConstraint(components, *args, **kwargs):
     """Return the list of *components* with the constraints applied.
 
@@ -763,17 +790,25 @@ def polyConstraint(components, *args, **kwargs):
     kwargs.pop('mode', None)
 
     with no_undo(flush=False):
-        with maya.maintained_selection():
-            # Apply constraint using mode=2 (current and next) so
-            # it applies to the selection made before it; because just
-            # a `maya.cmds.select()` call will not trigger the constraint.
-            with reset_polySelectConstraint():
-                cmds.select(components, r=1, noExpand=True)
-                cmds.polySelectConstraint(*args, mode=2, **kwargs)
-                result = cmds.ls(selection=True)
-                cmds.select(clear=True)
-
-                return result
+        # Reverting selection to the original selection using
+        # `maya.cmds.select` can be slow in rare cases where previously
+        # `maya.cmds.polySelectConstraint` had set constrain to "All and Next"
+        # and the "Random" setting was activated. To work around this we
+        # revert to the original selection using the Maya API. This is safe
+        # since we're not generating any undo change anyway.
+        with tool("selectSuperContext"):
+            # Selection can be very slow when in a manipulator mode.
+            # So we force the selection context which is fast.
+            with maintained_selection_api():
+                # Apply constraint using mode=2 (current and next) so
+                # it applies to the selection made before it; because just
+                # a `maya.cmds.select()` call will not trigger the constraint.
+                with reset_polySelectConstraint():
+                    cmds.select(components, r=1, noExpand=True)
+                    cmds.polySelectConstraint(*args, mode=2, **kwargs)
+                    result = cmds.ls(selection=True)
+                    cmds.select(clear=True)
+                    return result
 
 
 @contextlib.contextmanager
@@ -100,6 +100,13 @@ class ReferenceLoader(api.Loader):
             "offset",
             label="Position Offset",
             help="Offset loaded models for easier selection."
         ),
+        qargparse.Boolean(
+            "attach_to_root",
+            label="Group imported asset",
+            default=True,
+            help="Should a group be created to encapsulate"
+                 " imported representation ?"
+        )
     ]
 
@@ -8,6 +8,8 @@ from collections import defaultdict
 from openpype.widgets.message_window import ScrollMessageBox
 from Qt import QtWidgets
 
+from openpype.hosts.maya.api.plugin import get_reference_node
+
 
 class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
     """Specific loader for lookdev"""
@@ -70,7 +72,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
 
         # Get reference node from container members
         members = cmds.sets(node, query=True, nodesOnly=True)
-        reference_node = self._get_reference_node(members)
+        reference_node = get_reference_node(members, log=self.log)
 
         shader_nodes = cmds.ls(members, type='shadingEngine')
         orig_nodes = set(self._get_nodes_with_shader(shader_nodes))
@@ -40,85 +40,88 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         except ValueError:
             family = "model"
 
+        group_name = "{}:_GRP".format(namespace)
+        # True by default to keep legacy behaviours
+        attach_to_root = options.get("attach_to_root", True)
+
         with maya.maintained_selection():
-            groupName = "{}:_GRP".format(namespace)
             cmds.loadPlugin("AbcImport.mll", quiet=True)
             nodes = cmds.file(self.fname,
                               namespace=namespace,
                               sharedReferenceFile=False,
-                              groupReference=True,
-                              groupName=groupName,
                               reference=True,
-                              returnNewNodes=True)
-
-            # namespace = cmds.referenceQuery(nodes[0], namespace=True)
+                              returnNewNodes=True,
+                              groupReference=attach_to_root,
+                              groupName=group_name)
 
             shapes = cmds.ls(nodes, shapes=True, long=True)
 
-            newNodes = (list(set(nodes) - set(shapes)))
+            new_nodes = (list(set(nodes) - set(shapes)))
 
             current_namespace = pm.namespaceInfo(currentNamespace=True)
 
             if current_namespace != ":":
-                groupName = current_namespace + ":" + groupName
+                group_name = current_namespace + ":" + group_name
 
-            groupNode = pm.PyNode(groupName)
-            roots = set()
+            self[:] = new_nodes
 
-            for node in newNodes:
-                try:
-                    roots.add(pm.PyNode(node).getAllParents()[-2])
-                except:  # noqa: E722
-                    pass
+            if attach_to_root:
+                group_node = pm.PyNode(group_name)
+                roots = set()
 
-            if family not in ["layout", "setdress", "mayaAscii", "mayaScene"]:
-                for root in roots:
-                    root.setParent(world=True)
+                for node in new_nodes:
+                    try:
+                        roots.add(pm.PyNode(node).getAllParents()[-2])
+                    except:  # noqa: E722
+                        pass
 
-            groupNode.zeroTransformPivots()
-            for root in roots:
-                root.setParent(groupNode)
+                if family not in ["layout", "setdress",
+                                  "mayaAscii", "mayaScene"]:
+                    for root in roots:
+                        root.setParent(world=True)
 
-            cmds.setAttr(groupName + ".displayHandle", 1)
+                group_node.zeroTransformPivots()
+                for root in roots:
+                    root.setParent(group_node)
 
-            settings = get_project_settings(os.environ['AVALON_PROJECT'])
-            colors = settings['maya']['load']['colors']
-            c = colors.get(family)
-            if c is not None:
-                groupNode.useOutlinerColor.set(1)
-                groupNode.outlinerColor.set(
-                    (float(c[0])/255),
-                    (float(c[1])/255),
-                    (float(c[2])/255)
-                )
+                cmds.setAttr(group_name + ".displayHandle", 1)
 
-            self[:] = newNodes
+                settings = get_project_settings(os.environ['AVALON_PROJECT'])
+                colors = settings['maya']['load']['colors']
+                c = colors.get(family)
+                if c is not None:
+                    group_node.useOutlinerColor.set(1)
+                    group_node.outlinerColor.set(
+                        (float(c[0]) / 255),
+                        (float(c[1]) / 255),
+                        (float(c[2]) / 255))
 
-            cmds.setAttr(groupName + ".displayHandle", 1)
-            # get bounding box
-            bbox = cmds.exactWorldBoundingBox(groupName)
-            # get pivot position on world space
-            pivot = cmds.xform(groupName, q=True, sp=True, ws=True)
-            # center of bounding box
-            cx = (bbox[0] + bbox[3]) / 2
-            cy = (bbox[1] + bbox[4]) / 2
-            cz = (bbox[2] + bbox[5]) / 2
-            # add pivot position to calculate offset
-            cx = cx + pivot[0]
-            cy = cy + pivot[1]
-            cz = cz + pivot[2]
-            # set selection handle offset to center of bounding box
-            cmds.setAttr(groupName + ".selectHandleX", cx)
-            cmds.setAttr(groupName + ".selectHandleY", cy)
-            cmds.setAttr(groupName + ".selectHandleZ", cz)
+                cmds.setAttr(group_name + ".displayHandle", 1)
+                # get bounding box
+                bbox = cmds.exactWorldBoundingBox(group_name)
+                # get pivot position on world space
+                pivot = cmds.xform(group_name, q=True, sp=True, ws=True)
+                # center of bounding box
+                cx = (bbox[0] + bbox[3]) / 2
+                cy = (bbox[1] + bbox[4]) / 2
+                cz = (bbox[2] + bbox[5]) / 2
+                # add pivot position to calculate offset
+                cx = cx + pivot[0]
+                cy = cy + pivot[1]
+                cz = cz + pivot[2]
+                # set selection handle offset to center of bounding box
+                cmds.setAttr(group_name + ".selectHandleX", cx)
+                cmds.setAttr(group_name + ".selectHandleY", cy)
+                cmds.setAttr(group_name + ".selectHandleZ", cz)
 
             if family == "rig":
                 self._post_process_rig(name, namespace, context, options)
             else:
                 if "translate" in options:
-                    cmds.setAttr(groupName + ".t", *options["translate"])
+                    cmds.setAttr(group_name + ".t", *options["translate"])
 
-            return newNodes
+        return new_nodes
 
     def switch(self, container, representation):
         self.update(container, representation)
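The selection-handle placement at the end of the hunk is plain bounding-box arithmetic. Isolated as a hypothetical helper (Maya's `exactWorldBoundingBox` returns `(xmin, ymin, zmin, xmax, ymax, zmax)`):

```python
# Center of an axis-aligned bounding box, shifted by the pivot offset,
# as used to place the group's selection handle.
def handle_offset(bbox, pivot):
    """bbox is (xmin, ymin, zmin, xmax, ymax, zmax); pivot is (x, y, z)."""
    cx = (bbox[0] + bbox[3]) / 2 + pivot[0]
    cy = (bbox[1] + bbox[4]) / 2 + pivot[1]
    cz = (bbox[2] + bbox[5]) / 2 + pivot[2]
    return cx, cy, cz


print(handle_offset((0, 0, 0, 2, 4, 6), (1, 1, 1)))  # (2.0, 3.0, 4.0)
```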
@@ -22,15 +22,22 @@ class CollectMayaHistory(pyblish.api.InstancePlugin):
 
     def process(self, instance):
 
-        # Collect the history with long names
-        history = cmds.listHistory(instance, leaf=False) or []
-        history = cmds.ls(history, long=True)
+        kwargs = {}
+        if int(cmds.about(version=True)) >= 2020:
+            # New flag since Maya 2020 which makes cmds.listHistory faster
+            kwargs = {"fastIteration": True}
+        else:
+            self.log.debug("Ignoring `fastIteration` flag before Maya 2020..")
 
-        # Remove invalid node types (like renderlayers)
-        invalid = cmds.ls(history, type="renderLayer", long=True)
-        if invalid:
-            invalid = set(invalid)  # optimize lookup
-            history = [x for x in history if x not in invalid]
+        # Collect the history with long names
+        history = set(cmds.listHistory(instance, leaf=False, **kwargs) or [])
+        history = cmds.ls(list(history), long=True)
+
+        # Exclude invalid nodes (like renderlayers)
+        exclude = cmds.ls(type="renderLayer", long=True)
+        if exclude:
+            exclude = set(exclude)  # optimize lookup
+            history = [x for x in history if x not in exclude]
 
         # Combine members with history
         members = instance[:] + history
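The optimization in that hunk boils down to deduplicating via a set and filtering with O(1) set membership instead of list scans. In plain Python, with made-up node names:

```python
# History may contain repeats; a set removes them cheaply (order is not
# preserved, which is fine here since cmds.ls re-normalizes the list).
history = ["|grp|mesh1", "|renderLayer1", "|grp|mesh1", "|grp|mesh2"]
history = list(set(history))

# Membership tests against a set are O(1), so the comprehension stays
# linear even for large histories.
exclude = set(["|renderLayer1"])
history = sorted(x for x in history if x not in exclude)
print(history)  # ['|grp|mesh1', '|grp|mesh2']
```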
@@ -55,8 +55,16 @@ def maketx(source, destination, *args):
         str: Output of `maketx` command.
 
     """
+    from openpype.lib import get_oiio_tools_path
+
+    maketx_path = get_oiio_tools_path("maketx")
+    if not os.path.exists(maketx_path):
+        print(
+            "OIIO tool not found in {}".format(maketx_path))
+        raise AssertionError("OIIO tool not found")
+
     cmd = [
-        "maketx",
+        maketx_path,
         "-v",  # verbose
         "-u",  # update mode
         # unpremultiply before conversion (recommended when alpha present)
@@ -7,21 +7,6 @@ from avalon import maya
 from openpype.hosts.maya.api import lib
 
 
-def polyConstraint(objects, *args, **kwargs):
-    kwargs.pop('mode', None)
-
-    with lib.no_undo(flush=False):
-        with maya.maintained_selection():
-            with lib.reset_polySelectConstraint():
-                cmds.select(objects, r=1, noExpand=True)
-                # Acting as 'polyCleanupArgList' for n-sided polygon selection
-                cmds.polySelectConstraint(*args, mode=3, **kwargs)
-                result = cmds.ls(selection=True)
-                cmds.select(clear=True)
-
-                return result
-
-
 class ValidateMeshNgons(pyblish.api.Validator):
     """Ensure that meshes don't have ngons
 
@@ -41,8 +26,17 @@ class ValidateMeshNgons(pyblish.api.Validator):
     @staticmethod
     def get_invalid(instance):
 
-        meshes = cmds.ls(instance, type='mesh')
-        return polyConstraint(meshes, type=8, size=3)
+        meshes = cmds.ls(instance, type='mesh', long=True)
+
+        # Get all faces
+        faces = ['{0}.f[*]'.format(node) for node in meshes]
+
+        # Filter to n-sided polygon faces (ngons)
+        invalid = lib.polyConstraint(faces,
+                                     t=0x0008,  # type=face
+                                     size=3)  # size=nsided
+
+        return invalid
 
     def process(self, instance):
         """Process all the nodes in the instance "objectSet"""
@@ -1,4 +1,5 @@
 from maya import cmds
+import maya.api.OpenMaya as om2
 
 import pyblish.api
 import openpype.api
@@ -25,10 +26,16 @@ class ValidateMeshNormalsUnlocked(pyblish.api.Validator):
 
     @staticmethod
     def has_locked_normals(mesh):
-        """Return whether a mesh node has locked normals"""
-        return any(cmds.polyNormalPerVertex("{}.vtxFace[*][*]".format(mesh),
-                                            query=True,
-                                            freezeNormal=True))
+        """Return whether mesh has at least one locked normal"""
+
+        sel = om2.MGlobal.getSelectionListByName(mesh)
+        node = sel.getDependNode(0)
+        fn_mesh = om2.MFnMesh(node)
+        _, normal_ids = fn_mesh.getNormalIds()
+        for normal_id in normal_ids:
+            if fn_mesh.isNormalLocked(normal_id):
+                return True
+        return False
 
     @classmethod
     def get_invalid(cls, instance):
@@ -164,7 +164,8 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
                 continue
 
             # Ignore proxy connections.
-            if cmds.addAttr(plug, query=True, usedAsProxy=True):
+            if (cmds.addAttr(plug, query=True, exists=True) and
+                    cmds.addAttr(plug, query=True, usedAsProxy=True)):
                 continue
 
             # Check for incoming connections
@@ -3,14 +3,15 @@ from maya import cmds
 import pyblish.api
 import openpype.api
 import openpype.hosts.maya.api.action
+from openpype.hosts.maya.api import lib
 
+from avalon.maya import maintained_selection
+
 
 class ValidateShapeZero(pyblish.api.Validator):
-    """shape can't have any values
+    """Shape components may not have any "tweak" values
 
-    To solve this issue, try freezing the shapes. So long
-    as the translation, rotation and scaling values are zero,
-    you're all good.
+    To solve this issue, try freezing the shapes.
 
     """
@@ -47,13 +48,22 @@ class ValidateShapeZero(pyblish.api.Validator):
     @classmethod
     def repair(cls, instance):
         invalid_shapes = cls.get_invalid(instance)
-        for shape in invalid_shapes:
-            cmds.polyCollapseTweaks(shape)
+        if not invalid_shapes:
+            return
+
+        with maintained_selection():
+            with lib.tool("selectSuperContext"):
+                for shape in invalid_shapes:
+                    cmds.polyCollapseTweaks(shape)
+                    # cmds.polyCollapseTweaks keeps selecting the geometry
+                    # after each command. When running on many meshes
+                    # after one another this tends to get really heavy
+                    cmds.select(clear=True)
 
     def process(self, instance):
         """Process all the nodes in the instance "objectSet"""
 
         invalid = self.get_invalid(instance)
         if invalid:
-            raise ValueError("Nodes found with shape or vertices not freezed"
-                             "values: {0}".format(invalid))
+            raise ValueError("Shapes found with non-zero component tweaks: "
+                             "{0}".format(invalid))
@ -42,6 +42,7 @@ class ExtractReviewDataMov(openpype.api.Extractor):
|
|||
|
||||
# generate data
|
||||
with anlib.maintained_selection():
|
||||
generated_repres = []
|
||||
for o_name, o_data in self.outputs.items():
|
||||
f_families = o_data["filter"]["families"]
|
||||
f_task_types = o_data["filter"]["task_types"]
|
||||
|
|
@ -112,11 +113,13 @@ class ExtractReviewDataMov(openpype.api.Extractor):
|
|||
})
|
||||
else:
|
||||
data = exporter.generate_mov(**o_data)
|
||||
generated_repres.extend(data["representations"])
|
||||
|
||||
self.log.info(data["representations"])
|
||||
self.log.info(generated_repres)
|
||||
|
||||
# assign to representations
|
||||
instance.data["representations"] += data["representations"]
|
||||
if generated_repres:
|
||||
# assign to representations
|
||||
instance.data["representations"] += generated_repres
|
||||
|
||||
self.log.debug(
|
||||
"_ representations: {}".format(
|
||||
|
|
|
|||
255
openpype/hosts/photoshop/api/README.md
Normal file
|
|
@ -0,0 +1,255 @@
|
|||
# Photoshop Integration
|
||||
|
||||
## Setup
|
||||
|
||||
The Photoshop integration requires two components to work: the `extension` and the `server`.
|
||||
|
||||
### Extension
|
||||
|
||||
To install the extension download [Extension Manager Command Line tool (ExManCmd)](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#option-2---exmancmd).
|
||||
|
||||
```
|
||||
ExManCmd /install {path to avalon-core}\avalon\photoshop\extension.zxp
|
||||
```
|
||||
|
||||
### Server
|
||||
|
||||
The easiest way to launch both the server and Photoshop is with:
|
||||
|
||||
```
|
||||
python -c ^"import avalon.photoshop;avalon.photoshop.launch(""C:\Program Files\Adobe\Adobe Photoshop 2020\Photoshop.exe"")^"
|
||||
```
|
||||
|
||||
`avalon.photoshop.launch` launches the application and the server, and also closes the server when Photoshop exits.
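The same one-liner can be assembled from Python; a minimal sketch that builds the escaped `cmd.exe` command shown above (the Photoshop path is an assumption for your installation):

```python
# Hypothetical install location; adjust for your machine.
photoshop_exe = r"C:\Program Files\Adobe\Adobe Photoshop 2020\Photoshop.exe"

# ^" and "" are cmd.exe escapes; the inner code is plain Python.
command = (
    '^"import avalon.photoshop;'
    'avalon.photoshop.launch(""%s"")^"' % photoshop_exe
)
print("python -c " + command)
```

Running the printed line in `cmd.exe` is equivalent to the snippet above; inside an interactive Python session you could call `avalon.photoshop.launch(photoshop_exe)` directly.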
|
||||
|
||||
## Usage
|
||||
|
||||
The Photoshop extension can be found under `Window > Extensions > Avalon`. Once launched you should be presented with a panel like this:
|
||||
|
||||

|
||||
|
||||
|
||||
## Developing
|
||||
|
||||
### Extension
|
||||
When developing the extension you can load it [unsigned](https://github.com/Adobe-CEP/CEP-Resources/blob/master/CEP_9.x/Documentation/CEP%209.0%20HTML%20Extension%20Cookbook.md#debugging-unsigned-extensions).
|
||||
|
||||
When signing the extension you can use this [guide](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#package-distribute-install-guide).
|
||||
|
||||
```
|
||||
ZXPSignCmd -selfSignedCert NA NA Avalon Avalon-Photoshop avalon extension.p12
|
||||
ZXPSignCmd -sign {path to avalon-core}\avalon\photoshop\extension {path to avalon-core}\avalon\photoshop\extension.zxp extension.p12 avalon
|
||||
```
|
||||
|
||||
### Plugin Examples
|
||||
|
||||
These plugins were made with the [polly config](https://github.com/mindbender-studio/config). To fully integrate and load, you will have to use this config and add `image` to the [integration plugin](https://github.com/mindbender-studio/config/blob/master/polly/plugins/publish/integrate_asset.py).
|
||||
|
||||
#### Creator Plugin
|
||||
```python
|
||||
from avalon import photoshop
|
||||
|
||||
|
||||
class CreateImage(photoshop.Creator):
|
||||
"""Image folder for publish."""
|
||||
|
||||
name = "imageDefault"
|
||||
label = "Image"
|
||||
family = "image"
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super(CreateImage, self).__init__(*args, **kwargs)
|
||||
```
|
||||
|
||||
#### Collector Plugin
|
||||
```python
|
||||
import pythoncom
|
||||
|
||||
import pyblish.api
|
||||
|
||||
|
||||
class CollectInstances(pyblish.api.ContextPlugin):
|
||||
"""Gather instances by LayerSet and file metadata
|
||||
|
||||
This collector takes into account assets that are associated with
|
||||
a LayerSet and marked with a unique identifier;
|
||||
|
||||
Identifier:
|
||||
id (str): "pyblish.avalon.instance"
|
||||
"""
|
||||
|
||||
label = "Instances"
|
||||
order = pyblish.api.CollectorOrder
|
||||
hosts = ["photoshop"]
|
||||
families_mapping = {
|
||||
"image": []
|
||||
}
|
||||
|
||||
def process(self, context):
|
||||
# Necessary call when running in a different thread which pyblish-qml
|
||||
# can be.
|
||||
pythoncom.CoInitialize()
|
||||
|
||||
photoshop_client = PhotoshopClientStub()
|
||||
layers = photoshop_client.get_layers()
|
||||
layers_meta = photoshop_client.get_layers_metadata()
|
||||
for layer in layers:
|
||||
layer_data = photoshop_client.read(layer, layers_meta)
|
||||
|
||||
# Skip layers without metadata.
|
||||
if layer_data is None:
|
||||
continue
|
||||
|
||||
# Skip containers.
|
||||
if "container" in layer_data["id"]:
|
||||
continue
|
||||
|
||||
# child_layers = [*layer.Layers]
|
||||
# self.log.debug("child_layers {}".format(child_layers))
|
||||
# if not child_layers:
|
||||
# self.log.info("%s skipped, it was empty." % layer.Name)
|
||||
# continue
|
||||
|
||||
instance = context.create_instance(layer.name)
|
||||
instance.append(layer)
|
||||
instance.data.update(layer_data)
|
||||
instance.data["families"] = self.families_mapping[
|
||||
layer_data["family"]
|
||||
]
|
||||
instance.data["publish"] = layer.visible
|
||||
|
||||
# Produce diagnostic message for any graphical
|
||||
# user interface interested in visualising it.
|
||||
self.log.info("Found: \"%s\" " % instance.data["name"])
|
||||
```
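The collection loop above reduces to a simple filter over layer metadata: layers without metadata and container layers are skipped, everything else becomes an instance. A stand-alone sketch of that filtering step (the metadata values are illustrative, not taken from a real scene):

```python
# Illustrative per-layer metadata as the collector would read it.
layers_meta = [
    {"id": "pyblish.avalon.instance", "family": "image", "subset": "imageMain"},
    {"id": "pyblish.avalon.container", "family": "image", "subset": "imageBG"},
    None,  # layer without metadata
]

# Keep only publishable instances: metadata present and not a container.
instances = [
    data for data in layers_meta
    if data is not None and "container" not in data["id"]
]
print(len(instances))  # → 1
```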
|
||||
|
||||
#### Extractor Plugin
|
||||
```python
|
||||
import os
|
||||
|
||||
import openpype.api
|
||||
from avalon import photoshop
|
||||
|
||||
|
||||
class ExtractImage(openpype.api.Extractor):
|
||||
"""Produce a flattened image file from instance
|
||||
|
||||
This plug-in takes into account only the layers in the group.
|
||||
"""
|
||||
|
||||
label = "Extract Image"
|
||||
hosts = ["photoshop"]
|
||||
families = ["image"]
|
||||
formats = ["png", "jpg"]
|
||||
|
||||
def process(self, instance):
|
||||
|
||||
staging_dir = self.staging_dir(instance)
|
||||
self.log.info("Outputting image to {}".format(staging_dir))
|
||||
|
||||
# Perform extraction
|
||||
stub = photoshop.stub()
|
||||
files = {}
|
||||
with photoshop.maintained_selection():
|
||||
self.log.info("Extracting %s" % str(list(instance)))
|
||||
with photoshop.maintained_visibility():
|
||||
# Hide all other layers.
|
||||
extract_ids = set([ll.id for ll in stub.
|
||||
get_layers_in_layers([instance[0]])])
|
||||
|
||||
for layer in stub.get_layers():
|
||||
# limit unnecessary calls to client
|
||||
if layer.visible and layer.id not in extract_ids:
|
||||
stub.set_visible(layer.id, False)
|
||||
|
||||
save_options = []
|
||||
if "png" in self.formats:
|
||||
save_options.append('png')
|
||||
if "jpg" in self.formats:
|
||||
save_options.append('jpg')
|
||||
|
||||
file_basename = os.path.splitext(
|
||||
stub.get_active_document_name()
|
||||
)[0]
|
||||
for extension in save_options:
|
||||
_filename = "{}.{}".format(file_basename, extension)
|
||||
files[extension] = _filename
|
||||
|
||||
full_filename = os.path.join(staging_dir, _filename)
|
||||
stub.saveAs(full_filename, extension, True)
|
||||
|
||||
representations = []
|
||||
for extension, filename in files.items():
|
||||
representations.append({
|
||||
"name": extension,
|
||||
"ext": extension,
|
||||
"files": filename,
|
||||
"stagingDir": staging_dir
|
||||
})
|
||||
instance.data["representations"] = representations
|
||||
instance.data["stagingDir"] = staging_dir
|
||||
|
||||
self.log.info(f"Extracted {instance} to {staging_dir}")
|
||||
```
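Each entry appended to `instance.data["representations"]` above is a plain mapping. A minimal sketch of what the extractor produces for a document named `imageMain` with both formats enabled (names and paths are illustrative):

```python
staging_dir = "C:/temp/staging"  # hypothetical staging directory
files = {"png": "imageMain.png", "jpg": "imageMain.jpg"}

# One representation dict per extracted format, as in the plugin above.
representations = [
    {"name": ext, "ext": ext, "files": filename, "stagingDir": staging_dir}
    for ext, filename in files.items()
]
print([r["files"] for r in representations])
```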
|
||||
|
||||
#### Loader Plugin
|
||||
```python
|
||||
from avalon import api, photoshop
|
||||
|
||||
stub = photoshop.stub()
|
||||
|
||||
|
||||
class ImageLoader(api.Loader):
|
||||
"""Load images
|
||||
|
||||
Stores the imported asset in a container named after the asset.
|
||||
"""
|
||||
|
||||
families = ["image"]
|
||||
representations = ["*"]
|
||||
|
||||
def load(self, context, name=None, namespace=None, data=None):
|
||||
with photoshop.maintained_selection():
|
||||
layer = stub.import_smart_object(self.fname)
|
||||
|
||||
self[:] = [layer]
|
||||
|
||||
return photoshop.containerise(
|
||||
name,
|
||||
namespace,
|
||||
layer,
|
||||
context,
|
||||
self.__class__.__name__
|
||||
)
|
||||
|
||||
def update(self, container, representation):
|
||||
layer = container.pop("layer")
|
||||
|
||||
with photoshop.maintained_selection():
|
||||
stub.replace_smart_object(
|
||||
layer, api.get_representation_path(representation)
|
||||
)
|
||||
|
||||
stub.imprint(
|
||||
layer, {"representation": str(representation["_id"])}
|
||||
)
|
||||
|
||||
def remove(self, container):
|
||||
container["layer"].Delete()
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
```
|
||||
For easier debugging of JavaScript:
|
||||
https://community.adobe.com/t5/download-install/adobe-extension-debuger-problem/td-p/10911704?page=1
|
||||
Add `--enable-blink-features=ShadowDOMV0,CustomElementsV0` when starting Chrome
|
||||
then localhost:8078 (port set in `photoshop\extension\.debug`)
|
||||
|
||||
Or use Visual Studio Code https://medium.com/adobetech/extendscript-debugger-for-visual-studio-code-public-release-a2ff6161fa01
|
||||
|
||||
Or install CEF client from https://github.com/Adobe-CEP/CEP-Resources/tree/master/CEP_9.x
|
||||
## Resources
|
||||
- https://github.com/lohriialo/photoshop-scripting-python
|
||||
- https://www.adobe.com/devnet/photoshop/scripting.html
|
||||
- https://github.com/Adobe-CEP/Getting-Started-guides
|
||||
- https://github.com/Adobe-CEP/CEP-Resources
|
||||
|
|
@ -1,79 +1,63 @@
|
|||
import os
|
||||
import sys
|
||||
import logging
|
||||
"""Public API
|
||||
|
||||
from Qt import QtWidgets
|
||||
Anything that isn't defined here is INTERNAL and unreliable for external use.
|
||||
|
||||
from avalon import io
|
||||
from avalon import api as avalon
|
||||
from openpype import lib
|
||||
from pyblish import api as pyblish
|
||||
import openpype.hosts.photoshop
|
||||
"""
|
||||
|
||||
log = logging.getLogger("openpype.hosts.photoshop")
|
||||
from .launch_logic import stub
|
||||
|
||||
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.photoshop.__file__))
|
||||
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
|
||||
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
|
||||
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
|
||||
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
|
||||
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
|
||||
from .pipeline import (
|
||||
ls,
|
||||
list_instances,
|
||||
remove_instance,
|
||||
install,
|
||||
uninstall,
|
||||
containerise
|
||||
)
|
||||
from .plugin import (
|
||||
PhotoshopLoader,
|
||||
Creator,
|
||||
get_unique_layer_name
|
||||
)
|
||||
from .workio import (
|
||||
file_extensions,
|
||||
has_unsaved_changes,
|
||||
save_file,
|
||||
open_file,
|
||||
current_file,
|
||||
work_root,
|
||||
)
|
||||
|
||||
def check_inventory():
|
||||
if not lib.any_outdated():
|
||||
return
|
||||
from .lib import (
|
||||
maintained_selection,
|
||||
maintained_visibility
|
||||
)
|
||||
|
||||
host = avalon.registered_host()
|
||||
outdated_containers = []
|
||||
for container in host.ls():
|
||||
representation = container['representation']
|
||||
representation_doc = io.find_one(
|
||||
{
|
||||
"_id": io.ObjectId(representation),
|
||||
"type": "representation"
|
||||
},
|
||||
projection={"parent": True}
|
||||
)
|
||||
if representation_doc and not lib.is_latest(representation_doc):
|
||||
outdated_containers.append(container)
|
||||
__all__ = [
|
||||
# launch_logic
|
||||
"stub",
|
||||
|
||||
# Warn about outdated containers.
|
||||
print("Starting new QApplication..")
|
||||
app = QtWidgets.QApplication(sys.argv)
|
||||
# pipeline
|
||||
"ls",
|
||||
"list_instances",
|
||||
"remove_instance",
|
||||
"install",
|
||||
"containerise",
|
||||
|
||||
message_box = QtWidgets.QMessageBox()
|
||||
message_box.setIcon(QtWidgets.QMessageBox.Warning)
|
||||
msg = "There are outdated containers in the scene."
|
||||
message_box.setText(msg)
|
||||
message_box.exec_()
|
||||
# Plugin
|
||||
"PhotoshopLoader",
|
||||
"Creator",
|
||||
"get_unique_layer_name",
|
||||
|
||||
# Garbage collect QApplication.
|
||||
del app
|
||||
# workfiles
|
||||
"file_extensions",
|
||||
"has_unsaved_changes",
|
||||
"save_file",
|
||||
"open_file",
|
||||
"current_file",
|
||||
"work_root",
|
||||
|
||||
|
||||
def application_launch():
|
||||
check_inventory()
|
||||
|
||||
|
||||
def install():
|
||||
print("Installing Pype config...")
|
||||
|
||||
pyblish.register_plugin_path(PUBLISH_PATH)
|
||||
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
|
||||
avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
|
||||
log.info(PUBLISH_PATH)
|
||||
|
||||
pyblish.register_callback(
|
||||
"instanceToggled", on_pyblish_instance_toggled
|
||||
)
|
||||
|
||||
avalon.on("application.launched", application_launch)
|
||||
|
||||
def uninstall():
|
||||
pyblish.deregister_plugin_path(PUBLISH_PATH)
|
||||
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
|
||||
avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
|
||||
|
||||
def on_pyblish_instance_toggled(instance, old_value, new_value):
|
||||
"""Toggle layer visibility on instance toggles."""
|
||||
instance[0].Visible = new_value
|
||||
# lib
|
||||
"maintained_selection",
|
||||
"maintained_visibility",
|
||||
]
|
||||
|
|
|
|||
BIN
openpype/hosts/photoshop/api/extension.zxp
Normal file
Binary file not shown.
9
openpype/hosts/photoshop/api/extension/.debug
Normal file
|
|
@ -0,0 +1,9 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<ExtensionList>
|
||||
<Extension Id="com.openpype.PS.panel">
|
||||
<HostList>
|
||||
<Host Name="PHXS" Port="8078"/>
|
||||
<Host Name="FLPR" Port="8078"/>
|
||||
</HostList>
|
||||
</Extension>
|
||||
</ExtensionList>
|
||||
53
openpype/hosts/photoshop/api/extension/CSXS/manifest.xml
Normal file
|
|
@ -0,0 +1,53 @@
|
|||
<?xml version='1.0' encoding='UTF-8'?>
|
||||
<ExtensionManifest ExtensionBundleId="com.openpype.PS.panel" ExtensionBundleVersion="1.0.11" Version="7.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
|
||||
<ExtensionList>
|
||||
<Extension Id="com.openpype.PS.panel" Version="1.0.1" />
|
||||
</ExtensionList>
|
||||
<ExecutionEnvironment>
|
||||
<HostList>
|
||||
<Host Name="PHSP" Version="19" />
|
||||
<Host Name="PHXS" Version="19" />
|
||||
</HostList>
|
||||
<LocaleList>
|
||||
<Locale Code="All" />
|
||||
</LocaleList>
|
||||
<RequiredRuntimeList>
|
||||
<RequiredRuntime Name="CSXS" Version="7.0" />
|
||||
</RequiredRuntimeList>
|
||||
</ExecutionEnvironment>
|
||||
<DispatchInfoList>
|
||||
<Extension Id="com.openpype.PS.panel">
|
||||
<DispatchInfo>
|
||||
<Resources>
|
||||
<MainPath>./index.html</MainPath>
|
||||
<CEFCommandLine />
|
||||
</Resources>
|
||||
<Lifecycle>
|
||||
<AutoVisible>true</AutoVisible>
|
||||
<StartOn>
|
||||
<!-- Photoshop dispatches this event on startup -->
|
||||
<Event>applicationActivate</Event>
|
||||
<Event>com.adobe.csxs.events.ApplicationInitialized</Event>
|
||||
</StartOn>
|
||||
</Lifecycle>
|
||||
<UI>
|
||||
<Type>Panel</Type>
|
||||
<Menu>OpenPype</Menu>
|
||||
<Geometry>
|
||||
<Size>
|
||||
<Width>300</Width>
|
||||
<Height>140</Height>
|
||||
</Size>
|
||||
<MaxSize>
|
||||
<Width>400</Width>
|
||||
<Height>200</Height>
|
||||
</MaxSize>
|
||||
</Geometry>
|
||||
<Icons>
|
||||
<Icon Type="Normal">./icons/avalon-logo-48.png</Icon>
|
||||
</Icons>
|
||||
</UI>
|
||||
</DispatchInfo>
|
||||
</Extension>
|
||||
</DispatchInfoList>
|
||||
</ExtensionManifest>
|
||||
1193
openpype/hosts/photoshop/api/extension/client/CSInterface.js
Normal file
File diff suppressed because it is too large
Load diff
300
openpype/hosts/photoshop/api/extension/client/client.js
Normal file
|
|
@ -0,0 +1,300 @@
|
|||
// client facing part of extension, creates WSRPC client (jsx cannot
|
||||
// do that)
|
||||
// consumes RPC calls from server (OpenPype) calls ./host/index.jsx and
|
||||
// returns values back (in json format)
|
||||
|
||||
var logReturn = function(result){ log.warn('Result: ' + result);};
|
||||
|
||||
var csInterface = new CSInterface();
|
||||
|
||||
log.warn("script start");
|
||||
|
||||
WSRPC.DEBUG = false;
|
||||
WSRPC.TRACE = false;
|
||||
|
||||
function myCallBack(){
|
||||
log.warn("Triggered index.jsx");
|
||||
}
|
||||
// importing through manifest.xml isn't working because of relative paths
|
||||
// possibly TODO
|
||||
jsx.evalFile('./host/index.jsx', myCallBack);
|
||||
|
||||
function runEvalScript(script) {
|
||||
// because of asynchronous nature of functions in jsx
|
||||
// this waits for response
|
||||
return new Promise(function(resolve, reject){
|
||||
csInterface.evalScript(script, resolve);
|
||||
});
|
||||
}
|
||||
|
||||
/** main entry point **/
|
||||
startUp("WEBSOCKET_URL");
|
||||
|
||||
// get websocket server url from environment value
|
||||
async function startUp(url){
|
||||
log.warn("url", url);
|
||||
promis = runEvalScript("getEnv('" + url + "')");
|
||||
|
||||
var res = await promis;
|
||||
// run rest only after resolved promise
|
||||
main(res);
|
||||
}
|
||||
|
||||
function get_extension_version(){
|
||||
/** Returns version number from extension manifest.xml **/
|
||||
log.debug("get_extension_version")
|
||||
var path = csInterface.getSystemPath(SystemPath.EXTENSION);
|
||||
log.debug("extension path " + path);
|
||||
|
||||
var result = window.cep.fs.readFile(path + "/CSXS/manifest.xml");
|
||||
var version = undefined;
|
||||
if(result.err === 0){
|
||||
if (window.DOMParser) {
|
||||
const parser = new DOMParser();
|
||||
const xmlDoc = parser.parseFromString(result.data.toString(), 'text/xml');
|
||||
const children = xmlDoc.children;
|
||||
|
||||
for (let i = 0; i <= children.length; i++) {
|
||||
if (children[i] && children[i].getAttribute('ExtensionBundleVersion')) {
|
||||
version = children[i].getAttribute('ExtensionBundleVersion');
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return version
|
||||
}
|
||||
|
||||
function main(websocket_url){
|
||||
// creates connection to 'websocket_url', registers routes
|
||||
log.warn("websocket_url", websocket_url);
|
||||
var default_url = 'ws://localhost:8099/ws/';
|
||||
|
||||
if (websocket_url == ''){
|
||||
websocket_url = default_url;
|
||||
}
|
||||
log.warn("connecting to:", websocket_url);
|
||||
RPC = new WSRPC(websocket_url, 5000); // spin connection
|
||||
|
||||
RPC.connect();
|
||||
|
||||
log.warn("connected");
|
||||
|
||||
function EscapeStringForJSX(str){
|
||||
// Replaces:
|
||||
// \ with \\
|
||||
// ' with \'
|
||||
// " with \"
|
||||
// See: https://stackoverflow.com/a/3967927/5285364
|
||||
return str.replace(/\\/g, '\\\\').replace(/'/g, "\\'").replace(/"/g, '\\"');
|
||||
}
|
||||
|
||||
RPC.addRoute('Photoshop.open', function (data) {
|
||||
log.warn('Server called client route "open":', data);
|
||||
var escapedPath = EscapeStringForJSX(data.path);
|
||||
return runEvalScript("fileOpen('" + escapedPath +"')")
|
||||
.then(function(result){
|
||||
log.warn("open: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.read', function (data) {
|
||||
log.warn('Server called client route "read":', data);
|
||||
return runEvalScript("getHeadline()")
|
||||
.then(function(result){
|
||||
log.warn("getHeadline: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.get_layers', function (data) {
|
||||
log.warn('Server called client route "get_layers":', data);
|
||||
return runEvalScript("getLayers()")
|
||||
.then(function(result){
|
||||
log.warn("getLayers: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.set_visible', function (data) {
|
||||
log.warn('Server called client route "set_visible":', data);
|
||||
return runEvalScript("setVisible(" + data.layer_id + ", " +
|
||||
data.visibility + ")")
|
||||
.then(function(result){
|
||||
log.warn("setVisible: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.get_active_document_name', function (data) {
|
||||
log.warn('Server called client route "get_active_document_name":',
|
||||
data);
|
||||
return runEvalScript("getActiveDocumentName()")
|
||||
.then(function(result){
|
||||
log.warn("save: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.get_active_document_full_name', function (data) {
|
||||
log.warn('Server called client route ' +
|
||||
'"get_active_document_full_name":', data);
|
||||
return runEvalScript("getActiveDocumentFullName()")
|
||||
.then(function(result){
|
||||
log.warn("save: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.save', function (data) {
|
||||
log.warn('Server called client route "save":', data);
|
||||
|
||||
return runEvalScript("save()")
|
||||
.then(function(result){
|
||||
log.warn("save: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.get_selected_layers', function (data) {
|
||||
log.warn('Server called client route "get_selected_layers":', data);
|
||||
|
||||
return runEvalScript("getSelectedLayers()")
|
||||
.then(function(result){
|
||||
log.warn("get_selected_layers: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.create_group', function (data) {
|
||||
log.warn('Server called client route "create_group":', data);
|
||||
|
||||
return runEvalScript("createGroup('" + data.name + "')")
|
||||
.then(function(result){
|
||||
log.warn("createGroup: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.group_selected_layers', function (data) {
|
||||
log.warn('Server called client route "group_selected_layers":',
|
||||
data);
|
||||
|
||||
return runEvalScript("groupSelectedLayers(null, "+
|
||||
"'" + data.name +"')")
|
||||
.then(function(result){
|
||||
log.warn("group_selected_layers: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.import_smart_object', function (data) {
|
||||
log.warn('Server called client "import_smart_object":', data);
|
||||
var escapedPath = EscapeStringForJSX(data.path);
|
||||
return runEvalScript("importSmartObject('" + escapedPath +"', " +
|
||||
"'"+ data.name +"',"+
|
||||
+ data.as_reference +")")
|
||||
.then(function(result){
|
||||
log.warn("import_smart_object: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.replace_smart_object', function (data) {
|
||||
log.warn('Server called route "replace_smart_object":', data);
|
||||
var escapedPath = EscapeStringForJSX(data.path);
|
||||
return runEvalScript("replaceSmartObjects("+data.layer_id+"," +
|
||||
"'" + escapedPath +"',"+
|
||||
"'"+ data.name +"')")
|
||||
.then(function(result){
|
||||
log.warn("replaceSmartObjects: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.delete_layer', function (data) {
|
||||
log.warn('Server called route "delete_layer":', data);
|
||||
return runEvalScript("deleteLayer("+data.layer_id+")")
|
||||
.then(function(result){
|
||||
log.warn("delete_layer: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.rename_layer', function (data) {
|
||||
log.warn('Server called route "rename_layer":', data);
|
||||
return runEvalScript("renameLayer("+data.layer_id+", " +
|
||||
"'"+ data.name +"')")
|
||||
.then(function(result){
|
||||
log.warn("rename_layer: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.select_layers', function (data) {
|
||||
log.warn('Server called client route "select_layers":', data);
|
||||
|
||||
return runEvalScript("selectLayers('" + data.layers +"')")
|
||||
.then(function(result){
|
||||
log.warn("select_layers: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.is_saved', function (data) {
|
||||
log.warn('Server called client route "is_saved":', data);
|
||||
|
||||
return runEvalScript("isSaved()")
|
||||
.then(function(result){
|
||||
log.warn("is_saved: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.saveAs', function (data) {
|
||||
log.warn('Server called client route "saveAs":', data);
|
||||
var escapedPath = EscapeStringForJSX(data.image_path);
|
||||
return runEvalScript("saveAs('" + escapedPath + "', " +
|
||||
"'" + data.ext + "', " +
|
||||
data.as_copy + ")")
|
||||
.then(function(result){
|
||||
log.warn("save: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.imprint', function (data) {
|
||||
log.warn('Server called client route "imprint":', data);
|
||||
var escaped = data.payload.replace(/\n/g, "\\n");
|
||||
return runEvalScript("imprint('" + escaped + "')")
|
||||
.then(function(result){
|
||||
log.warn("imprint: " + result);
|
||||
return result;
|
||||
});
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.get_extension_version', function (data) {
|
||||
log.warn('Server called client route "get_extension_version":', data);
|
||||
return get_extension_version();
|
||||
});
|
||||
|
||||
RPC.addRoute('Photoshop.close', function (data) {
|
||||
log.warn('Server called client route "close":', data);
|
||||
return runEvalScript("close()");
|
||||
});
|
||||
|
||||
RPC.call('Photoshop.ping').then(function (data) {
|
||||
log.warn('Result for calling server route "ping": ', data);
|
||||
return runEvalScript("ping()")
|
||||
.then(function(result){
|
||||
log.warn("ping: " + result);
|
||||
return result;
|
||||
});
|
||||
|
||||
}, function (error) {
|
||||
log.warn(error);
|
||||
});
|
||||
|
||||
}
|
||||
|
||||
log.warn("end script");
|
||||
2
openpype/hosts/photoshop/api/extension/client/loglevel.min.js
vendored
Normal file
|
|
@ -0,0 +1,2 @@
|
|||
/*! loglevel - v1.6.8 - https://github.com/pimterry/loglevel - (c) 2020 Tim Perry - licensed MIT */
|
||||
!function(a,b){"use strict";"function"==typeof define&&define.amd?define(b):"object"==typeof module&&module.exports?module.exports=b():a.log=b()}(this,function(){"use strict";function a(a,b){var c=a[b];if("function"==typeof c.bind)return c.bind(a);try{return Function.prototype.bind.call(c,a)}catch(b){return function(){return Function.prototype.apply.apply(c,[a,arguments])}}}function b(){console.log&&(console.log.apply?console.log.apply(console,arguments):Function.prototype.apply.apply(console.log,[console,arguments])),console.trace&&console.trace()}function c(c){return"debug"===c&&(c="log"),typeof console!==i&&("trace"===c&&j?b:void 0!==console[c]?a(console,c):void 0!==console.log?a(console,"log"):h)}function d(a,b){for(var c=0;c<k.length;c++){var d=k[c];this[d]=c<a?h:this.methodFactory(d,a,b)}this.log=this.debug}function e(a,b,c){return function(){typeof console!==i&&(d.call(this,b,c),this[a].apply(this,arguments))}}function f(a,b,d){return c(a)||e.apply(this,arguments)}function g(a,b,c){function e(a){var b=(k[a]||"silent").toUpperCase();if(typeof window!==i){try{return void(window.localStorage[l]=b)}catch(a){}try{window.document.cookie=encodeURIComponent(l)+"="+b+";"}catch(a){}}}function g(){var a;if(typeof window!==i){try{a=window.localStorage[l]}catch(a){}if(typeof a===i)try{var b=window.document.cookie,c=b.indexOf(encodeURIComponent(l)+"=");-1!==c&&(a=/^([^;]+)/.exec(b.slice(c))[1])}catch(a){}return void 0===j.levels[a]&&(a=void 0),a}}var h,j=this,l="loglevel";a&&(l+=":"+a),j.name=a,j.levels={TRACE:0,DEBUG:1,INFO:2,WARN:3,ERROR:4,SILENT:5},j.methodFactory=c||f,j.getLevel=function(){return h},j.setLevel=function(b,c){if("string"==typeof b&&void 0!==j.levels[b.toUpperCase()]&&(b=j.levels[b.toUpperCase()]),!("number"==typeof b&&b>=0&&b<=j.levels.SILENT))throw"log.setLevel() called with invalid level: "+b;if(h=b,!1!==c&&e(b),d.call(j,b,a),typeof console===i&&b<j.levels.SILENT)return"No console available for logging"},j.setDefaultLevel=function(a){g()||j.setLevel(a,!1)},j.enableAll=function(a){j.setLevel(j.levels.TRACE,a)},j.disableAll=function(a){j.setLevel(j.levels.SILENT,a)};var m=g();null==m&&(m=null==b?"WARN":b),j.setLevel(m,!1)}var h=function(){},i="undefined",j=typeof window!==i&&typeof window.navigator!==i&&/Trident\/|MSIE /.test(window.navigator.userAgent),k=["trace","debug","info","warn","error"],l=new g,m={};l.getLogger=function(a){if("string"!=typeof a||""===a)throw new TypeError("You must supply a name when creating a logger.");var b=m[a];return b||(b=m[a]=new g(a,l.getLevel(),l.methodFactory)),b};var n=typeof window!==i?window.log:void 0;return l.noConflict=function(){return typeof window!==i&&window.log===l&&(window.log=n),l},l});
|
||||
393
openpype/hosts/photoshop/api/extension/client/wsrpc.js
Normal file
|
|
@ -0,0 +1,393 @@
|
|||
(function (global, factory) {
|
||||
typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() :
|
||||
typeof define === 'function' && define.amd ? define(factory) :
|
||||
(global = global || self, global.WSRPC = factory());
|
||||
}(this, function () { 'use strict';
|
||||
|
||||
function _classCallCheck(instance, Constructor) {
|
||||
if (!(instance instanceof Constructor)) {
|
||||
throw new TypeError("Cannot call a class as a function");
|
||||
}
|
||||
}
|
||||
|
||||
var Deferred = function Deferred() {
|
||||
_classCallCheck(this, Deferred);
|
||||
|
||||
var self = this;
|
||||
self.resolve = null;
|
||||
self.reject = null;
|
||||
self.done = false;
|
||||
|
||||
function wrapper(func) {
|
||||
return function () {
|
||||
if (self.done) throw new Error('Promise already done');
|
||||
self.done = true;
|
||||
return func.apply(this, arguments);
|
||||
};
|
||||
}
|
||||
|
||||
self.promise = new Promise(function (resolve, reject) {
|
||||
self.resolve = wrapper(resolve);
|
||||
self.reject = wrapper(reject);
|
||||
});
|
||||
|
||||
self.promise.isPending = function () {
|
||||
return !self.done;
|
||||
};
|
||||
|
||||
return self;
|
||||
};
|
||||
|
||||
function logGroup(group, level, args) {
|
||||
console.group(group);
|
||||
console[level].apply(this, args);
|
||||
console.groupEnd();
|
||||
}
|
||||
|
||||
function log() {
|
||||
if (!WSRPC.DEBUG) return;
|
||||
logGroup('WSRPC.DEBUG', 'trace', arguments);
|
||||
}
|
||||
|
||||
function trace(msg) {
|
||||
if (!WSRPC.TRACE) return;
|
||||
var payload = msg;
|
||||
if ('data' in msg) payload = JSON.parse(msg.data);
|
||||
logGroup("WSRPC.TRACE", 'trace', [payload]);
|
||||
}
|
||||
|
||||
function getAbsoluteWsUrl(url) {
|
||||
if (/^\w+:\/\//.test(url)) return url;
|
||||
if (typeof window == 'undefined' && window.location.host.length < 1) throw new Error("Can not construct absolute URL from ".concat(window.location));
|
||||
var scheme = window.location.protocol === "https:" ? "wss:" : "ws:";
|
||||
var port = window.location.port === '' ? ":".concat(window.location.port) : '';
|
||||
var host = window.location.host;
|
||||
var path = url.replace(/^\/+/gm, '');
|
||||
return "".concat(scheme, "//").concat(host).concat(port, "/").concat(path);
|
||||
}

var readyState = Object.freeze({
  0: 'CONNECTING',
  1: 'OPEN',
  2: 'CLOSING',
  3: 'CLOSED'
});

var WSRPC = function WSRPC(URL) {
  var reconnectTimeout = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : 1000;

  _classCallCheck(this, WSRPC);

  var self = this;
  URL = getAbsoluteWsUrl(URL);
  self.id = 1;
  self.eventId = 0;
  self.socketStarted = false;
  self.eventStore = {
    onconnect: {},
    onerror: {},
    onclose: {},
    onchange: {}
  };
  self.connectionNumber = 0;
  self.oneTimeEventStore = {
    onconnect: [],
    onerror: [],
    onclose: [],
    onchange: []
  };
  self.callQueue = [];

  function createSocket() {
    var ws = new WebSocket(URL);

    var rejectQueue = function rejectQueue() {
      self.connectionNumber++; // rejects incoming calls

      var deferred;

      // reject all pending calls
      while (0 < self.callQueue.length) {
        var callObj = self.callQueue.shift();
        deferred = self.store[callObj.id];
        delete self.store[callObj.id];

        if (deferred && deferred.promise.isPending()) {
          deferred.reject('WebSocket error occurred');
        }
      }

      // reject all from the store
      for (var key in self.store) {
        if (!self.store.hasOwnProperty(key)) continue;
        deferred = self.store[key];

        if (deferred && deferred.promise.isPending()) {
          deferred.reject('WebSocket error occurred');
        }
      }
    };

    function reconnect(callEvents) {
      setTimeout(function () {
        try {
          self.socket = createSocket();
          self.id = 1;
        } catch (exc) {
          callEvents('onerror', exc);
          delete self.socket;
          console.error(exc);
        }
      }, reconnectTimeout);
    }

    ws.onclose = function (err) {
      log('ONCLOSE CALLED', 'STATE', self.public.state());
      trace(err);

      for (var serial in self.store) {
        if (!self.store.hasOwnProperty(serial)) continue;

        if (self.store[serial].hasOwnProperty('reject')) {
          self.store[serial].reject('Connection closed');
        }
      }

      rejectQueue();
      callEvents('onclose', err);
      callEvents('onchange', err);
      reconnect(callEvents);
    };

    ws.onerror = function (err) {
      log('ONERROR CALLED', 'STATE', self.public.state());
      trace(err);
      rejectQueue();
      callEvents('onerror', err);
      callEvents('onchange', err);
      log('WebSocket has been closed by error: ', err);
    };

    function tryCallEvent(func, event) {
      try {
        return func(event);
      } catch (e) {
        if (e.hasOwnProperty('stack')) {
          log(e.stack);
        } else {
          log('Event function', func, 'raised unknown error:', e);
        }

        console.error(e);
      }
    }

    function callEvents(evName, event) {
      while (0 < self.oneTimeEventStore[evName].length) {
        var deferred = self.oneTimeEventStore[evName].shift();
        if (deferred.hasOwnProperty('resolve') && deferred.promise.isPending()) deferred.resolve();
      }

      for (var i in self.eventStore[evName]) {
        if (!self.eventStore[evName].hasOwnProperty(i)) continue;
        var cur = self.eventStore[evName][i];
        tryCallEvent(cur, event);
      }
    }

    ws.onopen = function (ev) {
      log('ONOPEN CALLED', 'STATE', self.public.state());
      trace(ev);

      while (0 < self.callQueue.length) {
        // noinspection JSUnresolvedFunction
        self.socket.send(JSON.stringify(self.callQueue.shift(), 0, 1));
      }

      callEvents('onconnect', ev);
      callEvents('onchange', ev);
    };

    function handleCall(self, data) {
      if (!self.routes.hasOwnProperty(data.method)) throw new Error('Route not found');
      var connectionNumber = self.connectionNumber;
      var deferred = new Deferred();
      deferred.promise.then(function (result) {
        if (connectionNumber !== self.connectionNumber) return;
        self.socket.send(JSON.stringify({
          id: data.id,
          result: result
        }));
      }, function (error) {
        if (connectionNumber !== self.connectionNumber) return;
        self.socket.send(JSON.stringify({
          id: data.id,
          error: error
        }));
      });
      var func = self.routes[data.method];
      if (self.asyncRoutes[data.method]) return func.apply(deferred, [data.params]);

      function badPromise() {
        throw new Error("You should register route with async flag.");
      }

      var promiseMock = {
        resolve: badPromise,
        reject: badPromise
      };

      try {
        deferred.resolve(func.apply(promiseMock, [data.params]));
      } catch (e) {
        deferred.reject(e);
        console.error(e);
      }
    }

    function handleError(self, data) {
      if (!self.store.hasOwnProperty(data.id)) return log('Unknown callback');
      var deferred = self.store[data.id];
      if (typeof deferred === 'undefined') return log('Confirmation without handler');
      delete self.store[data.id];
      log('REJECTING', data.error);
      deferred.reject(data.error);
    }

    function handleResult(self, data) {
      var deferred = self.store[data.id];
      if (typeof deferred === 'undefined') return log('Confirmation without handler');
      delete self.store[data.id];

      if (data.hasOwnProperty('result')) {
        return deferred.resolve(data.result);
      }

      return deferred.reject(data.error);
    }

    ws.onmessage = function (message) {
      log('ONMESSAGE CALLED', 'STATE', self.public.state());
      trace(message);
      if (message.type !== 'message') return;
      var data;

      try {
        data = JSON.parse(message.data);
        log(data);

        if (data.hasOwnProperty('method')) {
          return handleCall(self, data);
        } else if (data.hasOwnProperty('error') && data.error === null) {
          return handleError(self, data);
        } else {
          return handleResult(self, data);
        }
      } catch (exception) {
        var err = {
          error: exception.message,
          result: null,
          id: data ? data.id : null
        };
        self.socket.send(JSON.stringify(err));
        console.error(exception);
      }
    };

    return ws;
  }

  function makeCall(func, args, params) {
    self.id += 2;
    var deferred = new Deferred();
    var callObj = Object.freeze({
      id: self.id,
      method: func,
      params: args
    });
    var state = self.public.state();

    if (state === 'OPEN') {
      self.store[self.id] = deferred;
      self.socket.send(JSON.stringify(callObj));
    } else if (state === 'CONNECTING') {
      log('SOCKET IS', state);
      self.store[self.id] = deferred;
      self.callQueue.push(callObj);
    } else {
      log('SOCKET IS', state);

      if (params && params['noWait']) {
        deferred.reject("Socket is: ".concat(state));
      } else {
        self.store[self.id] = deferred;
        self.callQueue.push(callObj);
      }
    }

    return deferred.promise;
  }

  self.asyncRoutes = {};
  self.routes = {};
  self.store = {};
  self.public = Object.freeze({
    call: function call(func, args, params) {
      return makeCall(func, args, params);
    },
    addRoute: function addRoute(route, callback, isAsync) {
      self.asyncRoutes[route] = isAsync || false;
      self.routes[route] = callback;
    },
    deleteRoute: function deleteRoute(route) {
      delete self.asyncRoutes[route];
      return delete self.routes[route];
    },
    addEventListener: function addEventListener(event, func) {
      var eventId = self.eventId++;
      self.eventStore[event][eventId] = func;
      return eventId;
    },
    removeEventListener: function removeEventListener(event, index) {
      if (self.eventStore[event].hasOwnProperty(index)) {
        delete self.eventStore[event][index];
        return true;
      } else {
        return false;
      }
    },
    onEvent: function onEvent(event) {
      var deferred = new Deferred();
      self.oneTimeEventStore[event].push(deferred);
      return deferred.promise;
    },
    destroy: function destroy() {
      return self.socket.close();
    },
    state: function state() {
      return readyState[this.stateCode()];
    },
    stateCode: function stateCode() {
      if (self.socketStarted && self.socket) return self.socket.readyState;
      return 3;
    },
    connect: function connect() {
      self.socketStarted = true;
      self.socket = createSocket();
    }
  });
  self.public.addRoute('log', function (argsObj) {
    //console.info("Websocket sent: ".concat(argsObj));
  });
  self.public.addRoute('ping', function (data) {
    return data;
  });
  return self.public;
};

WSRPC.DEBUG = false;
WSRPC.TRACE = false;

return WSRPC;

}));
//# sourceMappingURL=wsrpc.js.map
1 openpype/hosts/photoshop/api/extension/client/wsrpc.min.js vendored Normal file
File diff suppressed because one or more lines are too long

774 openpype/hosts/photoshop/api/extension/host/JSX.js Normal file

@@ -0,0 +1,774 @@
/*
 _ ______ __ _
| / ___\ \/ / (_)___
 _ | \___ \\ / | / __|
| |_| |___) / \ _ | \__ \
 \___/|____/_/\_(_)/ |___/
 |__/
 _ ____
 /\ /\___ _ __ ___(_) ___ _ __ |___ \
 \ \ / / _ \ '__/ __| |/ _ \| '_ \ __) |
 \ V / __/ | \__ \ | (_) | | | | / __/
 \_/ \___|_| |___/_|\___/|_| |_| |_____|
*/


//////////////////////////////////////////////////////////////////////////////////
// JSX.js © and written by Trevor https://creative-scripts.com/jsx-js           //
// If your turnover is less than $50,000,000 then you don't have to pay anything //
// License MIT, don't complain, don't sue NO MATTER WHAT                         //
// If your turnover is more than $50,000,000 then you DO have to pay             //
// Contact me https://creative-scripts.com/contact for pricing and licensing     //
// Don't remove these commented lines                                            //
// For simple and effective calling of jsx from the js engine                    //
// Version 2 last modified April 18 2018                                         //
//////////////////////////////////////////////////////////////////////////////////

///////////////////////////////////////////////////////////////////////////////////////////////////////////
// Change log:                                                                                           //
// JSX.js V2 is now independent of NodeJS and CSInterface.js                                             //
// forceEval is now by default true                                                                      //
// It wraps the scripts in a try catch and an eval providing useful error handling                       //
// One can set in the jsx engine $.includeStack = true to return the call stack in the event of an error //
///////////////////////////////////////////////////////////////////////////////////////////////////////////

///////////////////////////////////////////////////////////////////////////////////////////////////////////
// JSX.js for calling jsx code from the js engine                                                        //
// 2 methods included                                                                                    //
// 1) jsx.evalScript AKA jsx.eval                                                                        //
// 2) jsx.evalFile AKA jsx.file                                                                          //
// Special features                                                                                      //
// 1) Allows all changes in your jsx code to be reloaded into your extension at the click of a button    //
// 2) Can enable the $.fileName property to work and provides a $.__fileName() method as an alternative  //
// 3) Can force a callBack result from InDesign                                                          //
// 4) No more csInterface.evalScript('alert("hello "' + title + " " + name + '");')                      //
//    use jsx.evalScript('alert("hello __title__ __name__");', {title: title, name: name});              //
// 5) execute jsx files from your jsx folder like this jsx.evalFile('myFabJsxScript.jsx');               //
//    or from a relative path jsx.evalFile('../myFabScripts/myFabJsxScript.jsx');                        //
//    or from an absolute url jsx.evalFile('/Path/to/my/FabJsxScript.jsx'); (mac)                        //
//    or from an absolute url jsx.evalFile('C:Path/to/my/FabJsxScript.jsx'); (windows)                   //
// 6) Parameters can be entered in the form of a parameter list which can be in any order or as an object //
// 7) Not camelCase sensitive (very useful for the illiterate)                                           //
//    Dead easy to use BUT SPEND THE 3 TO 5 MINUTES IT SHOULD TAKE TO READ THE INSTRUCTIONS              //
///////////////////////////////////////////////////////////////////////////////////////////////////////////

/* jshint undef:true, unused:true, esversion:6 */

//////////////////////////////////////
// jsx is the interface for the API //
//////////////////////////////////////

var jsx;

// Wrap everything in an anonymous function to prevent leaks
(function() {
    /////////////////////////////////////////////////////////////////////
    // Substitute some CSInterface functions to avoid dependency on it //
    /////////////////////////////////////////////////////////////////////

    var __dirname = (function() {
        var path, isMac;
        path = decodeURI(window.__adobe_cep__.getSystemPath('extension'));
        isMac = navigator.platform[0] === 'M'; // [M]ac
        path = path.replace('file://' + (isMac ? '' : '/'), '');
        return path;
    })();

    var evalScript = function(script, callback) {
        callback = callback || function() {};
        window.__adobe_cep__.evalScript(script, callback);
    };


    ////////////////////////////////////////////
    // In place of using the node path module //
    ////////////////////////////////////////////

    // jshint undef: true, unused: true

    // A very minified version of the NodeJs Path module!!
    // For use outside of NodeJs
    // Majorly nicked by Trevor from Joyent
    var path = (function() {

        var isString = function(arg) {
            return typeof arg === 'string';
        };

        // var isObject = function(arg) {
        //     return typeof arg === 'object' && arg !== null;
        // };

        var basename = function(path) {
            if (!isString(path)) {
                throw new TypeError('Argument to path.basename must be a string');
            }
            var bits = path.split(/[\/\\]/g);
            return bits[bits.length - 1];
        };

        // jshint undef: true
        // Regex to split a windows path into three parts: [*, device, slash,
        // tail] windows-only
        var splitDeviceRe =
            /^([a-zA-Z]:|[\\\/]{2}[^\\\/]+[\\\/]+[^\\\/]+)?([\\\/])?([\s\S]*?)$/;

        // Regex to split the tail part of the above into [*, dir, basename, ext]
        // var splitTailRe =
        //     /^([\s\S]*?)((?:\.{1,2}|[^\\\/]+?|)(\.[^.\/\\]*|))(?:[\\\/]*)$/;

        var win32 = {};
        // Function to split a filename into [root, dir, basename, ext]
        // var win32SplitPath = function(filename) {
        //     // Separate device+slash from tail
        //     var result = splitDeviceRe.exec(filename),
        //         device = (result[1] || '') + (result[2] || ''),
        //         tail = result[3] || '';
        //     // Split the tail into dir, basename and extension
        //     var result2 = splitTailRe.exec(tail),
        //         dir = result2[1],
        //         basename = result2[2],
        //         ext = result2[3];
        //     return [device, dir, basename, ext];
        // };

        var win32StatPath = function(path) {
            var result = splitDeviceRe.exec(path),
                device = result[1] || '',
                isUnc = !!device && device[1] !== ':';
            return {
                device: device,
                isUnc: isUnc,
                isAbsolute: isUnc || !!result[2], // UNC paths are always absolute
                tail: result[3]
            };
        };
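To see what `win32StatPath` extracts, here is a self-contained run of the same regex and function against a drive path and a UNC path (both copied verbatim from the code above so the snippet executes standalone):

```javascript
// Copies of the splitDeviceRe regex and win32StatPath from above.
var splitDeviceRe =
    /^([a-zA-Z]:|[\\\/]{2}[^\\\/]+[\\\/]+[^\\\/]+)?([\\\/])?([\s\S]*?)$/;

function win32StatPath(path) {
    var result = splitDeviceRe.exec(path),
        device = result[1] || '',
        isUnc = !!device && device[1] !== ':';
    return {
        device: device,
        isUnc: isUnc,
        isAbsolute: isUnc || !!result[2], // UNC paths are always absolute
        tail: result[3]
    };
}

console.log(win32StatPath('C:\\foo\\bar'));
// → { device: 'C:', isUnc: false, isAbsolute: true, tail: 'foo\\bar' }
console.log(win32StatPath('\\\\server\\share\\x'));
// → { device: '\\\\server\\share', isUnc: true, isAbsolute: true, tail: 'x' }
```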

        var normalizeUNCRoot = function(device) {
            return '\\\\' + device.replace(/^[\\\/]+/, '').replace(/[\\\/]+/g, '\\');
        };

        var normalizeArray = function(parts, allowAboveRoot) {
            var res = [];
            for (var i = 0; i < parts.length; i++) {
                var p = parts[i];

                // ignore empty parts
                if (!p || p === '.')
                    continue;

                if (p === '..') {
                    if (res.length && res[res.length - 1] !== '..') {
                        res.pop();
                    } else if (allowAboveRoot) {
                        res.push('..');
                    }
                } else {
                    res.push(p);
                }
            }

            return res;
        };
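`normalizeArray` is the core of both normalizers: `.` segments vanish, and `..` pops the previous segment unless climbing above the root is allowed. A self-contained copy run on a couple of samples:

```javascript
// Copy of normalizeArray above: resolves '.' and '..' path segments.
function normalizeArray(parts, allowAboveRoot) {
    var res = [];
    for (var i = 0; i < parts.length; i++) {
        var p = parts[i];
        if (!p || p === '.') continue;        // drop empty and '.' parts
        if (p === '..') {
            if (res.length && res[res.length - 1] !== '..') {
                res.pop();                    // '..' cancels the previous part
            } else if (allowAboveRoot) {
                res.push('..');               // relative paths may climb upward
            }
        } else {
            res.push(p);
        }
    }
    return res;
}

console.log(normalizeArray(['a', '.', 'b', '..', 'c'], false).join('/')); // → a/c
console.log(normalizeArray(['..', 'a'], true).join('/'));                 // → ../a
```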

        win32.normalize = function(path) {
            var result = win32StatPath(path),
                device = result.device,
                isUnc = result.isUnc,
                isAbsolute = result.isAbsolute,
                tail = result.tail,
                trailingSlash = /[\\\/]$/.test(tail);

            // Normalize the tail path
            tail = normalizeArray(tail.split(/[\\\/]+/), !isAbsolute).join('\\');

            if (!tail && !isAbsolute) {
                tail = '.';
            }
            if (tail && trailingSlash) {
                tail += '\\';
            }

            // Convert slashes to backslashes when `device` points to an UNC root.
            // Also squash multiple slashes into a single one where appropriate.
            if (isUnc) {
                device = normalizeUNCRoot(device);
            }

            return device + (isAbsolute ? '\\' : '') + tail;
        };

        win32.join = function() {
            var paths = [];
            for (var i = 0; i < arguments.length; i++) {
                var arg = arguments[i];
                if (!isString(arg)) {
                    throw new TypeError('Arguments to path.join must be strings');
                }
                if (arg) {
                    paths.push(arg);
                }
            }

            var joined = paths.join('\\');

            // Make sure that the joined path doesn't start with two slashes, because
            // normalize() will mistake it for an UNC path then.
            //
            // This step is skipped when it is very clear that the user actually
            // intended to point at an UNC path. This is assumed when the first
            // non-empty string argument starts with exactly two slashes followed by
            // at least one more non-slash character.
            //
            // Note that for normalize() to treat a path as an UNC path it needs to
            // have at least 2 components, so we don't filter for that here.
            // This means that the user can use join to construct UNC paths from
            // a server name and a share name; for example:
            //   path.join('//server', 'share') -> '\\\\server\\share\')
            if (!/^[\\\/]{2}[^\\\/]/.test(paths[0])) {
                joined = joined.replace(/^[\\\/]{2,}/, '\\');
            }
            return win32.normalize(joined);
        };

        var posix = {};

        // posix version
        posix.join = function() {
            var path = '';
            for (var i = 0; i < arguments.length; i++) {
                var segment = arguments[i];
                if (!isString(segment)) {
                    throw new TypeError('Arguments to path.join must be strings');
                }
                if (segment) {
                    if (!path) {
                        path += segment;
                    } else {
                        path += '/' + segment;
                    }
                }
            }
            return posix.normalize(path);
        };

        // path.normalize(path)
        // posix version
        posix.normalize = function(path) {
            var isAbsolute = path.charAt(0) === '/',
                trailingSlash = path && path[path.length - 1] === '/';

            // Normalize the path
            path = normalizeArray(path.split('/'), !isAbsolute).join('/');

            if (!path && !isAbsolute) {
                path = '.';
            }
            if (path && trailingSlash) {
                path += '/';
            }

            return (isAbsolute ? '/' : '') + path;
        };

        win32.basename = posix.basename = basename;

        this.win32 = win32;
        this.posix = posix;
        return (navigator.platform[0] === 'M') ? posix : win32;
    })();

    ////////////////////////////////////////////////////////////////////////////////////////////////////////
    // This is the "main" function which is to be prototyped                                              //
    // It runs a small snippet in the jsx engine that                                                     //
    // 1) Assigns $.__dirname with the value of the extension's __dirname base path                       //
    // 2) Sets up a method $.__fileName() for retrieving from within the jsx script its $.fileName value  //
    //    more on that method later                                                                       //
    // At the end of the script the global declaration jsx = new Jsx(); has been made.                    //
    // If you like you can remove that and include in your relevant functions                             //
    // var jsx = new Jsx(); You would never call the Jsx function without the "new" declaration           //
    ////////////////////////////////////////////////////////////////////////////////////////////////////////
    var Jsx = function() {
        var jsxScript;
        // Setup jsx function to enable the jsx scripts to easily retrieve their file location
        jsxScript = [
            '$.level = 0;',
            'if(!$.__fileNames){',
            '    $.__fileNames = {};',
            '    $.__dirname = "__dirname__";'.replace('__dirname__', __dirname),
            '    $.__fileName = function(name){',
            '        name = name || $.fileName;',
            '        return ($.__fileNames && $.__fileNames[name]) || $.fileName;',
            '    };',
            '}'
        ].join('');
        evalScript(jsxScript);
        return this;
    };

    /**
     * [evalScript] For calling jsx scripts from the js engine
     *
     * The jsx.evalScript method is used for calling jsx scripts directly from the js engine.
     * Allows for easy replacement i.e. variable insertions and for forcing eval.
     * For convenience jsx.eval or jsx.script or jsx.evalscript can be used instead of calling jsx.evalScript
     *
     * @param {String} jsxScript
     *        The string that makes up the jsx script
     *        it can contain a simple template like syntax for replacements
     *        'alert("__foo__");'
     *        the __foo__ will be replaced as per the replacements parameter
     *
     * @param {Function} callback
     *        The callback function you want the jsx script to trigger on completion
     *        The result of the jsx script is passed as the argument to that function
     *        The function can exist in some other file.
     *        Note that InDesign does not automatically pass the callBack as a string.
     *        Either write your InDesign script in a way that it returns a string in the form of
     *        return 'this is my result surrounded by quotes'
     *        or use the force eval option
     *        [Optional DEFAULT no callBack]
     *
     * @param {Object} replacements
     *        The replacements to make on the jsx script
     *        given the following script (template)
     *        'alert("__message__: " + __val__);'
     *        and we want to change the script to
     *        'alert("I was born in the year: " + 1234);'
     *        we would pass the following object
     *        {"message": 'I was born in the year', "val": 1234}
     *        or if not using reserved words like do we can leave out the key quotes
     *        {message: 'I was born in the year', val: 1234}
     *        [Optional DEFAULT no replacements]
     *
     * @param {Boolean} forceEval
     *        If the script should be wrapped in an eval and try catch
     *        This will 1) provide useful error feedback if heaven forbid it is needed
     *                  2) The result will be a string which is required for callback results in InDesign
     *        [Optional DEFAULT true]
     *
     * Note 1) The order of the parameters is irrelevant
     * Note 2) One can pass the arguments as an object if desired
     *        jsx.evalScript(myCallBackFunction, 'alert("__myMessage__");', true);
     *        is the same as
     *        jsx.evalScript({
     *            script: 'alert("__myMessage__");',
     *            replacements: {myMessage: 'Hi there'},
     *            callBack: myCallBackFunction,
     *            eval: true
     *        });
     *        note that either lower or camelCase key names are valid
     *        i.e. both callback or callBack will work
     *
     * The following keys are the same jsx || script || jsxScript || jsxscript || file
     * The following keys are the same callBack || callback
     * The following keys are the same replacements || replace
     * The following keys are the same eval || forceEval || forceeval
     * The following keys are the same forceEvalScript || forceevalscript || evalScript || evalscript;
     *
     * @return {Boolean} if the jsxScript was executed or not
     */

    Jsx.prototype.evalScript = function() {
        var arg, i, key, replaceThis, withThis, args, callback, forceEval, replacements, jsxScript, isBin;

        //////////////////////////////////////////////////////////////////////////////////////
        // sort out which arguments map to jsxScript, callback, replacements and forceEval  //
        //////////////////////////////////////////////////////////////////////////////////////

        args = arguments;

        // Detect if the parameters were passed as an object and if so allow for various keys
        if (args.length === 1 && (arg = args[0]) instanceof Object) {
            jsxScript = arg.jsxScript || arg.jsx || arg.script || arg.file || arg.jsxscript;
            callback = arg.callBack || arg.callback;
            replacements = arg.replacements || arg.replace;
            forceEval = arg.eval || arg.forceEval || arg.forceeval;
        } else {
            for (i = 0; i < 4; i++) {
                arg = args[i];
                if (arg === undefined) {
                    continue;
                }
                if (arg.constructor === String) {
                    jsxScript = arg;
                    continue;
                }
                if (arg.constructor === Object) {
                    replacements = arg;
                    continue;
                }
                if (arg.constructor === Function) {
                    callback = arg;
                    continue;
                }
                if (arg === false) {
                    forceEval = false;
                }
            }
        }

        // If no script provided then not too much to do!
        if (!jsxScript) {
            return false;
        }

        // Have changed the forceEval default to be true as I prefer the error handling
        if (forceEval !== false) {
            forceEval = true;
        }

        //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // On Illustrator and other apps the result of the jsx script is automatically passed as a string                                        //
        // if you have a "script" containing the single number 1 and nothing else then the callBack will register as "1"                         //
        // On InDesign that same script will provide a blank callBack                                                                            //
        // Let's say we have a callBack function var callBack = function(result){alert(result);}                                                 //
        // On Ai you'll see the 1 in the alert                                                                                                   //
        // On ID you'll just see a blank alert                                                                                                   //
        // To see the 1 in the alert you need to convert the result to a string and then it will show                                            //
        // So if we rewrite our 1 byte script to '1' i.e. surround the 1 in quotes then the callback alert will show 1                           //
        // If the script is planned one can make sure that the results are always passed as a string (including errors)                          //
        // otherwise one can wrap the script in an eval and then have the result passed as a string                                              //
        // I have not gone through all the apps but can say                                                                                      //
        // for Ai you never need to set the forceEval to true                                                                                    //
        // for ID if you have not coded your script appropriately and you want to send a result to the callBack then set forceEval to true       //
        // I changed this so that even on Illustrator it applies the try catch; note the try catch will fail if $.level is set to 1              //
        //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

        if (forceEval) {

            isBin = (jsxScript.substring(0, 10) === '@JSXBIN@ES') ? '' : '\n';
            jsxScript = (
                // "\n''') + '';} catch(e){(function(e){var n, a=[]; for (n in e){a.push(n + ': ' + e[n])}; return a.join('\n')})(e)}");
                // "\n''') + '';} catch(e){e + (e.line ? ('\\nLine ' + (+e.line - 1)) : '')}");
                [
                    "$.level = 0;",
                    "try{eval('''" + isBin, // need to add an extra line otherwise #targetengine doesn't work ;-]
                    jsxScript.replace(/\\/g, '\\\\').replace(/'/g, "\\'").replace(/"/g, '\\"') + "\n''') + '';",
                    "} catch (e) {",
                    "    (function(e) {",
                    "        var line, sourceLine, name, description, ErrorMessage, fileName, start, end, bug;",
                    "        line = +e.line" + (isBin === '' ? ';' : ' - 1;'), // To take into account the extra line added
                    "        fileName = File(e.fileName).fsName;",
                    "        sourceLine = line && e.source.split(/[\\r\\n]/)[line];",
                    "        name = e.name;",
                    "        description = e.description;",
                    "        ErrorMessage = name + ' ' + e.number + ': ' + description;",
                    "        if (fileName.length && !(/[\\/\\\\]\\d+$/.test(fileName))) {",
                    "            ErrorMessage += '\\nFile: ' + fileName;",
                    "            line++;",
                    "        }",
                    "        if (line){",
                    "            ErrorMessage += '\\nLine: ' + line +",
                    "            '-> ' + ((sourceLine.length < 300) ? sourceLine : sourceLine.substring(0,300) + '...');",
                    "        }",
                    "        if (e.start) {ErrorMessage += '\\nBug: ' + e.source.substring(e.start - 1, e.end)}",
                    "        if ($.includeStack) {ErrorMessage += '\\nStack:' + $.stack;}",
                    "        return ErrorMessage;",
                    "    })(e);",
                    "}"
                ].join('')
            );

        }

        /////////////////////////////////////////////////////////////
        // deal with the replacements                              //
        // Note it's probably better to use ${template} `literals` //
        /////////////////////////////////////////////////////////////

        if (replacements) {
            for (key in replacements) {
                if (replacements.hasOwnProperty(key)) {
                    replaceThis = new RegExp('__' + key + '__', 'g');
                    withThis = replacements[key];
                    jsxScript = jsxScript.replace(replaceThis, withThis + '');
                }
            }
        }

        try {
            evalScript(jsxScript, callback);
            return true;
        } catch (err) {
            ////////////////////////////////////////////////
            // Do whatever error handling you want here ! //
            ////////////////////////////////////////////////
            var newErr;
            newErr = new Error(err);
            alert('Error Eek: ' + newErr.stack);
            return false;
        }

    };
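The `__key__` substitution that evalScript performs on its template can be exercised in isolation. This standalone helper mirrors the replacements loop above (the function name `applyReplacements` is made up for the example; the real logic lives inline in `Jsx.prototype.evalScript`):

```javascript
// Mirrors the replacements loop in evalScript: every __key__ token in the
// template is replaced globally with the stringified value.
function applyReplacements(jsxScript, replacements) {
    for (var key in replacements) {
        if (replacements.hasOwnProperty(key)) {
            var replaceThis = new RegExp('__' + key + '__', 'g');
            jsxScript = jsxScript.replace(replaceThis, replacements[key] + '');
        }
    }
    return jsxScript;
}

console.log(applyReplacements(
    'alert("__message__: " + __val__);',
    { message: 'I was born in the year', val: 1234 }
));
// → alert("I was born in the year: " + 1234);
```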
|
||||
|
||||
|
||||
/**
 * [evalFile] For calling jsx scripts from the js engine
 *
 * The jsx.evalFile method is used for executing saved jsx scripts
 * where the jsxScript parameter is a string of the jsx script's file location.
 * For convenience jsx.file or jsx.evalfile can be used instead of jsx.evalFile
 *
 * @param  {String} file
 *         The path to the jsx script
 *         If only the base name is provided then the path will be presumed to be
 *         the jsx folder located in the __dirname folder
 *         To execute files stored in the jsx folder located in the __dirname folder use
 *         jsx.evalFile('myFabJsxScript.jsx');
 *         To execute files stored in a folder myFabScripts located in the __dirname folder use
 *         jsx.evalFile('./myFabScripts/myFabJsxScript.jsx');
 *         To execute files stored in a folder myFabScripts located at an absolute path use
 *         jsx.evalFile('/Path/to/my/FabJsxScript.jsx'); (mac)
 *         or jsx.evalFile('C:/Path/to/my/FabJsxScript.jsx'); (windows)
 *
 * @param  {Function} callback
 *         The callback function you want the jsx script to trigger on completion
 *         The result of the jsx script is passed as the argument to that function
 *         The function can exist in some other file.
 *         Note that InDesign does not automatically pass the result to the callBack as a string.
 *         Either write your InDesign script in a way that it returns a string in the form of
 *         return 'this is my result surrounded by quotes'
 *         or use the force eval option
 *         [Optional DEFAULT no callBack]
 *
 * @param  {Object} replacements
 *         The replacements to make on the jsx script
 *         given the following script (template)
 *         'alert("__message__: " + __val__);'
 *         and we want to change the script to
 *         'alert("I was born in the year: " + 1234);'
 *         we would pass the following object
 *         {"message": 'I was born in the year', "val": 1234}
 *         or if not using reserved words like do we can leave out the key quotes
 *         {message: 'I was born in the year', val: 1234}
 *         By default when possible the forceEvalScript will be set to true
 *         The forceEvalScript option cannot be true when there are replacements
 *         To force the forceEvalScript to be false you can send a blank set of replacements
 *         jsx.evalFile('myFabScript.jsx', {}); Will NOT be executed using the $.evalScript method
 *         jsx.evalFile('myFabScript.jsx'); WILL be executed using the $.evalScript method
 *         see the forceEvalScript parameter for details on this
 *         [Optional DEFAULT no replacements]
 *
 * @param  {Boolean} forceEval
 *         If the script should be wrapped in an eval and try catch
 *         This will 1) provide useful error feedback if heaven forbid it is needed
 *         2) The result will be a string which is required for callback results in InDesign
 *         [Optional DEFAULT true]
 *
 * If no replacements are needed then the jsx script is executed by using the $.evalFile method
 * This exposes the true value of the $.fileName property
 * In such a case it's best to avoid using $.__fileName() with no base name as it won't work
 * BUT one can still use the $.__fileName('baseName') method which is more accurate than the standard $.fileName property
 * Let's say you have a drive called "Graphics" AND a root folder on your "main" drive called "Graphics"
 * You call a script jsx.evalFile('/Volumes/Graphics/myFabScript.jsx');
 * $.fileName will give you '/Graphics/myFabScript.jsx' which is wrong
 * $.__fileName('myFabScript.jsx') will give you '/Volumes/Graphics/myFabScript.jsx' which is correct
 * $.__fileName() will not give you a reliable result
 * Note that if you're calling multiple versions of myFabScript.jsx stored in multiple folders then you can get stuffed!
 * i.e. if the fileName is important to you then don't do that.
 * It also will force the result of the jsx file as a string which is particularly useful for InDesign callBacks
 *
 * Note 1) The order of the parameters is irrelevant
 * Note 2) One can pass the arguments as an object if desired
 *         jsx.evalScript(myCallBackFunction, 'alert("__myMessage__");', {myMessage: 'Hi there'}, true);
 *         is the same as
 *         jsx.evalScript({
 *             script: 'alert("__myMessage__");',
 *             replacements: {myMessage: 'Hi there'},
 *             callBack: myCallBackFunction,
 *             eval: true,
 *         });
 *         note that either lowercase or camelCase key names are valid
 *         i.e. both callback or callBack will work
 *
 *         The following keys are the same file || jsx || script || jsxScript || jsxscript
 *         The following keys are the same callBack || callback
 *         The following keys are the same replacements || replace
 *         The following keys are the same eval || forceEval || forceeval
 *
 * @return {Boolean} if the jsxScript was executed or not
 */

Jsx.prototype.evalFile = function() {
    var arg, args, callback, fileName, fileNameScript, forceEval, forceEvalScript,
        i, jsxFolder, jsxScript, newLine, replacements, success;

    success = true; // optimistic
    args = arguments;

    jsxFolder = path.join(__dirname, 'jsx');
    //////////////////////////////////////////////////////////////////////////////////////////////////////////
    // $.fileName does not return its correct path in the jsx engine for files called from the js engine     //
    // In Illustrator it returns an integer; in InDesign it returns an empty string                          //
    // This script injection allows for the script to know its path by calling                               //
    // $.__fileName();                                                                                       //
    // on Illustrator this works pretty well                                                                 //
    // on InDesign it's best to use it with a bit of care                                                    //
    // If a second script has been called then InDesign will "forget" the path to the first script           //
    // 2 work-arounds for this                                                                               //
    // 1) at the beginning of your script add var thePathToMeIs = $.fileName();                              //
    //    thePathToMeIs will not be forgotten after running the second script                                //
    // 2) $.__fileName('myBaseName.jsx');                                                                    //
    //    for example, you have a file with the following path                                               //
    //    /path/to/me.jsx                                                                                    //
    //    Call $.__fileName('me.jsx') and you will get /path/to/me.jsx even after executing a second script  //
    // Note: when the forceEvalScript option is used then you just use the regular $.fileName property       //
    //////////////////////////////////////////////////////////////////////////////////////////////////////////
    fileNameScript = [
        // The if statement should not normally be executed
        'if(!$.__fileNames){',
        '    $.__fileNames = {};',
        '    $.__dirname = "__dirname__";'.replace('__dirname__', __dirname),
        '    $.__fileName = function(name){',
        '        name = name || $.fileName;',
        '        return ($.__fileNames && $.__fileNames[name]) || $.fileName;',
        '    };',
        '}',
        '$.__fileNames["__basename__"] = $.__fileNames["" + $.fileName] = "__fileName__";'
    ].join('');

    //////////////////////////////////////////////////////////////////////////////////////
    // sort out the order of the arguments into jsxScript, callback, replacements,      //
    // forceEval                                                                        //
    //////////////////////////////////////////////////////////////////////////////////////

    // Detect if the parameters were passed as an object and if so allow for various keys
    if (args.length === 1 && (arg = args[0]) instanceof Object) {
        jsxScript = arg.jsxScript || arg.jsx || arg.script || arg.file || arg.jsxscript;
        callback = arg.callBack || arg.callback;
        replacements = arg.replacements || arg.replace;
        forceEval = arg.eval || arg.forceEval || arg.forceeval;
    } else {
        for (i = 0; i < 5; i++) {
            arg = args[i];
            if (arg === undefined) {
                continue;
            }
            if (arg.constructor.name === 'String') {
                jsxScript = arg;
                continue;
            }
            if (arg.constructor.name === 'Object') {
                //////////////////////////////////////////////////////////////////////////////////////////////////////////////
                // If no replacements are provided then the $.evalScript method will be used                                 //
                // This will allow directly for the $.fileName property to be used                                           //
                // If one does not want the $.evalScript method to be used then                                              //
                // either send a blank object as the replacements {}                                                         //
                // or explicitly set the forceEvalScript option to false                                                     //
                // This can only be done if the parameters are passed as an object                                           //
                // i.e. jsx.evalFile({file:'myFabScript.jsx', forceEvalScript: false});                                      //
                // if the file was called using                                                                              //
                // i.e. jsx.evalFile('myFabScript.jsx');                                                                     //
                // then the following jsx code is called $.evalFile(new File('Path/to/myFabScript.jsx', 10000000000)) + '';  //
                // forceEval is never needed if the forceEvalScript is triggered                                             //
                //////////////////////////////////////////////////////////////////////////////////////////////////////////////
                replacements = arg;
                continue;
            }
            if (arg.constructor === Function) {
                callback = arg;
                continue;
            }
            if (arg === false) {
                forceEval = false;
            }
        }
    }

    // If no script was provided then there's not much to do!
    if (!jsxScript) {
        return false;
    }

    forceEvalScript = !replacements;

    //////////////////////////////////////////////////////
    // Get path of script                               //
    // Check if it's absolute, relative or in jsx folder//
    //////////////////////////////////////////////////////

    if (/^\/|[a-zA-Z]+:/.test(jsxScript)) { // absolute path Mac | Windows
        jsxScript = path.normalize(jsxScript);
    } else if (/^\.+\//.test(jsxScript)) {
        jsxScript = path.join(__dirname, jsxScript); // relative path
    } else {
        jsxScript = path.join(jsxFolder, jsxScript); // files in the jsxFolder
    }

    if (forceEvalScript) {
        jsxScript = jsxScript.replace(/"/g, '\\"');
        // Check that the path exists; should change this to asynchronous at some point
        if (!window.cep.fs.stat(jsxScript).err) {
            jsxScript = fileNameScript.replace(/__fileName__/, jsxScript).replace(/__basename__/, path.basename(jsxScript)) +
                '$.evalFile(new File("' + jsxScript.replace(/\\/g, '\\\\') + '")) + "";';
            return this.evalScript(jsxScript, callback, forceEval);
        } else {
            throw new Error(`The file: ${jsxScript} could not be found / read`);
        }
    }

    ////////////////////////////////////////////////////////////////////////////////////////////////
    // Replacements made so we can't use $.evalFile and need to read the jsx script for ourselves //
    ////////////////////////////////////////////////////////////////////////////////////////////////

    fileName = jsxScript.replace(/\\/g, '\\\\').replace(/"/g, '\\"');
    try {
        jsxScript = window.cep.fs.readFile(jsxScript).data;
    } catch (er) {
        throw new Error(`The file: ${fileName} could not be read`);
    }
    // It is desirable that the injected fileNameScript is on the same line as the 1st line of the script
    // This is so that the $.line or error.line returns the same value as the actual file
    // However if the 1st line contains a # directive then we need to insert a new line and stuff the above problem
    // When possible, i.e. when there are no replacements, $.evalFile will be used and the whole issue is avoided
    newLine = /^\s*#/.test(jsxScript) ? '\n' : '';
    jsxScript = fileNameScript.replace(/__fileName__/, fileName).replace(/__basename__/, path.basename(fileName)) + newLine + jsxScript;

    try {
        return this.evalScript(jsxScript, callback, replacements, forceEval);
    } catch (err) {
        ////////////////////////////////////////////////
        // Do whatever error handling you want here ! //
        ////////////////////////////////////////////////
        var newErr;
        newErr = new Error(err);
        alert('Error Eek: ' + newErr.stack);
        return false;
    }

    return success; // success should be an array but for now it's a Boolean
};


////////////////////////////////////
// Setup alternative method names //
////////////////////////////////////
Jsx.prototype.eval = Jsx.prototype.script = Jsx.prototype.evalscript = Jsx.prototype.evalScript;
Jsx.prototype.file = Jsx.prototype.evalfile = Jsx.prototype.evalFile;

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Examples                                                                                                                       //
// jsx.evalScript('alert("foo");');                                                                                               //
// jsx.evalFile('foo.jsx'); // where foo.jsx is stored in the jsx folder at the base of the extensions directory                  //
// jsx.evalFile('../myFolder/foo.jsx'); // where a relative or absolute file path is given                                        //
//                                                                                                                                //
// using conventional methods one would use in the case where the values to swap were supplied by variables                       //
// csInterface.evalScript('var q = "' + name + '"; alert("' + myString + '" ' + myOp + ' q);q;', callback);                       //
// Using all the '' + foo + '' is very error prone                                                                                //
// jsx.evalScript('var q = "__name__"; alert(__string__ __opp__ q);q;',{'name':'Fred', 'string':'Hello ', 'opp':'+'}, callBack);  //
// is much simpler and less error prone                                                                                           //
//                                                                                                                                //
// it is more readable to use an object                                                                                           //
// jsx.evalFile({                                                                                                                 //
//     file: 'yetAnotherFabScript.jsx',                                                                                           //
//     replacements: {"this": foo, That: bar, and: "&&", the: foo2, other: bar2},                                                 //
//     eval: true                                                                                                                 //
// })                                                                                                                             //
// Enjoy!                                                                                                                         //
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////


jsx = new Jsx();
})();
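The `__key__` templating that `jsx.evalScript` and `jsx.evalFile` apply can be tried outside of a CEP host. This is a minimal sketch of just the replacement step, runnable in plain Node; `applyReplacements` is an illustrative name, not part of the library's API:

```javascript
// Minimal sketch of the "__key__" replacement step used by jsx.evalScript.
// Standalone and hypothetical: the real method also wraps the script in an
// eval/try-catch and dispatches it to the jsx engine via CSInterface.
function applyReplacements(jsxScript, replacements) {
    var key, replaceThis, withThis;
    for (key in replacements) {
        if (replacements.hasOwnProperty(key)) {
            // "__name__" placeholders are swapped for the value, coerced to a string
            replaceThis = new RegExp('__' + key + '__', 'g');
            withThis = replacements[key];
            jsxScript = jsxScript.replace(replaceThis, withThis + '');
        }
    }
    return jsxScript;
}

var script = applyReplacements(
    'alert("__message__: " + __val__);',
    {message: 'I was born in the year', val: 1234}
);
// script is now 'alert("I was born in the year: " + 1234);'
```

Because the values are spliced in with a regex rather than string concatenation, there is no `'" + foo + "'` quoting juggling, which is the error-prone part this module is designed to avoid.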
484	openpype/hosts/photoshop/api/extension/host/index.jsx	Normal file
File diff suppressed because one or more lines are too long

530	openpype/hosts/photoshop/api/extension/host/json.js	Normal file
@@ -0,0 +1,530 @@
//  json2.js
//  2017-06-12
//  Public Domain.
//  NO WARRANTY EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.

//  USE YOUR OWN COPY. IT IS EXTREMELY UNWISE TO LOAD CODE FROM SERVERS YOU DO
//  NOT CONTROL.

//  This file creates a global JSON object containing two methods: stringify
//  and parse. This file provides the ES5 JSON capability to ES3 systems.
//  If a project might run on IE8 or earlier, then this file should be included.
//  This file does nothing on ES5 systems.

//      JSON.stringify(value, replacer, space)
//          value       any JavaScript value, usually an object or array.
//          replacer    an optional parameter that determines how object
//                      values are stringified for objects. It can be a
//                      function or an array of strings.
//          space       an optional parameter that specifies the indentation
//                      of nested structures. If it is omitted, the text will
//                      be packed without extra whitespace. If it is a number,
//                      it will specify the number of spaces to indent at each
//                      level. If it is a string (such as "\t" or "&nbsp;"),
//                      it contains the characters used to indent at each level.
//          This method produces a JSON text from a JavaScript value.
//          When an object value is found, if the object contains a toJSON
//          method, its toJSON method will be called and the result will be
//          stringified. A toJSON method does not serialize: it returns the
//          value represented by the name/value pair that should be serialized,
//          or undefined if nothing should be serialized. The toJSON method
//          will be passed the key associated with the value, and this will be
//          bound to the value.

//          For example, this would serialize Dates as ISO strings.

//              Date.prototype.toJSON = function (key) {
//                  function f(n) {
//                      // Format integers to have at least two digits.
//                      return (n < 10)
//                          ? "0" + n
//                          : n;
//                  }
//                  return this.getUTCFullYear() + "-" +
//                      f(this.getUTCMonth() + 1) + "-" +
//                      f(this.getUTCDate()) + "T" +
//                      f(this.getUTCHours()) + ":" +
//                      f(this.getUTCMinutes()) + ":" +
//                      f(this.getUTCSeconds()) + "Z";
//              };

//          You can provide an optional replacer method. It will be passed the
//          key and value of each member, with this bound to the containing
//          object. The value that is returned from your method will be
//          serialized. If your method returns undefined, then the member will
//          be excluded from the serialization.

//          If the replacer parameter is an array of strings, then it will be
//          used to select the members to be serialized. It filters the results
//          such that only members with keys listed in the replacer array are
//          stringified.

//          Values that do not have JSON representations, such as undefined or
//          functions, will not be serialized. Such values in objects will be
//          dropped; in arrays they will be replaced with null. You can use
//          a replacer function to replace those with JSON values.

//          JSON.stringify(undefined) returns undefined.

//          The optional space parameter produces a stringification of the
//          value that is filled with line breaks and indentation to make it
//          easier to read.

//          If the space parameter is a non-empty string, then that string will
//          be used for indentation. If the space parameter is a number, then
//          the indentation will be that many spaces.

//          Example:

//              text = JSON.stringify(["e", {pluribus: "unum"}]);
//              // text is '["e",{"pluribus":"unum"}]'

//              text = JSON.stringify(["e", {pluribus: "unum"}], null, "\t");
//              // text is '[\n\t"e",\n\t{\n\t\t"pluribus": "unum"\n\t}\n]'

//              text = JSON.stringify([new Date()], function (key, value) {
//                  return this[key] instanceof Date
//                      ? "Date(" + this[key] + ")"
//                      : value;
//              });
//              // text is '["Date(---current time---)"]'

//      JSON.parse(text, reviver)
//          This method parses a JSON text to produce an object or array.
//          It can throw a SyntaxError exception.

//          The optional reviver parameter is a function that can filter and
//          transform the results. It receives each of the keys and values,
//          and its return value is used instead of the original value.
//          If it returns what it received, then the structure is not modified.
//          If it returns undefined then the member is deleted.

//          Example:

//          // Parse the text. Values that look like ISO date strings will
//          // be converted to Date objects.

//              myData = JSON.parse(text, function (key, value) {
//                  var a;
//                  if (typeof value === "string") {
//                      a =
//   /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)Z$/.exec(value);
//                      if (a) {
//                          return new Date(Date.UTC(
//                              +a[1], +a[2] - 1, +a[3], +a[4], +a[5], +a[6]
//                          ));
//                      }
//                  }
//                  return value;
//              });

//              myData = JSON.parse(
//                  "[\"Date(09/09/2001)\"]",
//                  function (key, value) {
//                      var d;
//                      if (
//                          typeof value === "string"
//                          && value.slice(0, 5) === "Date("
//                          && value.slice(-1) === ")"
//                      ) {
//                          d = new Date(value.slice(5, -1));
//                          if (d) {
//                              return d;
//                          }
//                      }
//                      return value;
//                  }
//              );

//  This is a reference implementation. You are free to copy, modify, or
//  redistribute.

/*jslint
    eval, for, this
*/

/*property
    JSON, apply, call, charCodeAt, getUTCDate, getUTCFullYear, getUTCHours,
    getUTCMinutes, getUTCMonth, getUTCSeconds, hasOwnProperty, join,
    lastIndex, length, parse, prototype, push, replace, slice, stringify,
    test, toJSON, toString, valueOf
*/


// Create a JSON object only if one does not already exist. We create the
// methods in a closure to avoid creating global variables.

if (typeof JSON !== "object") {
    JSON = {};
}

(function () {
    "use strict";

    var rx_one = /^[\],:{}\s]*$/;
    var rx_two = /\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g;
    var rx_three = /"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g;
    var rx_four = /(?:^|:|,)(?:\s*\[)+/g;
    var rx_escapable = /[\\"\u0000-\u001f\u007f-\u009f\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g;
    var rx_dangerous = /[\u0000\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g;

    function f(n) {
        // Format integers to have at least two digits.
        return (n < 10)
            ? "0" + n
            : n;
    }

    function this_value() {
        return this.valueOf();
    }

    if (typeof Date.prototype.toJSON !== "function") {

        Date.prototype.toJSON = function () {

            return isFinite(this.valueOf())
                ? (
                    this.getUTCFullYear()
                    + "-"
                    + f(this.getUTCMonth() + 1)
                    + "-"
                    + f(this.getUTCDate())
                    + "T"
                    + f(this.getUTCHours())
                    + ":"
                    + f(this.getUTCMinutes())
                    + ":"
                    + f(this.getUTCSeconds())
                    + "Z"
                )
                : null;
        };

        Boolean.prototype.toJSON = this_value;
        Number.prototype.toJSON = this_value;
        String.prototype.toJSON = this_value;
    }

    var gap;
    var indent;
    var meta;
    var rep;


    function quote(string) {

// If the string contains no control characters, no quote characters, and no
// backslash characters, then we can safely slap some quotes around it.
// Otherwise we must also replace the offending characters with safe escape
// sequences.

        rx_escapable.lastIndex = 0;
        return rx_escapable.test(string)
            ? "\"" + string.replace(rx_escapable, function (a) {
                var c = meta[a];
                return typeof c === "string"
                    ? c
                    : "\\u" + ("0000" + a.charCodeAt(0).toString(16)).slice(-4);
            }) + "\""
            : "\"" + string + "\"";
    }


    function str(key, holder) {

// Produce a string from holder[key].

        var i;          // The loop counter.
        var k;          // The member key.
        var v;          // The member value.
        var length;
        var mind = gap;
        var partial;
        var value = holder[key];

// If the value has a toJSON method, call it to obtain a replacement value.

        if (
            value
            && typeof value === "object"
            && typeof value.toJSON === "function"
        ) {
            value = value.toJSON(key);
        }

// If we were called with a replacer function, then call the replacer to
// obtain a replacement value.

        if (typeof rep === "function") {
            value = rep.call(holder, key, value);
        }

// What happens next depends on the value's type.

        switch (typeof value) {
        case "string":
            return quote(value);

        case "number":

// JSON numbers must be finite. Encode non-finite numbers as null.

            return (isFinite(value))
                ? String(value)
                : "null";

        case "boolean":
        case "null":

// If the value is a boolean or null, convert it to a string. Note:
// typeof null does not produce "null". The case is included here in
// the remote chance that this gets fixed someday.

            return String(value);

// If the type is "object", we might be dealing with an object or an array or
// null.

        case "object":

// Due to a specification blunder in ECMAScript, typeof null is "object",
// so watch out for that case.

            if (!value) {
                return "null";
            }

// Make an array to hold the partial results of stringifying this object value.

            gap += indent;
            partial = [];

// Is the value an array?

            if (Object.prototype.toString.apply(value) === "[object Array]") {

// The value is an array. Stringify every element. Use null as a placeholder
// for non-JSON values.

                length = value.length;
                for (i = 0; i < length; i += 1) {
                    partial[i] = str(i, value) || "null";
                }

// Join all of the elements together, separated with commas, and wrap them in
// brackets.

                v = partial.length === 0
                    ? "[]"
                    : gap
                        ? (
                            "[\n"
                            + gap
                            + partial.join(",\n" + gap)
                            + "\n"
                            + mind
                            + "]"
                        )
                        : "[" + partial.join(",") + "]";
                gap = mind;
                return v;
            }

// If the replacer is an array, use it to select the members to be stringified.

            if (rep && typeof rep === "object") {
                length = rep.length;
                for (i = 0; i < length; i += 1) {
                    if (typeof rep[i] === "string") {
                        k = rep[i];
                        v = str(k, value);
                        if (v) {
                            partial.push(quote(k) + (
                                (gap)
                                    ? ": "
                                    : ":"
                            ) + v);
                        }
                    }
                }
            } else {

// Otherwise, iterate through all of the keys in the object.

                for (k in value) {
                    if (Object.prototype.hasOwnProperty.call(value, k)) {
                        v = str(k, value);
                        if (v) {
                            partial.push(quote(k) + (
                                (gap)
                                    ? ": "
                                    : ":"
                            ) + v);
                        }
                    }
                }
            }

// Join all of the member texts together, separated with commas,
// and wrap them in braces.

            v = partial.length === 0
                ? "{}"
                : gap
                    ? "{\n" + gap + partial.join(",\n" + gap) + "\n" + mind + "}"
                    : "{" + partial.join(",") + "}";
            gap = mind;
            return v;
        }
    }

// If the JSON object does not yet have a stringify method, give it one.

    if (typeof JSON.stringify !== "function") {
        meta = {    // table of character substitutions
            "\b": "\\b",
            "\t": "\\t",
            "\n": "\\n",
            "\f": "\\f",
            "\r": "\\r",
            "\"": "\\\"",
            "\\": "\\\\"
        };
        JSON.stringify = function (value, replacer, space) {

// The stringify method takes a value and an optional replacer, and an optional
// space parameter, and returns a JSON text. The replacer can be a function
// that can replace values, or an array of strings that will select the keys.
// A default replacer method can be provided. Use of the space parameter can
// produce text that is more easily readable.

            var i;
            gap = "";
            indent = "";

// If the space parameter is a number, make an indent string containing that
// many spaces.

            if (typeof space === "number") {
                for (i = 0; i < space; i += 1) {
                    indent += " ";
                }

// If the space parameter is a string, it will be used as the indent string.

            } else if (typeof space === "string") {
                indent = space;
            }

// If there is a replacer, it must be a function or an array.
// Otherwise, throw an error.

            rep = replacer;
            if (replacer && typeof replacer !== "function" && (
                typeof replacer !== "object"
                || typeof replacer.length !== "number"
            )) {
                throw new Error("JSON.stringify");
            }

// Make a fake root object containing our value under the key of "".
// Return the result of stringifying the value.

            return str("", {"": value});
        };
    }


// If the JSON object does not yet have a parse method, give it one.

    if (typeof JSON.parse !== "function") {
        JSON.parse = function (text, reviver) {

// The parse method takes a text and an optional reviver function, and returns
// a JavaScript value if the text is a valid JSON text.

            var j;

            function walk(holder, key) {

// The walk method is used to recursively walk the resulting structure so
// that modifications can be made.

                var k;
                var v;
                var value = holder[key];
                if (value && typeof value === "object") {
                    for (k in value) {
                        if (Object.prototype.hasOwnProperty.call(value, k)) {
                            v = walk(value, k);
                            if (v !== undefined) {
                                value[k] = v;
                            } else {
                                delete value[k];
                            }
                        }
                    }
                }
                return reviver.call(holder, key, value);
            }


// Parsing happens in four stages. In the first stage, we replace certain
// Unicode characters with escape sequences. JavaScript handles many characters
// incorrectly, either silently deleting them, or treating them as line endings.

            text = String(text);
            rx_dangerous.lastIndex = 0;
            if (rx_dangerous.test(text)) {
                text = text.replace(rx_dangerous, function (a) {
                    return (
                        "\\u"
                        + ("0000" + a.charCodeAt(0).toString(16)).slice(-4)
                    );
                });
            }

// In the second stage, we run the text against regular expressions that look
// for non-JSON patterns. We are especially concerned with "()" and "new"
// because they can cause invocation, and "=" because it can cause mutation.
// But just to be safe, we want to reject all unexpected forms.

// We split the second stage into 4 regexp operations in order to work around
// crippling inefficiencies in IE's and Safari's regexp engines. First we
// replace the JSON backslash pairs with "@" (a non-JSON character). Second, we
// replace all simple value tokens with "]" characters. Third, we delete all
// open brackets that follow a colon or comma or that begin the text. Finally,
// we look to see that the remaining characters are only whitespace or "]" or
// "," or ":" or "{" or "}". If that is so, then the text is safe for eval.

            if (
                rx_one.test(
                    text
                        .replace(rx_two, "@")
                        .replace(rx_three, "]")
                        .replace(rx_four, "")
                )
            ) {

// In the third stage we use the eval function to compile the text into a
// JavaScript structure. The "{" operator is subject to a syntactic ambiguity
// in JavaScript: it can begin a block or an object literal. We wrap the text
// in parens to eliminate the ambiguity.

                j = eval("(" + text + ")");

// In the optional fourth stage, we recursively walk the new structure, passing
// each name/value pair to a reviver function for possible transformation.

                return (typeof reviver === "function")
                    ? walk({"": j}, "")
                    : j;
            }

// If the text is not JSON parseable, then a SyntaxError is thrown.

            throw new SyntaxError("JSON.parse");
        };
    }
}());
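The two methods json2.js provides (only on hosts that lack them, such as the ExtendScript/ES3 engines this extension targets) follow the standard JSON API, so their behavior can be sketched in plain Node, where the native implementations are used:

```javascript
// Quick demonstration of the JSON.stringify / JSON.parse behavior described
// in the comments above; runs as-is in Node since modern engines ship both.
var text = JSON.stringify({name: "layer1", visible: true, tmp: undefined}, null, 2);
// undefined members are dropped, and the space parameter pretty-prints

var data = JSON.parse('{"width": "10", "height": "20"}', function (key, value) {
    // reviver: convert numeric-looking strings to numbers
    return (typeof value === "string" && /^\d+$/.test(value))
        ? Number(value)
        : value;
});
// data.width and data.height come back as the numbers 10 and 20
```

The reviver is called for every key/value pair, including the synthetic root key `""`, which is why the polyfill's `walk` function wraps the parsed value in `{"": j}` before recursing.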
BIN	openpype/hosts/photoshop/api/extension/icons/avalon-logo-48.png	Normal file
Binary file not shown.
After  Width: | Height: | Size: 1.3 KiB

119	openpype/hosts/photoshop/api/extension/index.html	Normal file
@@ -0,0 +1,119 @@
<!DOCTYPE html>
<html>
<head>
    <style type="text/css">
        html, body, iframe {
            width: 100%;
            height: 100%;
            border: 0px;
            margin: 0px;
            overflow: hidden;
            background-color: #424242;
        }
        button {width: 100%;}
    </style>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js">
    </script>

    <script type="text/javascript">
        // Bind each panel button to its websocket RPC route on the server.
        $(function() {
            var routes = {
                "workfiles-button": "Photoshop.workfiles_route",
                "creator-button": "Photoshop.creator_route",
                "loader-button": "Photoshop.loader_route",
                "publish-button": "Photoshop.publish_route",
                "sceneinventory-button": "Photoshop.sceneinventory_route",
                "subsetmanager-button": "Photoshop.subsetmanager_route",
                "experimental-button": "Photoshop.experimental_tools_route"
            };
            $.each(routes, function(buttonId, route) {
                $("a#" + buttonId).bind("click", function() {
                    RPC.call(route).then(function (data) {
                    }, function (error) {
                        alert(error);
                    });
                });
            });
        });
    </script>
</head>
<body>
    <script type="text/javascript" src="./client/wsrpc.js"></script>
    <script type="text/javascript" src="./client/CSInterface.js"></script>
    <script type="text/javascript" src="./client/loglevel.min.js"></script>

    <!-- helper library for better debugging of .jsx; check its license! -->
    <script type="text/javascript" src="./host/JSX.js"></script>

    <script type="text/javascript" src="./client/client.js"></script>

    <a href="#" id="workfiles-button"><button>Workfiles...</button></a>
    <a href="#" id="creator-button"><button>Create...</button></a>
    <a href="#" id="loader-button"><button>Load...</button></a>
    <a href="#" id="publish-button"><button>Publish...</button></a>
    <a href="#" id="sceneinventory-button"><button>Manage...</button></a>
    <a href="#" id="subsetmanager-button"><button>Subset Manager...</button></a>
    <a href="#" id="experimental-button"><button>Experimental Tools...</button></a>
</body>
</html>
365  openpype/hosts/photoshop/api/launch_logic.py  Normal file
@@ -0,0 +1,365 @@
import os
import subprocess
import collections
import asyncio

from wsrpc_aiohttp import (
    WebSocketRoute,
    WebSocketAsync
)

from Qt import QtCore

from openpype.api import Logger
from openpype.tools.utils import host_tools

from avalon import api
from avalon.tools.webserver.app import WebServerTool

from .ws_stub import PhotoshopServerStub

log = Logger.get_logger(__name__)


class ConnectionNotEstablishedYet(Exception):
    pass


class MainThreadItem:
    """Structure to store information about a callback in the main thread.

    Item should be used to execute a callback in the main thread, which may
    be needed for execution of Qt objects.

    Item stores the callback (callable variable), arguments and keyword
    arguments for the callback. Item holds information about its processing.
    """
    not_set = object()

    def __init__(self, callback, *args, **kwargs):
        self._done = False
        self._exception = self.not_set
        self._result = self.not_set
        self._callback = callback
        self._args = args
        self._kwargs = kwargs

    @property
    def done(self):
        return self._done

    @property
    def exception(self):
        return self._exception

    @property
    def result(self):
        return self._result

    def execute(self):
        """Execute callback and store its result.

        Method must be called from the main thread. Item is marked as `done`
        when callback execution finished. Stores output of the callback, or
        exception information when the callback raises one.
        """
        log.debug("Executing process in main thread")
        if self.done:
            log.warning("- item is already processed")
            return

        log.info("Running callback: {}".format(str(self._callback)))
        try:
            result = self._callback(*self._args, **self._kwargs)
            self._result = result

        except Exception as exc:
            self._exception = exc

        finally:
            self._done = True


def stub():
    """
    Convenience function to get the server RPC stub for calling methods
    directed at the host (Photoshop).
    It expects an already created connection, started from the client.
    Currently created when the panel is opened (PS: Window > Extensions >
    Avalon).
    :return: <PhotoshopServerStub> on which functions can be called
    """
    ps_stub = PhotoshopServerStub()
    if not ps_stub.client:
        raise ConnectionNotEstablishedYet("Connection is not created yet")

    return ps_stub


def show_tool_by_name(tool_name):
    kwargs = {}
    if tool_name == "loader":
        kwargs["use_context"] = True

    host_tools.show_tool_by_name(tool_name, **kwargs)


class ProcessLauncher(QtCore.QObject):
    route_name = "Photoshop"
    _main_thread_callbacks = collections.deque()

    def __init__(self, subprocess_args):
        super(ProcessLauncher, self).__init__()

        self._subprocess_args = subprocess_args
        self._log = None

        # Keep track if launcher was already started
        self._started = False

        self._process = None
        self._websocket_server = None

        start_process_timer = QtCore.QTimer()
        start_process_timer.setInterval(100)

        loop_timer = QtCore.QTimer()
        loop_timer.setInterval(200)

        start_process_timer.timeout.connect(self._on_start_process_timer)
        loop_timer.timeout.connect(self._on_loop_timer)

        self._start_process_timer = start_process_timer
        self._loop_timer = loop_timer

    @property
    def log(self):
        if self._log is None:
            self._log = Logger.get_logger(
                "{}-launcher".format(self.route_name)
            )
        return self._log

    @property
    def websocket_server_is_running(self):
        if self._websocket_server is not None:
            return self._websocket_server.is_running
        return False

    @property
    def is_process_running(self):
        if self._process is not None:
            return self._process.poll() is None
        return False

    @property
    def is_host_connected(self):
        """Returns True if connected, False if app is not running at all."""
        if not self.is_process_running:
            return False

        try:
            _stub = stub()
            if _stub:
                return True
        except Exception:
            pass

        return False

    @classmethod
    def execute_in_main_thread(cls, callback, *args, **kwargs):
        item = MainThreadItem(callback, *args, **kwargs)
        cls._main_thread_callbacks.append(item)
        return item

    def start(self):
        if self._started:
            return
        self.log.info("Started launch logic of Photoshop")
        self._started = True
        self._start_process_timer.start()

    def exit(self):
        """Exit whole application."""
        if self._start_process_timer.isActive():
            self._start_process_timer.stop()
        if self._loop_timer.isActive():
            self._loop_timer.stop()

        if self._websocket_server is not None:
            self._websocket_server.stop()

        if self._process:
            self._process.kill()
            self._process.wait()

        QtCore.QCoreApplication.exit()

    def _on_loop_timer(self):
        # TODO find better way and catch errors
        # Run only callbacks that are in queue at the moment
        cls = self.__class__
        for _ in range(len(cls._main_thread_callbacks)):
            if cls._main_thread_callbacks:
                item = cls._main_thread_callbacks.popleft()
                item.execute()

        if not self.is_process_running:
            self.log.info("Host process is not running. Closing")
            self.exit()

        elif not self.websocket_server_is_running:
            self.log.info("Websocket server is not running. Closing")
            self.exit()

    def _on_start_process_timer(self):
        # TODO add try except validations for each part in this method
        # Start server as first thing
        if self._websocket_server is None:
            self._init_server()
            return

        # TODO add waiting time
        # Wait for webserver
        if not self.websocket_server_is_running:
            return

        # Start application process
        if self._process is None:
            self._start_process()
            self.log.info("Waiting for host to connect")
            return

        # TODO add waiting time
        # Wait until host is connected
        if self.is_host_connected:
            self._start_process_timer.stop()
            self._loop_timer.start()
        elif (
            not self.is_process_running
            or not self.websocket_server_is_running
        ):
            self.exit()

    def _init_server(self):
        if self._websocket_server is not None:
            return

        self.log.debug(
            "Initialization of websocket server for host communication"
        )

        self._websocket_server = websocket_server = WebServerTool()
        if websocket_server.port_occupied(
            websocket_server.host_name,
            websocket_server.port
        ):
            self.log.info(
                "Server already running, sending actual context and exit."
            )
            asyncio.run(websocket_server.send_context_change(self.route_name))
            self.exit()
            return

        # Add Websocket route
        websocket_server.add_route("*", "/ws/", WebSocketAsync)
        # Add Photoshop route to websocket handler
        self.log.debug("Adding {} route".format(self.route_name))
        WebSocketAsync.add_route(
            self.route_name, PhotoshopRoute
        )
        self.log.info("Starting websocket server for host communication")
        websocket_server.start_server()

    def _start_process(self):
        if self._process is not None:
            return
        self.log.info("Starting host process")
        try:
            self._process = subprocess.Popen(
                self._subprocess_args,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL
            )
        except Exception:
            self.log.warning("Failed to start host process", exc_info=True)
            self.exit()


class PhotoshopRoute(WebSocketRoute):
    """
    One route, mimicking external application (like Harmony, etc).
    All functions could be called from the client.
    'do_notify' function calls a function on the client - mimicking
    notification after a long running job on the server or similar.
    """
    instance = None

    def init(self, **kwargs):
        # Python __init__ must return "self".
        # This method might return anything.
        log.debug("someone called Photoshop route")
        self.instance = self
        return kwargs

    # server functions
    async def ping(self):
        log.debug("someone called Photoshop route ping")

    # This method calls function on the client side
    # client functions
    async def set_context(self, project, asset, task):
        """
        Sets 'project' and 'asset' to envs, eg. setting context.

        Args:
            project (str)
            asset (str)
            task (str)
        """
        log.info("Setting context change")
        log.info("project {} asset {} ".format(project, asset))
        if project:
            api.Session["AVALON_PROJECT"] = project
            os.environ["AVALON_PROJECT"] = project
        if asset:
            api.Session["AVALON_ASSET"] = asset
            os.environ["AVALON_ASSET"] = asset
        if task:
            api.Session["AVALON_TASK"] = task
            os.environ["AVALON_TASK"] = task

    async def read(self):
        log.debug("photoshop.read client calls server, server calls "
                  "photoshop client")
        return await self.socket.call('photoshop.read')

    # panel routes for tools
    async def creator_route(self):
        self._tool_route("creator")

    async def workfiles_route(self):
        self._tool_route("workfiles")

    async def loader_route(self):
        self._tool_route("loader")

    async def publish_route(self):
        self._tool_route("publish")

    async def sceneinventory_route(self):
        self._tool_route("sceneinventory")

    async def subsetmanager_route(self):
        self._tool_route("subsetmanager")

    async def experimental_tools_route(self):
        self._tool_route("experimental_tools")

    def _tool_route(self, _tool_name):
        """The address accessed when clicking on the buttons."""
        ProcessLauncher.execute_in_main_thread(show_tool_by_name, _tool_name)

        # Required return statement.
        return "nothing"
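The `MainThreadItem`/deque pattern used by `ProcessLauncher` can be exercised standalone. This is a minimal sketch without the Qt timers; `flush` here is a hypothetical stand-in for one `_on_loop_timer` tick:

```python
import collections


class MainThreadItem:
    """Callback wrapper: result or exception is captured on execute()."""
    not_set = object()

    def __init__(self, callback, *args, **kwargs):
        self.done = False
        self.exception = self.not_set
        self.result = self.not_set
        self._callback = callback
        self._args = args
        self._kwargs = kwargs

    def execute(self):
        if self.done:
            return
        try:
            self.result = self._callback(*self._args, **self._kwargs)
        except Exception as exc:
            self.exception = exc
        finally:
            self.done = True


_callbacks = collections.deque()


def execute_in_main_thread(callback, *args, **kwargs):
    item = MainThreadItem(callback, *args, **kwargs)
    _callbacks.append(item)
    return item


def flush():
    # Run only callbacks queued at this moment (mirrors the bounded
    # loop in _on_loop_timer, so new items cannot starve the tick).
    for _ in range(len(_callbacks)):
        _callbacks.popleft().execute()


ok = execute_in_main_thread(lambda a, b: a + b, 1, 2)
bad = execute_in_main_thread(lambda: 1 / 0)
flush()
```

The caller keeps the returned item and can poll `done`/`result`/`exception` later; an exception in one callback never breaks the processing loop.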
78  openpype/hosts/photoshop/api/lib.py  Normal file
@@ -0,0 +1,78 @@
import os
import sys
import contextlib
import traceback

from Qt import QtWidgets

import avalon.api

from openpype.api import Logger
from openpype.tools.utils import host_tools
from openpype.lib.remote_publish import headless_publish

from .launch_logic import (  # noqa: F401 - re-exported for pipeline.py
    ConnectionNotEstablishedYet,
    ProcessLauncher,
    stub,
)

log = Logger.get_logger(__name__)


def safe_excepthook(*args):
    traceback.print_exception(*args)


def main(*subprocess_args):
    from openpype.hosts.photoshop import api

    avalon.api.install(api)
    sys.excepthook = safe_excepthook

    # coloring in ConsoleTrayApp
    os.environ["OPENPYPE_LOG_NO_COLORS"] = "False"
    app = QtWidgets.QApplication([])
    app.setQuitOnLastWindowClosed(False)

    launcher = ProcessLauncher(subprocess_args)
    launcher.start()

    if os.environ.get("HEADLESS_PUBLISH"):
        launcher.execute_in_main_thread(
            headless_publish,
            log,
            "ClosePS",
            os.environ.get("IS_TEST")
        )
    elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True):
        save = False
        if os.getenv("WORKFILES_SAVE_AS"):
            save = True

        launcher.execute_in_main_thread(
            host_tools.show_workfiles, save=save
        )

    sys.exit(app.exec_())


@contextlib.contextmanager
def maintained_selection():
    """Maintain selection during context."""
    selection = stub().get_selected_layers()
    try:
        yield selection
    finally:
        stub().select_layers(selection)


@contextlib.contextmanager
def maintained_visibility():
    """Maintain visibility during context."""
    visibility = {}
    layers = stub().get_layers()
    for layer in layers:
        visibility[layer.id] = layer.visible
    try:
        yield
    finally:
        for layer in layers:
            stub().set_visible(layer.id, visibility[layer.id])
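The `maintained_visibility` context manager can be demonstrated without a live Photoshop connection. `FakeLayer` and `FakeStub` are hypothetical stand-ins for `PSItem` and `PhotoshopServerStub`, and the stub is injected instead of fetched globally; the restore logic matches the context manager above:

```python
import contextlib


class FakeLayer:
    # Hypothetical stand-in for PSItem: just an id plus a visible flag.
    def __init__(self, layer_id, visible):
        self.id = layer_id
        self.visible = visible


class FakeStub:
    # Hypothetical stand-in for PhotoshopServerStub.
    def __init__(self, layers):
        self._layers = {layer.id: layer for layer in layers}

    def get_layers(self):
        return list(self._layers.values())

    def set_visible(self, layer_id, state):
        self._layers[layer_id].visible = state


@contextlib.contextmanager
def maintained_visibility(stub):
    # Snapshot visibility, let the caller mutate it, then restore.
    visibility = {layer.id: layer.visible for layer in stub.get_layers()}
    try:
        yield
    finally:
        for layer_id, state in visibility.items():
            stub.set_visible(layer_id, state)


fake = FakeStub([FakeLayer(1, True), FakeLayer(2, False)])
with maintained_visibility(fake):
    fake.set_visible(1, False)  # hide everything during the context
    fake.set_visible(2, False)
restored = {layer.id: layer.visible for layer in fake.get_layers()}
```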
BIN  openpype/hosts/photoshop/api/panel.PNG  Normal file
Binary file not shown.
After Width: | Height: | Size: 8.6 KiB

BIN  openpype/hosts/photoshop/api/panel_failure.PNG  Normal file
Binary file not shown.
After Width: | Height: | Size: 13 KiB
229  openpype/hosts/photoshop/api/pipeline.py  Normal file
@@ -0,0 +1,229 @@
import os

from Qt import QtWidgets

import pyblish.api
import avalon.api
from avalon import pipeline, io

from openpype.api import Logger
import openpype.hosts.photoshop

from . import lib

log = Logger.get_logger(__name__)

HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.photoshop.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")


def check_inventory():
    if not lib.any_outdated():
        return

    host = avalon.api.registered_host()
    outdated_containers = []
    for container in host.ls():
        representation = container['representation']
        representation_doc = io.find_one(
            {
                "_id": io.ObjectId(representation),
                "type": "representation"
            },
            projection={"parent": True}
        )
        if representation_doc and not lib.is_latest(representation_doc):
            outdated_containers.append(container)

    # Warn about outdated containers.
    log.warning("Scene contains outdated containers.")

    message_box = QtWidgets.QMessageBox()
    message_box.setIcon(QtWidgets.QMessageBox.Warning)
    msg = "There are outdated containers in the scene."
    message_box.setText(msg)
    message_box.exec_()


def on_application_launch():
    check_inventory()


def on_pyblish_instance_toggled(instance, old_value, new_value):
    """Toggle layer visibility on instance toggles."""
    instance[0].Visible = new_value


def install():
    """Install Photoshop-specific functionality of avalon-core.

    This function is called automatically on calling `api.install(photoshop)`.
    """
    log.info("Installing OpenPype Photoshop...")
    pyblish.api.register_host("photoshop")

    pyblish.api.register_plugin_path(PUBLISH_PATH)
    avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
    avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
    log.info(PUBLISH_PATH)

    pyblish.api.register_callback(
        "instanceToggled", on_pyblish_instance_toggled
    )

    avalon.api.on("application.launched", on_application_launch)


def uninstall():
    pyblish.api.deregister_plugin_path(PUBLISH_PATH)
    avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH)
    avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH)


def ls():
    """Yields containers from active Photoshop document.

    This is the host-equivalent of api.ls(), but instead of listing
    assets on disk, it lists assets already loaded in Photoshop; once loaded
    they are called 'containers'.

    Yields:
        dict: container
    """
    try:
        stub = lib.stub()  # only after Photoshop is up
    except lib.ConnectionNotEstablishedYet:
        print("Not connected yet, ignoring")
        return

    if not stub.get_active_document_name():
        return

    layers_meta = stub.get_layers_metadata()  # minimize calls to PS
    for layer in stub.get_layers():
        data = stub.read(layer, layers_meta)

        # Skip non-tagged layers.
        if not data:
            continue

        # Filter to only containers.
        if "container" not in data["id"]:
            continue

        # Append transient data
        data["objectName"] = layer.name.replace(stub.LOADED_ICON, '')
        data["layer"] = layer

        yield data


def list_instances():
    """List all created instances to publish from current workfile.

    Pulls from File > File Info.

    For SubsetManager.

    Returns:
        (list) of dictionaries matching instances format
    """
    stub = _get_stub()

    if not stub:
        return []

    instances = []
    layers_meta = stub.get_layers_metadata()
    if layers_meta:
        for key, instance in layers_meta.items():
            schema = instance.get("schema")
            if schema and "container" in schema:
                continue

            instance['uuid'] = key
            instances.append(instance)

    return instances


def remove_instance(instance):
    """Remove instance from current workfile metadata.

    Updates metadata of current file in File > File Info and removes
    icon highlight on group layer.

    For SubsetManager.

    Args:
        instance (dict): instance representation from subsetmanager model
    """
    stub = _get_stub()

    if not stub:
        return

    stub.remove_instance(instance.get("uuid"))
    layer = stub.get_layer(instance.get("uuid"))
    if layer:
        stub.rename_layer(instance.get("uuid"),
                          layer.name.replace(stub.PUBLISH_ICON, ''))


def _get_stub():
    """Handle pulling stub from PS to run operations on host.

    Returns:
        (PhotoshopServerStub) or None
    """
    try:
        stub = lib.stub()  # only after Photoshop is up
    except lib.ConnectionNotEstablishedYet:
        print("Not connected yet, ignoring")
        return

    if not stub.get_active_document_name():
        return

    return stub


def containerise(
        name, namespace, layer, context, loader=None, suffix="_CON"
):
    """Imprint layer with metadata.

    Containerisation enables tracking of version, author and origin
    for loaded assets.

    Arguments:
        name (str): Name of resulting assembly
        namespace (str): Namespace under which to host container
        layer (PSItem): Layer to containerise
        context (dict): Asset information
        loader (str, optional): Name of loader used to produce this container.
        suffix (str, optional): Suffix of container, defaults to `_CON`.

    Returns:
        container (str): Name of container assembly
    """
    layer.name = name + suffix

    data = {
        "schema": "openpype:container-2.0",
        "id": pipeline.AVALON_CONTAINER_ID,
        "name": name,
        "namespace": namespace,
        "loader": str(loader),
        "representation": str(context["representation"]["_id"]),
        "members": [str(layer.id)]
    }
    stub = lib.stub()
    stub.imprint(layer, data)

    return layer
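The filtering done by `ls()` can be sketched as a pure function over plain dicts. The sample layers, metadata ids and the injected `loaded_icon` default are illustrative values, not taken from a live scene:

```python
def iter_containers(layers, layers_meta, loaded_icon="\u25bc"):
    """Mirror of the ls() filtering: keep only layers tagged as
    containers and attach transient data."""
    for layer in layers:
        data = layers_meta.get(str(layer["id"]))
        if not data:
            continue  # skip non-tagged layers
        if "container" not in data.get("id", ""):
            continue  # keep loaded containers only, not created instances
        data = dict(data)
        data["objectName"] = layer["name"].replace(loaded_icon, "")
        data["layer"] = layer
        yield data


layers = [
    {"id": 40, "name": "\u25bcJungle_imageMG_001"},
    {"id": 8, "name": "imageBG"},
    {"id": 3, "name": "untagged"},
]
layers_meta = {
    "40": {"id": "pyblish.avalon.container", "name": "imageMG"},
    "8": {"id": "pyblish.avalon.instance", "subset": "imageBG"},
}
containers = list(iter_containers(layers, layers_meta))
```

Only the tagged layer whose metadata id mentions "container" survives; created-instance metadata and untagged layers are skipped.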
69  openpype/hosts/photoshop/api/plugin.py  Normal file
@@ -0,0 +1,69 @@
import re

from Qt import QtWidgets

import avalon.api

from . import lib
from .launch_logic import stub


def get_unique_layer_name(layers, asset_name, subset_name):
    """
    Gets all layer names and if 'asset_name_subset_name' is present, it
    increases the suffix by 1 (eg. creates a unique layer name - for Loader).
    Args:
        layers (list): of dicts with layers info (name, id etc.)
        asset_name (string):
        subset_name (string):

    Returns:
        (string): name_00X (without version)
    """
    name = "{}_{}".format(asset_name, subset_name)
    names = {}
    for layer in layers:
        layer_name = re.sub(r'_\d{3}$', '', layer.name)
        if layer_name in names.keys():
            names[layer_name] = names[layer_name] + 1
        else:
            names[layer_name] = 1
    occurrences = names.get(name, 0)

    return "{}_{:0>3d}".format(name, occurrences + 1)


class PhotoshopLoader(avalon.api.Loader):
    @staticmethod
    def get_stub():
        return stub()


class Creator(avalon.api.Creator):
    """Creator plugin to create instances in Photoshop

    A LayerSet is created to support any number of layers in an instance. If
    the selection is used, these layers will be added to the LayerSet.
    """

    def process(self):
        # Photoshop can have multiple LayerSets with the same name, which does
        # not work with Avalon.
        txt = "Instance with name \"{}\" already exists.".format(self.name)
        stub = lib.stub()  # only after Photoshop is up
        for layer in stub.get_layers():
            if self.name.lower() == layer.name.lower():
                msg = QtWidgets.QMessageBox()
                msg.setIcon(QtWidgets.QMessageBox.Warning)
                msg.setText(txt)
                msg.exec_()
                return False

        # Store selection because adding a group will change selection.
        with lib.maintained_selection():

            # Add selection to group.
            if (self.options or {}).get("useSelection"):
                group = stub.group_selected_layers(self.name)
            else:
                group = stub.create_group(self.name)

            stub.imprint(group, self.data)

        return group
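The suffix-counting in `get_unique_layer_name` can be exercised standalone. `Layer` is a hypothetical stand-in for the PSItem objects the real function receives; the counting logic is the same as above:

```python
import re
from collections import namedtuple

Layer = namedtuple("Layer", ["name"])


def get_unique_layer_name(layers, asset_name, subset_name):
    # Strip a trailing _NNN suffix, count occurrences of each base name,
    # then append the next 3-digit suffix for 'asset_subset'.
    name = "{}_{}".format(asset_name, subset_name)
    names = {}
    for layer in layers:
        base = re.sub(r"_\d{3}$", "", layer.name)
        names[base] = names.get(base, 0) + 1
    return "{}_{:0>3d}".format(name, names.get(name, 0) + 1)


existing = [Layer("Town_imageBG_001"), Layer("Town_imageBG_002"), Layer("BG")]
unique = get_unique_layer_name(existing, "Town", "imageBG")
```

With two existing `Town_imageBG_*` layers, the next unique name is `Town_imageBG_003`; with no matches the suffix starts at `_001`.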
51  openpype/hosts/photoshop/api/workio.py  Normal file
@@ -0,0 +1,51 @@
"""Host API required by the Work Files tool"""
import os

import avalon.api

from . import lib


def _active_document():
    document_name = lib.stub().get_active_document_name()
    if not document_name:
        return None

    return document_name


def file_extensions():
    return avalon.api.HOST_WORKFILE_EXTENSIONS["photoshop"]


def has_unsaved_changes():
    if _active_document():
        return not lib.stub().is_saved()

    return False


def save_file(filepath):
    _, ext = os.path.splitext(filepath)
    lib.stub().saveAs(filepath, ext[1:], True)


def open_file(filepath):
    lib.stub().open(filepath)

    return True


def current_file():
    try:
        full_name = lib.stub().get_active_document_full_name()
        if full_name and full_name != "null":
            return os.path.normpath(full_name).replace("\\", "/")
    except Exception:
        pass

    return None


def work_root(session):
    return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")
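The path handling in `current_file`/`work_root` follows one rule: normalize, then force forward slashes so workfile paths compare equal across platforms. A small sketch (the sample workdir value is illustrative):

```python
import os


def work_root(session):
    # Normalize (collapses "./" and redundant separators), then replace
    # any backslashes with forward slashes.
    return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")


root = work_root({"AVALON_WORKDIR": "/prj/shots/./sh010\\work"})
```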
495  openpype/hosts/photoshop/api/ws_stub.py  Normal file
@@ -0,0 +1,495 @@
"""
Stub handling connection from server to client.
Used anywhere the solution is calling client methods.
"""
import json

import attr
from wsrpc_aiohttp import WebSocketAsync

from avalon.tools.webserver.app import WebServerTool


@attr.s
class PSItem(object):
    """
    Object denoting a layer or group item in PS. Each item is created in
    PS by any Loader, but contains the same fields, which are being used
    in later processing.
    """
    # metadata
    id = attr.ib()  # id created by PS, could be used for querying
    name = attr.ib()  # name of item
    group = attr.ib(default=None)  # True if item is a group (LayerSet)
    parents = attr.ib(factory=list)
    visible = attr.ib(default=True)
    type = attr.ib(default=None)
    # all imported elements, single for
    members = attr.ib(factory=list)
    long_name = attr.ib(default=None)
    color_code = attr.ib(default=None)  # color code of layer


class PhotoshopServerStub:
    """
    Stub for calling functions on the client (Photoshop js) side.
    Expects that the client is already connected (started when the avalon
    menu is opened).
    'self.websocketserver.call' is used as async wrapper.
    """
    PUBLISH_ICON = '\u2117 '
    LOADED_ICON = '\u25bc'

    def __init__(self):
        self.websocketserver = WebServerTool.get_instance()
        self.client = self.get_client()

    @staticmethod
    def get_client():
        """
        Return first connected client to WebSocket
        TODO implement selection by Route
        :return: <WebSocketAsync> client
        """
        clients = WebSocketAsync.get_clients()
        client = None
        if len(clients) > 0:
            key = list(clients.keys())[0]
            client = clients.get(key)

        return client

    def open(self, path):
        """Open file located at 'path' (local).

        Args:
            path(string): file path locally
        Returns: None
        """
        self.websocketserver.call(
            self.client.call('Photoshop.open', path=path)
        )

    def read(self, layer, layers_meta=None):
        """Parses layer metadata from the Headline field of the active
        document.

        Args:
            layer: (PSItem)
            layers_meta: full list from Headline (for performance in loops)
        Returns:
            (dict) metadata for the layer, or None
        """
        if layers_meta is None:
            layers_meta = self.get_layers_metadata()

        return layers_meta.get(str(layer.id))

    def imprint(self, layer, data, all_layers=None, layers_meta=None):
        """Save layer metadata to the Headline field of the active document.

        Stores metadata in format:
        [{
            "active": true,
            "subset": "imageBG",
            "family": "image",
            "id": "pyblish.avalon.instance",
            "asset": "Town",
            "uuid": "8"
        }] - for created instances
        OR
        [{
            "schema": "openpype:container-2.0",
            "id": "pyblish.avalon.instance",
            "name": "imageMG",
            "namespace": "Jungle_imageMG_001",
            "loader": "ImageLoader",
            "representation": "5fbfc0ee30a946093c6ff18a",
            "members": [
                "40"
            ]
        }] - for loaded instances

        Args:
            layer (PSItem):
            data(string): json representation for single layer
            all_layers (list of PSItem): for performance, could be
                injected for usage in loop; if not, a single call will be
                triggered
            layers_meta(string): json representation from Headline
                (for performance - provide only if imprint is in a
                loop - value should be same)
        Returns: None
        """
        if not layers_meta:
            layers_meta = self.get_layers_metadata()

        # json.dumps writes integer values in a dictionary to string, so
        # anticipating it here.
        if str(layer.id) in layers_meta and layers_meta[str(layer.id)]:
            if data:
                layers_meta[str(layer.id)].update(data)
            else:
                layers_meta.pop(str(layer.id))
        else:
            layers_meta[str(layer.id)] = data

        # Ensure only valid ids are stored.
        if not all_layers:
            all_layers = self.get_layers()
        layer_ids = [layer.id for layer in all_layers]
        cleaned_data = []

        for layer_id in layers_meta:
            if int(layer_id) in layer_ids:
                cleaned_data.append(layers_meta[layer_id])

        payload = json.dumps(cleaned_data, indent=4)

        self.websocketserver.call(
            self.client.call('Photoshop.imprint', payload=payload)
        )

    def get_layers(self):
        """Returns JSON document with all(?) layers in active document.

        Returns: <list of PSItem>
            Format of item: { 'id': '123',
                              'name': 'My Layer 1',
                              'type': 'GUIDE'|'FG'|'BG'|'OBJ',
                              'visible': 'true'|'false' }
        """
        res = self.websocketserver.call(
            self.client.call('Photoshop.get_layers')
        )

        return self._to_records(res)

    def get_layer(self, layer_id):
        """
        Returns PSItem for specific 'layer_id' or None if not found.
        Args:
            layer_id (string): unique layer id, stored in 'uuid' field

        Returns:
            (PSItem) or None
        """
        layers = self.get_layers()
        for layer in layers:
            if str(layer.id) == str(layer_id):
                return layer

    def get_layers_in_layers(self, layers):
        """Return all layers that belong to layers (might be groups).

        Args:
            layers <list of PSItem>:

        Returns:
            <list of PSItem>
        """
        all_layers = self.get_layers()
        ret = []
        parent_ids = set([lay.id for lay in layers])

        for layer in all_layers:
            parents = set(layer.parents)
            if len(parent_ids & parents) > 0:
|
||||
ret.append(layer)
|
||||
if layer.id in parent_ids:
|
||||
ret.append(layer)
|
||||
|
||||
return ret
|
||||
|
||||
def create_group(self, name):
|
||||
"""Create new group (eg. LayerSet)
|
||||
|
||||
Returns:
|
||||
<PSItem>
|
||||
"""
|
||||
enhanced_name = self.PUBLISH_ICON + name
|
||||
ret = self.websocketserver.call(
|
||||
self.client.call('Photoshop.create_group', name=enhanced_name)
|
||||
)
|
||||
# create group on PS is asynchronous, returns only id
|
||||
return PSItem(id=ret, name=name, group=True)
|
||||
|
||||
def group_selected_layers(self, name):
|
||||
"""Group selected layers into new LayerSet (eg. group)
|
||||
|
||||
Returns:
|
||||
(Layer)
|
||||
"""
|
||||
enhanced_name = self.PUBLISH_ICON + name
|
||||
res = self.websocketserver.call(
|
||||
self.client.call(
|
||||
'Photoshop.group_selected_layers', name=enhanced_name
|
||||
)
|
||||
)
|
||||
res = self._to_records(res)
|
||||
if res:
|
||||
rec = res.pop()
|
||||
rec.name = rec.name.replace(self.PUBLISH_ICON, '')
|
||||
return rec
|
||||
raise ValueError("No group record returned")
|
||||
|
||||
def get_selected_layers(self):
|
||||
"""Get a list of actually selected layers.
|
||||
|
||||
Returns: <list of Layer('id':XX, 'name':"YYY")>
|
||||
"""
|
||||
res = self.websocketserver.call(
|
||||
self.client.call('Photoshop.get_selected_layers')
|
||||
)
|
||||
return self._to_records(res)
|
||||
|
||||
def select_layers(self, layers):
|
||||
"""Selects specified layers in Photoshop by its ids.
|
||||
|
||||
Args:
|
||||
layers: <list of Layer('id':XX, 'name':"YYY")>
|
||||
"""
|
||||
layers_id = [str(lay.id) for lay in layers]
|
||||
self.websocketserver.call(
|
||||
self.client.call(
|
||||
'Photoshop.select_layers',
|
||||
layers=json.dumps(layers_id)
|
||||
)
|
||||
)
|
||||
|
||||
def get_active_document_full_name(self):
|
||||
"""Returns full name with path of active document via ws call
|
||||
|
||||
Returns(string):
|
||||
full path with name
|
||||
"""
|
||||
res = self.websocketserver.call(
|
||||
self.client.call('Photoshop.get_active_document_full_name')
|
||||
)
|
||||
|
||||
return res
|
||||
|
||||
def get_active_document_name(self):
|
||||
"""Returns just a name of active document via ws call
|
||||
|
||||
Returns(string):
|
||||
file name
|
||||
"""
|
||||
return self.websocketserver.call(
|
||||
self.client.call('Photoshop.get_active_document_name')
|
||||
)
|
||||
|
||||
def is_saved(self):
|
||||
"""Returns true if no changes in active document
|
||||
|
||||
Returns:
|
||||
<boolean>
|
||||
"""
|
||||
return self.websocketserver.call(
|
||||
self.client.call('Photoshop.is_saved')
|
||||
)
|
||||
|
||||
def save(self):
|
||||
"""Saves active document"""
|
||||
self.websocketserver.call(
|
||||
self.client.call('Photoshop.save')
|
||||
)
|
||||
|
||||
def saveAs(self, image_path, ext, as_copy):
|
||||
"""Saves active document to psd (copy) or png or jpg
|
||||
|
||||
Args:
|
||||
image_path(string): full local path
|
||||
ext: <string psd|jpg|png>
|
||||
as_copy: <boolean>
|
||||
Returns: None
|
||||
"""
|
||||
self.websocketserver.call(
|
||||
self.client.call(
|
||||
'Photoshop.saveAs',
|
||||
image_path=image_path,
|
||||
ext=ext,
|
||||
as_copy=as_copy
|
||||
)
|
||||
)
|
||||
|
||||
def set_visible(self, layer_id, visibility):
|
||||
"""Set layer with 'layer_id' to 'visibility'
|
||||
|
||||
Args:
|
||||
layer_id: <int>
|
||||
visibility: <true - set visible, false - hide>
|
||||
Returns: None
|
||||
"""
|
||||
self.websocketserver.call(
|
||||
self.client.call(
|
||||
'Photoshop.set_visible',
|
||||
layer_id=layer_id,
|
||||
visibility=visibility
|
||||
)
|
||||
)
|
||||
|
||||
def get_layers_metadata(self):
|
||||
"""Reads layers metadata from Headline from active document in PS.
|
||||
(Headline accessible by File > File Info)
|
||||
|
||||
Returns:
|
||||
(string): - json documents
|
||||
example:
|
||||
{"8":{"active":true,"subset":"imageBG",
|
||||
"family":"image","id":"pyblish.avalon.instance",
|
||||
"asset":"Town"}}
|
||||
8 is layer(group) id - used for deletion, update etc.
|
||||
"""
|
||||
layers_data = {}
|
||||
res = self.websocketserver.call(self.client.call('Photoshop.read'))
|
||||
try:
|
||||
layers_data = json.loads(res)
|
||||
except json.decoder.JSONDecodeError:
|
||||
pass
|
||||
# format of metadata changed from {} to [] because of standardization
|
||||
# keep current implementation logic as its working
|
||||
if not isinstance(layers_data, dict):
|
||||
temp_layers_meta = {}
|
||||
for layer_meta in layers_data:
|
||||
layer_id = layer_meta.get("uuid")
|
||||
if not layer_id:
|
||||
layer_id = layer_meta.get("members")[0]
|
||||
|
||||
temp_layers_meta[layer_id] = layer_meta
|
||||
layers_data = temp_layers_meta
|
||||
else:
|
||||
# legacy version of metadata
|
||||
for layer_id, layer_meta in layers_data.items():
|
||||
if layer_meta.get("schema") != "openpype:container-2.0":
|
||||
layer_meta["uuid"] = str(layer_id)
|
||||
else:
|
||||
layer_meta["members"] = [str(layer_id)]
|
||||
|
||||
return layers_data
|
||||
|
||||
def import_smart_object(self, path, layer_name, as_reference=False):
|
||||
"""Import the file at `path` as a smart object to active document.
|
||||
|
||||
Args:
|
||||
path (str): File path to import.
|
||||
layer_name (str): Unique layer name to differentiate how many times
|
||||
same smart object was loaded
|
||||
as_reference (bool): pull in content or reference
|
||||
"""
|
||||
enhanced_name = self.LOADED_ICON + layer_name
|
||||
res = self.websocketserver.call(
|
||||
self.client.call(
|
||||
'Photoshop.import_smart_object',
|
||||
path=path,
|
||||
name=enhanced_name,
|
||||
as_reference=as_reference
|
||||
)
|
||||
)
|
||||
rec = self._to_records(res).pop()
|
||||
if rec:
|
||||
rec.name = rec.name.replace(self.LOADED_ICON, '')
|
||||
return rec
|
||||
|
||||
def replace_smart_object(self, layer, path, layer_name):
|
||||
"""Replace the smart object `layer` with file at `path`
|
||||
|
||||
Args:
|
||||
layer (PSItem):
|
||||
path (str): File to import.
|
||||
layer_name (str): Unique layer name to differentiate how many times
|
||||
same smart object was loaded
|
||||
"""
|
||||
enhanced_name = self.LOADED_ICON + layer_name
|
||||
self.websocketserver.call(
|
||||
self.client.call(
|
||||
'Photoshop.replace_smart_object',
|
||||
layer_id=layer.id,
|
||||
path=path,
|
||||
name=enhanced_name
|
||||
)
|
||||
)
|
||||
|
||||
def delete_layer(self, layer_id):
|
||||
"""Deletes specific layer by it's id.
|
||||
|
||||
Args:
|
||||
layer_id (int): id of layer to delete
|
||||
"""
|
||||
self.websocketserver.call(
|
||||
self.client.call('Photoshop.delete_layer', layer_id=layer_id)
|
||||
)
|
||||
|
||||
def rename_layer(self, layer_id, name):
|
||||
"""Renames specific layer by it's id.
|
||||
|
||||
Args:
|
||||
layer_id (int): id of layer to delete
|
||||
name (str): new name
|
||||
"""
|
||||
self.websocketserver.call(
|
||||
self.client.call(
|
||||
'Photoshop.rename_layer',
|
||||
layer_id=layer_id,
|
||||
name=name
|
||||
)
|
||||
)
|
||||
|
||||
def remove_instance(self, instance_id):
|
||||
cleaned_data = {}
|
||||
|
||||
for key, instance in self.get_layers_metadata().items():
|
||||
if key != instance_id:
|
||||
cleaned_data[key] = instance
|
||||
|
||||
payload = json.dumps(cleaned_data, indent=4)
|
||||
|
||||
self.websocketserver.call(
|
||||
self.client.call('Photoshop.imprint', payload=payload)
|
||||
)
|
||||
|
||||
def get_extension_version(self):
|
||||
"""Returns version number of installed extension."""
|
||||
return self.websocketserver.call(
|
||||
self.client.call('Photoshop.get_extension_version')
|
||||
)
|
||||
|
||||
def close(self):
|
||||
"""Shutting down PS and process too.
|
||||
|
||||
For webpublishing only.
|
||||
"""
|
||||
# TODO change client.call to method with checks for client
|
||||
self.websocketserver.call(self.client.call('Photoshop.close'))
|
||||
|
||||
def _to_records(self, res):
|
||||
"""Converts string json representation into list of PSItem for
|
||||
dot notation access to work.
|
||||
|
||||
Args:
|
||||
res (string): valid json
|
||||
|
||||
Returns:
|
||||
<list of PSItem>
|
||||
"""
|
||||
try:
|
||||
layers_data = json.loads(res)
|
||||
except json.decoder.JSONDecodeError:
|
||||
raise ValueError("Received broken JSON {}".format(res))
|
||||
ret = []
|
||||
|
||||
# convert to AEItem to use dot donation
|
||||
if isinstance(layers_data, dict):
|
||||
layers_data = [layers_data]
|
||||
for d in layers_data:
|
||||
# currently implemented and expected fields
|
||||
ret.append(PSItem(
|
||||
d.get('id'),
|
||||
d.get('name'),
|
||||
d.get('group'),
|
||||
d.get('parents'),
|
||||
d.get('visible'),
|
||||
d.get('type'),
|
||||
d.get('members'),
|
||||
d.get('long_name'),
|
||||
d.get("color_code")
|
||||
))
|
||||
return ret
|
||||
|
|
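The stub's `get_layers_metadata` above has to accept two Headline formats: the legacy dict keyed by layer id and the newer list of per-layer dicts. A minimal standalone sketch of that normalization (the function name `normalize_layers_metadata` is illustrative, not part of the stub API):

```python
import json


def normalize_layers_metadata(raw):
    """Normalize Headline metadata to a dict keyed by layer id.

    Handles both the newer list format (items carry 'uuid' or 'members')
    and the legacy dict format keyed directly by layer id.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {}

    if isinstance(data, list):
        # new format: list of per-layer dicts
        normalized = {}
        for meta in data:
            layer_id = meta.get("uuid") or meta.get("members")[0]
            normalized[layer_id] = meta
        return normalized

    # legacy format: dict keyed by layer id
    for layer_id, meta in data.items():
        if meta.get("schema") != "openpype:container-2.0":
            meta["uuid"] = str(layer_id)
        else:
            meta["members"] = [str(layer_id)]
    return data


new_style = json.dumps([{"uuid": "8", "subset": "imageBG"}])
legacy = json.dumps({"8": {"subset": "imageBG"}})
print(normalize_layers_metadata(new_style)["8"]["subset"])  # imageBG
print(normalize_layers_metadata(legacy)["8"]["uuid"])  # 8
```

Both branches end with the same shape, so callers such as `read` and `imprint` can always index by `str(layer.id)`.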
@@ -1,6 +1,6 @@
 from Qt import QtWidgets
 import openpype.api
-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class CreateImage(openpype.api.Creator):
@@ -1,26 +0,0 @@
-import re
-
-
-def get_unique_layer_name(layers, asset_name, subset_name):
-    """
-    Gets all layer names and if 'asset_name_subset_name' is present, it
-    increases suffix by 1 (eg. creates unique layer name - for Loader)
-    Args:
-        layers (list) of dict with layers info (name, id etc.)
-        asset_name (string):
-        subset_name (string):
-
-    Returns:
-        (string): name_00X (without version)
-    """
-    name = "{}_{}".format(asset_name, subset_name)
-    names = {}
-    for layer in layers:
-        layer_name = re.sub(r'_\d{3}$', '', layer.name)
-        if layer_name in names.keys():
-            names[layer_name] = names[layer_name] + 1
-        else:
-            names[layer_name] = 1
-    occurrences = names.get(name, 0)
-
-    return "{}_{:0>3d}".format(name, occurrences + 1)
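The suffix logic of `get_unique_layer_name` (moved in this commit from `plugins/lib.py` into the host api) can be exercised in isolation. A sketch that inlines the same algorithm, with a `namedtuple` standing in for real layer records (the `Layer` type here is illustrative only):

```python
import re
from collections import namedtuple

# stand-in for the real layer records, which expose a .name attribute
Layer = namedtuple("Layer", ["name"])


def get_unique_layer_name(layers, asset_name, subset_name):
    """Return 'asset_subset_00X' where X is one past existing occurrences."""
    name = "{}_{}".format(asset_name, subset_name)
    names = {}
    for layer in layers:
        # strip an existing _NNN suffix before counting occurrences
        layer_name = re.sub(r'_\d{3}$', '', layer.name)
        names[layer_name] = names.get(layer_name, 0) + 1
    occurrences = names.get(name, 0)
    return "{}_{:0>3d}".format(name, occurrences + 1)


layers = [Layer("Town_imageBG_001"), Layer("Town_imageBG_002"), Layer("Other")]
print(get_unique_layer_name(layers, "Town", "imageBG"))  # Town_imageBG_003
print(get_unique_layer_name(layers, "Town", "imageFG"))  # Town_imageFG_001
```

Loaders call this before `import_smart_object` so repeated loads of the same subset never collide on layer name.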
@@ -1,12 +1,11 @@
 import re

-from avalon import api, photoshop
+from avalon import api
+from openpype.hosts.photoshop import api as photoshop
+from openpype.hosts.photoshop.api import get_unique_layer_name

-from openpype.hosts.photoshop.plugins.lib import get_unique_layer_name
-
-stub = photoshop.stub()

-class ImageLoader(api.Loader):
+class ImageLoader(photoshop.PhotoshopLoader):
     """Load images

     Stores the imported asset in a container named after the asset.
@@ -16,11 +15,14 @@ class ImageLoader(api.Loader):
     representations = ["*"]

     def load(self, context, name=None, namespace=None, data=None):
-        layer_name = get_unique_layer_name(stub.get_layers(),
-                                           context["asset"]["name"],
-                                           name)
+        stub = self.get_stub()
+        layer_name = get_unique_layer_name(
+            stub.get_layers(),
+            context["asset"]["name"],
+            name
+        )
         with photoshop.maintained_selection():
-            layer = self.import_layer(self.fname, layer_name)
+            layer = self.import_layer(self.fname, layer_name, stub)

         self[:] = [layer]
         namespace = namespace or layer_name
@@ -35,6 +37,8 @@ class ImageLoader(api.Loader):

     def update(self, container, representation):
         """ Switch asset or change version """
+        stub = self.get_stub()
+
         layer = container.pop("layer")

         context = representation.get("context", {})
@@ -44,9 +48,9 @@ class ImageLoader(api.Loader):
         layer_name = "{}_{}".format(context["asset"], context["subset"])
         # switching assets
         if namespace_from_container != layer_name:
-            layer_name = get_unique_layer_name(stub.get_layers(),
-                                               context["asset"],
-                                               context["subset"])
+            layer_name = get_unique_layer_name(
+                stub.get_layers(), context["asset"], context["subset"]
+            )
         else:  # switching version - keep same name
             layer_name = container["namespace"]
@@ -66,6 +70,8 @@ class ImageLoader(api.Loader):
         Args:
             container (dict): container to be removed - used to get layer_id
         """
+        stub = self.get_stub()
+
         layer = container.pop("layer")
         stub.imprint(layer, {})
         stub.delete_layer(layer.id)
@@ -73,5 +79,5 @@ class ImageLoader(api.Loader):
     def switch(self, container, representation):
         self.update(container, representation)

-    def import_layer(self, file_name, layer_name):
+    def import_layer(self, file_name, layer_name, stub):
         return stub.import_smart_object(file_name, layer_name)
@@ -1,17 +1,13 @@
 import os

 from avalon import api
-from avalon import photoshop
 from avalon.pipeline import get_representation_path_from_context
 from avalon.vendor import qargparse

 from openpype.lib import Anatomy
-from openpype.hosts.photoshop.plugins.lib import get_unique_layer_name
-
-stub = photoshop.stub()
+from openpype.hosts.photoshop import api as photoshop
+from openpype.hosts.photoshop.api import get_unique_layer_name


-class ImageFromSequenceLoader(api.Loader):
+class ImageFromSequenceLoader(photoshop.PhotoshopLoader):
     """ Load specifing image from sequence

     Used only as quick load of reference file from a sequence.
@@ -35,15 +31,16 @@ class ImageFromSequenceLoader(api.Loader):

     def load(self, context, name=None, namespace=None, data=None):
         if data.get("frame"):
-            self.fname = os.path.join(os.path.dirname(self.fname),
-                                      data["frame"])
+            self.fname = os.path.join(
+                os.path.dirname(self.fname), data["frame"]
+            )
         if not os.path.exists(self.fname):
             return

-        stub = photoshop.stub()
-        layer_name = get_unique_layer_name(stub.get_layers(),
-                                           context["asset"]["name"],
-                                           name)
+        stub = self.get_stub()
+        layer_name = get_unique_layer_name(
+            stub.get_layers(), context["asset"]["name"], name
+        )

         with photoshop.maintained_selection():
             layer = stub.import_smart_object(self.fname, layer_name)
@@ -95,4 +92,3 @@ class ImageFromSequenceLoader(api.Loader):
     def remove(self, container):
         """No update possible, not containerized."""
         pass
-
@@ -1,30 +1,30 @@
 import re

-from avalon import api, photoshop
+from avalon import api

-from openpype.hosts.photoshop.plugins.lib import get_unique_layer_name
-
-stub = photoshop.stub()
+from openpype.hosts.photoshop import api as photoshop
+from openpype.hosts.photoshop.api import get_unique_layer_name


-class ReferenceLoader(api.Loader):
+class ReferenceLoader(photoshop.PhotoshopLoader):
     """Load reference images

-    Stores the imported asset in a container named after the asset.
+    Stores the imported asset in a container named after the asset.

-    Inheriting from 'load_image' didn't work because of
-    "Cannot write to closing transport", possible refactor.
+    Inheriting from 'load_image' didn't work because of
+    "Cannot write to closing transport", possible refactor.
     """

     families = ["image", "render"]
     representations = ["*"]

     def load(self, context, name=None, namespace=None, data=None):
-        layer_name = get_unique_layer_name(stub.get_layers(),
-                                           context["asset"]["name"],
-                                           name)
+        stub = self.get_stub()
+        layer_name = get_unique_layer_name(
+            stub.get_layers(), context["asset"]["name"], name
+        )
         with photoshop.maintained_selection():
-            layer = self.import_layer(self.fname, layer_name)
+            layer = self.import_layer(self.fname, layer_name, stub)

         self[:] = [layer]
         namespace = namespace or layer_name
@@ -39,6 +39,7 @@ class ReferenceLoader(api.Loader):

     def update(self, container, representation):
         """ Switch asset or change version """
+        stub = self.get_stub()
         layer = container.pop("layer")

         context = representation.get("context", {})
@@ -48,9 +49,9 @@ class ReferenceLoader(api.Loader):
         layer_name = "{}_{}".format(context["asset"], context["subset"])
         # switching assets
         if namespace_from_container != layer_name:
-            layer_name = get_unique_layer_name(stub.get_layers(),
-                                               context["asset"],
-                                               context["subset"])
+            layer_name = get_unique_layer_name(
+                stub.get_layers(), context["asset"], context["subset"]
+            )
         else:  # switching version - keep same name
             layer_name = container["namespace"]
@@ -65,11 +66,12 @@ class ReferenceLoader(api.Loader):
         )

     def remove(self, container):
-        """
-        Removes element from scene: deletes layer + removes from Headline
+        """Removes element from scene: deletes layer + removes from Headline

         Args:
             container (dict): container to be removed - used to get layer_id
         """
+        stub = self.get_stub()
         layer = container.pop("layer")
         stub.imprint(layer, {})
         stub.delete_layer(layer.id)
@@ -77,6 +79,7 @@ class ReferenceLoader(api.Loader):
     def switch(self, container, representation):
         self.update(container, representation)

-    def import_layer(self, file_name, layer_name):
-        return stub.import_smart_object(file_name, layer_name,
-                                        as_reference=True)
+    def import_layer(self, file_name, layer_name, stub):
+        return stub.import_smart_object(
+            file_name, layer_name, as_reference=True
+        )
@@ -4,7 +4,7 @@ import os

 import pyblish.api

-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class ClosePS(pyblish.api.ContextPlugin):
@@ -2,7 +2,7 @@ import os

 import pyblish.api

-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class CollectCurrentFile(pyblish.api.ContextPlugin):
@@ -2,7 +2,7 @@ import os
 import re
 import pyblish.api

-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class CollectExtensionVersion(pyblish.api.ContextPlugin):
@@ -1,6 +1,6 @@
 import pyblish.api

-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class CollectInstances(pyblish.api.ContextPlugin):
@@ -1,10 +1,11 @@
-import pyblish.api
 import os
 import re

-from avalon import photoshop
+import pyblish.api
+
 from openpype.lib import prepare_template_data
 from openpype.lib.plugin_tools import parse_json
+from openpype.hosts.photoshop import api as photoshop


 class CollectRemoteInstances(pyblish.api.ContextPlugin):
@@ -1,5 +1,5 @@
-import pyblish.api
 import os
+import pyblish.api


 class CollectWorkfile(pyblish.api.ContextPlugin):
@@ -1,7 +1,7 @@
 import os

 import openpype.api
-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class ExtractImage(openpype.api.Extractor):
@@ -2,7 +2,7 @@ import os

 import openpype.api
 import openpype.lib
-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class ExtractReview(openpype.api.Extractor):
@@ -1,5 +1,5 @@
 import openpype.api
-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class ExtractSaveScene(openpype.api.Extractor):
@@ -3,7 +3,7 @@ import pyblish.api
 from openpype.action import get_errored_plugins_from_data
 from openpype.lib import version_up

-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class IncrementWorkfile(pyblish.api.InstancePlugin):
@@ -1,7 +1,7 @@
 from avalon import api
 import pyblish.api
 import openpype.api
-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class ValidateInstanceAssetRepair(pyblish.api.Action):
@@ -2,7 +2,7 @@ import re

 import pyblish.api
 import openpype.api
-from avalon import photoshop
+from openpype.hosts.photoshop import api as photoshop


 class ValidateNamingRepair(pyblish.api.Action):
@@ -1,3 +1,6 @@
+import os
+
+
 def add_implementation_envs(env, _app):
     """Modify environments to contain all required for implementation."""
     defaults = {
@@ -6,3 +9,12 @@ def add_implementation_envs(env, _app):
     for key, value in defaults.items():
         if not env.get(key):
             env[key] = value
+
+
+def get_launch_script_path():
+    current_dir = os.path.dirname(os.path.abspath(__file__))
+    return os.path.join(
+        current_dir,
+        "api",
+        "launch_script.py"
+    )
@@ -1,93 +1,49 @@
-import os
-import logging
-
-import requests
-
-import avalon.api
-import pyblish.api
-from avalon.tvpaint import pipeline
-from avalon.tvpaint.communication_server import register_localization_file
-from .lib import set_context_settings
-
-from openpype.hosts import tvpaint
-from openpype.api import get_current_project_settings
-
-log = logging.getLogger(__name__)
-
-HOST_DIR = os.path.dirname(os.path.abspath(tvpaint.__file__))
-PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
-PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
-LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
-CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
-
-
-def on_instance_toggle(instance, old_value, new_value):
-    # Review may not have real instance in wokrfile metadata
-    if not instance.data.get("uuid"):
-        return
-
-    instance_id = instance.data["uuid"]
-    found_idx = None
-    current_instances = pipeline.list_instances()
-    for idx, workfile_instance in enumerate(current_instances):
-        if workfile_instance["uuid"] == instance_id:
-            found_idx = idx
-            break
-
-    if found_idx is None:
-        return
-
-    if "active" in current_instances[found_idx]:
-        current_instances[found_idx]["active"] = new_value
-        pipeline._write_instances(current_instances)
-
-
-def initial_launch():
-    # Setup project settings if its the template that's launched.
-    # TODO also check for template creation when it's possible to define
-    # templates
-    last_workfile = os.environ.get("AVALON_LAST_WORKFILE")
-    if not last_workfile or os.path.exists(last_workfile):
-        return
-
-    log.info("Setting up project...")
-    set_context_settings()
-
-
-def application_exit():
-    data = get_current_project_settings()
-    stop_timer = data["tvpaint"]["stop_timer_on_application_exit"]
-
-    if not stop_timer:
-        return
-
-    # Stop application timer.
-    webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
-    rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url)
-    requests.post(rest_api_url)
-
-
-def install():
-    log.info("OpenPype - Installing TVPaint integration")
-    localization_file = os.path.join(HOST_DIR, "resources", "avalon.loc")
-    register_localization_file(localization_file)
-
-    pyblish.api.register_plugin_path(PUBLISH_PATH)
-    avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
-    avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
-
-    registered_callbacks = (
-        pyblish.api.registered_callbacks().get("instanceToggled") or []
-    )
-    if on_instance_toggle not in registered_callbacks:
-        pyblish.api.register_callback("instanceToggled", on_instance_toggle)
-
-    avalon.api.on("application.launched", initial_launch)
-    avalon.api.on("application.exit", application_exit)
-
-
-def uninstall():
-    log.info("OpenPype - Uninstalling TVPaint integration")
-    pyblish.api.deregister_plugin_path(PUBLISH_PATH)
-    avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH)
-    avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH)
+from .communication_server import CommunicationWrapper
+from . import lib
+from . import launch_script
+from . import workio
+from . import pipeline
+from . import plugin
+from .pipeline import (
+    install,
+    uninstall,
+    maintained_selection,
+    remove_instance,
+    list_instances,
+    ls
+)
+from .workio import (
+    open_file,
+    save_file,
+    current_file,
+    has_unsaved_changes,
+    file_extensions,
+    work_root,
+)
+
+
+__all__ = (
+    "CommunicationWrapper",
+
+    "lib",
+    "launch_script",
+    "workio",
+    "pipeline",
+    "plugin",
+
+    "install",
+    "uninstall",
+    "maintained_selection",
+    "remove_instance",
+    "list_instances",
+    "ls",
+
+    # Workfiles API
+    "open_file",
+    "save_file",
+    "current_file",
+    "has_unsaved_changes",
+    "file_extensions",
+    "work_root"
+)
939  openpype/hosts/tvpaint/api/communication_server.py  Normal file
@ -0,0 +1,939 @@
|
|||
import os
|
||||
import json
|
||||
import time
|
||||
import subprocess
|
||||
import collections
|
||||
import asyncio
|
||||
import logging
|
||||
import socket
|
||||
import platform
|
||||
import filecmp
|
||||
import tempfile
|
||||
import threading
|
||||
import shutil
|
||||
from queue import Queue
|
||||
from contextlib import closing
|
||||
|
||||
from aiohttp import web
|
||||
from aiohttp_json_rpc import JsonRpc
|
||||
from aiohttp_json_rpc.protocol import (
|
||||
encode_request, encode_error, decode_msg, JsonRpcMsgTyp
|
||||
)
|
||||
from aiohttp_json_rpc.exceptions import RpcError
|
||||
|
||||
from avalon import api
|
||||
from openpype.hosts.tvpaint.tvpaint_plugin import get_plugin_files_path
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
log.setLevel(logging.DEBUG)
|
||||
|
||||
|
||||
class CommunicationWrapper:
|
||||
# TODO add logs and exceptions
|
||||
communicator = None
|
||||
|
||||
log = logging.getLogger("CommunicationWrapper")
|
||||
|
||||
@classmethod
|
||||
def create_qt_communicator(cls, *args, **kwargs):
|
||||
"""Create communicator for Artist usage."""
|
||||
communicator = QtCommunicator(*args, **kwargs)
|
||||
cls.set_communicator(communicator)
|
||||
return communicator
|
||||
|
||||
@classmethod
|
||||
def set_communicator(cls, communicator):
|
||||
if not cls.communicator:
|
||||
cls.communicator = communicator
|
||||
else:
|
||||
cls.log.warning("Communicator was set multiple times.")
|
||||
|
||||
@classmethod
|
||||
def client(cls):
|
||||
if not cls.communicator:
|
||||
return None
|
||||
return cls.communicator.client()
|
||||
|
||||
@classmethod
|
||||
def execute_george(cls, george_script):
|
||||
"""Execute passed goerge script in TVPaint."""
|
||||
if not cls.communicator:
|
||||
return
|
||||
return cls.communicator.execute_george(george_script)
|
||||
|
||||
|
||||
class WebSocketServer:
|
||||
def __init__(self):
|
||||
self.client = None
|
||||
|
||||
self.loop = asyncio.new_event_loop()
|
||||
self.app = web.Application(loop=self.loop)
|
||||
self.port = self.find_free_port()
|
||||
self.websocket_thread = WebsocketServerThread(
|
||||
self, self.port, loop=self.loop
|
||||
)
|
||||
|
||||
@property
|
||||
def server_is_running(self):
|
||||
return self.websocket_thread.server_is_running
|
||||
|
||||
def add_route(self, *args, **kwargs):
|
||||
self.app.router.add_route(*args, **kwargs)
|
||||
|
||||
@staticmethod
|
||||
def find_free_port():
|
||||
with closing(
|
||||
socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
) as sock:
|
||||
sock.bind(("", 0))
|
||||
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
|
||||
port = sock.getsockname()[1]
|
||||
return port
|
||||
|
||||
def start(self):
|
||||
self.websocket_thread.start()
|
||||
|
||||
def stop(self):
|
||||
try:
|
||||
if self.websocket_thread.is_running:
|
||||
log.debug("Stopping websocket server")
|
||||
self.websocket_thread.is_running = False
|
||||
self.websocket_thread.stop()
|
||||
except Exception:
|
||||
log.warning(
|
||||
"Error has happened during Killing websocket server",
|
||||
exc_info=True
|
||||
)
|
||||
|
||||
|
||||
class WebsocketServerThread(threading.Thread):
    """Listener for websocket rpc requests.

    It would probably be better to "attach" this to the main thread (as for
    example Harmony needs to run something on main thread), but currently
    it creates a separate thread and a separate asyncio event loop.
    """
    def __init__(self, module, port, loop):
        super(WebsocketServerThread, self).__init__()
        self.is_running = False
        self.server_is_running = False
        self.port = port
        self.module = module
        self.loop = loop
        self.runner = None
        self.site = None
        self.tasks = []

    def run(self):
        self.is_running = True

        try:
            log.debug("Starting websocket server")

            self.loop.run_until_complete(self.start_server())

            log.info(
                "Running Websocket server on URL:"
                " \"ws://localhost:{}\"".format(self.port)
            )

            asyncio.ensure_future(self.check_shutdown(), loop=self.loop)

            self.server_is_running = True
            self.loop.run_forever()

        except Exception:
            log.warning(
                "Websocket Server service has failed", exc_info=True
            )
        finally:
            self.server_is_running = False
            # optional
            self.loop.close()

            self.is_running = False
            log.info("Websocket server stopped")

    async def start_server(self):
        """Start runner and TCPSite."""
        self.runner = web.AppRunner(self.module.app)
        await self.runner.setup()
        self.site = web.TCPSite(self.runner, "localhost", self.port)
        await self.site.start()

    def stop(self):
        """Set is_running flag to False; 'check_shutdown' shuts server down."""
        self.is_running = False

    async def check_shutdown(self):
        """Future that periodically checks if the server should keep running."""
        while self.is_running:
            while self.tasks:
                task = self.tasks.pop(0)
                log.debug("waiting for task {}".format(task))
                await task
                log.debug("returned value {}".format(task.result()))

            await asyncio.sleep(0.5)

        log.debug("## Server shutdown started")

        await self.site.stop()
        log.debug("# Site stopped")
        await self.runner.cleanup()
        log.debug("# Server runner stopped")
        tasks = [
            task for task in asyncio.all_tasks()
            if task is not asyncio.current_task()
        ]
        # Cancel all remaining tasks
        for task in tasks:
            task.cancel()
        results = await asyncio.gather(*tasks, return_exceptions=True)
        log.debug(f"Finished awaiting cancelled tasks, results: {results}...")
        await self.loop.shutdown_asyncgens()
        # To really make sure everything else has time to stop
        await asyncio.sleep(0.07)
        self.loop.stop()

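The shutdown step in `check_shutdown` above cancels every pending task except the current one and then gathers them so cancellation can complete. A reduced, self-contained sketch of that step (function names here are illustrative, not part of the module):

```python
import asyncio

async def shutdown_pending_tasks():
    # Cancel every task except the one running this coroutine,
    # then gather them so the cancellations are fully processed.
    tasks = [
        task for task in asyncio.all_tasks()
        if task is not asyncio.current_task()
    ]
    for task in tasks:
        task.cancel()
    return await asyncio.gather(*tasks, return_exceptions=True)

async def main():
    # A long-running task that will be cancelled on shutdown.
    pending = asyncio.ensure_future(asyncio.sleep(3600))
    results = await shutdown_pending_tasks()
    return pending.cancelled(), results

cancelled, results = asyncio.run(main())
print(cancelled)
```

`return_exceptions=True` keeps `gather` from re-raising the `CancelledError` of each cancelled task.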
class BaseTVPaintRpc(JsonRpc):
    def __init__(self, communication_obj, route_name="", **kwargs):
        super().__init__(**kwargs)
        self.requests_ids = collections.defaultdict(lambda: 0)
        self.waiting_requests = collections.defaultdict(list)
        self.responses = collections.defaultdict(list)

        self.route_name = route_name
        self.communication_obj = communication_obj

    async def _handle_rpc_msg(self, http_request, raw_msg):
        # This is duplicated code from super, but there is no other way
        # to handle server->client requests
        host = http_request.host
        if host in self.waiting_requests:
            try:
                _raw_message = raw_msg.data
                msg = decode_msg(_raw_message)

            except RpcError as error:
                await self._ws_send_str(http_request, encode_error(error))
                return

            if msg.type in (JsonRpcMsgTyp.RESULT, JsonRpcMsgTyp.ERROR):
                msg_data = json.loads(_raw_message)
                if msg_data.get("id") in self.waiting_requests[host]:
                    self.responses[host].append(msg_data)
                    return

        return await super()._handle_rpc_msg(http_request, raw_msg)

    def client_connected(self):
        # TODO This is a poor check. Add check that the client is from TVPaint
        if self.clients:
            return True
        return False

    def send_notification(self, client, method, params=None):
        if params is None:
            params = []
        asyncio.run_coroutine_threadsafe(
            client.ws.send_str(encode_request(method, params=params)),
            loop=self.loop
        )

    def send_request(self, client, method, params=None, timeout=0):
        if params is None:
            params = []

        client_host = client.host

        request_id = self.requests_ids[client_host]
        self.requests_ids[client_host] += 1

        self.waiting_requests[client_host].append(request_id)

        log.debug("Sending request to client {} ({}, {}) id: {}".format(
            client_host, method, params, request_id
        ))
        future = asyncio.run_coroutine_threadsafe(
            client.ws.send_str(encode_request(method, request_id, params)),
            loop=self.loop
        )
        result = future.result()

        not_found = object()
        response = not_found
        start = time.time()
        while True:
            if client.ws.closed:
                return None

            for _response in self.responses[client_host]:
                _id = _response.get("id")
                if _id == request_id:
                    response = _response
                    break

            if response is not not_found:
                break

            if timeout > 0 and (time.time() - start) > timeout:
                raise Exception("Timeout passed")

            time.sleep(0.1)

        if response is not_found:
            raise Exception("Connection closed")

        self.responses[client_host].remove(response)

        error = response.get("error")
        result = response.get("result")
        if error:
            raise Exception("Error happened: {}".format(error))
        return result

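`send_request` above pairs a per-client request id with a polling loop over the collected responses. A simplified, self-contained sketch of that matching loop (names are illustrative, not part of the module):

```python
import time

def wait_for_response(responses, request_id, timeout=1.0):
    """Poll the shared response list until a response with a matching id arrives."""
    start = time.time()
    while True:
        for response in responses:
            if response.get("id") == request_id:
                return response
        if timeout > 0 and (time.time() - start) > timeout:
            raise TimeoutError("Timeout passed")
        time.sleep(0.01)

# The response is already present, so the first scan finds it.
responses = [{"id": 2, "result": "done"}, {"id": 7, "result": "ok"}]
print(wait_for_response(responses, 7)["result"])  # prints "ok"
```

In the real class the list is filled by `_handle_rpc_msg` on the event-loop thread while `send_request` polls from the calling thread.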
class QtTVPaintRpc(BaseTVPaintRpc):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        from openpype.tools.utils import host_tools
        self.tools_helper = host_tools.HostToolsHelper()

        route_name = self.route_name

        # Register methods
        self.add_methods(
            (route_name, self.workfiles_tool),
            (route_name, self.loader_tool),
            (route_name, self.creator_tool),
            (route_name, self.subset_manager_tool),
            (route_name, self.publish_tool),
            (route_name, self.scene_inventory_tool),
            (route_name, self.library_loader_tool),
            (route_name, self.experimental_tools)
        )

    # Panel routes for tools
    async def workfiles_tool(self):
        log.info("Triggering Workfile tool")
        item = MainThreadItem(self.tools_helper.show_workfiles)
        self._execute_in_main_thread(item)
        return

    async def loader_tool(self):
        log.info("Triggering Loader tool")
        item = MainThreadItem(self.tools_helper.show_loader)
        self._execute_in_main_thread(item)
        return

    async def creator_tool(self):
        log.info("Triggering Creator tool")
        item = MainThreadItem(self.tools_helper.show_creator)
        await self._async_execute_in_main_thread(item, wait=False)

    async def subset_manager_tool(self):
        log.info("Triggering Subset Manager tool")
        item = MainThreadItem(self.tools_helper.show_subset_manager)
        # Do not wait for result of callback
        self._execute_in_main_thread(item, wait=False)
        return

    async def publish_tool(self):
        log.info("Triggering Publish tool")
        item = MainThreadItem(self.tools_helper.show_publish)
        self._execute_in_main_thread(item)
        return

    async def scene_inventory_tool(self):
        """Open Scene Inventory tool.

        Function can't confirm if the tool was opened because one part of
        SceneInventory initialization sends a websocket request to the host,
        but the host can't respond because it is waiting for the response
        to this call.
        """
        log.info("Triggering Scene inventory tool")
        item = MainThreadItem(self.tools_helper.show_scene_inventory)
        # Do not wait for result of callback
        self._execute_in_main_thread(item, wait=False)
        return

    async def library_loader_tool(self):
        log.info("Triggering Library loader tool")
        item = MainThreadItem(self.tools_helper.show_library_loader)
        self._execute_in_main_thread(item)
        return

    async def experimental_tools(self):
        log.info("Triggering Experimental tools dialog")
        item = MainThreadItem(self.tools_helper.show_experimental_tools_dialog)
        self._execute_in_main_thread(item)
        return

    async def _async_execute_in_main_thread(self, item, **kwargs):
        await self.communication_obj.async_execute_in_main_thread(
            item, **kwargs
        )

    def _execute_in_main_thread(self, item, **kwargs):
        return self.communication_obj.execute_in_main_thread(item, **kwargs)

class MainThreadItem:
    """Structure to store information about callback in main thread.

    Item should be used to execute callback in main thread which may be
    needed for execution of Qt objects.

    Item stores callback (callable variable), arguments and keyword
    arguments for the callback. Item holds information about its process.
    """
    not_set = object()
    sleep_time = 0.1

    def __init__(self, callback, *args, **kwargs):
        self.done = False
        self.exception = self.not_set
        self.result = self.not_set
        self.callback = callback
        self.args = args
        self.kwargs = kwargs

    def execute(self):
        """Execute callback and store its result.

        Method must be called from main thread. Item is marked as `done`
        when callback execution finished. Stores output of the callback, or
        exception information when the callback raises one.
        """
        log.debug("Executing process in main thread")
        if self.done:
            log.warning("- item is already processed")
            return

        callback = self.callback
        args = self.args
        kwargs = self.kwargs
        log.info("Running callback: {}".format(str(callback)))
        try:
            result = callback(*args, **kwargs)
            self.result = result

        except Exception as exc:
            self.exception = exc

        finally:
            self.done = True

    def wait(self):
        """Wait for result from main thread.

        This method stops current thread until callback is executed.

        Returns:
            object: Output of callback. May be any type or object.

        Raises:
            Exception: Reraise any exception that happened during callback
                execution.
        """
        while not self.done:
            time.sleep(self.sleep_time)

        if self.exception is self.not_set:
            return self.result
        raise self.exception

    async def async_wait(self):
        """Wait for result from main thread.

        Returns:
            object: Output of callback. May be any type or object.

        Raises:
            Exception: Reraise any exception that happened during callback
                execution.
        """
        while not self.done:
            await asyncio.sleep(self.sleep_time)

        if self.exception is self.not_set:
            return self.result
        raise self.exception

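The wait/execute handshake of `MainThreadItem` can be exercised without Qt: one thread blocks in `wait()` while another thread plays the role of the main loop and calls `execute()`. A reduced sketch (the `Item` class here is illustrative, not the real one):

```python
import threading
import time

class Item:
    """Reduced MainThreadItem: callback, done flag, stored result."""
    _not_set = object()

    def __init__(self, callback):
        self.callback = callback
        self.result = self._not_set
        self.done = False

    def execute(self):
        # Runs on the "main" thread; stores the callback output.
        self.result = self.callback()
        self.done = True

    def wait(self):
        # Runs on the requesting thread; polls until execute() finished.
        while not self.done:
            time.sleep(0.01)
        return self.result

item = Item(lambda: 41 + 1)
# Pretend this thread is the main loop executing queued items.
executor = threading.Thread(target=item.execute)
executor.start()
value = item.wait()
executor.join()
print(value)  # prints 42
```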
class BaseCommunicator:
    def __init__(self):
        self.process = None
        self.websocket_server = None
        self.websocket_rpc = None
        self.exit_code = None
        self._connected_client = None

    @property
    def server_is_running(self):
        if self.websocket_server is None:
            return False
        return self.websocket_server.server_is_running

    def _windows_file_process(self, src_dst_mapping, to_remove):
        """Windows specific file processing asking for admin permissions.

        It is required to have administration permissions to modify plugin
        files in TVPaint installation folder.

        Method requires `pywin32` python module.

        Args:
            src_dst_mapping (list, tuple, set): Mapping of source file to
                destination. Both must be full path. Each item must be
                iterable of size 2 `(C:/src/file.dll, C:/dst/file.dll)`.
            to_remove (list): Full paths to files that should be removed.
        """

        import pythoncom
        from win32comext.shell import shell

        # Create temp folder where plugin files are temporarily copied
        # - reason is that copy to TVPaint requires administration
        #   permissions but admin may not have access to source folder
        tmp_dir = os.path.normpath(
            tempfile.mkdtemp(prefix="tvpaint_copy_")
        )

        # Copy source to temp folder and create new mapping
        dst_folders = collections.defaultdict(list)
        new_src_dst_mapping = []
        for old_src, dst in src_dst_mapping:
            new_src = os.path.join(tmp_dir, os.path.split(old_src)[1])
            shutil.copy(old_src, new_src)
            new_src_dst_mapping.append((new_src, dst))

        for src, dst in new_src_dst_mapping:
            src = os.path.normpath(src)
            dst = os.path.normpath(dst)
            dst_filename = os.path.basename(dst)
            dst_folder_path = os.path.dirname(dst)
            dst_folders[dst_folder_path].append((dst_filename, src))

        # Create an instance of IFileOperation
        fo = pythoncom.CoCreateInstance(
            shell.CLSID_FileOperation,
            None,
            pythoncom.CLSCTX_ALL,
            shell.IID_IFileOperation
        )
        # Add delete command to file operation object
        for filepath in to_remove:
            item = shell.SHCreateItemFromParsingName(
                filepath, None, shell.IID_IShellItem
            )
            fo.DeleteItem(item)

        # Here you can use SetOperationFlags, progress Sinks, etc.
        for folder_path, items in dst_folders.items():
            # Create an instance of IShellItem for the target folder
            folder_item = shell.SHCreateItemFromParsingName(
                folder_path, None, shell.IID_IShellItem
            )
            for _dst_filename, source_file_path in items:
                # Create an instance of IShellItem for the source item
                copy_item = shell.SHCreateItemFromParsingName(
                    source_file_path, None, shell.IID_IShellItem
                )
                # Queue the copy operation
                fo.CopyItem(copy_item, folder_item, _dst_filename, None)

        # Commit
        fo.PerformOperations()

        # Remove temp folder
        shutil.rmtree(tmp_dir)

    def _prepare_windows_plugin(self, launch_args):
        """Copy plugin to TVPaint plugins and set PATH to dependencies.

        Check if the plugin in TVPaint's plugins folder exists and matches
        the current implementation version. Handles both 64-bit and 32-bit
        versions of the plugin. Path to libraries required by the plugin is
        added to the PATH environment variable.
        """

        host_executable = launch_args[0]
        executable_file = os.path.basename(host_executable)
        if "64bit" in executable_file:
            subfolder = "windows_x64"
        elif "32bit" in executable_file:
            subfolder = "windows_x86"
        else:
            raise ValueError(
                "Can't determine if executable "
                "leads to 32-bit or 64-bit TVPaint!"
            )

        plugin_files_path = get_plugin_files_path()
        # Folder for the right windows plugin files
        source_plugins_dir = os.path.join(plugin_files_path, subfolder)

        # Path to libraries (.dll) required for the plugin library
        # - additional libraries can be copied to TVPaint installation folder
        #   (next to executable) or added to PATH environment variable
        additional_libs_folder = os.path.join(
            source_plugins_dir,
            "additional_libraries"
        )
        additional_libs_folder = additional_libs_folder.replace("\\", "/")
        if additional_libs_folder not in os.environ["PATH"]:
            os.environ["PATH"] += (os.pathsep + additional_libs_folder)

        # Path to TVPaint's plugins folder (where we want to add our plugin)
        host_plugins_path = os.path.join(
            os.path.dirname(host_executable),
            "plugins"
        )

        # Files that must be copied to TVPaint's plugin folder
        plugin_dir = os.path.join(source_plugins_dir, "plugin")

        to_copy = []
        to_remove = []
        # Remove old plugin name
        deprecated_filepath = os.path.join(
            host_plugins_path, "AvalonPlugin.dll"
        )
        if os.path.exists(deprecated_filepath):
            to_remove.append(deprecated_filepath)

        for filename in os.listdir(plugin_dir):
            src_full_path = os.path.join(plugin_dir, filename)
            dst_full_path = os.path.join(host_plugins_path, filename)
            if dst_full_path in to_remove:
                to_remove.remove(dst_full_path)

            if (
                not os.path.exists(dst_full_path)
                or not filecmp.cmp(src_full_path, dst_full_path)
            ):
                to_copy.append((src_full_path, dst_full_path))

        # Skip copy if everything is done
        if not to_copy and not to_remove:
            return

        # Try to copy
        try:
            self._windows_file_process(to_copy, to_remove)
        except Exception:
            log.error("Plugin copy failed", exc_info=True)

        # Validate copy was done
        invalid_copy = []
        for src, dst in to_copy:
            if not os.path.exists(dst) or not filecmp.cmp(src, dst):
                invalid_copy.append((src, dst))

        # Validate delete was done
        invalid_remove = []
        for filepath in to_remove:
            if os.path.exists(filepath):
                invalid_remove.append(filepath)

        if not invalid_remove and not invalid_copy:
            return

        msg_parts = []
        if invalid_remove:
            msg_parts.append(
                "Failed to remove files: {}".format(", ".join(invalid_remove))
            )

        if invalid_copy:
            _invalid = [
                "\"{}\" -> \"{}\"".format(src, dst)
                for src, dst in invalid_copy
            ]
            msg_parts.append(
                "Failed to copy files: {}".format(", ".join(_invalid))
            )
        raise RuntimeError(" & ".join(msg_parts))

    def _launch_tv_paint(self, launch_args):
        flags = (
            subprocess.DETACHED_PROCESS
            | subprocess.CREATE_NEW_PROCESS_GROUP
        )
        env = os.environ.copy()
        # Remove QuickTime from PATH on windows
        # - QuickTime overrides TVPaint's ffmpeg encode/decode which may
        #   cause issues on loading
        if platform.system().lower() == "windows":
            new_path = []
            for path in env["PATH"].split(os.pathsep):
                if path and "quicktime" not in path.lower():
                    new_path.append(path)
            env["PATH"] = os.pathsep.join(new_path)

        kwargs = {
            "env": env,
            "creationflags": flags
        }
        self.process = subprocess.Popen(launch_args, **kwargs)

    def _create_routes(self):
        self.websocket_rpc = BaseTVPaintRpc(
            self, loop=self.websocket_server.loop
        )
        self.websocket_server.add_route(
            "*", "/", self.websocket_rpc.handle_request
        )

    def _start_webserver(self):
        self.websocket_server.start()
        # Make sure RPC is using same loop as websocket server
        while not self.websocket_server.server_is_running:
            time.sleep(0.1)

    def _stop_webserver(self):
        self.websocket_server.stop()

    def _exit(self, exit_code=None):
        self._stop_webserver()
        if exit_code is not None:
            self.exit_code = exit_code

    def stop(self):
        """Stop communication and currently running python process."""
        log.info("Stopping communication")
        self._exit()

    def launch(self, launch_args):
        """Prepare all required data and launch host.

        First the websocket server is prepared as communication point for
        the host; when the server is ready to use, the host is launched as
        a subprocess.
        """
        if platform.system().lower() == "windows":
            self._prepare_windows_plugin(launch_args)

        # Launch TVPaint and the websocket server.
        log.info("Launching TVPaint")
        self.websocket_server = WebSocketServer()

        self._create_routes()

        os.environ["WEBSOCKET_URL"] = "ws://localhost:{}".format(
            self.websocket_server.port
        )

        log.info("Added request handler for url: {}".format(
            os.environ["WEBSOCKET_URL"]
        ))

        self._start_webserver()

        # Start TVPaint when server is running
        self._launch_tv_paint(launch_args)

        log.info("Waiting for client connection")
        while True:
            if self.process.poll() is not None:
                log.debug("Host process is not alive. Exiting")
                self._exit(1)
                return

            if self.websocket_rpc.client_connected():
                log.info("Client has connected")
                break
            time.sleep(0.5)

        self._on_client_connect()

        api.emit("application.launched")

    def _on_client_connect(self):
        self._initial_textfile_write()

    def _initial_textfile_write(self):
        """Show popup about Write to file at start of TVPaint."""
        tmp_file = tempfile.NamedTemporaryFile(
            mode="w", prefix="a_tvp_", suffix=".txt", delete=False
        )
        tmp_file.close()
        tmp_filepath = tmp_file.name.replace("\\", "/")
        george_script = (
            "tv_writetextfile \"strict\" \"append\" \"{}\" \"empty\""
        ).format(tmp_filepath)

        result = CommunicationWrapper.execute_george(george_script)

        # Remove the file
        os.remove(tmp_filepath)

        if result is None:
            log.warning(
                "Host was probably closed before plugin was initialized."
            )
        elif result.lower() == "forbidden":
            log.warning("User didn't confirm saving files.")

    def _client(self):
        if not self.websocket_rpc:
            log.warning("Communicator's server did not start yet.")
            return None

        for client in self.websocket_rpc.clients:
            if not client.ws.closed:
                return client
        log.warning("Client is not yet connected to Communicator.")
        return None

    def client(self):
        if not self._connected_client or self._connected_client.ws.closed:
            self._connected_client = self._client()
        return self._connected_client

    def send_request(self, method, params=None):
        client = self.client()
        if not client:
            return

        return self.websocket_rpc.send_request(
            client, method, params
        )

    def send_notification(self, method, params=None):
        client = self.client()
        if not client:
            return

        self.websocket_rpc.send_notification(
            client, method, params
        )

    def execute_george(self, george_script):
        """Execute passed george script in TVPaint."""
        return self.send_request(
            "execute_george", [george_script]
        )

    def execute_george_through_file(self, george_script):
        """Execute george script with temp file.

        Allows executing a multiline george script without stopping the
        websocket client.

        On windows make sure the script does not contain paths with
        backward slashes, TVPaint won't execute it properly in that case.

        Args:
            george_script (str): George script to execute. May be multiline.
        """
        temporary_file = tempfile.NamedTemporaryFile(
            mode="w", prefix="a_tvp_", suffix=".grg", delete=False
        )
        temporary_file.write(george_script)
        temporary_file.close()
        temp_file_path = temporary_file.name.replace("\\", "/")
        self.execute_george("tv_runscript {}".format(temp_file_path))
        os.remove(temp_file_path)

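The temp-file handoff in `execute_george_through_file` can be shown without TVPaint: write the script, normalize the path to forward slashes for the `tv_runscript` command, then delete the file. A sketch with the actual TVPaint call stubbed out:

```python
import os
import tempfile

george_script = "\n".join((
    "tv_LayerCurrentID",
    "current_layer_id = result",
))

# Write the multiline script to a temp file TVPaint can read.
temporary_file = tempfile.NamedTemporaryFile(
    mode="w", prefix="a_tvp_", suffix=".grg", delete=False
)
temporary_file.write(george_script)
temporary_file.close()

# TVPaint chokes on backslash paths, so normalize before sending.
temp_file_path = temporary_file.name.replace("\\", "/")
command = "tv_runscript {}".format(temp_file_path)

# The real code now calls self.execute_george(command); clean up after.
os.remove(temporary_file.name)
```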
class QtCommunicator(BaseCommunicator):
    menu_definitions = {
        "title": "OpenPype Tools",
        "menu_items": [
            {
                "callback": "workfiles_tool",
                "label": "Workfiles",
                "help": "Open workfiles tool"
            }, {
                "callback": "loader_tool",
                "label": "Load",
                "help": "Open loader tool"
            }, {
                "callback": "creator_tool",
                "label": "Create",
                "help": "Open creator tool"
            }, {
                "callback": "scene_inventory_tool",
                "label": "Scene inventory",
                "help": "Open scene inventory tool"
            }, {
                "callback": "publish_tool",
                "label": "Publish",
                "help": "Open publisher"
            }, {
                "callback": "library_loader_tool",
                "label": "Library",
                "help": "Open library loader tool"
            }, {
                "callback": "subset_manager_tool",
                "label": "Subset Manager",
                "help": "Open subset manager tool"
            }, {
                "callback": "experimental_tools",
                "label": "Experimental tools",
                "help": "Open experimental tools dialog"
            }
        ]
    }

    def __init__(self, qt_app):
        super().__init__()
        self.callback_queue = Queue()
        self.qt_app = qt_app

    def _create_routes(self):
        self.websocket_rpc = QtTVPaintRpc(
            self, loop=self.websocket_server.loop
        )
        self.websocket_server.add_route(
            "*", "/", self.websocket_rpc.handle_request
        )

    def execute_in_main_thread(self, main_thread_item, wait=True):
        """Add `MainThreadItem` to callback queue and wait for result."""
        self.callback_queue.put(main_thread_item)
        if wait:
            return main_thread_item.wait()
        return

    async def async_execute_in_main_thread(self, main_thread_item, wait=True):
        """Add `MainThreadItem` to callback queue and wait for result."""
        self.callback_queue.put(main_thread_item)
        if wait:
            return await main_thread_item.async_wait()

    def main_thread_listen(self):
        """Get last `MainThreadItem` from queue.

        Must be called from main thread.

        Method checks if host process is still running as it may cause
        issues if not.
        """
        # Check if host is still running
        if self.process.poll() is not None:
            self._exit()
            return None

        if self.callback_queue.empty():
            return None
        return self.callback_queue.get()

    def _on_client_connect(self):
        super()._on_client_connect()
        self._build_menu()

    def _build_menu(self):
        self.send_request(
            "define_menu", [self.menu_definitions]
        )

    def _exit(self, *args, **kwargs):
        super()._exit(*args, **kwargs)
        api.emit("application.exit")
        self.qt_app.exit(self.exit_code)
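`main_thread_listen` above is a non-blocking queue poll driven by the Qt timer set up in `launch_script.py` below. The queue behaviour on its own looks like this (a sketch, separate from the class):

```python
from queue import Queue

callback_queue = Queue()
callback_queue.put("item-1")

def listen(queue):
    """Return the next queued item, or None when the queue is empty."""
    if queue.empty():
        return None
    return queue.get()

first = listen(callback_queue)   # "item-1"
second = listen(callback_queue)  # None, the queue is now empty
```

Returning `None` instead of blocking lets the Qt event loop keep spinning between timer ticks.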
84 openpype/hosts/tvpaint/api/launch_script.py Normal file

@@ -0,0 +1,84 @@
import os
import sys
import signal
import traceback
import ctypes
import platform
import logging

from Qt import QtWidgets, QtCore, QtGui

from avalon import api
from openpype import style
from openpype.hosts.tvpaint.api.communication_server import (
    CommunicationWrapper
)
from openpype.hosts.tvpaint import api as tvpaint_host

log = logging.getLogger(__name__)


def safe_excepthook(*args):
    traceback.print_exception(*args)


def main(launch_args):
    # Be sure server won't crash at any moment but just print traceback
    sys.excepthook = safe_excepthook

    # Create QtApplication for tools
    # - QApplication is also the main thread/event loop of the server
    qt_app = QtWidgets.QApplication([])

    # Execute pipeline installation
    api.install(tvpaint_host)

    # Create Communicator object and trigger launch
    # - this must be done before anything is processed
    communicator = CommunicationWrapper.create_qt_communicator(qt_app)
    communicator.launch(launch_args)

    def process_in_main_thread():
        """Execution of `MainThreadItem`."""
        item = communicator.main_thread_listen()
        if item:
            item.execute()

    timer = QtCore.QTimer()
    timer.setInterval(100)
    timer.timeout.connect(process_in_main_thread)
    timer.start()

    # Register terminal signal handler
    def signal_handler(*_args):
        print("You pressed Ctrl+C. Process ended.")
        communicator.stop()

    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    qt_app.setQuitOnLastWindowClosed(False)
    qt_app.setStyleSheet(style.load_stylesheet())

    # Load avalon icon
    icon_path = style.app_icon_path()
    if icon_path:
        icon = QtGui.QIcon(icon_path)
        qt_app.setWindowIcon(icon)

    # Set application name to be able to show application icon in task bar
    if platform.system().lower() == "windows":
        ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(
            u"WebsocketServer"
        )

    # Run Qt application event processing
    sys.exit(qt_app.exec_())


if __name__ == "__main__":
    args = list(sys.argv)
    if os.path.abspath(__file__) == os.path.normpath(args[0]):
        # Pop path to script
        args.pop(0)
    main(args)
|
@ -1,85 +1,534 @@
|
|||
from PIL import Image
|
||||
import os
|
||||
import logging
|
||||
import tempfile
|
||||
|
||||
import avalon.io
|
||||
from avalon.tvpaint.lib import execute_george
|
||||
|
||||
from . import CommunicationWrapper
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def composite_images(input_image_paths, output_filepath):
|
||||
"""Composite images in order from passed list.
|
||||
def execute_george(george_script, communicator=None):
|
||||
if not communicator:
|
||||
communicator = CommunicationWrapper.communicator
|
||||
return communicator.execute_george(george_script)
|
||||
|
||||
Raises:
|
||||
ValueError: When entered list is empty.
|
||||
|
||||
def execute_george_through_file(george_script, communicator=None):
|
||||
"""Execute george script with temp file.
|
||||
|
||||
Allows to execute multiline george script without stopping websocket
|
||||
client.
|
||||
|
||||
On windows make sure script does not contain paths with backwards
|
||||
slashes in paths, TVPaint won't execute properly in that case.
|
||||
|
||||
Args:
|
||||
george_script (str): George script to execute. May be multilined.
|
||||
"""
|
||||
if not input_image_paths:
|
||||
raise ValueError("Nothing to composite.")
|
||||
if not communicator:
|
||||
communicator = CommunicationWrapper.communicator
|
||||
|
||||
img_obj = None
|
||||
for image_filepath in input_image_paths:
|
||||
_img_obj = Image.open(image_filepath)
|
||||
if img_obj is None:
|
||||
img_obj = _img_obj
|
||||
else:
|
||||
img_obj.alpha_composite(_img_obj)
|
||||
img_obj.save(output_filepath)
|
||||
return communicator.execute_george_through_file(george_script)


def parse_layers_data(data):
    """Parse layers data loaded in 'get_layers_data'."""
    layers = []
    layers_raw = data.split("\n")
    for layer_raw in layers_raw:
        layer_raw = layer_raw.strip()
        if not layer_raw:
            continue
        (
            layer_id, group_id, visible, position, opacity, name,
            layer_type,
            frame_start, frame_end, prelighttable, postlighttable,
            selected, editable, sencil_state
        ) = layer_raw.split("|")
        layer = {
            "layer_id": int(layer_id),
            "group_id": int(group_id),
            "visible": visible == "ON",
            "position": int(position),
            "opacity": int(opacity),
            "name": name,
            "type": layer_type,
            "frame_start": int(frame_start),
            "frame_end": int(frame_end),
            "prelighttable": prelighttable == "1",
            "postlighttable": postlighttable == "1",
            "selected": selected == "1",
            "editable": editable == "1",
            "sencil_state": sencil_state
        }
        layers.append(layer)
    return layers
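The "|"-separated record format parsed above can be exercised with a minimal standalone sketch; the helper name and the sample record are hypothetical, not taken from a real TVPaint workfile:

```python
# Standalone re-implementation of parsing one layer record as written by
# the george script ("|"-separated, visibility as "ON"/"OFF",
# booleans as "1"/"0").
def parse_layer_record(record):
    (layer_id, group_id, visible, position, opacity, name, layer_type,
     frame_start, frame_end, prelighttable, postlighttable,
     selected, editable, sencil_state) = record.strip().split("|")
    return {
        "layer_id": int(layer_id),
        "group_id": int(group_id),
        "visible": visible == "ON",
        "position": int(position),
        "opacity": int(opacity),
        "name": name,
        "type": layer_type,
        "frame_start": int(frame_start),
        "frame_end": int(frame_end),
        "prelighttable": prelighttable == "1",
        "postlighttable": postlighttable == "1",
        "selected": selected == "1",
        "editable": editable == "1",
        "sencil_state": sencil_state,
    }


# Hypothetical record - note the layer name may contain spaces,
# which is safe because the separator is "|".
layer = parse_layer_record("10|0|ON|1|100|BG Paint|CEL|0|24|0|0|1|1|off")
print(layer["name"], layer["frame_end"])  # -> BG Paint 24
```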


def get_layers_data_george_script(output_filepath, layer_ids=None):
    """Prepare george script which will collect all layers from workfile."""
    output_filepath = output_filepath.replace("\\", "/")
    george_script_lines = [
        # Variable containing full path to output file
        "output_path = \"{}\"".format(output_filepath),
        # Get Current Layer ID
        "tv_LayerCurrentID",
        "current_layer_id = result"
    ]
    # Script part for getting and storing layer information to temp file
    layer_data_getter = (
        # Get information about layer's group
        "tv_layercolor \"get\" layer_id",
        "group_id = result",
        "tv_LayerInfo layer_id",
        (
            "PARSE result visible position opacity name"
            " type startFrame endFrame prelighttable postlighttable"
            " selected editable sencilState"
        ),
        # Check if layer ID matches `tv_LayerCurrentID`
        "IF CMP(current_layer_id, layer_id)==1",
        # - mark layer as selected if layer id matches the current layer id
        "selected=1",
        "END",
        # Prepare line with data separated by "|"
        (
            "line = layer_id'|'group_id'|'visible'|'position'|'opacity'|'"
            "name'|'type'|'startFrame'|'endFrame'|'prelighttable'|'"
            "postlighttable'|'selected'|'editable'|'sencilState"
        ),
        # Write data to output file
        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line",
    )

    # Collect data for all layers if layers are not specified
    if layer_ids is None:
        george_script_lines.extend((
            # Layer loop variables
            "loop = 1",
            "idx = 0",
            # Layers loop
            "WHILE loop",
            "tv_LayerGetID idx",
            "layer_id = result",
            "idx = idx + 1",
            # Stop loop if layer_id is "NONE"
            "IF CMP(layer_id, \"NONE\")==1",
            "loop = 0",
            "ELSE",
            *layer_data_getter,
            "END",
            "END"
        ))
    else:
        for layer_id in layer_ids:
            george_script_lines.append("layer_id = {}".format(layer_id))
            george_script_lines.extend(layer_data_getter)

    return "\n".join(george_script_lines)


def layers_data(layer_ids=None, communicator=None):
    """Backwards compatible function of 'get_layers_data'."""
    return get_layers_data(layer_ids, communicator)


def get_layers_data(layer_ids=None, communicator=None):
    """Collect all layers information from currently opened workfile."""
    output_file = tempfile.NamedTemporaryFile(
        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
    )
    output_file.close()
    if layer_ids is not None and isinstance(layer_ids, int):
        layer_ids = [layer_ids]

    output_filepath = output_file.name

    george_script = get_layers_data_george_script(output_filepath, layer_ids)

    execute_george_through_file(george_script, communicator)

    with open(output_filepath, "r") as stream:
        data = stream.read()

    output = parse_layers_data(data)
    os.remove(output_filepath)
    return output


def parse_group_data(data):
    """Parse group data collected in 'get_groups_data'."""
    output = []
    groups_raw = data.split("\n")
    for group_raw in groups_raw:
        group_raw = group_raw.strip()
        if not group_raw:
            continue

        parts = group_raw.split(" ")
        # Check the length and concatenate the last 2 items until the length
        # matches - this happens if the name contains spaces
        while len(parts) > 6:
            last_item = parts.pop(-1)
            parts[-1] = " ".join([parts[-1], last_item])
        clip_id, group_id, red, green, blue, name = parts

        group = {
            "group_id": int(group_id),
            "name": name,
            "clip_id": int(clip_id),
            "red": int(red),
            "green": int(green),
            "blue": int(blue),
        }
        output.append(group)
    return output
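The "re-join trailing parts" trick above deserves a standalone illustration: the group record is space-separated, but the trailing name may itself contain spaces, so everything past the first five fields is folded back together. The helper name and sample record below are hypothetical:

```python
# Standalone sketch of parsing one space-separated group record whose
# trailing name field may contain spaces.
def parse_group_record(record):
    parts = record.strip().split(" ")
    # Fold surplus trailing parts back into the name field
    while len(parts) > 6:
        last_item = parts.pop(-1)
        parts[-1] = " ".join([parts[-1], last_item])
    clip_id, group_id, red, green, blue, name = parts
    return {
        "clip_id": int(clip_id),
        "group_id": int(group_id),
        "color": (int(red), int(green), int(blue)),
        "name": name,
    }


# Hypothetical record - the name "Rough Anim" contains a space
group = parse_group_record("0 3 255 128 0 Rough Anim")
print(group["name"])  # -> Rough Anim
```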


def groups_data(communicator=None):
    """Backwards compatible function of 'get_groups_data'."""
    return get_groups_data(communicator)


def get_groups_data(communicator=None):
    """Information about groups from current workfile."""
    output_file = tempfile.NamedTemporaryFile(
        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
    )
    output_file.close()

    output_filepath = output_file.name.replace("\\", "/")
    george_script_lines = (
        # Variable containing full path to output file
        "output_path = \"{}\"".format(output_filepath),
        "loop = 1",
        "FOR idx = 1 TO 12",
        "tv_layercolor \"getcolor\" 0 idx",
        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' result",
        "END"
    )
    george_script = "\n".join(george_script_lines)
    execute_george_through_file(george_script, communicator)

    with open(output_filepath, "r") as stream:
        data = stream.read()

    output = parse_group_data(data)
    os.remove(output_filepath)
    return output


def get_layers_pre_post_behavior(layer_ids, communicator=None):
    """Collect data about pre and post behavior of layer ids.

    Pre and post behavior is an enumerator of possible values:
    - "none"
    - "repeat" / "loop"
    - "pingpong"
    - "hold"

    Example output:
    ```json
    {
        0: {
            "pre": "none",
            "post": "loop"
        }
    }
    ```

    Returns:
        dict: Key is layer id, value is dictionary with "pre" and "post"
            keys.
    """
    # Skip if is empty
    if not layer_ids:
        return {}

    # Auto convert to list
    if not isinstance(layer_ids, (list, set, tuple)):
        layer_ids = [layer_ids]

    # Prepare temp file
    output_file = tempfile.NamedTemporaryFile(
        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
    )
    output_file.close()

    output_filepath = output_file.name.replace("\\", "/")
    george_script_lines = [
        # Variable containing full path to output file
        "output_path = \"{}\"".format(output_filepath),
    ]
    for layer_id in layer_ids:
        george_script_lines.extend([
            "layer_id = {}".format(layer_id),
            "tv_layerprebehavior layer_id",
            "pre_beh = result",
            "tv_layerpostbehavior layer_id",
            "post_beh = result",
            "line = layer_id'|'pre_beh'|'post_beh",
            "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line"
        ])

    george_script = "\n".join(george_script_lines)
    execute_george_through_file(george_script, communicator)

    # Read data
    with open(output_filepath, "r") as stream:
        data = stream.read()

    # Remove temp file
    os.remove(output_filepath)

    # Parse data
    output = {}
    raw_lines = data.split("\n")
    for raw_line in raw_lines:
        line = raw_line.strip()
        if not line:
            continue
        parts = line.split("|")
        if len(parts) != 3:
            continue
        layer_id, pre_beh, post_beh = parts
        output[int(layer_id)] = {
            "pre": pre_beh.lower(),
            "post": post_beh.lower()
        }
    return output


def get_layers_exposure_frames(layer_ids, layers_data=None, communicator=None):
    """Get exposure frames.

    Simply said, returns frames where keyframes are. Recognized with george
    function `tv_exposureinfo` returning "Head".

    Args:
        layer_ids (list): Ids of layers for which exposure frames should
            be looked up.
        layers_data (list): Precollected layers data. If not passed then
            'get_layers_data' is used.
        communicator (BaseCommunicator): Communicator used for communication
            with TVPaint.

    Returns:
        dict: Frames where exposure is set to "Head" by layer id.
    """
    if layers_data is None:
        layers_data = get_layers_data(layer_ids)
    _layers_by_id = {
        layer["layer_id"]: layer
        for layer in layers_data
    }
    layers_by_id = {
        layer_id: _layers_by_id.get(layer_id)
        for layer_id in layer_ids
    }
    tmp_file = tempfile.NamedTemporaryFile(
        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
    )
    tmp_file.close()
    tmp_output_path = tmp_file.name.replace("\\", "/")
    george_script_lines = [
        "output_path = \"{}\"".format(tmp_output_path)
    ]

    output = {}
    layer_id_mapping = {}
    for layer_id, layer_data in layers_by_id.items():
        layer_id_mapping[str(layer_id)] = layer_id
        output[layer_id] = []
        if not layer_data:
            continue
        first_frame = layer_data["frame_start"]
        last_frame = layer_data["frame_end"]
        george_script_lines.extend([
            "line = \"\"",
            "layer_id = {}".format(layer_id),
            "line = line''layer_id",
            "tv_layerset layer_id",
            "frame = {}".format(first_frame),
            "WHILE (frame <= {})".format(last_frame),
            "tv_exposureinfo frame",
            "exposure = result",
            "IF (CMP(exposure, \"Head\") == 1)",
            "line = line'|'frame",
            "END",
            "frame = frame + 1",
            "END",
            "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line"
        ])

    execute_george_through_file("\n".join(george_script_lines), communicator)

    with open(tmp_output_path, "r") as stream:
        data = stream.read()

    os.remove(tmp_output_path)

    lines = []
    for line in data.split("\n"):
        line = line.strip()
        if line:
            lines.append(line)

    for line in lines:
        line_items = list(line.split("|"))
        layer_id = line_items.pop(0)
        _layer_id = layer_id_mapping[layer_id]
        output[_layer_id] = [int(frame) for frame in line_items]

    return output
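The frame-by-frame "Head" detection that the george loop performs can be sketched in pure Python; `tv_exposureinfo` is simulated here with a dict mapping frame to exposure state, and the sample data is hypothetical:

```python
# Standalone sketch of the "Head" detection loop: walk the frame range and
# collect every frame whose simulated exposure info is "Head" (a keyframe).
def exposure_frames(exposure_by_frame, first_frame, last_frame):
    frames = []
    for frame in range(first_frame, last_frame + 1):
        if exposure_by_frame.get(frame) == "Head":
            frames.append(frame)
    return frames


# Hypothetical per-frame states; non-"Head" values stand in for extensions
simulated = {0: "Head", 1: "Ext", 2: "Ext", 3: "Head", 4: "Ext", 5: "Head"}
print(exposure_frames(simulated, 0, 5))  # -> [0, 3, 5]
```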


def get_exposure_frames(
    layer_id, first_frame=None, last_frame=None, communicator=None
):
    """Get exposure frames.

    Simply said, returns frames where keyframes are. Recognized with george
    function `tv_exposureinfo` returning "Head".

    Args:
        layer_id (int): Id of a layer for which exposure frames should
            be looked up.
        first_frame (int): Frame from which to look for exposure frames.
            Layer's first frame is used if not entered.
        last_frame (int): Last frame where to look for exposure frames.
            Layer's last frame is used if not entered.

    Returns:
        list: Frames where exposure is set to "Head".
    """
    if first_frame is None or last_frame is None:
        layer = layers_data(layer_id)[0]
        if first_frame is None:
            first_frame = layer["frame_start"]
        if last_frame is None:
            last_frame = layer["frame_end"]

    tmp_file = tempfile.NamedTemporaryFile(
        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
    )
    tmp_file.close()
    tmp_output_path = tmp_file.name.replace("\\", "/")
    george_script_lines = [
        "tv_layerset {}".format(layer_id),
        "output_path = \"{}\"".format(tmp_output_path),
        "output = \"\"",
        "frame = {}".format(first_frame),
        "WHILE (frame <= {})".format(last_frame),
        "tv_exposureinfo frame",
        "exposure = result",
        "IF (CMP(exposure, \"Head\") == 1)",
        "IF (CMP(output, \"\") == 1)",
        "output = output''frame",
        "ELSE",
        "output = output'|'frame",
        "END",
        "END",
        "frame = frame + 1",
        "END",
        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' output"
    ]

    execute_george_through_file("\n".join(george_script_lines), communicator)

    with open(tmp_output_path, "r") as stream:
        data = stream.read()

    os.remove(tmp_output_path)

    lines = []
    for line in data.split("\n"):
        line = line.strip()
        if line:
            lines.append(line)

    exposure_frames = []
    for line in lines:
        for frame in line.split("|"):
            exposure_frames.append(int(frame))
    return exposure_frames


def get_scene_data(communicator=None):
    """Scene data of currently opened scene.

    Result contains resolution, pixel aspect, fps, mark in/out with states,
    frame start and background color.

    Returns:
        dict: Scene data collected in many ways.
    """
    workfile_info = execute_george("tv_projectinfo", communicator)
    workfile_info_parts = workfile_info.split(" ")

    # Project frame start - not used
    workfile_info_parts.pop(-1)
    field_order = workfile_info_parts.pop(-1)
    frame_rate = float(workfile_info_parts.pop(-1))
    pixel_aspect = float(workfile_info_parts.pop(-1))
    height = int(workfile_info_parts.pop(-1))
    width = int(workfile_info_parts.pop(-1))

    # Marks return as "{frame - 1} {state} ", example "0 set".
    result = execute_george("tv_markin", communicator)
    mark_in_frame, mark_in_state, _ = result.split(" ")

    result = execute_george("tv_markout", communicator)
    mark_out_frame, mark_out_state, _ = result.split(" ")

    start_frame = execute_george("tv_startframe", communicator)
    return {
        "width": width,
        "height": height,
        "pixel_aspect": pixel_aspect,
        "fps": frame_rate,
        "field_order": field_order,
        "mark_in": int(mark_in_frame),
        "mark_in_state": mark_in_state,
        "mark_in_set": mark_in_state == "set",
        "mark_out": int(mark_out_frame),
        "mark_out_state": mark_out_state,
        "mark_out_set": mark_out_state == "set",
        "start_frame": int(start_frame),
        "bg_color": get_scene_bg_color(communicator)
    }
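The reason the fields are popped from the end of the `tv_projectinfo` result is that the leading project path may itself contain spaces. A standalone sketch of that parsing, with a hypothetical result string:

```python
# Standalone sketch of the tv_projectinfo parsing: pop fixed fields from
# the end so a space-containing project path at the front cannot shift them.
def parse_projectinfo(result):
    parts = result.split(" ")
    parts.pop(-1)                       # project frame start - unused here
    field_order = parts.pop(-1)
    frame_rate = float(parts.pop(-1))
    pixel_aspect = float(parts.pop(-1))
    height = int(parts.pop(-1))
    width = int(parts.pop(-1))
    return width, height, pixel_aspect, frame_rate, field_order


# Hypothetical result - note the spaces inside the quoted project path
sample = "\"C:/my projects/shot 010.tvpp\" 1920 1080 1.0 25.0 NONE 0"
print(parse_projectinfo(sample))  # -> (1920, 1080, 1.0, 25.0, 'NONE')
```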


def get_scene_bg_color(communicator=None):
    """Background color set on scene.

    Is important for review exporting where scene bg color is used as
    background.
    """
    output_file = tempfile.NamedTemporaryFile(
        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
    )
    output_file.close()
    output_filepath = output_file.name.replace("\\", "/")
    george_script_lines = [
        # Variable containing full path to output file
        "output_path = \"{}\"".format(output_filepath),
        "tv_background",
        "bg_color = result",
        # Write data to output file
        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' bg_color"
    ]

    george_script = "\n".join(george_script_lines)
    execute_george_through_file(george_script, communicator)

    with open(output_filepath, "r") as stream:
        data = stream.read()

    os.remove(output_filepath)
    data = data.strip()
    if not data:
        return None
    return data.split(" ")

491  openpype/hosts/tvpaint/api/pipeline.py  (new file)
@@ -0,0 +1,491 @@
import os
import json
import contextlib
import tempfile
import logging

import requests

import pyblish.api
import avalon.api

from avalon import io
from avalon.pipeline import AVALON_CONTAINER_ID

from openpype.hosts import tvpaint
from openpype.api import get_current_project_settings

from .lib import (
    execute_george,
    execute_george_through_file
)

log = logging.getLogger(__name__)

HOST_DIR = os.path.dirname(os.path.abspath(tvpaint.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")

METADATA_SECTION = "avalon"
SECTION_NAME_CONTEXT = "context"
SECTION_NAME_INSTANCES = "instances"
SECTION_NAME_CONTAINERS = "containers"
# Maximum length of metadata chunk string
# TODO find out the max (500 is safe enough)
TVPAINT_CHUNK_LENGTH = 500

"""TVPaint's Metadata

Metadata are stored to TVPaint's workfile.

Workfile works similar to an .ini file but has a few limitations. The most
important limitation is that a value under a key has limited length. Due to
this limitation each metadata section/key stores the number of "subkeys"
that are related to the section.

Example:
    Metadata key `"instances"` may have stored value "2". In that case it is
    expected that there are also keys `["instances0", "instances1"]`.

Workfile data looks like:
```
[avalon]
instances0=[{{__dq__}id{__dq__}: {__dq__}pyblish.avalon.instance{__dq__...
instances1=...more data...
instances=2
```
"""
|
||||
|
||||
|
||||
def install():
|
||||
"""Install Maya-specific functionality of avalon-core.
|
||||
|
||||
This function is called automatically on calling `api.install(maya)`.
|
||||
|
||||
"""
|
||||
log.info("OpenPype - Installing TVPaint integration")
|
||||
io.install()
|
||||
|
||||
# Create workdir folder if does not exist yet
|
||||
workdir = io.Session["AVALON_WORKDIR"]
|
||||
if not os.path.exists(workdir):
|
||||
os.makedirs(workdir)
|
||||
|
||||
pyblish.api.register_host("tvpaint")
|
||||
pyblish.api.register_plugin_path(PUBLISH_PATH)
|
||||
avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
|
||||
avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
|
||||
|
||||
registered_callbacks = (
|
||||
pyblish.api.registered_callbacks().get("instanceToggled") or []
|
||||
)
|
||||
if on_instance_toggle not in registered_callbacks:
|
||||
pyblish.api.register_callback("instanceToggled", on_instance_toggle)
|
||||
|
||||
avalon.api.on("application.launched", initial_launch)
|
||||
avalon.api.on("application.exit", application_exit)


def uninstall():
    """Uninstall TVPaint-specific functionality of avalon-core.

    This function is called automatically on calling `api.uninstall()`.
    """
    log.info("OpenPype - Uninstalling TVPaint integration")
    pyblish.api.deregister_host("tvpaint")
    pyblish.api.deregister_plugin_path(PUBLISH_PATH)
    avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH)
    avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH)


def containerise(
    name, namespace, members, context, loader, current_containers=None
):
    """Add new container to metadata.

    Args:
        name (str): Container name.
        namespace (str): Container namespace.
        members (list): List of members that were loaded and belong
            to the container (layer names).
        context (dict): Load context containing the representation document.
        loader (str): Name of the loader which created the container.
        current_containers (list): Preloaded containers. Should be used only
            on update/switch when containers were modified during the
            process.

    Returns:
        dict: Container data stored to workfile metadata.
    """
    container_data = {
        "schema": "openpype:container-2.0",
        "id": AVALON_CONTAINER_ID,
        "members": members,
        "name": name,
        "namespace": namespace,
        "loader": str(loader),
        "representation": str(context["representation"]["_id"])
    }
    if current_containers is None:
        current_containers = ls()

    # Add container to containers list
    current_containers.append(container_data)

    # Store data to metadata
    write_workfile_metadata(SECTION_NAME_CONTAINERS, current_containers)

    return container_data


@contextlib.contextmanager
def maintained_selection():
    # TODO implement logic
    try:
        yield
    finally:
        pass


def split_metadata_string(text, chunk_length=None):
    """Split string by length.

    Split text to chunks by entered length.
    Example:
        ```python
        text = "ABCDEFGHIJKLM"
        result = split_metadata_string(text, 3)
        print(result)
        >>> ['ABC', 'DEF', 'GHI', 'JKL', 'M']
        ```

    Args:
        text (str): Text that will be split into chunks.
        chunk_length (int): Single chunk size. Default chunk_length is
            set to global variable `TVPAINT_CHUNK_LENGTH`.

    Returns:
        list: List of strings with at least one item.
    """
    if chunk_length is None:
        chunk_length = TVPAINT_CHUNK_LENGTH
    chunks = []
    for idx in range(chunk_length, len(text) + chunk_length, chunk_length):
        start_idx = idx - chunk_length
        chunks.append(text[start_idx:idx])
    return chunks


def get_workfile_metadata_string_for_keys(metadata_keys):
    """Read metadata for specific keys from current project workfile.

    All values from entered keys are stored to a single string without
    separator.

    Function is designed to help get all values for one metadata key at
    once. So the order of passed keys matters.

    Args:
        metadata_keys (list, str): Metadata keys for which data should be
            retrieved. Order of keys matters! It is possible to enter only
            a single key as string.
    """
    # Add ability to pass only single key
    if isinstance(metadata_keys, str):
        metadata_keys = [metadata_keys]

    output_file = tempfile.NamedTemporaryFile(
        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
    )
    output_file.close()
    output_filepath = output_file.name.replace("\\", "/")

    george_script_parts = []
    george_script_parts.append(
        "output_path = \"{}\"".format(output_filepath)
    )
    # Store data for each index of metadata key
    for metadata_key in metadata_keys:
        george_script_parts.append(
            "tv_readprojectstring \"{}\" \"{}\" \"\"".format(
                METADATA_SECTION, metadata_key
            )
        )
        george_script_parts.append(
            "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' result"
        )

    # Execute the script
    george_script = "\n".join(george_script_parts)
    execute_george_through_file(george_script)

    # Load data from temp file
    with open(output_filepath, "r") as stream:
        file_content = stream.read()

    # Remove `\n` from content
    output_string = file_content.replace("\n", "")

    # Delete temp file
    os.remove(output_filepath)

    return output_string


def get_workfile_metadata_string(metadata_key):
    """Read metadata for specific key from current project workfile."""
    result = get_workfile_metadata_string_for_keys([metadata_key])
    if not result:
        return None

    stripped_result = result.strip()
    if not stripped_result:
        return None

    # NOTE Backwards compatibility for when the metadata key did not store
    # the count of key indexes but the value itself
    # NOTE We don't have to care about negative values with `isdecimal` check
    if not stripped_result.isdecimal():
        metadata_string = result
    else:
        keys = []
        for idx in range(int(stripped_result)):
            keys.append("{}{}".format(metadata_key, idx))
        metadata_string = get_workfile_metadata_string_for_keys(keys)

    # Replace quote placeholders with their values
    metadata_string = (
        metadata_string
        .replace("{__sq__}", "'")
        .replace("{__dq__}", "\"")
    )
    return metadata_string


def get_workfile_metadata(metadata_key, default=None):
    """Read and parse metadata for specific key from current project workfile.

    Pipeline uses this function to store loaded and created instances within
    keys stored in `SECTION_NAME_INSTANCES` and `SECTION_NAME_CONTAINERS`
    constants.

    Args:
        metadata_key (str): Key defining which metadata should be read. It is
            expected the value contains a json serializable string.
    """
    if default is None:
        default = []

    json_string = get_workfile_metadata_string(metadata_key)
    if json_string:
        try:
            return json.loads(json_string)
        except json.decoder.JSONDecodeError:
            # TODO remove when backwards compatibility of storing metadata
            # will be removed
            print((
                "Fixed invalid metadata in workfile."
                " Not serializable string was: {}"
            ).format(json_string))
            write_workfile_metadata(metadata_key, default)
    return default


def write_workfile_metadata(metadata_key, value):
    """Write metadata for specific key into current project workfile.

    George script has a specific way how to work with quotes, which is
    solved automatically by this function.

    Args:
        metadata_key (str): Key defining under which key the value will be
            stored.
        value (dict,list,str): Data to store, they must be json serializable.
    """
    if isinstance(value, (dict, list)):
        value = json.dumps(value)

    if not value:
        value = ""

    # Handle quotes in dumped json string
    # - replace single and double quotes with placeholders
    value = (
        value
        .replace("'", "{__sq__}")
        .replace("\"", "{__dq__}")
    )
    chunks = split_metadata_string(value)
    chunks_len = len(chunks)

    write_template = "tv_writeprojectstring \"{}\" \"{}\" \"{}\""
    george_script_parts = []
    # Add information about chunks length to metadata key itself
    george_script_parts.append(
        write_template.format(METADATA_SECTION, metadata_key, chunks_len)
    )
    # Add chunk values to indexed metadata keys
    for idx, chunk_value in enumerate(chunks):
        sub_key = "{}{}".format(metadata_key, idx)
        george_script_parts.append(
            write_template.format(METADATA_SECTION, sub_key, chunk_value)
        )

    george_script = "\n".join(george_script_parts)

    return execute_george_through_file(george_script)
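The quote-placeholder scheme used above (and undone in `get_workfile_metadata_string`) is a simple round trip worth sketching on its own: quotes in the json string are swapped for `{__sq__}`/`{__dq__}` so they can be embedded safely in a george script, then swapped back on read. The helper names below are hypothetical:

```python
import json


def escape_quotes(value):
    # Make the string safe to embed inside a george script argument
    return value.replace("'", "{__sq__}").replace("\"", "{__dq__}")


def unescape_quotes(value):
    # Restore the original quotes on read
    return value.replace("{__sq__}", "'").replace("{__dq__}", "\"")


payload = json.dumps({"id": "pyblish.avalon.instance"})
escaped = escape_quotes(payload)
assert "\"" not in escaped and "'" not in escaped
assert json.loads(unescape_quotes(escaped)) == {
    "id": "pyblish.avalon.instance"
}
```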


def get_current_workfile_context():
    """Return context in which was workfile saved."""
    return get_workfile_metadata(SECTION_NAME_CONTEXT, {})


def save_current_workfile_context(context):
    """Save context which was used to create a workfile."""
    return write_workfile_metadata(SECTION_NAME_CONTEXT, context)


def remove_instance(instance):
    """Remove instance from current workfile metadata."""
    current_instances = get_workfile_metadata(SECTION_NAME_INSTANCES)
    instance_id = instance.get("uuid")
    found_idx = None
    if instance_id:
        for idx, _inst in enumerate(current_instances):
            if _inst["uuid"] == instance_id:
                found_idx = idx
                break

    if found_idx is None:
        return
    current_instances.pop(found_idx)
    write_instances(current_instances)


def list_instances():
    """List all created instances from current workfile."""
    return get_workfile_metadata(SECTION_NAME_INSTANCES)


def write_instances(data):
    return write_workfile_metadata(SECTION_NAME_INSTANCES, data)


# Backwards compatibility
def _write_instances(*args, **kwargs):
    return write_instances(*args, **kwargs)


def ls():
    return get_workfile_metadata(SECTION_NAME_CONTAINERS)


def on_instance_toggle(instance, old_value, new_value):
    """Update instance data in workfile on publish toggle."""
    # Review may not have a real instance in workfile metadata
    if not instance.data.get("uuid"):
        return

    instance_id = instance.data["uuid"]
    found_idx = None
    current_instances = list_instances()
    for idx, workfile_instance in enumerate(current_instances):
        if workfile_instance["uuid"] == instance_id:
            found_idx = idx
            break

    if found_idx is None:
        return

    if "active" in current_instances[found_idx]:
        current_instances[found_idx]["active"] = new_value
        write_instances(current_instances)


def initial_launch():
    # Set up project settings if it's the template that's launched.
    # TODO also check for template creation when it's possible to define
    # templates
    last_workfile = os.environ.get("AVALON_LAST_WORKFILE")
    if not last_workfile or os.path.exists(last_workfile):
        return

    log.info("Setting up project...")
    set_context_settings()


def application_exit():
    data = get_current_project_settings()
    stop_timer = data["tvpaint"]["stop_timer_on_application_exit"]

    if not stop_timer:
        return

    # Stop application timer.
    webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
    rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url)
    requests.post(rest_api_url)


def set_context_settings(asset_doc=None):
    """Set workfile settings by asset document data.

    Change fps, resolution and frame start/end.
    """
    if asset_doc is None:
        # Use current session asset if not passed
        asset_doc = avalon.io.find_one({
            "type": "asset",
            "name": avalon.io.Session["AVALON_ASSET"]
        })

    project_doc = avalon.io.find_one({"type": "project"})

    framerate = asset_doc["data"].get("fps")
    if framerate is None:
        framerate = project_doc["data"].get("fps")

    if framerate is not None:
        execute_george(
            "tv_framerate {} \"timestretch\"".format(framerate)
        )
    else:
        print("Framerate was not found!")

    width_key = "resolutionWidth"
    height_key = "resolutionHeight"

    width = asset_doc["data"].get(width_key)
    height = asset_doc["data"].get(height_key)
    if width is None or height is None:
        width = project_doc["data"].get(width_key)
        height = project_doc["data"].get(height_key)

    if width is None or height is None:
        print("Resolution was not found!")
    else:
        execute_george(
            "tv_resizepage {} {} 0".format(width, height)
        )

    frame_start = asset_doc["data"].get("frameStart")
    frame_end = asset_doc["data"].get("frameEnd")

    if frame_start is None or frame_end is None:
        print("Frame range was not found!")
        return

    handles = asset_doc["data"].get("handles") or 0
    handle_start = asset_doc["data"].get("handleStart")
    handle_end = asset_doc["data"].get("handleEnd")

    if handle_start is None or handle_end is None:
        handle_start = handles
        handle_end = handles

    # Always start from 0 Mark In and set only Mark Out
    mark_in = 0
    mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end

    execute_george("tv_markin {} set".format(mark_in))
    execute_george("tv_markout {} set".format(mark_out))
|
||||
|
|
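The Mark In/Out arithmetic at the end of `set_context_settings` can be sanity-checked in isolation. Below is a minimal sketch in plain Python with hypothetical frame values; no TVPaint or George calls are involved:

```python
def compute_mark_range(frame_start, frame_end,
                       handle_start=None, handle_end=None, handles=0):
    # Mirror of the logic above: Mark In is always 0 and Mark Out is
    # offset by the frame count plus both handles.
    if handle_start is None or handle_end is None:
        handle_start = handles
        handle_end = handles

    mark_in = 0
    mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end
    return mark_in, mark_out


# Frames 1001-1100 with 10-frame handles on both sides
print(compute_mark_range(1001, 1100, 10, 10))  # (0, 119)
```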
@@ -1,8 +1,21 @@
 import re
+import uuid
+
 import avalon.api
+
 from openpype.api import PypeCreatorMixin
-from avalon.tvpaint import pipeline
+from openpype.hosts.tvpaint.api import (
+    pipeline,
+    lib
+)


-class Creator(PypeCreatorMixin, pipeline.Creator):
+class Creator(PypeCreatorMixin, avalon.api.Creator):
+    def __init__(self, *args, **kwargs):
+        super(Creator, self).__init__(*args, **kwargs)
+        # Add unified identifier created with `uuid` module
+        self.data["uuid"] = str(uuid.uuid4())
+
     @classmethod
     def get_dynamic_data(cls, *args, **kwargs):
         dynamic_data = super(Creator, cls).get_dynamic_data(*args, **kwargs)

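The new `__init__` tags every created instance with a `uuid4` identifier, which is what `on_instance_toggle` in pipeline.py later uses to find the instance again. A standalone sketch of that matching step, using hypothetical instance dicts and no avalon dependency:

```python
import uuid


def toggle_instance(instances, instance_id, new_value):
    # Find the workfile instance matching the uuid and flip its
    # "active" flag, mirroring on_instance_toggle().
    for workfile_instance in instances:
        if workfile_instance.get("uuid") == instance_id:
            if "active" in workfile_instance:
                workfile_instance["active"] = new_value
            return True
    return False


instances = [{"uuid": str(uuid.uuid4()), "active": True} for _ in range(3)]
target = instances[1]["uuid"]
toggle_instance(instances, target, False)
print(instances[1]["active"])  # False
```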
@@ -17,3 +30,95 @@ class Creator(PypeCreatorMixin, pipeline.Creator):
         if "task" not in dynamic_data and task_name:
             dynamic_data["task"] = task_name
         return dynamic_data
+
+    @staticmethod
+    def are_instances_same(instance_1, instance_2):
+        """Compare instances but skip keys with unique values.
+
+        Keys that are guaranteed to differ on a new instance,
+        like "id", are skipped during the comparison.
+
+        Returns:
+            bool: True if instances are the same.
+        """
+        if (
+            not isinstance(instance_1, dict)
+            or not isinstance(instance_2, dict)
+        ):
+            return instance_1 == instance_2
+
+        checked_keys = set()
+        checked_keys.add("id")
+        for key, value in instance_1.items():
+            if key not in checked_keys:
+                if key not in instance_2:
+                    return False
+                if value != instance_2[key]:
+                    return False
+            checked_keys.add(key)
+
+        for key in instance_2.keys():
+            if key not in checked_keys:
+                return False
+        return True
+
+    def write_instances(self, data):
+        self.log.debug(
+            "Storing instance data to workfile. {}".format(str(data))
+        )
+        return pipeline.write_instances(data)
+
+    def process(self):
+        data = pipeline.list_instances()
+        data.append(self.data)
+        self.write_instances(data)
+
+
+class Loader(avalon.api.Loader):
+    hosts = ["tvpaint"]
+
+    @staticmethod
+    def get_members_from_container(container):
+        if "members" not in container and "objectName" in container:
+            # Backwards compatibility
+            layer_ids_str = container.get("objectName")
+            return [
+                int(layer_id) for layer_id in layer_ids_str.split("|")
+            ]
+        return container["members"]
+
+    def get_unique_layer_name(self, asset_name, name):
+        """Layer name with counter as suffix.
+
+        Find the highest 3-digit suffix among all layer names in the scene
+        matching regex `{asset_name}_{name}_{suffix}`. The highest found
+        suffix is used as the base for the next number; if the scene does
+        not contain a layer matching the regex, `0` is used as the base.
+
+        Args:
+            asset_name (str): Name of subset's parent asset document.
+            name (str): Name of loaded subset.
+
+        Returns:
+            (str): `{asset_name}_{name}_{highest suffix + 1}`
+        """
+        layer_name_base = "{}_{}".format(asset_name, name)
+
+        counter_regex = re.compile(r"_(\d{3})$")
+
+        higher_counter = 0
+        for layer in lib.get_layers_data():
+            layer_name = layer["name"]
+            if not layer_name.startswith(layer_name_base):
+                continue
+            number_subpart = layer_name[len(layer_name_base):]
+            groups = counter_regex.findall(number_subpart)
+            if len(groups) != 1:
+                continue
+
+            counter = int(groups[0])
+            if counter > higher_counter:
+                higher_counter = counter
+                continue
+
+        return "{}_{:0>3d}".format(layer_name_base, higher_counter + 1)

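The suffix-counter algorithm in `get_unique_layer_name` can be exercised without a TVPaint scene by feeding it plain layer-name strings instead of `lib.get_layers_data()`. A self-contained sketch with hypothetical layer names:

```python
import re


def unique_layer_name(existing_names, asset_name, name):
    # Find the highest 3-digit "_NNN" suffix among matching layer names
    # and return the base name with that counter + 1.
    layer_name_base = "{}_{}".format(asset_name, name)
    counter_regex = re.compile(r"_(\d{3})$")

    higher_counter = 0
    for layer_name in existing_names:
        if not layer_name.startswith(layer_name_base):
            continue
        number_subpart = layer_name[len(layer_name_base):]
        groups = counter_regex.findall(number_subpart)
        if len(groups) != 1:
            continue
        higher_counter = max(higher_counter, int(groups[0]))

    return "{}_{:0>3d}".format(layer_name_base, higher_counter + 1)


print(unique_layer_name(["hero_image_001", "hero_image_003"],
                        "hero", "image"))
# hero_image_004
```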

openpype/hosts/tvpaint/api/workio.py (new file, 55 lines)

@@ -0,0 +1,55 @@
+"""Host API required for Work Files.
+# TODO @iLLiCiT implement functions:
+    has_unsaved_changes
+"""
+
+from avalon import api
+from .lib import (
+    execute_george,
+    execute_george_through_file
+)
+from .pipeline import save_current_workfile_context
+
+
+def open_file(filepath):
+    """Open the scene file in TVPaint."""
+    george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(
+        filepath.replace("\\", "/")
+    )
+    return execute_george_through_file(george_script)
+
+
+def save_file(filepath):
+    """Save the open scene file."""
+    # Store context to workfile before save
+    context = {
+        "project": api.Session["AVALON_PROJECT"],
+        "asset": api.Session["AVALON_ASSET"],
+        "task": api.Session["AVALON_TASK"]
+    }
+    save_current_workfile_context(context)
+
+    # Execute george script to save workfile.
+    george_script = "tv_SaveProject {}".format(filepath.replace("\\", "/"))
+    return execute_george(george_script)
+
+
+def current_file():
+    """Return the path of the open scene file."""
+    george_script = "tv_GetProjectName"
+    return execute_george(george_script)
+
+
+def has_unsaved_changes():
+    """Does the open scene file have unsaved changes?"""
+    return False
+
+
+def file_extensions():
+    """Return the supported file extensions for TVPaint scene files."""
+    return api.HOST_WORKFILE_EXTENSIONS["tvpaint"]
+
+
+def work_root(session):
+    """Return the default root to browse for work files."""
+    return session["AVALON_WORKDIR"]

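The nested quoting in `open_file`'s George script is easy to misread; printing the formatted command shows what TVPaint actually receives. A quick check with a hypothetical path:

```python
filepath = "C:\\projects\\shot010\\work.tvpp"

# Same format string as open_file(): single quotes wrap embedded
# double quotes so paths with spaces survive George parsing.
george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(
    filepath.replace("\\", "/")
)
print(george_script)
# tv_LoadProject '"'"C:/projects/shot010/work.tvpp"'"'
```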
@@ -4,7 +4,7 @@ import shutil
 from openpype.hosts import tvpaint
 from openpype.lib import (
     PreLaunchHook,
-    get_pype_execute_args
+    get_openpype_execute_args
 )

 import avalon

@@ -30,7 +30,7 @@ class TvpaintPrelaunchHook(PreLaunchHook):
         while self.launch_context.launch_args:
             remainders.append(self.launch_context.launch_args.pop(0))

-        new_launch_args = get_pype_execute_args(
+        new_launch_args = get_openpype_execute_args(
             "run", self.launch_script_path(), executable_path
         )

@@ -44,10 +44,6 @@ class TvpaintPrelaunchHook(PreLaunchHook):
         self.launch_context.launch_args.extend(remainders)

     def launch_script_path(self):
-        avalon_dir = os.path.dirname(os.path.abspath(avalon.__file__))
-        script_path = os.path.join(
-            avalon_dir,
-            "tvpaint",
-            "launch_script.py"
-        )
-        return script_path
+        from openpype.hosts.tvpaint import get_launch_script_path
+
+        return get_launch_script_path()

@@ -1,11 +1,12 @@
 from avalon.api import CreatorError
-from avalon.tvpaint import (
+
+from openpype.lib import prepare_template_data
+from openpype.hosts.tvpaint.api import (
+    plugin,
     pipeline,
     lib,
     CommunicationWrapper
 )
-from openpype.hosts.tvpaint.api import plugin
-from openpype.lib import prepare_template_data


 class CreateRenderlayer(plugin.Creator):

@@ -56,7 +57,7 @@ class CreateRenderlayer(plugin.Creator):
         # Validate that communication is initialized
         if CommunicationWrapper.communicator:
             # Get currently selected layers
-            layers_data = lib.layers_data()
+            layers_data = lib.get_layers_data()

             selected_layers = [
                 layer

@@ -75,7 +76,7 @@ class CreateRenderlayer(plugin.Creator):
     def process(self):
         self.log.debug("Query data from workfile.")
         instances = pipeline.list_instances()
-        layers_data = lib.layers_data()
+        layers_data = lib.get_layers_data()

         self.log.debug("Checking for selection groups.")
         # Collect group ids from selection

@@ -102,7 +103,7 @@ class CreateRenderlayer(plugin.Creator):
         self.log.debug(f"Selected group id is \"{group_id}\".")
         self.data["group_id"] = group_id

-        group_data = lib.groups_data()
+        group_data = lib.get_groups_data()
         group_name = None
         for group in group_data:
             if group["group_id"] == group_id:

@@ -169,7 +170,7 @@ class CreateRenderlayer(plugin.Creator):
             return

         self.log.debug("Querying groups data from workfile.")
-        groups_data = lib.groups_data()
+        groups_data = lib.get_groups_data()

         self.log.debug("Changing name of the group.")
         selected_group = None

@@ -196,6 +197,7 @@ class CreateRenderlayer(plugin.Creator):
         )

     def _ask_user_subset_override(self, instance):
+        from Qt import QtCore
         from Qt.QtWidgets import QMessageBox

         title = "Subset \"{}\" already exist".format(instance["subset"])

@@ -205,6 +207,10 @@ class CreateRenderlayer(plugin.Creator):
         ).format(instance["subset"])

         dialog = QMessageBox()
+        dialog.setWindowFlags(
+            dialog.windowFlags()
+            | QtCore.Qt.WindowStaysOnTopHint
+        )
         dialog.setWindowTitle(title)
         dialog.setText(text)
         dialog.setStandardButtons(QMessageBox.Yes | QMessageBox.No)

@@ -1,11 +1,11 @@
 from avalon.api import CreatorError
-from avalon.tvpaint import (
+from openpype.lib import prepare_template_data
+from openpype.hosts.tvpaint.api import (
+    plugin,
     pipeline,
     lib,
     CommunicationWrapper
 )
-from openpype.hosts.tvpaint.api import plugin
-from openpype.lib import prepare_template_data


 class CreateRenderPass(plugin.Creator):

@@ -1,8 +1,8 @@
 from avalon.vendor import qargparse
-from avalon.tvpaint import lib, pipeline
+from openpype.hosts.tvpaint.api import lib, plugin


-class ImportImage(pipeline.Loader):
+class ImportImage(plugin.Loader):
     """Load image or image sequence to TVPaint as new layer."""

     families = ["render", "image", "background", "plate"]

Some files were not shown because too many files have changed in this diff.