Merge branch 'develop' into feature/extract_reveiew_metadata_update

Jakub Jezek 2022-01-19 14:08:15 +01:00
commit 91407aaa32
No known key found for this signature in database
GPG key ID: D8548FBF690B100A
464 changed files with 22532 additions and 2370 deletions


@ -1,134 +1,130 @@
# Changelog
## [3.7.0-nightly.7](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.8.0-nightly.5](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.7.0...HEAD)
### 📖 Documentation
- docs\[website\]: Add Ellipse Studio \(logo\) as an OpenPype contributor [\#2324](https://github.com/pypeclub/OpenPype/pull/2324)
- Renamed to proper name [\#2546](https://github.com/pypeclub/OpenPype/pull/2546)
- Slack: Add review to notification message [\#2498](https://github.com/pypeclub/OpenPype/pull/2498)
**🆕 New features**
- Settings UI use OpenPype styles [\#2296](https://github.com/pypeclub/OpenPype/pull/2296)
- Flame: OpenTimelineIO Export Modul [\#2398](https://github.com/pypeclub/OpenPype/pull/2398)
**🚀 Enhancements**
- Settings: Webpublisher in hosts enum [\#2367](https://github.com/pypeclub/OpenPype/pull/2367)
- Hiero: python3 compatibility [\#2365](https://github.com/pypeclub/OpenPype/pull/2365)
- Burnins: Be able recognize mxf OPAtom format [\#2361](https://github.com/pypeclub/OpenPype/pull/2361)
- Local settings: Copyable studio paths [\#2349](https://github.com/pypeclub/OpenPype/pull/2349)
- Assets Widget: Clear model on project change [\#2345](https://github.com/pypeclub/OpenPype/pull/2345)
- General: OpenPype default modules hierarchy [\#2338](https://github.com/pypeclub/OpenPype/pull/2338)
- General: FFprobe error exception contain original error message [\#2328](https://github.com/pypeclub/OpenPype/pull/2328)
- Resolve: Add experimental button to menu [\#2325](https://github.com/pypeclub/OpenPype/pull/2325)
- Hiero: Add experimental tools action [\#2323](https://github.com/pypeclub/OpenPype/pull/2323)
- Input links: Cleanup and unification of differences [\#2322](https://github.com/pypeclub/OpenPype/pull/2322)
- General: Don't validate vendor bin with executing them [\#2317](https://github.com/pypeclub/OpenPype/pull/2317)
- General: Multilayer EXRs support [\#2315](https://github.com/pypeclub/OpenPype/pull/2315)
- General: Run process log stderr as info log level [\#2309](https://github.com/pypeclub/OpenPype/pull/2309)
- General: Reduce vendor imports [\#2305](https://github.com/pypeclub/OpenPype/pull/2305)
- Tools: Cleanup of unused classes [\#2304](https://github.com/pypeclub/OpenPype/pull/2304)
- Project Manager: Added ability to delete project [\#2298](https://github.com/pypeclub/OpenPype/pull/2298)
- Ftrack: Synchronize input links [\#2287](https://github.com/pypeclub/OpenPype/pull/2287)
- Nuke: extract baked review videos presets [\#2248](https://github.com/pypeclub/OpenPype/pull/2248)
- Settings: PathInput strip passed string [\#2550](https://github.com/pypeclub/OpenPype/pull/2550)
- General: Validate if current process OpenPype version is requested version [\#2529](https://github.com/pypeclub/OpenPype/pull/2529)
- General: Be able to use anatomy data in ffmpeg output arguments [\#2525](https://github.com/pypeclub/OpenPype/pull/2525)
- Expose toggle publish plug-in settings for Maya Look Shading Engine Naming [\#2521](https://github.com/pypeclub/OpenPype/pull/2521)
- Photoshop: Move implementation to OpenPype [\#2510](https://github.com/pypeclub/OpenPype/pull/2510)
- TimersManager: Move module one hierarchy higher [\#2501](https://github.com/pypeclub/OpenPype/pull/2501)
- Slack: notifications are sent with Openpype logo and bot name [\#2499](https://github.com/pypeclub/OpenPype/pull/2499)
- Ftrack: Event handlers settings [\#2496](https://github.com/pypeclub/OpenPype/pull/2496)
- Flame - create publishable clips [\#2495](https://github.com/pypeclub/OpenPype/pull/2495)
- Tools: Fix style and modality of errors in loader and creator [\#2489](https://github.com/pypeclub/OpenPype/pull/2489)
- Project Manager: Remove project button cleanup [\#2482](https://github.com/pypeclub/OpenPype/pull/2482)
- Tools: Be able to change models of tasks and assets widgets [\#2475](https://github.com/pypeclub/OpenPype/pull/2475)
- Publish pype: Reduce publish process defering [\#2464](https://github.com/pypeclub/OpenPype/pull/2464)
- Maya: Improve speed of Collect History logic [\#2460](https://github.com/pypeclub/OpenPype/pull/2460)
- Maya: Validate Rig Controllers - fix Error: in script editor [\#2459](https://github.com/pypeclub/OpenPype/pull/2459)
- Maya: Optimize Validate Locked Normals speed for dense polymeshes [\#2457](https://github.com/pypeclub/OpenPype/pull/2457)
- Fix \#2453 Refactor missing \_get\_reference\_node method [\#2455](https://github.com/pypeclub/OpenPype/pull/2455)
- Houdini: Remove broken unique name counter [\#2450](https://github.com/pypeclub/OpenPype/pull/2450)
- Maya: Improve lib.polyConstraint performance when Select tool is not the active tool context [\#2447](https://github.com/pypeclub/OpenPype/pull/2447)
- General: Validate third party before build [\#2425](https://github.com/pypeclub/OpenPype/pull/2425)
- Maya : add option to not group reference in ReferenceLoader [\#2383](https://github.com/pypeclub/OpenPype/pull/2383)
**🐛 Bug fixes**
- Fix published frame content for sequence starting with 0 [\#2513](https://github.com/pypeclub/OpenPype/pull/2513)
- Fix \#2497: reset empty string attributes correctly to "" instead of "None" [\#2506](https://github.com/pypeclub/OpenPype/pull/2506)
- General: Settings work if OpenPypeVersion is available [\#2494](https://github.com/pypeclub/OpenPype/pull/2494)
- General: PYTHONPATH may break OpenPype dependencies [\#2493](https://github.com/pypeclub/OpenPype/pull/2493)
- Workfiles tool: Files widget show files on first show [\#2488](https://github.com/pypeclub/OpenPype/pull/2488)
- General: Custom template paths filter fix [\#2483](https://github.com/pypeclub/OpenPype/pull/2483)
- Loader: Remove always on top flag in tray [\#2480](https://github.com/pypeclub/OpenPype/pull/2480)
- General: Anatomy does not return root envs as unicode [\#2465](https://github.com/pypeclub/OpenPype/pull/2465)
- Maya: Validate Shape Zero do not keep fixed geometry vertices selected/active after repair [\#2456](https://github.com/pypeclub/OpenPype/pull/2456)
**Merged pull requests:**
- General: Fix install thread in igniter [\#2549](https://github.com/pypeclub/OpenPype/pull/2549)
- AfterEffects: Move implementation to OpenPype [\#2543](https://github.com/pypeclub/OpenPype/pull/2543)
- Fix create zip tool - path argument [\#2522](https://github.com/pypeclub/OpenPype/pull/2522)
- General: Modules import function output fix [\#2492](https://github.com/pypeclub/OpenPype/pull/2492)
- AE: fix hiding of alert window below Publish [\#2491](https://github.com/pypeclub/OpenPype/pull/2491)
- Maya: Validate NGONs re-use polyConstraint code from openpype.host.maya.api.lib [\#2458](https://github.com/pypeclub/OpenPype/pull/2458)
## [3.7.0](https://github.com/pypeclub/OpenPype/tree/3.7.0) (2022-01-04)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.14...3.7.0)
**🚀 Enhancements**
- General: Workdir extra folders [\#2462](https://github.com/pypeclub/OpenPype/pull/2462)
- Photoshop: New style validations for New publisher [\#2429](https://github.com/pypeclub/OpenPype/pull/2429)
- General: Environment variables groups [\#2424](https://github.com/pypeclub/OpenPype/pull/2424)
- Unreal: Dynamic menu created in Python [\#2422](https://github.com/pypeclub/OpenPype/pull/2422)
- Settings UI: Hyperlinks to settings [\#2420](https://github.com/pypeclub/OpenPype/pull/2420)
- Modules: JobQueue module moved one hierarchy level higher [\#2419](https://github.com/pypeclub/OpenPype/pull/2419)
- TimersManager: Start timer post launch hook [\#2418](https://github.com/pypeclub/OpenPype/pull/2418)
- General: Run applications as separate processes under linux [\#2408](https://github.com/pypeclub/OpenPype/pull/2408)
- Ftrack: Check existence of object type on recreation [\#2404](https://github.com/pypeclub/OpenPype/pull/2404)
- Enhancement: Global cleanup plugin that explicitly remove paths from context [\#2402](https://github.com/pypeclub/OpenPype/pull/2402)
- General: MongoDB ability to specify replica set groups [\#2401](https://github.com/pypeclub/OpenPype/pull/2401)
- Flame: moving `utility\_scripts` to api folder also with `scripts` [\#2385](https://github.com/pypeclub/OpenPype/pull/2385)
- Centos 7 dependency compatibility [\#2384](https://github.com/pypeclub/OpenPype/pull/2384)
- Enhancement: Settings: Use project settings values from another project [\#2382](https://github.com/pypeclub/OpenPype/pull/2382)
- Blender 3: Support auto install for new blender version [\#2377](https://github.com/pypeclub/OpenPype/pull/2377)
- Maya add render image path to settings [\#2375](https://github.com/pypeclub/OpenPype/pull/2375)
**🐛 Bug fixes**
- TVPaint: Create render layer dialog is in front [\#2471](https://github.com/pypeclub/OpenPype/pull/2471)
- Short Pyblish plugin path [\#2428](https://github.com/pypeclub/OpenPype/pull/2428)
- PS: Introduced settings for invalid characters to use in ValidateNaming plugin [\#2417](https://github.com/pypeclub/OpenPype/pull/2417)
- Settings UI: Breadcrumbs path does not create new entities [\#2416](https://github.com/pypeclub/OpenPype/pull/2416)
- AfterEffects: Variant 2022 is in defaults but missing in schemas [\#2412](https://github.com/pypeclub/OpenPype/pull/2412)
- Nuke: baking representations was not additive [\#2406](https://github.com/pypeclub/OpenPype/pull/2406)
- General: Fix access to environments from default settings [\#2403](https://github.com/pypeclub/OpenPype/pull/2403)
- Fix: Placeholder Input color set fix [\#2399](https://github.com/pypeclub/OpenPype/pull/2399)
- Settings: Fix state change of wrapper label [\#2396](https://github.com/pypeclub/OpenPype/pull/2396)
- Flame: fix ftrack publisher [\#2381](https://github.com/pypeclub/OpenPype/pull/2381)
- hiero: solve custom ocio path [\#2379](https://github.com/pypeclub/OpenPype/pull/2379)
- hiero: fix workio and flatten [\#2378](https://github.com/pypeclub/OpenPype/pull/2378)
- Nuke: fixing menu re-drawing during context change [\#2374](https://github.com/pypeclub/OpenPype/pull/2374)
- Webpublisher: Fix assignment of families of TVpaint instances [\#2373](https://github.com/pypeclub/OpenPype/pull/2373)
- Nuke: fixing node name based on switched asset name [\#2369](https://github.com/pypeclub/OpenPype/pull/2369)
- JobQueue: Fix loading of settings [\#2362](https://github.com/pypeclub/OpenPype/pull/2362)
- Tools: Placeholder color [\#2359](https://github.com/pypeclub/OpenPype/pull/2359)
- Launcher: Minimize button on MacOs [\#2355](https://github.com/pypeclub/OpenPype/pull/2355)
- StandalonePublisher: Fix import of constant [\#2354](https://github.com/pypeclub/OpenPype/pull/2354)
- Adobe products show issue [\#2347](https://github.com/pypeclub/OpenPype/pull/2347)
- Maya Look Assigner: Fix Python 3 compatibility [\#2343](https://github.com/pypeclub/OpenPype/pull/2343)
- Remove wrongly used host for hook [\#2342](https://github.com/pypeclub/OpenPype/pull/2342)
- Tools: Use Qt context on tools show [\#2340](https://github.com/pypeclub/OpenPype/pull/2340)
- Flame: Fix default argument value in custom dictionary [\#2339](https://github.com/pypeclub/OpenPype/pull/2339)
- Timers Manager: Disable auto stop timer on linux platform [\#2334](https://github.com/pypeclub/OpenPype/pull/2334)
- nuke: bake preset single input exception [\#2331](https://github.com/pypeclub/OpenPype/pull/2331)
- Hiero: fixing multiple templates at a hierarchy parent [\#2330](https://github.com/pypeclub/OpenPype/pull/2330)
- Fix - provider icons are pulled from a folder [\#2326](https://github.com/pypeclub/OpenPype/pull/2326)
- InputLinks: Typo in "inputLinks" key [\#2314](https://github.com/pypeclub/OpenPype/pull/2314)
- Deadline timeout and logging [\#2312](https://github.com/pypeclub/OpenPype/pull/2312)
- nuke: do not multiply representation on class method [\#2311](https://github.com/pypeclub/OpenPype/pull/2311)
- Workfiles tool: Fix task formatting [\#2306](https://github.com/pypeclub/OpenPype/pull/2306)
- Delivery: Fix delivery paths created on windows [\#2302](https://github.com/pypeclub/OpenPype/pull/2302)
- Maya: Deadline - fix limit groups [\#2295](https://github.com/pypeclub/OpenPype/pull/2295)
- Royal Render: Fix plugin order and OpenPype auto-detection [\#2291](https://github.com/pypeclub/OpenPype/pull/2291)
- Alternate site for site sync doesnt work for sequences [\#2284](https://github.com/pypeclub/OpenPype/pull/2284)
**Merged pull requests:**
- Linux : flip updating submodules logic [\#2357](https://github.com/pypeclub/OpenPype/pull/2357)
- Update of avalon-core [\#2346](https://github.com/pypeclub/OpenPype/pull/2346)
- Maya: configurable model top level validation [\#2321](https://github.com/pypeclub/OpenPype/pull/2321)
- Forced cx\_freeze to include sqlite3 into build [\#2432](https://github.com/pypeclub/OpenPype/pull/2432)
- Maya: Replaced PATH usage with vendored oiio path for maketx utility [\#2405](https://github.com/pypeclub/OpenPype/pull/2405)
- \[Fix\]\[MAYA\] Handle message type attribute within CollectLook [\#2394](https://github.com/pypeclub/OpenPype/pull/2394)
- Add validator to check correct version of extension for PS and AE [\#2387](https://github.com/pypeclub/OpenPype/pull/2387)
## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.1...3.6.4)
**🐛 Bug fixes**
- Nuke: inventory update removes all loaded read nodes [\#2294](https://github.com/pypeclub/OpenPype/pull/2294)
## [3.6.3](https://github.com/pypeclub/OpenPype/tree/3.6.3) (2021-11-19)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.3-nightly.1...3.6.3)
**🐛 Bug fixes**
- Deadline: Fix publish targets [\#2280](https://github.com/pypeclub/OpenPype/pull/2280)
## [3.6.2](https://github.com/pypeclub/OpenPype/tree/3.6.2) (2021-11-18)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.2-nightly.2...3.6.2)
**🚀 Enhancements**
- Tools: Assets widget [\#2265](https://github.com/pypeclub/OpenPype/pull/2265)
- SceneInventory: Choose loader in asset switcher [\#2262](https://github.com/pypeclub/OpenPype/pull/2262)
- Style: New fonts in OpenPype style [\#2256](https://github.com/pypeclub/OpenPype/pull/2256)
- Tools: Tasks widget [\#2251](https://github.com/pypeclub/OpenPype/pull/2251)
- Tools: Creator in OpenPype [\#2244](https://github.com/pypeclub/OpenPype/pull/2244)
**🐛 Bug fixes**
- Tools: Parenting of tools in Nuke and Hiero [\#2266](https://github.com/pypeclub/OpenPype/pull/2266)
- limiting validator to specific editorial hosts [\#2264](https://github.com/pypeclub/OpenPype/pull/2264)
- Tools: Select Context dialog attribute fix [\#2261](https://github.com/pypeclub/OpenPype/pull/2261)
- Maya: Render publishing fails on linux [\#2260](https://github.com/pypeclub/OpenPype/pull/2260)
- LookAssigner: Fix tool reopen [\#2259](https://github.com/pypeclub/OpenPype/pull/2259)
- Standalone: editorial not publishing thumbnails on all subsets [\#2258](https://github.com/pypeclub/OpenPype/pull/2258)
- Burnins: Support mxf metadata [\#2247](https://github.com/pypeclub/OpenPype/pull/2247)
## [3.6.1](https://github.com/pypeclub/OpenPype/tree/3.6.1) (2021-11-16)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.1-nightly.1...3.6.1)
**🐛 Bug fixes**
- Loader doesn't allow changing of version before loading [\#2254](https://github.com/pypeclub/OpenPype/pull/2254)
## [3.6.0](https://github.com/pypeclub/OpenPype/tree/3.6.0) (2021-11-15)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.0-nightly.6...3.6.0)
**🚀 Enhancements**
- Tools: SceneInventory in OpenPype [\#2255](https://github.com/pypeclub/OpenPype/pull/2255)
- Tools: Subset manager in OpenPype [\#2243](https://github.com/pypeclub/OpenPype/pull/2243)
**🐛 Bug fixes**
- Ftrack: Sync project ftrack id cache issue [\#2250](https://github.com/pypeclub/OpenPype/pull/2250)
- Ftrack: Session creation and Prepare project [\#2245](https://github.com/pypeclub/OpenPype/pull/2245)
## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.5.0-nightly.8...3.5.0)


@ -41,6 +41,8 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n
ncurses \
ncurses-devel \
qt5-qtbase-devel \
xcb-util-wm \
xcb-util-renderutil \
&& yum clean all
# we need to build our own patchelf
@ -92,7 +94,8 @@ RUN source $HOME/.bashrc \
RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib \
&& cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib
&& cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libxcb* ./build/exe.linux-x86_64-3.7/vendor/python/PySide2/Qt/lib
RUN cd /opt/openpype \
rm -rf ./vendor/bin

app_launcher.py Normal file

@ -0,0 +1,49 @@
"""Launch process that is not child process of python or OpenPype.
This is written for linux distributions where process tree may affect what
is when closed or blocked to be closed.
"""
import os
import sys
import subprocess
import json
def main(input_json_path):
"""Read launch arguments from json file and launch the process.
Expected that json contains "args" key with string or list of strings.
Arguments are converted to string using `list2cmdline`. At the end is added
`&` which will cause that launched process is detached and running as
"background" process.
## Notes
@iLLiCiT: This should be possible to do with 'disown' or double forking but
I didn't find a way how to do it properly. Disown didn't work as
expected for me and double forking killed parent process which is
unexpected too.
"""
with open(input_json_path, "r") as stream:
data = json.load(stream)
# Change environment variables
env = data.get("env") or {}
for key, value in env.items():
os.environ[key] = value
# Prepare launch arguments
args = data["args"]
if isinstance(args, list):
args = subprocess.list2cmdline(args)
# Run the command as background process
shell_cmd = args + " &"
os.system(shell_cmd)
sys.exit(0)
if __name__ == "__main__":
# Expect that last argument is path to a json with launch args information
main(sys.argv[-1])
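
For illustration, a minimal sketch of how a caller could prepare the json file that `main()` above expects and hand it to this launcher; the file path, environment key and application arguments below are hypothetical.

```python
import json
import subprocess
import sys

# Hypothetical launch description: "args" is required, "env" is optional and
# is merged into os.environ by app_launcher.main().
launch_data = {
    "env": {"AVALON_PROJECT": "demo_project"},
    "args": ["/usr/bin/some_dcc_app", "--project", "demo_project"],
}

json_path = "/tmp/launch_args.json"
with open(json_path, "w") as stream:
    json.dump(launch_data, stream)

# app_launcher.py itself runs as a normal process; it detaches the real
# application by appending "&" to the shell command it builds.
subprocess.Popen([sys.executable, "app_launcher.py", json_path])
```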


@ -6,9 +6,18 @@ import sys
os.chdir(os.path.dirname(__file__))  # to override sys.path in Deadline
from .bootstrap_repos import BootstrapRepos
from .bootstrap_repos import (
BootstrapRepos,
OpenPypeVersion
)
from .version import __version__ as version
# Store OpenPypeVersion to 'sys.modules'
# - this makes it available in OpenPype processes without modifying
# 'sys.path' or 'PYTHONPATH'
if "OpenPypeVersion" not in sys.modules:
sys.modules["OpenPypeVersion"] = OpenPypeVersion
def open_dialog():
"""Show Igniter dialog."""
@ -22,7 +31,9 @@ def open_dialog():
if scale_attr is not None:
QtWidgets.QApplication.setAttribute(scale_attr)
app = QtWidgets.QApplication(sys.argv)
app = QtWidgets.QApplication.instance()
if not app:
app = QtWidgets.QApplication(sys.argv)
d = InstallDialog()
d.open()
@ -43,7 +54,9 @@ def open_update_window(openpype_version):
if scale_attr is not None:
QtWidgets.QApplication.setAttribute(scale_attr)
app = QtWidgets.QApplication(sys.argv)
app = QtWidgets.QApplication.instance()
if not app:
app = QtWidgets.QApplication(sys.argv)
d = UpdateWindow(version=openpype_version)
d.open()
@ -53,9 +66,32 @@ def open_update_window(openpype_version):
return version_path
def show_message_dialog(title, message):
"""Show dialog with a message and title to user."""
if os.getenv("OPENPYPE_HEADLESS_MODE"):
print("!!! Can't open dialog in headless mode. Exiting.")
sys.exit(1)
from Qt import QtWidgets, QtCore
from .message_dialog import MessageDialog
scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)
if scale_attr is not None:
QtWidgets.QApplication.setAttribute(scale_attr)
app = QtWidgets.QApplication.instance()
if not app:
app = QtWidgets.QApplication(sys.argv)
dialog = MessageDialog(title, message)
dialog.open()
app.exec_()
__all__ = [
"BootstrapRepos",
"open_dialog",
"open_update_window",
"show_message_dialog",
"version"
]
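
A short usage sketch of the pieces added above: the `OpenPypeVersion` class registered into `sys.modules` and the new `show_message_dialog` helper. It assumes `igniter` is importable, `OPENPYPE_ROOT` points at an OpenPype checkout or build, and `OPENPYPE_HEADLESS_MODE` is not set (otherwise the dialog call exits).

```python
import sys

from igniter import show_message_dialog

# Because igniter stored the class under "OpenPypeVersion" in sys.modules,
# it can be retrieved without modifying sys.path or PYTHONPATH.
OpenPypeVersion = sys.modules.get("OpenPypeVersion")
if OpenPypeVersion is not None:
    print("Installed OpenPype:", OpenPypeVersion.get_installed_version_str())

# Simple informational dialog styled like the rest of the igniter UI.
show_message_dialog("OpenPype", "This is an example message.")
```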


@ -22,7 +22,10 @@ from .user_settings import (
OpenPypeSecureRegistry,
OpenPypeSettingsRegistry
)
from .tools import get_openpype_path_from_db
from .tools import (
get_openpype_path_from_db,
get_expected_studio_version_str
)
LOG_INFO = 0
@ -60,6 +63,7 @@ class OpenPypeVersion(semver.VersionInfo):
staging = False
path = None
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$") # noqa: E501
_installed_version = None
def __init__(self, *args, **kwargs):
"""Create OpenPype version.
@ -232,6 +236,390 @@ class OpenPypeVersion(semver.VersionInfo):
else:
return hash(str(self))
@staticmethod
def is_version_in_dir(
dir_item: Path, version: OpenPypeVersion) -> Tuple[bool, str]:
"""Test if path item is OpenPype version matching detected version.
If item is directory that might (based on it's name)
contain OpenPype version, check if it really does contain
OpenPype and that their versions matches.
Args:
dir_item (Path): Directory to test.
version (OpenPypeVersion): OpenPype version detected
from name.
Returns:
Tuple: State and reason, True if it is valid OpenPype version,
False otherwise.
"""
try:
# add one 'openpype' level as inside dir there should
# be many other repositories.
version_str = OpenPypeVersion.get_version_string_from_directory(
dir_item) # noqa: E501
version_check = OpenPypeVersion(version=version_str)
except ValueError:
return False, f"cannot determine version from {dir_item}"
version_main = version_check.get_main_version()
detected_main = version.get_main_version()
if version_main != detected_main:
return False, (f"dir version ({version}) and "
f"its content version ({version_check}) "
"doesn't match. Skipping.")
return True, "Versions match"
@staticmethod
def is_version_in_zip(
zip_item: Path, version: OpenPypeVersion) -> Tuple[bool, str]:
"""Test if zip path is OpenPype version matching detected version.
Open zip file, look inside and parse version from OpenPype
inside it. If there is none, or it is different from
version specified in file name, skip it.
Args:
zip_item (Path): Zip file to test.
version (OpenPypeVersion): Pype version detected
from name.
Returns:
Tuple: State and reason, True if it is valid OpenPype version,
False otherwise.
"""
# skip non-zip files
if zip_item.suffix.lower() != ".zip":
return False, "Not a zip"
try:
with ZipFile(zip_item, "r") as zip_file:
with zip_file.open(
"openpype/version.py") as version_file:
zip_version = {}
exec(version_file.read(), zip_version)
try:
version_check = OpenPypeVersion(
version=zip_version["__version__"])
except ValueError as e:
return False, str(e)
version_main = version_check.get_main_version()  # noqa: E501
detected_main = version.get_main_version()  # noqa: E501
if version_main != detected_main:
return False, (f"zip version ({version}) "
f"and its content version "
f"({version_check}) "
"don't match. Skipping.")
except BadZipFile:
return False, f"{zip_item} is not a zip file"
except KeyError:
return False, "Zip does not contain OpenPype"
return True, "Versions match"
@staticmethod
def get_version_string_from_directory(repo_dir: Path) -> Union[str, None]:
"""Get version of OpenPype in given directory.
Note: for a frozen OpenPype installed in the user data dir, this must point
one level deeper, i.e.:
`openpype-version-v3.0.0/openpype/version.py`
Args:
repo_dir (Path): Path to OpenPype repo.
Returns:
str: version string.
None: if OpenPype is not found.
"""
# try to find version
version_file = Path(repo_dir) / "openpype" / "version.py"
if not version_file.exists():
return None
version = {}
with version_file.open("r") as fp:
exec(fp.read(), version)
return version['__version__']
@classmethod
def get_openpype_path(cls):
"""Path to openpype zip directory.
The path can be set through the environment variable 'OPENPYPE_PATH',
which is set during OpenPype start if it is not already set.
"""
return os.getenv("OPENPYPE_PATH")
@classmethod
def openpype_path_is_set(cls):
"""Path to OpenPype zip directory is set."""
if cls.get_openpype_path():
return True
return False
@classmethod
def openpype_path_is_accessible(cls):
"""Path to OpenPype zip directory is accessible.
Exists for this machine.
"""
# First check if is set
if not cls.openpype_path_is_set():
return False
# Validate existence
if Path(cls.get_openpype_path()).exists():
return True
return False
@classmethod
def get_local_versions(
cls, production: bool = None, staging: bool = None
) -> List:
"""Get all versions available on this machine.
The arguments give the ability to specify if filtering is needed. If both
arguments are set to None, all found versions are returned.
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
production = True
staging = True
elif production is None and not staging:
production = True
elif staging is None and not production:
staging = True
# Just return empty output if both are disabled
if not production and not staging:
return []
dir_to_search = Path(user_data_dir("openpype", "pypeclub"))
versions = OpenPypeVersion.get_versions_from_directory(
dir_to_search
)
filtered_versions = []
for version in versions:
if version.is_staging():
if staging:
filtered_versions.append(version)
elif production:
filtered_versions.append(version)
return list(sorted(set(filtered_versions)))
@classmethod
def get_remote_versions(
cls, production: bool = None, staging: bool = None
) -> List:
"""Get all versions available in OpenPype Path.
The arguments give the ability to specify if filtering is needed. If both
arguments are set to None, all found versions are returned.
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
"""
# Return all remote versions if arguments are set to None
if production is None and staging is None:
production = True
staging = True
elif production is None and not staging:
production = True
elif staging is None and not production:
staging = True
# Just return empty output if both are disabled
if not production and not staging:
return []
dir_to_search = None
if cls.openpype_path_is_accessible():
dir_to_search = Path(cls.get_openpype_path())
else:
registry = OpenPypeSettingsRegistry()
try:
registry_dir = Path(str(registry.get_item("openPypePath")))
if registry_dir.exists():
dir_to_search = registry_dir
except ValueError:
# nothing found in registry, we'll use data dir
pass
if not dir_to_search:
return []
versions = cls.get_versions_from_directory(dir_to_search)
filtered_versions = []
for version in versions:
if version.is_staging():
if staging:
filtered_versions.append(version)
elif production:
filtered_versions.append(version)
return list(sorted(set(filtered_versions)))
@staticmethod
def get_versions_from_directory(openpype_dir: Path) -> List:
"""Get all detected OpenPype versions in directory.
Args:
openpype_dir (Path): Directory to scan.
Returns:
list of OpenPypeVersion
Throws:
ValueError: if invalid path is specified.
"""
if not openpype_dir.exists() and not openpype_dir.is_dir():
raise ValueError("specified directory is invalid")
_openpype_versions = []
# iterate over directory in first level and find all that might
# contain OpenPype.
for item in openpype_dir.iterdir():
# if item is a file, strip the extension; for a directory, do not.
name = item.name if item.is_dir() else item.stem
result = OpenPypeVersion.version_in_str(name)
if result:
detected_version: OpenPypeVersion
detected_version = result
if item.is_dir() and not OpenPypeVersion.is_version_in_dir(
item, detected_version
)[0]:
continue
if item.is_file() and not OpenPypeVersion.is_version_in_zip(
item, detected_version
)[0]:
continue
detected_version.path = item
_openpype_versions.append(detected_version)
return sorted(_openpype_versions)
@staticmethod
def get_installed_version_str() -> str:
"""Get version of local OpenPype."""
version = {}
path = Path(os.environ["OPENPYPE_ROOT"]) / "openpype" / "version.py"
with open(path, "r") as fp:
exec(fp.read(), version)
return version["__version__"]
@classmethod
def get_installed_version(cls):
"""Get version of OpenPype inside build."""
if cls._installed_version is None:
installed_version_str = cls.get_installed_version_str()
if installed_version_str:
cls._installed_version = OpenPypeVersion(
version=installed_version_str,
path=Path(os.environ["OPENPYPE_ROOT"])
)
return cls._installed_version
@staticmethod
def get_latest_version(
staging: bool = False,
local: bool = None,
remote: bool = None
) -> OpenPypeVersion:
"""Get latest available version.
The version does not contain information about path and source.
This is utility version to get latest version from all found. Build
version is not listed if staging is enabled.
Arguments 'local' and 'remote' define if local and remote repository
versions are used. All versions are used if both are not set (or set
to 'None'). If only one of them is set to 'True' the other is disabled.
It is possible to set both to 'True' (same as both set to None) and to
'False' in that case only build version can be used.
Args:
staging (bool, optional): List staging versions if True.
local (bool, optional): List local versions if True.
remote (bool, optional): List remote versions if True.
"""
if local is None and remote is None:
local = True
remote = True
elif local is None and not remote:
local = True
elif remote is None and not local:
remote = True
installed_version = OpenPypeVersion.get_installed_version()
local_versions = []
remote_versions = []
if local:
local_versions = OpenPypeVersion.get_local_versions(
staging=staging
)
if remote:
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging
)
all_versions = local_versions + remote_versions
if not staging:
all_versions.append(installed_version)
if not all_versions:
return None
all_versions.sort()
return all_versions[-1]
@classmethod
def get_expected_studio_version(cls, staging=False, global_settings=None):
"""Expected OpenPype version that should be used at the moment.
If the version is not defined in settings, the latest found version is
used.
Precached global settings are needed when this is used inside OpenPype.
Args:
staging (bool): Staging version or production version.
global_settings (dict): Optional precached global settings.
Returns:
OpenPypeVersion: Version that should be used.
"""
result = get_expected_studio_version_str(staging, global_settings)
if not result:
return None
return OpenPypeVersion(version=result)
class BootstrapRepos:
"""Class for bootstrapping local OpenPype installation.
@ -301,16 +689,6 @@ class BootstrapRepos:
return v.path
return None
@staticmethod
def get_local_live_version() -> str:
"""Get version of local OpenPype."""
version = {}
path = Path(os.environ["OPENPYPE_ROOT"]) / "openpype" / "version.py"
with open(path, "r") as fp:
exec(fp.read(), version)
return version["__version__"]
@staticmethod
def get_version(repo_dir: Path) -> Union[str, None]:
"""Get version of OpenPype in given directory.
@ -358,7 +736,7 @@ class BootstrapRepos:
# version and use it as a source. Otherwise repo_dir is user
# entered location.
if not repo_dir:
version = self.get_local_live_version()
version = OpenPypeVersion.get_installed_version_str()
repo_dir = self.live_repo_dir
else:
version = self.get_version(repo_dir)
@ -384,7 +762,7 @@ class BootstrapRepos:
destination = self._move_zip_to_data_dir(temp_zip)
return OpenPypeVersion(version=version, path=destination)
return OpenPypeVersion(version=version, path=Path(destination))
def _move_zip_to_data_dir(self, zip_file) -> Union[None, Path]:
"""Move zip with OpenPype version to user data directory.
@ -534,7 +912,6 @@ class BootstrapRepos:
processed_path = file
self._print(f"- processing {processed_path}")
checksums.append(
(
sha256sum(file.as_posix()),
@ -734,6 +1111,65 @@ class BootstrapRepos:
os.environ["PYTHONPATH"] = os.pathsep.join(paths)
@staticmethod
def find_openpype_version(version, staging):
if isinstance(version, str):
version = OpenPypeVersion(version=version)
installed_version = OpenPypeVersion.get_installed_version()
if installed_version == version:
return installed_version
local_versions = OpenPypeVersion.get_local_versions(
staging=staging, production=not staging
)
zip_version = None
for local_version in local_versions:
if local_version == version:
if local_version.path.suffix.lower() == ".zip":
zip_version = local_version
else:
return local_version
if zip_version is not None:
return zip_version
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging, production=not staging
)
for remote_version in remote_versions:
if remote_version == version:
return remote_version
return None
@staticmethod
def find_latest_openpype_version(staging):
installed_version = OpenPypeVersion.get_installed_version()
local_versions = OpenPypeVersion.get_local_versions(
staging=staging
)
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging
)
all_versions = local_versions + remote_versions
if not staging:
all_versions.append(installed_version)
if not all_versions:
return None
all_versions.sort()
latest_version = all_versions[-1]
if latest_version == installed_version:
return latest_version
if not latest_version.path.is_dir():
for version in local_versions:
if version == latest_version and version.path.is_dir():
latest_version = version
break
return latest_version
def find_openpype(
self,
openpype_path: Union[Path, str] = None,
@ -1107,7 +1543,8 @@ class BootstrapRepos:
Args:
zip_item (Path): Zip file to test.
detected_version (OpenPypeVersion): Pype version detected from name.
detected_version (OpenPypeVersion): Pype version detected from
name.
Returns:
True if it is valid OpenPype version, False otherwise.
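
A minimal sketch of how the version-resolution methods added above could be used together; it assumes `igniter.bootstrap_repos` is importable, `OPENPYPE_ROOT` points at an OpenPype build, and the requested version string is only an example.

```python
from igniter.bootstrap_repos import BootstrapRepos, OpenPypeVersion

# Latest production version across the installed build, the local user data
# dir and the remote OPENPYPE_PATH location.
latest = OpenPypeVersion.get_latest_version(staging=False)
print("Latest available:", latest)

# Resolve one specific version; unpacked directories are preferred over zips.
found = BootstrapRepos.find_openpype_version("3.7.0", staging=False)
if found is None:
    print("Requested OpenPype version was not found.")
else:
    print("Found at:", found.path)
```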


@ -12,7 +12,8 @@ from Qt.QtCore import QTimer # noqa
from .install_thread import InstallThread
from .tools import (
validate_mongo_connection,
get_openpype_path_from_db
get_openpype_path_from_db,
get_openpype_icon_path
)
from .nice_progress_bar import NiceProgressBar
@ -187,7 +188,6 @@ class InstallDialog(QtWidgets.QDialog):
current_dir = os.path.dirname(os.path.abspath(__file__))
roboto_font_path = os.path.join(current_dir, "RobotoMono-Regular.ttf")
poppins_font_path = os.path.join(current_dir, "Poppins")
icon_path = os.path.join(current_dir, "openpype_icon.png")
# Install roboto font
QtGui.QFontDatabase.addApplicationFont(roboto_font_path)
@ -196,6 +196,7 @@ class InstallDialog(QtWidgets.QDialog):
QtGui.QFontDatabase.addApplicationFont(filename)
# Load logo
icon_path = get_openpype_icon_path()
pixmap_openpype_logo = QtGui.QPixmap(icon_path)
# Set logo as icon of window
self.setWindowIcon(QtGui.QIcon(pixmap_openpype_logo))


@ -60,7 +60,7 @@ class InstallThread(QThread):
# find local version of OpenPype
bs = BootstrapRepos(
progress_callback=self.set_progress, message=self.message)
local_version = bs.get_local_live_version()
local_version = OpenPypeVersion.get_installed_version_str()
# if user entered nothing, we install OpenPype from the local version.
# zip content of `repos`, copy it to user data dir and append

igniter/message_dialog.py Normal file

@ -0,0 +1,44 @@
from Qt import QtWidgets, QtGui
from .tools import (
load_stylesheet,
get_openpype_icon_path
)
class MessageDialog(QtWidgets.QDialog):
"""Simple message dialog with title, message and OK button."""
def __init__(self, title, message):
super(MessageDialog, self).__init__()
# Set logo as icon of window
icon_path = get_openpype_icon_path()
pixmap_openpype_logo = QtGui.QPixmap(icon_path)
self.setWindowIcon(QtGui.QIcon(pixmap_openpype_logo))
# Set title
self.setWindowTitle(title)
# Set message
label_widget = QtWidgets.QLabel(message, self)
ok_btn = QtWidgets.QPushButton("OK", self)
btns_layout = QtWidgets.QHBoxLayout()
btns_layout.addStretch(1)
btns_layout.addWidget(ok_btn, 0)
layout = QtWidgets.QVBoxLayout(self)
layout.addWidget(label_widget, 1)
layout.addLayout(btns_layout, 0)
ok_btn.clicked.connect(self._on_ok_clicked)
self._label_widget = label_widget
self._ok_btn = ok_btn
def _on_ok_clicked(self):
self.close()
def showEvent(self, event):
super(MessageDialog, self).showEvent(event)
self.setStyleSheet(load_stylesheet())


@ -16,6 +16,11 @@ from pymongo.errors import (
)
class OpenPypeVersionNotFound(Exception):
"""OpenPype version was not found in remote and local repository."""
pass
def should_add_certificate_path_to_mongo_url(mongo_url):
"""Check if should add ca certificate to mongo url.
@ -182,6 +187,28 @@ def get_openpype_path_from_db(url: str) -> Union[str, None]:
return None
def get_expected_studio_version_str(
staging=False, global_settings=None
) -> str:
"""Version that should be currently used in studio.
Args:
staging (bool): Get current version for staging.
global_settings (dict): Optional precached global settings.
Returns:
str: OpenPype version which should be used. Empty string means latest.
"""
mongo_url = os.environ.get("OPENPYPE_MONGO")
if global_settings is None:
global_settings = get_openpype_global_settings(mongo_url)
if staging:
key = "staging_version"
else:
key = "production_version"
return global_settings.get(key) or ""
def load_stylesheet() -> str:
"""Load css style sheet.
@ -192,3 +219,11 @@ def load_stylesheet() -> str:
stylesheet_path = Path(__file__).parent.resolve() / "stylesheet.css"
return stylesheet_path.read_text()
def get_openpype_icon_path() -> str:
"""Path to OpenPype icon png file."""
return os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"openpype_icon.png"
)
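
To illustrate how `get_expected_studio_version_str` resolves the studio-wide version, here is a small sketch; the shape of the passed settings dict (top-level `production_version` / `staging_version` keys) mirrors the lookup above and the values are made up.

```python
from igniter.tools import get_expected_studio_version_str

global_settings = {
    "production_version": "3.7.0",
    "staging_version": "",  # empty string means "use the latest found version"
}

# Passing precached settings avoids a MongoDB lookup.
print(get_expected_studio_version_str(staging=False,
                                      global_settings=global_settings))
# -> "3.7.0"
print(get_expected_studio_version_str(staging=True,
                                      global_settings=global_settings))
# -> ""
```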


@ -72,7 +72,7 @@ class RepairContextAction(pyblish.api.Action):
is available on the plugin.
"""
label = "Repair Context"
label = "Repair"
on = "failed" # This action is only available on a failed plug-in
def process(self, context, plugin):


@ -31,8 +31,6 @@ from .lib import (
)
from .lib.mongo import (
decompose_url,
compose_url,
get_default_components
)
@ -84,8 +82,6 @@ __all__ = [
"Anatomy",
"config",
"execute",
"decompose_url",
"compose_url",
"get_default_components",
"ApplicationManager",
"BuildWorkfile",


@ -138,7 +138,10 @@ def webpublisherwebserver(debug, executable, upload_dir, host=None, port=None):
@click.option("--asset", help="Asset name", default=None)
@click.option("--task", help="Task name", default=None)
@click.option("--app", help="Application name", default=None)
def extractenvironments(output_json_path, project, asset, task, app):
@click.option(
"--envgroup", help="Environment group (e.g. \"farm\")", default=None
)
def extractenvironments(output_json_path, project, asset, task, app, envgroup):
"""Extract environment variables for entered context to a json file.
The entered output filepath will be created if it does not exist.
@ -149,7 +152,7 @@ def extractenvironments(output_json_path, project, asset, task, app):
Context options are "project", "asset", "task", "app"
"""
PypeCommands.extractenvironments(
output_json_path, project, asset, task, app
output_json_path, project, asset, task, app, envgroup
)
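
A hedged sketch of invoking the extended `extractenvironments` command with the new `--envgroup` option; the executable name (`openpype_console`), context values and output path are assumptions for illustration only.

```python
import subprocess

# Hypothetical CLI call; adjust executable name and context to your setup.
subprocess.run([
    "openpype_console", "extractenvironments", "/tmp/env.json",
    "--project", "demo_project", "--asset", "sh010", "--task", "animation",
    "--app", "maya/2022", "--envgroup", "farm",
], check=True)
```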
@ -356,9 +359,22 @@ def run(script):
"--pyargs",
help="Run tests from package",
default=None)
def runtests(folder, mark, pyargs):
@click.option("-t",
"--test_data_folder",
help="Unzipped directory path of test file",
default=None)
@click.option("-s",
"--persist",
help="Persist test DB and published files after test end",
default=None)
@click.option("-a",
"--app_variant",
help="Provide specific app variant for test, empty for latest",
default=None)
def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant):
"""Run all automatic tests after proper initialization via start.py"""
PypeCommands().run_tests(folder, mark, pyargs)
PypeCommands().run_tests(folder, mark, pyargs, test_data_folder,
persist, app_variant)
@main.command()


@ -0,0 +1,33 @@
import os
from openpype.lib import (
PreLaunchHook,
create_workdir_extra_folders
)
class AddLastWorkfileToLaunchArgs(PreLaunchHook):
"""Add last workfile path to launch arguments.
This is not possible to do for all applications the same way.
"""
# Execute after workfile template copy
order = 15
def execute(self):
if not self.application.is_host:
return
env = self.data.get("env") or {}
workdir = env.get("AVALON_WORKDIR")
if not workdir or not os.path.exists(workdir):
return
host_name = self.application.host_name
task_type = self.data["task_type"]
task_name = self.data["task_name"]
project_name = self.data["project_name"]
create_workdir_extra_folders(
workdir, host_name, task_type, task_name, project_name,
)


@ -48,7 +48,7 @@ class GlobalHostDataHook(PreLaunchHook):
"log": self.log
})
prepare_host_environments(temp_data)
prepare_host_environments(temp_data, self.launch_context.env_group)
prepare_context_environments(temp_data)
temp_data.pop("log")


@ -3,7 +3,7 @@ import subprocess
from openpype.lib import (
PreLaunchHook,
get_pype_execute_args
get_openpype_execute_args
)
from openpype import PACKAGE_DIR as OPENPYPE_DIR
@ -35,7 +35,7 @@ class NonPythonHostHook(PreLaunchHook):
"non_python_host_launch.py"
)
new_launch_args = get_pype_execute_args(
new_launch_args = get_openpype_execute_args(
"run", script_path, executable_path
)
# Add workfile path if exists
@ -48,4 +48,3 @@ class NonPythonHostHook(PreLaunchHook):
if remainders:
self.launch_context.launch_args.extend(remainders)


@ -0,0 +1,66 @@
# AfterEffects Integration
Requirements: this extension requires the JavaScript engine, which is
available since CC 16.0.
Please check File > Project Settings > Expressions > Expressions Engine.
## Setup
The After Effects integration requires two components to work: `extension` and `server`.
### Extension
To install the extension download [Extension Manager Command Line tool (ExManCmd)](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#option-2---exmancmd).
```
ExManCmd /install {path to avalon-core}\avalon\aftereffects\extension.zxp
```
OR
download [Anastasiys Extension Manager](https://install.anastasiy.com/)
### Server
The easiest way to get the server and After Effects launch is with:
```
python -c ^"import avalon.aftereffects;avalon.aftereffects.launch(""c:\Program Files\Adobe\Adobe After Effects 2020\Support Files\AfterFX.exe"")^"
```
`avalon.aftereffects.launch` launches the application and server, and also closes the server when After Effects exits.
## Usage
The After Effects extension can be found under `Window > Extensions > OpenPype`. Once launched you should be presented with a panel like this:
![Avalon Panel](panel.PNG "Avalon Panel")
## Developing
### Extension
When developing the extension you can load it [unsigned](https://github.com/Adobe-CEP/CEP-Resources/blob/master/CEP_9.x/Documentation/CEP%209.0%20HTML%20Extension%20Cookbook.md#debugging-unsigned-extensions).
When signing the extension you can use this [guide](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#package-distribute-install-guide).
```
ZXPSignCmd -selfSignedCert NA NA Avalon Avalon-After-Effects avalon extension.p12
ZXPSignCmd -sign {path to avalon-core}\avalon\aftereffects\extension {path to avalon-core}\avalon\aftereffects\extension.zxp extension.p12 avalon
```
### Plugin Examples
These plugins were made with the [polly config](https://github.com/mindbender-studio/config). To fully integrate and load, you will have to use this config and add `image` to the [integration plugin](https://github.com/mindbender-studio/config/blob/master/polly/plugins/publish/integrate_asset.py).
Expected deployed extension location on default Windows:
`c:\Program Files (x86)\Common Files\Adobe\CEP\extensions\com.openpype.AE.panel`
For easier debugging of Javascript:
https://community.adobe.com/t5/download-install/adobe-extension-debuger-problem/td-p/10911704?page=1
Optionally add `--enable-blink-features=ShadowDOMV0,CustomElementsV0` when starting Chrome,
then open localhost:8092
Or use Visual Studio Code https://medium.com/adobetech/extendscript-debugger-for-visual-studio-code-public-release-a2ff6161fa01
## Resources
- https://javascript-tools-guide.readthedocs.io/introduction/index.html
- https://github.com/Adobe-CEP/Getting-Started-guides
- https://github.com/Adobe-CEP/CEP-Resources


@ -1,115 +1,68 @@
import os
import sys
import logging
"""Public API
from avalon import io
from avalon import api as avalon
from Qt import QtWidgets
from openpype import lib, api
import pyblish.api as pyblish
import openpype.hosts.aftereffects
Anything that isn't defined here is INTERNAL and unreliable for external use.
"""
from .launch_logic import (
get_stub,
stub,
)
from .pipeline import (
ls,
get_asset_settings,
install,
uninstall,
list_instances,
remove_instance,
containerise
)
from .workio import (
file_extensions,
has_unsaved_changes,
save_file,
open_file,
current_file,
work_root,
)
from .lib import (
maintained_selection,
get_extension_manifest_path
)
from .plugin import (
AfterEffectsLoader
)
log = logging.getLogger("openpype.hosts.aftereffects")
__all__ = [
# launch_logic
"get_stub",
"stub",
# pipeline
"ls",
"get_asset_settings",
"install",
"uninstall",
"list_instances",
"remove_instance",
"containerise",
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.aftereffects.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
"file_extensions",
"has_unsaved_changes",
"save_file",
"open_file",
"current_file",
"work_root",
# lib
"maintained_selection",
"get_extension_manifest_path",
def check_inventory():
if not lib.any_outdated():
return
host = pyblish.registered_host()
outdated_containers = []
for container in host.ls():
representation = container['representation']
representation_doc = io.find_one(
{
"_id": io.ObjectId(representation),
"type": "representation"
},
projection={"parent": True}
)
if representation_doc and not lib.is_latest(representation_doc):
outdated_containers.append(container)
# Warn about outdated containers.
print("Starting new QApplication..")
app = QtWidgets.QApplication(sys.argv)
message_box = QtWidgets.QMessageBox()
message_box.setIcon(QtWidgets.QMessageBox.Warning)
msg = "There are outdated containers in the scene."
message_box.setText(msg)
message_box.exec_()
# Garbage collect QApplication.
del app
def application_launch():
check_inventory()
def install():
print("Installing Pype config...")
pyblish.register_plugin_path(PUBLISH_PATH)
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
log.info(PUBLISH_PATH)
pyblish.register_callback(
"instanceToggled", on_pyblish_instance_toggled
)
avalon.on("application.launched", application_launch)
def uninstall():
pyblish.deregister_plugin_path(PUBLISH_PATH)
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle layer visibility on instance toggles."""
instance[0].Visible = new_value
def get_asset_settings():
"""Get settings on current asset from database.
Returns:
dict: Scene data.
"""
asset_data = lib.get_asset()["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
handle_start = asset_data.get("handleStart")
handle_end = asset_data.get("handleEnd")
resolution_width = asset_data.get("resolutionWidth")
resolution_height = asset_data.get("resolutionHeight")
duration = (frame_end - frame_start + 1) + handle_start + handle_end
entity_type = asset_data.get("entityType")
scene_data = {
"fps": fps,
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end,
"resolutionWidth": resolution_width,
"resolutionHeight": resolution_height,
"duration": duration
}
return scene_data
# plugin
"AfterEffectsLoader"
]
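
A brief sketch of how client code might consume this public API; the module path (`openpype.hosts.aftereffects.api`) is an assumption based on the imports above, and it presumes a running After Effects session with the OpenPype extension connected.

```python
# Assumed module path for the public API re-exported above.
from openpype.hosts.aftereffects import api

stub = api.get_stub()                # connection to the running AE extension
settings = api.get_asset_settings()  # asset data pulled from the database
print("fps:", settings.get("fps"), "duration:", settings.get("duration"))
```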

Binary file not shown.


@ -0,0 +1,32 @@
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionList>
<Extension Id="com.openpype.AE.panel">
<HostList>
<!-- Comment Host tags according to the apps you want your panel to support -->
<!-- Photoshop -->
<Host Name="PHXS" Port="8088"/>
<!-- Illustrator -->
<Host Name="ILST" Port="8089"/>
<!-- InDesign -->
<Host Name="IDSN" Port="8090" />
<!-- Premiere -->
<Host Name="PPRO" Port="8091" />
<!-- AfterEffects -->
<Host Name="AEFT" Port="8092" />
<!-- PRELUDE -->
<Host Name="PRLD" Port="8093" />
<!-- FLASH Pro -->
<Host Name="FLPR" Port="8094" />
</HostList>
</Extension>
</ExtensionList>


@ -0,0 +1,79 @@
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.21"
ExtensionBundleName="openpype" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ExtensionList>
<Extension Id="com.openpype.AE.panel" Version="1.0" />
</ExtensionList>
<ExecutionEnvironment>
<HostList>
<!-- Uncomment Host tags according to the apps you want your panel to support -->
<!-- Photoshop -->
<!--<Host Name="PHXS" Version="[14.0,19.0]" /> -->
<!-- <Host Name="PHSP" Version="[14.0,19.0]" /> -->
<!-- Illustrator -->
<!-- <Host Name="ILST" Version="[18.0,22.0]" /> -->
<!-- InDesign -->
<!-- <Host Name="IDSN" Version="[10.0,13.0]" /> -->
<!-- Premiere -->
<!-- <Host Name="PPRO" Version="[8.0,12.0]" /> -->
<!-- AfterEffects -->
<Host Name="AEFT" Version="[13.0,99.0]" />
<!-- PRELUDE -->
<!-- <Host Name="PRLD" Version="[3.0,7.0]" /> -->
<!-- FLASH Pro -->
<!-- <Host Name="FLPR" Version="[14.0,18.0]" /> -->
</HostList>
<LocaleList>
<Locale Code="All" />
</LocaleList>
<RequiredRuntimeList>
<RequiredRuntime Name="CSXS" Version="9.0" />
</RequiredRuntimeList>
</ExecutionEnvironment>
<DispatchInfoList>
<Extension Id="com.openpype.AE.panel">
<DispatchInfo >
<Resources>
<MainPath>./index.html</MainPath>
<ScriptPath>./jsx/hostscript.jsx</ScriptPath>
</Resources>
<Lifecycle>
<AutoVisible>true</AutoVisible>
</Lifecycle>
<UI>
<Type>Panel</Type>
<Menu>OpenPype</Menu>
<Geometry>
<Size>
<Height>200</Height>
<Width>100</Width>
</Size>
<!--<MinSize>
<Height>550</Height>
<Width>400</Width>
</MinSize>
<MaxSize>
<Height>550</Height>
<Width>400</Width>
</MaxSize>-->
</Geometry>
<Icons>
<Icon Type="Normal">./icons/iconNormal.png</Icon>
<Icon Type="RollOver">./icons/iconRollover.png</Icon>
<Icon Type="Disabled">./icons/iconDisabled.png</Icon>
<Icon Type="DarkNormal">./icons/iconDarkNormal.png</Icon>
<Icon Type="DarkRollOver">./icons/iconDarkRollover.png</Icon>
</Icons>
</UI>
</DispatchInfo>
</Extension>
</DispatchInfoList>
</ExtensionManifest>


@ -0,0 +1,327 @@
/*
* HTML5 Boilerplate
*
* What follows is the result of much research on cross-browser styling.
* Credit left inline and big thanks to Nicolas Gallagher, Jonathan Neal,
* Kroc Camen, and the H5BP dev community and team.
*
* Detailed information about this CSS: h5bp.com/css
*
* ==|== normalize ==========================================================
*/
/* =============================================================================
HTML5 display definitions
========================================================================== */
article, aside, details, figcaption, figure, footer, header, hgroup, nav, section { display: block; }
audio, canvas, video { display: inline-block; *display: inline; *zoom: 1; }
audio:not([controls]) { display: none; }
[hidden] { display: none; }
/* =============================================================================
Base
========================================================================== */
/*
* 1. Correct text resizing oddly in IE6/7 when body font-size is set using em units
* 2. Force vertical scrollbar in non-IE
* 3. Prevent iOS text size adjust on device orientation change, without disabling user zoom: h5bp.com/g
*/
html { font-size: 100%; overflow-y: scroll; -webkit-text-size-adjust: 100%; -ms-text-size-adjust: 100%; }
body { margin: 0; font-size: 100%; line-height: 1.231; }
body, button, input, select, textarea { font-family: helvetica, arial, "lucida grande", verdana, "メイリオ", "ＭＳ Ｐゴシック", sans-serif; color: #222; }
/*
* Remove text-shadow in selection highlight: h5bp.com/i
* These selection declarations have to be separate
* Also: hot pink! (or customize the background color to match your design)
*/
::selection { text-shadow: none; background-color: highlight; color: highlighttext; }
/* =============================================================================
Links
========================================================================== */
a { color: #00e; }
a:visited { color: #551a8b; }
a:hover { color: #06e; }
a:focus { outline: thin dotted; }
/* Improve readability when focused and hovered in all browsers: h5bp.com/h */
a:hover, a:active { outline: 0; }
/* =============================================================================
Typography
========================================================================== */
abbr[title] { border-bottom: 1px dotted; }
b, strong { font-weight: bold; }
blockquote { margin: 1em 40px; }
dfn { font-style: italic; }
hr { display: block; height: 1px; border: 0; border-top: 1px solid #ccc; margin: 1em 0; padding: 0; }
ins { background: #ff9; color: #000; text-decoration: none; }
mark { background: #ff0; color: #000; font-style: italic; font-weight: bold; }
/* Redeclare monospace font family: h5bp.com/j */
pre, code, kbd, samp { font-family: monospace, serif; _font-family: 'courier new', monospace; font-size: 1em; }
/* Improve readability of pre-formatted text in all browsers */
pre { white-space: pre; white-space: pre-wrap; word-wrap: break-word; }
q { quotes: none; }
q:before, q:after { content: ""; content: none; }
small { font-size: 85%; }
/* Position subscript and superscript content without affecting line-height: h5bp.com/k */
sub, sup { font-size: 75%; line-height: 0; position: relative; vertical-align: baseline; }
sup { top: -0.5em; }
sub { bottom: -0.25em; }
/* =============================================================================
Lists
========================================================================== */
ul, ol { margin: 1em 0; padding: 0 0 0 40px; }
dd { margin: 0 0 0 40px; }
nav ul, nav ol { list-style: none; list-style-image: none; margin: 0; padding: 0; }
/* =============================================================================
Embedded content
========================================================================== */
/*
* 1. Improve image quality when scaled in IE7: h5bp.com/d
* 2. Remove the gap between images and borders on image containers: h5bp.com/e
*/
img { border: 0; -ms-interpolation-mode: bicubic; vertical-align: middle; }
/*
* Correct overflow not hidden in IE9
*/
svg:not(:root) { overflow: hidden; }
/* =============================================================================
Figures
========================================================================== */
figure { margin: 0; }
/* =============================================================================
Forms
========================================================================== */
form { margin: 0; }
fieldset { border: 0; margin: 0; padding: 0; }
/* Indicate that 'label' will shift focus to the associated form element */
label { cursor: pointer; }
/*
* 1. Correct color not inheriting in IE6/7/8/9
* 2. Correct alignment displayed oddly in IE6/7
*/
legend { border: 0; *margin-left: -7px; padding: 0; }
/*
* 1. Correct font-size not inheriting in all browsers
* 2. Remove margins in FF3/4 S5 Chrome
* 3. Define consistent vertical alignment display in all browsers
*/
button, input, select, textarea { font-size: 100%; margin: 0; vertical-align: baseline; *vertical-align: middle; }
/*
* 1. Define line-height as normal to match FF3/4 (set using !important in the UA stylesheet)
*/
button, input { line-height: normal; }
/*
* 1. Display hand cursor for clickable form elements
* 2. Allow styling of clickable form elements in iOS
* 3. Correct inner spacing displayed oddly in IE7 (doesn't effect IE6)
*/
button, input[type="button"], input[type="reset"], input[type="submit"] { cursor: pointer; -webkit-appearance: button; *overflow: visible; }
/*
* Consistent box sizing and appearance
*/
input[type="checkbox"], input[type="radio"] { box-sizing: border-box; padding: 0; }
input[type="search"] { -webkit-appearance: textfield; -moz-box-sizing: content-box; -webkit-box-sizing: content-box; box-sizing: content-box; }
input[type="search"]::-webkit-search-decoration { -webkit-appearance: none; }
/*
* Remove inner padding and border in FF3/4: h5bp.com/l
*/
button::-moz-focus-inner, input::-moz-focus-inner { border: 0; padding: 0; }
/*
* 1. Remove default vertical scrollbar in IE6/7/8/9
* 2. Allow only vertical resizing
*/
textarea { overflow: auto; vertical-align: top; resize: vertical; }
/* Colors for form validity */
input:valid, textarea:valid { }
input:invalid, textarea:invalid { background-color: #f0dddd; }
/* =============================================================================
Tables
========================================================================== */
table { border-collapse: collapse; border-spacing: 0; }
td { vertical-align: top; }
/* ==|== primary styles =====================================================
Author:
========================================================================== */
/* ==|== media queries ======================================================
PLACEHOLDER Media Queries for Responsive Design.
These override the primary ('mobile first') styles
Modify as content requires.
========================================================================== */
@media only screen and (min-width: 480px) {
/* Style adjustments for viewports 480px and over go here */
}
@media only screen and (min-width: 768px) {
/* Style adjustments for viewports 768px and over go here */
}
/* ==|== non-semantic helper classes ========================================
Please define your styles before this section.
========================================================================== */
/* For image replacement */
.ir { display: block; border: 0; text-indent: -999em; overflow: hidden; background-color: transparent; background-repeat: no-repeat; text-align: left; direction: ltr; }
.ir br { display: none; }
/* Hide from both screenreaders and browsers: h5bp.com/u */
.hidden { display: none !important; visibility: hidden; }
/* Hide only visually, but have it available for screenreaders: h5bp.com/v */
.visuallyhidden { border: 0; clip: rect(0 0 0 0); height: 1px; margin: -1px; overflow: hidden; padding: 0; position: absolute; width: 1px; }
/* Extends the .visuallyhidden class to allow the element to be focusable when navigated to via the keyboard: h5bp.com/p */
.visuallyhidden.focusable:active, .visuallyhidden.focusable:focus { clip: auto; height: auto; margin: 0; overflow: visible; position: static; width: auto; }
/* Hide visually and from screenreaders, but maintain layout */
.invisible { visibility: hidden; }
/* Contain floats: h5bp.com/q */
.clearfix:before, .clearfix:after { content: ""; display: table; }
.clearfix:after { clear: both; }
.clearfix { *zoom: 1; }
/* ==|== print styles =======================================================
Print styles.
Inlined to avoid required HTTP connection: h5bp.com/r
========================================================================== */
@media print {
* { background: transparent !important; color: black !important; box-shadow:none !important; text-shadow: none !important; filter:none !important; -ms-filter: none !important; } /* Black prints faster: h5bp.com/s */
a, a:visited { text-decoration: underline; }
a[href]:after { content: " (" attr(href) ")"; }
abbr[title]:after { content: " (" attr(title) ")"; }
.ir a:after, a[href^="javascript:"]:after, a[href^="#"]:after { content: ""; } /* Don't show links for images, or javascript/internal links */
pre, blockquote { border: 1px solid #999; page-break-inside: avoid; }
table { display: table-header-group; } /* h5bp.com/t */
tr, img { page-break-inside: avoid; }
img { max-width: 100% !important; }
@page { margin: 0.5cm; }
p, h2, h3 { orphans: 3; widows: 3; }
h2, h3 { page-break-after: avoid; }
}
/* reflow reset for -webkit-margin-before: 1em */
p { margin: 0; }
html {
overflow-y: auto;
background-color: transparent;
height: 100%;
}
body {
background: #fff;
font: normal 100%;
position: relative;
height: 100%;
}
body, div, img, p, button, input, select, textarea {
box-sizing: border-box;
}
.image {
display: block;
}
input {
cursor: default;
display: block;
}
input[type=button] {
background-color: #e5e9e8;
border: 1px solid #9daca9;
border-radius: 4px;
box-shadow: inset 0 1px #fff;
font: inherit;
letter-spacing: inherit;
text-indent: inherit;
color: inherit;
}
input[type=button]:hover {
background-color: #eff1f1;
}
input[type=button]:active {
background-color: #d2d6d6;
border: 1px solid #9daca9;
box-shadow: inset 0 1px rgba(0,0,0,0.1);
}
/* Reset anchor styles to an unstyled default to be in parity with design surface. It
is presumed that most link styles in real-world designs are custom (non-default). */
a, a:visited, a:hover, a:active {
color: inherit;
text-decoration: inherit;
}

View file

@ -0,0 +1,51 @@
/*Your styles*/
body {
margin: 10px;
}
#content {
margin-right:auto;
margin-left:auto;
vertical-align:middle;
width:100%;
}
#btn_test{
width: 100%;
}
/*
Those classes will be edited at runtime with values specified
by the settings of the CC application
*/
.hostFontColor{}
.hostFontFamily{}
.hostFontSize{}
/*font family, color and size*/
.hostFont{}
/*background color*/
.hostBgd{}
/*lighter background color*/
.hostBgdLight{}
/*darker background color*/
.hostBgdDark{}
/*background color and font*/
.hostElt{}
.hostButton{
border:1px solid;
border-radius:2px;
height:20px;
vertical-align:bottom;
font-family:inherit;
color:inherit;
font-size:inherit;
}
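The empty host* selectors above are intentional placeholders: js/themeManager.js (added later in this commit) reads the host application's AppSkinInfo and appends the real rules at runtime to the stylesheet referenced by the "hostStyle" link in index.html. A rough sketch of the kind of rules that end up injected; the colour and size values here are illustrative only:

    // inside themeManager.updateThemeWithAppSkinInfo(), roughly:
    addRule("hostStyle", ".hostBgd", "background-color: #535353");
    addRule("hostStyle", ".hostFontSize", "font-size: 10px;");
    addRule("hostStyle", ".hostButton", "border-color: #1f1f1f");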

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 18 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 18 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 18 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 18 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 18 KiB

View file

@ -0,0 +1,136 @@
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<link rel="stylesheet" href="css/topcoat-desktop-dark.min.css"/>
<link id="hostStyle" rel="stylesheet" href="css/styles.css"/>
<style type="text/css">
html, body, iframe {
width: 100%;
height: 100%;
border: 0px;
margin: 0px;
overflow: hidden;
}
button {width: 100%;}
</style>
<style>
button {width: 100%;}
body {margin:0; padding:0; height: 100%;}
html {height: 100%;}
</style>
<title></title>
<script src="js/libs/jquery-2.0.2.min.js"></script>
<script type="text/javascript">
    // One binding block instead of seven copies: map each panel button id
    // to the OpenPype server route it should invoke over the websocket RPC.
    $(function() {
        var buttonRoutes = {
            "workfiles-button": "AfterEffects.workfiles_route",
            "creator-button": "AfterEffects.creator_route",
            "loader-button": "AfterEffects.loader_route",
            "publish-button": "AfterEffects.publish_route",
            "sceneinventory-button": "AfterEffects.sceneinventory_route",
            "subsetmanager-button": "AfterEffects.subsetmanager_route",
            "experimental-button": "AfterEffects.experimental_tools_route"
        };
        $.each(buttonRoutes, function (buttonId, route) {
            $("a#" + buttonId).bind("click", function () {
                RPC.call(route).then(function (data) {
                }, function (error) {
                    alert(error);
                });
            });
        });
    });
</script>
</head>
<body class="hostElt">
<div id="content">
<div>
<div><a href="#" id="workfiles-button"><button class="hostFontSize">Workfiles...</button></a></div>
<div><a href="#" id="creator-button"><button class="hostFontSize">Create...</button></a></div>
<div><a href="#" id="loader-button"><button class="hostFontSize">Load...</button></a></div>
<div><a href="#" id="publish-button"><button class="hostFontSize">Publish...</button></a></div>
<div><a href="#" id="sceneinventory-button"><button class="hostFontSize">Manage...</button></a></div>
<div><a href="#" id="subsetmanager-button"><button class="hostFontSize">Subset Manager...</button></a></div>
<div><a href="#" id="experimental-button"><button class="hostFontSize">Experimental Tools...</button></a></div>
</div>
</div>
<!-- <script src="js/libs/PlayerDebugMode"></script> -->
<script src="js/libs/wsrpc.js"></script>
<script src="js/libs/loglevel.min.js"></script>
<script src="js/libs/CSInterface.js"></script>
<script src="js/themeManager.js"></script>
<script src="js/main.js"></script>
</body>
</html>

File diff suppressed because it is too large Load diff

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,530 @@
// json2.js
// 2017-06-12
// Public Domain.
// NO WARRANTY EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.
// USE YOUR OWN COPY. IT IS EXTREMELY UNWISE TO LOAD CODE FROM SERVERS YOU DO
// NOT CONTROL.
// This file creates a global JSON object containing two methods: stringify
// and parse. This file provides the ES5 JSON capability to ES3 systems.
// If a project might run on IE8 or earlier, then this file should be included.
// This file does nothing on ES5 systems.
// JSON.stringify(value, replacer, space)
// value any JavaScript value, usually an object or array.
// replacer an optional parameter that determines how object
// values are stringified for objects. It can be a
// function or an array of strings.
// space an optional parameter that specifies the indentation
// of nested structures. If it is omitted, the text will
// be packed without extra whitespace. If it is a number,
// it will specify the number of spaces to indent at each
// level. If it is a string (such as "\t" or "&nbsp;"),
// it contains the characters used to indent at each level.
// This method produces a JSON text from a JavaScript value.
// When an object value is found, if the object contains a toJSON
// method, its toJSON method will be called and the result will be
// stringified. A toJSON method does not serialize: it returns the
// value represented by the name/value pair that should be serialized,
// or undefined if nothing should be serialized. The toJSON method
// will be passed the key associated with the value, and this will be
// bound to the value.
// For example, this would serialize Dates as ISO strings.
// Date.prototype.toJSON = function (key) {
// function f(n) {
// // Format integers to have at least two digits.
// return (n < 10)
// ? "0" + n
// : n;
// }
// return this.getUTCFullYear() + "-" +
// f(this.getUTCMonth() + 1) + "-" +
// f(this.getUTCDate()) + "T" +
// f(this.getUTCHours()) + ":" +
// f(this.getUTCMinutes()) + ":" +
// f(this.getUTCSeconds()) + "Z";
// };
// You can provide an optional replacer method. It will be passed the
// key and value of each member, with this bound to the containing
// object. The value that is returned from your method will be
// serialized. If your method returns undefined, then the member will
// be excluded from the serialization.
// If the replacer parameter is an array of strings, then it will be
// used to select the members to be serialized. It filters the results
// such that only members with keys listed in the replacer array are
// stringified.
// Values that do not have JSON representations, such as undefined or
// functions, will not be serialized. Such values in objects will be
// dropped; in arrays they will be replaced with null. You can use
// a replacer function to replace those with JSON values.
// JSON.stringify(undefined) returns undefined.
// The optional space parameter produces a stringification of the
// value that is filled with line breaks and indentation to make it
// easier to read.
// If the space parameter is a non-empty string, then that string will
// be used for indentation. If the space parameter is a number, then
// the indentation will be that many spaces.
// Example:
// text = JSON.stringify(["e", {pluribus: "unum"}]);
// // text is '["e",{"pluribus":"unum"}]'
// text = JSON.stringify(["e", {pluribus: "unum"}], null, "\t");
// // text is '[\n\t"e",\n\t{\n\t\t"pluribus": "unum"\n\t}\n]'
// text = JSON.stringify([new Date()], function (key, value) {
// return this[key] instanceof Date
// ? "Date(" + this[key] + ")"
// : value;
// });
// // text is '["Date(---current time---)"]'
// JSON.parse(text, reviver)
// This method parses a JSON text to produce an object or array.
// It can throw a SyntaxError exception.
// The optional reviver parameter is a function that can filter and
// transform the results. It receives each of the keys and values,
// and its return value is used instead of the original value.
// If it returns what it received, then the structure is not modified.
// If it returns undefined then the member is deleted.
// Example:
// // Parse the text. Values that look like ISO date strings will
// // be converted to Date objects.
// myData = JSON.parse(text, function (key, value) {
// var a;
// if (typeof value === "string") {
// a =
// /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)Z$/.exec(value);
// if (a) {
// return new Date(Date.UTC(
// +a[1], +a[2] - 1, +a[3], +a[4], +a[5], +a[6]
// ));
// }
// return value;
// }
// });
// myData = JSON.parse(
// "[\"Date(09/09/2001)\"]",
// function (key, value) {
// var d;
// if (
// typeof value === "string"
// && value.slice(0, 5) === "Date("
// && value.slice(-1) === ")"
// ) {
// d = new Date(value.slice(5, -1));
// if (d) {
// return d;
// }
// }
// return value;
// }
// );
// This is a reference implementation. You are free to copy, modify, or
// redistribute.
/*jslint
eval, for, this
*/
/*property
JSON, apply, call, charCodeAt, getUTCDate, getUTCFullYear, getUTCHours,
getUTCMinutes, getUTCMonth, getUTCSeconds, hasOwnProperty, join,
lastIndex, length, parse, prototype, push, replace, slice, stringify,
test, toJSON, toString, valueOf
*/
// Create a JSON object only if one does not already exist. We create the
// methods in a closure to avoid creating global variables.
if (typeof JSON !== "object") {
JSON = {};
}
(function () {
"use strict";
var rx_one = /^[\],:{}\s]*$/;
var rx_two = /\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g;
var rx_three = /"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g;
var rx_four = /(?:^|:|,)(?:\s*\[)+/g;
var rx_escapable = /[\\"\u0000-\u001f\u007f-\u009f\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g;
var rx_dangerous = /[\u0000\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g;
function f(n) {
// Format integers to have at least two digits.
return (n < 10)
? "0" + n
: n;
}
function this_value() {
return this.valueOf();
}
if (typeof Date.prototype.toJSON !== "function") {
Date.prototype.toJSON = function () {
return isFinite(this.valueOf())
? (
this.getUTCFullYear()
+ "-"
+ f(this.getUTCMonth() + 1)
+ "-"
+ f(this.getUTCDate())
+ "T"
+ f(this.getUTCHours())
+ ":"
+ f(this.getUTCMinutes())
+ ":"
+ f(this.getUTCSeconds())
+ "Z"
)
: null;
};
Boolean.prototype.toJSON = this_value;
Number.prototype.toJSON = this_value;
String.prototype.toJSON = this_value;
}
var gap;
var indent;
var meta;
var rep;
function quote(string) {
// If the string contains no control characters, no quote characters, and no
// backslash characters, then we can safely slap some quotes around it.
// Otherwise we must also replace the offending characters with safe escape
// sequences.
rx_escapable.lastIndex = 0;
return rx_escapable.test(string)
? "\"" + string.replace(rx_escapable, function (a) {
var c = meta[a];
return typeof c === "string"
? c
: "\\u" + ("0000" + a.charCodeAt(0).toString(16)).slice(-4);
}) + "\""
: "\"" + string + "\"";
}
function str(key, holder) {
// Produce a string from holder[key].
var i; // The loop counter.
var k; // The member key.
var v; // The member value.
var length;
var mind = gap;
var partial;
var value = holder[key];
// If the value has a toJSON method, call it to obtain a replacement value.
if (
value
&& typeof value === "object"
&& typeof value.toJSON === "function"
) {
value = value.toJSON(key);
}
// If we were called with a replacer function, then call the replacer to
// obtain a replacement value.
if (typeof rep === "function") {
value = rep.call(holder, key, value);
}
// What happens next depends on the value's type.
switch (typeof value) {
case "string":
return quote(value);
case "number":
// JSON numbers must be finite. Encode non-finite numbers as null.
return (isFinite(value))
? String(value)
: "null";
case "boolean":
case "null":
// If the value is a boolean or null, convert it to a string. Note:
// typeof null does not produce "null". The case is included here in
// the remote chance that this gets fixed someday.
return String(value);
// If the type is "object", we might be dealing with an object or an array or
// null.
case "object":
// Due to a specification blunder in ECMAScript, typeof null is "object",
// so watch out for that case.
if (!value) {
return "null";
}
// Make an array to hold the partial results of stringifying this object value.
gap += indent;
partial = [];
// Is the value an array?
if (Object.prototype.toString.apply(value) === "[object Array]") {
// The value is an array. Stringify every element. Use null as a placeholder
// for non-JSON values.
length = value.length;
for (i = 0; i < length; i += 1) {
partial[i] = str(i, value) || "null";
}
// Join all of the elements together, separated with commas, and wrap them in
// brackets.
v = partial.length === 0
? "[]"
: gap
? (
"[\n"
+ gap
+ partial.join(",\n" + gap)
+ "\n"
+ mind
+ "]"
)
: "[" + partial.join(",") + "]";
gap = mind;
return v;
}
// If the replacer is an array, use it to select the members to be stringified.
if (rep && typeof rep === "object") {
length = rep.length;
for (i = 0; i < length; i += 1) {
if (typeof rep[i] === "string") {
k = rep[i];
v = str(k, value);
if (v) {
partial.push(quote(k) + (
(gap)
? ": "
: ":"
) + v);
}
}
}
} else {
// Otherwise, iterate through all of the keys in the object.
for (k in value) {
if (Object.prototype.hasOwnProperty.call(value, k)) {
v = str(k, value);
if (v) {
partial.push(quote(k) + (
(gap)
? ": "
: ":"
) + v);
}
}
}
}
// Join all of the member texts together, separated with commas,
// and wrap them in braces.
v = partial.length === 0
? "{}"
: gap
? "{\n" + gap + partial.join(",\n" + gap) + "\n" + mind + "}"
: "{" + partial.join(",") + "}";
gap = mind;
return v;
}
}
// If the JSON object does not yet have a stringify method, give it one.
if (typeof JSON.stringify !== "function") {
meta = { // table of character substitutions
"\b": "\\b",
"\t": "\\t",
"\n": "\\n",
"\f": "\\f",
"\r": "\\r",
"\"": "\\\"",
"\\": "\\\\"
};
JSON.stringify = function (value, replacer, space) {
// The stringify method takes a value and an optional replacer, and an optional
// space parameter, and returns a JSON text. The replacer can be a function
// that can replace values, or an array of strings that will select the keys.
// A default replacer method can be provided. Use of the space parameter can
// produce text that is more easily readable.
var i;
gap = "";
indent = "";
// If the space parameter is a number, make an indent string containing that
// many spaces.
if (typeof space === "number") {
for (i = 0; i < space; i += 1) {
indent += " ";
}
// If the space parameter is a string, it will be used as the indent string.
} else if (typeof space === "string") {
indent = space;
}
// If there is a replacer, it must be a function or an array.
// Otherwise, throw an error.
rep = replacer;
if (replacer && typeof replacer !== "function" && (
typeof replacer !== "object"
|| typeof replacer.length !== "number"
)) {
throw new Error("JSON.stringify");
}
// Make a fake root object containing our value under the key of "".
// Return the result of stringifying the value.
return str("", {"": value});
};
}
// If the JSON object does not yet have a parse method, give it one.
if (typeof JSON.parse !== "function") {
JSON.parse = function (text, reviver) {
// The parse method takes a text and an optional reviver function, and returns
// a JavaScript value if the text is a valid JSON text.
var j;
function walk(holder, key) {
// The walk method is used to recursively walk the resulting structure so
// that modifications can be made.
var k;
var v;
var value = holder[key];
if (value && typeof value === "object") {
for (k in value) {
if (Object.prototype.hasOwnProperty.call(value, k)) {
v = walk(value, k);
if (v !== undefined) {
value[k] = v;
} else {
delete value[k];
}
}
}
}
return reviver.call(holder, key, value);
}
// Parsing happens in four stages. In the first stage, we replace certain
// Unicode characters with escape sequences. JavaScript handles many characters
// incorrectly, either silently deleting them, or treating them as line endings.
text = String(text);
rx_dangerous.lastIndex = 0;
if (rx_dangerous.test(text)) {
text = text.replace(rx_dangerous, function (a) {
return (
"\\u"
+ ("0000" + a.charCodeAt(0).toString(16)).slice(-4)
);
});
}
// In the second stage, we run the text against regular expressions that look
// for non-JSON patterns. We are especially concerned with "()" and "new"
// because they can cause invocation, and "=" because it can cause mutation.
// But just to be safe, we want to reject all unexpected forms.
// We split the second stage into 4 regexp operations in order to work around
// crippling inefficiencies in IE's and Safari's regexp engines. First we
// replace the JSON backslash pairs with "@" (a non-JSON character). Second, we
// replace all simple value tokens with "]" characters. Third, we delete all
// open brackets that follow a colon or comma or that begin the text. Finally,
// we look to see that the remaining characters are only whitespace or "]" or
// "," or ":" or "{" or "}". If that is so, then the text is safe for eval.
if (
rx_one.test(
text
.replace(rx_two, "@")
.replace(rx_three, "]")
.replace(rx_four, "")
)
) {
// In the third stage we use the eval function to compile the text into a
// JavaScript structure. The "{" operator is subject to a syntactic ambiguity
// in JavaScript: it can begin a block or an object literal. We wrap the text
// in parens to eliminate the ambiguity.
j = eval("(" + text + ")");
// In the optional fourth stage, we recursively walk the new structure, passing
// each name/value pair to a reviver function for possible transformation.
return (typeof reviver === "function")
? walk({"": j}, "")
: j;
}
// If the text is not JSON parseable, then a SyntaxError is thrown.
throw new SyntaxError("JSON.parse");
};
}
}());

View file

@ -0,0 +1,2 @@
/*! loglevel - v1.6.8 - https://github.com/pimterry/loglevel - (c) 2020 Tim Perry - licensed MIT */
!function(a,b){"use strict";"function"==typeof define&&define.amd?define(b):"object"==typeof module&&module.exports?module.exports=b():a.log=b()}(this,function(){"use strict";function a(a,b){var c=a[b];if("function"==typeof c.bind)return c.bind(a);try{return Function.prototype.bind.call(c,a)}catch(b){return function(){return Function.prototype.apply.apply(c,[a,arguments])}}}function b(){console.log&&(console.log.apply?console.log.apply(console,arguments):Function.prototype.apply.apply(console.log,[console,arguments])),console.trace&&console.trace()}function c(c){return"debug"===c&&(c="log"),typeof console!==i&&("trace"===c&&j?b:void 0!==console[c]?a(console,c):void 0!==console.log?a(console,"log"):h)}function d(a,b){for(var c=0;c<k.length;c++){var d=k[c];this[d]=c<a?h:this.methodFactory(d,a,b)}this.log=this.debug}function e(a,b,c){return function(){typeof console!==i&&(d.call(this,b,c),this[a].apply(this,arguments))}}function f(a,b,d){return c(a)||e.apply(this,arguments)}function g(a,b,c){function e(a){var b=(k[a]||"silent").toUpperCase();if(typeof window!==i){try{return void(window.localStorage[l]=b)}catch(a){}try{window.document.cookie=encodeURIComponent(l)+"="+b+";"}catch(a){}}}function g(){var a;if(typeof window!==i){try{a=window.localStorage[l]}catch(a){}if(typeof a===i)try{var b=window.document.cookie,c=b.indexOf(encodeURIComponent(l)+"=");-1!==c&&(a=/^([^;]+)/.exec(b.slice(c))[1])}catch(a){}return void 0===j.levels[a]&&(a=void 0),a}}var h,j=this,l="loglevel";a&&(l+=":"+a),j.name=a,j.levels={TRACE:0,DEBUG:1,INFO:2,WARN:3,ERROR:4,SILENT:5},j.methodFactory=c||f,j.getLevel=function(){return h},j.setLevel=function(b,c){if("string"==typeof b&&void 0!==j.levels[b.toUpperCase()]&&(b=j.levels[b.toUpperCase()]),!("number"==typeof b&&b>=0&&b<=j.levels.SILENT))throw"log.setLevel() called with invalid level: "+b;if(h=b,!1!==c&&e(b),d.call(j,b,a),typeof console===i&&b<j.levels.SILENT)return"No console available for logging"},j.setDefaultLevel=function(a){g()||j.setLevel(a,!1)},j.enableAll=function(a){j.setLevel(j.levels.TRACE,a)},j.disableAll=function(a){j.setLevel(j.levels.SILENT,a)};var m=g();null==m&&(m=null==b?"WARN":b),j.setLevel(m,!1)}var h=function(){},i="undefined",j=typeof window!==i&&typeof window.navigator!==i&&/Trident\/|MSIE /.test(window.navigator.userAgent),k=["trace","debug","info","warn","error"],l=new g,m={};l.getLogger=function(a){if("string"!=typeof a||""===a)throw new TypeError("You must supply a name when creating a logger.");var b=m[a];return b||(b=m[a]=new g(a,l.getLevel(),l.methodFactory)),b};var n=typeof window!==i?window.log:void 0;return l.noConflict=function(){return typeof window!==i&&window.log===l&&(window.log=n),l},l.getLoggers=function(){return m},l});
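The panel scripts below route all diagnostics through this library's global "log" object (log.warn, log.debug). A minimal usage sketch of the bundled loglevel API; the level names and logger name are illustrative:

    // lower the verbosity threshold; loglevel persists the choice to localStorage
    log.setLevel("debug");
    log.debug("now visible - hidden at the default WARN level");
    log.warn("visible at WARN and more verbose levels");
    // named child loggers can be tuned independently of the root logger
    var rpcLog = log.getLogger("wsrpc");
    rpcLog.setLevel("info");
    rpcLog.info("separate level control per logger");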

View file

@ -0,0 +1,393 @@
(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() :
typeof define === 'function' && define.amd ? define(factory) :
(global = global || self, global.WSRPC = factory());
}(this, function () { 'use strict';
function _classCallCheck(instance, Constructor) {
if (!(instance instanceof Constructor)) {
throw new TypeError("Cannot call a class as a function");
}
}
var Deferred = function Deferred() {
_classCallCheck(this, Deferred);
var self = this;
self.resolve = null;
self.reject = null;
self.done = false;
function wrapper(func) {
return function () {
if (self.done) throw new Error('Promise already done');
self.done = true;
return func.apply(this, arguments);
};
}
self.promise = new Promise(function (resolve, reject) {
self.resolve = wrapper(resolve);
self.reject = wrapper(reject);
});
self.promise.isPending = function () {
return !self.done;
};
return self;
};
function logGroup(group, level, args) {
console.group(group);
console[level].apply(this, args);
console.groupEnd();
}
function log() {
if (!WSRPC.DEBUG) return;
logGroup('WSRPC.DEBUG', 'trace', arguments);
}
function trace(msg) {
if (!WSRPC.TRACE) return;
var payload = msg;
if ('data' in msg) payload = JSON.parse(msg.data);
logGroup("WSRPC.TRACE", 'trace', [payload]);
}
function getAbsoluteWsUrl(url) {
if (/^\w+:\/\//.test(url)) return url;
if (typeof window == 'undefined' && window.location.host.length < 1) throw new Error("Can not construct absolute URL from ".concat(window.location));
var scheme = window.location.protocol === "https:" ? "wss:" : "ws:";
var port = window.location.port === '' ? ":".concat(window.location.port) : '';
var host = window.location.host;
var path = url.replace(/^\/+/gm, '');
return "".concat(scheme, "//").concat(host).concat(port, "/").concat(path);
}
var readyState = Object.freeze({
0: 'CONNECTING',
1: 'OPEN',
2: 'CLOSING',
3: 'CLOSED'
});
var WSRPC = function WSRPC(URL) {
var reconnectTimeout = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : 1000;
_classCallCheck(this, WSRPC);
var self = this;
URL = getAbsoluteWsUrl(URL);
self.id = 1;
self.eventId = 0;
self.socketStarted = false;
self.eventStore = {
onconnect: {},
onerror: {},
onclose: {},
onchange: {}
};
self.connectionNumber = 0;
self.oneTimeEventStore = {
onconnect: [],
onerror: [],
onclose: [],
onchange: []
};
self.callQueue = [];
function createSocket() {
var ws = new WebSocket(URL);
var rejectQueue = function rejectQueue() {
self.connectionNumber++; // rejects incoming calls
var deferred; //reject all pending calls
while (0 < self.callQueue.length) {
var callObj = self.callQueue.shift();
deferred = self.store[callObj.id];
delete self.store[callObj.id];
if (deferred && deferred.promise.isPending()) {
deferred.reject('WebSocket error occurred');
}
} // reject all from the store
for (var key in self.store) {
if (!self.store.hasOwnProperty(key)) continue;
deferred = self.store[key];
if (deferred && deferred.promise.isPending()) {
deferred.reject('WebSocket error occurred');
}
}
};
function reconnect(callEvents) {
setTimeout(function () {
try {
self.socket = createSocket();
self.id = 1;
} catch (exc) {
callEvents('onerror', exc);
delete self.socket;
console.error(exc);
}
}, reconnectTimeout);
}
ws.onclose = function (err) {
log('ONCLOSE CALLED', 'STATE', self.public.state());
trace(err);
for (var serial in self.store) {
if (!self.store.hasOwnProperty(serial)) continue;
if (self.store[serial].hasOwnProperty('reject')) {
self.store[serial].reject('Connection closed');
}
}
rejectQueue();
callEvents('onclose', err);
callEvents('onchange', err);
reconnect(callEvents);
};
ws.onerror = function (err) {
log('ONERROR CALLED', 'STATE', self.public.state());
trace(err);
rejectQueue();
callEvents('onerror', err);
callEvents('onchange', err);
log('WebSocket has been closed by error: ', err);
};
function tryCallEvent(func, event) {
try {
return func(event);
} catch (e) {
if (e.hasOwnProperty('stack')) {
log(e.stack);
} else {
log('Event function', func, 'raised unknown error:', e);
}
console.error(e);
}
}
function callEvents(evName, event) {
while (0 < self.oneTimeEventStore[evName].length) {
var deferred = self.oneTimeEventStore[evName].shift();
if (deferred.hasOwnProperty('resolve') && deferred.promise.isPending()) deferred.resolve();
}
for (var i in self.eventStore[evName]) {
if (!self.eventStore[evName].hasOwnProperty(i)) continue;
var cur = self.eventStore[evName][i];
tryCallEvent(cur, event);
}
}
ws.onopen = function (ev) {
log('ONOPEN CALLED', 'STATE', self.public.state());
trace(ev);
while (0 < self.callQueue.length) {
// noinspection JSUnresolvedFunction
self.socket.send(JSON.stringify(self.callQueue.shift(), 0, 1));
}
callEvents('onconnect', ev);
callEvents('onchange', ev);
};
function handleCall(self, data) {
if (!self.routes.hasOwnProperty(data.method)) throw new Error('Route not found');
var connectionNumber = self.connectionNumber;
var deferred = new Deferred();
deferred.promise.then(function (result) {
if (connectionNumber !== self.connectionNumber) return;
self.socket.send(JSON.stringify({
id: data.id,
result: result
}));
}, function (error) {
if (connectionNumber !== self.connectionNumber) return;
self.socket.send(JSON.stringify({
id: data.id,
error: error
}));
});
var func = self.routes[data.method];
if (self.asyncRoutes[data.method]) return func.apply(deferred, [data.params]);
function badPromise() {
throw new Error("You should register route with async flag.");
}
var promiseMock = {
resolve: badPromise,
reject: badPromise
};
try {
deferred.resolve(func.apply(promiseMock, [data.params]));
} catch (e) {
deferred.reject(e);
console.error(e);
}
}
function handleError(self, data) {
if (!self.store.hasOwnProperty(data.id)) return log('Unknown callback');
var deferred = self.store[data.id];
if (typeof deferred === 'undefined') return log('Confirmation without handler');
delete self.store[data.id];
log('REJECTING', data.error);
deferred.reject(data.error);
}
function handleResult(self, data) {
var deferred = self.store[data.id];
if (typeof deferred === 'undefined') return log('Confirmation without handler');
delete self.store[data.id];
if (data.hasOwnProperty('result')) {
return deferred.resolve(data.result);
}
return deferred.reject(data.error);
}
ws.onmessage = function (message) {
log('ONMESSAGE CALLED', 'STATE', self.public.state());
trace(message);
if (message.type !== 'message') return;
var data;
try {
data = JSON.parse(message.data);
log(data);
if (data.hasOwnProperty('method')) {
return handleCall(self, data);
} else if (data.hasOwnProperty('error') && data.error === null) {
return handleError(self, data);
} else {
return handleResult(self, data);
}
} catch (exception) {
var err = {
error: exception.message,
result: null,
id: data ? data.id : null
};
self.socket.send(JSON.stringify(err));
console.error(exception);
}
};
return ws;
}
function makeCall(func, args, params) {
self.id += 2;
var deferred = new Deferred();
var callObj = Object.freeze({
id: self.id,
method: func,
params: args
});
var state = self.public.state();
if (state === 'OPEN') {
self.store[self.id] = deferred;
self.socket.send(JSON.stringify(callObj));
} else if (state === 'CONNECTING') {
log('SOCKET IS', state);
self.store[self.id] = deferred;
self.callQueue.push(callObj);
} else {
log('SOCKET IS', state);
if (params && params['noWait']) {
deferred.reject("Socket is: ".concat(state));
} else {
self.store[self.id] = deferred;
self.callQueue.push(callObj);
}
}
return deferred.promise;
}
self.asyncRoutes = {};
self.routes = {};
self.store = {};
self.public = Object.freeze({
call: function call(func, args, params) {
return makeCall(func, args, params);
},
addRoute: function addRoute(route, callback, isAsync) {
self.asyncRoutes[route] = isAsync || false;
self.routes[route] = callback;
},
deleteRoute: function deleteRoute(route) {
delete self.asyncRoutes[route];
return delete self.routes[route];
},
addEventListener: function addEventListener(event, func) {
var eventId = self.eventId++;
self.eventStore[event][eventId] = func;
return eventId;
},
removeEventListener: function removeEventListener(event, index) {
if (self.eventStore[event].hasOwnProperty(index)) {
delete self.eventStore[event][index];
return true;
} else {
return false;
}
},
onEvent: function onEvent(event) {
var deferred = new Deferred();
self.oneTimeEventStore[event].push(deferred);
return deferred.promise;
},
destroy: function destroy() {
return self.socket.close();
},
state: function state() {
return readyState[this.stateCode()];
},
stateCode: function stateCode() {
if (self.socketStarted && self.socket) return self.socket.readyState;
return 3;
},
connect: function connect() {
self.socketStarted = true;
self.socket = createSocket();
}
});
self.public.addRoute('log', function (argsObj) {
//console.info("Websocket sent: ".concat(argsObj));
});
self.public.addRoute('ping', function (data) {
return data;
});
return self.public;
};
WSRPC.DEBUG = false;
WSRPC.TRACE = false;
return WSRPC;
}));
//# sourceMappingURL=wsrpc.js.map
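For orientation, a minimal usage sketch of the public API object returned above (connect, addRoute, addEventListener, call); the route names are placeholders, not routes this commit registers:

    var rpc = new WSRPC("ws://localhost:8099/ws/", 5000);
    // expose a route the server may call; a synchronous route returns its result directly
    rpc.addRoute("example.echo", function (params) {
        return params;
    });
    // once connected, call a route registered on the server side
    rpc.addEventListener("onconnect", function () {
        rpc.call("example.server_route", {foo: "bar"}).then(function (result) {
            console.log("server replied:", result);
        }, function (error) {
            console.error("call failed:", error);
        });
    });
    rpc.connect();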

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,347 @@
/*jslint vars: true, plusplus: true, devel: true, nomen: true, regexp: true,
indent: 4, maxerr: 50 */
/*global $, window, location, CSInterface, SystemPath, themeManager*/
var csInterface = new CSInterface();
log.warn("script start");
WSRPC.DEBUG = false;
WSRPC.TRACE = false;
// get websocket server url from environment value
async function startUp(url){
var promise = runEvalScript("getEnv('" + url + "')");
var res = await promise;
log.warn("res: " + res);
promise = runEvalScript("getEnv('OPENPYPE_DEBUG')");
var debug = await promise;
log.warn("debug: " + debug);
if (debug && debug.toString() == '3'){
WSRPC.DEBUG = true;
WSRPC.TRACE = true;
}
// run rest only after resolved promise
main(res);
}
function get_extension_version(){
/** Returns version number from extension manifest.xml **/
log.debug("get_extension_version")
var path = csInterface.getSystemPath(SystemPath.EXTENSION);
log.debug("extension path " + path);
var result = window.cep.fs.readFile(path + "/CSXS/manifest.xml");
var version = undefined;
if(result.err === 0){
if (window.DOMParser) {
const parser = new DOMParser();
const xmlDoc = parser.parseFromString(result.data.toString(),
'text/xml');
const children = xmlDoc.children;
for (let i = 0; i < children.length; i++) {
if (children[i] &&
children[i].getAttribute('ExtensionBundleVersion')) {
version =
children[i].getAttribute('ExtensionBundleVersion');
}
}
}
}
return '{"result":"' + version + '"}'
}
function main(websocket_url){
// creates connection to 'websocket_url', registers routes
var default_url = 'ws://localhost:8099/ws/';
if (websocket_url == ''){
websocket_url = default_url;
}
RPC = new WSRPC(websocket_url, 5000); // deliberately global - the index.html button handlers call RPC.call()
RPC.connect();
log.warn("connected");
RPC.addRoute('AfterEffects.open', function (data) {
log.warn('Server called client route "open":', data);
var escapedPath = EscapeStringForJSX(data.path);
return runEvalScript("fileOpen('" + escapedPath +"')")
.then(function(result){
log.warn("open: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_metadata', function (data) {
log.warn('Server called client route "get_metadata":', data);
return runEvalScript("getMetadata()")
.then(function(result){
log.warn("getMetadata: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_active_document_name', function (data) {
log.warn('Server called client route ' +
'"get_active_document_name":', data);
return runEvalScript("getActiveDocumentName()")
.then(function(result){
log.warn("get_active_document_name: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_active_document_full_name', function (data){
log.warn('Server called client route ' +
'"get_active_document_full_name":', data);
return runEvalScript("getActiveDocumentFullName()")
.then(function(result){
log.warn("get_active_document_full_name: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_items', function (data) {
log.warn('Server called client route "get_items":', data);
return runEvalScript("getItems(" + data.comps + "," +
data.folders + "," +
data.footages + ")")
.then(function(result){
log.warn("get_items: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_selected_items', function (data) {
log.warn('Server called client route "get_selected_items":', data);
return runEvalScript("getSelectedItems(" + data.comps + "," +
data.folders + "," +
data.footages + ")")
.then(function(result){
log.warn("get_items: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.import_file', function (data) {
log.warn('Server called client route "import_file":', data);
var escapedPath = EscapeStringForJSX(data.path);
return runEvalScript("importFile('" + escapedPath +"', " +
"'" + data.item_name + "'," +
"'" + JSON.stringify(
data.import_options) + "')")
.then(function(result){
log.warn("importFile: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.replace_item', function (data) {
log.warn('Server called client route "replace_item":', data);
var escapedPath = EscapeStringForJSX(data.path);
return runEvalScript("replaceItem(" + data.item_id + ", " +
"'" + escapedPath + "', " +
"'" + data.item_name + "')")
.then(function(result){
log.warn("replaceItem: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.rename_item', function (data) {
log.warn('Server called client route "rename_item":', data);
return runEvalScript("renameItem(" + data.item_id + ", " +
"'" + data.item_name + "')")
.then(function(result){
log.warn("renameItem: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.delete_item', function (data) {
log.warn('Server called client route "delete_item":', data);
return runEvalScript("deleteItem(" + data.item_id + ")")
.then(function(result){
log.warn("deleteItem: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.imprint', function (data) {
log.warn('Server called client route "imprint":', data);
var escaped = data.payload.replace(/\n/g, "\\n");
return runEvalScript("imprint('" + escaped +"')")
.then(function(result){
log.warn("imprint: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.set_label_color', function (data) {
log.warn('Server called client route "set_label_color":', data);
return runEvalScript("setLabelColor(" + data.item_id + "," +
data.color_idx + ")")
.then(function(result){
log.warn("imprint: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_work_area', function (data) {
log.warn('Server called client route "get_work_area":', data);
return runEvalScript("getWorkArea(" + data.item_id + ")")
.then(function(result){
log.warn("getWorkArea: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.set_work_area', function (data) {
log.warn('Server called client route "set_work_area":', data);
return runEvalScript("setWorkArea(" + data.item_id + ',' +
data.start + ',' +
data.duration + ',' +
data.frame_rate + ")")
.then(function(result){
log.warn("getWorkArea: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.saveAs', function (data) {
log.warn('Server called client route "saveAs":', data);
var escapedPath = EscapeStringForJSX(data.image_path);
return runEvalScript("saveAs('" + escapedPath + "', " +
data.as_copy + ")")
.then(function(result){
log.warn("saveAs: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.save', function (data) {
log.warn('Server called client route "save":', data);
return runEvalScript("save()")
.then(function(result){
log.warn("save: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_render_info', function (data) {
log.warn('Server called client route "get_render_info":', data);
return runEvalScript("getRenderInfo()")
.then(function(result){
log.warn("get_render_info: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_audio_url', function (data) {
log.warn('Server called client route "get_audio_url":', data);
return runEvalScript("getAudioUrlForComp(" + data.item_id + ")")
.then(function(result){
log.warn("getAudioUrlForComp: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.import_background', function (data) {
log.warn('Server called client route "import_background":', data);
return runEvalScript("importBackground(" + data.comp_id + ", " +
"'" + data.comp_name + "', " +
JSON.stringify(data.files) + ")")
.then(function(result){
log.warn("importBackground: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.reload_background', function (data) {
log.warn('Server called client route "reload_background":', data);
return runEvalScript("reloadBackground(" + data.comp_id + ", " +
"'" + data.comp_name + "', " +
JSON.stringify(data.files) + ")")
.then(function(result){
log.warn("reloadBackground: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.add_item_as_layer', function (data) {
log.warn('Server called client route "add_item_as_layer":', data);
return runEvalScript("addItemAsLayerToComp(" + data.comp_id + ", " +
data.item_id + "," +
" null )")
.then(function(result){
log.warn("addItemAsLayerToComp: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.render', function (data) {
log.warn('Server called client route "render":', data);
var escapedPath = EscapeStringForJSX(data.folder_url);
return runEvalScript("render('" + escapedPath +"')")
.then(function(result){
log.warn("render: " + result);
return result;
});
});
RPC.addRoute('AfterEffects.get_extension_version', function (data) {
log.warn('Server called client route "get_extension_version":', data);
return get_extension_version();
});
RPC.addRoute('AfterEffects.close', function (data) {
log.warn('Server called client route "close":', data);
return runEvalScript("close()");
});
}
/** main entry point **/
startUp("WEBSOCKET_URL");
(function () {
'use strict';
var csInterface = new CSInterface();
function init() {
themeManager.init();
$("#btn_test").click(function () {
csInterface.evalScript('sayHello()');
});
}
init();
}());
function EscapeStringForJSX(str){
// Replaces:
// \ with \\
// ' with \'
// " with \"
// See: https://stackoverflow.com/a/3967927/5285364
return str.replace(/\\/g, '\\\\').replace(/'/g, "\\'").replace(/"/g,'\\"');
}
function runEvalScript(script) {
// csInterface.evalScript is asynchronous; wrap it in a Promise so
// callers can await the response from the JSX side
return new Promise(function(resolve, reject){
csInterface.evalScript(script, resolve);
});
}
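A sketch of how one more client route could be wired, following the addRoute pattern used in main() above; both the route name 'AfterEffects.get_project_dirty' and the ExtendScript helper isProjectDirty() are hypothetical, not provided by this commit:

    // would live inside main(), next to the existing RPC.addRoute calls
    RPC.addRoute('AfterEffects.get_project_dirty', function (data) {
        log.warn('Server called client route "get_project_dirty":', data);
        // runEvalScript resolves with whatever the JSX helper returns
        return runEvalScript("isProjectDirty()")
            .then(function (result) {
                log.warn("isProjectDirty: " + result);
                return result;
            });
    });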

View file

@ -0,0 +1,128 @@
/*jslint vars: true, plusplus: true, devel: true, nomen: true, regexp: true, indent: 4, maxerr: 50 */
/*global window, document, CSInterface*/
/*
Responsible for overwriting CSS at runtime according to CC app
settings as defined by the end user.
*/
var themeManager = (function () {
'use strict';
/**
* Convert the Color object to string in hexadecimal format;
*/
function toHex(color, delta) {
function computeValue(value, delta) {
var computedValue = !isNaN(delta) ? value + delta : value;
if (computedValue < 0) {
computedValue = 0;
} else if (computedValue > 255) {
computedValue = 255;
}
computedValue = Math.floor(computedValue);
computedValue = computedValue.toString(16);
return computedValue.length === 1 ? "0" + computedValue : computedValue;
}
var hex = "";
if (color) {
hex = computeValue(color.red, delta) + computeValue(color.green, delta) + computeValue(color.blue, delta);
}
return hex;
}
function reverseColor(color, delta) {
return toHex({
red: Math.abs(255 - color.red),
green: Math.abs(255 - color.green),
blue: Math.abs(255 - color.blue)
},
delta);
}
function addRule(stylesheetId, selector, rule) {
var stylesheet = document.getElementById(stylesheetId);
if (stylesheet) {
stylesheet = stylesheet.sheet;
if (stylesheet.addRule) {
stylesheet.addRule(selector, rule);
} else if (stylesheet.insertRule) {
stylesheet.insertRule(selector + ' { ' + rule + ' }', stylesheet.cssRules.length);
}
}
}
/**
* Update the theme with the AppSkinInfo retrieved from the host product.
*/
function updateThemeWithAppSkinInfo(appSkinInfo) {
var panelBgColor = appSkinInfo.panelBackgroundColor.color;
var bgdColor = toHex(panelBgColor);
var darkBgdColor = toHex(panelBgColor, 20);
var fontColor = "F0F0F0";
if (panelBgColor.red > 122) {
fontColor = "000000";
}
var lightBgdColor = toHex(panelBgColor, -100);
var styleId = "hostStyle";
addRule(styleId, ".hostElt", "background-color:" + "#" + bgdColor);
addRule(styleId, ".hostElt", "font-size:" + appSkinInfo.baseFontSize + "px;");
addRule(styleId, ".hostElt", "font-family:" + appSkinInfo.baseFontFamily);
addRule(styleId, ".hostElt", "color:" + "#" + fontColor);
addRule(styleId, ".hostBgd", "background-color:" + "#" + bgdColor);
addRule(styleId, ".hostBgdDark", "background-color: " + "#" + darkBgdColor);
addRule(styleId, ".hostBgdLight", "background-color: " + "#" + lightBgdColor);
addRule(styleId, ".hostFontSize", "font-size:" + appSkinInfo.baseFontSize + "px;");
addRule(styleId, ".hostFontFamily", "font-family:" + appSkinInfo.baseFontFamily);
addRule(styleId, ".hostFontColor", "color:" + "#" + fontColor);
addRule(styleId, ".hostFont", "font-size:" + appSkinInfo.baseFontSize + "px;");
addRule(styleId, ".hostFont", "font-family:" + appSkinInfo.baseFontFamily);
addRule(styleId, ".hostFont", "color:" + "#" + fontColor);
addRule(styleId, ".hostButton", "background-color:" + "#" + darkBgdColor);
addRule(styleId, ".hostButton:hover", "background-color:" + "#" + bgdColor);
addRule(styleId, ".hostButton:active", "background-color:" + "#" + darkBgdColor);
addRule(styleId, ".hostButton", "border-color: " + "#" + lightBgdColor);
}
function onAppThemeColorChanged(event) {
var skinInfo = JSON.parse(window.__adobe_cep__.getHostEnvironment()).appSkinInfo;
updateThemeWithAppSkinInfo(skinInfo);
}
function init() {
var csInterface = new CSInterface();
updateThemeWithAppSkinInfo(csInterface.hostEnvironment.appSkinInfo);
csInterface.addEventListener(CSInterface.THEME_COLOR_CHANGED_EVENT, onAppThemeColorChanged);
}
return {
init: init
};
}());
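A worked example of the colour math above, using an illustrative mid-grey panel colour (not values from this commit):

    toHex({red: 83, green: 83, blue: 83});       // "535353"
    toHex({red: 83, green: 83, blue: 83}, 20);   // "676767" - the darkBgdColor used for buttons
    toHex({red: 83, green: 83, blue: 83}, -100); // "000000" - each channel clamps at 0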

View file

@ -0,0 +1,723 @@
/*jslint vars: true, plusplus: true, devel: true, nomen: true, regexp: true,
indent: 4, maxerr: 50 */
/*global $, Folder*/
#include "../js/libs/json.js";
/* All public API function should return JSON! */
app.preferences.savePrefAsBool("General Section", "Show Welcome Screen", false);
if(!Array.prototype.indexOf) {
Array.prototype.indexOf = function ( item ) {
var index = 0, length = this.length;
for ( ; index < length; index++ ) {
if ( this[index] === item )
return index;
}
return -1;
};
}
function sayHello(){
alert("hello from ExtendScript");
}
function getEnv(variable){
return $.getenv(variable);
}
function getMetadata(){
/**
* Returns payload in 'Label' field of project's metadata
*
**/
if (ExternalObject.AdobeXMPScript === undefined){
ExternalObject.AdobeXMPScript =
new ExternalObject('lib:AdobeXMPScript');
}
var proj = app.project;
var meta = new XMPMeta(app.project.xmpPacket);
var schemaNS = XMPMeta.getNamespaceURI("xmp");
var label = "xmp:Label";
if (meta.doesPropertyExist(schemaNS, label)){
var prop = meta.getProperty(schemaNS, label);
return prop.value;
}
return _prepareSingleValue([]);
}
function imprint(payload){
/**
* Stores payload in 'Label' field of project's metadata
*
* Args:
* payload (string): json content
*/
if (ExternalObject.AdobeXMPScript === undefined){
ExternalObject.AdobeXMPScript =
new ExternalObject('lib:AdobeXMPScript');
}
var proj = app.project;
var meta = new XMPMeta(app.project.xmpPacket);
var schemaNS = XMPMeta.getNamespaceURI("xmp");
var label = "xmp:Label";
meta.setProperty(schemaNS, label, payload);
app.project.xmpPacket = meta.serialize();
}
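// Illustrative round trip of getMetadata/imprint above (not part of the
// committed file): whatever JSON string the Python side stores with
// imprint() is returned verbatim by the next getMetadata() call, e.g.
//   imprint('{"members": [{"id": "example"}]}');
//   getMetadata();   // -> '{"members": [{"id": "example"}]}'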
function fileOpen(path){
/**
* Opens (project) file on 'path'
*/
fp = new File(path);
return _prepareSingleValue(app.open(fp))
}
function getActiveDocumentName(){
/**
* Returns file name of active document
* */
var file = app.project.file;
if (file){
return _prepareSingleValue(file.name)
}
return _prepareError("No file open currently");
}
function getActiveDocumentFullName(){
/**
* Returns absolute path to current project
* */
var file = app.project.file;
if (file){
var f = new File(file.fullName);
var path = f.fsName;
f.close();
return _prepareSingleValue(path)
}
return _prepareError("No file open currently");
}
function getItems(comps, folders, footages){
/**
* Returns JSON representation of project items
* (compositions, folders and/or footage, depending on flags).
*
* Args:
* comps (bool): return compositions
* folders (bool): return folders
* footages (bool): return FootageItems
* Returns:
* (list) of JSON items
*/
var items = []
for (i = 1; i <= app.project.items.length; ++i){
var item = app.project.items[i];
if (!item){
continue;
}
var ret = _getItem(item, comps, folders, footages);
if (ret){
items.push(ret);
}
}
return '[' + items.join() + ']';
}
function getSelectedItems(comps, folders, footages){
/**
* Returns list of selected items from Project menu
*
* Args:
* comps (bool): return selected compositions
* folders (bool): return folders
* footages (bool): return FootageItem
* Returns:
* (list) of JSON items
*/
var items = []
for (i = 0; i < app.project.selection.length; ++i){
var item = app.project.selection[i];
if (!item){
continue;
}
var ret = _getItem(item, comps, folders, footages);
if (ret){
items.push(ret);
}
}
return '[' + items.join() + ']';
}
function _getItem(item, comps, folders, footages){
/**
* Auxiliary function; project items and selections
* are indexed in different ways.
* TODO: refactor
*/
var item_type = '';
if (item instanceof FolderItem){
item_type = 'folder';
if (!folders){
return "{}";
}
}
if (item instanceof FootageItem){
item_type = 'footage';
if (!footages){
return "{}";
}
}
if (item instanceof CompItem){
item_type = 'comp';
if (!comps){
return "{}";
}
}
var item = {"name": item.name,
"id": item.id,
"type": item_type};
return JSON.stringify(item);
}
function importFile(path, item_name, import_options){
/**
* Imports file (image tested for now) as a FootageItem.
* Creates new composition
*
* Args:
* path (string): absolute path to image file
* item_name (string): label for composition
* Returns:
* JSON {name, id}
*/
var comp;
var ret = {};
try{
import_options = JSON.parse(import_options);
} catch (e){
return _prepareError("Couldn't parse import options " + import_options);
}
app.beginUndoGroup("Import File");
fp = new File(path);
if (fp.exists){
try {
im_opt = new ImportOptions(fp);
importAsType = import_options["ImportAsType"];
if ('ImportAsType' in import_options){ // refactor
if (importAsType.indexOf('COMP') > 0){
im_opt.importAs = ImportAsType.COMP;
}
if (importAsType.indexOf('FOOTAGE') > 0){
im_opt.importAs = ImportAsType.FOOTAGE;
}
if (importAsType.indexOf('COMP_CROPPED_LAYERS') > 0){
im_opt.importAs = ImportAsType.COMP_CROPPED_LAYERS;
}
if (importAsType.indexOf('PROJECT') > 0){
im_opt.importAs = ImportAsType.PROJECT;
}
}
if ('sequence' in import_options){
im_opt.sequence = true;
}
comp = app.project.importFile(im_opt);
if (app.project.selection.length == 2 &&
app.project.selection[0] instanceof FolderItem){
comp.parentFolder = app.project.selection[0]
}
} catch (error) {
return _prepareError(error.toString() + " " + fp.fsName);
} finally {
fp.close();
}
}else{
return _prepareError("File " + path + " not found.");
}
if (comp){
comp.name = item_name;
comp.label = 9; // Green
ret = {"name": comp.name, "id": comp.id}
}
app.endUndoGroup();
return JSON.stringify(ret);
}
function setLabelColor(comp_id, color_idx){
/**
* Set label of item 'comp_id' to 'color_idx' color
* Args:
* comp_id (int): item id
* color_idx (int): 0-16 index from Label
*/
var item = app.project.itemByID(comp_id);
if (item){
item.label = color_idx;
}else{
return _prepareError("There is no composition with "+ comp_id);
}
}
function replaceItem(comp_id, path, item_name){
/**
* Replaces loaded file with new file and updates name
*
* Args:
* comp_id (int): id of composition, not an index!
* path (string): absolute path to new file
* item_name (string): new composition name
*/
app.beginUndoGroup("Replace File");
fp = new File(path);
if (!fp.exists){
return _prepareError("File " + path + " not found.");
}
var item = app.project.itemByID(comp_id);
if (item){
try{
if (isFileSequence(item)) {
item.replaceWithSequence(fp, false);
}else{
item.replace(fp);
}
item.name = item_name;
} catch (error) {
return _prepareError(error.toString() + path);
} finally {
fp.close();
}
}else{
return _prepareError("There is no composition with "+ comp_id);
}
app.endUndoGroup();
}
function renameItem(item_id, new_name){
/**
* Renames item with 'item_id' to 'new_name'
*
* Args:
* item_id (int): id to search item
* new_name (str)
*/
var item = app.project.itemByID(item_id);
if (item){
item.name = new_name;
}else{
return _prepareError("There is no composition with "+ comp_id);
}
}
function deleteItem(item_id){
/**
* Delete any 'item_id'
*
* Not restricted only to comp, it could delete
* any item with 'id'
*/
var item = app.project.itemByID(item_id);
if (item){
item.remove();
}else{
return _prepareError("There is no composition with "+ comp_id);
}
}
function getWorkArea(comp_id){
/**
* Returns information about the work area - the area that will be
* rendered. All calculations are done in OpenPype, which is
* easier to modify without redeploying the extension.
*
* Returns
* (dict)
*/
var item = app.project.itemByID(comp_id);
if (item){
return JSON.stringify({
"workAreaStart": item.displayStartFrame,
"workAreaDuration": item.duration,
"frameRate": item.frameRate});
}else{
return _prepareError("There is no composition with "+ comp_id);
}
}
function setWorkArea(comp_id, workAreaStart, workAreaDuration, frameRate){
/**
* Sets work area info from outside (from Ftrack via OpenPype)
*/
var item = app.project.itemByID(comp_id);
if (item){
item.displayStartTime = workAreaStart;
item.duration = workAreaDuration;
item.frameRate = frameRate;
}else{
return _prepareError("There is no composition with "+ comp_id);
}
}
function save(){
/**
* Saves current project
*/
app.project.save(); //TODO path is wrong, File instead
}
function saveAs(path){
/**
* Saves current project as 'path'
* */
app.project.save(fp = new File(path));
}
function getRenderInfo(){
/***
Get info from render queue.
Currently pulls only the output file name; the extension and
whether it is a sequence are parsed on the Python side.
**/
try{
var render_item = app.project.renderQueue.item(1);
if (render_item.status == RQItemStatus.DONE){
render_item.duplicate(); // create new, cannot change status if DONE
render_item.remove(); // remove existing to limit duplications
render_item = app.project.renderQueue.item(1);
}
render_item.render = true; // always set render queue to render
var item = render_item.outputModule(1);
} catch (error) {
return _prepareError("There is no render queue, create one");
}
var file_url = item.file.toString();
return JSON.stringify({
"file_name": file_url
})
}
function getAudioUrlForComp(comp_id){
/**
* Searches composition for audio layer
*
* Only single AVLayer is expected!
* Used for collecting Audio
*
* Args:
* comp_id (int): id of composition
* Return:
* (str) with url to audio content
*/
var item = app.project.itemByID(comp_id);
if (item){
for (i = 1; i <= item.numLayers; ++i){
var layer = item.layers[i];
if (layer instanceof AVLayer){
return layer.source.file.fsName.toString();
}
}
}else{
return _prepareError("There is no composition with "+ comp_id);
}
}
function addItemAsLayerToComp(comp_id, item_id, found_comp){
/**
* Adds already imported FootageItem ('item_id') as a new
* layer to composition ('comp_id').
*
* Args:
* comp_id (int): id of target composition
* item_id (int): FootageItem.id
* found_comp (CompItem, optional): to limit querying if
* comp was already found previously
*/
var comp = found_comp || app.project.itemByID(comp_id);
if (comp){
item = app.project.itemByID(item_id);
if (item){
comp.layers.add(item);
}else{
return _prepareError("There is no item with " + item_id);
}
}else{
return _prepareError("There is no composition with "+ comp_id);
}
}
function importBackground(comp_id, composition_name, files_to_import){
/**
* Imports background images into an existing or new composition.
*
* If comp_id is not provided, a new composition is created; basic
* values (width, height, frame ratio) are taken from the first
* imported image.
*
* Args:
* comp_id (int): id of existing composition (null if new)
* composition_name (str): used when new composition
* files_to_import (list): list of absolute paths to import and
* add as layers
*
* Returns:
* (str): json representation (id, name, members)
*/
var comp;
var folder;
var imported_ids = [];
if (comp_id){
comp = app.project.itemByID(comp_id);
folder = comp.parentFolder;
}else{
if (app.project.selection.length > 1){
return _prepareError(
"Too many items selected, select only target composition!");
}else{
selected_item = app.project.activeItem;
if (selected_item instanceof Folder){
comp = selected_item;
folder = selected_item;
}
}
}
if (files_to_import){
for (i = 0; i < files_to_import.length; ++i){
item = _importItem(files_to_import[i]);
if (!item){
return _prepareError(
"No item for " + item_json["id"] +
". Import background failed.")
}
if (!comp){
folder = app.project.items.addFolder(composition_name);
imported_ids.push(folder.id);
comp = app.project.items.addComp(composition_name, item.width,
item.height, item.pixelAspect,
1, 26.7); // hardcode defaults
imported_ids.push(comp.id);
comp.parentFolder = folder;
}
imported_ids.push(item.id)
item.parentFolder = folder;
addItemAsLayerToComp(comp.id, item.id, comp);
}
}
var item = {"name": comp.name,
"id": folder.id,
"members": imported_ids};
return JSON.stringify(item);
}
function reloadBackground(comp_id, composition_name, files_to_import){
/**
* Reloads existing composition.
*
* The encompassing folder and composition are renamed, layers present
* in the new background are replaced or newly imported and obsolete
* layers are removed.
*
* Args:
* comp_id (int): id of existing composition
* composition_name (str): used when new composition
* files_to_import (list): list of absolute paths to import and
* add as layers
*
* Returns:
* (str): json representation (id, name, members)
*
*/
var imported_ids = []; // keep track of members of composition
comp = app.project.itemByID(comp_id);
folder = comp.parentFolder;
if (folder){
renameItem(folder.id, composition_name);
imported_ids.push(folder.id);
}
if (comp){
renameItem(comp.id, composition_name);
imported_ids.push(comp.id);
}
var existing_layer_names = [];
var existing_layer_ids = []; // because ExtendScript doesn't have keys()
for (i = 1; i <= folder.items.length; ++i){
layer = folder.items[i];
// because comp.layers[i] doesn't have 'id' accessible
if (layer instanceof CompItem){
continue;
}
existing_layer_names.push(layer.name);
existing_layer_ids.push(layer.id);
}
var new_filenames = [];
if (files_to_import){
for (i = 0; i < files_to_import.length; ++i){
file_name = _get_file_name(files_to_import[i]);
new_filenames.push(file_name);
idx = existing_layer_names.indexOf(file_name);
if (idx >= 0){ // update
var layer_id = existing_layer_ids[idx];
replaceItem(layer_id, files_to_import[i], file_name);
imported_ids.push(layer_id);
}else{ // new layer
item = _importItem(files_to_import[i]);
if (!item){
return _prepareError(
"No item for " + files_to_import[i] +
". Reload background failed.");
}
imported_ids.push(item.id);
item.parentFolder = folder;
addItemAsLayerToComp(comp.id, item.id, comp);
}
}
}
_delete_obsolete_items(folder, new_filenames);
var item = {"name": comp.name,
"id": folder.id,
"members": imported_ids};
return JSON.stringify(item);
}
function _get_file_name(file_url){
/**
* Returns file name without extension from 'file_url'
*
* Args:
* file_url (str): full absolute url
* Returns:
* (str)
*/
fp = new File(file_url);
file_name = fp.name.substring(0, fp.name.lastIndexOf("."));
return file_name;
}
function _delete_obsolete_items(folder, new_filenames){
/***
* Goes through 'folder' and removes layers not in new
* background
*
* Args:
* folder (FolderItem)
* new_filenames (array): list of layer names in new bg
*/
// remove items in old, but not in new
var delete_ids = [];
for (i = 1; i <= folder.items.length; ++i){
layer = folder.items[i];
// because comp.layers[i] doesn't have 'id' accessible
if (layer instanceof CompItem){
continue;
}
if (new_filenames.indexOf(layer.name) < 0){
delete_ids.push(layer.id);
}
}
for (i = 0; i < delete_ids.length; ++i){
deleteItem(delete_ids[i]);
}
}
function _importItem(file_url){
/**
* Imports 'file_url' as new FootageItem
*
* Args:
* file_url (str): file url with content
* Returns:
* (FootageItem)
*/
file_name = _get_file_name(file_url);
//importFile prepared previously to return json
item_json = importFile(file_url, file_name, JSON.stringify({"ImportAsType":"FOOTAGE"}));
item_json = JSON.parse(item_json);
item = app.project.itemByID(item_json["id"]);
return item;
}
function isFileSequence (item){
/**
* Check that item is a recognizable sequence
*/
if (item instanceof FootageItem && item.mainSource instanceof FileSource && !(item.mainSource.isStill) && item.hasVideo){
var extname = item.mainSource.file.fsName.split('.').pop();
return extname.match(new RegExp("(ai|bmp|bw|cin|cr2|crw|dcr|dng|dib|dpx|eps|erf|exr|gif|hdr|ico|icb|iff|jpe|jpeg|jpg|mos|mrw|nef|orf|pbm|pef|pct|pcx|pdf|pic|pict|png|ps|psd|pxr|raf|raw|rgb|rgbe|rla|rle|rpf|sgi|srf|tdi|tga|tif|tiff|vda|vst|x3f|xyze)", "i")) !== null;
}
return false;
}
function render(target_folder){
var out_dir = new Folder(target_folder).fsName;
for (i = 1; i <= app.project.renderQueue.numItems; ++i){
var render_item = app.project.renderQueue.item(i);
var om1 = app.project.renderQueue.item(i).outputModule(1);
var file_name = File.decode( om1.file.name ).replace('℗', ''); // Name contains special character, space?
var omItem1_settable_str = app.project.renderQueue.item(i).outputModule(1).getSettings( GetSettingsFormat.STRING_SETTABLE );
if (render_item.status == RQItemStatus.DONE){
render_item.duplicate();
render_item.remove();
continue;
}
var targetFolder = new Folder(target_folder);
if (!targetFolder.exists) {
targetFolder.create();
}
om1.file = new File(targetFolder.fsName + '/' + file_name);
}
app.project.renderQueue.render();
}
function close(){
app.project.close(CloseOptions.DO_NOT_SAVE_CHANGES);
app.quit();
}
function _prepareSingleValue(value){
return JSON.stringify({"result": value})
}
function _prepareError(error_msg){
return JSON.stringify({"error": error_msg})
}

View file

@ -0,0 +1,319 @@
import os
import subprocess
import collections
import logging
import asyncio
import functools
from wsrpc_aiohttp import (
WebSocketRoute,
WebSocketAsync
)
from Qt import QtCore
from openpype.tools.utils import host_tools
from avalon import api
from avalon.tools.webserver.app import WebServerTool
from .ws_stub import AfterEffectsServerStub
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
class ConnectionNotEstablishedYet(Exception):
pass
def get_stub():
"""
Convenience function to get server RPC stub to call methods directed
for host (AfterEffects).
It expects an already created connection, started from the client.
Currently created when the panel is opened (AE: Window>Extensions>Avalon)
:return: <AfterEffectsServerStub> where functions could be called from
"""
ae_stub = AfterEffectsServerStub()
if not ae_stub.client:
raise ConnectionNotEstablishedYet("Connection is not created yet")
return ae_stub
def stub():
return get_stub()
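# Hypothetical usage sketch: callers are expected to guard stub calls until
# the extension panel connects (everything referenced here is defined above).
def _example_save_if_connected():
    """Illustrative only: save the workfile once the AE extension connects."""
    try:
        ae_stub = get_stub()
    except ConnectionNotEstablishedYet:
        log.warning("AfterEffects extension is not connected yet")
        return
    # any AfterEffectsServerStub method can be called once connected
    ae_stub.save()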
def show_tool_by_name(tool_name):
kwargs = {}
if tool_name == "loader":
kwargs["use_context"] = True
host_tools.show_tool_by_name(tool_name, **kwargs)
class ProcessLauncher(QtCore.QObject):
route_name = "AfterEffects"
_main_thread_callbacks = collections.deque()
def __init__(self, subprocess_args):
self._subprocess_args = subprocess_args
self._log = None
super(ProcessLauncher, self).__init__()
# Keep track if launcher was already started
self._started = False
self._process = None
self._websocket_server = None
start_process_timer = QtCore.QTimer()
start_process_timer.setInterval(100)
loop_timer = QtCore.QTimer()
loop_timer.setInterval(200)
start_process_timer.timeout.connect(self._on_start_process_timer)
loop_timer.timeout.connect(self._on_loop_timer)
self._start_process_timer = start_process_timer
self._loop_timer = loop_timer
@property
def log(self):
if self._log is None:
from openpype.api import Logger
self._log = Logger.get_logger("{}-launcher".format(
self.route_name))
return self._log
@property
def websocket_server_is_running(self):
if self._websocket_server is not None:
return self._websocket_server.is_running
return False
@property
def is_process_running(self):
if self._process is not None:
return self._process.poll() is None
return False
@property
def is_host_connected(self):
"""Returns True if connected, False if app is not running at all."""
if not self.is_process_running:
return False
try:
_stub = get_stub()
if _stub:
return True
except Exception:
pass
return None
@classmethod
def execute_in_main_thread(cls, callback):
cls._main_thread_callbacks.append(callback)
def start(self):
if self._started:
return
self.log.info("Started launch logic of AfterEffects")
self._started = True
self._start_process_timer.start()
def exit(self):
""" Exit whole application. """
if self._start_process_timer.isActive():
self._start_process_timer.stop()
if self._loop_timer.isActive():
self._loop_timer.stop()
if self._websocket_server is not None:
self._websocket_server.stop()
if self._process:
self._process.kill()
self._process.wait()
QtCore.QCoreApplication.exit()
def _on_loop_timer(self):
# TODO find better way and catch errors
# Run only callbacks that are in queue at the moment
cls = self.__class__
for _ in range(len(cls._main_thread_callbacks)):
if cls._main_thread_callbacks:
callback = cls._main_thread_callbacks.popleft()
callback()
if not self.is_process_running:
self.log.info("Host process is not running. Closing")
self.exit()
elif not self.websocket_server_is_running:
self.log.info("Websocket server is not running. Closing")
self.exit()
def _on_start_process_timer(self):
# TODO add try except validations for each part in this method
# Start server as first thing
if self._websocket_server is None:
self._init_server()
return
# TODO add waiting time
# Wait for webserver
if not self.websocket_server_is_running:
return
# Start application process
if self._process is None:
self._start_process()
self.log.info("Waiting for host to connect")
return
# TODO add waiting time
# Wait until host is connected
if self.is_host_connected:
self._start_process_timer.stop()
self._loop_timer.start()
elif (
not self.is_process_running
or not self.websocket_server_is_running
):
self.exit()
def _init_server(self):
if self._websocket_server is not None:
return
self.log.debug(
"Initialization of websocket server for host communication"
)
self._websocket_server = websocket_server = WebServerTool()
if websocket_server.port_occupied(
websocket_server.host_name,
websocket_server.port
):
self.log.info(
"Server already running, sending actual context and exit."
)
asyncio.run(websocket_server.send_context_change(self.route_name))
self.exit()
return
# Add Websocket route
websocket_server.add_route("*", "/ws/", WebSocketAsync)
# Add after effects route to websocket handler
print("Adding {} route".format(self.route_name))
WebSocketAsync.add_route(
self.route_name, AfterEffectsRoute
)
self.log.info("Starting websocket server for host communication")
websocket_server.start_server()
def _start_process(self):
if self._process is not None:
return
self.log.info("Starting host process")
try:
self._process = subprocess.Popen(
self._subprocess_args,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL
)
except Exception:
self.log.info("exce", exc_info=True)
self.exit()
class AfterEffectsRoute(WebSocketRoute):
"""
One route, mimicking an external application (like Harmony, etc.).
All functions can be called from the client.
'do_notify' calls a function on the client side - mimicking a
notification after a long-running job on the server or similar
"""
instance = None
def init(self, **kwargs):
# Python __init__ must return "self".
# This 'init' method might return anything.
log.debug("someone called AfterEffects route")
self.instance = self
return kwargs
# server functions
async def ping(self):
log.debug("someone called AfterEffects route ping")
# This method calls function on the client side
# client functions
async def set_context(self, project, asset, task):
"""
Sets 'project', 'asset' and 'task' to environment, eg. setting context
Args:
project (str)
asset (str)
task (str)
"""
log.info("Setting context change")
log.info("project {} asset {} ".format(project, asset))
if project:
api.Session["AVALON_PROJECT"] = project
os.environ["AVALON_PROJECT"] = project
if asset:
api.Session["AVALON_ASSET"] = asset
os.environ["AVALON_ASSET"] = asset
if task:
api.Session["AVALON_TASK"] = task
os.environ["AVALON_TASK"] = task
async def read(self):
log.debug("aftereffects.read client calls server server calls "
"aftereffects client")
return await self.socket.call('aftereffects.read')
# panel routes for tools
async def creator_route(self):
self._tool_route("creator")
async def workfiles_route(self):
self._tool_route("workfiles")
async def loader_route(self):
self._tool_route("loader")
async def publish_route(self):
self._tool_route("publish")
async def sceneinventory_route(self):
self._tool_route("sceneinventory")
async def subsetmanager_route(self):
self._tool_route("subsetmanager")
async def experimental_tools_route(self):
self._tool_route("experimental_tools")
def _tool_route(self, _tool_name):
"""The address accessed when clicking on the buttons."""
partial_method = functools.partial(show_tool_by_name,
_tool_name)
ProcessLauncher.execute_in_main_thread(partial_method)
# Required return statement.
return "nothing"

View file

@ -0,0 +1,71 @@
import os
import sys
import contextlib
import traceback
import logging
from Qt import QtWidgets
from openpype.lib.remote_publish import headless_publish
from openpype.tools.utils import host_tools
from .launch_logic import ProcessLauncher, get_stub
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
def safe_excepthook(*args):
traceback.print_exception(*args)
def main(*subprocess_args):
sys.excepthook = safe_excepthook
import avalon.api
from openpype.hosts.aftereffects import api
avalon.api.install(api)
os.environ["OPENPYPE_LOG_NO_COLORS"] = "False"
app = QtWidgets.QApplication([])
app.setQuitOnLastWindowClosed(False)
launcher = ProcessLauncher(subprocess_args)
launcher.start()
if os.environ.get("HEADLESS_PUBLISH"):
# reusing ConsoleTrayApp approach as it was already implemented
launcher.execute_in_main_thread(lambda: headless_publish(
log,
"CloseAE",
os.environ.get("IS_TEST")))
elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True):
save = False
if os.getenv("WORKFILES_SAVE_AS"):
save = True
launcher.execute_in_main_thread(
lambda: host_tools.show_tool_by_name("workfiles", save=save)
)
sys.exit(app.exec_())
@contextlib.contextmanager
def maintained_selection():
"""Maintain selection during context."""
selection = get_stub().get_selected_items(True, False, False)
try:
yield selection
finally:
pass
def get_extension_manifest_path():
return os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"extension",
"CSXS",
"manifest.xml"
)

Binary file not shown (added image, 8.6 KiB)

Binary file not shown (added image, 13 KiB)

View file

@ -0,0 +1,272 @@
import os
import sys
from Qt import QtWidgets
import pyblish.api
import avalon.api
from avalon import io, pipeline
from openpype import lib
from openpype.api import Logger
import openpype.hosts.aftereffects
from .launch_logic import get_stub
log = Logger.get_logger(__name__)
HOST_DIR = os.path.dirname(
os.path.abspath(openpype.hosts.aftereffects.__file__)
)
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
def check_inventory():
if not lib.any_outdated():
return
host = pyblish.api.registered_host()
outdated_containers = []
for container in host.ls():
representation = container['representation']
representation_doc = io.find_one(
{
"_id": io.ObjectId(representation),
"type": "representation"
},
projection={"parent": True}
)
if representation_doc and not lib.is_latest(representation_doc):
outdated_containers.append(container)
# Warn about outdated containers.
print("Starting new QApplication..")
app = QtWidgets.QApplication(sys.argv)
message_box = QtWidgets.QMessageBox()
message_box.setIcon(QtWidgets.QMessageBox.Warning)
msg = "There are outdated containers in the scene."
message_box.setText(msg)
message_box.exec_()
def application_launch():
check_inventory()
def install():
print("Installing Pype config...")
pyblish.api.register_host("aftereffects")
pyblish.api.register_plugin_path(PUBLISH_PATH)
avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
log.info(PUBLISH_PATH)
pyblish.api.register_callback(
"instanceToggled", on_pyblish_instance_toggled
)
avalon.api.on("application.launched", application_launch)
def uninstall():
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH)
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle layer visibility on instance toggles."""
instance[0].Visible = new_value
def get_asset_settings():
"""Get settings on current asset from database.
Returns:
dict: Scene data.
"""
asset_data = lib.get_asset()["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
handle_start = asset_data.get("handleStart")
handle_end = asset_data.get("handleEnd")
resolution_width = asset_data.get("resolutionWidth")
resolution_height = asset_data.get("resolutionHeight")
duration = (frame_end - frame_start + 1) + handle_start + handle_end
return {
"fps": fps,
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end,
"resolutionWidth": resolution_width,
"resolutionHeight": resolution_height,
"duration": duration
}
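# Worked example (made-up values): frameStart=1001, frameEnd=1100,
# handleStart=10, handleEnd=10 gives
# duration = (1100 - 1001 + 1) + 10 + 10 = 120 frames.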
def containerise(name,
namespace,
comp,
context,
loader=None,
suffix="_CON"):
"""
Containerisation enables tracking of version, author and origin
for loaded assets.
Creates dictionary payloads that get saved into file metadata. Each
container records who loaded it (loader) and its members (single or
multiple in case of background).
Arguments:
name (str): Name of resulting assembly
namespace (str): Namespace under which to host container
comp (Comp): Composition to containerise
context (dict): Asset information
loader (str, optional): Name of loader used to produce this container.
suffix (str, optional): Suffix of container, defaults to `_CON`.
Returns:
comp (AEItem): the containerised composition with imprinted metadata
"""
data = {
"schema": "openpype:container-2.0",
"id": pipeline.AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace,
"loader": str(loader),
"representation": str(context["representation"]["_id"]),
"members": comp.members or [comp.id]
}
stub = get_stub()
stub.imprint(comp, data)
return comp
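# Illustrative sketch of the payload imprinted above ("schema" and "id" come
# from the code, the remaining values are made up):
#
#     {
#         "schema": "openpype:container-2.0",
#         "id": pipeline.AVALON_CONTAINER_ID,
#         "name": "imageBG",
#         "namespace": "imageBG_001",
#         "loader": "BackgroundLoader",
#         "representation": "61f2b1c3d4e5f6a7b8c9d0e1",
#         "members": [1234]
#     }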
def _get_stub():
"""
Handle pulling stub from AE to run operations on host
Returns:
(AEServerStub) or None
"""
try:
stub = get_stub()  # only after AfterEffects is up
except lib.ConnectionNotEstablishedYet:
print("Not connected yet, ignoring")
return
if not stub.get_active_document_name():
return
return stub
def ls():
"""Yields containers from active AfterEffects document.
This is the host-equivalent of api.ls(), but instead of listing
assets on disk, it lists assets already loaded in AE; once loaded
they are called 'containers'. Used in Manage tool.
Containers could be on multiple levels: single images/videos as a
FootageItem, or multiple items - backgrounds (folder with automatically
created composition and all imported layers).
Yields:
dict: container
"""
try:
stub = get_stub() # only after AfterEffects is up
except lib.ConnectionNotEstablishedYet:
print("Not connected yet, ignoring")
return
layers_meta = stub.get_metadata()
for item in stub.get_items(comps=True,
folders=True,
footages=True):
data = stub.read(item, layers_meta)
# Skip non-tagged layers.
if not data:
continue
# Filter to only containers.
if "container" not in data["id"]:
continue
# Append transient data
data["objectName"] = item.name.replace(stub.LOADED_ICON, '')
data["layer"] = item
yield data
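# Hypothetical usage sketch: the Manage (scene inventory) tool consumes the
# generator above roughly like this; printing is for illustration only.
def _example_print_containers():
    """Illustrative only: list containers loaded in the active workfile."""
    for container in ls():
        print(container["objectName"], container["representation"])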
def list_instances():
"""
List all created instances from current workfile which
will be published.
Pulls from File > File Info
For SubsetManager
Returns:
(list) of dictionaries matching instances format
"""
stub = _get_stub()
if not stub:
return []
instances = []
layers_meta = stub.get_metadata()
for instance in layers_meta:
if instance.get("schema") and \
"container" in instance.get("schema"):
continue
uuid_val = instance.get("uuid")
if uuid_val:
instance['uuid'] = uuid_val
else:
instance['uuid'] = instance.get("members")[0] # legacy
instances.append(instance)
return instances
def remove_instance(instance):
"""
Remove instance from current workfile metadata.
Updates metadata of current file in File > File Info and removes
icon highlight on group layer.
For SubsetManager
Args:
instance (dict): instance representation from subsetmanager model
"""
stub = _get_stub()
if not stub:
return
stub.remove_instance(instance.get("uuid"))
item = stub.get_item(instance.get("uuid"))
if item:
stub.rename_item(item.id,
item.name.replace(stub.PUBLISH_ICON, ''))

View file

@ -0,0 +1,9 @@
import avalon.api
from .launch_logic import get_stub
class AfterEffectsLoader(avalon.api.Loader):
@staticmethod
def get_stub():
return get_stub()

View file

@ -0,0 +1,49 @@
"""Host API required Work Files tool"""
import os
from .launch_logic import get_stub
from avalon import api
def _active_document():
document_name = get_stub().get_active_document_name()
if not document_name:
return None
return document_name
def file_extensions():
return api.HOST_WORKFILE_EXTENSIONS["aftereffects"]
def has_unsaved_changes():
if _active_document():
return not get_stub().is_saved()
return False
def save_file(filepath):
get_stub().saveAs(filepath, True)
def open_file(filepath):
get_stub().open(filepath)
return True
def current_file():
try:
full_name = get_stub().get_active_document_full_name()
if full_name and full_name != "null":
return os.path.normpath(full_name).replace("\\", "/")
except Exception:
pass
return None
def work_root(session):
return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")
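# Hypothetical usage sketch of the Work Files host API above
# (the .aep path is made up):
#
#     if has_unsaved_changes():
#         save_file(current_file() or "C:/work/scene_v001.aep")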

View file

@ -0,0 +1,605 @@
"""
Stub handling connection from server to client.
Used anywhere solution is calling client methods.
"""
import json
import logging
import attr
from wsrpc_aiohttp import WebSocketAsync
from avalon.tools.webserver.app import WebServerTool
@attr.s
class AEItem(object):
"""
Object denoting an Item in AE. Each item is created in AE by a Loader,
but contains the same fields, which are used in later processing.
"""
# metadata
id = attr.ib() # id created by AE, could be used for querying
name = attr.ib() # name of item
item_type = attr.ib(default=None) # item type (footage, folder, comp)
# all imported elements, single for
# regular image, array for Backgrounds
members = attr.ib(factory=list)
workAreaStart = attr.ib(default=None)
workAreaDuration = attr.ib(default=None)
frameRate = attr.ib(default=None)
file_name = attr.ib(default=None)
class AfterEffectsServerStub():
"""
Stub for calling functions on the client (AfterEffects js) side.
Expects that client is already connected (started when avalon menu
is opened).
'self.websocketserver.call' is used as async wrapper
"""
PUBLISH_ICON = '\u2117 '
LOADED_ICON = '\u25bc'
def __init__(self):
self.websocketserver = WebServerTool.get_instance()
self.client = self.get_client()
self.log = logging.getLogger(self.__class__.__name__)
@staticmethod
def get_client():
"""
Return first connected client to WebSocket
TODO implement selection by Route
:return: <WebSocketAsync> client
"""
clients = WebSocketAsync.get_clients()
client = None
if len(clients) > 0:
key = list(clients.keys())[0]
client = clients.get(key)
return client
def open(self, path):
"""
Open file located at 'path' (local).
Args:
path(string): file path locally
Returns: None
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.open', path=path))
return self._handle_return(res)
def get_metadata(self):
"""
Get complete stored JSON with metadata from AE.Metadata.Label
field.
It contains containers loaded by any Loader OR instances created
by Creator.
Returns:
(list)
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_metadata'))
metadata = self._handle_return(res)
return metadata or []
def read(self, item, layers_meta=None):
"""
Parses item metadata from Label field of active document.
Used as filter to pick metadata for specific 'item' only.
Args:
item (AEItem): pulled info from AE
layers_meta (list): full list of stored metadata
(load and inject for better performance in loops)
Returns:
(dict):
"""
if layers_meta is None:
layers_meta = self.get_metadata()
for item_meta in layers_meta:
if 'container' in item_meta.get('id') and \
str(item.id) == str(item_meta.get('members')[0]):
return item_meta
self.log.debug("Couldn't find layer metadata")
def imprint(self, item, data, all_items=None, items_meta=None):
"""
Save item metadata to the Label field of the active document
Args:
item (AEItem):
data(string): json representation for single layer
all_items (list of item): for performance, could be
injected for usage in loop, if not, single call will be
triggered
items_meta(string): json representation from Headline
(for performance - provide only if imprint is in
loop - value should be same)
Returns: None
"""
if not items_meta:
items_meta = self.get_metadata()
result_meta = []
# fix existing
is_new = True
for item_meta in items_meta:
if item_meta.get('members') \
and str(item.id) == str(item_meta.get('members')[0]):
is_new = False
if data:
item_meta.update(data)
result_meta.append(item_meta)
else:
result_meta.append(item_meta)
if is_new:
result_meta.append(data)
# Ensure only valid ids are stored.
if not all_items:
# loaders create FootageItem now
all_items = self.get_items(comps=True,
folders=True,
footages=True)
item_ids = [int(item.id) for item in all_items]
cleaned_data = []
for meta in result_meta:
# for creation of instance OR loaded container
if 'instance' in meta.get('id') or \
int(meta.get('members')[0]) in item_ids:
cleaned_data.append(meta)
payload = json.dumps(cleaned_data, indent=4)
res = self.websocketserver.call(self.client.call
('AfterEffects.imprint',
payload=payload))
return self._handle_return(res)
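# Hypothetical usage sketch: for an existing item an empty dict removes its
# metadata entry, any other dict is merged into the stored list
# (the id below is made up, 'stub' is an AfterEffectsServerStub instance):
#
#     item = stub.get_item(1234)
#     stub.imprint(item, {"subset": "renderMain"})   # update / append
#     stub.imprint(item, {})                         # drop metadata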
def get_active_document_full_name(self):
"""
Returns full file path of the active document via ws call
Returns(string): file path
"""
res = self.websocketserver.call(self.client.call(
'AfterEffects.get_active_document_full_name'))
return self._handle_return(res)
def get_active_document_name(self):
"""
Returns just the name of the active document via ws call
Returns(string): file name
"""
res = self.websocketserver.call(self.client.call(
'AfterEffects.get_active_document_name'))
return self._handle_return(res)
def get_items(self, comps, folders=False, footages=False):
"""
Get all items from Project panel according to arguments.
There are multiple different types:
CompItem (could have multiple layers - source for Creator,
will be rendered)
FolderItem (collection type, currently used for Background
loading)
FootageItem (imported file - created by Loader)
Args:
comps (bool): return CompItems
folders (bool): return FolderItem
footages (bool): return FootageItem
Returns:
(list) of namedtuples
"""
res = self.websocketserver.call(
self.client.call('AfterEffects.get_items',
comps=comps,
folders=folders,
footages=footages)
)
return self._to_records(self._handle_return(res))
def get_selected_items(self, comps, folders=False, footages=False):
"""
Same as get_items but using selected items only
Args:
comps (bool): return CompItems
folders (bool): return FolderItem
footages (bool): return FootageItem
Returns:
(list) of namedtuples
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_selected_items',
comps=comps,
folders=folders,
footages=footages)
)
return self._to_records(self._handle_return(res))
def get_item(self, item_id):
"""
Returns metadata for particular 'item_id' or None
Args:
item_id (int, or string)
"""
for item in self.get_items(True, True, True):
if str(item.id) == str(item_id):
return item
return None
def import_file(self, path, item_name, import_options=None):
"""
Imports file as a FootageItem. Used in Loader
Args:
path (string): absolute path for asset file
item_name (string): label for created FootageItem
import_options (dict): different files (img vs psd) need different
config
"""
res = self.websocketserver.call(
self.client.call('AfterEffects.import_file',
path=path,
item_name=item_name,
import_options=import_options)
)
records = self._to_records(self._handle_return(res))
if records:
return records.pop()
def replace_item(self, item_id, path, item_name):
""" Replace FootageItem with new file
Args:
item_id (int):
path (string):absolute path
item_name (string): label on item in Project list
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.replace_item',
item_id=item_id,
path=path, item_name=item_name))
return self._handle_return(res)
def rename_item(self, item_id, item_name):
""" Replace item with item_name
Args:
item_id (int):
item_name (string): label on item in Project list
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.rename_item',
item_id=item_id,
item_name=item_name))
return self._handle_return(res)
def delete_item(self, item_id):
""" Deletes *Item in a file
Args:
item_id (int):
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.delete_item',
item_id=item_id))
return self._handle_return(res)
def remove_instance(self, instance_id):
"""
Removes instance with 'instance_id' from file's metadata and
saves it.
The matching item is kept in the file though.
Args:
instance_id(string): instance uuid
"""
cleaned_data = []
for instance in self.get_metadata():
uuid_val = instance.get("uuid")
if not uuid_val:
uuid_val = instance.get("members")[0] # legacy
if uuid_val != instance_id:
cleaned_data.append(instance)
payload = json.dumps(cleaned_data, indent=4)
res = self.websocketserver.call(self.client.call
('AfterEffects.imprint',
payload=payload))
return self._handle_return(res)
def is_saved(self):
# TODO
return True
def set_label_color(self, item_id, color_idx):
"""
Used to highlight additional information in the Project panel.
Green color is loaded asset, blue is created asset
Args:
item_id (int):
color_idx (int): 0-16 Label colors from AE Project view
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.set_label_color',
item_id=item_id,
color_idx=color_idx))
return self._handle_return(res)
def get_work_area(self, item_id):
""" Get work are information for render purposes
Args:
item_id (int):
Returns:
(namedtuple)
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_work_area',
item_id=item_id
))
records = self._to_records(self._handle_return(res))
if records:
return records.pop()
def set_work_area(self, item, start, duration, frame_rate):
"""
Set work area to predefined values (from Ftrack).
Work area directs what gets rendered.
Beware of rounding, AE expects seconds, not frames directly.
Args:
item (dict):
start (float): workAreaStart in seconds
duration (float): in seconds
frame_rate (float): frames per second
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.set_work_area',
item_id=item.id,
start=start,
duration=duration,
frame_rate=frame_rate))
return self._handle_return(res)
def save(self):
"""
Saves active document
Returns: None
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.save'))
return self._handle_return(res)
def saveAs(self, project_path, as_copy):
"""
Saves active project to 'project_path' (aep), optionally as a copy
Args:
project_path(string): full local path
as_copy: <boolean>
Returns: None
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.saveAs',
image_path=project_path,
as_copy=as_copy))
return self._handle_return(res)
def get_render_info(self):
""" Get render queue info for render purposes
Returns:
(namedtuple): with 'file_name' field
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_render_info'))
records = self._to_records(self._handle_return(res))
if records:
return records.pop()
def get_audio_url(self, item_id):
""" Get audio layer absolute url for comp
Args:
item_id (int): composition id
Returns:
(str): absolute path url
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_audio_url',
item_id=item_id))
return self._handle_return(res)
def import_background(self, comp_id, comp_name, files):
"""
Imports background images into an existing or new composition.
If comp_id is not provided, a new composition is created; basic
values (width, height, frame ratio) are taken from the first
imported image.
All images from background json are imported as a FootageItem and
separate layer is created for each of them under composition.
Order of imported 'files' is important.
Args:
comp_id (int): id of existing composition (null if new)
comp_name (str): used when new composition
files (list): list of absolute paths to import and
add as layers
Returns:
(AEItem): object with id of created folder, all imported images
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.import_background',
comp_id=comp_id,
comp_name=comp_name,
files=files))
records = self._to_records(self._handle_return(res))
if records:
return records.pop()
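# Hypothetical usage sketch: creating a fresh background composition from two
# made-up layer files; passing None as comp_id asks AE to create a new
# composition ('stub' is an AfterEffectsServerStub instance):
#
#     bg_item = stub.import_background(
#         None, "imageBG", ["C:/bg/sky.png", "C:/bg/hills.png"]
#     )
#     print(bg_item.id, bg_item.members)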
def reload_background(self, comp_id, comp_name, files):
"""
Reloads background images in an existing composition.
Existing layers are replaced or re-imported, obsolete layers are
removed and the folder and composition are renamed.
Args:
comp_id (int): id of existing composition to be overwritten
comp_name (str): new name of composition (could be same as old
if version up only)
files (list): list of absolute paths to import and
add as layers
Returns:
(AEItem): object with id of created folder, all imported images
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.reload_background',
comp_id=comp_id,
comp_name=comp_name,
files=files))
records = self._to_records(self._handle_return(res))
if records:
return records.pop()
def add_item_as_layer(self, comp_id, item_id):
"""
Adds already imported FootageItem ('item_id') as a new
layer to composition ('comp_id').
Args:
comp_id (int): id of target composition
item_id (int): FootageItem.id
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.add_item_as_layer',
comp_id=comp_id,
item_id=item_id))
records = self._to_records(self._handle_return(res))
if records:
return records.pop()
def render(self, folder_url):
"""
Render all render queue items to 'folder_url'
Args:
folder_url(string): local folder path for collecting
Returns: None
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.render',
folder_url=folder_url))
return self._handle_return(res)
def get_extension_version(self):
"""Returns version number of installed extension."""
res = self.websocketserver.call(self.client.call(
'AfterEffects.get_extension_version'))
return self._handle_return(res)
def close(self):
res = self.websocketserver.call(self.client.call('AfterEffects.close'))
return self._handle_return(res)
def _handle_return(self, res):
"""Wraps return, throws ValueError if 'error' key is present."""
if res and isinstance(res, str) and res != "undefined":
try:
parsed = json.loads(res)
except json.decoder.JSONDecodeError:
raise ValueError("Received broken JSON {}".format(res))
if not parsed: # empty list
return parsed
first_item = parsed
if isinstance(parsed, list):
first_item = parsed[0]
if first_item:
if first_item.get("error"):
raise ValueError(first_item["error"])
# singular values (file name etc)
if first_item.get("result") is not None:
return first_item["result"]
return parsed # parsed
return res
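# Illustrative examples of values handled above (payloads are made up):
#     '{"error": "File not found"}'     -> raises ValueError("File not found")
#     '{"result": "C:/work/a.aep"}'     -> returns "C:/work/a.aep"
#     '[{"id": 1, "name": "comp"}]'     -> returns the parsed list
#     'undefined' or None               -> returned unchanged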
def _to_records(self, payload):
"""
Converts json representation into list of AEItem
for dot notation access to work.
Args:
payload (dict or list): parsed json representation, expected
to come from _handle_return
Returns: <list of AEItem>
"""
if not payload:
return []
if isinstance(payload, str): # safety fallback
try:
payload = json.loads(payload)
except json.decoder.JSONDecodeError:
raise ValueError("Received broken JSON {}".format(payload))
if isinstance(payload, dict):
payload = [payload]
ret = []
# convert to AEItem to use dot notation
for d in payload:
if not d:
continue
# currently implemented and expected fields
item = AEItem(d.get('id'),
d.get('name'),
d.get('type'),
d.get('members'),
d.get('workAreaStart'),
d.get('workAreaDuration'),
d.get('frameRate'),
d.get('file_name'))
ret.append(item)
return ret

View file

@ -1,9 +1,5 @@
from openpype.hosts.aftereffects.plugins.create import create_render
import logging
log = logging.getLogger(__name__)
class CreateLocalRender(create_render.CreateRender):
""" Creator to render locally.

View file

@ -1,10 +1,10 @@
from avalon.api import CreatorError
import openpype.api
from Qt import QtWidgets
from avalon import aftereffects
import logging
log = logging.getLogger(__name__)
from openpype.hosts.aftereffects.api import (
get_stub,
list_instances
)
class CreateRender(openpype.api.Creator):
@ -21,42 +21,41 @@ class CreateRender(openpype.api.Creator):
family = "render"
def process(self):
stub = aftereffects.stub() # only after After Effects is up
stub = get_stub() # only after After Effects is up
if (self.options or {}).get("useSelection"):
items = stub.get_selected_items(comps=True,
folders=False,
footages=False)
items = stub.get_selected_items(
comps=True, folders=False, footages=False
)
if len(items) > 1:
self._show_msg("Please select only single composition at time.")
return False
raise CreatorError(
"Please select only single composition at time."
)
if not items:
self._show_msg("Nothing to create. Select composition " +
"if 'useSelection' or create at least " +
"one composition.")
return False
raise CreatorError((
"Nothing to create. Select composition "
"if 'useSelection' or create at least "
"one composition."
))
existing_subsets = [instance['subset'].lower()
for instance in aftereffects.list_instances()]
existing_subsets = [
instance['subset'].lower()
for instance in list_instances()
]
item = items.pop()
if self.name.lower() in existing_subsets:
txt = "Instance with name \"{}\" already exists.".format(self.name)
self._show_msg(txt)
return False
raise CreatorError(txt)
self.data["members"] = [item.id]
self.data["uuid"] = item.id # for SubsetManager
self.data["subset"] = self.data["subset"]\
.replace(stub.PUBLISH_ICON, '')\
self.data["subset"] = (
self.data["subset"]
.replace(stub.PUBLISH_ICON, '')
.replace(stub.LOADED_ICON, '')
)
stub.imprint(item, self.data)
stub.set_label_color(item.id, 14) # Cyan options 0 - 16
stub.rename_item(item.id, stub.PUBLISH_ICON + self.data["subset"])
def _show_msg(self, txt):
msg = QtWidgets.QMessageBox()
msg.setIcon(QtWidgets.QMessageBox.Warning)
msg.setText(txt)
msg.exec_()

View file

@ -1,13 +1,18 @@
import re
from avalon import api, aftereffects
import avalon.api
from openpype.lib import get_background_layers, get_unique_layer_name
stub = aftereffects.stub()
from openpype.lib import (
get_background_layers,
get_unique_layer_name
)
from openpype.hosts.aftereffects.api import (
AfterEffectsLoader,
containerise
)
class BackgroundLoader(api.Loader):
class BackgroundLoader(AfterEffectsLoader):
"""
Load images from Background family
Creates for each background separate folder with all imported images
@ -21,27 +26,30 @@ class BackgroundLoader(api.Loader):
representations = ["json"]
def load(self, context, name=None, namespace=None, data=None):
stub = self.get_stub()
items = stub.get_items(comps=True)
existing_items = [layer.name for layer in items]
existing_items = [layer.name.replace(stub.LOADED_ICON, '')
for layer in items]
comp_name = get_unique_layer_name(
existing_items,
"{}_{}".format(context["asset"]["name"], name))
layers = get_background_layers(self.fname)
if not layers:
raise ValueError("No layers found in {}".format(self.fname))
comp = stub.import_background(None, stub.LOADED_ICON + comp_name,
layers)
if not comp:
self.log.warning(
"Import background failed.")
self.log.warning("Check host app for alert error.")
return
raise ValueError("Import background failed. "
"Please contact support")
self[:] = [comp]
namespace = namespace or comp_name
return aftereffects.containerise(
return containerise(
name,
namespace,
comp,
@ -51,6 +59,7 @@ class BackgroundLoader(api.Loader):
def update(self, container, representation):
""" Switch asset or change version """
stub = self.get_stub()
context = representation.get("context", {})
_ = container.pop("layer")
@ -69,7 +78,7 @@ class BackgroundLoader(api.Loader):
else: # switching version - keep same name
comp_name = container["namespace"]
path = api.get_representation_path(representation)
path = avalon.api.get_representation_path(representation)
layers = get_background_layers(path)
comp = stub.reload_background(container["members"][1],
@ -92,6 +101,7 @@ class BackgroundLoader(api.Loader):
container (dict): container to be removed - used to get layer_id
"""
print("!!!! container:: {}".format(container))
stub = self.get_stub()
layer = container.pop("layer")
stub.imprint(layer, {})
stub.delete_item(layer.id)

View file

@ -1,11 +1,15 @@
from avalon import api, aftereffects
from openpype import lib
import re
stub = aftereffects.stub()
import avalon.api
from openpype import lib
from openpype.hosts.aftereffects.api import (
AfterEffectsLoader,
containerise
)
class FileLoader(api.Loader):
class FileLoader(AfterEffectsLoader):
"""Load images
Stores the imported asset in a container named after the asset.
@ -21,6 +25,7 @@ class FileLoader(api.Loader):
representations = ["*"]
def load(self, context, name=None, namespace=None, data=None):
stub = self.get_stub()
layers = stub.get_items(comps=True, folders=True, footages=True)
existing_layers = [layer.name for layer in layers]
comp_name = lib.get_unique_layer_name(
@ -60,7 +65,7 @@ class FileLoader(api.Loader):
self[:] = [comp]
namespace = namespace or comp_name
return aftereffects.containerise(
return containerise(
name,
namespace,
comp,
@ -70,6 +75,7 @@ class FileLoader(api.Loader):
def update(self, container, representation):
""" Switch asset or change version """
stub = self.get_stub()
layer = container.pop("layer")
context = representation.get("context", {})
@ -86,7 +92,7 @@ class FileLoader(api.Loader):
"{}_{}".format(context["asset"], context["subset"]))
else: # switching version - keep same name
layer_name = container["namespace"]
path = api.get_representation_path(representation)
path = avalon.api.get_representation_path(representation)
# with aftereffects.maintained_selection(): # TODO
stub.replace_item(layer.id, path, stub.LOADED_ICON + layer_name)
stub.imprint(
@ -101,6 +107,7 @@ class FileLoader(api.Loader):
Args:
container (dict): container to be removed - used to get layer_id
"""
stub = self.get_stub()
layer = container.pop("layer")
stub.imprint(layer, {})
stub.delete_item(layer.id)

View file

@ -1,6 +1,6 @@
import pyblish.api
from avalon import aftereffects
from openpype.hosts.aftereffects.api import get_stub
class AddPublishHighlight(pyblish.api.InstancePlugin):
@ -15,7 +15,7 @@ class AddPublishHighlight(pyblish.api.InstancePlugin):
optional = True
def process(self, instance):
stub = aftereffects.stub()
stub = get_stub()
item = instance.data
# comp name contains highlight icon
stub.rename_item(item["comp_id"], item["comp_name"])

View file

@ -0,0 +1,27 @@
# -*- coding: utf-8 -*-
"""Close AE after publish. For Webpublishing only."""
import pyblish.api
from openpype.hosts.aftereffects.api import get_stub
class CloseAE(pyblish.api.ContextPlugin):
"""Close AE after publish. For Webpublishing only.
"""
order = pyblish.api.IntegratorOrder + 14
label = "Close AE"
optional = True
active = True
hosts = ["aftereffects"]
targets = ["remotepublish"]
def process(self, context):
self.log.info("CloseAE")
stub = get_stub()
self.log.info("Shutting down AE")
stub.save()
stub.close()
self.log.info("AE closed")

View file

@ -2,7 +2,7 @@ import os
import pyblish.api
from avalon import aftereffects
from openpype.hosts.aftereffects.api import get_stub
class CollectAudio(pyblish.api.ContextPlugin):
@ -21,7 +21,8 @@ class CollectAudio(pyblish.api.ContextPlugin):
comp_id = instance.data["comp_id"]
if not comp_id:
self.log.debug("No comp_id filled in instance")
# @iLLiCiTiT QUESTION Should return or continue?
return
context.data["audioFile"] = os.path.normpath(
aftereffects.stub().get_audio_url(comp_id)
get_stub().get_audio_url(comp_id)
).replace("\\", "/")

View file

@ -2,17 +2,17 @@ import os
import pyblish.api
from avalon import aftereffects
from openpype.hosts.aftereffects.api import get_stub
class CollectCurrentFile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
order = pyblish.api.CollectorOrder - 0.5
order = pyblish.api.CollectorOrder - 0.49
label = "Current File"
hosts = ["aftereffects"]
def process(self, context):
context.data["currentFile"] = os.path.normpath(
aftereffects.stub().get_active_document_full_name()
get_stub().get_active_document_full_name()
).replace("\\", "/")

View file

@ -0,0 +1,58 @@
import os
import re
import pyblish.api
from openpype.hosts.aftereffects.api import (
get_stub,
get_extension_manifest_path
)
class CollectExtensionVersion(pyblish.api.ContextPlugin):
""" Pulls and compares version of installed extension.
It is recommended to use the same extension version as provided with the
OpenPype code. Please use Anastasiy's Extension Manager or ZXPInstaller
to update the extension in case of an error.
You can locate extension.zxp in your installed Openpype code in
`repos/avalon-core/avalon/aftereffects`
"""
# This technically should be a validator, but other collectors might be
# impacted with usage of obsolete extension, so collector that runs first
# was chosen
order = pyblish.api.CollectorOrder - 0.5
label = "Collect extension version"
hosts = ["aftereffects"]
optional = True
active = True
def process(self, context):
installed_version = get_stub().get_extension_version()
if not installed_version:
raise ValueError("Unknown version, probably old extension")
manifest_url = get_extension_manifest_path()
if not os.path.exists(manifest_url):
self.log.debug("Unable to locate extension manifest, not checking")
return
expected_version = None
with open(manifest_url) as fp:
content = fp.read()
found = re.findall(r'(ExtensionBundleVersion=")([0-9\.]+)(")',
content)
if found:
expected_version = found[0][1]
if expected_version != installed_version:
msg = (
"Expected version '{}' found '{}'\n Please update"
" your installed extension, it might not work properly."
).format(expected_version, installed_version)
raise ValueError(msg)
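# Illustrative example (version number is made up): a manifest line such as
#     ExtensionBundleVersion="1.0.2"
# yields expected_version == "1.0.2", which must match the version reported
# by the installed extension, otherwise the ValueError above is raised.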

View file

@ -1,7 +1,7 @@
import os
import re
import attr
import tempfile
import attr
from avalon import aftereffects
import pyblish.api
@ -10,6 +10,8 @@ from openpype.settings import get_project_settings
from openpype.lib import abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
from openpype.hosts.aftereffects.api import get_stub
@attr.s
class AERenderInstance(RenderInstance):
@ -35,7 +37,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
padding_width = 6
rendered_extension = 'png'
stub = aftereffects.stub()
stub = get_stub()
def get_instances(self, context):
instances = []

View file

@ -1,9 +1,9 @@
import os
import six
import sys
import six
import openpype.api
from avalon import aftereffects
from openpype.hosts.aftereffects.api import get_stub
class ExtractLocalRender(openpype.api.Extractor):
@ -15,14 +15,13 @@ class ExtractLocalRender(openpype.api.Extractor):
families = ["render"]
def process(self, instance):
stub = aftereffects.stub()
stub = get_stub()
staging_dir = instance.data["stagingDir"]
self.log.info("staging_dir::{}".format(staging_dir))
stub.render(staging_dir)
# pull file name from Render Queue Output module
render_q = stub.get_render_info()
stub.render(staging_dir)
if not render_q:
raise ValueError("No file extension set in Render Queue")
_, ext = os.path.splitext(os.path.basename(render_q.file_name))
@ -56,8 +55,7 @@ class ExtractLocalRender(openpype.api.Extractor):
ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
# Generate thumbnail.
thumbnail_path = os.path.join(staging_dir,
"thumbnail.jpg")
thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg")
args = [
ffmpeg_path, "-y",

View file

@ -1,5 +1,5 @@
import openpype.api
from avalon import aftereffects
from openpype.hosts.aftereffects.api import get_stub
class ExtractSaveScene(openpype.api.Extractor):
@ -11,5 +11,5 @@ class ExtractSaveScene(openpype.api.Extractor):
families = ["workfile"]
def process(self, instance):
stub = aftereffects.stub()
stub = get_stub()
stub.save()

View file

@ -2,7 +2,7 @@ import pyblish.api
from openpype.action import get_errored_plugins_from_data
from openpype.lib import version_up
from avalon import aftereffects
from openpype.hosts.aftereffects.api import get_stub
class IncrementWorkfile(pyblish.api.InstancePlugin):
@ -25,6 +25,6 @@ class IncrementWorkfile(pyblish.api.InstancePlugin):
)
scene_path = version_up(instance.context.data["currentFile"])
aftereffects.stub().saveAs(scene_path, True)
get_stub().saveAs(scene_path, True)
self.log.info("Incremented workfile to: {}".format(scene_path))

View file

@ -1,5 +1,5 @@
import openpype.api
from avalon import aftereffects
from openpype.hosts.aftereffects.api import get_stub
class RemovePublishHighlight(openpype.api.Extractor):
@ -16,7 +16,7 @@ class RemovePublishHighlight(openpype.api.Extractor):
families = ["render.farm"]
def process(self, instance):
stub = aftereffects.stub()
stub = get_stub()
self.log.debug("instance::{}".format(instance.data))
item = instance.data
comp_name = item["comp_name"].replace(stub.PUBLISH_ICON, '')

View file

@ -1,7 +1,7 @@
from avalon import api
import pyblish.api
import openpype.api
from avalon import aftereffects
from openpype.hosts.aftereffects.api import get_stub
class ValidateInstanceAssetRepair(pyblish.api.Action):
@ -22,7 +22,7 @@ class ValidateInstanceAssetRepair(pyblish.api.Action):
# Apply pyblish.logic to get the instances for the plug-in
instances = pyblish.api.instances_by_plugin(failed, plugin)
stub = aftereffects.stub()
stub = get_stub()
for instance in instances:
data = stub.read(instance[0])

View file

@ -5,11 +5,7 @@ import re
import pyblish.api
from avalon import aftereffects
import openpype.hosts.aftereffects.api as api
stub = aftereffects.stub()
from openpype.hosts.aftereffects.api import get_asset_settings
class ValidateSceneSettings(pyblish.api.InstancePlugin):
@ -47,7 +43,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
resolutionWidth
resolutionHeight
TODO support in extension is missing for now
By default validates duration (how many frames should be published)
"""
@ -62,7 +58,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
def process(self, instance):
"""Plugin entry point."""
expected_settings = api.get_asset_settings()
expected_settings = get_asset_settings()
self.log.info("config from DB::{}".format(expected_settings))
if any(re.search(pattern, os.getenv('AVALON_TASK'))

View file

@ -32,7 +32,7 @@ class InstallPySideToBlender(PreLaunchHook):
def inner_execute(self):
# Get blender's python directory
version_regex = re.compile(r"^2\.[0-9]{2}$")
version_regex = re.compile(r"^[2-3]\.[0-9]+$")
executable = self.launch_context.executable.executable_path
if os.path.basename(executable).lower() != "blender.exe":

View file

@ -1,105 +1,5 @@
from .api.utils import (
setup
)
from .api.pipeline import (
install,
uninstall,
ls,
containerise,
update_container,
maintained_selection,
remove_instance,
list_instances,
imprint
)
from .api.lib import (
FlameAppFramework,
maintain_current_timeline,
get_project_manager,
get_current_project,
get_current_timeline,
create_bin,
)
from .api.menu import (
FlameMenuProjectConnect,
FlameMenuTimeline
)
from .api.workio import (
open_file,
save_file,
current_file,
has_unsaved_changes,
file_extensions,
work_root
)
import os
HOST_DIR = os.path.dirname(
os.path.abspath(__file__)
)
API_DIR = os.path.join(HOST_DIR, "api")
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
app_framework = None
apps = []
__all__ = [
"HOST_DIR",
"API_DIR",
"PLUGINS_DIR",
"PUBLISH_PATH",
"LOAD_PATH",
"CREATE_PATH",
"INVENTORY_PATH",
"INVENTORY_PATH",
"app_framework",
"apps",
# pipeline
"install",
"uninstall",
"ls",
"containerise",
"update_container",
"reload_pipeline",
"maintained_selection",
"remove_instance",
"list_instances",
"imprint",
# utils
"setup",
# lib
"FlameAppFramework",
"maintain_current_timeline",
"get_project_manager",
"get_current_project",
"get_current_timeline",
"create_bin",
# menu
"FlameMenuProjectConnect",
"FlameMenuTimeline",
# plugin
# workio
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root"
]

View file

@ -1,3 +1,115 @@
"""
OpenPype Autodesk Flame api
"""
from .constants import (
COLOR_MAP,
MARKER_NAME,
MARKER_COLOR,
MARKER_DURATION,
MARKER_PUBLISH_DEFAULT
)
from .lib import (
CTX,
FlameAppFramework,
get_project_manager,
get_current_project,
get_current_sequence,
create_bin,
create_segment_data_marker,
get_segment_data_marker,
set_segment_data_marker,
set_publish_attribute,
get_publish_attribute,
get_sequence_segments,
maintained_segment_selection,
reset_segment_selection,
get_segment_attributes
)
from .utils import (
setup
)
from .pipeline import (
install,
uninstall,
ls,
containerise,
update_container,
remove_instance,
list_instances,
imprint,
maintained_selection
)
from .menu import (
FlameMenuProjectConnect,
FlameMenuTimeline
)
from .plugin import (
Creator,
PublishableClip
)
from .workio import (
open_file,
save_file,
current_file,
has_unsaved_changes,
file_extensions,
work_root
)
__all__ = [
# constants
"COLOR_MAP",
"MARKER_NAME",
"MARKER_COLOR",
"MARKER_DURATION",
"MARKER_PUBLISH_DEFAULT",
# lib
"CTX",
"FlameAppFramework",
"get_project_manager",
"get_current_project",
"get_current_sequence",
"create_bin",
"create_segment_data_marker",
"get_segment_data_marker",
"set_segment_data_marker",
"set_publish_attribute",
"get_publish_attribute",
"get_sequence_segments",
"maintained_segment_selection",
"reset_segment_selection",
"get_segment_attributes",
# pipeline
"install",
"uninstall",
"ls",
"containerise",
"update_container",
"reload_pipeline",
"maintained_selection",
"remove_instance",
"list_instances",
"imprint",
"maintained_selection",
# utils
"setup",
# menu
"FlameMenuProjectConnect",
"FlameMenuTimeline",
# plugin
"Creator",
"PublishableClip",
# workio
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root"
]

View file

@ -0,0 +1,24 @@
"""
OpenPype Flame api constants
"""
# OpenPype marker workflow variables
MARKER_NAME = "OpenPypeData"
MARKER_DURATION = 0
MARKER_COLOR = "cyan"
MARKER_PUBLISH_DEFAULT = False
# OpenPype color definitions
COLOR_MAP = {
"red": (1.0, 0.0, 0.0),
"orange": (1.0, 0.5, 0.0),
"yellow": (1.0, 1.0, 0.0),
"pink": (1.0, 0.5, 1.0),
"white": (1.0, 1.0, 1.0),
"green": (0.0, 1.0, 0.0),
"cyan": (0.0, 1.0, 1.0),
"blue": (0.0, 0.0, 1.0),
"purple": (0.5, 0.0, 0.5),
"magenta": (0.5, 0.0, 1.0),
"black": (0.0, 0.0, 0.0)
}
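# A minimal sketch of how these constants are meant to be combined when an
# OpenPype marker is created: MARKER_COLOR is a key into COLOR_MAP, which
# yields the RGB tuple the Flame API expects (the frame number below is
# illustrative and `segment` is assumed to be a flame.PySegment):
#
#   marker = segment.create_marker(100)
#   marker.name = MARKER_NAME
#   marker.duration = MARKER_DURATION
#   marker.colour = COLOR_MAP[MARKER_COLOR]  # (0.0, 1.0, 1.0) for "cyan"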

View file

@ -1,12 +1,27 @@
import sys
import os
import re
import json
import pickle
import contextlib
from pprint import pformat
from .constants import (
MARKER_COLOR,
MARKER_DURATION,
MARKER_NAME,
COLOR_MAP,
MARKER_PUBLISH_DEFAULT
)
from openpype.api import Logger
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
class CTX:
# singleton used for passing data between api modules
app_framework = None
flame_apps = []
selection = None
@contextlib.contextmanager
@ -115,10 +130,13 @@ class FlameAppFramework(object):
)
self.log.info("[{}] waking up".format(self.__class__.__name__))
self.load_prefs()
try:
self.load_prefs()
except RuntimeError:
self.save_prefs()
# menu auto-refresh defaults
if not self.prefs_global.get("menu_auto_refresh"):
self.prefs_global["menu_auto_refresh"] = {
"media_panel": True,
@ -207,40 +225,6 @@ class FlameAppFramework(object):
return True
@contextlib.contextmanager
def maintain_current_timeline(to_timeline, from_timeline=None):
"""Maintain current timeline selection during context
Attributes:
from_timeline (resolve.Timeline)[optional]:
Example:
>>> print(from_timeline.GetName())
timeline1
>>> print(to_timeline.GetName())
timeline2
>>> with maintain_current_timeline(to_timeline):
... print(get_current_timeline().GetName())
timeline2
>>> print(get_current_timeline().GetName())
timeline1
"""
# todo: this is still Resolve's implementation
project = get_current_project()
working_timeline = from_timeline or project.GetCurrentTimeline()
    # switch to the input timeline
project.SetCurrentTimeline(to_timeline)
try:
        # do the work
yield
finally:
# put the original working timeline to context
project.SetCurrentTimeline(working_timeline)
def get_project_manager():
# TODO: get_project_manager
return
@ -252,13 +236,32 @@ def get_media_storage():
def get_current_project():
# TODO: get_current_project
return
import flame
return flame.project.current_project
def get_current_timeline(new=False):
# TODO: get_current_timeline
return
def get_current_sequence(selection):
import flame
def segment_to_sequence(_segment):
track = _segment.parent
version = track.parent
return version.parent
process_timeline = None
if len(selection) == 1:
if isinstance(selection[0], flame.PySequence):
process_timeline = selection[0]
if isinstance(selection[0], flame.PySegment):
process_timeline = segment_to_sequence(selection[0])
else:
for segment in selection:
if isinstance(segment, flame.PySegment):
process_timeline = segment_to_sequence(segment)
break
return process_timeline
def create_bin(name, root=None):
@ -272,3 +275,287 @@ def rescan_hooks():
flame.execute_shortcut('Rescan Python Hooks')
except Exception:
pass
def get_metadata(project_name, _log=None):
from adsk.libwiretapPythonClientAPI import (
WireTapClient,
WireTapServerHandle,
WireTapNodeHandle,
WireTapStr
)
class GetProjectColorPolicy(object):
def __init__(self, host_name=None, _log=None):
            # Create a connection to the Wiretap server using the Wiretap
            # python API.
            #
self.log = _log or log
self.host_name = host_name or "localhost"
self._wiretap_client = WireTapClient()
if not self._wiretap_client.init():
raise Exception("Could not initialize Wiretap Client")
self._server = WireTapServerHandle(
"{}:IFFFS".format(self.host_name))
def process(self, project_name):
policy_node_handle = WireTapNodeHandle(
self._server,
"/projects/{}/syncolor/policy".format(project_name)
)
self.log.info(policy_node_handle)
policy = WireTapStr()
if not policy_node_handle.getNodeTypeStr(policy):
self.log.warning(
"Could not retrieve policy of '%s': %s" % (
policy_node_handle.getNodeId().id(),
policy_node_handle.lastError()
)
)
return policy.c_str()
policy_wiretap = GetProjectColorPolicy(_log=_log)
return policy_wiretap.process(project_name)
def get_segment_data_marker(segment, with_marker=None):
"""
Get openpype track item tag created by creator or loader plugin.
Attributes:
segment (flame.PySegment): flame api object
        with_marker (bool)[optional]: if True, also return the marker object
Returns:
dict: openpype tag data
Returns(with_marker=True):
flame.PyMarker, dict
"""
for marker in segment.markers:
comment = marker.comment.get_value()
color = marker.colour.get_value()
name = marker.name.get_value()
if (name == MARKER_NAME) and (
color == COLOR_MAP[MARKER_COLOR]):
if not with_marker:
return json.loads(comment)
else:
return marker, json.loads(comment)
def set_segment_data_marker(segment, data=None):
"""
Set openpype track item tag to input segment.
Attributes:
segment (flame.PySegment): flame api object
Returns:
dict: json loaded data
"""
data = data or dict()
marker_data = get_segment_data_marker(segment, True)
if marker_data:
# get available openpype tag if any
marker, tag_data = marker_data
# update tag data with new data
tag_data.update(data)
# update marker with tag data
marker.comment = json.dumps(tag_data)
else:
# update tag data with new data
marker = create_segment_data_marker(segment)
# add tag data to marker's comment
marker.comment = json.dumps(data)
def set_publish_attribute(segment, value):
""" Set Publish attribute in input Tag object
    Attributes:
        segment (flame.PySegment): flame api object
value (bool): True or False
"""
tag_data = get_segment_data_marker(segment)
tag_data["publish"] = value
# set data to the publish attribute
set_segment_data_marker(segment, tag_data)
def get_publish_attribute(segment):
""" Get Publish attribute from input Tag object
    Attributes:
        segment (flame.PySegment): flame api object
Returns:
bool: True or False
"""
tag_data = get_segment_data_marker(segment)
if not tag_data:
set_publish_attribute(segment, MARKER_PUBLISH_DEFAULT)
return MARKER_PUBLISH_DEFAULT
return tag_data["publish"]
def create_segment_data_marker(segment):
""" Create openpype marker on a segment.
Attributes:
segment (flame.PySegment): flame api object
Returns:
flame.PyMarker: flame api object
"""
# get duration of segment
duration = segment.record_duration.relative_frame
# calculate start frame of the new marker
start_frame = int(segment.record_in.relative_frame) + int(duration / 2)
# create marker
marker = segment.create_marker(start_frame)
# set marker name
marker.name = MARKER_NAME
# set duration
marker.duration = MARKER_DURATION
# set colour
    marker.colour = COLOR_MAP[MARKER_COLOR]  # cyan
return marker
def get_sequence_segments(sequence, selected=False):
segments = []
# loop versions in sequence
for ver in sequence.versions:
# loop track in versions
for track in ver.tracks:
            # skip tracks that are both empty and hidden
if len(track.segments) == 0 and track.hidden:
continue
# loop all segment in remaining tracks
for segment in track.segments:
if segment.name.get_value() == "":
continue
if (
selected is True
and segment.selected.get_value() is not True
):
continue
# add it to original selection
segments.append(segment)
return segments
@contextlib.contextmanager
def maintained_segment_selection(sequence):
"""Maintain selection during context
Attributes:
sequence (flame.PySequence): python api object
Yield:
list of flame.PySegment
Example:
>>> with maintained_segment_selection(sequence) as selected_segments:
... for segment in selected_segments:
... segment.selected = False
>>> print(segment.selected)
True
"""
selected_segments = get_sequence_segments(sequence, True)
try:
# do the operation on selected segments
yield selected_segments
finally:
# reset all selected clips
reset_segment_selection(sequence)
# select only original selection of segments
for segment in selected_segments:
segment.selected = True
def reset_segment_selection(sequence):
"""Deselect all selected nodes
"""
for ver in sequence.versions:
for track in ver.tracks:
if len(track.segments) == 0 and track.hidden:
continue
for segment in track.segments:
segment.selected = False
def _get_shot_tokens_values(clip, tokens):
old_value = None
output = {}
if not clip.shot_name:
return output
old_value = clip.shot_name.get_value()
for token in tokens:
clip.shot_name.set_value(token)
_key = str(re.sub("[<>]", "", token)).replace(" ", "_")
try:
output[_key] = int(clip.shot_name.get_value())
except ValueError:
output[_key] = clip.shot_name.get_value()
clip.shot_name.set_value(old_value)
return output
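# Sketch of the token trick used above: the segment's shot name is temporarily
# set to a Flame token such as "<width>" so that reading it back returns the
# resolved value, and the original shot name is restored afterwards. For
# example, _get_shot_tokens_values(segment, ["<width>", "<height>"]) could
# return {"width": 1920, "height": 1080} (values are illustrative).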
def get_segment_attributes(segment):
if str(segment.name)[1:-1] == "":
return None
# Add timeline segment to tree
clip_data = {
"segment_name": segment.name.get_value(),
"segment_comment": segment.comment.get_value(),
"tape_name": segment.tape_name,
"source_name": segment.source_name,
"fpath": segment.file_path,
"PySegment": segment
}
# add all available shot tokens
shot_tokens = _get_shot_tokens_values(segment, [
"<colour space>", "<width>", "<height>", "<depth>", "<segment>",
"<track>", "<track name>"
])
clip_data.update(shot_tokens)
# populate shot source metadata
segment_attrs = [
"record_duration", "record_in", "record_out",
"source_duration", "source_in", "source_out"
]
segment_attrs_data = {}
for attr_name in segment_attrs:
if not hasattr(segment, attr_name):
continue
attr = getattr(segment, attr_name)
segment_attrs_data[attr] = str(attr).replace("+", ":")
if attr in ["record_in", "record_out"]:
clip_data[attr_name] = attr.relative_frame
else:
clip_data[attr_name] = attr.frame
clip_data["segment_timecodes"] = segment_attrs_data
return clip_data
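# For reference, a hedged sketch of the dict returned by get_segment_attributes()
# (keys follow the code above; all values are purely illustrative):
#
#   {
#       "segment_name": "sh010_plate", "segment_comment": "",
#       "tape_name": "A001", "source_name": "A001C003",
#       "fpath": "/mnt/plates/sh010/sh010.0001.exr",
#       "width": 1920, "height": 1080, "track": 1, "track_name": "video_1",
#       "record_in": 10, "record_out": 109,
#       "segment_timecodes": {"record_in": "00:00:00:10", ...},
#       "PySegment": <flame.PySegment object>
#   }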

View file

@ -1,10 +1,9 @@
import os
from Qt import QtWidgets
from copy import deepcopy
from pprint import pformat
from openpype.tools.utils.host_tools import HostToolsHelper
menu_group_name = 'OpenPype'
default_flame_export_presets = {
@ -26,6 +25,17 @@ default_flame_export_presets = {
}
def callback_selection(selection, function):
import openpype.hosts.flame.api as opfapi
opfapi.CTX.selection = selection
print("Hook Selection: \n\t{}".format(
pformat({
index: (type(item), item.name)
for index, item in enumerate(opfapi.CTX.selection)})
))
function()
class _FlameMenuApp(object):
def __init__(self, framework):
self.name = self.__class__.__name__
@ -97,23 +107,12 @@ class FlameMenuProjectConnect(_FlameMenuApp):
if not self.flame:
return []
flame_project_name = self.flame_project_name
self.log.info("______ {} ______".format(flame_project_name))
menu = deepcopy(self.menu)
menu['actions'].append({
"name": "Workfiles ...",
"execute": lambda x: self.tools_helper.show_workfiles()
})
menu['actions'].append({
"name": "Create ...",
"execute": lambda x: self.tools_helper.show_creator()
})
menu['actions'].append({
"name": "Publish ...",
"execute": lambda x: self.tools_helper.show_publish()
})
menu['actions'].append({
"name": "Load ...",
"execute": lambda x: self.tools_helper.show_loader()
@ -128,9 +127,6 @@ class FlameMenuProjectConnect(_FlameMenuApp):
})
return menu
def get_projects(self, *args, **kwargs):
pass
def refresh(self, *args, **kwargs):
self.rescan()
@ -165,18 +161,17 @@ class FlameMenuTimeline(_FlameMenuApp):
if not self.flame:
return []
flame_project_name = self.flame_project_name
self.log.info("______ {} ______".format(flame_project_name))
menu = deepcopy(self.menu)
menu['actions'].append({
"name": "Create ...",
"execute": lambda x: self.tools_helper.show_creator()
"execute": lambda x: callback_selection(
x, self.tools_helper.show_creator)
})
menu['actions'].append({
"name": "Publish ...",
"execute": lambda x: self.tools_helper.show_publish()
"execute": lambda x: callback_selection(
x, self.tools_helper.show_publish)
})
menu['actions'].append({
"name": "Load ...",
@ -189,9 +184,6 @@ class FlameMenuTimeline(_FlameMenuApp):
return menu
def get_projects(self, *args, **kwargs):
pass
def refresh(self, *args, **kwargs):
self.rescan()

View file

@ -1,25 +1,33 @@
"""
Basic avalon integration
"""
import os
import contextlib
from avalon import api as avalon
from pyblish import api as pyblish
from openpype.api import Logger
from .lib import (
set_segment_data_marker,
set_publish_attribute,
maintained_segment_selection,
get_current_sequence,
reset_segment_selection
)
from .. import HOST_DIR
API_DIR = os.path.join(HOST_DIR, "api")
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = "AVALON_CONTAINERS"
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
def install():
from .. import (
PUBLISH_PATH,
LOAD_PATH,
CREATE_PATH,
INVENTORY_PATH
)
# TODO: install
# Disable all families except for the ones we explicitly want to see
family_states = [
"imagesequence",
@ -32,33 +40,24 @@ def install():
avalon.data["familiesStateDefault"] = False
avalon.data["familiesStateToggled"] = family_states
log.info("openpype.hosts.flame installed")
pyblish.register_host("flame")
pyblish.register_plugin_path(PUBLISH_PATH)
log.info("Registering Flame plug-ins..")
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
log.info("OpenPype Flame plug-ins registred ...")
# register callback for switching publishable
pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)
log.info("OpenPype Flame host installed ...")
def uninstall():
from .. import (
PUBLISH_PATH,
LOAD_PATH,
CREATE_PATH,
INVENTORY_PATH
)
# TODO: uninstall
pyblish.deregister_host("flame")
pyblish.deregister_plugin_path(PUBLISH_PATH)
log.info("Deregistering DaVinci Resovle plug-ins..")
log.info("Deregistering Flame plug-ins..")
pyblish.deregister_plugin_path(PUBLISH_PATH)
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
@ -66,6 +65,8 @@ def uninstall():
    # deregister callback for switching publishable
pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled)
log.info("OpenPype Flame host uninstalled ...")
def containerise(tl_segment,
name,
@ -97,32 +98,6 @@ def update_container(tl_segment, data=None):
# TODO: update_container
pass
@contextlib.contextmanager
def maintained_selection():
"""Maintain selection during context
Example:
>>> with maintained_selection():
... node['selected'].setValue(True)
>>> print(node['selected'].value())
False
"""
# TODO: maintained_selection + remove undo steps
try:
# do the operation
yield
finally:
pass
def reset_selection():
"""Deselect all selected nodes
"""
pass
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle node passthrough states on instance toggles."""
@ -150,6 +125,46 @@ def list_instances():
pass
def imprint(item, data=None):
# TODO: imprint
pass
def imprint(segment, data=None):
"""
Adding openpype data to Flame timeline segment.
Also including publish attribute into tag.
Arguments:
        segment (flame.PySegment): flame api object
        data (dict): Any data which needs to be imprinted
Examples:
data = {
'asset': 'sq020sh0280',
'family': 'render',
'subset': 'subsetMain'
}
"""
data = data or {}
set_segment_data_marker(segment, data)
# add publish attribute
set_publish_attribute(segment, True)
@contextlib.contextmanager
def maintained_selection():
import flame
from .lib import CTX
# check if segment is selected
if isinstance(CTX.selection[0], flame.PySegment):
sequence = get_current_sequence(CTX.selection)
try:
with maintained_segment_selection(sequence) as selected:
yield
finally:
# reset all selected clips
reset_segment_selection(sequence)
# select only original selection of segments
for segment in selected:
segment.selected = True

View file

@ -1,3 +1,646 @@
# Creator plugin functions
import re
from Qt import QtWidgets, QtCore
import openpype.api as openpype
from openpype import style
from . import (
lib as flib,
pipeline as fpipeline,
constants
)
from copy import deepcopy
log = openpype.Logger.get_logger(__name__)
class CreatorWidget(QtWidgets.QDialog):
# output items
items = dict()
_results_back = None
def __init__(self, name, info, ui_inputs, parent=None):
super(CreatorWidget, self).__init__(parent)
self.setObjectName(name)
self.setWindowFlags(
QtCore.Qt.Window
| QtCore.Qt.CustomizeWindowHint
| QtCore.Qt.WindowTitleHint
| QtCore.Qt.WindowCloseButtonHint
| QtCore.Qt.WindowStaysOnTopHint
)
self.setWindowTitle(name or "Pype Creator Input")
self.resize(500, 700)
# Where inputs and labels are set
self.content_widget = [QtWidgets.QWidget(self)]
top_layout = QtWidgets.QFormLayout(self.content_widget[0])
top_layout.setObjectName("ContentLayout")
top_layout.addWidget(Spacer(5, self))
# first add widget tag line
top_layout.addWidget(QtWidgets.QLabel(info))
# main dynamic layout
self.scroll_area = QtWidgets.QScrollArea(self, widgetResizable=True)
self.scroll_area.setVerticalScrollBarPolicy(
QtCore.Qt.ScrollBarAsNeeded)
self.scroll_area.setVerticalScrollBarPolicy(
QtCore.Qt.ScrollBarAlwaysOn)
self.scroll_area.setHorizontalScrollBarPolicy(
QtCore.Qt.ScrollBarAlwaysOff)
self.scroll_area.setWidgetResizable(True)
self.content_widget.append(self.scroll_area)
scroll_widget = QtWidgets.QWidget(self)
in_scroll_area = QtWidgets.QVBoxLayout(scroll_widget)
self.content_layout = [in_scroll_area]
# add preset data into input widget layout
self.items = self.populate_widgets(ui_inputs)
self.scroll_area.setWidget(scroll_widget)
# Confirmation buttons
btns_widget = QtWidgets.QWidget(self)
btns_layout = QtWidgets.QHBoxLayout(btns_widget)
cancel_btn = QtWidgets.QPushButton("Cancel")
btns_layout.addWidget(cancel_btn)
ok_btn = QtWidgets.QPushButton("Ok")
btns_layout.addWidget(ok_btn)
# Main layout of the dialog
main_layout = QtWidgets.QVBoxLayout(self)
main_layout.setContentsMargins(10, 10, 10, 10)
main_layout.setSpacing(0)
# adding content widget
for w in self.content_widget:
main_layout.addWidget(w)
main_layout.addWidget(btns_widget)
ok_btn.clicked.connect(self._on_ok_clicked)
cancel_btn.clicked.connect(self._on_cancel_clicked)
self.setStyleSheet(style.load_stylesheet())
@classmethod
def set_results_back(cls, value):
cls._results_back = value
@classmethod
def get_results_back(cls):
return cls._results_back
def _on_ok_clicked(self):
log.debug("ok is clicked: {}".format(self.items))
results_back = self._values(self.items)
self.set_results_back(results_back)
self.close()
def _on_cancel_clicked(self):
self.set_results_back(None)
self.close()
def showEvent(self, event):
self.set_results_back(None)
super(CreatorWidget, self).showEvent(event)
def _values(self, data, new_data=None):
new_data = new_data or dict()
for k, v in data.items():
new_data[k] = {
"target": None,
"value": None
}
if v["type"] == "dict":
new_data[k]["target"] = v["target"]
new_data[k]["value"] = self._values(v["value"])
if v["type"] == "section":
new_data.pop(k)
new_data = self._values(v["value"], new_data)
elif getattr(v["value"], "currentText", None):
new_data[k]["target"] = v["target"]
new_data[k]["value"] = v["value"].currentText()
elif getattr(v["value"], "isChecked", None):
new_data[k]["target"] = v["target"]
new_data[k]["value"] = v["value"].isChecked()
elif getattr(v["value"], "value", None):
new_data[k]["target"] = v["target"]
new_data[k]["value"] = v["value"].value()
elif getattr(v["value"], "text", None):
new_data[k]["target"] = v["target"]
new_data[k]["value"] = v["value"].text()
return new_data
def camel_case_split(self, text):
matches = re.finditer(
'.+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)', text)
return " ".join([str(m.group(0)).capitalize() for m in matches])
def create_row(self, layout, type_name, text, **kwargs):
# get type attribute from qwidgets
attr = getattr(QtWidgets, type_name)
# convert label text to normal capitalized text with spaces
label_text = self.camel_case_split(text)
        # assign the new text to the label widget
label = QtWidgets.QLabel(label_text)
label.setObjectName("LineLabel")
        # create attribute name with spaces stripped
attr_name = text.replace(" ", "")
# create attribute and assign default values
setattr(
self,
attr_name,
attr(parent=self))
# assign the created attribute to variable
item = getattr(self, attr_name)
for func, val in kwargs.items():
if getattr(item, func):
func_attr = getattr(item, func)
func_attr(val)
# add to layout
layout.addRow(label, item)
return item
def populate_widgets(self, data, content_layout=None):
"""
Populate widget from input dict.
        Each plugin has its own set of widget rows defined in a dictionary.
        Each row value should have the following keys: `type`, `target`,
        `label`, `order`, `value` and optionally also `toolTip`.
Args:
data (dict): widget rows or organized groups defined
by types `dict` or `section`
content_layout (QtWidgets.QFormLayout)[optional]: used when nesting
Returns:
dict: redefined data dict updated with created widgets
"""
content_layout = content_layout or self.content_layout[-1]
        # fix processing order by the defined `order` value
ordered_keys = list(data.keys())
for k, v in data.items():
try:
                # try to remove the key from its current index so it
                # can be re-inserted at the defined position
ordered_keys.pop(v["order"])
except IndexError:
pass
# add key into correct order
ordered_keys.insert(v["order"], k)
# process ordered
for k in ordered_keys:
v = data[k]
tool_tip = v.get("toolTip", "")
if v["type"] == "dict":
self.content_layout.append(QtWidgets.QWidget(self))
content_layout.addWidget(self.content_layout[-1])
self.content_layout[-1].setObjectName("sectionHeadline")
headline = QtWidgets.QVBoxLayout(self.content_layout[-1])
headline.addWidget(Spacer(20, self))
headline.addWidget(QtWidgets.QLabel(v["label"]))
# adding nested layout with label
self.content_layout.append(QtWidgets.QWidget(self))
self.content_layout[-1].setObjectName("sectionContent")
nested_content_layout = QtWidgets.QFormLayout(
self.content_layout[-1])
nested_content_layout.setObjectName("NestedContentLayout")
content_layout.addWidget(self.content_layout[-1])
# add nested key as label
data[k]["value"] = self.populate_widgets(
v["value"], nested_content_layout)
if v["type"] == "section":
self.content_layout.append(QtWidgets.QWidget(self))
content_layout.addWidget(self.content_layout[-1])
self.content_layout[-1].setObjectName("sectionHeadline")
headline = QtWidgets.QVBoxLayout(self.content_layout[-1])
headline.addWidget(Spacer(20, self))
headline.addWidget(QtWidgets.QLabel(v["label"]))
# adding nested layout with label
self.content_layout.append(QtWidgets.QWidget(self))
self.content_layout[-1].setObjectName("sectionContent")
nested_content_layout = QtWidgets.QFormLayout(
self.content_layout[-1])
nested_content_layout.setObjectName("NestedContentLayout")
content_layout.addWidget(self.content_layout[-1])
# add nested key as label
data[k]["value"] = self.populate_widgets(
v["value"], nested_content_layout)
elif v["type"] == "QLineEdit":
data[k]["value"] = self.create_row(
content_layout, "QLineEdit", v["label"],
setText=v["value"], setToolTip=tool_tip)
elif v["type"] == "QComboBox":
data[k]["value"] = self.create_row(
content_layout, "QComboBox", v["label"],
addItems=v["value"], setToolTip=tool_tip)
elif v["type"] == "QCheckBox":
data[k]["value"] = self.create_row(
content_layout, "QCheckBox", v["label"],
setChecked=v["value"], setToolTip=tool_tip)
elif v["type"] == "QSpinBox":
data[k]["value"] = self.create_row(
content_layout, "QSpinBox", v["label"],
setValue=v["value"], setMinimum=0,
setMaximum=100000, setToolTip=tool_tip)
return data
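# A hedged example of the `ui_inputs` structure that CreatorWidget and
# populate_widgets() above expect, following the keys described in the
# docstring; the key names and values are illustrative only:
#
#   ui_inputs = {
#       "clipRename": {
#           "type": "QCheckBox", "target": "ui",
#           "label": "Rename clips", "order": 0, "value": False
#       },
#       "shotHierarchy": {
#           "type": "section", "target": "ui",
#           "label": "Shot Hierarchy", "order": 1,
#           "value": {
#               "hierarchy": {
#                   "type": "QLineEdit", "target": "tag",
#                   "label": "Shot Parent Hierarchy", "order": 0,
#                   "value": "{_folder_}/{_sequence_}/{_track_}"
#               }
#           }
#       }
#   }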
class Spacer(QtWidgets.QWidget):
def __init__(self, height, *args, **kwargs):
super(self.__class__, self).__init__(*args, **kwargs)
self.setFixedHeight(height)
real_spacer = QtWidgets.QWidget(self)
real_spacer.setObjectName("Spacer")
real_spacer.setFixedHeight(height)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.addWidget(real_spacer)
self.setLayout(layout)
class Creator(openpype.Creator):
"""Creator class wrapper
"""
clip_color = constants.COLOR_MAP["purple"]
rename_index = None
def __init__(self, *args, **kwargs):
super(Creator, self).__init__(*args, **kwargs)
self.presets = openpype.get_current_project_settings()[
"flame"]["create"].get(self.__class__.__name__, {})
# adding basic current context flame objects
self.project = flib.get_current_project()
self.sequence = flib.get_current_sequence(flib.CTX.selection)
if (self.options or {}).get("useSelection"):
self.selected = flib.get_sequence_segments(self.sequence, True)
else:
self.selected = flib.get_sequence_segments(self.sequence)
def create_widget(self, *args, **kwargs):
widget = CreatorWidget(*args, **kwargs)
widget.exec_()
return widget.get_results_back()
class PublishableClip:
"""
Convert a segment to publishable instance
Args:
segment (flame.PySegment): flame api object
kwargs (optional): additional data needed for rename=True (presets)
Returns:
flame.PySegment: flame api object
"""
vertical_clip_match = {}
marker_data = {}
types = {
"shot": "shot",
"folder": "folder",
"episode": "episode",
"sequence": "sequence",
"track": "sequence",
}
    # parents search pattern
parents_search_patern = r"\{([a-z]*?)\}"
# default templates for non-ui use
rename_default = False
hierarchy_default = "{_folder_}/{_sequence_}/{_track_}"
clip_name_default = "shot_{_trackIndex_:0>3}_{_clipIndex_:0>4}"
subset_name_default = "[ track name ]"
review_track_default = "[ none ]"
subset_family_default = "plate"
count_from_default = 10
count_steps_default = 10
vertical_sync_default = False
driving_layer_default = ""
index_from_segment_default = False
def __init__(self, segment, **kwargs):
self.rename_index = kwargs["rename_index"]
self.family = kwargs["family"]
self.log = kwargs["log"]
# get main parent objects
self.current_segment = segment
sequence_name = flib.get_current_sequence([segment]).name.get_value()
self.sequence_name = str(sequence_name).replace(" ", "_")
self.clip_data = flib.get_segment_attributes(segment)
# segment (clip) main attributes
self.cs_name = self.clip_data["segment_name"]
self.cs_index = int(self.clip_data["segment"])
# get track name and index
self.track_index = int(self.clip_data["track"])
track_name = self.clip_data["track_name"]
self.track_name = str(track_name).replace(" ", "_").replace(
"*", "noname{}".format(self.track_index))
# adding tag.family into tag
if kwargs.get("avalon"):
self.marker_data.update(kwargs["avalon"])
# add publish attribute to marker data
self.marker_data.update({"publish": True})
# adding ui inputs if any
self.ui_inputs = kwargs.get("ui_inputs", {})
self.log.info("Inside of plugin: {}".format(
self.marker_data
))
# populate default data before we get other attributes
self._populate_segment_default_data()
# use all populated default data to create all important attributes
self._populate_attributes()
# create parents with correct types
self._create_parents()
def convert(self):
# solve segment data and add them to marker data
self._convert_to_marker_data()
# if track name is in review track name and also if driving track name
# is not in review track name: skip tag creation
if (self.track_name in self.review_layer) and (
self.driving_layer not in self.review_layer):
return
# deal with clip name
new_name = self.marker_data.pop("newClipName")
if self.rename:
# rename segment
self.current_segment.name = str(new_name)
self.marker_data["asset"] = str(new_name)
else:
self.marker_data["asset"] = self.cs_name
self.marker_data["hierarchyData"]["shot"] = self.cs_name
if self.marker_data["heroTrack"] and self.review_layer:
self.marker_data.update({"reviewTrack": self.review_layer})
else:
self.marker_data.update({"reviewTrack": None})
        # create openpype marker on the segment and add data
fpipeline.imprint(self.current_segment, self.marker_data)
return self.current_segment
def _populate_segment_default_data(self):
""" Populate default formating data from segment. """
self.current_segment_default_data = {
"_folder_": "shots",
"_sequence_": self.sequence_name,
"_track_": self.track_name,
"_clip_": self.cs_name,
"_trackIndex_": self.track_index,
"_clipIndex_": self.cs_index
}
def _populate_attributes(self):
""" Populate main object attributes. """
# segment frame range and parent track name for vertical sync check
self.clip_in = int(self.clip_data["record_in"])
self.clip_out = int(self.clip_data["record_out"])
# define ui inputs if non gui mode was used
self.shot_num = self.cs_index
self.log.debug(
"____ self.shot_num: {}".format(self.shot_num))
# ui_inputs data or default values if gui was not used
self.rename = self.ui_inputs.get(
"clipRename", {}).get("value") or self.rename_default
self.clip_name = self.ui_inputs.get(
"clipName", {}).get("value") or self.clip_name_default
self.hierarchy = self.ui_inputs.get(
"hierarchy", {}).get("value") or self.hierarchy_default
self.hierarchy_data = self.ui_inputs.get(
"hierarchyData", {}).get("value") or \
self.current_segment_default_data.copy()
self.index_from_segment = self.ui_inputs.get(
"segmentIndex", {}).get("value") or self.index_from_segment_default
self.count_from = self.ui_inputs.get(
"countFrom", {}).get("value") or self.count_from_default
self.count_steps = self.ui_inputs.get(
"countSteps", {}).get("value") or self.count_steps_default
self.subset_name = self.ui_inputs.get(
"subsetName", {}).get("value") or self.subset_name_default
self.subset_family = self.ui_inputs.get(
"subsetFamily", {}).get("value") or self.subset_family_default
self.vertical_sync = self.ui_inputs.get(
"vSyncOn", {}).get("value") or self.vertical_sync_default
self.driving_layer = self.ui_inputs.get(
"vSyncTrack", {}).get("value") or self.driving_layer_default
self.review_track = self.ui_inputs.get(
"reviewTrack", {}).get("value") or self.review_track_default
self.audio = self.ui_inputs.get(
"audio", {}).get("value") or False
# build subset name from layer name
if self.subset_name == "[ track name ]":
self.subset_name = self.track_name
# create subset for publishing
self.subset = self.subset_family + self.subset_name.capitalize()
def _replace_hash_to_expression(self, name, text):
""" Replace hash with number in correct padding. """
_spl = text.split("#")
_len = (len(_spl) - 1)
_repl = "{{{0}:0>{1}}}".format(name, _len)
return text.replace(("#" * _len), _repl)
def _convert_to_marker_data(self):
""" Convert internal data to marker data.
Populating the marker data into internal variable self.marker_data
"""
# define vertical sync attributes
hero_track = True
self.review_layer = ""
if self.vertical_sync and self.track_name not in self.driving_layer:
            # if the track is not the driving layer it is not the hero track
hero_track = False
# increasing steps by index of rename iteration
if not self.index_from_segment:
self.count_steps *= self.rename_index
hierarchy_formating_data = {}
hierarchy_data = deepcopy(self.hierarchy_data)
_data = self.current_segment_default_data.copy()
if self.ui_inputs:
# adding tag metadata from ui
for _k, _v in self.ui_inputs.items():
if _v["target"] == "tag":
self.marker_data[_k] = _v["value"]
# driving layer is set as positive match
if hero_track or self.vertical_sync:
# mark review layer
if self.review_track and (
self.review_track not in self.review_track_default):
                    # if review layer is defined and not the same as default
self.review_layer = self.review_track
                # calculate shot number
if self.index_from_segment:
# use clip index from timeline
self.shot_num = self.count_steps * self.cs_index
else:
if self.rename_index == 0:
self.shot_num = self.count_from
else:
self.shot_num = self.count_from + self.count_steps
# clip name sequence number
_data.update({"shot": self.shot_num})
            # solve `#` in text to a pythonic format expression
for _k, _v in hierarchy_data.items():
if "#" not in _v["value"]:
continue
hierarchy_data[
_k]["value"] = self._replace_hash_to_expression(
_k, _v["value"])
            # fill up pythonic expressions in hierarchy data
for k, _v in hierarchy_data.items():
hierarchy_formating_data[k] = _v["value"].format(**_data)
else:
# if no gui mode then just pass default data
hierarchy_formating_data = hierarchy_data
tag_hierarchy_data = self._solve_tag_hierarchy_data(
hierarchy_formating_data
)
tag_hierarchy_data.update({"heroTrack": True})
if hero_track and self.vertical_sync:
self.vertical_clip_match.update({
(self.clip_in, self.clip_out): tag_hierarchy_data
})
if not hero_track and self.vertical_sync:
# driving layer is set as negative match
for (_in, _out), hero_data in self.vertical_clip_match.items():
hero_data.update({"heroTrack": False})
if _in == self.clip_in and _out == self.clip_out:
data_subset = hero_data["subset"]
                    # add track index in case of duplicate names in hero data
if self.subset in data_subset:
hero_data["subset"] = self.subset + str(
self.track_index)
                    # if track name and subset name are the same, use this clip's subset
if self.subset_name == self.track_name:
hero_data["subset"] = self.subset
                    # assign hero data as the hierarchy data returned to the tag
tag_hierarchy_data = hero_data
# add data to return data dict
self.marker_data.update(tag_hierarchy_data)
def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
""" Solve marker data from hierarchy data and templates. """
# fill up clip name and hierarchy keys
hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
        # remove shot from hierarchy data: it is not needed anymore
hierarchy_formating_data.pop("shot")
return {
"newClipName": clip_name_filled,
"hierarchy": hierarchy_filled,
"parents": self.parents,
"hierarchyData": hierarchy_formating_data,
"subset": self.subset,
"family": self.subset_family,
"families": [self.family]
}
def _convert_to_entity(self, type, template):
""" Converting input key to key with type. """
# convert to entity type
entity_type = self.types.get(type, None)
assert entity_type, "Missing entity type for `{}`".format(
type
)
        # first collect formatting data to use for formatting the template
formating_data = {}
for _k, _v in self.hierarchy_data.items():
value = _v["value"].format(
**self.current_segment_default_data)
formating_data[_k] = value
return {
"entity_type": entity_type,
"entity_name": template.format(
**formating_data
)
}
def _create_parents(self):
""" Create parents and return it in list. """
self.parents = []
patern = re.compile(self.parents_search_patern)
par_split = [(patern.findall(t).pop(), t)
for t in self.hierarchy.split("/")]
for type, template in par_split:
parent = self._convert_to_entity(type, template)
self.parents.append(parent)
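# A hedged usage sketch of PublishableClip (keyword names follow __init__
# above; the family value and `ui_inputs_from_widget` are illustrative):
#
#   sequence = flib.get_current_sequence(flib.CTX.selection)
#   for index, segment in enumerate(flib.get_sequence_segments(sequence, True)):
#       PublishableClip(
#           segment,
#           rename_index=index,
#           family="clip",
#           log=log,
#           ui_inputs=ui_inputs_from_widget
#       ).convert()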
# Publishing plugin functions
# Loader plugin functions

View file

@ -9,26 +9,14 @@ import json
import xml.dom.minidom as minidom
from copy import deepcopy
import datetime
try:
from libwiretapPythonClientAPI import (
WireTapClientInit)
except ImportError:
flame_python_path = "/opt/Autodesk/flame_2021/python"
flame_exe_path = (
"/opt/Autodesk/flame_2021/bin/flame.app"
"/Contents/MacOS/startApp")
sys.path.append(flame_python_path)
from libwiretapPythonClientAPI import (
WireTapClientInit,
WireTapClientUninit,
WireTapNodeHandle,
WireTapServerHandle,
WireTapInt,
WireTapStr
)
from libwiretapPythonClientAPI import ( # noqa
WireTapClientInit,
WireTapClientUninit,
WireTapNodeHandle,
WireTapServerHandle,
WireTapInt,
WireTapStr
)
class WireTapCom(object):
@ -55,6 +43,9 @@ class WireTapCom(object):
self.volume_name = volume_name or "stonefs"
self.group_name = group_name or "staff"
# wiretap tools dir path
self.wiretap_tools_dir = os.getenv("OPENPYPE_WIRETAP_TOOLS")
# initialize WireTap client
WireTapClientInit()
@ -84,9 +75,11 @@ class WireTapCom(object):
workspace_name = kwargs.get("workspace_name")
color_policy = kwargs.get("color_policy")
self._project_prep(project_name)
self._set_project_settings(project_name, project_data)
self._set_project_colorspace(project_name, color_policy)
project_exists = self._project_prep(project_name)
if not project_exists:
self._set_project_settings(project_name, project_data)
self._set_project_colorspace(project_name, color_policy)
user_name = self._user_prep(user_name)
if workspace_name is None:
@ -169,18 +162,15 @@ class WireTapCom(object):
        # check if the volume exists
if self.volume_name not in volumes:
raise AttributeError(
("Volume '{}' does not exist '{}'").format(
("Volume '{}' does not exist in '{}'").format(
self.volume_name, volumes)
)
# form cmd arguments
project_create_cmd = [
os.path.join(
"/opt/Autodesk/",
"wiretap",
"tools",
"2021",
"wiretap_create_node",
self.wiretap_tools_dir,
"wiretap_create_node"
),
'-n',
os.path.join("/volumes", self.volume_name),
@ -202,6 +192,7 @@ class WireTapCom(object):
print(
"A new project '{}' is created.".format(project_name))
return project_exists
def _get_all_volumes(self):
"""Request all available volumens from WireTap
@ -431,11 +422,8 @@ class WireTapCom(object):
color_policy = color_policy or "Legacy"
project_colorspace_cmd = [
os.path.join(
"/opt/Autodesk/",
"wiretap",
"tools",
"2021",
"wiretap_duplicate_node",
self.wiretap_tools_dir,
"wiretap_duplicate_node"
),
"-s",
"/syncolor/policies/Autodesk/{}".format(color_policy),

View file

@ -5,17 +5,14 @@ from pprint import pformat
import atexit
import openpype
import avalon
import openpype.hosts.flame as opflame
flh = sys.modules[__name__]
flh._project = None
import openpype.hosts.flame.api as opfapi
def openpype_install():
"""Registering OpenPype in context
"""
openpype.install()
avalon.api.install(opflame)
avalon.api.install(opfapi)
print("Avalon registred hosts: {}".format(
avalon.api.registered_host()))
@ -48,30 +45,34 @@ sys.excepthook = exeption_handler
def cleanup():
"""Cleaning up Flame framework context
"""
if opflame.apps:
print('`{}` cleaning up apps:\n {}\n'.format(
__file__, pformat(opflame.apps)))
while len(opflame.apps):
app = opflame.apps.pop()
if opfapi.CTX.flame_apps:
print('`{}` cleaning up flame_apps:\n {}\n'.format(
__file__, pformat(opfapi.CTX.flame_apps)))
while len(opfapi.CTX.flame_apps):
app = opfapi.CTX.flame_apps.pop()
print('`{}` removing : {}'.format(__file__, app.name))
del app
opflame.apps = []
opfapi.CTX.flame_apps = []
if opflame.app_framework:
print('PYTHON\t: %s cleaning up' % opflame.app_framework.bundle_name)
opflame.app_framework.save_prefs()
opflame.app_framework = None
if opfapi.CTX.app_framework:
print('openpype\t: {} cleaning up'.format(
opfapi.CTX.app_framework.bundle_name)
)
opfapi.CTX.app_framework.save_prefs()
opfapi.CTX.app_framework = None
atexit.register(cleanup)
def load_apps():
"""Load available apps into Flame framework
"""Load available flame_apps into Flame framework
"""
opflame.apps.append(opflame.FlameMenuProjectConnect(opflame.app_framework))
opflame.apps.append(opflame.FlameMenuTimeline(opflame.app_framework))
opflame.app_framework.log.info("Apps are loaded")
opfapi.CTX.flame_apps.append(
opfapi.FlameMenuProjectConnect(opfapi.CTX.app_framework))
opfapi.CTX.flame_apps.append(
opfapi.FlameMenuTimeline(opfapi.CTX.app_framework))
opfapi.CTX.app_framework.log.info("Apps are loaded")
def project_changed_dict(info):
@ -89,10 +90,10 @@ def app_initialized(parent=None):
Args:
parent (obj, optional): Parent object. Defaults to None.
"""
opflame.app_framework = opflame.FlameAppFramework()
opfapi.CTX.app_framework = opfapi.FlameAppFramework()
print("{} initializing".format(
opflame.app_framework.bundle_name))
opfapi.CTX.app_framework.bundle_name))
load_apps()
@ -103,7 +104,7 @@ Initialisation of the hook is starting from here
First it needs to test if it can import the flame module.
This will happen only in case a project has been loaded.
Then `app_initialized` will load main Framework which will load
all menu objects as apps.
all menu objects as flame_apps.
"""
try:
@ -131,15 +132,15 @@ def _build_app_menu(app_name):
    # first find the app matching the given name
app = None
for _app in opflame.apps:
for _app in opfapi.CTX.flame_apps:
if _app.__class__.__name__ == app_name:
app = _app
if app:
menu.append(app.build_menu())
if opflame.app_framework:
menu_auto_refresh = opflame.app_framework.prefs_global.get(
if opfapi.CTX.app_framework:
menu_auto_refresh = opfapi.CTX.app_framework.prefs_global.get(
'menu_auto_refresh', {})
if menu_auto_refresh.get('timeline_menu', True):
try:
@ -163,8 +164,8 @@ def project_saved(project_name, save_time, is_auto_save):
save_time (str): time when it was saved
is_auto_save (bool): autosave is on or off
"""
if opflame.app_framework:
opflame.app_framework.save_prefs()
if opfapi.CTX.app_framework:
opfapi.CTX.app_framework.save_prefs()
def get_main_menu_custom_ui_actions():

View file

@ -5,7 +5,7 @@ Flame utils for syncing scripts
import os
import shutil
from openpype.api import Logger
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
def _sync_utility_scripts(env=None):
@ -27,6 +27,7 @@ def _sync_utility_scripts(env=None):
fsd_paths = [os.path.join(
HOST_DIR,
"api",
"utility_scripts"
)]
@ -74,10 +75,19 @@ def _sync_utility_scripts(env=None):
path = os.path.join(flame_shared_dir, _itm)
log.info("Removing `{path}`...".format(**locals()))
if os.path.isdir(path):
shutil.rmtree(path, onerror=None)
else:
os.remove(path)
try:
if os.path.isdir(path):
shutil.rmtree(path, onerror=None)
else:
os.remove(path)
except PermissionError as msg:
log.warning(
"Not able to remove: `{}`, Problem with: `{}`".format(
path,
msg
)
)
    # copy scripts into Flame's utility scripts dir
for dirpath, scriptlist in scripts.items():
@ -87,13 +97,22 @@ def _sync_utility_scripts(env=None):
src = os.path.join(dirpath, _script)
dst = os.path.join(flame_shared_dir, _script)
log.info("Copying `{src}` to `{dst}`...".format(**locals()))
if os.path.isdir(src):
shutil.copytree(
src, dst, symlinks=False,
ignore=None, ignore_dangling_symlinks=False
try:
if os.path.isdir(src):
shutil.copytree(
src, dst, symlinks=False,
ignore=None, ignore_dangling_symlinks=False
)
else:
shutil.copy2(src, dst)
except (PermissionError, FileExistsError) as msg:
log.warning(
"Not able to coppy to: `{}`, Problem with: `{}`".format(
dst,
msg
)
)
else:
shutil.copy2(src, dst)
def setup(env=None):

View file

@ -8,7 +8,7 @@ from openpype.api import Logger
# )
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
exported_projet_ext = ".otoc"

View file

@ -6,6 +6,7 @@ import socket
from openpype.lib import (
PreLaunchHook, get_openpype_username)
from openpype.hosts import flame as opflame
import openpype.hosts.flame.api as opfapi
import openpype
from pprint import pformat
@ -18,18 +19,18 @@ class FlamePrelaunch(PreLaunchHook):
"""
app_groups = ["flame"]
# todo: replace version number with avalon launch app version
flame_python_exe = "/opt/Autodesk/python/2021/bin/python2.7"
wtc_script_path = os.path.join(
opflame.HOST_DIR, "scripts", "wiretap_com.py")
opflame.HOST_DIR, "api", "scripts", "wiretap_com.py")
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.signature = "( {} )".format(self.__class__.__name__)
def execute(self):
_env = self.launch_context.env
self.flame_python_exe = _env["OPENPYPE_FLAME_PYTHON_EXEC"]
self.flame_pythonpath = _env["OPENPYPE_FLAME_PYTHONPATH"]
"""Hook entry method."""
project_doc = self.data["project_doc"]
user_name = get_openpype_username()
@ -55,12 +56,11 @@ class FlamePrelaunch(PreLaunchHook):
"FieldDominance": "PROGRESSIVE"
}
data_to_script = {
# from settings
"host_name": os.getenv("FLAME_WIRETAP_HOSTNAME") or hostname,
"volume_name": os.getenv("FLAME_WIRETAP_VOLUME"),
"group_name": os.getenv("FLAME_WIRETAP_GROUP"),
"host_name": _env.get("FLAME_WIRETAP_HOSTNAME") or hostname,
"volume_name": _env.get("FLAME_WIRETAP_VOLUME"),
"group_name": _env.get("FLAME_WIRETAP_GROUP"),
"color_policy": "ACES 1.1",
# from project
@ -68,14 +68,28 @@ class FlamePrelaunch(PreLaunchHook):
"user_name": user_name,
"project_data": project_data
}
self.log.info(pformat(dict(_env)))
self.log.info(pformat(data_to_script))
# add to python path from settings
self._add_pythonpath()
app_arguments = self._get_launch_arguments(data_to_script)
self.log.info(pformat(dict(self.launch_context.env)))
opflame.setup(self.launch_context.env)
opfapi.setup(self.launch_context.env)
self.launch_context.launch_args.extend(app_arguments)
def _add_pythonpath(self):
pythonpath = self.launch_context.env.get("PYTHONPATH")
        # separate it explicitly by `;`, which is what we use in settings
new_pythonpath = self.flame_pythonpath.split(os.pathsep)
new_pythonpath += pythonpath.split(os.pathsep)
self.launch_context.env["PYTHONPATH"] = os.pathsep.join(new_pythonpath)
def _get_launch_arguments(self, script_data):
# Dump data to string
dumped_script_data = json.dumps(script_data)
@ -83,7 +97,9 @@ class FlamePrelaunch(PreLaunchHook):
with make_temp_file(dumped_script_data) as tmp_json_path:
# Prepare subprocess arguments
args = [
self.flame_python_exe,
self.flame_python_exe.format(
**self.launch_context.env
),
self.wtc_script_path,
tmp_json_path
]
@ -91,7 +107,7 @@ class FlamePrelaunch(PreLaunchHook):
process_kwargs = {
"logger": self.log,
"env": {}
"env": self.launch_context.env
}
openpype.api.run_subprocess(args, **process_kwargs)

View file

@ -0,0 +1,657 @@
""" compatibility OpenTimelineIO 0.12.0 and newer
"""
import os
import re
import json
import logging
import opentimelineio as otio
from . import utils
import flame
from pprint import pformat
reload(utils) # noqa
log = logging.getLogger(__name__)
TRACK_TYPES = {
"video": otio.schema.TrackKind.Video,
"audio": otio.schema.TrackKind.Audio
}
MARKERS_COLOR_MAP = {
(1.0, 0.0, 0.0): otio.schema.MarkerColor.RED,
(1.0, 0.5, 0.0): otio.schema.MarkerColor.ORANGE,
(1.0, 1.0, 0.0): otio.schema.MarkerColor.YELLOW,
(1.0, 0.5, 1.0): otio.schema.MarkerColor.PINK,
(1.0, 1.0, 1.0): otio.schema.MarkerColor.WHITE,
(0.0, 1.0, 0.0): otio.schema.MarkerColor.GREEN,
(0.0, 1.0, 1.0): otio.schema.MarkerColor.CYAN,
(0.0, 0.0, 1.0): otio.schema.MarkerColor.BLUE,
(0.5, 0.0, 0.5): otio.schema.MarkerColor.PURPLE,
(0.5, 0.0, 1.0): otio.schema.MarkerColor.MAGENTA,
(0.0, 0.0, 0.0): otio.schema.MarkerColor.BLACK
}
MARKERS_INCLUDE = True
class CTX:
_fps = None
_tl_start_frame = None
project = None
clips = None
@classmethod
def set_fps(cls, new_fps):
if not isinstance(new_fps, float):
raise TypeError("Invalid fps type {}".format(type(new_fps)))
if cls._fps != new_fps:
cls._fps = new_fps
@classmethod
def get_fps(cls):
return cls._fps
@classmethod
def set_tl_start_frame(cls, number):
if not isinstance(number, int):
raise TypeError("Invalid timeline start frame type {}".format(
type(number)))
if cls._tl_start_frame != number:
cls._tl_start_frame = number
@classmethod
def get_tl_start_frame(cls):
return cls._tl_start_frame
def flatten(_list):
for item in _list:
if isinstance(item, (list, tuple)):
for sub_item in flatten(item):
yield sub_item
else:
yield item
def get_current_flame_project():
project = flame.project.current_project
return project
def create_otio_rational_time(frame, fps):
return otio.opentime.RationalTime(
float(frame),
float(fps)
)
def create_otio_time_range(start_frame, frame_duration, fps):
return otio.opentime.TimeRange(
start_time=create_otio_rational_time(start_frame, fps),
duration=create_otio_rational_time(frame_duration, fps)
)
def _get_metadata(item):
if hasattr(item, 'metadata'):
if not item.metadata:
return {}
return {key: value for key, value in dict(item.metadata)}
return {}
def create_time_effects(otio_clip, item):
# todo #2426: add retiming effects to export
# get all subtrack items
# subTrackItems = flatten(track_item.parent().subTrackItems())
# speed = track_item.playbackSpeed()
# otio_effect = None
# # retime on track item
# if speed != 1.:
# # make effect
# otio_effect = otio.schema.LinearTimeWarp()
# otio_effect.name = "Speed"
# otio_effect.time_scalar = speed
# otio_effect.metadata = {}
# # freeze frame effect
# if speed == 0.:
# otio_effect = otio.schema.FreezeFrame()
# otio_effect.name = "FreezeFrame"
# otio_effect.metadata = {}
# if otio_effect:
# # add otio effect to clip effects
# otio_clip.effects.append(otio_effect)
    # # loop through and get all Timewarps
# for effect in subTrackItems:
# if ((track_item not in effect.linkedItems())
# and (len(effect.linkedItems()) > 0)):
# continue
# # avoid all effect which are not TimeWarp and disabled
# if "TimeWarp" not in effect.name():
# continue
# if not effect.isEnabled():
# continue
# node = effect.node()
# name = node["name"].value()
# # solve effect class as effect name
# _name = effect.name()
# if "_" in _name:
# effect_name = re.sub(r"(?:_)[_0-9]+", "", _name) # more numbers
# else:
# effect_name = re.sub(r"\d+", "", _name) # one number
# metadata = {}
# # add knob to metadata
# for knob in ["lookup", "length"]:
# value = node[knob].value()
# animated = node[knob].isAnimated()
# if animated:
# value = [
# ((node[knob].getValueAt(i)) - i)
# for i in range(
# track_item.timelineIn(),
# track_item.timelineOut() + 1)
# ]
# metadata[knob] = value
# # make effect
# otio_effect = otio.schema.TimeEffect()
# otio_effect.name = name
# otio_effect.effect_name = effect_name
# otio_effect.metadata = metadata
# # add otio effect to clip effects
# otio_clip.effects.append(otio_effect)
pass
def _get_marker_color(flame_colour):
    # clamp colours to the closest half values
_flame_colour = [
(lambda x: round(x * 2) / 2)(c)
for c in flame_colour]
for color, otio_color_type in MARKERS_COLOR_MAP.items():
if _flame_colour == list(color):
return otio_color_type
return otio.schema.MarkerColor.RED
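# e.g. a Flame colour of (0.1, 0.9, 1.0) rounds to (0.0, 1.0, 1.0) and maps to
# otio.schema.MarkerColor.CYAN; any colour without a match falls back to RED.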
def _get_flame_markers(item):
output_markers = []
time_in = item.record_in.relative_frame
for marker in item.markers:
log.debug(marker)
start_frame = marker.location.get_value().relative_frame
start_frame = (start_frame - time_in) + 1
marker_data = {
"name": marker.name.get_value(),
"duration": marker.duration.get_value().relative_frame,
"comment": marker.comment.get_value(),
"start_frame": start_frame,
"colour": marker.colour.get_value()
}
output_markers.append(marker_data)
return output_markers
def create_otio_markers(otio_item, item):
markers = _get_flame_markers(item)
for marker in markers:
frame_rate = CTX.get_fps()
marked_range = otio.opentime.TimeRange(
start_time=otio.opentime.RationalTime(
marker["start_frame"],
frame_rate
),
duration=otio.opentime.RationalTime(
marker["duration"],
frame_rate
)
)
        # test whether the comment contains a json string
check_if_json = re.findall(
re.compile(r"[{:}]"),
marker["comment"]
)
# to identify this as json, at least 3 items in the list should
# be present ["{", ":", "}"]
metadata = {}
if len(check_if_json) >= 3:
# this is json string
try:
# capture exceptions which are related to strings only
metadata.update(
json.loads(marker["comment"])
)
except ValueError as msg:
log.error("Marker json conversion: {}".format(msg))
else:
metadata["comment"] = marker["comment"]
otio_marker = otio.schema.Marker(
name=marker["name"],
color=_get_marker_color(
marker["colour"]),
marked_range=marked_range,
metadata=metadata
)
otio_item.markers.append(otio_marker)
def create_otio_reference(clip_data):
metadata = _get_metadata(clip_data)
# get file info for path and start frame
frame_start = 0
fps = CTX.get_fps()
path = clip_data["fpath"]
reel_clip = None
match_reel_clip = [
clip for clip in CTX.clips
if clip["fpath"] == path
]
if match_reel_clip:
reel_clip = match_reel_clip.pop()
fps = reel_clip["fps"]
file_name = os.path.basename(path)
file_head, extension = os.path.splitext(file_name)
# get padding and other file infos
log.debug("_ path: {}".format(path))
    is_sequence = padding = utils.get_padding_from_path(path)
if is_sequence:
number = utils.get_frame_from_path(path)
file_head = file_name.split(number)[:-1]
frame_start = int(number)
frame_duration = clip_data["source_duration"]
if is_sequence:
metadata.update({
"isSequence": True,
"padding": padding
})
otio_ex_ref_item = None
if is_sequence:
    # if it is a file sequence, try to create `ImageSequenceReference`;
    # the installed OTIO might not support it, so fall back to the old way
try:
dirname = os.path.dirname(path)
otio_ex_ref_item = otio.schema.ImageSequenceReference(
target_url_base=dirname + os.sep,
name_prefix=file_head,
name_suffix=extension,
start_frame=frame_start,
frame_zero_padding=padding,
rate=fps,
available_range=create_otio_time_range(
frame_start,
frame_duration,
fps
)
)
except AttributeError:
pass
if not otio_ex_ref_item:
reformat_path = utils.get_reformated_path(path, padded=False)
# in case old OTIO or video file create `ExternalReference`
otio_ex_ref_item = otio.schema.ExternalReference(
target_url=reformat_path,
available_range=create_otio_time_range(
frame_start,
frame_duration,
fps
)
)
# add metadata to otio item
add_otio_metadata(otio_ex_ref_item, clip_data, **metadata)
return otio_ex_ref_item
def create_otio_clip(clip_data):
segment = clip_data["PySegment"]
# create media reference
media_reference = create_otio_reference(clip_data)
# calculate source in
first_frame = utils.get_frame_from_path(clip_data["fpath"]) or 0
source_in = int(clip_data["source_in"]) - int(first_frame)
    # create source range
source_range = create_otio_time_range(
source_in,
clip_data["record_duration"],
CTX.get_fps()
)
otio_clip = otio.schema.Clip(
name=clip_data["segment_name"],
source_range=source_range,
media_reference=media_reference
)
# Add markers
if MARKERS_INCLUDE:
create_otio_markers(otio_clip, segment)
return otio_clip
def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):
return otio.schema.Gap(
source_range=create_otio_time_range(
gap_start,
(clip_start - tl_start_frame) - gap_start,
fps
)
)
def get_clips_in_reels(project):
output_clips = []
project_desktop = project.current_workspace.desktop
for reel_group in project_desktop.reel_groups:
for reel in reel_group.reels:
for clip in reel.clips:
clip_data = {
"PyClip": clip,
"fps": float(str(clip.frame_rate)[:-4])
}
attrs = [
"name", "width", "height",
"ratio", "sample_rate", "bit_depth"
]
for attr in attrs:
val = getattr(clip, attr)
clip_data[attr] = val
version = clip.versions[-1]
track = version.tracks[-1]
for segment in track.segments:
segment_data = _get_segment_attributes(segment)
clip_data.update(segment_data)
output_clips.append(clip_data)
return output_clips
def _get_colourspace_policy():
output = {}
# get policies project path
policy_dir = "/opt/Autodesk/project/{}/synColor/policy".format(
CTX.project.name
)
log.debug(policy_dir)
policy_fp = os.path.join(policy_dir, "policy.cfg")
if not os.path.exists(policy_fp):
return output
with open(policy_fp) as file:
dict_conf = dict(line.strip().split(' = ', 1) for line in file)
output.update(
{"openpype.flame.{}".format(k): v for k, v in dict_conf.items()}
)
return output
def _create_otio_timeline(sequence):
metadata = _get_metadata(sequence)
# find colour policy files and add them to metadata
colorspace_policy = _get_colourspace_policy()
metadata.update(colorspace_policy)
metadata.update({
"openpype.timeline.width": int(sequence.width),
"openpype.timeline.height": int(sequence.height),
"openpype.timeline.pixelAspect": 1
})
rt_start_time = create_otio_rational_time(
CTX.get_tl_start_frame(), CTX.get_fps())
return otio.schema.Timeline(
name=str(sequence.name)[1:-1],
global_start_time=rt_start_time,
metadata=metadata
)
def create_otio_track(track_type, track_name):
return otio.schema.Track(
name=track_name,
kind=TRACK_TYPES[track_type]
)
def add_otio_gap(clip_data, otio_track, prev_out):
gap_length = clip_data["record_in"] - prev_out
if prev_out != 0:
gap_length -= 1
gap = otio.opentime.TimeRange(
duration=otio.opentime.RationalTime(
gap_length,
CTX.get_fps()
)
)
otio_gap = otio.schema.Gap(source_range=gap)
otio_track.append(otio_gap)
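# Worked example for add_otio_gap(): with a previous clip ending at
# record_out=100 and the next clip starting at record_in=111, the gap spans
# 111 - 100 - 1 = 10 frames; a clip starting at record_in=25 on an otherwise
# empty track (prev_out=0) yields a 25 frame gap (numbers are illustrative).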
def add_otio_metadata(otio_item, item, **kwargs):
metadata = _get_metadata(item)
# add additional metadata from kwargs
if kwargs:
metadata.update(kwargs)
# add metadata to otio item metadata
for key, value in metadata.items():
otio_item.metadata.update({key: value})
def _get_shot_tokens_values(clip, tokens):
old_value = None
output = {}
if not clip.shot_name:
return output
old_value = clip.shot_name.get_value()
for token in tokens:
clip.shot_name.set_value(token)
_key = re.sub("[ <>]", "", token)
try:
output[_key] = int(clip.shot_name.get_value())
except ValueError:
output[_key] = clip.shot_name.get_value()
clip.shot_name.set_value(old_value)
return output
def _get_segment_attributes(segment):
# log.debug(dir(segment))
if str(segment.name)[1:-1] == "":
return None
# Add timeline segment to tree
clip_data = {
"segment_name": segment.name.get_value(),
"segment_comment": segment.comment.get_value(),
"tape_name": segment.tape_name,
"source_name": segment.source_name,
"fpath": segment.file_path,
"PySegment": segment
}
# add all available shot tokens
shot_tokens = _get_shot_tokens_values(segment, [
"<colour space>", "<width>", "<height>", "<depth>",
])
clip_data.update(shot_tokens)
# populate shot source metadata
segment_attrs = [
"record_duration", "record_in", "record_out",
"source_duration", "source_in", "source_out"
]
segment_attrs_data = {}
for attr in segment_attrs:
if not hasattr(segment, attr):
continue
_value = getattr(segment, attr)
segment_attrs_data[attr] = str(_value).replace("+", ":")
if attr in ["record_in", "record_out"]:
clip_data[attr] = _value.relative_frame
else:
clip_data[attr] = _value.frame
clip_data["segment_timecodes"] = segment_attrs_data
return clip_data
def create_otio_timeline(sequence):
log.info(dir(sequence))
log.info(sequence.attributes)
CTX.project = get_current_flame_project()
CTX.clips = get_clips_in_reels(CTX.project)
log.debug(pformat(
CTX.clips
))
# get current timeline
CTX.set_fps(
float(str(sequence.frame_rate)[:-4]))
tl_start_frame = utils.timecode_to_frames(
str(sequence.start_time).replace("+", ":"),
CTX.get_fps()
)
CTX.set_tl_start_frame(tl_start_frame)
# convert timeline to otio
otio_timeline = _create_otio_timeline(sequence)
# create otio tracks and clips
for ver in sequence.versions:
for track in ver.tracks:
if len(track.segments) == 0 and track.hidden:
# skip empty hidden tracks instead of aborting the whole export
continue
# convert track to otio
otio_track = create_otio_track(
"video", str(track.name)[1:-1])
all_segments = []
for segment in track.segments:
clip_data = _get_segment_attributes(segment)
if not clip_data:
continue
all_segments.append(clip_data)
segments_ordered = {
itemindex: clip_data
for itemindex, clip_data in enumerate(
all_segments)
}
log.debug("_ segments_ordered: {}".format(
pformat(segments_ordered)
))
if not segments_ordered:
continue
for itemindex, segment_data in segments_ordered.items():
log.debug("_ itemindex: {}".format(itemindex))
# Add Gap if needed
if itemindex == 0:
# if this is the first item on the track, use it
# as the previous item
prev_item = segment_data
else:
# get previous item
prev_item = segments_ordered[itemindex - 1]
log.debug("_ segment_data: {}".format(segment_data))
# calculate clip frame range difference from each other
clip_diff = segment_data["record_in"] - prev_item["record_out"]
# add gap if first track item is not starting
# at first timeline frame
if itemindex == 0 and segment_data["record_in"] > 0:
add_otio_gap(segment_data, otio_track, 0)
# or add gap if consecutive track items have a
# frame range gap between them
elif itemindex and clip_diff != 1:
add_otio_gap(
segment_data, otio_track, prev_item["record_out"])
# create otio clip and add it to track
otio_clip = create_otio_clip(segment_data)
otio_track.append(otio_clip)
log.debug("_ otio_clip: {}".format(otio_clip))
# create otio marker
# create otio metadata
# add track to otio timeline
otio_timeline.tracks.append(otio_track)
return otio_timeline
def write_to_file(otio_timeline, path):
otio.adapters.write_to_file(otio_timeline, path)
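For reference, a minimal driver sketch for the exporter above; it mirrors the CollectTestSelection plugin further down and uses the same module paths, so treat it as illustrative rather than canonical:
import os
import tempfile

import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export

# export the currently selected sequence to a temporary .otio file
sequence = opfapi.get_current_sequence(opfapi.CTX.selection)
otio_timeline = flame_export.create_otio_timeline(sequence)
export_path = os.path.join(tempfile.mkdtemp(), "timeline.otio")
flame_export.write_to_file(otio_timeline, export_path)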

View file

@ -0,0 +1,95 @@
import re
import opentimelineio as otio
import logging
log = logging.getLogger(__name__)
def timecode_to_frames(timecode, framerate):
rt = otio.opentime.from_timecode(timecode, framerate)
return int(otio.opentime.to_frames(rt))
def frames_to_timecode(frames, framerate):
rt = otio.opentime.from_frames(frames, framerate)
return otio.opentime.to_timecode(rt)
def frames_to_seconds(frames, framerate):
rt = otio.opentime.from_frames(frames, framerate)
return otio.opentime.to_seconds(rt)
def get_reformated_path(path, padded=True):
"""
Return path with the frame number replaced by a padding expression
Args:
path (str): path url or simple file name
padded (bool): return padded expression (%04d) instead of %d
Returns:
str: reformatted path
Example:
get_reformated_path("plate.1001.exr") > plate.%04d.exr
"""
padding = get_padding_from_path(path)
found = get_frame_from_path(path)
if not found:
log.info("Path is not sequence: {}".format(path))
return path
if padded:
path = path.replace(found, "%0{}d".format(padding))
else:
path = path.replace(found, "%d")
return path
def get_padding_from_path(path):
"""
Return padding number from Flame path style
Args:
path (str): path url or simple file name
Returns:
int: padding number
Example:
get_padding_from_path("plate.0001.exr") > 4
"""
found = get_frame_from_path(path)
if found:
return len(found)
else:
return None
def get_frame_from_path(path):
"""
Return sequence number from Flame path style
Args:
path (str): path url or simple file name
Returns:
str: sequence frame number as a string
Example:
get_frame_from_path("plate.0001.exr") > "0001"
"""
frame_pattern = re.compile(r"[._](\d+)[.]")
found = re.findall(frame_pattern, path)
if found:
return found.pop()
else:
return None
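A short usage sketch of the helpers above; the import path is an assumption based on the package layout used elsewhere in this changeset, and the values are illustrative:
from openpype.hosts.flame.otio.utils import (  # assumed module path
    timecode_to_frames,
    get_frame_from_path,
    get_padding_from_path,
    get_reformated_path,
)

print(timecode_to_frames("00:00:10:00", 24.0))              # 240
print(get_frame_from_path("plate.0101.exr"))                # "0101"
print(get_padding_from_path("plate.0101.exr"))              # 4
print(get_reformated_path("plate.0101.exr"))                # "plate.%04d.exr"
print(get_reformated_path("plate.0101.exr", padded=False))  # "plate.%d.exr"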

View file

@ -0,0 +1,275 @@
from copy import deepcopy
import openpype.hosts.flame.api as opfapi
class CreateShotClip(opfapi.Creator):
"""Publishable clip"""
label = "Create Publishable Clip"
family = "clip"
icon = "film"
defaults = ["Main"]
presets = None
def process(self):
# Create a copy of object attributes that are modified during `process`
presets = deepcopy(self.presets)
gui_inputs = self.get_gui_inputs()
# get key pairs from presets and match them to ui inputs
for k, v in gui_inputs.items():
if v["type"] in ("dict", "section"):
# nested dictionary (only one level allowed
# for sections and dict)
for _k, _v in v["value"].items():
if presets.get(_k):
gui_inputs[k][
"value"][_k]["value"] = presets[_k]
if presets.get(k):
gui_inputs[k]["value"] = presets[k]
# open widget for plugins inputs
results_back = self.create_widget(
"Pype publish attributes creator",
"Define sequential rename and fill hierarchy data.",
gui_inputs
)
if len(self.selected) < 1:
return
if not results_back:
print("Operation aborted")
return
# get ui output for track name for vertical sync
v_sync_track = results_back["vSyncTrack"]["value"]
# sort selected segments so items on the vSync hero track come first
sorted_selected_segments = []
unsorted_selected_segments = []
for _segment in self.selected:
if _segment.parent.name.get_value() in v_sync_track:
sorted_selected_segments.append(_segment)
else:
unsorted_selected_segments.append(_segment)
sorted_selected_segments.extend(unsorted_selected_segments)
kwargs = {
"log": self.log,
"ui_inputs": results_back,
"avalon": self.data,
"family": self.data["family"]
}
for i, segment in enumerate(sorted_selected_segments):
kwargs["rename_index"] = i
# convert segment to publishable clip
opfapi.PublishableClip(segment, **kwargs).convert()
def get_gui_inputs(self):
gui_tracks = self._get_video_track_names(
opfapi.get_current_sequence(opfapi.CTX.selection)
)
return deepcopy({
"renameHierarchy": {
"type": "section",
"label": "Shot Hierarchy And Rename Settings",
"target": "ui",
"order": 0,
"value": {
"hierarchy": {
"value": "{folder}/{sequence}",
"type": "QLineEdit",
"label": "Shot Parent Hierarchy",
"target": "tag",
"toolTip": "Parents folder for shot root folder, Template filled with `Hierarchy Data` section", # noqa
"order": 0},
"clipRename": {
"value": False,
"type": "QCheckBox",
"label": "Rename clips",
"target": "ui",
"toolTip": "Renaming selected clips on fly", # noqa
"order": 1},
"clipName": {
"value": "{sequence}{shot}",
"type": "QLineEdit",
"label": "Clip Name Template",
"target": "ui",
"toolTip": "template for creating shot namespaused for renaming (use rename: on)", # noqa
"order": 2},
"segmentIndex": {
"value": True,
"type": "QCheckBox",
"label": "Segment index",
"target": "ui",
"toolTip": "Take number from segment index", # noqa
"order": 3},
"countFrom": {
"value": 10,
"type": "QSpinBox",
"label": "Count sequence from",
"target": "ui",
"toolTip": "Set when the sequence number stafrom", # noqa
"order": 4},
"countSteps": {
"value": 10,
"type": "QSpinBox",
"label": "Stepping number",
"target": "ui",
"toolTip": "What number is adding every new step", # noqa
"order": 5},
}
},
"hierarchyData": {
"type": "dict",
"label": "Shot Template Keywords",
"target": "tag",
"order": 1,
"value": {
"folder": {
"value": "shots",
"type": "QLineEdit",
"label": "{folder}",
"target": "tag",
"toolTip": "Name of folder used for root of generated shots.\nUsable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)", # noqa
"order": 0},
"episode": {
"value": "ep01",
"type": "QLineEdit",
"label": "{episode}",
"target": "tag",
"toolTip": "Name of episode.\nUsable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)", # noqa
"order": 1},
"sequence": {
"value": "sq01",
"type": "QLineEdit",
"label": "{sequence}",
"target": "tag",
"toolTip": "Name of sequence of shots.\nUsable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)", # noqa
"order": 2},
"track": {
"value": "{_track_}",
"type": "QLineEdit",
"label": "{track}",
"target": "tag",
"toolTip": "Name of sequence of shots.\nUsable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)", # noqa
"order": 3},
"shot": {
"value": "sh###",
"type": "QLineEdit",
"label": "{shot}",
"target": "tag",
"toolTip": "Name of shot. `#` is converted to paded number. \nAlso could be used with usable tokens:\n\t{_clip_}: name of used clip\n\t{_track_}: name of parent track layer\n\t{_sequence_}: name of parent sequence (timeline)", # noqa
"order": 4}
}
},
"verticalSync": {
"type": "section",
"label": "Vertical Synchronization Of Attributes",
"target": "ui",
"order": 2,
"value": {
"vSyncOn": {
"value": True,
"type": "QCheckBox",
"label": "Enable Vertical Sync",
"target": "ui",
"toolTip": "Switch on if you want clips above each other to share its attributes", # noqa
"order": 0},
"vSyncTrack": {
"value": gui_tracks, # noqa
"type": "QComboBox",
"label": "Hero track",
"target": "ui",
"toolTip": "Select driving track name which should be hero for all others", # noqa
"order": 1}
}
},
"publishSettings": {
"type": "section",
"label": "Publish Settings",
"target": "ui",
"order": 3,
"value": {
"subsetName": {
"value": ["[ track name ]", "main", "bg", "fg", "bg",
"animatic"],
"type": "QComboBox",
"label": "Subset Name",
"target": "ui",
"toolTip": "chose subset name patern, if [ track name ] is selected, name of track layer will be used", # noqa
"order": 0},
"subsetFamily": {
"value": ["plate", "take"],
"type": "QComboBox",
"label": "Subset Family",
"target": "ui", "toolTip": "What use of this subset is for", # noqa
"order": 1},
"reviewTrack": {
"value": ["< none >"] + gui_tracks,
"type": "QComboBox",
"label": "Use Review Track",
"target": "ui",
"toolTip": "Generate preview videos on fly, if `< none >` is defined nothing will be generated.", # noqa
"order": 2},
"audio": {
"value": False,
"type": "QCheckBox",
"label": "Include audio",
"target": "tag",
"toolTip": "Process subsets with corresponding audio", # noqa
"order": 3},
"sourceResolution": {
"value": False,
"type": "QCheckBox",
"label": "Source resolution",
"target": "tag",
"toolTip": "Is resloution taken from timeline or source?", # noqa
"order": 4},
}
},
"frameRangeAttr": {
"type": "section",
"label": "Shot Attributes",
"target": "ui",
"order": 4,
"value": {
"workfileFrameStart": {
"value": 1001,
"type": "QSpinBox",
"label": "Workfiles Start Frame",
"target": "tag",
"toolTip": "Set workfile starting frame number", # noqa
"order": 0
},
"handleStart": {
"value": 0,
"type": "QSpinBox",
"label": "Handle Start",
"target": "tag",
"toolTip": "Handle at start of clip", # noqa
"order": 1
},
"handleEnd": {
"value": 0,
"type": "QSpinBox",
"label": "Handle End",
"target": "tag",
"toolTip": "Handle at end of clip", # noqa
"order": 2
}
}
}
})
def _get_video_track_names(self, sequence):
track_names = []
for ver in sequence.versions:
for track in ver.tracks:
track_names.append(track.name.get_value())
return track_names
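For illustration, a minimal standalone sketch of the preset-matching loop from process() above, run against a trimmed copy of the structure returned by get_gui_inputs(); the preset keys are only examples:
from copy import deepcopy

presets = {"workfileFrameStart": 1001, "handleStart": 10}
gui_inputs = deepcopy({
    "frameRangeAttr": {
        "type": "section",
        "value": {
            "workfileFrameStart": {"value": 1001},
            "handleStart": {"value": 0},
        },
    },
})

# same matching logic as in CreateShotClip.process()
for k, v in gui_inputs.items():
    if v["type"] in ("dict", "section"):
        for _k, _v in v["value"].items():
            if presets.get(_k):
                gui_inputs[k]["value"][_k]["value"] = presets[_k]
    if presets.get(k):
        gui_inputs[k]["value"] = presets[k]

print(gui_inputs["frameRangeAttr"]["value"]["handleStart"]["value"])  # 10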

View file

@ -0,0 +1,63 @@
import os
import pyblish.api
import tempfile
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export as otio_export
import opentimelineio as otio
from pprint import pformat
reload(otio_export) # noqa
@pyblish.api.log
class CollectTestSelection(pyblish.api.ContextPlugin):
"""testing selection sharing
"""
order = pyblish.api.CollectorOrder
label = "test selection"
hosts = ["flame"]
def process(self, context):
self.log.info(
"Active Selection: {}".format(opfapi.CTX.selection))
sequence = opfapi.get_current_sequence(opfapi.CTX.selection)
self.test_imprint_data(sequence)
self.test_otio_export(sequence)
def test_otio_export(self, sequence):
test_dir = os.path.normpath(
tempfile.mkdtemp(prefix="test_pyblish_tmp_")
)
export_path = os.path.normpath(
os.path.join(
test_dir, "otio_timeline_export.otio"
)
)
otio_timeline = otio_export.create_otio_timeline(sequence)
otio_export.write_to_file(
otio_timeline, export_path
)
read_timeline_otio = otio.adapters.read_from_file(export_path)
if otio_timeline != read_timeline_otio:
raise Exception("Exported timeline is different from original")
self.log.info(pformat(otio_timeline))
self.log.info("Otio exported to: {}".format(export_path))
def test_imprint_data(self, sequence):
with opfapi.maintained_segment_selection(sequence) as sel_segments:
for segment in sel_segments:
if str(segment.name)[1:-1] == "":
continue
self.log.debug("Segment with OpenPypeData: {}".format(
segment.name))
opfapi.imprint(segment, {
'asset': segment.name.get_value(),
'family': 'render',
'subset': 'subsetMain'
})

View file

@ -45,19 +45,14 @@ class CreateHDA(plugin.Creator):
if (self.options or {}).get("useSelection") and self.nodes:
# if we have `use selection` enabled and we have some
# selected nodes ...
to_hda = self.nodes[0]
if len(self.nodes) > 1:
# if there is more than one node, create subnet first
subnet = out.createNode(
"subnet", node_name="{}_subnet".format(self.name))
to_hda = subnet
else:
# in case of no selection, just create subnet node
subnet = out.createNode(
"subnet", node_name="{}_subnet".format(self.name))
subnet = out.collapseIntoSubnet(
self.nodes,
subnet_name="{}_subnet".format(self.name))
subnet.moveToGoodPosition()
to_hda = subnet
else:
to_hda = out.createNode(
"subnet", node_name="{}_subnet".format(self.name))
if not to_hda.type().definition():
# if the node type has no definition, it is not a user-created
# hda. We test if an hda can be created from the node.
@ -69,13 +64,12 @@ class CreateHDA(plugin.Creator):
name=subset_name,
hda_file_name="$HIP/{}.hda".format(subset_name)
)
hou.moveNodesTo(self.nodes, hda_node)
hda_node.layoutChildren()
elif self._check_existing(subset_name):
raise plugin.OpenPypeCreatorError(
("subset {} is already published with different HDA"
"definition.").format(subset_name))
else:
if self._check_existing(subset_name):
raise plugin.OpenPypeCreatorError(
("subset {} is already published with different HDA"
"definition.").format(subset_name))
hda_node = to_hda
hda_node.setName(subset_name)

View file

@ -1,6 +1,6 @@
from avalon import api
from avalon.houdini import pipeline, lib
from avalon.houdini import pipeline
class AbcLoader(api.Loader):
@ -25,16 +25,9 @@ class AbcLoader(api.Loader):
# Get the root node
obj = hou.node("/obj")
# Create a unique name
counter = 1
# Define node name
namespace = namespace if namespace else context["asset"]["name"]
formatted = "{}_{}".format(namespace, name) if namespace else name
node_name = "{0}_{1:03d}".format(formatted, counter)
children = lib.children_as_string(hou.node("/obj"))
while node_name in children:
counter += 1
node_name = "{0}_{1:03d}".format(formatted, counter)
node_name = "{}_{}".format(namespace, name) if namespace else name
# Create a new geo node
container = obj.createNode("geo", node_name=node_name)

View file

@ -1,5 +1,5 @@
from avalon import api
from avalon.houdini import pipeline, lib
from avalon.houdini import pipeline
ARCHIVE_EXPRESSION = ('__import__("_alembic_hom_extensions")'
@ -97,18 +97,9 @@ class CameraLoader(api.Loader):
# Get the root node
obj = hou.node("/obj")
# Create a unique name
counter = 1
asset_name = context["asset"]["name"]
namespace = namespace or asset_name
formatted = "{}_{}".format(namespace, name) if namespace else name
node_name = "{0}_{1:03d}".format(formatted, counter)
children = lib.children_as_string(hou.node("/obj"))
while node_name in children:
counter += 1
node_name = "{0}_{1:03d}".format(formatted, counter)
# Define node name
namespace = namespace if namespace else context["asset"]["name"]
node_name = "{}_{}".format(namespace, name) if namespace else name
# Create an archive node
container = self.create_and_connect(obj, "alembicarchive", node_name)

View file

@ -37,7 +37,7 @@ class CollectFrames(pyblish.api.InstancePlugin):
# Check if frames are bigger than 1 (file collection)
# override the result
if end_frame - start_frame > 1:
if end_frame - start_frame > 0:
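# e.g. a 1001-1002 render (end - start == 1) is now also
# collected as a file sequence instead of a single file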
result = self.create_file_list(
match, int(start_frame), int(end_frame)
)

View file

@ -313,13 +313,7 @@ def attribute_values(attr_values):
"""
# NOTE(antirotor): this didn't work for some reason for Yeti attributes
# original = [(attr, cmds.getAttr(attr)) for attr in attr_values]
original = []
for attr in attr_values:
type = cmds.getAttr(attr, type=True)
value = cmds.getAttr(attr)
original.append((attr, str(value) if type == "string" else value))
original = [(attr, cmds.getAttr(attr)) for attr in attr_values]
try:
for attr, value in attr_values.items():
if isinstance(value, string_types):
@ -331,6 +325,12 @@ def attribute_values(attr_values):
for attr, value in original:
if isinstance(value, string_types):
cmds.setAttr(attr, value, type="string")
elif value is None and cmds.getAttr(attr, type=True) == "string":
# In some cases the maya.cmds.getAttr command returns None
# for string attributes, but that value cannot be assigned back.
# Note: After setting it once to "" it will then return ""
# instead of None. So this would only happen once.
cmds.setAttr(attr, "", type="string")
else:
cmds.setAttr(attr, value)
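For context, a minimal usage sketch of the attribute_values() context manager patched above; the import path is an assumption and a running Maya session is required:
from maya import cmds
from openpype.hosts.maya.api.lib import attribute_values  # assumed module path

# temporarily override an attribute; the original value (including string
# attributes that getAttr may report as None) is restored on exit
with attribute_values({"perspShape.focalLength": 50.0}):
    print(cmds.getAttr("perspShape.focalLength"))  # 50.0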
@ -745,6 +745,33 @@ def namespaced(namespace, new=True):
cmds.namespace(set=original)
@contextlib.contextmanager
def maintained_selection_api():
"""Maintain selection using the Maya Python API.
Warning: This is *not* added to the undo stack.
"""
original = om.MGlobal.getActiveSelectionList()
try:
yield
finally:
om.MGlobal.setActiveSelectionList(original)
@contextlib.contextmanager
def tool(context):
"""Set a tool context during the context manager.
"""
original = cmds.currentCtx()
try:
cmds.setToolTo(context)
yield
finally:
cmds.setToolTo(original)
def polyConstraint(components, *args, **kwargs):
"""Return the list of *components* with the constraints applied.
@ -763,17 +790,25 @@ def polyConstraint(components, *args, **kwargs):
kwargs.pop('mode', None)
with no_undo(flush=False):
with maya.maintained_selection():
# Apply constraint using mode=2 (current and next) so
# it applies to the selection made before it; because just
# a `maya.cmds.select()` call will not trigger the constraint.
with reset_polySelectConstraint():
cmds.select(components, r=1, noExpand=True)
cmds.polySelectConstraint(*args, mode=2, **kwargs)
result = cmds.ls(selection=True)
cmds.select(clear=True)
return result
# Reverting selection to the original selection using
# `maya.cmds.select` can be slow in rare cases where previously
# `maya.cmds.polySelectConstraint` had set constrain to "All and Next"
# and the "Random" setting was activated. To work around this we
# revert to the original selection using the Maya API. This is safe
# since we're not generating any undo change anyway.
with tool("selectSuperContext"):
# Selection can be very slow when in a manipulator mode.
# So we force the selection context which is fast.
with maintained_selection_api():
# Apply constraint using mode=2 (current and next) so
# it applies to the selection made before it; because just
# a `maya.cmds.select()` call will not trigger the constraint.
with reset_polySelectConstraint():
cmds.select(components, r=1, noExpand=True)
cmds.polySelectConstraint(*args, mode=2, **kwargs)
result = cmds.ls(selection=True)
cmds.select(clear=True)
return result
@contextlib.contextmanager

View file

@ -100,6 +100,13 @@ class ReferenceLoader(api.Loader):
"offset",
label="Position Offset",
help="Offset loaded models for easier selection."
),
qargparse.Boolean(
"attach_to_root",
label="Group imported asset",
default=True,
help="Should a group be created to encapsulate"
" imported representation ?"
)
]

View file

@ -13,10 +13,14 @@ class CameraWindow(QtWidgets.QDialog):
self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint)
self.camera = None
self.static_image_plane = False
self.show_in_all_views = False
self.widgets = {
"label": QtWidgets.QLabel("Select camera for image plane."),
"list": QtWidgets.QListWidget(),
"staticImagePlane": QtWidgets.QCheckBox(),
"showInAllViews": QtWidgets.QCheckBox(),
"warning": QtWidgets.QLabel("No cameras selected!"),
"buttons": QtWidgets.QWidget(),
"okButton": QtWidgets.QPushButton("Ok"),
@ -31,6 +35,9 @@ class CameraWindow(QtWidgets.QDialog):
for camera in cameras:
self.widgets["list"].addItem(camera)
self.widgets["staticImagePlane"].setText("Make Image Plane Static")
self.widgets["showInAllViews"].setText("Show Image Plane in All Views")
# Build buttons.
layout = QtWidgets.QHBoxLayout(self.widgets["buttons"])
layout.addWidget(self.widgets["okButton"])
@ -40,6 +47,8 @@ class CameraWindow(QtWidgets.QDialog):
layout = QtWidgets.QVBoxLayout(self)
layout.addWidget(self.widgets["label"])
layout.addWidget(self.widgets["list"])
layout.addWidget(self.widgets["staticImagePlane"])
layout.addWidget(self.widgets["showInAllViews"])
layout.addWidget(self.widgets["buttons"])
layout.addWidget(self.widgets["warning"])
@ -54,6 +63,8 @@ class CameraWindow(QtWidgets.QDialog):
if self.camera is None:
self.widgets["warning"].setVisible(True)
return
self.show_in_all_views = self.widgets["showInAllViews"].isChecked()
self.static_image_plane = self.widgets["staticImagePlane"].isChecked()
self.close()
@ -65,15 +76,15 @@ class CameraWindow(QtWidgets.QDialog):
class ImagePlaneLoader(api.Loader):
"""Specific loader of plate for image planes on selected camera."""
families = ["plate", "render"]
families = ["image", "plate", "render"]
label = "Load imagePlane."
representations = ["mov", "exr", "preview", "png"]
icon = "image"
color = "orange"
def load(self, context, name, namespace, data):
def load(self, context, name, namespace, data, options=None):
import pymel.core as pm
new_nodes = []
image_plane_depth = 1000
asset = context['asset']['name']
@ -85,17 +96,23 @@ class ImagePlaneLoader(api.Loader):
# Get camera from user selection.
camera = None
default_cameras = [
"frontShape", "perspShape", "sideShape", "topShape"
]
cameras = [
x for x in pm.ls(type="camera") if x.name() not in default_cameras
]
camera_names = {x.getParent().name(): x for x in cameras}
camera_names["Create new camera."] = "create_camera"
window = CameraWindow(camera_names.keys())
window.exec_()
camera = camera_names[window.camera]
is_static_image_plane = None
is_in_all_views = None
if data:
camera = pm.PyNode(data.get("camera"))
is_static_image_plane = data.get("static_image_plane")
is_in_all_views = data.get("in_all_views")
if not camera:
cameras = pm.ls(type="camera")
camera_names = {x.getParent().name(): x for x in cameras}
camera_names["Create new camera."] = "create_camera"
window = CameraWindow(camera_names.keys())
window.exec_()
camera = camera_names[window.camera]
is_static_image_plane = window.static_image_plane
is_in_all_views = window.show_in_all_views
if camera == "create_camera":
camera = pm.createNode("camera")
@ -111,13 +128,14 @@ class ImagePlaneLoader(api.Loader):
# Create image plane
image_plane_transform, image_plane_shape = pm.imagePlane(
camera=camera, showInAllViews=False
fileName=context["representation"]["data"]["path"],
camera=camera, showInAllViews=is_in_all_views
)
image_plane_shape.depth.set(image_plane_depth)
image_plane_shape.imageName.set(
context["representation"]["data"]["path"]
)
if is_static_image_plane:
image_plane_shape.detach()
image_plane_transform.setRotation(camera.getRotation())
start_frame = pm.playbackOptions(q=True, min=True)
end_frame = pm.playbackOptions(q=True, max=True)
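A hedged sketch of the data dictionary the patched loader accepts when the camera dialog should be skipped; the keys come from the load() body above, while the camera name is only an example:
data = {
    "camera": "|camera1",          # transform of an existing camera (example)
    "static_image_plane": True,    # detach the image plane from the camera
    "in_all_views": False,         # show only in the selected camera's view
}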

View file

@ -8,6 +8,8 @@ from collections import defaultdict
from openpype.widgets.message_window import ScrollMessageBox
from Qt import QtWidgets
from openpype.hosts.maya.api.plugin import get_reference_node
class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
"""Specific loader for lookdev"""
@ -70,7 +72,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# Get reference node from container members
members = cmds.sets(node, query=True, nodesOnly=True)
reference_node = self._get_reference_node(members)
reference_node = get_reference_node(members, log=self.log)
shader_nodes = cmds.ls(members, type='shadingEngine')
orig_nodes = set(self._get_nodes_with_shader(shader_nodes))

View file

@ -40,85 +40,88 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
except ValueError:
family = "model"
group_name = "{}:_GRP".format(namespace)
# True by default to keep legacy behaviours
attach_to_root = options.get("attach_to_root", True)
with maya.maintained_selection():
groupName = "{}:_GRP".format(namespace)
cmds.loadPlugin("AbcImport.mll", quiet=True)
nodes = cmds.file(self.fname,
namespace=namespace,
sharedReferenceFile=False,
groupReference=True,
groupName=groupName,
reference=True,
returnNewNodes=True)
# namespace = cmds.referenceQuery(nodes[0], namespace=True)
returnNewNodes=True,
groupReference=attach_to_root,
groupName=group_name)
shapes = cmds.ls(nodes, shapes=True, long=True)
newNodes = (list(set(nodes) - set(shapes)))
new_nodes = (list(set(nodes) - set(shapes)))
current_namespace = pm.namespaceInfo(currentNamespace=True)
if current_namespace != ":":
groupName = current_namespace + ":" + groupName
group_name = current_namespace + ":" + group_name
groupNode = pm.PyNode(groupName)
roots = set()
self[:] = new_nodes
for node in newNodes:
try:
roots.add(pm.PyNode(node).getAllParents()[-2])
except: # noqa: E722
pass
if attach_to_root:
group_node = pm.PyNode(group_name)
roots = set()
if family not in ["layout", "setdress", "mayaAscii", "mayaScene"]:
for node in new_nodes:
try:
roots.add(pm.PyNode(node).getAllParents()[-2])
except: # noqa: E722
pass
if family not in ["layout", "setdress",
"mayaAscii", "mayaScene"]:
for root in roots:
root.setParent(world=True)
group_node.zeroTransformPivots()
for root in roots:
root.setParent(world=True)
root.setParent(group_node)
groupNode.zeroTransformPivots()
for root in roots:
root.setParent(groupNode)
cmds.setAttr(group_name + ".displayHandle", 1)
cmds.setAttr(groupName + ".displayHandle", 1)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
group_node.useOutlinerColor.set(1)
group_node.outlinerColor.set(
(float(c[0]) / 255),
(float(c[1]) / 255),
(float(c[2]) / 255))
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
groupNode.useOutlinerColor.set(1)
groupNode.outlinerColor.set(
(float(c[0])/255),
(float(c[1])/255),
(float(c[2])/255)
)
self[:] = newNodes
cmds.setAttr(groupName + ".displayHandle", 1)
# get bounding box
bbox = cmds.exactWorldBoundingBox(groupName)
# get pivot position on world space
pivot = cmds.xform(groupName, q=True, sp=True, ws=True)
# center of bounding box
cx = (bbox[0] + bbox[3]) / 2
cy = (bbox[1] + bbox[4]) / 2
cz = (bbox[2] + bbox[5]) / 2
# add pivot position to calculate offset
cx = cx + pivot[0]
cy = cy + pivot[1]
cz = cz + pivot[2]
# set selection handle offset to center of bounding box
cmds.setAttr(groupName + ".selectHandleX", cx)
cmds.setAttr(groupName + ".selectHandleY", cy)
cmds.setAttr(groupName + ".selectHandleZ", cz)
cmds.setAttr(group_name + ".displayHandle", 1)
# get bounding box
bbox = cmds.exactWorldBoundingBox(group_name)
# get pivot position on world space
pivot = cmds.xform(group_name, q=True, sp=True, ws=True)
# center of bounding box
cx = (bbox[0] + bbox[3]) / 2
cy = (bbox[1] + bbox[4]) / 2
cz = (bbox[2] + bbox[5]) / 2
# add pivot position to calculate offset
cx = cx + pivot[0]
cy = cy + pivot[1]
cz = cz + pivot[2]
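# e.g. bbox [-1, 0, -2, 3, 4, 6] with pivot (1, 1, 1)
# yields a selection handle offset of (2, 3, 3) in world space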
# set selection handle offset to center of bounding box
cmds.setAttr(group_name + ".selectHandleX", cx)
cmds.setAttr(group_name + ".selectHandleY", cy)
cmds.setAttr(group_name + ".selectHandleZ", cz)
if family == "rig":
self._post_process_rig(name, namespace, context, options)
else:
if "translate" in options:
cmds.setAttr(groupName + ".t", *options["translate"])
return newNodes
if "translate" in options:
cmds.setAttr(group_name + ".t", *options["translate"])
return new_nodes
def switch(self, container, representation):
self.update(container, representation)

View file

@ -22,15 +22,22 @@ class CollectMayaHistory(pyblish.api.InstancePlugin):
def process(self, instance):
# Collect the history with long names
history = cmds.listHistory(instance, leaf=False) or []
history = cmds.ls(history, long=True)
kwargs = {}
if int(cmds.about(version=True)) >= 2020:
# New flag since Maya 2020 which makes cmds.listHistory faster
kwargs = {"fastIteration": True}
else:
self.log.debug("Ignoring `fastIteration` flag before Maya 2020..")
# Remove invalid node types (like renderlayers)
invalid = cmds.ls(history, type="renderLayer", long=True)
if invalid:
invalid = set(invalid) # optimize lookup
history = [x for x in history if x not in invalid]
# Collect the history with long names
history = set(cmds.listHistory(instance, leaf=False, **kwargs) or [])
history = cmds.ls(list(history), long=True)
# Exclude invalid nodes (like renderlayers)
exclude = cmds.ls(type="renderLayer", long=True)
if exclude:
exclude = set(exclude) # optimize lookup
history = [x for x in history if x not in exclude]
# Combine members with history
members = instance[:] + history

View file

@ -492,6 +492,8 @@ class CollectLook(pyblish.api.InstancePlugin):
if not cmds.attributeQuery(attr, node=node, exists=True):
continue
attribute = "{}.{}".format(node, attr)
if cmds.getAttr(attribute, type=True) == "message":
continue
node_attributes[attr] = cmds.getAttr(attribute)
attributes.append({"name": node,

View file

@ -224,14 +224,19 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
# append full path
full_exp_files = []
aov_dict = {}
default_render_file = context.data.get('project_settings')\
.get('maya')\
.get('create')\
.get('CreateRender')\
.get('default_render_image_folder')
# replace relative paths with absolute. Render products are
# returned as list of dictionaries.
publish_meta_path = None
for aov in exp_files:
full_paths = []
for file in aov[aov.keys()[0]]:
full_path = os.path.join(workspace, "renders", file)
full_path = os.path.join(workspace, default_render_file,
file)
full_path = full_path.replace("\\", "/")
full_paths.append(full_path)
publish_meta_path = os.path.dirname(full_path)

Some files were not shown because too many files have changed in this diff.