Merge branch 'develop' into enhancement/OP-2075_version-handling

iLLiCiTiT 2022-01-04 16:53:23 +01:00
commit 247c931c0f
310 changed files with 6874 additions and 1047 deletions


@ -1,134 +1,78 @@
# Changelog
## [3.7.0-nightly.6](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.7.0](https://github.com/pypeclub/OpenPype/tree/3.7.0) (2022-01-04)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...HEAD)
### 📖 Documentation
- docs\[website\]: Add Ellipse Studio \(logo\) as an OpenPype contributor [\#2324](https://github.com/pypeclub/OpenPype/pull/2324)
**🆕 New features**
- Settings UI use OpenPype styles [\#2296](https://github.com/pypeclub/OpenPype/pull/2296)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...3.7.0)
**🚀 Enhancements**
- Settings: Webpublisher in hosts enum [\#2367](https://github.com/pypeclub/OpenPype/pull/2367)
- General: Workdir extra folders [\#2462](https://github.com/pypeclub/OpenPype/pull/2462)
- Photoshop: New style validations for New publisher [\#2429](https://github.com/pypeclub/OpenPype/pull/2429)
- General: Environment variables groups [\#2424](https://github.com/pypeclub/OpenPype/pull/2424)
- Unreal: Dynamic menu created in Python [\#2422](https://github.com/pypeclub/OpenPype/pull/2422)
- Settings UI: Hyperlinks to settings [\#2420](https://github.com/pypeclub/OpenPype/pull/2420)
- Modules: JobQueue module moved one hierarchy level higher [\#2419](https://github.com/pypeclub/OpenPype/pull/2419)
- TimersManager: Start timer post launch hook [\#2418](https://github.com/pypeclub/OpenPype/pull/2418)
- General: Run applications as separate processes under linux [\#2408](https://github.com/pypeclub/OpenPype/pull/2408)
- Ftrack: Check existence of object type on recreation [\#2404](https://github.com/pypeclub/OpenPype/pull/2404)
- Enhancement: Global cleanup plugin that explicitly remove paths from context [\#2402](https://github.com/pypeclub/OpenPype/pull/2402)
- General: MongoDB ability to specify replica set groups [\#2401](https://github.com/pypeclub/OpenPype/pull/2401)
- Flame: moving `utility\_scripts` to api folder also with `scripts` [\#2385](https://github.com/pypeclub/OpenPype/pull/2385)
- Centos 7 dependency compatibility [\#2384](https://github.com/pypeclub/OpenPype/pull/2384)
- Enhancement: Settings: Use project settings values from another project [\#2382](https://github.com/pypeclub/OpenPype/pull/2382)
- Blender 3: Support auto install for new blender version [\#2377](https://github.com/pypeclub/OpenPype/pull/2377)
- Maya add render image path to settings [\#2375](https://github.com/pypeclub/OpenPype/pull/2375)
- Hiero: python3 compatibility [\#2365](https://github.com/pypeclub/OpenPype/pull/2365)
- Burnins: Be able recognize mxf OPAtom format [\#2361](https://github.com/pypeclub/OpenPype/pull/2361)
- Local settings: Copyable studio paths [\#2349](https://github.com/pypeclub/OpenPype/pull/2349)
- Assets Widget: Clear model on project change [\#2345](https://github.com/pypeclub/OpenPype/pull/2345)
- General: OpenPype default modules hierarchy [\#2338](https://github.com/pypeclub/OpenPype/pull/2338)
- General: FFprobe error exception contain original error message [\#2328](https://github.com/pypeclub/OpenPype/pull/2328)
- Resolve: Add experimental button to menu [\#2325](https://github.com/pypeclub/OpenPype/pull/2325)
- Hiero: Add experimental tools action [\#2323](https://github.com/pypeclub/OpenPype/pull/2323)
- Input links: Cleanup and unification of differences [\#2322](https://github.com/pypeclub/OpenPype/pull/2322)
- General: Don't validate vendor bin with executing them [\#2317](https://github.com/pypeclub/OpenPype/pull/2317)
- General: Multilayer EXRs support [\#2315](https://github.com/pypeclub/OpenPype/pull/2315)
- General: Run process log stderr as info log level [\#2309](https://github.com/pypeclub/OpenPype/pull/2309)
- General: Reduce vendor imports [\#2305](https://github.com/pypeclub/OpenPype/pull/2305)
- Tools: Cleanup of unused classes [\#2304](https://github.com/pypeclub/OpenPype/pull/2304)
- Project Manager: Added ability to delete project [\#2298](https://github.com/pypeclub/OpenPype/pull/2298)
- Nuke: extract baked review videos presets [\#2248](https://github.com/pypeclub/OpenPype/pull/2248)
- Maya: Add is\_static\_image\_plane and is\_in\_all\_views option in imagePlaneLoader [\#2356](https://github.com/pypeclub/OpenPype/pull/2356)
- TVPaint: Move implementation to OpenPype [\#2336](https://github.com/pypeclub/OpenPype/pull/2336)
**🐛 Bug fixes**
- TVPaint: Create render layer dialog is in front [\#2471](https://github.com/pypeclub/OpenPype/pull/2471)
- Short Pyblish plugin path [\#2428](https://github.com/pypeclub/OpenPype/pull/2428)
- PS: Introduced settings for invalid characters to use in ValidateNaming plugin [\#2417](https://github.com/pypeclub/OpenPype/pull/2417)
- Settings UI: Breadcrumbs path does not create new entities [\#2416](https://github.com/pypeclub/OpenPype/pull/2416)
- AfterEffects: Variant 2022 is in defaults but missing in schemas [\#2412](https://github.com/pypeclub/OpenPype/pull/2412)
- Nuke: baking representations was not additive [\#2406](https://github.com/pypeclub/OpenPype/pull/2406)
- General: Fix access to environments from default settings [\#2403](https://github.com/pypeclub/OpenPype/pull/2403)
- Fix: Placeholder Input color set fix [\#2399](https://github.com/pypeclub/OpenPype/pull/2399)
- Settings: Fix state change of wrapper label [\#2396](https://github.com/pypeclub/OpenPype/pull/2396)
- Flame: fix ftrack publisher [\#2381](https://github.com/pypeclub/OpenPype/pull/2381)
- hiero: solve custom ocio path [\#2379](https://github.com/pypeclub/OpenPype/pull/2379)
- hiero: fix workio and flatten [\#2378](https://github.com/pypeclub/OpenPype/pull/2378)
- Nuke: fixing menu re-drawing during context change [\#2374](https://github.com/pypeclub/OpenPype/pull/2374)
- Webpublisher: Fix assignment of families of TVpaint instances [\#2373](https://github.com/pypeclub/OpenPype/pull/2373)
- Nuke: fixing node name based on switched asset name [\#2369](https://github.com/pypeclub/OpenPype/pull/2369)
- JobQueue: Fix loading of settings [\#2362](https://github.com/pypeclub/OpenPype/pull/2362)
- Tools: Placeholder color [\#2359](https://github.com/pypeclub/OpenPype/pull/2359)
- Launcher: Minimize button on MacOs [\#2355](https://github.com/pypeclub/OpenPype/pull/2355)
- StandalonePublisher: Fix import of constant [\#2354](https://github.com/pypeclub/OpenPype/pull/2354)
- Adobe products show issue [\#2347](https://github.com/pypeclub/OpenPype/pull/2347)
- Maya Look Assigner: Fix Python 3 compatibility [\#2343](https://github.com/pypeclub/OpenPype/pull/2343)
- Remove wrongly used host for hook [\#2342](https://github.com/pypeclub/OpenPype/pull/2342)
- Tools: Use Qt context on tools show [\#2340](https://github.com/pypeclub/OpenPype/pull/2340)
- Flame: Fix default argument value in custom dictionary [\#2339](https://github.com/pypeclub/OpenPype/pull/2339)
- Timers Manager: Disable auto stop timer on linux platform [\#2334](https://github.com/pypeclub/OpenPype/pull/2334)
- nuke: bake preset single input exception [\#2331](https://github.com/pypeclub/OpenPype/pull/2331)
- Hiero: fixing multiple templates at a hierarchy parent [\#2330](https://github.com/pypeclub/OpenPype/pull/2330)
- Fix - provider icons are pulled from a folder [\#2326](https://github.com/pypeclub/OpenPype/pull/2326)
- InputLinks: Typo in "inputLinks" key [\#2314](https://github.com/pypeclub/OpenPype/pull/2314)
- Deadline timeout and logging [\#2312](https://github.com/pypeclub/OpenPype/pull/2312)
- nuke: do not multiply representation on class method [\#2311](https://github.com/pypeclub/OpenPype/pull/2311)
- Workfiles tool: Fix task formatting [\#2306](https://github.com/pypeclub/OpenPype/pull/2306)
- Delivery: Fix delivery paths created on windows [\#2302](https://github.com/pypeclub/OpenPype/pull/2302)
- Maya: Deadline - fix limit groups [\#2295](https://github.com/pypeclub/OpenPype/pull/2295)
- Royal Render: Fix plugin order and OpenPype auto-detection [\#2291](https://github.com/pypeclub/OpenPype/pull/2291)
- Alternate site for site sync doesnt work for sequences [\#2284](https://github.com/pypeclub/OpenPype/pull/2284)
- Houdini: Fix HDA creation [\#2350](https://github.com/pypeclub/OpenPype/pull/2350)
**Merged pull requests:**
- Forced cx\_freeze to include sqlite3 into build [\#2432](https://github.com/pypeclub/OpenPype/pull/2432)
- Maya: Replaced PATH usage with vendored oiio path for maketx utility [\#2405](https://github.com/pypeclub/OpenPype/pull/2405)
- \[Fix\]\[MAYA\] Handle message type attribute within CollectLook [\#2394](https://github.com/pypeclub/OpenPype/pull/2394)
- Add validator to check correct version of extension for PS and AE [\#2387](https://github.com/pypeclub/OpenPype/pull/2387)
- Linux : flip updating submodules logic [\#2357](https://github.com/pypeclub/OpenPype/pull/2357)
- Update of avalon-core [\#2346](https://github.com/pypeclub/OpenPype/pull/2346)
- Maya: configurable model top level validation [\#2321](https://github.com/pypeclub/OpenPype/pull/2321)
## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.1...3.6.4)
**🐛 Bug fixes**
- Nuke: inventory update removes all loaded read nodes [\#2294](https://github.com/pypeclub/OpenPype/pull/2294)
## [3.6.3](https://github.com/pypeclub/OpenPype/tree/3.6.3) (2021-11-19)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.3-nightly.1...3.6.3)
**🐛 Bug fixes**
- Deadline: Fix publish targets [\#2280](https://github.com/pypeclub/OpenPype/pull/2280)
## [3.6.2](https://github.com/pypeclub/OpenPype/tree/3.6.2) (2021-11-18)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.2-nightly.2...3.6.2)
**🚀 Enhancements**
- Tools: Assets widget [\#2265](https://github.com/pypeclub/OpenPype/pull/2265)
- SceneInventory: Choose loader in asset switcher [\#2262](https://github.com/pypeclub/OpenPype/pull/2262)
- Style: New fonts in OpenPype style [\#2256](https://github.com/pypeclub/OpenPype/pull/2256)
- Tools: SceneInventory in OpenPype [\#2255](https://github.com/pypeclub/OpenPype/pull/2255)
- Tools: Tasks widget [\#2251](https://github.com/pypeclub/OpenPype/pull/2251)
- Tools: Creator in OpenPype [\#2244](https://github.com/pypeclub/OpenPype/pull/2244)
**🐛 Bug fixes**
- Tools: Parenting of tools in Nuke and Hiero [\#2266](https://github.com/pypeclub/OpenPype/pull/2266)
- limiting validator to specific editorial hosts [\#2264](https://github.com/pypeclub/OpenPype/pull/2264)
- Tools: Select Context dialog attribute fix [\#2261](https://github.com/pypeclub/OpenPype/pull/2261)
- Maya: Render publishing fails on linux [\#2260](https://github.com/pypeclub/OpenPype/pull/2260)
- LookAssigner: Fix tool reopen [\#2259](https://github.com/pypeclub/OpenPype/pull/2259)
- Standalone: editorial not publishing thumbnails on all subsets [\#2258](https://github.com/pypeclub/OpenPype/pull/2258)
- Burnins: Support mxf metadata [\#2247](https://github.com/pypeclub/OpenPype/pull/2247)
## [3.6.1](https://github.com/pypeclub/OpenPype/tree/3.6.1) (2021-11-16)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.1-nightly.1...3.6.1)
**🐛 Bug fixes**
- Loader doesn't allow changing of version before loading [\#2254](https://github.com/pypeclub/OpenPype/pull/2254)
## [3.6.0](https://github.com/pypeclub/OpenPype/tree/3.6.0) (2021-11-15)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.0-nightly.6...3.6.0)
**🚀 Enhancements**
- Tools: Subset manager in OpenPype [\#2243](https://github.com/pypeclub/OpenPype/pull/2243)
- General: Skip module directories without init file [\#2239](https://github.com/pypeclub/OpenPype/pull/2239)
**🐛 Bug fixes**
- Ftrack: Sync project ftrack id cache issue [\#2250](https://github.com/pypeclub/OpenPype/pull/2250)
- Ftrack: Session creation and Prepare project [\#2245](https://github.com/pypeclub/OpenPype/pull/2245)
## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.5.0-nightly.8...3.5.0)


@ -41,6 +41,8 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n
ncurses \
ncurses-devel \
qt5-qtbase-devel \
xcb-util-wm \
xcb-util-renderutil \
&& yum clean all
# we need to build our own patchelf
@ -92,7 +94,8 @@ RUN source $HOME/.bashrc \
RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib \
&& cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib
&& cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libxcb* ./build/exe.linux-x86_64-3.7/vendor/python/PySide2/Qt/lib
RUN cd /opt/openpype \
rm -rf ./vendor/bin

app_launcher.py Normal file

@ -0,0 +1,49 @@
"""Launch process that is not child process of python or OpenPype.
This is written for Linux distributions where the process tree may affect
what happens when a process is closed or blocked from being closed.
"""
import os
import sys
import subprocess
import json
def main(input_json_path):
"""Read launch arguments from a json file and launch the process.
It is expected that the json contains an "args" key with a string or list
of strings. Arguments are converted to a string using `list2cmdline`. An
`&` is appended at the end, which causes the launched process to be
detached and run as a "background" process.
## Notes
@iLLiCiT: This should be possible to do with 'disown' or double forking,
but I didn't find a way to do it properly. Disown didn't work as expected
for me and double forking killed the parent process, which is also
unexpected.
"""
with open(input_json_path, "r") as stream:
data = json.load(stream)
# Change environment variables
env = data.get("env") or {}
for key, value in env.items():
os.environ[key] = value
# Prepare launch arguments
args = data["args"]
if isinstance(args, list):
args = subprocess.list2cmdline(args)
# Run the command as background process
shell_cmd = args + " &"
os.system(shell_cmd)
sys.exit(0)
if __name__ == "__main__":
# Expect that last argument is path to a json with launch args information
main(sys.argv[-1])
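The JSON payload consumed by `app_launcher.py` can be sketched as follows; the executable path and environment values are invented for illustration, and the snippet reproduces the argument handling without actually spawning a background process:

```python
import json
import os
import subprocess
import tempfile

# Hypothetical payload matching what `main` expects; paths are invented.
payload = {
    "env": {"AVALON_PROJECT": "demo"},
    "args": ["/usr/bin/blender", "--background"],
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as stream:
    json.dump(payload, stream)
    json_path = stream.name

# Reproduce the parsing done by `main` without launching anything.
with open(json_path, "r") as stream:
    data = json.load(stream)
for key, value in (data.get("env") or {}).items():
    os.environ[key] = value
args = data["args"]
if isinstance(args, list):
    args = subprocess.list2cmdline(args)
shell_cmd = args + " &"  # trailing `&` detaches the launched process
print(shell_cmd)  # /usr/bin/blender --background &
os.unlink(json_path)
```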


@ -31,8 +31,6 @@ from .lib import (
)
from .lib.mongo import (
decompose_url,
compose_url,
get_default_components
)
@ -84,8 +82,6 @@ __all__ = [
"Anatomy",
"config",
"execute",
"decompose_url",
"compose_url",
"get_default_components",
"ApplicationManager",
"BuildWorkfile",


@ -138,7 +138,10 @@ def webpublisherwebserver(debug, executable, upload_dir, host=None, port=None):
@click.option("--asset", help="Asset name", default=None)
@click.option("--task", help="Task name", default=None)
@click.option("--app", help="Application name", default=None)
def extractenvironments(output_json_path, project, asset, task, app):
@click.option(
"--envgroup", help="Environment group (e.g. \"farm\")", default=None
)
def extractenvironments(output_json_path, project, asset, task, app, envgroup):
"""Extract environment variables for entered context to a json file.
The entered output filepath will be created if it does not exist.
@ -149,7 +152,7 @@ def extractenvironments(output_json_path, project, asset, task, app):
Context options are "project", "asset", "task", "app"
"""
PypeCommands.extractenvironments(
output_json_path, project, asset, task, app
output_json_path, project, asset, task, app, envgroup
)
@ -356,9 +359,22 @@ def run(script):
"--pyargs",
help="Run tests from package",
default=None)
def runtests(folder, mark, pyargs):
@click.option("-t",
"--test_data_folder",
help="Unzipped directory path of test file",
default=None)
@click.option("-s",
"--persist",
help="Persist test DB and published files after test end",
default=None)
@click.option("-a",
"--app_variant",
help="Provide specific app variant for test, empty for latest",
default=None)
def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant):
"""Run all automatic tests after proper initialization via start.py"""
PypeCommands().run_tests(folder, mark, pyargs)
PypeCommands().run_tests(folder, mark, pyargs, test_data_folder,
persist, app_variant)
@main.command()


@ -0,0 +1,33 @@
import os
from openpype.lib import (
PreLaunchHook,
create_workdir_extra_folders
)
class AddLastWorkfileToLaunchArgs(PreLaunchHook):
"""Create configured extra folders in the work directory.
This is not possible to do for all applications the same way.
"""
# Execute after workfile template copy
order = 15
def execute(self):
if not self.application.is_host:
return
env = self.data.get("env") or {}
workdir = env.get("AVALON_WORKDIR")
if not workdir or not os.path.exists(workdir):
return
host_name = self.application.host_name
task_type = self.data["task_type"]
task_name = self.data["task_name"]
project_name = self.data["project_name"]
create_workdir_extra_folders(
workdir, host_name, task_type, task_name, project_name,
)


@ -48,7 +48,7 @@ class GlobalHostDataHook(PreLaunchHook):
"log": self.log
})
prepare_host_environments(temp_data)
prepare_host_environments(temp_data, self.launch_context.env_group)
prepare_context_environments(temp_data)
temp_data.pop("log")


@ -0,0 +1,27 @@
# -*- coding: utf-8 -*-
"""Close AE after publish. For Webpublishing only."""
import pyblish.api
from avalon import aftereffects
class CloseAE(pyblish.api.ContextPlugin):
"""Close AE after publish. For Webpublishing only.
"""
order = pyblish.api.IntegratorOrder + 14
label = "Close AE"
optional = True
active = True
hosts = ["aftereffects"]
targets = ["remotepublish"]
def process(self, context):
self.log.info("CloseAE")
stub = aftereffects.stub()
self.log.info("Shutting down AE")
stub.save()
stub.close()
self.log.info("AE closed")


@ -8,7 +8,7 @@ from avalon import aftereffects
class CollectCurrentFile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
order = pyblish.api.CollectorOrder - 0.5
order = pyblish.api.CollectorOrder - 0.49
label = "Current File"
hosts = ["aftereffects"]


@ -0,0 +1,56 @@
import os
import re
import pyblish.api
from avalon import aftereffects
class CollectExtensionVersion(pyblish.api.ContextPlugin):
""" Pulls and compares version of installed extension.
It is recommended to use same extension as in provided Openpype code.
Please use Anastasiys Extension Manager or ZXPInstaller to update
extension in case of an error.
You can locate extension.zxp in your installed Openpype code in
`repos/avalon-core/avalon/aftereffects`
"""
# This technically should be a validator, but other collectors might be
# impacted by usage of an obsolete extension, so a collector that runs
# first was chosen
order = pyblish.api.CollectorOrder - 0.5
label = "Collect extension version"
hosts = ["aftereffects"]
optional = True
active = True
def process(self, context):
installed_version = aftereffects.stub().get_extension_version()
if not installed_version:
raise ValueError("Unknown version, probably old extension")
manifest_url = os.path.join(os.path.dirname(aftereffects.__file__),
"extension", "CSXS", "manifest.xml")
if not os.path.exists(manifest_url):
self.log.debug("Unable to locate extension manifest, not checking")
return
expected_version = None
with open(manifest_url) as fp:
content = fp.read()
found = re.findall(r'(ExtensionBundleVersion=")([0-9\.]+)(")',
content)
if found:
expected_version = found[0][1]
if expected_version != installed_version:
msg = (
"Expected version '{}' found '{}'\n Please update"
" your installed extension, it might not work properly."
).format(expected_version, installed_version)
raise ValueError(msg)
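The manifest parsing above boils down to a single `re.findall` call. A minimal sketch with an invented manifest snippet (the attribute layout mirrors a CSXS `manifest.xml`, but the values are made up):

```python
import re

# Invented manifest snippet; real CSXS manifests carry this attribute.
content = '<ExtensionManifest ExtensionBundleVersion="1.0.4" Version="7.0">'
found = re.findall(r'(ExtensionBundleVersion=")([0-9\.]+)(")', content)
expected_version = found[0][1] if found else None
print(expected_version)  # 1.0.4
```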


@ -19,10 +19,9 @@ class ExtractLocalRender(openpype.api.Extractor):
staging_dir = instance.data["stagingDir"]
self.log.info("staging_dir::{}".format(staging_dir))
stub.render(staging_dir)
# pull file name from Render Queue Output module
render_q = stub.get_render_info()
stub.render(staging_dir)
if not render_q:
raise ValueError("No file extension set in Render Queue")
_, ext = os.path.splitext(os.path.basename(render_q.file_name))


@ -32,7 +32,7 @@ class InstallPySideToBlender(PreLaunchHook):
def inner_execute(self):
# Get blender's python directory
version_regex = re.compile(r"^2\.[0-9]{2}$")
version_regex = re.compile(r"^[2-3]\.[0-9]+$")
executable = self.launch_context.executable.executable_path
if os.path.basename(executable).lower() != "blender.exe":
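The widened version pattern is easy to verify in isolation; the version strings below are illustrative:

```python
import re

old_regex = re.compile(r"^2\.[0-9]{2}$")    # previous pattern: only 2.xx folders
new_regex = re.compile(r"^[2-3]\.[0-9]+$")  # new pattern: also matches 3.x folders

# 2.xx still matches; 3.x now matches as well.
assert old_regex.match("2.93") and new_regex.match("2.93")
assert not old_regex.match("3.0") and new_regex.match("3.0")
```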


@ -19,7 +19,7 @@ from .api.lib import (
maintain_current_timeline,
get_project_manager,
get_current_project,
get_current_timeline,
get_current_sequence,
create_bin,
)
@ -51,6 +51,7 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
app_framework = None
apps = []
selection = None
__all__ = [
@ -65,6 +66,7 @@ __all__ = [
"app_framework",
"apps",
"selection",
# pipeline
"install",
@ -86,7 +88,7 @@ __all__ = [
"maintain_current_timeline",
"get_project_manager",
"get_current_project",
"get_current_timeline",
"get_current_sequence",
"create_bin",
# menu


@ -220,10 +220,10 @@ def maintain_current_timeline(to_timeline, from_timeline=None):
timeline2
>>> with maintain_current_timeline(to_timeline):
... print(get_current_timeline().GetName())
... print(get_current_sequence().GetName())
timeline2
>>> print(get_current_timeline().GetName())
>>> print(get_current_sequence().GetName())
timeline1
"""
# todo: this is still Resolve's implementation
@ -256,9 +256,28 @@ def get_current_project():
return
def get_current_timeline(new=False):
# TODO: get_current_timeline
return
def get_current_sequence(selection):
import flame
def segment_to_sequence(_segment):
track = _segment.parent
version = track.parent
return version.parent
process_timeline = None
if len(selection) == 1:
if isinstance(selection[0], flame.PySequence):
process_timeline = selection[0]
if isinstance(selection[0], flame.PySegment):
process_timeline = segment_to_sequence(selection[0])
else:
for segment in selection:
if isinstance(segment, flame.PySegment):
process_timeline = segment_to_sequence(segment)
break
return process_timeline
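The parent walk in `segment_to_sequence` relies on Flame's object hierarchy (segment → track → version → sequence). Since `flame` is only importable inside the application, here is a stand-in sketch with dummy nodes:

```python
class Node:
    """Stand-in for flame's PySegment/PyTrack/version/PySequence chain."""
    def __init__(self, parent=None):
        self.parent = parent

sequence = Node()                # would be flame.PySequence
version = Node(parent=sequence)  # version container
track = Node(parent=version)     # track holding the segment
segment = Node(parent=track)     # would be flame.PySegment

def segment_to_sequence(_segment):
    # same three-step parent walk as in get_current_sequence
    track = _segment.parent
    version = track.parent
    return version.parent

assert segment_to_sequence(segment) is sequence
```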
def create_bin(name, root=None):
@ -272,3 +291,46 @@ def rescan_hooks():
flame.execute_shortcut('Rescan Python Hooks')
except Exception:
pass
def get_metadata(project_name, _log=None):
from adsk.libwiretapPythonClientAPI import (
WireTapClient,
WireTapServerHandle,
WireTapNodeHandle,
WireTapStr
)
class GetProjectColorPolicy(object):
def __init__(self, host_name=None, _log=None):
# Create a connection to the Backburner manager using the Wiretap
# python API.
#
self.log = _log or log
self.host_name = host_name or "localhost"
self._wiretap_client = WireTapClient()
if not self._wiretap_client.init():
raise Exception("Could not initialize Wiretap Client")
self._server = WireTapServerHandle(
"{}:IFFFS".format(self.host_name))
def process(self, project_name):
policy_node_handle = WireTapNodeHandle(
self._server,
"/projects/{}/syncolor/policy".format(project_name)
)
self.log.info(policy_node_handle)
policy = WireTapStr()
if not policy_node_handle.getNodeTypeStr(policy):
self.log.warning(
"Could not retrieve policy of '%s': %s" % (
policy_node_handle.getNodeId().id(),
policy_node_handle.lastError()
)
)
return policy.c_str()
policy_wiretap = GetProjectColorPolicy(_log=_log)
return policy_wiretap.process(project_name)


@ -4,7 +4,6 @@ from copy import deepcopy
from openpype.tools.utils.host_tools import HostToolsHelper
menu_group_name = 'OpenPype'
default_flame_export_presets = {
@ -26,6 +25,13 @@ default_flame_export_presets = {
}
def callback_selection(selection, function):
import openpype.hosts.flame as opflame
opflame.selection = selection
print(opflame.selection)
function()
class _FlameMenuApp(object):
def __init__(self, framework):
self.name = self.__class__.__name__
@ -97,9 +103,6 @@ class FlameMenuProjectConnect(_FlameMenuApp):
if not self.flame:
return []
flame_project_name = self.flame_project_name
self.log.info("______ {} ______".format(flame_project_name))
menu = deepcopy(self.menu)
menu['actions'].append({
@ -108,11 +111,13 @@ class FlameMenuProjectConnect(_FlameMenuApp):
})
menu['actions'].append({
"name": "Create ...",
"execute": lambda x: self.tools_helper.show_creator()
"execute": lambda x: callback_selection(
x, self.tools_helper.show_creator)
})
menu['actions'].append({
"name": "Publish ...",
"execute": lambda x: self.tools_helper.show_publish()
"execute": lambda x: callback_selection(
x, self.tools_helper.show_publish)
})
menu['actions'].append({
"name": "Load ...",
@ -128,9 +133,6 @@ class FlameMenuProjectConnect(_FlameMenuApp):
})
return menu
def get_projects(self, *args, **kwargs):
pass
def refresh(self, *args, **kwargs):
self.rescan()
@ -165,18 +167,17 @@ class FlameMenuTimeline(_FlameMenuApp):
if not self.flame:
return []
flame_project_name = self.flame_project_name
self.log.info("______ {} ______".format(flame_project_name))
menu = deepcopy(self.menu)
menu['actions'].append({
"name": "Create ...",
"execute": lambda x: self.tools_helper.show_creator()
"execute": lambda x: callback_selection(
x, self.tools_helper.show_creator)
})
menu['actions'].append({
"name": "Publish ...",
"execute": lambda x: self.tools_helper.show_publish()
"execute": lambda x: callback_selection(
x, self.tools_helper.show_publish)
})
menu['actions'].append({
"name": "Load ...",
@ -189,9 +190,6 @@ class FlameMenuTimeline(_FlameMenuApp):
return menu
def get_projects(self, *args, **kwargs):
pass
def refresh(self, *args, **kwargs):
self.rescan()


@ -27,6 +27,7 @@ def _sync_utility_scripts(env=None):
fsd_paths = [os.path.join(
HOST_DIR,
"api",
"utility_scripts"
)]


@ -22,7 +22,7 @@ class FlamePrelaunch(PreLaunchHook):
flame_python_exe = "/opt/Autodesk/python/2021/bin/python2.7"
wtc_script_path = os.path.join(
opflame.HOST_DIR, "scripts", "wiretap_com.py")
opflame.HOST_DIR, "api", "scripts", "wiretap_com.py")
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)


@ -0,0 +1,657 @@
"""Compatibility with OpenTimelineIO 0.12.0 and newer.
"""
import os
import re
import json
import logging
import opentimelineio as otio
from . import utils
import flame
from pprint import pformat
reload(utils) # noqa
log = logging.getLogger(__name__)
TRACK_TYPES = {
"video": otio.schema.TrackKind.Video,
"audio": otio.schema.TrackKind.Audio
}
MARKERS_COLOR_MAP = {
(1.0, 0.0, 0.0): otio.schema.MarkerColor.RED,
(1.0, 0.5, 0.0): otio.schema.MarkerColor.ORANGE,
(1.0, 1.0, 0.0): otio.schema.MarkerColor.YELLOW,
(1.0, 0.5, 1.0): otio.schema.MarkerColor.PINK,
(1.0, 1.0, 1.0): otio.schema.MarkerColor.WHITE,
(0.0, 1.0, 0.0): otio.schema.MarkerColor.GREEN,
(0.0, 1.0, 1.0): otio.schema.MarkerColor.CYAN,
(0.0, 0.0, 1.0): otio.schema.MarkerColor.BLUE,
(0.5, 0.0, 0.5): otio.schema.MarkerColor.PURPLE,
(0.5, 0.0, 1.0): otio.schema.MarkerColor.MAGENTA,
(0.0, 0.0, 0.0): otio.schema.MarkerColor.BLACK
}
MARKERS_INCLUDE = True
class CTX:
_fps = None
_tl_start_frame = None
project = None
clips = None
@classmethod
def set_fps(cls, new_fps):
if not isinstance(new_fps, float):
raise TypeError("Invalid fps type {}".format(type(new_fps)))
if cls._fps != new_fps:
cls._fps = new_fps
@classmethod
def get_fps(cls):
return cls._fps
@classmethod
def set_tl_start_frame(cls, number):
if not isinstance(number, int):
raise TypeError("Invalid timeline start frame type {}".format(
type(number)))
if cls._tl_start_frame != number:
cls._tl_start_frame = number
@classmethod
def get_tl_start_frame(cls):
return cls._tl_start_frame
def flatten(_list):
for item in _list:
if isinstance(item, (list, tuple)):
for sub_item in flatten(item):
yield sub_item
else:
yield item
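`flatten` is a plain recursive generator; a quick usage example:

```python
def flatten(_list):
    # recursively yield leaf items from arbitrarily nested lists/tuples
    for item in _list:
        if isinstance(item, (list, tuple)):
            for sub_item in flatten(item):
                yield sub_item
        else:
            yield item

print(list(flatten([1, [2, (3, 4)], 5])))  # [1, 2, 3, 4, 5]
```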
def get_current_flame_project():
project = flame.project.current_project
return project
def create_otio_rational_time(frame, fps):
return otio.opentime.RationalTime(
float(frame),
float(fps)
)
def create_otio_time_range(start_frame, frame_duration, fps):
return otio.opentime.TimeRange(
start_time=create_otio_rational_time(start_frame, fps),
duration=create_otio_rational_time(frame_duration, fps)
)
def _get_metadata(item):
if hasattr(item, 'metadata'):
if not item.metadata:
return {}
return {key: value for key, value in dict(item.metadata).items()}
return {}
def create_time_effects(otio_clip, item):
# todo #2426: add retiming effects to export
# get all subtrack items
# subTrackItems = flatten(track_item.parent().subTrackItems())
# speed = track_item.playbackSpeed()
# otio_effect = None
# # retime on track item
# if speed != 1.:
# # make effect
# otio_effect = otio.schema.LinearTimeWarp()
# otio_effect.name = "Speed"
# otio_effect.time_scalar = speed
# otio_effect.metadata = {}
# # freeze frame effect
# if speed == 0.:
# otio_effect = otio.schema.FreezeFrame()
# otio_effect.name = "FreezeFrame"
# otio_effect.metadata = {}
# if otio_effect:
# # add otio effect to clip effects
# otio_clip.effects.append(otio_effect)
# # loop trought and get all Timewarps
# for effect in subTrackItems:
# if ((track_item not in effect.linkedItems())
# and (len(effect.linkedItems()) > 0)):
# continue
# # avoid all effect which are not TimeWarp and disabled
# if "TimeWarp" not in effect.name():
# continue
# if not effect.isEnabled():
# continue
# node = effect.node()
# name = node["name"].value()
# # solve effect class as effect name
# _name = effect.name()
# if "_" in _name:
# effect_name = re.sub(r"(?:_)[_0-9]+", "", _name) # more numbers
# else:
# effect_name = re.sub(r"\d+", "", _name) # one number
# metadata = {}
# # add knob to metadata
# for knob in ["lookup", "length"]:
# value = node[knob].value()
# animated = node[knob].isAnimated()
# if animated:
# value = [
# ((node[knob].getValueAt(i)) - i)
# for i in range(
# track_item.timelineIn(),
# track_item.timelineOut() + 1)
# ]
# metadata[knob] = value
# # make effect
# otio_effect = otio.schema.TimeEffect()
# otio_effect.name = name
# otio_effect.effect_name = effect_name
# otio_effect.metadata = metadata
# # add otio effect to clip effects
# otio_clip.effects.append(otio_effect)
pass
def _get_marker_color(flame_colour):
# clamp colors to the closest half values
_flame_colour = [
(lambda x: round(x * 2) / 2)(c)
for c in flame_colour]
for color, otio_color_type in MARKERS_COLOR_MAP.items():
if _flame_colour == list(color):
return otio_color_type
return otio.schema.MarkerColor.RED
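The channel clamp used for the color lookup rounds each channel to the nearest half step, so slightly-off colors still hit a key in `MARKERS_COLOR_MAP`. The sample color is invented:

```python
# Same clamp as `_get_marker_color`: round each channel to the nearest half.
flame_colour = (0.96, 0.12, 0.04)  # invented, slightly off pure red
clamped = [round(c * 2) / 2 for c in flame_colour]
print(clamped)  # [1.0, 0.0, 0.0] -> MARKERS_COLOR_MAP key for MarkerColor.RED
```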
def _get_flame_markers(item):
output_markers = []
time_in = item.record_in.relative_frame
for marker in item.markers:
log.debug(marker)
start_frame = marker.location.get_value().relative_frame
start_frame = (start_frame - time_in) + 1
marker_data = {
"name": marker.name.get_value(),
"duration": marker.duration.get_value().relative_frame,
"comment": marker.comment.get_value(),
"start_frame": start_frame,
"colour": marker.colour.get_value()
}
output_markers.append(marker_data)
return output_markers
def create_otio_markers(otio_item, item):
markers = _get_flame_markers(item)
for marker in markers:
frame_rate = CTX.get_fps()
marked_range = otio.opentime.TimeRange(
start_time=otio.opentime.RationalTime(
marker["start_frame"],
frame_rate
),
duration=otio.opentime.RationalTime(
marker["duration"],
frame_rate
)
)
# test whether the comment contains a json string
check_if_json = re.findall(
re.compile(r"[{:}]"),
marker["comment"]
)
# to identify this as json, at least 3 items in the list should
# be present ["{", ":", "}"]
metadata = {}
if len(check_if_json) >= 3:
# this is json string
try:
# capture exceptions which are related to strings only
metadata.update(
json.loads(marker["comment"])
)
except ValueError as msg:
log.error("Marker json conversion: {}".format(msg))
else:
metadata["comment"] = marker["comment"]
otio_marker = otio.schema.Marker(
name=marker["name"],
color=_get_marker_color(
marker["colour"]),
marked_range=marked_range,
metadata=metadata
)
otio_item.markers.append(otio_marker)
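The comment-to-metadata heuristic above can be isolated into a small helper; `parse_marker_comment` is a hypothetical name used only for this sketch:

```python
import json
import re

def parse_marker_comment(comment):
    # at least "{", ":" and "}" must appear before trying json.loads
    metadata = {}
    if len(re.findall(r"[{:}]", comment)) >= 3:
        try:
            metadata.update(json.loads(comment))
        except ValueError:
            # the original logs the conversion error and keeps metadata empty
            pass
    else:
        metadata["comment"] = comment
    return metadata

print(parse_marker_comment('{"shot": "sh010"}'))  # {'shot': 'sh010'}
print(parse_marker_comment("plain note"))         # {'comment': 'plain note'}
```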
def create_otio_reference(clip_data):
metadata = _get_metadata(clip_data)
# get file info for path and start frame
frame_start = 0
fps = CTX.get_fps()
path = clip_data["fpath"]
reel_clip = None
match_reel_clip = [
clip for clip in CTX.clips
if clip["fpath"] == path
]
if match_reel_clip:
reel_clip = match_reel_clip.pop()
fps = reel_clip["fps"]
file_name = os.path.basename(path)
file_head, extension = os.path.splitext(file_name)
# get padding and other file infos
log.debug("_ path: {}".format(path))
is_sequence = padding = utils.get_padding_from_path(path)
if is_sequence:
number = utils.get_frame_from_path(path)
file_head = file_name.split(number)[0]
frame_start = int(number)
frame_duration = clip_data["source_duration"]
if is_sequence:
metadata.update({
"isSequence": True,
"padding": padding
})
otio_ex_ref_item = None
if is_sequence:
# if it is file sequence try to create `ImageSequenceReference`
# older OTIO versions may not support it, so fall back to the legacy way
try:
dirname = os.path.dirname(path)
otio_ex_ref_item = otio.schema.ImageSequenceReference(
target_url_base=dirname + os.sep,
name_prefix=file_head,
name_suffix=extension,
start_frame=frame_start,
frame_zero_padding=padding,
rate=fps,
available_range=create_otio_time_range(
frame_start,
frame_duration,
fps
)
)
except AttributeError:
pass
if not otio_ex_ref_item:
reformat_path = utils.get_reformated_path(path, padded=False)
# in case old OTIO or video file create `ExternalReference`
otio_ex_ref_item = otio.schema.ExternalReference(
target_url=reformat_path,
available_range=create_otio_time_range(
frame_start,
frame_duration,
fps
)
)
# add metadata to otio item
add_otio_metadata(otio_ex_ref_item, clip_data, **metadata)
return otio_ex_ref_item
def create_otio_clip(clip_data):
segment = clip_data["PySegment"]
# create media reference
media_reference = create_otio_reference(clip_data)
# calculate source in
first_frame = utils.get_frame_from_path(clip_data["fpath"]) or 0
source_in = int(clip_data["source_in"]) - int(first_frame)
# create source range
source_range = create_otio_time_range(
source_in,
clip_data["record_duration"],
CTX.get_fps()
)
otio_clip = otio.schema.Clip(
name=clip_data["segment_name"],
source_range=source_range,
media_reference=media_reference
)
# Add markers
if MARKERS_INCLUDE:
create_otio_markers(otio_clip, segment)
return otio_clip
def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):
return otio.schema.Gap(
source_range=create_otio_time_range(
gap_start,
(clip_start - tl_start_frame) - gap_start,
fps
)
)
def get_clips_in_reels(project):
output_clips = []
project_desktop = project.current_workspace.desktop
for reel_group in project_desktop.reel_groups:
for reel in reel_group.reels:
for clip in reel.clips:
clip_data = {
"PyClip": clip,
"fps": float(str(clip.frame_rate)[:-4])
}
attrs = [
"name", "width", "height",
"ratio", "sample_rate", "bit_depth"
]
for attr in attrs:
val = getattr(clip, attr)
clip_data[attr] = val
version = clip.versions[-1]
track = version.tracks[-1]
for segment in track.segments:
segment_data = _get_segment_attributes(segment)
clip_data.update(segment_data)
output_clips.append(clip_data)
return output_clips
def _get_colourspace_policy():
output = {}
# get policies project path
policy_dir = "/opt/Autodesk/project/{}/synColor/policy".format(
CTX.project.name
)
log.debug(policy_dir)
policy_fp = os.path.join(policy_dir, "policy.cfg")
if not os.path.exists(policy_fp):
return output
with open(policy_fp) as file:
dict_conf = dict(line.strip().split(' = ', 1) for line in file)
output.update(
{"openpype.flame.{}".format(k): v for k, v in dict_conf.items()}
)
return output
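`_get_colourspace_policy` reads `policy.cfg` as simple `key = value` lines; a minimal, slightly defensive sketch of that parsing (skipping malformed lines is an addition, not the original behaviour):

```python
def parse_policy_cfg(lines):
    """Turn 'key = value' lines into a namespaced metadata dict."""
    dict_conf = dict(
        line.strip().split(" = ", 1)
        for line in lines
        if " = " in line
    )
    return {
        "openpype.flame.{}".format(key): value
        for key, value in dict_conf.items()
    }
```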
def _create_otio_timeline(sequence):
metadata = _get_metadata(sequence)
# find colour policy files and add them to metadata
colorspace_policy = _get_colourspace_policy()
metadata.update(colorspace_policy)
metadata.update({
"openpype.timeline.width": int(sequence.width),
"openpype.timeline.height": int(sequence.height),
"openpype.timeline.pixelAspect": 1
})
rt_start_time = create_otio_rational_time(
CTX.get_tl_start_frame(), CTX.get_fps())
return otio.schema.Timeline(
name=str(sequence.name)[1:-1],
global_start_time=rt_start_time,
metadata=metadata
)
def create_otio_track(track_type, track_name):
return otio.schema.Track(
name=track_name,
kind=TRACK_TYPES[track_type]
)
def add_otio_gap(clip_data, otio_track, prev_out):
gap_length = clip_data["record_in"] - prev_out
if prev_out != 0:
gap_length -= 1
gap = otio.opentime.TimeRange(
duration=otio.opentime.RationalTime(
gap_length,
CTX.get_fps()
)
)
otio_gap = otio.schema.Gap(source_range=gap)
otio_track.append(otio_gap)
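The gap length computed by `add_otio_gap` accounts for `record_out` being inclusive; that arithmetic in isolation (the helper name is illustrative):

```python
def gap_length(record_in, prev_out):
    """Gap in frames between the previous clip's out and the next clip's in."""
    length = record_in - prev_out
    if prev_out != 0:
        # record_out is inclusive, so back-to-back clips differ by exactly 1
        length -= 1
    return length
```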
def add_otio_metadata(otio_item, item, **kwargs):
metadata = _get_metadata(item)
# add additional metadata from kwargs
if kwargs:
metadata.update(kwargs)
# add metadata to otio item metadata
for key, value in metadata.items():
otio_item.metadata.update({key: value})
def _get_shot_tokens_values(clip, tokens):
old_value = None
output = {}
if not clip.shot_name:
return output
old_value = clip.shot_name.get_value()
for token in tokens:
clip.shot_name.set_value(token)
_key = re.sub("[ <>]", "", token)
try:
output[_key] = int(clip.shot_name.get_value())
except ValueError:
output[_key] = clip.shot_name.get_value()
clip.shot_name.set_value(old_value)
return output
def _get_segment_attributes(segment):
# log.debug(dir(segment))
if str(segment.name)[1:-1] == "":
return None
# Add timeline segment to tree
clip_data = {
"segment_name": segment.name.get_value(),
"segment_comment": segment.comment.get_value(),
"tape_name": segment.tape_name,
"source_name": segment.source_name,
"fpath": segment.file_path,
"PySegment": segment
}
# add all available shot tokens
shot_tokens = _get_shot_tokens_values(segment, [
"<colour space>", "<width>", "<height>", "<depth>",
])
clip_data.update(shot_tokens)
# populate shot source metadata
segment_attrs = [
"record_duration", "record_in", "record_out",
"source_duration", "source_in", "source_out"
]
segment_attrs_data = {}
for attr in segment_attrs:
if not hasattr(segment, attr):
continue
_value = getattr(segment, attr)
segment_attrs_data[attr] = str(_value).replace("+", ":")
if attr in ["record_in", "record_out"]:
clip_data[attr] = _value.relative_frame
else:
clip_data[attr] = _value.frame
clip_data["segment_timecodes"] = segment_attrs_data
return clip_data
def create_otio_timeline(sequence):
log.info(dir(sequence))
log.info(sequence.attributes)
CTX.project = get_current_flame_project()
CTX.clips = get_clips_in_reels(CTX.project)
log.debug(pformat(
CTX.clips
))
# get current timeline
CTX.set_fps(
float(str(sequence.frame_rate)[:-4]))
tl_start_frame = utils.timecode_to_frames(
str(sequence.start_time).replace("+", ":"),
CTX.get_fps()
)
CTX.set_tl_start_frame(tl_start_frame)
# convert timeline to otio
otio_timeline = _create_otio_timeline(sequence)
# create otio tracks and clips
for ver in sequence.versions:
for track in ver.tracks:
if len(track.segments) == 0 and track.hidden:
return None
# convert track to otio
otio_track = create_otio_track(
"video", str(track.name)[1:-1])
all_segments = []
for segment in track.segments:
clip_data = _get_segment_attributes(segment)
if not clip_data:
continue
all_segments.append(clip_data)
segments_ordered = {
itemindex: clip_data
for itemindex, clip_data in enumerate(
all_segments)
}
log.debug("_ segments_ordered: {}".format(
pformat(segments_ordered)
))
if not segments_ordered:
continue
for itemindex, segment_data in segments_ordered.items():
log.debug("_ itemindex: {}".format(itemindex))
# Add Gap if needed
if itemindex == 0:
# if it is the first item on the track, use it
# as the previous item
prev_item = segment_data
else:
# get previous item
prev_item = segments_ordered[itemindex - 1]
log.debug("_ segment_data: {}".format(segment_data))
# calculate clip frame range difference from each other
clip_diff = segment_data["record_in"] - prev_item["record_out"]
# add gap if first track item is not starting
# at first timeline frame
if itemindex == 0 and segment_data["record_in"] > 0:
add_otio_gap(segment_data, otio_track, 0)
# or add gap if following track items are having
# frame range differences from each other
elif itemindex and clip_diff != 1:
add_otio_gap(
segment_data, otio_track, prev_item["record_out"])
# create otio clip and add it to track
otio_clip = create_otio_clip(segment_data)
otio_track.append(otio_clip)
log.debug("_ otio_clip: {}".format(otio_clip))
# create otio marker
# create otio metadata
# add track to otio timeline
otio_timeline.tracks.append(otio_track)
return otio_timeline
def write_to_file(otio_timeline, path):
otio.adapters.write_to_file(otio_timeline, path)

View file

@@ -0,0 +1,95 @@
import re
import opentimelineio as otio
import logging
log = logging.getLogger(__name__)
def timecode_to_frames(timecode, framerate):
rt = otio.opentime.from_timecode(timecode, framerate)
return int(otio.opentime.to_frames(rt))
def frames_to_timecode(frames, framerate):
rt = otio.opentime.from_frames(frames, framerate)
return otio.opentime.to_timecode(rt)
def frames_to_seconds(frames, framerate):
rt = otio.opentime.from_frames(frames, framerate)
return otio.opentime.to_seconds(rt)
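These converters delegate to OTIO; for non-drop-frame timecode the equivalent arithmetic is (drop-frame handling, which OTIO performs, is deliberately left out):

```python
def ndf_timecode_to_frames(timecode, framerate):
    """Convert HH:MM:SS:FF non-drop-frame timecode to a frame count."""
    hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
    fps = int(round(framerate))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames
```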
def get_reformated_path(path, padded=True):
"""
Return a path with a printf-style frame expression
Args:
path (str): path url or simple file name
Returns:
str: reformatted path
Example:
get_reformated_path("plate.1001.exr") > plate.%04d.exr
"""
padding = get_padding_from_path(path)
found = get_frame_from_path(path)
if not found:
log.info("Path is not sequence: {}".format(path))
return path
if padded:
path = path.replace(found, "%0{}d".format(padding))
else:
path = path.replace(found, "%d")
return path
def get_padding_from_path(path):
"""
Return padding number from Flame path style
Args:
path (str): path url or simple file name
Returns:
int: padding number
Example:
get_padding_from_path("plate.0001.exr") > 4
"""
found = get_frame_from_path(path)
if found:
return len(found)
else:
return None
def get_frame_from_path(path):
"""
Return sequence number from Flame path style
Args:
path (str): path url or simple file name
Returns:
str: sequence frame number
Example:
get_frame_from_path("plate.0001.exr") > 0001
"""
frame_pattern = re.compile(r"[._](\d+)[.]")
found = re.findall(frame_pattern, path)
if found:
return found.pop()
else:
return None
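The three helpers above compose into a round trip; a condensed, self-contained sketch mirroring them:

```python
import re

FRAME_PATTERN = re.compile(r"[._](\d+)[.]")


def get_frame_from_path(path):
    """Last '<sep><digits><dot>' group in the name, e.g. 'plate.0001.exr' -> '0001'."""
    found = re.findall(FRAME_PATTERN, path)
    return found[-1] if found else None


def get_reformated_path(path, padded=True):
    """Replace the frame token with a printf-style expression."""
    frame = get_frame_from_path(path)
    if frame is None:
        return path
    token = "%0{}d".format(len(frame)) if padded else "%d"
    return path.replace(frame, token)
```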

View file

@@ -0,0 +1,26 @@
import pyblish.api
import openpype.hosts.flame as opflame
from openpype.hosts.flame.otio import flame_export as otio_export
from openpype.hosts.flame.api import lib
from pprint import pformat
reload(lib) # noqa
reload(otio_export) # noqa
@pyblish.api.log
class CollectTestSelection(pyblish.api.ContextPlugin):
"""testing selection sharing
"""
order = pyblish.api.CollectorOrder
label = "test selection"
hosts = ["flame"]
def process(self, context):
self.log.info(opflame.selection)
sequence = lib.get_current_sequence(opflame.selection)
otio_timeline = otio_export.create_otio_timeline(sequence)
self.log.info(pformat(otio_timeline))

View file

@@ -45,19 +45,14 @@ class CreateHDA(plugin.Creator):
if (self.options or {}).get("useSelection") and self.nodes:
# if we have `use selection` enabled and we have some
# selected nodes ...
to_hda = self.nodes[0]
if len(self.nodes) > 1:
# if there is more then one node, create subnet first
subnet = out.createNode(
"subnet", node_name="{}_subnet".format(self.name))
to_hda = subnet
else:
# in case of no selection, just create subnet node
subnet = out.createNode(
"subnet", node_name="{}_subnet".format(self.name))
subnet = out.collapseIntoSubnet(
self.nodes,
subnet_name="{}_subnet".format(self.name))
subnet.moveToGoodPosition()
to_hda = subnet
else:
to_hda = out.createNode(
"subnet", node_name="{}_subnet".format(self.name))
if not to_hda.type().definition():
# if node type has not its definition, it is not user
# created hda. We test if hda can be created from the node.
@@ -69,13 +64,12 @@ class CreateHDA(plugin.Creator):
name=subset_name,
hda_file_name="$HIP/{}.hda".format(subset_name)
)
hou.moveNodesTo(self.nodes, hda_node)
hda_node.layoutChildren()
elif self._check_existing(subset_name):
raise plugin.OpenPypeCreatorError(
("subset {} is already published with different HDA"
"definition.").format(subset_name))
else:
if self._check_existing(subset_name):
raise plugin.OpenPypeCreatorError(
("subset {} is already published with different HDA"
"definition.").format(subset_name))
hda_node = to_hda
hda_node.setName(subset_name)

View file

@@ -100,6 +100,13 @@ class ReferenceLoader(api.Loader):
"offset",
label="Position Offset",
help="Offset loaded models for easier selection."
),
qargparse.Boolean(
"attach_to_root",
label="Group imported asset",
default=True,
help="Should a group be created to encapsulate"
" imported representation ?"
)
]

View file

@@ -13,10 +13,14 @@ class CameraWindow(QtWidgets.QDialog):
self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint)
self.camera = None
self.static_image_plane = False
self.show_in_all_views = False
self.widgets = {
"label": QtWidgets.QLabel("Select camera for image plane."),
"list": QtWidgets.QListWidget(),
"staticImagePlane": QtWidgets.QCheckBox(),
"showInAllViews": QtWidgets.QCheckBox(),
"warning": QtWidgets.QLabel("No cameras selected!"),
"buttons": QtWidgets.QWidget(),
"okButton": QtWidgets.QPushButton("Ok"),
@@ -31,6 +35,9 @@ class CameraWindow(QtWidgets.QDialog):
for camera in cameras:
self.widgets["list"].addItem(camera)
self.widgets["staticImagePlane"].setText("Make Image Plane Static")
self.widgets["showInAllViews"].setText("Show Image Plane in All Views")
# Build buttons.
layout = QtWidgets.QHBoxLayout(self.widgets["buttons"])
layout.addWidget(self.widgets["okButton"])
@@ -40,6 +47,8 @@ class CameraWindow(QtWidgets.QDialog):
layout = QtWidgets.QVBoxLayout(self)
layout.addWidget(self.widgets["label"])
layout.addWidget(self.widgets["list"])
layout.addWidget(self.widgets["staticImagePlane"])
layout.addWidget(self.widgets["showInAllViews"])
layout.addWidget(self.widgets["buttons"])
layout.addWidget(self.widgets["warning"])
@@ -54,6 +63,8 @@ class CameraWindow(QtWidgets.QDialog):
if self.camera is None:
self.widgets["warning"].setVisible(True)
return
self.show_in_all_views = self.widgets["showInAllViews"].isChecked()
self.static_image_plane = self.widgets["staticImagePlane"].isChecked()
self.close()
@@ -65,15 +76,15 @@ class CameraWindow(QtWidgets.QDialog):
class ImagePlaneLoader(api.Loader):
"""Specific loader of plate for image planes on selected camera."""
families = ["plate", "render"]
families = ["image", "plate", "render"]
label = "Load imagePlane."
representations = ["mov", "exr", "preview", "png"]
icon = "image"
color = "orange"
def load(self, context, name, namespace, data):
def load(self, context, name, namespace, data, options=None):
import pymel.core as pm
new_nodes = []
image_plane_depth = 1000
asset = context['asset']['name']
@@ -85,17 +96,23 @@ class ImagePlaneLoader(api.Loader):
# Get camera from user selection.
camera = None
default_cameras = [
"frontShape", "perspShape", "sideShape", "topShape"
]
cameras = [
x for x in pm.ls(type="camera") if x.name() not in default_cameras
]
camera_names = {x.getParent().name(): x for x in cameras}
camera_names["Create new camera."] = "create_camera"
window = CameraWindow(camera_names.keys())
window.exec_()
camera = camera_names[window.camera]
is_static_image_plane = None
is_in_all_views = None
if data:
camera = pm.PyNode(data.get("camera"))
is_static_image_plane = data.get("static_image_plane")
is_in_all_views = data.get("in_all_views")
if not camera:
cameras = pm.ls(type="camera")
camera_names = {x.getParent().name(): x for x in cameras}
camera_names["Create new camera."] = "create_camera"
window = CameraWindow(camera_names.keys())
window.exec_()
camera = camera_names[window.camera]
is_static_image_plane = window.static_image_plane
is_in_all_views = window.show_in_all_views
if camera == "create_camera":
camera = pm.createNode("camera")
@@ -111,13 +128,14 @@ class ImagePlaneLoader(api.Loader):
# Create image plane
image_plane_transform, image_plane_shape = pm.imagePlane(
camera=camera, showInAllViews=False
fileName=context["representation"]["data"]["path"],
camera=camera, showInAllViews=is_in_all_views
)
image_plane_shape.depth.set(image_plane_depth)
image_plane_shape.imageName.set(
context["representation"]["data"]["path"]
)
if is_static_image_plane:
image_plane_shape.detach()
image_plane_transform.setRotation(camera.getRotation())
start_frame = pm.playbackOptions(q=True, min=True)
end_frame = pm.playbackOptions(q=True, max=True)

View file

@@ -40,85 +40,88 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
except ValueError:
family = "model"
group_name = "{}:_GRP".format(namespace)
# True by default to keep legacy behaviours
attach_to_root = options.get("attach_to_root", True)
with maya.maintained_selection():
groupName = "{}:_GRP".format(namespace)
cmds.loadPlugin("AbcImport.mll", quiet=True)
nodes = cmds.file(self.fname,
namespace=namespace,
sharedReferenceFile=False,
groupReference=True,
groupName=groupName,
reference=True,
returnNewNodes=True)
# namespace = cmds.referenceQuery(nodes[0], namespace=True)
returnNewNodes=True,
groupReference=attach_to_root,
groupName=group_name)
shapes = cmds.ls(nodes, shapes=True, long=True)
newNodes = (list(set(nodes) - set(shapes)))
new_nodes = (list(set(nodes) - set(shapes)))
current_namespace = pm.namespaceInfo(currentNamespace=True)
if current_namespace != ":":
groupName = current_namespace + ":" + groupName
group_name = current_namespace + ":" + group_name
groupNode = pm.PyNode(groupName)
roots = set()
self[:] = new_nodes
for node in newNodes:
try:
roots.add(pm.PyNode(node).getAllParents()[-2])
except: # noqa: E722
pass
if attach_to_root:
group_node = pm.PyNode(group_name)
roots = set()
if family not in ["layout", "setdress", "mayaAscii", "mayaScene"]:
for node in new_nodes:
try:
roots.add(pm.PyNode(node).getAllParents()[-2])
except: # noqa: E722
pass
if family not in ["layout", "setdress",
"mayaAscii", "mayaScene"]:
for root in roots:
root.setParent(world=True)
group_node.zeroTransformPivots()
for root in roots:
root.setParent(world=True)
root.setParent(group_node)
groupNode.zeroTransformPivots()
for root in roots:
root.setParent(groupNode)
cmds.setAttr(group_name + ".displayHandle", 1)
cmds.setAttr(groupName + ".displayHandle", 1)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
group_node.useOutlinerColor.set(1)
group_node.outlinerColor.set(
(float(c[0]) / 255),
(float(c[1]) / 255),
(float(c[2]) / 255))
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
groupNode.useOutlinerColor.set(1)
groupNode.outlinerColor.set(
(float(c[0])/255),
(float(c[1])/255),
(float(c[2])/255)
)
self[:] = newNodes
cmds.setAttr(groupName + ".displayHandle", 1)
# get bounding box
bbox = cmds.exactWorldBoundingBox(groupName)
# get pivot position on world space
pivot = cmds.xform(groupName, q=True, sp=True, ws=True)
# center of bounding box
cx = (bbox[0] + bbox[3]) / 2
cy = (bbox[1] + bbox[4]) / 2
cz = (bbox[2] + bbox[5]) / 2
# add pivot position to calculate offset
cx = cx + pivot[0]
cy = cy + pivot[1]
cz = cz + pivot[2]
# set selection handle offset to center of bounding box
cmds.setAttr(groupName + ".selectHandleX", cx)
cmds.setAttr(groupName + ".selectHandleY", cy)
cmds.setAttr(groupName + ".selectHandleZ", cz)
cmds.setAttr(group_name + ".displayHandle", 1)
# get bounding box
bbox = cmds.exactWorldBoundingBox(group_name)
# get pivot position on world space
pivot = cmds.xform(group_name, q=True, sp=True, ws=True)
# center of bounding box
cx = (bbox[0] + bbox[3]) / 2
cy = (bbox[1] + bbox[4]) / 2
cz = (bbox[2] + bbox[5]) / 2
# add pivot position to calculate offset
cx = cx + pivot[0]
cy = cy + pivot[1]
cz = cz + pivot[2]
# set selection handle offset to center of bounding box
cmds.setAttr(group_name + ".selectHandleX", cx)
cmds.setAttr(group_name + ".selectHandleY", cy)
cmds.setAttr(group_name + ".selectHandleZ", cz)
if family == "rig":
self._post_process_rig(name, namespace, context, options)
else:
if "translate" in options:
cmds.setAttr(groupName + ".t", *options["translate"])
return newNodes
if "translate" in options:
cmds.setAttr(group_name + ".t", *options["translate"])
return new_nodes
def switch(self, container, representation):
self.update(container, representation)
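The selection handle above is placed at the bounding-box centre offset by the pivot position; that arithmetic in isolation (the helper name is illustrative):

```python
def selection_handle_offset(bbox, pivot):
    """Centre of an axis-aligned bounding box, shifted by the pivot position.

    bbox is (xmin, ymin, zmin, xmax, ymax, zmax), the layout returned by
    cmds.exactWorldBoundingBox.
    """
    return tuple(
        (bbox[axis] + bbox[axis + 3]) / 2.0 + pivot[axis]
        for axis in range(3)
    )
```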

View file

@@ -22,15 +22,22 @@ class CollectMayaHistory(pyblish.api.InstancePlugin):
def process(self, instance):
# Collect the history with long names
history = cmds.listHistory(instance, leaf=False) or []
history = cmds.ls(history, long=True)
kwargs = {}
if int(cmds.about(version=True)) >= 2020:
# New flag since Maya 2020 which makes cmds.listHistory faster
kwargs = {"fastIteration": True}
else:
self.log.debug("Ignoring `fastIteration` flag before Maya 2020.")
# Remove invalid node types (like renderlayers)
invalid = cmds.ls(history, type="renderLayer", long=True)
if invalid:
invalid = set(invalid) # optimize lookup
history = [x for x in history if x not in invalid]
# Collect the history with long names
history = set(cmds.listHistory(instance, leaf=False, **kwargs) or [])
history = cmds.ls(list(history), long=True)
# Exclude invalid nodes (like renderlayers)
exclude = cmds.ls(type="renderLayer", long=True)
if exclude:
exclude = set(exclude) # optimize lookup
history = [x for x in history if x not in exclude]
# Combine members with history
members = instance[:] + history

View file

@@ -492,6 +492,8 @@ class CollectLook(pyblish.api.InstancePlugin):
if not cmds.attributeQuery(attr, node=node, exists=True):
continue
attribute = "{}.{}".format(node, attr)
if cmds.getAttr(attribute, type=True) == "message":
continue
node_attributes[attr] = cmds.getAttr(attribute)
attributes.append({"name": node,

View file

@@ -224,14 +224,19 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
# append full path
full_exp_files = []
aov_dict = {}
default_render_file = context.data.get('project_settings')\
.get('maya')\
.get('create')\
.get('CreateRender')\
.get('default_render_image_folder')
# replace relative paths with absolute. Render products are
# returned as list of dictionaries.
publish_meta_path = None
for aov in exp_files:
full_paths = []
for file in aov[aov.keys()[0]]:
full_path = os.path.join(workspace, "renders", file)
full_path = os.path.join(workspace, default_render_file,
file)
full_path = full_path.replace("\\", "/")
full_paths.append(full_path)
publish_meta_path = os.path.dirname(full_path)

View file

@@ -55,8 +55,16 @@ def maketx(source, destination, *args):
str: Output of `maketx` command.
"""
from openpype.lib import get_oiio_tools_path
maketx_path = get_oiio_tools_path("maketx")
if not os.path.exists(maketx_path):
print(
"OIIO tool not found in {}".format(maketx_path))
raise AssertionError("OIIO tool not found")
cmd = [
"maketx",
maketx_path,
"-v", # verbose
"-u", # update mode
# unpremultiply before conversion (recommended when alpha present)

View file

@@ -23,11 +23,24 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin):
def process(self, instance):
assert get_file_rule("images") == "renders", (
"Workspace's `images` file rule must be set to: renders"
default_render_file = self.get_default_render_image_folder(instance)
assert get_file_rule("images") == default_render_file, (
"Workspace's `images` file rule must be set to: {}".format(
default_render_file
)
)
@classmethod
def repair(cls, instance):
pm.workspace.fileRules["images"] = "renders"
default = cls.get_default_render_image_folder(instance)
pm.workspace.fileRules["images"] = default
pm.system.Workspace.save()
@staticmethod
def get_default_render_image_folder(instance):
return instance.context.data.get('project_settings')\
.get('maya') \
.get('create') \
.get('CreateRender') \
.get('default_render_image_folder')
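The chained `.get()` calls above raise `AttributeError` as soon as one level is missing (because `.get` then returns `None`); a defensive alternative, shown only as an assumption and not an OpenPype API:

```python
def get_nested(data, *keys, **kwargs):
    """Walk nested dicts, returning a default at the first missing key."""
    default = kwargs.get("default")
    for key in keys:
        if not isinstance(data, dict) or key not in data:
            return default
        data = data[key]
    return data
```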

View file

@@ -164,7 +164,8 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
continue
# Ignore proxy connections.
if cmds.addAttr(plug, query=True, usedAsProxy=True):
if (cmds.addAttr(plug, query=True, exists=True) and
cmds.addAttr(plug, query=True, usedAsProxy=True)):
continue
# Check for incoming connections

View file

@@ -42,10 +42,14 @@ class NukeRenderLocal(openpype.api.Extractor):
self.log.info("Start frame: {}".format(first_frame))
self.log.info("End frame: {}".format(last_frame))
# write node url might contain nuke's tcl expression
# as [python ...]/path...
path = node["file"].evaluate()
# Ensure output directory exists.
directory = os.path.dirname(node["file"].value())
if not os.path.exists(directory):
os.makedirs(directory)
out_dir = os.path.dirname(path)
if not os.path.exists(out_dir):
os.makedirs(out_dir)
# Render frames
nuke.execute(
@@ -58,15 +62,12 @@ class NukeRenderLocal(openpype.api.Extractor):
if "slate" in families:
first_frame += 1
path = node['file'].value()
out_dir = os.path.dirname(path)
ext = node["file_type"].value()
if "representations" not in instance.data:
instance.data["representations"] = []
collected_frames = os.listdir(out_dir)
if len(collected_frames) == 1:
repre = {
'name': ext,

View file

@@ -42,6 +42,7 @@ class ExtractReviewDataMov(openpype.api.Extractor):
# generate data
with anlib.maintained_selection():
generated_repres = []
for o_name, o_data in self.outputs.items():
f_families = o_data["filter"]["families"]
f_task_types = o_data["filter"]["task_types"]
@@ -112,11 +113,13 @@ class ExtractReviewDataMov(openpype.api.Extractor):
})
else:
data = exporter.generate_mov(**o_data)
generated_repres.extend(data["representations"])
self.log.info(data["representations"])
self.log.info(generated_repres)
# assign to representations
instance.data["representations"] += data["representations"]
if generated_repres:
# assign to representations
instance.data["representations"] += generated_repres
self.log.debug(
"_ representations: {}".format(

View file

@@ -67,7 +67,9 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin):
if not repre.get("files"):
msg = ("no frames were collected, "
"you need to render them")
"you need to render them.\n"
"Check properties of write node (group) and "
"select 'Local' option in 'Publish' dropdown.")
self.log.error(msg)
raise ValidationException(msg)

View file

@@ -8,7 +8,7 @@ from avalon import photoshop
class CollectCurrentFile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
order = pyblish.api.CollectorOrder - 0.5
order = pyblish.api.CollectorOrder - 0.49
label = "Current File"
hosts = ["photoshop"]

View file

@@ -0,0 +1,57 @@
import os
import re
import pyblish.api
from avalon import photoshop
class CollectExtensionVersion(pyblish.api.ContextPlugin):
""" Pulls and compares version of installed extension.
It is recommended to use the same extension as provided with the OpenPype
code. Please use Anastasiy's Extension Manager or ZXPInstaller to update
the extension in case of an error.
You can locate extension.zxp in your installed Openpype code in
`repos/avalon-core/avalon/photoshop`
"""
# This technically should be a validator, but other collectors might be
# impacted with usage of obsolete extension, so collector that runs first
# was chosen
order = pyblish.api.CollectorOrder - 0.5
label = "Collect extension version"
hosts = ["photoshop"]
optional = True
active = True
def process(self, context):
installed_version = photoshop.stub().get_extension_version()
if not installed_version:
raise ValueError("Unknown version, probably old extension")
manifest_url = os.path.join(os.path.dirname(photoshop.__file__),
"extension", "CSXS", "manifest.xml")
if not os.path.exists(manifest_url):
self.log.debug("Unable to locate extension manifest, not checking")
return
expected_version = None
with open(manifest_url) as fp:
content = fp.read()
found = re.findall(r'(ExtensionBundleVersion=")([0-9\.]+)(")',
content)
if found:
expected_version = found[0][1]
if expected_version != installed_version:
msg = "Expected version '{}' found '{}'\n".format(
expected_version, installed_version)
msg += "Please update your installed extension, it might not work "
msg += "properly."
raise ValueError(msg)
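The manifest lookup boils down to one regular expression; a standalone sketch (with the digit class written as `[0-9.]`, since `[0-10.]` only matches the characters 0, 1 and the dot):

```python
import re


def extension_bundle_version(manifest_xml):
    """Pull ExtensionBundleVersion out of a CSXS manifest string."""
    found = re.findall(r'ExtensionBundleVersion="([0-9.]+)"', manifest_xml)
    return found[0] if found else None
```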

View file

@@ -1,3 +1,5 @@
import re
import pyblish.api
import openpype.api
from avalon import photoshop
@@ -19,20 +21,33 @@ class ValidateNamingRepair(pyblish.api.Action):
and result["instance"] not in failed):
failed.append(result["instance"])
invalid_chars, replace_char = plugin.get_replace_chars()
self.log.info("{} --- {}".format(invalid_chars, replace_char))
# Apply pyblish.logic to get the instances for the plug-in
instances = pyblish.api.instances_by_plugin(failed, plugin)
stub = photoshop.stub()
for instance in instances:
self.log.info("validate_naming instance {}".format(instance))
name = instance.data["name"].replace(" ", "_")
name = name.replace(instance.data["family"], '')
instance[0].Name = name
data = stub.read(instance[0])
data["subset"] = "image" + name
stub.imprint(instance[0], data)
metadata = stub.read(instance[0])
self.log.info("metadata instance {}".format(metadata))
layer_name = None
if metadata.get("uuid"):
layer_data = stub.get_layer(metadata["uuid"])
self.log.info("layer_data {}".format(layer_data))
if layer_data:
layer_name = re.sub(invalid_chars,
replace_char,
layer_data.name)
name = stub.PUBLISH_ICON + name
stub.rename_layer(instance.data["uuid"], name)
stub.rename_layer(instance.data["uuid"], layer_name)
subset_name = re.sub(invalid_chars, replace_char,
instance.data["name"])
instance[0].Name = layer_name or subset_name
metadata["subset"] = subset_name
stub.imprint(instance[0], metadata)
return True
@@ -49,12 +64,21 @@ class ValidateNaming(pyblish.api.InstancePlugin):
families = ["image"]
actions = [ValidateNamingRepair]
# configured by Settings
invalid_chars = ''
replace_char = ''
def process(self, instance):
help_msg = ' Use Repair action (A) in Pyblish to fix it.'
msg = "Name \"{}\" is not allowed.{}".format(instance.data["name"],
help_msg)
assert " " not in instance.data["name"], msg
assert not re.search(self.invalid_chars, instance.data["name"]), msg
msg = "Subset \"{}\" is not allowed.{}".format(instance.data["subset"],
help_msg)
assert " " not in instance.data["subset"], msg
assert not re.search(self.invalid_chars, instance.data["subset"]), msg
@classmethod
def get_replace_chars(cls):
"""Pass values configured in Settings for Repair."""
return cls.invalid_chars, cls.replace_char
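The repair action reduces to a single `re.sub`; a sketch with illustrative defaults (the real `invalid_chars`/`replace_char` values come from project Settings):

```python
import re


def sanitize_name(name, invalid_chars=r"[ .]", replace_char="_"):
    """Replace characters disallowed in subset/layer names."""
    # the default pattern and replacement here are assumptions; the plugin
    # reads both values from Settings
    return re.sub(invalid_chars, replace_char, name)
```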

View file

@@ -1,3 +1,6 @@
import os
def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
defaults = {
@@ -6,3 +9,12 @@ def add_implementation_envs(env, _app):
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_launch_script_path():
current_dir = os.path.dirname(os.path.abspath(__file__))
return os.path.join(
current_dir,
"api",
"launch_script.py"
)

View file

@@ -1,93 +1,49 @@
import os
import logging
from .communication_server import CommunicationWrapper
from . import lib
from . import launch_script
from . import workio
from . import pipeline
from . import plugin
from .pipeline import (
install,
uninstall,
maintained_selection,
remove_instance,
list_instances,
ls
)
import requests
import avalon.api
import pyblish.api
from avalon.tvpaint import pipeline
from avalon.tvpaint.communication_server import register_localization_file
from .lib import set_context_settings
from openpype.hosts import tvpaint
from openpype.api import get_current_project_settings
log = logging.getLogger(__name__)
HOST_DIR = os.path.dirname(os.path.abspath(tvpaint.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
from .workio import (
open_file,
save_file,
current_file,
has_unsaved_changes,
file_extensions,
work_root,
)
def on_instance_toggle(instance, old_value, new_value):
# Review may not have a real instance in workfile metadata
if not instance.data.get("uuid"):
return
__all__ = (
"CommunicationWrapper",
instance_id = instance.data["uuid"]
found_idx = None
current_instances = pipeline.list_instances()
for idx, workfile_instance in enumerate(current_instances):
if workfile_instance["uuid"] == instance_id:
found_idx = idx
break
"lib",
"launch_script",
"workio",
"pipeline",
"plugin",
if found_idx is None:
return
"install",
"uninstall",
"maintained_selection",
"remove_instance",
"list_instances",
"ls",
if "active" in current_instances[found_idx]:
current_instances[found_idx]["active"] = new_value
pipeline._write_instances(current_instances)
def initial_launch():
# Setup project settings if its the template that's launched.
# TODO also check for template creation when it's possible to define
# templates
last_workfile = os.environ.get("AVALON_LAST_WORKFILE")
if not last_workfile or os.path.exists(last_workfile):
return
log.info("Setting up project...")
set_context_settings()
def application_exit():
data = get_current_project_settings()
stop_timer = data["tvpaint"]["stop_timer_on_application_exit"]
if not stop_timer:
return
# Stop application timer.
webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url)
requests.post(rest_api_url)
def install():
log.info("OpenPype - Installing TVPaint integration")
localization_file = os.path.join(HOST_DIR, "resources", "avalon.loc")
register_localization_file(localization_file)
pyblish.api.register_plugin_path(PUBLISH_PATH)
avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
registered_callbacks = (
pyblish.api.registered_callbacks().get("instanceToggled") or []
)
if on_instance_toggle not in registered_callbacks:
pyblish.api.register_callback("instanceToggled", on_instance_toggle)
avalon.api.on("application.launched", initial_launch)
avalon.api.on("application.exit", application_exit)
def uninstall():
log.info("OpenPype - Uninstalling TVPaint integration")
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH)
# Workfiles API
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root"
)

View file

@ -0,0 +1,939 @@
import os
import json
import time
import subprocess
import collections
import asyncio
import logging
import socket
import platform
import filecmp
import tempfile
import threading
import shutil
from queue import Queue
from contextlib import closing
from aiohttp import web
from aiohttp_json_rpc import JsonRpc
from aiohttp_json_rpc.protocol import (
encode_request, encode_error, decode_msg, JsonRpcMsgTyp
)
from aiohttp_json_rpc.exceptions import RpcError
from avalon import api
from openpype.hosts.tvpaint.tvpaint_plugin import get_plugin_files_path
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
class CommunicationWrapper:
# TODO add logs and exceptions
communicator = None
log = logging.getLogger("CommunicationWrapper")
@classmethod
def create_qt_communicator(cls, *args, **kwargs):
"""Create communicator for Artist usage."""
communicator = QtCommunicator(*args, **kwargs)
cls.set_communicator(communicator)
return communicator
@classmethod
def set_communicator(cls, communicator):
if not cls.communicator:
cls.communicator = communicator
else:
cls.log.warning("Communicator was set multiple times.")
@classmethod
def client(cls):
if not cls.communicator:
return None
return cls.communicator.client()
@classmethod
def execute_george(cls, george_script):
        """Execute passed george script in TVPaint."""
if not cls.communicator:
return
return cls.communicator.execute_george(george_script)
class WebSocketServer:
def __init__(self):
self.client = None
self.loop = asyncio.new_event_loop()
self.app = web.Application(loop=self.loop)
self.port = self.find_free_port()
self.websocket_thread = WebsocketServerThread(
self, self.port, loop=self.loop
)
@property
def server_is_running(self):
return self.websocket_thread.server_is_running
def add_route(self, *args, **kwargs):
self.app.router.add_route(*args, **kwargs)
@staticmethod
def find_free_port():
with closing(
socket.socket(socket.AF_INET, socket.SOCK_STREAM)
) as sock:
sock.bind(("", 0))
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
port = sock.getsockname()[1]
return port
def start(self):
self.websocket_thread.start()
def stop(self):
try:
if self.websocket_thread.is_running:
log.debug("Stopping websocket server")
self.websocket_thread.is_running = False
self.websocket_thread.stop()
except Exception:
log.warning(
                "Error occurred while killing the websocket server",
exc_info=True
)
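The `find_free_port` static method above uses a standard OS trick: binding to port 0 asks the kernel for any available port. A self-contained sketch of the same idea (standalone, outside the class):

```python
import socket
from contextlib import closing


def find_free_port():
    # Binding to port 0 lets the OS pick any available port;
    # getsockname() then reveals which one was chosen.
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", 0))
        port = sock.getsockname()[1]
    return port


port = find_free_port()
print(port)
```

Note that this sketch sets `SO_REUSEADDR` before `bind()`; the flag only influences the bind itself, so setting it afterwards (as the method above does) is effectively a no-op for this throwaway socket.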
class WebsocketServerThread(threading.Thread):
    """Listener for websocket rpc requests.

    It would probably be better to "attach" this to the main thread (as, for
    example, Harmony needs to run something on the main thread), but currently
    it creates a separate thread and a separate asyncio event loop.
    """
def __init__(self, module, port, loop):
super(WebsocketServerThread, self).__init__()
self.is_running = False
self.server_is_running = False
self.port = port
self.module = module
self.loop = loop
self.runner = None
self.site = None
self.tasks = []
def run(self):
self.is_running = True
try:
log.debug("Starting websocket server")
self.loop.run_until_complete(self.start_server())
log.info(
"Running Websocket server on URL:"
" \"ws://localhost:{}\"".format(self.port)
)
asyncio.ensure_future(self.check_shutdown(), loop=self.loop)
self.server_is_running = True
self.loop.run_forever()
except Exception:
log.warning(
"Websocket Server service has failed", exc_info=True
)
finally:
self.server_is_running = False
# optional
self.loop.close()
self.is_running = False
log.info("Websocket server stopped")
async def start_server(self):
        """Start the runner and the TCPSite."""
self.runner = web.AppRunner(self.module.app)
await self.runner.setup()
self.site = web.TCPSite(self.runner, "localhost", self.port)
await self.site.start()
def stop(self):
"""Sets is_running flag to false, 'check_shutdown' shuts server down"""
self.is_running = False
async def check_shutdown(self):
        """Future that runs periodically and checks whether the server
        should still be running.
        """
while self.is_running:
while self.tasks:
task = self.tasks.pop(0)
log.debug("waiting for task {}".format(task))
await task
                log.debug("returned value {}".format(task.result()))
await asyncio.sleep(0.5)
log.debug("## Server shutdown started")
await self.site.stop()
log.debug("# Site stopped")
await self.runner.cleanup()
log.debug("# Server runner stopped")
tasks = [
task for task in asyncio.all_tasks()
if task is not asyncio.current_task()
]
list(map(lambda task: task.cancel(), tasks)) # cancel all the tasks
results = await asyncio.gather(*tasks, return_exceptions=True)
log.debug(f"Finished awaiting cancelled tasks, results: {results}...")
await self.loop.shutdown_asyncgens()
# to really make sure everything else has time to stop
await asyncio.sleep(0.07)
self.loop.stop()
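The shutdown dance above, a flag polled by a coroutine that then stops the loop from inside, can be reduced to a minimal standalone sketch; the class and names here are illustrative, not the original API:

```python
import asyncio
import threading
import time


class LoopThread(threading.Thread):
    """Thread that owns its own asyncio event loop with flag-based shutdown."""

    def __init__(self):
        super().__init__()
        self.is_running = False
        self.loop = asyncio.new_event_loop()

    def run(self):
        self.is_running = True
        # Schedule the shutdown watcher before the loop starts spinning
        asyncio.ensure_future(self.check_shutdown(), loop=self.loop)
        try:
            self.loop.run_forever()
        finally:
            self.loop.close()

    async def check_shutdown(self):
        # Poll the flag; once it flips, stop the loop from inside
        while self.is_running:
            await asyncio.sleep(0.05)
        self.loop.stop()

    def stop(self):
        self.is_running = False


thread = LoopThread()
thread.start()
time.sleep(0.2)
thread.stop()
thread.join(timeout=2)
print(thread.is_alive())  # False
```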
class BaseTVPaintRpc(JsonRpc):
def __init__(self, communication_obj, route_name="", **kwargs):
super().__init__(**kwargs)
self.requests_ids = collections.defaultdict(lambda: 0)
self.waiting_requests = collections.defaultdict(list)
self.responses = collections.defaultdict(list)
self.route_name = route_name
self.communication_obj = communication_obj
async def _handle_rpc_msg(self, http_request, raw_msg):
        # This duplicates code from super, but there is no other way to
        #   handle server -> client requests
host = http_request.host
if host in self.waiting_requests:
try:
_raw_message = raw_msg.data
msg = decode_msg(_raw_message)
except RpcError as error:
await self._ws_send_str(http_request, encode_error(error))
return
if msg.type in (JsonRpcMsgTyp.RESULT, JsonRpcMsgTyp.ERROR):
msg_data = json.loads(_raw_message)
if msg_data.get("id") in self.waiting_requests[host]:
self.responses[host].append(msg_data)
return
return await super()._handle_rpc_msg(http_request, raw_msg)
def client_connected(self):
        # TODO This is a poor check. Add a check that the client is from TVPaint
if self.clients:
return True
return False
def send_notification(self, client, method, params=None):
if params is None:
params = []
asyncio.run_coroutine_threadsafe(
client.ws.send_str(encode_request(method, params=params)),
loop=self.loop
)
def send_request(self, client, method, params=None, timeout=0):
if params is None:
params = []
client_host = client.host
request_id = self.requests_ids[client_host]
self.requests_ids[client_host] += 1
self.waiting_requests[client_host].append(request_id)
log.debug("Sending request to client {} ({}, {}) id: {}".format(
client_host, method, params, request_id
))
future = asyncio.run_coroutine_threadsafe(
client.ws.send_str(encode_request(method, request_id, params)),
loop=self.loop
)
result = future.result()
not_found = object()
response = not_found
start = time.time()
while True:
if client.ws.closed:
return None
for _response in self.responses[client_host]:
_id = _response.get("id")
if _id == request_id:
response = _response
break
if response is not not_found:
break
if timeout > 0 and (time.time() - start) > timeout:
raise Exception("Timeout passed")
return
time.sleep(0.1)
if response is not_found:
raise Exception("Connection closed")
self.responses[client_host].remove(response)
error = response.get("error")
result = response.get("result")
if error:
raise Exception("Error happened: {}".format(error))
return result
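`send_request` correlates responses with requests by id, polling a shared list of responses until a match arrives or the timeout passes. A stripped-down sketch of that loop (names invented, no websocket involved):

```python
import time

# Shared list that a receiving thread would append decoded responses to
responses = []


def wait_for_response(request_id, timeout=1.0, poll=0.01):
    start = time.time()
    while True:
        # Look for a response whose "id" matches our request
        for response in responses:
            if response.get("id") == request_id:
                responses.remove(response)
                return response
        if timeout > 0 and (time.time() - start) > timeout:
            raise TimeoutError("Timeout passed")
        time.sleep(poll)


# Simulate the client answering request id 7 before we start polling
responses.append({"id": 7, "result": "ok"})
print(wait_for_response(7)["result"])  # ok
```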
class QtTVPaintRpc(BaseTVPaintRpc):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
from openpype.tools.utils import host_tools
self.tools_helper = host_tools.HostToolsHelper()
route_name = self.route_name
# Register methods
self.add_methods(
(route_name, self.workfiles_tool),
(route_name, self.loader_tool),
(route_name, self.creator_tool),
(route_name, self.subset_manager_tool),
(route_name, self.publish_tool),
(route_name, self.scene_inventory_tool),
(route_name, self.library_loader_tool),
(route_name, self.experimental_tools)
)
# Panel routes for tools
async def workfiles_tool(self):
log.info("Triggering Workfile tool")
item = MainThreadItem(self.tools_helper.show_workfiles)
self._execute_in_main_thread(item)
return
async def loader_tool(self):
log.info("Triggering Loader tool")
item = MainThreadItem(self.tools_helper.show_loader)
self._execute_in_main_thread(item)
return
async def creator_tool(self):
log.info("Triggering Creator tool")
item = MainThreadItem(self.tools_helper.show_creator)
await self._async_execute_in_main_thread(item, wait=False)
async def subset_manager_tool(self):
log.info("Triggering Subset Manager tool")
item = MainThreadItem(self.tools_helper.show_subset_manager)
# Do not wait for result of callback
self._execute_in_main_thread(item, wait=False)
return
async def publish_tool(self):
log.info("Triggering Publish tool")
item = MainThreadItem(self.tools_helper.show_publish)
self._execute_in_main_thread(item)
return
async def scene_inventory_tool(self):
        """Open Scene Inventory tool.

        Function can't confirm whether the tool was opened, because part of
        SceneInventory initialization sends a websocket request to the host,
        but the host can't respond while it is waiting for the response to
        this call.
        """
log.info("Triggering Scene inventory tool")
item = MainThreadItem(self.tools_helper.show_scene_inventory)
# Do not wait for result of callback
self._execute_in_main_thread(item, wait=False)
return
async def library_loader_tool(self):
log.info("Triggering Library loader tool")
item = MainThreadItem(self.tools_helper.show_library_loader)
self._execute_in_main_thread(item)
return
async def experimental_tools(self):
        log.info("Triggering Experimental tools dialog")
item = MainThreadItem(self.tools_helper.show_experimental_tools_dialog)
self._execute_in_main_thread(item)
return
async def _async_execute_in_main_thread(self, item, **kwargs):
await self.communication_obj.async_execute_in_main_thread(
item, **kwargs
)
def _execute_in_main_thread(self, item, **kwargs):
return self.communication_obj.execute_in_main_thread(item, **kwargs)
class MainThreadItem:
    """Structure to store information about callback in main thread.

    Item should be used to execute a callback in the main thread, which may
    be needed for execution of Qt objects.

    Item stores the callback (a callable), its arguments and keyword
    arguments, and holds information about its processing.
    """
"""
not_set = object()
sleep_time = 0.1
def __init__(self, callback, *args, **kwargs):
self.done = False
self.exception = self.not_set
self.result = self.not_set
self.callback = callback
self.args = args
self.kwargs = kwargs
def execute(self):
        """Execute callback and store its result.

        Method must be called from the main thread. Item is marked as `done`
        when callback execution finishes. Stores the callback's output, or
        exception information when the callback raises one.
        """
log.debug("Executing process in main thread")
if self.done:
log.warning("- item is already processed")
return
callback = self.callback
args = self.args
kwargs = self.kwargs
log.info("Running callback: {}".format(str(callback)))
try:
result = callback(*args, **kwargs)
self.result = result
except Exception as exc:
self.exception = exc
finally:
self.done = True
def wait(self):
        """Wait for result from main thread.

        This method blocks the current thread until the callback is executed.

        Returns:
            object: Output of callback. May be any type or object.

        Raises:
            Exception: Reraise any exception that happened during callback
                execution.
        """
while not self.done:
time.sleep(self.sleep_time)
if self.exception is self.not_set:
return self.result
raise self.exception
async def async_wait(self):
"""Wait for result from main thread.
Returns:
object: Output of callback. May be any type or object.
Raises:
Exception: Reraise any exception that happened during callback
execution.
"""
while not self.done:
await asyncio.sleep(self.sleep_time)
if self.exception is self.not_set:
return self.result
raise self.exception
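`MainThreadItem` together with the communicator's callback queue forms a classic "execute on the main thread and wait" pattern. A hypothetical, self-contained reduction of it (no Qt, a plain loop stands in for the timer-driven main thread):

```python
import threading
import time
from queue import Queue


class Item:
    """Stripped-down stand-in for MainThreadItem."""

    def __init__(self, callback, *args):
        self.callback = callback
        self.args = args
        self.done = False
        self.result = None

    def wait(self):
        # Block until the main loop marks the item as done
        while not self.done:
            time.sleep(0.01)
        return self.result


callback_queue = Queue()


def execute_in_main_thread(callback, *args):
    item = Item(callback, *args)
    callback_queue.put(item)
    return item.wait()


def main_loop(stop_event):
    # Stand-in for the Qt timer that polls the queue periodically
    while not stop_event.is_set():
        if not callback_queue.empty():
            item = callback_queue.get()
            item.result = item.callback(*item.args)
            item.done = True
        time.sleep(0.01)


stop_event = threading.Event()
main = threading.Thread(target=main_loop, args=(stop_event,))
main.start()

results = []
worker = threading.Thread(
    target=lambda: results.append(execute_in_main_thread(sum, [1, 2, 3]))
)
worker.start()
worker.join()
stop_event.set()
main.join()
print(results)  # [6]
```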
class BaseCommunicator:
def __init__(self):
self.process = None
self.websocket_server = None
self.websocket_rpc = None
self.exit_code = None
self._connected_client = None
@property
def server_is_running(self):
if self.websocket_server is None:
return False
return self.websocket_server.server_is_running
def _windows_file_process(self, src_dst_mapping, to_remove):
"""Windows specific file processing asking for admin permissions.
It is required to have administration permissions to modify plugin
files in TVPaint installation folder.
Method requires `pywin32` python module.
Args:
src_dst_mapping (list, tuple, set): Mapping of source file to
destination. Both must be full path. Each item must be iterable
of size 2 `(C:/src/file.dll, C:/dst/file.dll)`.
to_remove (list): Fullpath to files that should be removed.
"""
import pythoncom
from win32comext.shell import shell
        # Create temp folder where plugin files are temporarily copied
        # - the reason is that copying to TVPaint requires administrator
        #   permissions, but the admin may not have access to the source folder
tmp_dir = os.path.normpath(
tempfile.mkdtemp(prefix="tvpaint_copy_")
)
# Copy source to temp folder and create new mapping
dst_folders = collections.defaultdict(list)
new_src_dst_mapping = []
for old_src, dst in src_dst_mapping:
new_src = os.path.join(tmp_dir, os.path.split(old_src)[1])
shutil.copy(old_src, new_src)
new_src_dst_mapping.append((new_src, dst))
for src, dst in new_src_dst_mapping:
src = os.path.normpath(src)
dst = os.path.normpath(dst)
dst_filename = os.path.basename(dst)
dst_folder_path = os.path.dirname(dst)
dst_folders[dst_folder_path].append((dst_filename, src))
# create an instance of IFileOperation
fo = pythoncom.CoCreateInstance(
shell.CLSID_FileOperation,
None,
pythoncom.CLSCTX_ALL,
shell.IID_IFileOperation
)
# Add delete command to file operation object
for filepath in to_remove:
item = shell.SHCreateItemFromParsingName(
filepath, None, shell.IID_IShellItem
)
fo.DeleteItem(item)
# here you can use SetOperationFlags, progress Sinks, etc.
for folder_path, items in dst_folders.items():
# create an instance of IShellItem for the target folder
folder_item = shell.SHCreateItemFromParsingName(
folder_path, None, shell.IID_IShellItem
)
for _dst_filename, source_file_path in items:
# create an instance of IShellItem for the source item
copy_item = shell.SHCreateItemFromParsingName(
source_file_path, None, shell.IID_IShellItem
)
# queue the copy operation
fo.CopyItem(copy_item, folder_item, _dst_filename, None)
# commit
fo.PerformOperations()
# Remove temp folder
shutil.rmtree(tmp_dir)
def _prepare_windows_plugin(self, launch_args):
        """Copy plugin to TVPaint plugins and set PATH to dependencies.

        Checks whether the plugin in TVPaint's plugins folder exists and
        matches the current implementation version, using the 64-bit or
        32-bit variant of the plugin as appropriate. The path to libraries
        required by the plugin is added to the PATH variable.
        """
host_executable = launch_args[0]
executable_file = os.path.basename(host_executable)
if "64bit" in executable_file:
subfolder = "windows_x64"
elif "32bit" in executable_file:
subfolder = "windows_x86"
else:
raise ValueError(
"Can't determine if executable "
"leads to 32-bit or 64-bit TVPaint!"
)
plugin_files_path = get_plugin_files_path()
# Folder for right windows plugin files
source_plugins_dir = os.path.join(plugin_files_path, subfolder)
        # Path to libraries (.dll) required by the plugin library
# - additional libraries can be copied to TVPaint installation folder
# (next to executable) or added to PATH environment variable
additional_libs_folder = os.path.join(
source_plugins_dir,
"additional_libraries"
)
additional_libs_folder = additional_libs_folder.replace("\\", "/")
if additional_libs_folder not in os.environ["PATH"]:
os.environ["PATH"] += (os.pathsep + additional_libs_folder)
# Path to TVPaint's plugins folder (where we want to add our plugin)
host_plugins_path = os.path.join(
os.path.dirname(host_executable),
"plugins"
)
# Files that must be copied to TVPaint's plugin folder
plugin_dir = os.path.join(source_plugins_dir, "plugin")
to_copy = []
to_remove = []
# Remove old plugin name
deprecated_filepath = os.path.join(
host_plugins_path, "AvalonPlugin.dll"
)
if os.path.exists(deprecated_filepath):
to_remove.append(deprecated_filepath)
for filename in os.listdir(plugin_dir):
src_full_path = os.path.join(plugin_dir, filename)
dst_full_path = os.path.join(host_plugins_path, filename)
if dst_full_path in to_remove:
to_remove.remove(dst_full_path)
if (
not os.path.exists(dst_full_path)
or not filecmp.cmp(src_full_path, dst_full_path)
):
to_copy.append((src_full_path, dst_full_path))
# Skip copy if everything is done
if not to_copy and not to_remove:
return
# Try to copy
try:
self._windows_file_process(to_copy, to_remove)
except Exception:
log.error("Plugin copy failed", exc_info=True)
# Validate copy was done
invalid_copy = []
for src, dst in to_copy:
if not os.path.exists(dst) or not filecmp.cmp(src, dst):
invalid_copy.append((src, dst))
        # Validate delete was done
invalid_remove = []
for filepath in to_remove:
if os.path.exists(filepath):
invalid_remove.append(filepath)
if not invalid_remove and not invalid_copy:
return
msg_parts = []
if invalid_remove:
msg_parts.append(
"Failed to remove files: {}".format(", ".join(invalid_remove))
)
if invalid_copy:
_invalid = [
"\"{}\" -> \"{}\"".format(src, dst)
for src, dst in invalid_copy
]
msg_parts.append(
"Failed to copy files: {}".format(", ".join(_invalid))
)
raise RuntimeError(" & ".join(msg_parts))
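The copy bookkeeping above decides per file whether a copy is needed: the file is missing at the destination, or its content differs per `filecmp.cmp`. A standalone sketch of that decision, with invented directories and contents:

```python
import filecmp
import os
import tempfile

src_dir = tempfile.mkdtemp(prefix="src_")
dst_dir = tempfile.mkdtemp(prefix="dst_")


def write(folder, filename, content):
    with open(os.path.join(folder, filename), "w") as stream:
        stream.write(content)


write(src_dir, "new.dll", "v2.0")   # missing at destination -> copy
write(src_dir, "same.dll", "v1")    # identical on both sides -> skip
write(dst_dir, "same.dll", "v1")
write(src_dir, "old.dll", "v2.0")   # outdated at destination -> copy
write(dst_dir, "old.dll", "v1")

to_copy = []
for filename in os.listdir(src_dir):
    src_full_path = os.path.join(src_dir, filename)
    dst_full_path = os.path.join(dst_dir, filename)
    if (
        not os.path.exists(dst_full_path)
        or not filecmp.cmp(src_full_path, dst_full_path)
    ):
        to_copy.append(filename)

print(sorted(to_copy))  # ['new.dll', 'old.dll']
```

With its default `shallow=True`, `filecmp.cmp` falls back to a byte-by-byte comparison whenever the stat signatures differ, which is why the identical file is skipped here.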
def _launch_tv_paint(self, launch_args):
flags = (
subprocess.DETACHED_PROCESS
| subprocess.CREATE_NEW_PROCESS_GROUP
)
env = os.environ.copy()
# Remove QuickTime from PATH on windows
# - quicktime overrides TVPaint's ffmpeg encode/decode which may
# cause issues on loading
if platform.system().lower() == "windows":
new_path = []
for path in env["PATH"].split(os.pathsep):
if path and "quicktime" not in path.lower():
new_path.append(path)
env["PATH"] = os.pathsep.join(new_path)
kwargs = {
"env": env,
"creationflags": flags
}
self.process = subprocess.Popen(launch_args, **kwargs)
def _create_routes(self):
self.websocket_rpc = BaseTVPaintRpc(
self, loop=self.websocket_server.loop
)
self.websocket_server.add_route(
"*", "/", self.websocket_rpc.handle_request
)
def _start_webserver(self):
self.websocket_server.start()
# Make sure RPC is using same loop as websocket server
while not self.websocket_server.server_is_running:
time.sleep(0.1)
def _stop_webserver(self):
self.websocket_server.stop()
def _exit(self, exit_code=None):
self._stop_webserver()
if exit_code is not None:
self.exit_code = exit_code
def stop(self):
"""Stop communication and currently running python process."""
log.info("Stopping communication")
self._exit()
def launch(self, launch_args):
        """Prepare all required data and launch host.

        First the websocket server is prepared as the communication point
        for the host; when the server is ready to use, the host is launched
        as a subprocess.
        """
if platform.system().lower() == "windows":
self._prepare_windows_plugin(launch_args)
# Launch TVPaint and the websocket server.
log.info("Launching TVPaint")
self.websocket_server = WebSocketServer()
self._create_routes()
os.environ["WEBSOCKET_URL"] = "ws://localhost:{}".format(
self.websocket_server.port
)
log.info("Added request handler for url: {}".format(
os.environ["WEBSOCKET_URL"]
))
self._start_webserver()
# Start TVPaint when server is running
self._launch_tv_paint(launch_args)
log.info("Waiting for client connection")
while True:
if self.process.poll() is not None:
log.debug("Host process is not alive. Exiting")
self._exit(1)
return
if self.websocket_rpc.client_connected():
log.info("Client has connected")
break
time.sleep(0.5)
self._on_client_connect()
api.emit("application.launched")
def _on_client_connect(self):
self._initial_textfile_write()
def _initial_textfile_write(self):
"""Show popup about Write to file at start of TVPaint."""
tmp_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".txt", delete=False
)
tmp_file.close()
tmp_filepath = tmp_file.name.replace("\\", "/")
george_script = (
"tv_writetextfile \"strict\" \"append\" \"{}\" \"empty\""
).format(tmp_filepath)
result = CommunicationWrapper.execute_george(george_script)
        # Remove the file
os.remove(tmp_filepath)
if result is None:
log.warning(
"Host was probably closed before plugin was initialized."
)
elif result.lower() == "forbidden":
log.warning("User didn't confirm saving files.")
def _client(self):
if not self.websocket_rpc:
log.warning("Communicator's server did not start yet.")
return None
for client in self.websocket_rpc.clients:
if not client.ws.closed:
return client
log.warning("Client is not yet connected to Communicator.")
return None
def client(self):
if not self._connected_client or self._connected_client.ws.closed:
self._connected_client = self._client()
return self._connected_client
def send_request(self, method, params=None):
client = self.client()
if not client:
return
return self.websocket_rpc.send_request(
client, method, params
)
def send_notification(self, method, params=None):
client = self.client()
if not client:
return
self.websocket_rpc.send_notification(
client, method, params
)
def execute_george(self, george_script):
        """Execute passed george script in TVPaint."""
return self.send_request(
"execute_george", [george_script]
)
def execute_george_through_file(self, george_script):
        """Execute george script with temp file.

        Allows executing a multiline george script without stopping the
        websocket client.

        On Windows make sure the script does not contain paths with
        backward slashes; TVPaint won't execute it properly in that case.

        Args:
            george_script (str): George script to execute. May be multiline.
        """
temporary_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".grg", delete=False
)
temporary_file.write(george_script)
temporary_file.close()
temp_file_path = temporary_file.name.replace("\\", "/")
self.execute_george("tv_runscript {}".format(temp_file_path))
os.remove(temp_file_path)
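The temp-file handoff in `execute_george_through_file` can be exercised on its own; this sketch only builds the `tv_runscript` command string and never talks to TVPaint:

```python
import os
import tempfile

# A hypothetical multiline george script
george_script = "tv_version\ntv_host"

temporary_file = tempfile.NamedTemporaryFile(
    mode="w", prefix="a_tvp_", suffix=".grg", delete=False
)
temporary_file.write(george_script)
temporary_file.close()

# TVPaint chokes on backslashes, so normalize the path first
temp_file_path = temporary_file.name.replace("\\", "/")
command = "tv_runscript {}".format(temp_file_path)
print(command)

# The real method sends `command` over the websocket, then cleans up
os.remove(temporary_file.name)
```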
class QtCommunicator(BaseCommunicator):
menu_definitions = {
"title": "OpenPype Tools",
"menu_items": [
{
"callback": "workfiles_tool",
"label": "Workfiles",
"help": "Open workfiles tool"
}, {
"callback": "loader_tool",
"label": "Load",
"help": "Open loader tool"
}, {
"callback": "creator_tool",
"label": "Create",
"help": "Open creator tool"
}, {
"callback": "scene_inventory_tool",
"label": "Scene inventory",
"help": "Open scene inventory tool"
}, {
"callback": "publish_tool",
"label": "Publish",
"help": "Open publisher"
}, {
"callback": "library_loader_tool",
"label": "Library",
"help": "Open library loader tool"
}, {
"callback": "subset_manager_tool",
"label": "Subset Manager",
"help": "Open subset manager tool"
}, {
"callback": "experimental_tools",
"label": "Experimental tools",
"help": "Open experimental tools dialog"
}
]
}
def __init__(self, qt_app):
super().__init__()
self.callback_queue = Queue()
self.qt_app = qt_app
def _create_routes(self):
self.websocket_rpc = QtTVPaintRpc(
self, loop=self.websocket_server.loop
)
self.websocket_server.add_route(
"*", "/", self.websocket_rpc.handle_request
)
def execute_in_main_thread(self, main_thread_item, wait=True):
"""Add `MainThreadItem` to callback queue and wait for result."""
self.callback_queue.put(main_thread_item)
if wait:
return main_thread_item.wait()
return
async def async_execute_in_main_thread(self, main_thread_item, wait=True):
"""Add `MainThreadItem` to callback queue and wait for result."""
self.callback_queue.put(main_thread_item)
if wait:
return await main_thread_item.async_wait()
def main_thread_listen(self):
"""Get last `MainThreadItem` from queue.
Must be called from main thread.
Method checks if host process is still running as it may cause
issues if not.
"""
# check if host still running
if self.process.poll() is not None:
self._exit()
return None
if self.callback_queue.empty():
return None
return self.callback_queue.get()
def _on_client_connect(self):
super()._on_client_connect()
self._build_menu()
def _build_menu(self):
self.send_request(
"define_menu", [self.menu_definitions]
)
def _exit(self, *args, **kwargs):
super()._exit(*args, **kwargs)
api.emit("application.exit")
self.qt_app.exit(self.exit_code)

View file

@ -0,0 +1,84 @@
import os
import sys
import signal
import traceback
import ctypes
import platform
import logging
from Qt import QtWidgets, QtCore, QtGui
from avalon import api
from openpype import style
from openpype.hosts.tvpaint.api.communication_server import (
CommunicationWrapper
)
from openpype.hosts.tvpaint import api as tvpaint_host
log = logging.getLogger(__name__)
def safe_excepthook(*args):
traceback.print_exception(*args)
def main(launch_args):
# Be sure server won't crash at any moment but just print traceback
sys.excepthook = safe_excepthook
# Create QtApplication for tools
    # - QApplication is also the main thread/event loop of the server
qt_app = QtWidgets.QApplication([])
# Execute pipeline installation
api.install(tvpaint_host)
# Create Communicator object and trigger launch
# - this must be done before anything is processed
communicator = CommunicationWrapper.create_qt_communicator(qt_app)
communicator.launch(launch_args)
def process_in_main_thread():
"""Execution of `MainThreadItem`."""
item = communicator.main_thread_listen()
if item:
item.execute()
timer = QtCore.QTimer()
timer.setInterval(100)
timer.timeout.connect(process_in_main_thread)
timer.start()
# Register terminal signal handler
def signal_handler(*_args):
print("You pressed Ctrl+C. Process ended.")
communicator.stop()
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
qt_app.setQuitOnLastWindowClosed(False)
qt_app.setStyleSheet(style.load_stylesheet())
# Load avalon icon
icon_path = style.app_icon_path()
if icon_path:
icon = QtGui.QIcon(icon_path)
qt_app.setWindowIcon(icon)
    # Set application name to be able to show the application icon in the
    #   task bar
if platform.system().lower() == "windows":
ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID(
u"WebsocketServer"
)
# Run Qt application event processing
sys.exit(qt_app.exec_())
if __name__ == "__main__":
args = list(sys.argv)
if os.path.abspath(__file__) == os.path.normpath(args[0]):
# Pop path to script
args.pop(0)
main(args)

View file

@ -1,85 +1,534 @@
from PIL import Image

import avalon.io
from avalon.tvpaint.lib import execute_george


def composite_images(input_image_paths, output_filepath):
    """Composite images in order from passed list.

    Raises:
        ValueError: When entered list is empty.
    """
    if not input_image_paths:
        raise ValueError("Nothing to composite.")

    img_obj = None
    for image_filepath in input_image_paths:
        _img_obj = Image.open(image_filepath)
        if img_obj is None:
            img_obj = _img_obj
        else:
            img_obj.alpha_composite(_img_obj)
    img_obj.save(output_filepath)


def set_context_settings(asset_doc=None):
    """Set workfile settings by asset document data.

    Change fps, resolution and frame start/end.


import os
import logging
import tempfile

from . import CommunicationWrapper

log = logging.getLogger(__name__)


def execute_george(george_script, communicator=None):
    if not communicator:
        communicator = CommunicationWrapper.communicator
    return communicator.execute_george(george_script)


def execute_george_through_file(george_script, communicator=None):
    """Execute george script with temp file.

    Allows executing a multiline george script without stopping the
    websocket client.

    On Windows make sure the script does not contain paths with
    backward slashes; TVPaint won't execute it properly in that case.

    Args:
        george_script (str): George script to execute. May be multiline.
    """
    if not communicator:
        communicator = CommunicationWrapper.communicator
    return communicator.execute_george_through_file(george_script)


def parse_layers_data(data):
    """Parse layers data loaded in 'get_layers_data'."""
    layers = []
    layers_raw = data.split("\n")
    for layer_raw in layers_raw:
        layer_raw = layer_raw.strip()
        if not layer_raw:
            continue
        (
            layer_id, group_id, visible, position, opacity, name,
            layer_type,
            frame_start, frame_end, prelighttable, postlighttable,
            selected, editable, sencil_state
        ) = layer_raw.split("|")
        layer = {
            "layer_id": int(layer_id),
            "group_id": int(group_id),
            "visible": visible == "ON",
            "position": int(position),
            "opacity": int(opacity),
            "name": name,
            "type": layer_type,
            "frame_start": int(frame_start),
            "frame_end": int(frame_end),
            "prelighttable": prelighttable == "1",
            "postlighttable": postlighttable == "1",
            "selected": selected == "1",
            "editable": editable == "1",
            "sencil_state": sencil_state
        }
        layers.append(layer)
    return layers
def get_layers_data_george_script(output_filepath, layer_ids=None):
"""Prepare george script which will collect all layers from workfile."""
output_filepath = output_filepath.replace("\\", "/")
george_script_lines = [
# Variable containing full path to output file
"output_path = \"{}\"".format(output_filepath),
# Get Current Layer ID
"tv_LayerCurrentID",
"current_layer_id = result"
]
# Script part for getting and storing layer information to temp
layer_data_getter = (
# Get information about layer's group
"tv_layercolor \"get\" layer_id",
"group_id = result",
"tv_LayerInfo layer_id",
(
"PARSE result visible position opacity name"
" type startFrame endFrame prelighttable postlighttable"
" selected editable sencilState"
),
# Check if layer ID match `tv_LayerCurrentID`
"IF CMP(current_layer_id, layer_id)==1",
# - mark layer as selected if layer id match to current layer id
"selected=1",
"END",
# Prepare line with data separated by "|"
(
"line = layer_id'|'group_id'|'visible'|'position'|'opacity'|'"
"name'|'type'|'startFrame'|'endFrame'|'prelighttable'|'"
"postlighttable'|'selected'|'editable'|'sencilState"
),
# Write data to output file
"tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line",
)
# Collect data for all layers if layers are not specified
if layer_ids is None:
george_script_lines.extend((
# Layer loop variables
"loop = 1",
"idx = 0",
# Layers loop
"WHILE loop",
"tv_LayerGetID idx",
"layer_id = result",
"idx = idx + 1",
# Stop loop if layer_id is "NONE"
"IF CMP(layer_id, \"NONE\")==1",
"loop = 0",
"ELSE",
*layer_data_getter,
"END",
"END"
))
else:
for layer_id in layer_ids:
george_script_lines.append("layer_id = {}".format(layer_id))
george_script_lines.extend(layer_data_getter)
return "\n".join(george_script_lines)
def layers_data(layer_ids=None, communicator=None):
"""Backwards compatible function of 'get_layers_data'."""
return get_layers_data(layer_ids, communicator)
def get_layers_data(layer_ids=None, communicator=None):
"""Collect all layers information from currently opened workfile."""
output_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".txt", delete=False
)
output_file.close()
if layer_ids is not None and isinstance(layer_ids, int):
layer_ids = [layer_ids]
output_filepath = output_file.name
george_script = get_layers_data_george_script(output_filepath, layer_ids)
execute_george_through_file(george_script, communicator)
with open(output_filepath, "r") as stream:
data = stream.read()
output = parse_layers_data(data)
os.remove(output_filepath)
return output
def parse_group_data(data):
"""Parse group data collected in 'get_groups_data'."""
output = []
groups_raw = data.split("\n")
for group_raw in groups_raw:
group_raw = group_raw.strip()
if not group_raw:
continue
parts = group_raw.split(" ")
# Check for length and concatenate the 2 last items until length matches
# - this happens if a name contains spaces
while len(parts) > 6:
last_item = parts.pop(-1)
parts[-1] = " ".join([parts[-1], last_item])
clip_id, group_id, red, green, blue, name = parts
group = {
"group_id": int(group_id),
"name": name,
"clip_id": int(clip_id),
"red": int(red),
"green": int(green),
"blue": int(blue),
}
output.append(group)
return output
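The space re-joining loop in `parse_group_data` is what keeps group names containing spaces intact. A standalone sketch of the same per-line parsing (the raw line below is a made-up example, not real TVPaint output):

```python
def parse_group_line(group_raw):
    # A raw group line has 6 fields; a name containing spaces splits
    # into extra parts that must be re-joined from the right.
    parts = group_raw.strip().split(" ")
    while len(parts) > 6:
        last_item = parts.pop(-1)
        parts[-1] = " ".join([parts[-1], last_item])
    clip_id, group_id, red, green, blue, name = parts
    return {
        "group_id": int(group_id),
        "name": name,
        "clip_id": int(clip_id),
        "red": int(red),
        "green": int(green),
        "blue": int(blue),
    }

# Hypothetical line: clip 0, group 3, RGB color, name with a space
group = parse_group_line("0 3 255 128 0 My Group")
print(group["name"])  # My Group
```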
def groups_data(communicator=None):
"""Backwards compatible function of 'get_groups_data'."""
return get_groups_data(communicator)
def get_groups_data(communicator=None):
"""Information about groups from current workfile."""
output_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".txt", delete=False
)
output_file.close()
output_filepath = output_file.name.replace("\\", "/")
george_script_lines = (
# Variable containing full path to output file
"output_path = \"{}\"".format(output_filepath),
"loop = 1",
"FOR idx = 1 TO 12",
"tv_layercolor \"getcolor\" 0 idx",
"tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' result",
"END"
)
george_script = "\n".join(george_script_lines)
execute_george_through_file(george_script, communicator)
with open(output_filepath, "r") as stream:
data = stream.read()
output = parse_group_data(data)
os.remove(output_filepath)
return output
def get_layers_pre_post_behavior(layer_ids, communicator=None):
"""Collect data about pre and post behavior of layer ids.
Pre and post behavior is one of the enumerated values:
- "none"
- "repeat" / "loop"
- "pingpong"
- "hold"
Example output:
```json
{
0: {
"pre": "none",
"post": "loop"
}
}
```
Returns:
dict: Keys are layer ids, values are dictionaries with "pre" and "post" keys.
"""
# Skip if is empty
if not layer_ids:
return {}
# Auto convert to list
if not isinstance(layer_ids, (list, set, tuple)):
layer_ids = [layer_ids]
# Prepare temp file
output_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".txt", delete=False
)
output_file.close()
output_filepath = output_file.name.replace("\\", "/")
george_script_lines = [
# Variable containing full path to output file
"output_path = \"{}\"".format(output_filepath),
]
for layer_id in layer_ids:
george_script_lines.extend([
"layer_id = {}".format(layer_id),
"tv_layerprebehavior layer_id",
"pre_beh = result",
"tv_layerpostbehavior layer_id",
"post_beh = result",
"line = layer_id'|'pre_beh'|'post_beh",
"tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line"
])
george_script = "\n".join(george_script_lines)
execute_george_through_file(george_script, communicator)
# Read data
with open(output_filepath, "r") as stream:
data = stream.read()
# Remove temp file
os.remove(output_filepath)
# Parse data
output = {}
raw_lines = data.split("\n")
for raw_line in raw_lines:
line = raw_line.strip()
if not line:
continue
parts = line.split("|")
if len(parts) != 3:
continue
layer_id, pre_beh, post_beh = parts
output[int(layer_id)] = {
"pre": pre_beh.lower(),
"post": post_beh.lower()
}
return output
def get_layers_exposure_frames(layer_ids, layers_data=None, communicator=None):
"""Get exposure frames.
Simply put, returns the frames where keyframes are. Recognized with the
george function `tv_exposureinfo` returning "Head".
Args:
layer_ids (list): Ids of layers for which exposure frames should be
looked up.
layers_data (list): Precollected layers data. If not passed,
'get_layers_data' is used.
communicator (BaseCommunicator): Communicator used for communication
with TVPaint.
Returns:
dict: Frames where exposure is set to "Head" by layer id.
"""
if layers_data is None:
layers_data = get_layers_data(layer_ids)
_layers_by_id = {
layer["layer_id"]: layer
for layer in layers_data
}
layers_by_id = {
layer_id: _layers_by_id.get(layer_id)
for layer_id in layer_ids
}
tmp_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".txt", delete=False
)
tmp_file.close()
tmp_output_path = tmp_file.name.replace("\\", "/")
george_script_lines = [
"output_path = \"{}\"".format(tmp_output_path)
]
output = {}
layer_id_mapping = {}
for layer_id, layer_data in layers_by_id.items():
layer_id_mapping[str(layer_id)] = layer_id
output[layer_id] = []
if not layer_data:
continue
first_frame = layer_data["frame_start"]
last_frame = layer_data["frame_end"]
george_script_lines.extend([
"line = \"\"",
"layer_id = {}".format(layer_id),
"line = line''layer_id",
"tv_layerset layer_id",
"frame = {}".format(first_frame),
"WHILE (frame <= {})".format(last_frame),
"tv_exposureinfo frame",
"exposure = result",
"IF (CMP(exposure, \"Head\") == 1)",
"line = line'|'frame",
"END",
"frame = frame + 1",
"END",
"tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line"
])
execute_george_through_file("\n".join(george_script_lines), communicator)
with open(tmp_output_path, "r") as stream:
data = stream.read()
os.remove(tmp_output_path)
lines = []
for line in data.split("\n"):
line = line.strip()
if line:
lines.append(line)
for line in lines:
line_items = list(line.split("|"))
layer_id = line_items.pop(0)
_layer_id = layer_id_mapping[layer_id]
output[_layer_id] = [int(frame) for frame in line_items]
return output
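The tail of `get_layers_exposure_frames` turns each "&lt;layer id&gt;|&lt;frame&gt;|..." line back into a list of frames per layer. A minimal sketch of just that parsing step (the sample data is hypothetical):

```python
def parse_exposure_lines(data, layer_id_mapping):
    # Each non-empty line is "<layer id>|<frame>|<frame>|...";
    # a line with no frames means the layer has no "Head" exposure.
    output = {layer_id: [] for layer_id in layer_id_mapping.values()}
    for line in data.split("\n"):
        line = line.strip()
        if not line:
            continue
        line_items = line.split("|")
        layer_id = line_items.pop(0)
        output[layer_id_mapping[layer_id]] = [
            int(frame) for frame in line_items
        ]
    return output

# Hypothetical george output for two layers (layer 11 has no exposures)
data = "10|1|5|9\n11"
print(parse_exposure_lines(data, {"10": 10, "11": 11}))
# {10: [1, 5, 9], 11: []}
```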
def get_exposure_frames(
layer_id, first_frame=None, last_frame=None, communicator=None
):
"""Get exposure frames.
Simply put, returns the frames where keyframes are. Recognized with the
george function `tv_exposureinfo` returning "Head".
Args:
layer_id (int): Id of a layer for which exposure frames should be
looked up.
first_frame (int): First frame to look for exposure frames.
Layer's first frame is used if not entered.
last_frame (int): Last frame to look for exposure frames.
Layer's last frame is used if not entered.
Returns:
list: Frames where exposure is set to "Head".
"""
if first_frame is None or last_frame is None:
layer = layers_data(layer_id)[0]
if first_frame is None:
first_frame = layer["frame_start"]
if last_frame is None:
last_frame = layer["frame_end"]
tmp_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".txt", delete=False
)
tmp_file.close()
tmp_output_path = tmp_file.name.replace("\\", "/")
george_script_lines = [
"tv_layerset {}".format(layer_id),
"output_path = \"{}\"".format(tmp_output_path),
"output = \"\"",
"frame = {}".format(first_frame),
"WHILE (frame <= {})".format(last_frame),
"tv_exposureinfo frame",
"exposure = result",
"IF (CMP(exposure, \"Head\") == 1)",
"IF (CMP(output, \"\") == 1)",
"output = output''frame",
"ELSE",
"output = output'|'frame",
"END",
"END",
"frame = frame + 1",
"END",
"tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' output"
]
execute_george_through_file("\n".join(george_script_lines), communicator)
with open(tmp_output_path, "r") as stream:
data = stream.read()
os.remove(tmp_output_path)
lines = []
for line in data.split("\n"):
line = line.strip()
if line:
lines.append(line)
exposure_frames = []
for line in lines:
for frame in line.split("|"):
exposure_frames.append(int(frame))
return exposure_frames
def get_scene_data(communicator=None):
"""Scene data of currently opened scene.
Result contains resolution, pixel aspect, fps mark in/out with states,
frame start and background color.
Returns:
dict: Scene data collected in many ways.
"""
workfile_info = execute_george("tv_projectinfo", communicator)
workfile_info_parts = workfile_info.split(" ")
# Project frame start - not used
workfile_info_parts.pop(-1)
field_order = workfile_info_parts.pop(-1)
frame_rate = float(workfile_info_parts.pop(-1))
pixel_aspect = float(workfile_info_parts.pop(-1))
height = int(workfile_info_parts.pop(-1))
width = int(workfile_info_parts.pop(-1))
# Marks return as "{frame - 1} {state} ", example "0 set".
result = execute_george("tv_markin", communicator)
mark_in_frame, mark_in_state, _ = result.split(" ")
result = execute_george("tv_markout", communicator)
mark_out_frame, mark_out_state, _ = result.split(" ")
start_frame = execute_george("tv_startframe", communicator)
return {
"width": width,
"height": height,
"pixel_aspect": pixel_aspect,
"fps": frame_rate,
"field_order": field_order,
"mark_in": int(mark_in_frame),
"mark_in_state": mark_in_state,
"mark_in_set": mark_in_state == "set",
"mark_out": int(mark_out_frame),
"mark_out_state": mark_out_state,
"mark_out_set": mark_out_state == "set",
"start_frame": int(start_frame),
"bg_color": get_scene_bg_color(communicator)
}
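The pop-from-the-end parsing in `get_scene_data` relies on the field order of the `tv_projectinfo` result. A standalone sketch of that step (the path and values in the sample string are illustrative only):

```python
def parse_project_info(workfile_info):
    # The george result is a space separated string ending with:
    # width, height, pixel aspect, frame rate, field order and
    # project frame start (which is dropped).
    parts = workfile_info.split(" ")
    parts.pop(-1)  # project frame start - not used
    field_order = parts.pop(-1)
    frame_rate = float(parts.pop(-1))
    pixel_aspect = float(parts.pop(-1))
    height = int(parts.pop(-1))
    width = int(parts.pop(-1))
    return width, height, pixel_aspect, frame_rate, field_order

# Hypothetical 'tv_projectinfo' result
info = '"/path/scene.tvpp" 1920 1080 1 25 NONE 0'
print(parse_project_info(info))
# (1920, 1080, 1.0, 25.0, 'NONE')
```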
def get_scene_bg_color(communicator=None):
"""Background color set on scene.
Is important for review exporting where scene bg color is used as
background.
"""
output_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".txt", delete=False
)
output_file.close()
output_filepath = output_file.name.replace("\\", "/")
george_script_lines = [
# Variable containing full path to output file
"output_path = \"{}\"".format(output_filepath),
"tv_background",
"bg_color = result",
# Write data to output file
"tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' bg_color"
]
george_script = "\n".join(george_script_lines)
execute_george_through_file(george_script, communicator)
with open(output_filepath, "r") as stream:
data = stream.read()
os.remove(output_filepath)
data = data.strip()
if not data:
return None
return data.split(" ")

@@ -0,0 +1,491 @@
import os
import json
import contextlib
import tempfile
import logging
import requests
import pyblish.api
import avalon.api
from avalon import io
from avalon.pipeline import AVALON_CONTAINER_ID
from openpype.hosts import tvpaint
from openpype.api import get_current_project_settings
from .lib import (
execute_george,
execute_george_through_file
)
log = logging.getLogger(__name__)
HOST_DIR = os.path.dirname(os.path.abspath(tvpaint.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
METADATA_SECTION = "avalon"
SECTION_NAME_CONTEXT = "context"
SECTION_NAME_INSTANCES = "instances"
SECTION_NAME_CONTAINERS = "containers"
# Maximum length of metadata chunk string
# TODO find out the max (500 is safe enough)
TVPAINT_CHUNK_LENGTH = 500
"""TVPaint's Metadata
Metadata are stored to TVPaint's workfile.
The workfile works similarly to an .ini file but has a few limitations. The
most important limitation is that a value under a key has a limited length.
Due to this limitation each metadata section/key stores the number of
"subkeys" that are related to the section.
Example:
Metadata key `"instances"` may have stored value "2". In that case it is
expected that there are also keys `["instances0", "instances1"]`.
Workfile data looks like:
```
[avalon]
instances0=[{{__dq__}id{__dq__}: {__dq__}pyblish.avalon.instance{__dq__...
instances1=...more data...
instances=2
```
"""
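The subkey scheme described above can be sketched with a plain dict standing in for the ini-like workfile storage. This is a simplified model of what `get_workfile_metadata_string` does, not the real implementation (which reads via george script):

```python
def read_chunked_metadata(section, key):
    # The key itself stores the chunk count; the chunks live under
    # indexed subkeys "<key>0", "<key>1", ...
    count_value = section.get(key, "").strip()
    if not count_value.isdecimal():
        # Backwards compatibility: the key holds the value itself
        return section.get(key, "")
    return "".join(
        section["{}{}".format(key, idx)]
        for idx in range(int(count_value))
    )

# Hypothetical workfile section with a value split into two chunks
section = {"instances": "2", "instances0": "[{\"id\": ", "instances1": "1}]"}
print(read_chunked_metadata(section, "instances"))  # [{"id": 1}]
```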
def install():
"""Install TVPaint-specific functionality of avalon-core.
This function is called automatically on calling `api.install(tvpaint)`.
"""
log.info("OpenPype - Installing TVPaint integration")
io.install()
# Create workdir folder if does not exist yet
workdir = io.Session["AVALON_WORKDIR"]
if not os.path.exists(workdir):
os.makedirs(workdir)
pyblish.api.register_host("tvpaint")
pyblish.api.register_plugin_path(PUBLISH_PATH)
avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
registered_callbacks = (
pyblish.api.registered_callbacks().get("instanceToggled") or []
)
if on_instance_toggle not in registered_callbacks:
pyblish.api.register_callback("instanceToggled", on_instance_toggle)
avalon.api.on("application.launched", initial_launch)
avalon.api.on("application.exit", application_exit)
def uninstall():
"""Uninstall TVPaint-specific functionality of avalon-core.
This function is called automatically on calling `api.uninstall()`.
"""
log.info("OpenPype - Uninstalling TVPaint integration")
pyblish.api.deregister_host("tvpaint")
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH)
def containerise(
name, namespace, members, context, loader, current_containers=None
):
"""Add new container to metadata.
Args:
name (str): Container name.
namespace (str): Container namespace.
members (list): List of members that were loaded and belong
to the container (layer names).
current_containers (list): Preloaded containers. Should be used only
on update/switch when containers were modified during the process.
Returns:
dict: Container data stored to workfile metadata.
"""
container_data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"members": members,
"name": name,
"namespace": namespace,
"loader": str(loader),
"representation": str(context["representation"]["_id"])
}
if current_containers is None:
current_containers = ls()
# Add container to containers list
current_containers.append(container_data)
# Store data to metadata
write_workfile_metadata(SECTION_NAME_CONTAINERS, current_containers)
return container_data
@contextlib.contextmanager
def maintained_selection():
# TODO implement logic
try:
yield
finally:
pass
def split_metadata_string(text, chunk_length=None):
"""Split string by length.
Split text to chunks by entered length.
Example:
```python
text = "ABCDEFGHIJKLM"
result = split_metadata_string(text, 3)
print(result)
>>> ['ABC', 'DEF', 'GHI', 'JKL', 'M']
```
Args:
text (str): Text that will be split into chunks.
chunk_length (int): Single chunk size. Default chunk_length is
set to global variable `TVPAINT_CHUNK_LENGTH`.
Returns:
list: List of strings with at least one item.
"""
if chunk_length is None:
chunk_length = TVPAINT_CHUNK_LENGTH
chunks = []
for idx in range(chunk_length, len(text) + chunk_length, chunk_length):
start_idx = idx - chunk_length
chunks.append(text[start_idx:idx])
return chunks
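The chunking above can be exercised standalone; note the trailing short chunk, and that joining the chunks restores the original string:

```python
TVPAINT_CHUNK_LENGTH = 500

def split_metadata_string(text, chunk_length=None):
    # Same logic as the function above: slice 'text' into
    # fixed-size chunks, the last chunk may be shorter.
    if chunk_length is None:
        chunk_length = TVPAINT_CHUNK_LENGTH
    chunks = []
    for idx in range(chunk_length, len(text) + chunk_length, chunk_length):
        chunks.append(text[idx - chunk_length:idx])
    return chunks

chunks = split_metadata_string("ABCDEFGHIJKLM", 3)
print(chunks)           # ['ABC', 'DEF', 'GHI', 'JKL', 'M']
print("".join(chunks))  # ABCDEFGHIJKLM
```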
def get_workfile_metadata_string_for_keys(metadata_keys):
"""Read metadata for specific keys from current project workfile.
All values from entered keys are stored to a single string without a
separator. The function is designed to help get all values for one metadata
key at once, so the order of passed keys matters.
Args:
metadata_keys (list, str): Metadata keys for which data should be
retrieved. Order of keys matters! It is possible to enter only
single key as string.
"""
# Add ability to pass only single key
if isinstance(metadata_keys, str):
metadata_keys = [metadata_keys]
output_file = tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".txt", delete=False
)
output_file.close()
output_filepath = output_file.name.replace("\\", "/")
george_script_parts = []
george_script_parts.append(
"output_path = \"{}\"".format(output_filepath)
)
# Store data for each index of metadata key
for metadata_key in metadata_keys:
george_script_parts.append(
"tv_readprojectstring \"{}\" \"{}\" \"\"".format(
METADATA_SECTION, metadata_key
)
)
george_script_parts.append(
"tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' result"
)
# Execute the script
george_script = "\n".join(george_script_parts)
execute_george_through_file(george_script)
# Load data from temp file
with open(output_filepath, "r") as stream:
file_content = stream.read()
# Remove `\n` from content
output_string = file_content.replace("\n", "")
# Delete temp file
os.remove(output_filepath)
return output_string
def get_workfile_metadata_string(metadata_key):
"""Read metadata for specific key from current project workfile."""
result = get_workfile_metadata_string_for_keys([metadata_key])
if not result:
return None
stripped_result = result.strip()
if not stripped_result:
return None
# NOTE Backwards compatibility when metadata key did not store range of key
# indexes but the value itself
# NOTE We don't have to care about negative values with `isdecimal` check
if not stripped_result.isdecimal():
metadata_string = result
else:
keys = []
for idx in range(int(stripped_result)):
keys.append("{}{}".format(metadata_key, idx))
metadata_string = get_workfile_metadata_string_for_keys(keys)
# Replace quote placeholders with their values
metadata_string = (
metadata_string
.replace("{__sq__}", "'")
.replace("{__dq__}", "\"")
)
return metadata_string
def get_workfile_metadata(metadata_key, default=None):
"""Read and parse metadata for specific key from current project workfile.
The pipeline uses this function to store loaded and created instances
within keys stored in the `SECTION_NAME_INSTANCES` and
`SECTION_NAME_CONTAINERS` constants.
Args:
metadata_key (str): Key defining which key should be read. It is
expected that the value contains a json serializable string.
"""
if default is None:
default = []
json_string = get_workfile_metadata_string(metadata_key)
if json_string:
try:
return json.loads(json_string)
except json.decoder.JSONDecodeError:
# TODO remove when backwards compatibility of storing metadata
# will be removed
print((
"Fixed invalid metadata in workfile."
" Not serializable string was: {}"
).format(json_string))
write_workfile_metadata(metadata_key, default)
return default
def write_workfile_metadata(metadata_key, value):
"""Write metadata for specific key into current project workfile.
George script has a specific way of working with quotes, which is
handled automatically by this function.
Args:
metadata_key (str): Key defining under which key the value will be stored.
value (dict,list,str): Data to store; must be json serializable.
"""
if isinstance(value, (dict, list)):
value = json.dumps(value)
if not value:
value = ""
# Handle quotes in dumped json string
# - replace single and double quotes with placeholders
value = (
value
.replace("'", "{__sq__}")
.replace("\"", "{__dq__}")
)
chunks = split_metadata_string(value)
chunks_len = len(chunks)
write_template = "tv_writeprojectstring \"{}\" \"{}\" \"{}\""
george_script_parts = []
# Add information about chunks length to metadata key itself
george_script_parts.append(
write_template.format(METADATA_SECTION, metadata_key, chunks_len)
)
# Add chunk values to indexed metadata keys
for idx, chunk_value in enumerate(chunks):
sub_key = "{}{}".format(metadata_key, idx)
george_script_parts.append(
write_template.format(METADATA_SECTION, sub_key, chunk_value)
)
george_script = "\n".join(george_script_parts)
return execute_george_through_file(george_script)
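The quote placeholders written here are reversed when reading (see `get_workfile_metadata_string`); a minimal round-trip sketch of just the quote handling:

```python
import json

def encode_quotes(value):
    # Placeholders used when writing metadata, since george scripts
    # cannot safely carry raw quotes in values.
    return value.replace("'", "{__sq__}").replace("\"", "{__dq__}")

def decode_quotes(value):
    # Inverse replacement applied when reading metadata back.
    return value.replace("{__sq__}", "'").replace("{__dq__}", "\"")

data = {"name": "layer's \"beauty\""}
encoded = encode_quotes(json.dumps(data))
assert "\"" not in encoded and "'" not in encoded
assert json.loads(decode_quotes(encoded)) == data
```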
def get_current_workfile_context():
"""Return context in which was workfile saved."""
return get_workfile_metadata(SECTION_NAME_CONTEXT, {})
def save_current_workfile_context(context):
"""Save context which was used to create a workfile."""
return write_workfile_metadata(SECTION_NAME_CONTEXT, context)
def remove_instance(instance):
"""Remove instance from current workfile metadata."""
current_instances = get_workfile_metadata(SECTION_NAME_INSTANCES)
instance_id = instance.get("uuid")
found_idx = None
if instance_id:
for idx, _inst in enumerate(current_instances):
if _inst["uuid"] == instance_id:
found_idx = idx
break
if found_idx is None:
return
current_instances.pop(found_idx)
write_instances(current_instances)
def list_instances():
"""List all created instances from current workfile."""
return get_workfile_metadata(SECTION_NAME_INSTANCES)
def write_instances(data):
return write_workfile_metadata(SECTION_NAME_INSTANCES, data)
# Backwards compatibility
def _write_instances(*args, **kwargs):
return write_instances(*args, **kwargs)
def ls():
return get_workfile_metadata(SECTION_NAME_CONTAINERS)
def on_instance_toggle(instance, old_value, new_value):
"""Update instance data in workfile on publish toggle."""
# Review may not have real instance in workfile metadata
if not instance.data.get("uuid"):
return
instance_id = instance.data["uuid"]
found_idx = None
current_instances = list_instances()
for idx, workfile_instance in enumerate(current_instances):
if workfile_instance["uuid"] == instance_id:
found_idx = idx
break
if found_idx is None:
return
if "active" in current_instances[found_idx]:
current_instances[found_idx]["active"] = new_value
write_instances(current_instances)
def initial_launch():
# Setup project settings if it's the template that's launched.
# TODO also check for template creation when it's possible to define
# templates
last_workfile = os.environ.get("AVALON_LAST_WORKFILE")
if not last_workfile or os.path.exists(last_workfile):
return
log.info("Setting up project...")
set_context_settings()
def application_exit():
data = get_current_project_settings()
stop_timer = data["tvpaint"]["stop_timer_on_application_exit"]
if not stop_timer:
return
# Stop application timer.
webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url)
requests.post(rest_api_url)
def set_context_settings(asset_doc=None):
"""Set workfile settings by asset document data.
Change fps, resolution and frame start/end.
"""
if asset_doc is None:
# Use current session asset if not passed
asset_doc = avalon.io.find_one({
"type": "asset",
"name": avalon.io.Session["AVALON_ASSET"]
})
project_doc = avalon.io.find_one({"type": "project"})
framerate = asset_doc["data"].get("fps")
if framerate is None:
framerate = project_doc["data"].get("fps")
if framerate is not None:
execute_george(
"tv_framerate {} \"timestretch\"".format(framerate)
)
else:
print("Framerate was not found!")
width_key = "resolutionWidth"
height_key = "resolutionHeight"
width = asset_doc["data"].get(width_key)
height = asset_doc["data"].get(height_key)
if width is None or height is None:
width = project_doc["data"].get(width_key)
height = project_doc["data"].get(height_key)
if width is None or height is None:
print("Resolution was not found!")
else:
execute_george(
"tv_resizepage {} {} 0".format(width, height)
)
frame_start = asset_doc["data"].get("frameStart")
frame_end = asset_doc["data"].get("frameEnd")
if frame_start is None or frame_end is None:
print("Frame range was not found!")
return
handles = asset_doc["data"].get("handles") or 0
handle_start = asset_doc["data"].get("handleStart")
handle_end = asset_doc["data"].get("handleEnd")
if handle_start is None or handle_end is None:
handle_start = handles
handle_end = handles
# Always start from 0 Mark In and set only Mark Out
mark_in = 0
mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end
execute_george("tv_markin {} set".format(mark_in))
execute_george("tv_markout {} set".format(mark_out))

@@ -1,8 +1,21 @@
import re
import uuid
import avalon.api
from openpype.api import PypeCreatorMixin
from avalon.tvpaint import pipeline
from openpype.hosts.tvpaint.api import (
pipeline,
lib
)
class Creator(PypeCreatorMixin, pipeline.Creator):
class Creator(PypeCreatorMixin, avalon.api.Creator):
def __init__(self, *args, **kwargs):
super(Creator, self).__init__(*args, **kwargs)
# Add unified identifier created with `uuid` module
self.data["uuid"] = str(uuid.uuid4())
@classmethod
def get_dynamic_data(cls, *args, **kwargs):
dynamic_data = super(Creator, cls).get_dynamic_data(*args, **kwargs)
@@ -17,3 +30,95 @@ class Creator(PypeCreatorMixin, pipeline.Creator):
if "task" not in dynamic_data and task_name:
dynamic_data["task"] = task_name
return dynamic_data
@staticmethod
def are_instances_same(instance_1, instance_2):
"""Compare instances but skip keys with unique values.
Keys that are guaranteed to differ on a new instance, like "id",
are skipped during the comparison.
Returns:
bool: True if instances are same.
"""
if (
not isinstance(instance_1, dict)
or not isinstance(instance_2, dict)
):
return instance_1 == instance_2
checked_keys = set()
checked_keys.add("id")
for key, value in instance_1.items():
if key not in checked_keys:
if key not in instance_2:
return False
if value != instance_2[key]:
return False
checked_keys.add(key)
for key in instance_2.keys():
if key not in checked_keys:
return False
return True
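A standalone version of the comparison above, usable outside the class, shows the intent: instances that differ only in per-instance keys compare equal (the sample dicts are made up):

```python
def are_instances_same(instance_1, instance_2, skipped_keys=("id",)):
    # Compare two instance dicts while ignoring keys that are
    # unique per instance (like "id").
    if not isinstance(instance_1, dict) or not isinstance(instance_2, dict):
        return instance_1 == instance_2
    checked_keys = set(skipped_keys)
    for key, value in instance_1.items():
        if key in checked_keys:
            continue
        if key not in instance_2 or value != instance_2[key]:
            return False
        checked_keys.add(key)
    # Any extra keys on the second instance mean a difference
    return all(key in checked_keys for key in instance_2)

a = {"id": "A", "subset": "renderMain", "active": True}
b = {"id": "B", "subset": "renderMain", "active": True}
print(are_instances_same(a, b))  # True - only "id" differs
b["active"] = False
print(are_instances_same(a, b))  # False
```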
def write_instances(self, data):
self.log.debug(
"Storing instance data to workfile. {}".format(str(data))
)
return pipeline.write_instances(data)
def process(self):
data = pipeline.list_instances()
data.append(self.data)
self.write_instances(data)
class Loader(avalon.api.Loader):
hosts = ["tvpaint"]
@staticmethod
def get_members_from_container(container):
if "members" not in container and "objectName" in container:
# Backwards compatibility
layer_ids_str = container.get("objectName")
return [
int(layer_id) for layer_id in layer_ids_str.split("|")
]
return container["members"]
def get_unique_layer_name(self, asset_name, name):
"""Layer name with counter as suffix.
Finds the highest 3 digit suffix from all layer names in the scene
matching regex `{asset_name}_{name}_{suffix}`. The highest suffix is used
as the base for the next number; if the scene does not contain a matching
layer, `0` is used as the base.
Args:
asset_name (str): Name of subset's parent asset document.
name (str): Name of loaded subset.
Returns:
(str): `{asset_name}_{name}_{higher suffix + 1}`
"""
layer_name_base = "{}_{}".format(asset_name, name)
counter_regex = re.compile(r"_(\d{3})$")
higher_counter = 0
for layer in lib.get_layers_data():
layer_name = layer["name"]
if not layer_name.startswith(layer_name_base):
continue
number_subpart = layer_name[len(layer_name_base):]
groups = counter_regex.findall(number_subpart)
if len(groups) != 1:
continue
counter = int(groups[0])
if counter > higher_counter:
higher_counter = counter
continue
return "{}_{:0>3d}".format(layer_name_base, higher_counter + 1)
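The suffix logic above can be sketched without TVPaint by passing existing layer names explicitly (the names below are hypothetical):

```python
import re

def get_unique_layer_name(existing_names, asset_name, name):
    # Find the highest 3-digit "_NNN" suffix among matching layer
    # names and return the base name with that suffix + 1.
    layer_name_base = "{}_{}".format(asset_name, name)
    counter_regex = re.compile(r"_(\d{3})$")
    higher_counter = 0
    for layer_name in existing_names:
        if not layer_name.startswith(layer_name_base):
            continue
        groups = counter_regex.findall(layer_name[len(layer_name_base):])
        if len(groups) != 1:
            continue
        higher_counter = max(higher_counter, int(groups[0]))
    return "{}_{:0>3d}".format(layer_name_base, higher_counter + 1)

names = ["sh010_imageMain_001", "sh010_imageMain_004", "sh020_imageMain_001"]
print(get_unique_layer_name(names, "sh010", "imageMain"))
# sh010_imageMain_005
```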

@@ -0,0 +1,55 @@
"""Host API required for Work Files.
# TODO @iLLiCiT implement functions:
has_unsaved_changes
"""
from avalon import api
from .lib import (
execute_george,
execute_george_through_file
)
from .pipeline import save_current_workfile_context
def open_file(filepath):
"""Open the scene file in TVPaint."""
george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(
filepath.replace("\\", "/")
)
return execute_george_through_file(george_script)
def save_file(filepath):
"""Save the open scene file."""
# Store context to workfile before save
context = {
"project": api.Session["AVALON_PROJECT"],
"asset": api.Session["AVALON_ASSET"],
"task": api.Session["AVALON_TASK"]
}
save_current_workfile_context(context)
# Execute george script to save workfile.
george_script = "tv_SaveProject {}".format(filepath.replace("\\", "/"))
return execute_george(george_script)
def current_file():
"""Return the path of the open scene file."""
george_script = "tv_GetProjectName"
return execute_george(george_script)
def has_unsaved_changes():
"""Does the open scene file have unsaved changes?"""
return False
def file_extensions():
"""Return the supported file extensions for TVPaint scene files."""
return api.HOST_WORKFILE_EXTENSIONS["tvpaint"]
def work_root(session):
"""Return the default root to browse for work files."""
return session["AVALON_WORKDIR"]

@@ -44,10 +44,6 @@ class TvpaintPrelaunchHook(PreLaunchHook):
self.launch_context.launch_args.extend(remainders)
def launch_script_path(self):
avalon_dir = os.path.dirname(os.path.abspath(avalon.__file__))
script_path = os.path.join(
avalon_dir,
"tvpaint",
"launch_script.py"
)
return script_path
from openpype.hosts.tvpaint import get_launch_script_path
return get_launch_script_path()

@@ -1,11 +1,12 @@
from avalon.api import CreatorError
from avalon.tvpaint import (
from openpype.lib import prepare_template_data
from openpype.hosts.tvpaint.api import (
plugin,
pipeline,
lib,
CommunicationWrapper
)
from openpype.hosts.tvpaint.api import plugin
from openpype.lib import prepare_template_data
class CreateRenderlayer(plugin.Creator):
@@ -56,7 +57,7 @@ class CreateRenderlayer(plugin.Creator):
# Validate that communication is initialized
if CommunicationWrapper.communicator:
# Get currently selected layers
layers_data = lib.layers_data()
layers_data = lib.get_layers_data()
selected_layers = [
layer
@@ -75,7 +76,7 @@ class CreateRenderlayer(plugin.Creator):
def process(self):
self.log.debug("Query data from workfile.")
instances = pipeline.list_instances()
layers_data = lib.layers_data()
layers_data = lib.get_layers_data()
self.log.debug("Checking for selection groups.")
# Collect group ids from selection
@@ -102,7 +103,7 @@ class CreateRenderlayer(plugin.Creator):
self.log.debug(f"Selected group id is \"{group_id}\".")
self.data["group_id"] = group_id
group_data = lib.groups_data()
group_data = lib.get_groups_data()
group_name = None
for group in group_data:
if group["group_id"] == group_id:
@@ -169,7 +170,7 @@ class CreateRenderlayer(plugin.Creator):
return
self.log.debug("Querying groups data from workfile.")
groups_data = lib.groups_data()
groups_data = lib.get_groups_data()
self.log.debug("Changing name of the group.")
selected_group = None
@@ -196,6 +197,7 @@ class CreateRenderlayer(plugin.Creator):
)
def _ask_user_subset_override(self, instance):
from Qt import QtCore
from Qt.QtWidgets import QMessageBox
title = "Subset \"{}\" already exists".format(instance["subset"])
@@ -205,6 +207,10 @@ class CreateRenderlayer(plugin.Creator):
).format(instance["subset"])
dialog = QMessageBox()
dialog.setWindowFlags(
dialog.windowFlags()
| QtCore.Qt.WindowStaysOnTopHint
)
dialog.setWindowTitle(title)
dialog.setText(text)
dialog.setStandardButtons(QMessageBox.Yes | QMessageBox.No)

@@ -1,11 +1,11 @@
from avalon.api import CreatorError
from avalon.tvpaint import (
from openpype.lib import prepare_template_data
from openpype.hosts.tvpaint.api import (
plugin,
pipeline,
lib,
CommunicationWrapper
)
from openpype.hosts.tvpaint.api import plugin
from openpype.lib import prepare_template_data
class CreateRenderPass(plugin.Creator):

View file

@@ -1,8 +1,8 @@
from avalon.vendor import qargparse
from avalon.tvpaint import lib, pipeline
from openpype.hosts.tvpaint.api import lib, plugin
class ImportImage(pipeline.Loader):
class ImportImage(plugin.Loader):
"""Load image or image sequence to TVPaint as new layer."""
families = ["render", "image", "background", "plate"]

View file

@@ -1,10 +1,10 @@
import collections
from avalon.pipeline import get_representation_context
from avalon.vendor import qargparse
from avalon.tvpaint import lib, pipeline
from openpype.hosts.tvpaint.api import lib, pipeline, plugin
class LoadImage(pipeline.Loader):
class LoadImage(plugin.Loader):
"""Load image or image sequence to TVPaint as new layer."""
families = ["render", "image", "background", "plate"]

View file

@@ -1,9 +1,9 @@
import os
import tempfile
from avalon.tvpaint import lib, pipeline
from openpype.hosts.tvpaint.api import lib, plugin
class ImportSound(pipeline.Loader):
class ImportSound(plugin.Loader):
"""Load sound to TVPaint.
Sound layers do not have ids but only a position index, so we can't

View file

@@ -1,16 +1,16 @@
import getpass
import os
from avalon.tvpaint import lib, pipeline, get_current_workfile_context
from avalon import api, io
from openpype.lib import (
get_workfile_template_key_from_context,
get_workdir_data
)
from openpype.api import Anatomy
from openpype.hosts.tvpaint.api import lib, pipeline, plugin
class LoadWorkfile(pipeline.Loader):
class LoadWorkfile(plugin.Loader):
"""Load workfile."""
families = ["workfile"]
@@ -24,7 +24,7 @@ class LoadWorkfile(pipeline.Loader):
host = api.registered_host()
current_file = host.current_file()
context = get_current_workfile_context()
context = pipeline.get_current_workfile_context()
filepath = self.fname.replace("\\", "/")

View file

@@ -218,7 +218,7 @@ class CollectInstances(pyblish.api.ContextPlugin):
# - not 100% working as it turned out that layer ids can't be
# used as a unified identifier across multiple workstations
layers_by_id = {
layer["id"]: layer
layer["layer_id"]: layer
for layer in layers_data
}
layer_ids = instance_data["layer_ids"]

View file

@@ -4,7 +4,7 @@ import tempfile
import pyblish.api
import avalon.api
from avalon.tvpaint import pipeline, lib
from openpype.hosts.tvpaint.api import pipeline, lib
class ResetTVPaintWorkfileMetadata(pyblish.api.Action):

View file

@@ -2,17 +2,18 @@ import os
import copy
import tempfile
from PIL import Image
import pyblish.api
from avalon.tvpaint import lib
from openpype.hosts.tvpaint.api.lib import composite_images
from openpype.hosts.tvpaint.api import lib
from openpype.hosts.tvpaint.lib import (
calculate_layers_extraction_data,
get_frame_filename_template,
fill_reference_frames,
composite_rendered_layers,
rename_filepaths_by_frame_start
rename_filepaths_by_frame_start,
composite_images
)
from PIL import Image
class ExtractSequence(pyblish.api.Extractor):

View file

@@ -1,7 +1,7 @@
import pyblish.api
from avalon.tvpaint import workio
from openpype.api import version_up
from openpype.hosts.tvpaint.api import workio
class IncrementWorkfileVersion(pyblish.api.ContextPlugin):

View file

@@ -1,5 +1,5 @@
import pyblish.api
from avalon.tvpaint import pipeline
from openpype.hosts.tvpaint.api import pipeline
class FixAssetNames(pyblish.api.Action):

View file

@@ -1,7 +1,7 @@
import json
import pyblish.api
from avalon.tvpaint import lib
from openpype.hosts.tvpaint.api import lib
class ValidateMarksRepair(pyblish.api.Action):

View file

@@ -1,5 +1,5 @@
import pyblish.api
from avalon.tvpaint import lib
from openpype.hosts.tvpaint.api import lib
class RepairStartFrame(pyblish.api.Action):

View file

@@ -1,5 +1,5 @@
import pyblish.api
from avalon.tvpaint import save_file
from openpype.hosts.tvpaint.api import save_file
class ValidateWorkfileMetadataRepair(pyblish.api.Action):

View file

@@ -1,37 +0,0 @@
#-------------------------------------------------
#------------ AVALON PLUGIN LOC FILE -------------
#-------------------------------------------------
#Language : English
#Version : 1.0
#Date : 27/10/2020
#-------------------------------------------------
#------------ COMMON -----------------------------
#-------------------------------------------------
$100 "OpenPype Tools"
$10010 "Workfiles"
$10020 "Load"
$10030 "Create"
$10040 "Scene inventory"
$10050 "Publish"
$10060 "Library"
#------------ Help -------------------------------
$20010 "Open workfiles tool"
$20020 "Open loader tool"
$20030 "Open creator tool"
$20040 "Open scene inventory tool"
$20050 "Open publisher"
$20060 "Open library loader tool"
#------------ Errors -----------------------------
$30001 "Can't Open Requester !"
#-------------------------------------------------
#------------ END --------------------------------
#-------------------------------------------------

View file

@@ -0,0 +1,6 @@
import os
def get_plugin_files_path():
current_dir = os.path.dirname(os.path.abspath(__file__))
return os.path.join(current_dir, "plugin_files")

View file

@@ -0,0 +1,45 @@
cmake_minimum_required(VERSION 3.17)
project(OpenPypePlugin C CXX)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_EXTENSIONS OFF)
set(IP_ENABLE_UNICODE OFF)
set(IP_ENABLE_DOCTEST OFF)
if(MSVC)
set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON)
add_definitions(-D_CRT_SECURE_NO_WARNINGS)
endif()
# TODO better options
option(BOOST_ROOT "Path to root of Boost" "")
option(OPENSSL_INCLUDE "OpenSSL include path" "")
option(OPENSSL_LIB_DIR "OpenSSL lib path" "")
option(WEBSOCKETPP_INCLUDE "Websocketpp include path" "")
option(WEBSOCKETPP_LIB_DIR "Websocketpp lib path" "")
option(JSONRPCPP_INCLUDE "Jsonrpcpp include path" "")
find_package(Boost 1.72.0 COMPONENTS random)
include_directories(
"${TVPAINT_SDK_INCLUDE}"
"${OPENSSL_INCLUDE}"
"${WEBSOCKETPP_INCLUDE}"
"${JSONRPCPP_INCLUDE}"
"${Boost_INCLUDE_DIR}"
)
link_directories(
"${OPENSSL_LIB_DIR}"
"${WEBSOCKETPP_LIB_DIR}"
)
add_library(jsonrpcpp INTERFACE)
add_library(${PROJECT_NAME} SHARED library.cpp library.def "${TVPAINT_SDK_LIB}/dllx.c")
target_link_libraries(${PROJECT_NAME} ${Boost_LIBRARIES})
target_link_libraries(${PROJECT_NAME} jsonrpcpp)

View file

@@ -0,0 +1,34 @@
README for TVPaint Avalon plugin
================================
Introduction
------------
This project integrates Avalon functionality into TVPaint.
The implementation uses a TVPaint plugin (C/C++) that communicates with a Python process. The communication allows triggering tools or pipeline functions from TVPaint while accepting requests from the Python process at the same time.
The current implementation is based on the websocket protocol, using json-rpc communication (specification 2.0). The project is in beta stage and has been tested only on Windows.
To load the plugin, the environment variable `WEBSOCKET_URL` must be set, otherwise the plugin won't load at all. The plugin should not affect TVPaint if the Python server crashes, but its buttons will stop working.
## Requirements - Python server
- python >= 3.6
- aiohttp
- aiohttp-json-rpc
### Windows
- pywin32 - required only for plugin installation
## Requirements - Plugin compilation
- TVPaint SDK - ask TVPaint support for the SDK.
- Boost 1.72.0 - Boost is used across other plugins (a different version should be possible with a CMakeLists modification)
- Websocket++/Websocketpp - Websocket library (https://github.com/zaphoyd/websocketpp)
- OpenSSL library - Required by Websocketpp
- jsonrpcpp - C++ library handling json-rpc 2.0 (https://github.com/badaix/jsonrpcpp)
- nlohmann/json - Required for jsonrpcpp (https://github.com/nlohmann/json)
### jsonrpcpp
This library bundles `nlohmann/json`, but the current `master` ships an old version with a bug; on Windows the library probably can't be used without swapping in the latest `nlohmann/json`.
## TODO
- modify the code and CMake to allow compilation on macOS/Linux
- separate the websocket logic from the plugin logic
- hide buttons and show an error message if the server is closed
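The json-rpc 2.0 exchange described above boils down to correlating request and response ids, the same way the plugin's `call_method` waits for a response with a matching id. A minimal sketch of the message shapes (the helpers `make_request`/`make_response` are illustrative, not part of the codebase):

```python
import itertools
import json

# Counter mirroring the plugin's client_request_id
_request_ids = itertools.count(1)


def make_request(method, params=None):
    """Build a json-rpc 2.0 request object (illustrative helper)."""
    return {
        "jsonrpc": "2.0",
        "id": next(_request_ids),
        "method": method,
        "params": params if params is not None else [],
    }


def make_response(request, result):
    """Build the json-rpc 2.0 response matching a request's id."""
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}


# A response is paired with its request purely by id
request = make_request("execute_george", ["tv_version"])
response = make_response(request, "example output")
assert response["id"] == request["id"]
print(json.dumps(request))
```

Both sides send such objects as text frames over the websocket; a message without an `id` is a notification and gets no response.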

View file

@@ -0,0 +1,790 @@
#ifdef _WIN32
// Include <winsock2.h> before <windows.h>
#include <winsock2.h>
#endif
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <cstring>
#include <map>
#include <string>
#include <queue>
#include "plugdllx.h"
#include <boost/chrono.hpp>
#include <websocketpp/config/asio_no_tls_client.hpp>
#include <websocketpp/client.hpp>
#include "json.hpp"
#include "jsonrpcpp.hpp"
// All functions not exported should be static.
// All global variables should be static.
// mReq Identification of the requester. (=0 closed, !=0 requester ID)
static struct {
bool firstParams;
DWORD mReq;
void* mLocalFile;
PIFilter *current_filter;
// Id counter for client requests
int client_request_id;
// There are new menu items
bool newMenuItems;
// Menu item definitions received from connection
nlohmann::json menuItems;
// Menu items used in requester by their ID
nlohmann::json menuItemsById;
std::list<int> menuItemsIds;
// Messages from server before processing.
// - messages can't be processed at the moment they are received as the client is running in a thread
std::queue<std::string> messages;
// Responses to requests mapped by request id
std::map<int, jsonrpcpp::Response> responses;
} Data = {
true,
0,
nullptr,
nullptr,
1,
false,
nlohmann::json::object(),
nlohmann::json::object()
};
// Json rpc 2.0 parser - for handling messages and callbacks
jsonrpcpp::Parser parser;
typedef websocketpp::client<websocketpp::config::asio_client> client;
class connection_metadata {
private:
websocketpp::connection_hdl m_hdl;
client *m_endpoint;
std::string m_status;
public:
typedef websocketpp::lib::shared_ptr<connection_metadata> ptr;
connection_metadata(websocketpp::connection_hdl hdl, client *endpoint)
: m_hdl(hdl), m_status("Connecting") {
m_endpoint = endpoint;
}
void on_open(client *c, websocketpp::connection_hdl hdl) {
m_status = "Open";
}
void on_fail(client *c, websocketpp::connection_hdl hdl) {
m_status = "Failed";
}
void on_close(client *c, websocketpp::connection_hdl hdl) {
m_status = "Closed";
}
void on_message(websocketpp::connection_hdl, client::message_ptr msg) {
std::string json_str;
if (msg->get_opcode() == websocketpp::frame::opcode::text) {
json_str = msg->get_payload();
} else {
json_str = websocketpp::utility::to_hex(msg->get_payload());
}
process_message(json_str);
}
void process_message(std::string msg) {
std::cout << "--> " << msg << "\n";
try {
jsonrpcpp::entity_ptr entity = parser.do_parse(msg);
if (!entity) {
// Return error code?
} else if (entity->is_response()) {
jsonrpcpp::Response response = jsonrpcpp::Response(entity->to_json());
Data.responses[response.id().int_id()] = response;
} else if (entity->is_request() || entity->is_notification()) {
Data.messages.push(msg);
}
}
catch (const jsonrpcpp::RequestException &e) {
std::string message = e.to_json().dump();
std::cout << "<-- " << e.to_json().dump() << "\n";
send(message);
}
catch (const jsonrpcpp::ParseErrorException &e) {
std::string message = e.to_json().dump();
std::cout << "<-- " << message << "\n";
send(message);
}
catch (const jsonrpcpp::RpcException &e) {
std::cerr << "RpcException: " << e.what() << "\n";
std::string message = jsonrpcpp::ParseErrorException(e.what()).to_json().dump();
std::cout << "<-- " << message << "\n";
send(message);
}
catch (const std::exception &e) {
std::cerr << "Exception: " << e.what() << "\n";
}
}
void send(std::string message) {
if (get_status() != "Open") {
return;
}
websocketpp::lib::error_code ec;
m_endpoint->send(m_hdl, message, websocketpp::frame::opcode::text, ec);
if (ec) {
std::cout << "> Error sending message: " << ec.message() << std::endl;
return;
}
}
void send_notification(jsonrpcpp::Notification *notification) {
send(notification->to_json().dump());
}
void send_response(jsonrpcpp::Response *response) {
send(response->to_json().dump());
}
void send_request(jsonrpcpp::Request *request) {
send(request->to_json().dump());
}
websocketpp::connection_hdl get_hdl() const {
return m_hdl;
}
std::string get_status() const {
return m_status;
}
};
class websocket_endpoint {
private:
client m_endpoint;
connection_metadata::ptr client_metadata;
websocketpp::lib::shared_ptr<websocketpp::lib::thread> m_thread;
bool thread_is_running = false;
public:
websocket_endpoint() {
m_endpoint.clear_access_channels(websocketpp::log::alevel::all);
m_endpoint.clear_error_channels(websocketpp::log::elevel::all);
}
~websocket_endpoint() {
close_connection();
}
void close_connection() {
m_endpoint.stop_perpetual();
if (connected())
{
// Close client
close(websocketpp::close::status::normal, "");
}
if (thread_is_running) {
// Join thread
m_thread->join();
thread_is_running = false;
}
}
bool connected()
{
return (client_metadata && client_metadata->get_status() == "Open");
}
int connect(std::string const &uri) {
if (client_metadata && client_metadata->get_status() == "Open") {
std::cout << "> Already connected" << std::endl;
return 0;
}
m_endpoint.init_asio();
m_endpoint.start_perpetual();
m_thread.reset(new websocketpp::lib::thread(&client::run, &m_endpoint));
thread_is_running = true;
websocketpp::lib::error_code ec;
client::connection_ptr con = m_endpoint.get_connection(uri, ec);
if (ec) {
std::cout << "> Connect initialization error: " << ec.message() << std::endl;
return -1;
}
client_metadata = websocketpp::lib::make_shared<connection_metadata>(con->get_handle(), &m_endpoint);
con->set_open_handler(websocketpp::lib::bind(
&connection_metadata::on_open,
client_metadata,
&m_endpoint,
websocketpp::lib::placeholders::_1
));
con->set_fail_handler(websocketpp::lib::bind(
&connection_metadata::on_fail,
client_metadata,
&m_endpoint,
websocketpp::lib::placeholders::_1
));
con->set_close_handler(websocketpp::lib::bind(
&connection_metadata::on_close,
client_metadata,
&m_endpoint,
websocketpp::lib::placeholders::_1
));
con->set_message_handler(websocketpp::lib::bind(
&connection_metadata::on_message,
client_metadata,
websocketpp::lib::placeholders::_1,
websocketpp::lib::placeholders::_2
));
m_endpoint.connect(con);
return 1;
}
void close(websocketpp::close::status::value code, std::string reason) {
if (!client_metadata || client_metadata->get_status() != "Open") {
std::cout << "> Not connected yet" << std::endl;
return;
}
websocketpp::lib::error_code ec;
m_endpoint.close(client_metadata->get_hdl(), code, reason, ec);
if (ec) {
std::cout << "> Error initiating close: " << ec.message() << std::endl;
}
}
void send(std::string message) {
if (!client_metadata || client_metadata->get_status() != "Open") {
std::cout << "> Not connected yet" << std::endl;
return;
}
client_metadata->send(message);
}
void send_notification(jsonrpcpp::Notification *notification) {
client_metadata->send_notification(notification);
}
void send_response(jsonrpcpp::Response *response) {
client_metadata->send(response->to_json().dump());
}
void send_response(std::shared_ptr<jsonrpcpp::Entity> response) {
client_metadata->send(response->to_json().dump());
}
void send_request(jsonrpcpp::Request *request) {
client_metadata->send_request(request);
}
};
class Communicator {
private:
// URL to websocket server
std::string websocket_url;
// Should the avalon plugin be available?
// - this may change during processing if the websocket url is not set or the server is down
bool use_avalon;
public:
Communicator();
websocket_endpoint endpoint;
bool is_connected();
bool is_usable();
void connect();
void process_requests();
jsonrpcpp::Response call_method(std::string method_name, nlohmann::json params);
void call_notification(std::string method_name, nlohmann::json params);
};
Communicator::Communicator() {
// URL to websocket server
// `getenv` may return nullptr, which must not be assigned to std::string
const char *env_url = std::getenv("WEBSOCKET_URL");
websocket_url = env_url ? env_url : "";
// Should the avalon plugin be available?
// - this may change during processing if the websocket url is not set or the server is down
if (websocket_url == "") {
use_avalon = false;
} else {
use_avalon = true;
}
}
bool Communicator::is_connected(){
return endpoint.connected();
}
bool Communicator::is_usable(){
return use_avalon;
}
void Communicator::connect()
{
if (!use_avalon) {
return;
}
int con_result;
con_result = endpoint.connect(websocket_url);
if (con_result == -1)
{
use_avalon = false;
} else {
use_avalon = true;
}
}
void Communicator::call_notification(std::string method_name, nlohmann::json params) {
if (!use_avalon || !is_connected()) {return;}
jsonrpcpp::Notification notification = {method_name, params};
endpoint.send_notification(&notification);
}
jsonrpcpp::Response Communicator::call_method(std::string method_name, nlohmann::json params) {
jsonrpcpp::Response response;
if (!use_avalon || !is_connected())
{
return response;
}
int request_id = Data.client_request_id++;
jsonrpcpp::Request request = {request_id, method_name, params};
endpoint.send_request(&request);
bool found = false;
while (!found) {
std::map<int, jsonrpcpp::Response>::iterator iter = Data.responses.find(request_id);
if (iter != Data.responses.end()) {
// element found == response was received
response = iter->second;
Data.responses.erase(request_id);
found = true;
} else {
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
}
return response;
}
void Communicator::process_requests() {
if (!use_avalon || !is_connected() || Data.messages.empty()) {return;}
std::string msg = Data.messages.front();
Data.messages.pop();
std::cout << "Parsing: " << msg << std::endl;
// TODO: add try/catch block
auto response = parser.parse(msg);
if (response->is_response()) {
endpoint.send_response(response);
} else {
jsonrpcpp::request_ptr request = std::dynamic_pointer_cast<jsonrpcpp::Request>(response);
jsonrpcpp::Error error("Method \"" + request->method() + "\" not found", -32601);
jsonrpcpp::Response _response(request->id(), error);
endpoint.send_response(&_response);
}
}
jsonrpcpp::response_ptr define_menu(const jsonrpcpp::Id &id, const jsonrpcpp::Parameter &params) {
/* Define plugin menu.
Menu is defined with json with "title" and "menu_items".
Each item in "menu_items" must have keys:
- "callback" - callback called with RPC when button is clicked
- "label" - label of button
- "help" - tooltip of button
```
{
"title": "< Menu title>",
"menu_items": [
{
"callback": "workfiles_tool",
"label": "Workfiles",
"help": "Open workfiles tool"
},
...
]
}
```
*/
Data.menuItems = params.to_json()[0];
Data.newMenuItems = true;
std::string output;
return std::make_shared<jsonrpcpp::Response>(id, output);
}
jsonrpcpp::response_ptr execute_george(const jsonrpcpp::Id &id, const jsonrpcpp::Parameter &params) {
const char *george_script;
char cmd_output[1024] = {0};
char empty_char = {0};
std::string std_george_script;
std::string output;
nlohmann::json json_params = params.to_json();
std_george_script = json_params[0];
george_script = std_george_script.c_str();
// Result of `TVSendCmd` is int with length of output string
TVSendCmd(Data.current_filter, george_script, cmd_output);
for (int i = 0; i < sizeof(cmd_output); i++)
{
if (cmd_output[i] == empty_char){
break;
}
output += cmd_output[i];
}
return std::make_shared<jsonrpcpp::Response>(id, output);
}
void register_callbacks(){
parser.register_request_callback("define_menu", define_menu);
parser.register_request_callback("execute_george", execute_george);
}
Communicator communication;
////////////////////////////////////////////////////////////////////////////////////////
static char* GetLocalString( PIFilter* iFilter, int iNum, char* iDefault )
{
char* str;
if( Data.mLocalFile == NULL )
return iDefault;
str = TVGetLocalString( iFilter, Data.mLocalFile, iNum );
if( str == NULL || strlen( str ) == 0 )
return iDefault;
return str;
}
/**************************************************************************************/
// Localisation
// numbers (like 10011) are IDs in the localized file.
// strings are the default values to use when the ID is not found
// in the localized file (or the localized file doesn't exist).
std::string label_from_evn()
{
std::string _plugin_label = "Avalon";
// `getenv` returns a char pointer; comparing it to "" compares pointers,
// so check for nullptr and an empty string instead
const char *env_label = std::getenv("AVALON_LABEL");
if (env_label && *env_label)
{
_plugin_label = env_label;
}
return _plugin_label;
}
std::string plugin_label = label_from_evn();
#define TXT_REQUESTER GetLocalString( iFilter, 100, "OpenPype Tools" )
#define TXT_REQUESTER_ERROR GetLocalString( iFilter, 30001, "Can't Open Requester !" )
////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////
// The functions directly called by Aura through the plugin interface
/**************************************************************************************/
// "About" function.
void FAR PASCAL PI_About( PIFilter* iFilter )
{
char text[256];
sprintf( text, "%s %d,%d", iFilter->PIName, iFilter->PIVersion, iFilter->PIRevision );
// Just open a warning popup with the filter name and version.
// You can open a much nicer requester if you want.
TVWarning( iFilter, text );
}
/**************************************************************************************/
// Function called at Aura startup, when the filter is loaded.
// Should do as little as possible to keep Aura's startup time small.
int FAR PASCAL PI_Open( PIFilter* iFilter )
{
Data.current_filter = iFilter;
char tmp[256];
strcpy( iFilter->PIName, plugin_label.c_str() );
iFilter->PIVersion = 1;
iFilter->PIRevision = 0;
// If this plugin was the one open at Aura shutdown, re-open it
TVReadUserString( iFilter, iFilter->PIName, "Open", tmp, "0", 255 );
if( atoi( tmp ) )
{
PI_Parameters( iFilter, NULL ); // NULL as iArg means "open the requester"
}
communication.connect();
register_callbacks();
return 1; // OK
}
/**************************************************************************************/
// Aura shutdown: we make all the necessary cleanup
void FAR PASCAL PI_Close( PIFilter* iFilter )
{
if( Data.mLocalFile )
{
TVCloseLocalFile( iFilter, Data.mLocalFile );
}
if( Data.mReq )
{
TVCloseReq( iFilter, Data.mReq );
}
communication.endpoint.close_connection();
}
/**************************************************************************************/
// we have something to do !
int FAR PASCAL PI_Parameters( PIFilter* iFilter, char* iArg )
{
if( !iArg )
{
// If the requester is not open, we open it.
if( Data.mReq == 0)
{
// Create empty requester because menu items are defined with
// `define_menu` callback
DWORD req = TVOpenFilterReqEx(
iFilter,
185,
20,
NULL,
NULL,
PIRF_STANDARD_REQ | PIRF_COLLAPSABLE_REQ,
FILTERREQ_NO_TBAR
);
if( req == 0 )
{
TVWarning( iFilter, TXT_REQUESTER_ERROR );
return 0;
}
Data.mReq = req;
// This is a very simple requester, so we create its content right here instead
// of waiting for the PICBREQ_OPEN message...
// Not recommended for more complex requesters. (see the other examples)
// Sets the title of the requester.
TVSetReqTitle( iFilter, Data.mReq, TXT_REQUESTER );
// Request to listen to ticks
TVGrabTicks(iFilter, req, PITICKS_FLAG_ON);
}
else
{
// If it is already open, we just put it in front of all other requesters.
TVReqToFront( iFilter, Data.mReq );
}
}
return 1;
}
int newMenuItemsProcess(PIFilter* iFilter) {
// Menu items defined with `define_menu` should be propagated.
// Reset the flag signalling new menu items (avoids an infinite loop)
Data.newMenuItems = false;
// Skip if the requester does not exist
if (Data.mReq == 0) {
return 0;
}
// Remove all previous menu items
for (int menu_id : Data.menuItemsIds)
{
TVRemoveButtonReq(iFilter, Data.mReq, menu_id);
}
// Clear caches
Data.menuItemsById.clear();
Data.menuItemsIds.clear();
// We use a variable to contain the vertical position of the buttons.
// Each time we create a button, we add its size to this variable.
// This makes it very easy to add/remove/displace buttons in a requester.
int x_pos = 9;
int y_pos = 5;
// Menu width
int menu_width = 185;
// Single menu item width
int btn_width = menu_width - 19;
// Single row height (btn height is 18)
int row_height = 20;
// Additional height to menu
int height_offset = 5;
// This is a very simple requester, so we create its content right here instead
// of waiting for the PICBREQ_OPEN message...
// Not recommended for more complex requesters. (see the other examples)
const char *menu_title = TXT_REQUESTER;
if (Data.menuItems.contains("title"))
{
menu_title = Data.menuItems["title"].get<nlohmann::json::string_t*>()->c_str();
}
// Sets the title of the requester.
TVSetReqTitle( iFilter, Data.mReq, menu_title );
// Resize menu
// First get current position and sizes (we only need the position)
int current_x = 0;
int current_y = 0;
int current_width = 0;
int current_height = 0;
TVInfoReq(iFilter, Data.mReq, &current_x, &current_y, &current_width, &current_height);
// Calculate new height
int menu_height = (row_height * Data.menuItems["menu_items"].size()) + height_offset;
// Resize
TVResizeReq(iFilter, Data.mReq, current_x, current_y, menu_width, menu_height);
// Add menu items
int item_counter = 1;
for (auto& item : Data.menuItems["menu_items"].items())
{
int item_id = item_counter * 10;
item_counter ++;
std::string item_id_str = std::to_string(item_id);
nlohmann::json item_data = item.value();
const char *item_label = item_data["label"].get<nlohmann::json::string_t*>()->c_str();
const char *help_text = item_data["help"].get<nlohmann::json::string_t*>()->c_str();
std::string item_callback = item_data["callback"].get<std::string>();
TVAddButtonReq(iFilter, Data.mReq, x_pos, y_pos, btn_width, 0, item_id, PIRBF_BUTTON_NORMAL|PIRBF_BUTTON_ACTION, item_label);
TVSetButtonInfoText( iFilter, Data.mReq, item_id, help_text );
y_pos += row_height;
Data.menuItemsById[std::to_string(item_id)] = item_callback;
Data.menuItemsIds.push_back(item_id);
}
return 1;
}
/**************************************************************************************/
// something happened that needs our attention.
// Global variable where current button up data are stored
std::string button_up_item_id_str;
int FAR PASCAL PI_Msg( PIFilter* iFilter, INTPTR iEvent, INTPTR iReq, INTPTR* iArgs )
{
Data.current_filter = iFilter;
// what did happen ?
switch( iEvent )
{
// The user just 'clicked' on a normal button
case PICBREQ_BUTTON_UP:
button_up_item_id_str = std::to_string(iArgs[0]);
if (Data.menuItemsById.contains(button_up_item_id_str))
{
std::string callback_name = Data.menuItemsById[button_up_item_id_str].get<std::string>();
communication.call_method(callback_name, nlohmann::json::array());
}
TVExecute( iFilter );
break;
// The requester was just closed.
case PICBREQ_CLOSE:
// requester doesn't exist anymore
Data.mReq = 0;
char tmp[256];
// Save the requester state (opened or closed)
// iArgs[4] contains a flag which tells us if the requester
// has been closed by the user (flag=0) or by Aura's shutdown (flag=1).
// If it was by Aura's shutdown, that means this requester was the
// last one open, so we should reopen this one the next time Aura
// is started. Else we won't open it next time.
sprintf( tmp, "%d", (int)(iArgs[4]) );
// Save it in Aura's init file.
TVWriteUserString( iFilter, iFilter->PIName, "Open", tmp );
break;
case PICBREQ_TICKS:
if (Data.newMenuItems)
{
newMenuItemsProcess(iFilter);
}
communication.process_requests();
}
return 1;
}
/**************************************************************************************/
// Start of the 'execution' of the filter for a new sequence.
// - iNumImages contains the total number of frames to be processed.
// Here you should allocate memory that is used for all frames,
// and precompute all the stuff that doesn't change from frame to frame.
int FAR PASCAL PI_SequenceStart( PIFilter* iFilter, int iNumImages )
{
// In this simple example we don't have anything to allocate/precompute.
// 1 means 'continue', 0 means 'error, abort' (like 'not enough memory')
return 1;
}
// Here you should cleanup what you've done in PI_SequenceStart
void FAR PASCAL PI_SequenceFinish( PIFilter* iFilter )
{}
/**************************************************************************************/
// This is called before each frame.
// Here you should allocate memory and precompute all the stuff you can.
int FAR PASCAL PI_Start( PIFilter* iFilter, double iPos, double iSize )
{
return 1;
}
void FAR PASCAL PI_Finish( PIFilter* iFilter )
{
// nothing special to cleanup
}
/**************************************************************************************/
// 'Execution' of the filter.
int FAR PASCAL PI_Work( PIFilter* iFilter )
{
return 1;
}

View file

@@ -0,0 +1,10 @@
LIBRARY Avalonplugin
EXPORTS
PI_Msg
PI_Open
PI_About
PI_Parameters
PI_Start
PI_Work
PI_Finish
PI_Close

View file

@@ -2,7 +2,7 @@ import signal
import time
import asyncio
from avalon.tvpaint.communication_server import (
from openpype.hosts.tvpaint.api.communication_server import (
BaseCommunicator,
CommunicationWrapper
)

View file

@@ -256,7 +256,7 @@ class CollectSceneData(BaseCommand):
name = "collect_scene_data"
def execute(self):
from avalon.tvpaint.lib import (
from openpype.hosts.tvpaint.api.lib import (
get_layers_data,
get_groups_data,
get_layers_pre_post_behavior,

View file

@@ -0,0 +1,158 @@
import sys
from Qt import QtWidgets, QtCore, QtGui
from openpype import (
resources,
style
)
from openpype.tools.utils import host_tools
from openpype.tools.utils.lib import qt_app_context
class ToolsBtnsWidget(QtWidgets.QWidget):
"""Widget containing buttons which are clickable."""
tool_required = QtCore.Signal(str)
def __init__(self, parent=None):
super(ToolsBtnsWidget, self).__init__(parent)
create_btn = QtWidgets.QPushButton("Create...", self)
load_btn = QtWidgets.QPushButton("Load...", self)
publish_btn = QtWidgets.QPushButton("Publish...", self)
manage_btn = QtWidgets.QPushButton("Manage...", self)
experimental_tools_btn = QtWidgets.QPushButton(
"Experimental tools...", self
)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.addWidget(create_btn, 0)
layout.addWidget(load_btn, 0)
layout.addWidget(publish_btn, 0)
layout.addWidget(manage_btn, 0)
layout.addWidget(experimental_tools_btn, 0)
layout.addStretch(1)
create_btn.clicked.connect(self._on_create)
load_btn.clicked.connect(self._on_load)
publish_btn.clicked.connect(self._on_publish)
manage_btn.clicked.connect(self._on_manage)
experimental_tools_btn.clicked.connect(self._on_experimental)
def _on_create(self):
self.tool_required.emit("creator")
def _on_load(self):
self.tool_required.emit("loader")
def _on_publish(self):
self.tool_required.emit("publish")
def _on_manage(self):
self.tool_required.emit("sceneinventory")
def _on_experimental(self):
self.tool_required.emit("experimental_tools")
class ToolsDialog(QtWidgets.QDialog):
"""Dialog with tool buttons that stays open until the user closes it."""
def __init__(self, *args, **kwargs):
super(ToolsDialog, self).__init__(*args, **kwargs)
self.setWindowTitle("OpenPype tools")
icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
self.setWindowIcon(icon)
self.setWindowFlags(
QtCore.Qt.Window
| QtCore.Qt.WindowStaysOnTopHint
)
self.setFocusPolicy(QtCore.Qt.StrongFocus)
tools_widget = ToolsBtnsWidget(self)
layout = QtWidgets.QVBoxLayout(self)
layout.addWidget(tools_widget)
tools_widget.tool_required.connect(self._on_tool_require)
self._tools_widget = tools_widget
self._first_show = True
def sizeHint(self):
result = super(ToolsDialog, self).sizeHint()
result.setWidth(result.width() * 2)
return result
def showEvent(self, event):
super(ToolsDialog, self).showEvent(event)
if self._first_show:
self.setStyleSheet(style.load_stylesheet())
self._first_show = False
def _on_tool_require(self, tool_name):
host_tools.show_tool_by_name(tool_name, parent=self)
class ToolsPopup(ToolsDialog):
"""Popup with tool buttons that closes when it loses focus."""
def __init__(self, *args, **kwargs):
super(ToolsPopup, self).__init__(*args, **kwargs)
self.setWindowFlags(
QtCore.Qt.FramelessWindowHint
| QtCore.Qt.Popup
)
def showEvent(self, event):
super(ToolsPopup, self).showEvent(event)
app = QtWidgets.QApplication.instance()
app.processEvents()
pos = QtGui.QCursor.pos()
self.move(pos)
class WindowCache:
"""Cached objects and methods to be used in global scope."""
_dialog = None
_popup = None
_first_show = True
@classmethod
def _before_show(cls):
"""Create QApplication if does not exists yet."""
if not cls._first_show:
return
cls._first_show = False
if not QtWidgets.QApplication.instance():
QtWidgets.QApplication(sys.argv)
@classmethod
def show_popup(cls):
cls._before_show()
with qt_app_context():
if cls._popup is None:
cls._popup = ToolsPopup()
cls._popup.show()
@classmethod
def show_dialog(cls):
cls._before_show()
with qt_app_context():
if cls._dialog is None:
cls._dialog = ToolsDialog()
cls._dialog.show()
cls._dialog.raise_()
cls._dialog.activateWindow()
def show_tools_popup():
WindowCache.show_popup()
def show_tools_dialog():
WindowCache.show_dialog()


@@ -25,6 +25,7 @@ from .env_tools (
from .terminal import Terminal
from .execute import (
get_pype_execute_args,
get_linux_launcher_args,
execute,
run_subprocess,
path_to_subprocess_arg,
@@ -32,8 +33,6 @@ from .execute (
)
from .log import PypeLogger, timeit
from .mongo import (
decompose_url,
compose_url,
get_default_components,
validate_mongo_connection,
OpenPypeMongoConnection
@@ -150,7 +149,8 @@ from .path_tools (
get_version_from_path,
get_last_version_from_path,
create_project_folders,
get_project_basic_paths
create_workdir_extra_folders,
get_project_basic_paths,
)
from .editorial import (
@@ -174,6 +174,7 @@ terminal = Terminal
__all__ = [
"get_pype_execute_args",
"get_linux_launcher_args",
"execute",
"run_subprocess",
"path_to_subprocess_arg",
@@ -276,8 +277,6 @@ __all__ = [
"get_datetime_data",
"PypeLogger",
"decompose_url",
"compose_url",
"get_default_components",
"validate_mongo_connection",
"OpenPypeMongoConnection",
@@ -294,6 +293,7 @@ __all__ = [
"frames_to_timecode",
"make_sequence_collection",
"create_project_folders",
"create_workdir_extra_folders",
"get_project_basic_paths",
"get_openpype_version",


@@ -1,8 +1,8 @@
import os
import sys
import re
import copy
import json
import tempfile
import platform
import collections
import inspect
@@ -37,10 +37,102 @@ from .python_module_tools (
modules_from_path,
classes_from_module
)
from .execute import get_linux_launcher_args
_logger = None
PLATFORM_NAMES = {"windows", "linux", "darwin"}
DEFAULT_ENV_SUBGROUP = "standard"
def parse_environments(env_data, env_group=None, platform_name=None):
"""Parse environment values from settings byt group and platfrom.
Data may contain up to 2 hierarchical levels of dictionaries. At the end
of the last level must be string or list. List is joined using platform
specific joiner (';' for windows and ':' for linux and mac).
Hierarchical levels can contain keys for subgroups and platform name.
Platform specific values must be always last level of dictionary. Platform
names are "windows" (MS Windows), "linux" (any linux distribution) and
"darwin" (any MacOS distribution).
Subgroups are helpers added mainly for standard and on farm usage. Farm
may require different environments for e.g. licence related values or
plugins. Default subgroup is "standard".
Examples:
```
{
# Unchanged value
"ENV_KEY1": "value",
# Empty values are kept (unset environment variable)
"ENV_KEY2": "",
# Join list values with ':' or ';'
"ENV_KEY3": ["value1", "value2"],
# Environment groups
"ENV_KEY4": {
"standard": "DEMO_SERVER_URL",
"farm": "LICENCE_SERVER_URL"
},
# Platform specific (and only for windows and mac)
"ENV_KEY5": {
"windows": "windows value",
"darwin": ["value 1", "value 2"]
},
# Environment groups and platform combination
"ENV_KEY6": {
"farm": "FARM_VALUE",
"standard": {
"windows": ["value1", "value2"],
"linux": "value1",
"darwin": ""
}
}
}
```
"""
output = {}
if not env_data:
return output
if not env_group:
env_group = DEFAULT_ENV_SUBGROUP
if not platform_name:
platform_name = platform.system().lower()
for key, value in env_data.items():
if isinstance(value, dict):
# Look if any key is a platform key
# - assume the value represents an environment group if it does not
#   contain platform keys
if not PLATFORM_NAMES.intersection(set(value.keys())):
# Skip the key if group is not available
if env_group not in value:
continue
value = value[env_group]
# Check again if value is dictionary
# - this time there should be only platform keys
if isinstance(value, dict):
value = value.get(platform_name)
# Check if value is a list and join its values
# QUESTION Should empty values be skipped?
if isinstance(value, (list, tuple)):
value = os.pathsep.join(value)
# Set key to output if value is string
if isinstance(value, six.string_types):
output[key] = value
return output
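The resolution order above (environment group first, then platform, then list joining) can be condensed into a standalone sketch. `resolve_env` is an illustrative name, not part of the OpenPype API; this re-implements the logic of the diff for demonstration only:

```python
import os
import platform

PLATFORM_NAMES = {"windows", "linux", "darwin"}


def resolve_env(env_data, env_group="standard", platform_name=None):
    """Condensed sketch of the group/platform resolution logic."""
    platform_name = platform_name or platform.system().lower()
    output = {}
    for key, value in (env_data or {}).items():
        if isinstance(value, dict):
            # No platform keys -> the dict represents environment groups
            if not PLATFORM_NAMES.intersection(value):
                if env_group not in value:
                    continue
                value = value[env_group]
            # A remaining dict (if any) holds platform specific values
            if isinstance(value, dict):
                value = value.get(platform_name)
        # Join list values with the platform path separator
        if isinstance(value, (list, tuple)):
            value = os.pathsep.join(value)
        # Only string results end up in the output
        if isinstance(value, str):
            output[key] = value
    return output
```

Note how a key is silently dropped when its group is missing or its value resolves to a non-string, matching the `continue`/type-check behavior in the diff.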
def get_logger():
"""Global lib.applications logger getter."""
@@ -640,6 +732,10 @@ class LaunchHook:
def app_name(self):
return getattr(self.application, "full_name", None)
@property
def modules_manager(self):
return getattr(self.launch_context, "modules_manager", None)
def validate(self):
"""Optional validation of launch hook on initialization.
@@ -701,21 +797,32 @@ class ApplicationLaunchContext:
preparation to store objects usable in multiple places.
"""
def __init__(self, application, executable, **data):
def __init__(self, application, executable, env_group=None, **data):
from openpype.modules import ModulesManager
# Application object
self.application = application
self.modules_manager = ModulesManager()
# Logger
logger_name = "{}-{}".format(self.__class__.__name__, self.app_name)
self.log = PypeLogger.get_logger(logger_name)
self.executable = executable
if env_group is None:
env_group = DEFAULT_ENV_SUBGROUP
self.env_group = env_group
self.data = dict(data)
# subprocess.Popen launch arguments (first argument in constructor)
self.launch_args = executable.as_args()
self.launch_args.extend(application.arguments)
if self.data.get("app_args"):
self.launch_args.extend(self.data.pop("app_args"))
# Handle launch environments
env = self.data.pop("env", None)
@@ -810,10 +917,7 @@ class ApplicationLaunchContext:
paths.append(path)
# Load modules paths
from openpype.modules import ModulesManager
manager = ModulesManager()
paths.extend(manager.collect_launch_hook_paths())
paths.extend(self.modules_manager.collect_launch_hook_paths())
return paths
@@ -919,6 +1023,48 @@ class ApplicationLaunchContext:
def manager(self):
return self.application.manager
def _run_process(self):
# Windows and MacOS have easier process start
low_platform = platform.system().lower()
if low_platform in ("windows", "darwin"):
return subprocess.Popen(self.launch_args, **self.kwargs)
# Linux uses a mid process
# - it is possible that the mid process executable is not
#   available for this version of OpenPype; in that case use the
#   standard launch
launch_args = get_linux_launcher_args()
if launch_args is None:
return subprocess.Popen(self.launch_args, **self.kwargs)
# Prepare data that will be passed to midprocess
# - store arguments to a json and pass path to json as last argument
# - pass environments to set
json_data = {
"args": self.launch_args,
"env": self.kwargs.pop("env", {})
}
# Create temp file
json_temp = tempfile.NamedTemporaryFile(
mode="w", prefix="op_app_args", suffix=".json", delete=False
)
json_temp.close()
json_temp_filepath = json_temp.name
with open(json_temp_filepath, "w") as stream:
json.dump(json_data, stream)
launch_args.append(json_temp_filepath)
# Create mid-process which will launch application
process = subprocess.Popen(launch_args, **self.kwargs)
# Wait until the process finishes
# - This is important! The process would otherwise stay in "open" state.
process.wait()
# Remove the temp file
os.remove(json_temp_filepath)
# Return process which is already terminated
return process
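The handoff to the Linux mid process is essentially a JSON round trip: arguments and environment go into a temp file whose path becomes the launcher's last argument. A minimal sketch of that round trip, with hypothetical helper names (`write_launch_payload` and `read_launch_payload` are not OpenPype functions):

```python
import json
import os
import tempfile


def write_launch_payload(launch_args, env):
    """Serialize launch arguments and environment to a temp JSON file.

    Returns the file path, which is what would be appended to the
    mid-process launcher arguments.
    """
    payload = {"args": list(launch_args), "env": dict(env)}
    handle = tempfile.NamedTemporaryFile(
        mode="w", prefix="op_app_args", suffix=".json", delete=False
    )
    # Close first so the file can be reopened by name (Windows-safe)
    handle.close()
    with open(handle.name, "w") as stream:
        json.dump(payload, stream)
    return handle.name


def read_launch_payload(path, cleanup=True):
    """Read the payload back, as a mid process would, and clean up."""
    with open(path, "r") as stream:
        payload = json.load(stream)
    if cleanup:
        os.remove(path)
    return payload
```

Closing the `NamedTemporaryFile` before reopening it by name mirrors the diff and avoids the Windows restriction on opening an already-open temp file.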
def launch(self):
"""Collect data for new process and then create it.
@@ -955,8 +1101,10 @@ class ApplicationLaunchContext:
self.app_name, args_len_str, args
)
)
self.launch_args = args
# Run process
self.process = subprocess.Popen(args, **self.kwargs)
self.process = self._run_process()
# Process post launch hooks
for postlaunch_hook in self.postlaunch_hooks:
@@ -1045,7 +1193,7 @@ class EnvironmentPrepData(dict):
def get_app_environments_for_context(
project_name, asset_name, task_name, app_name, env=None
project_name, asset_name, task_name, app_name, env_group=None, env=None
):
"""Prepare environment variables by context.
Args:
@@ -1097,8 +1245,8 @@ def get_app_environments_for_context(
"env": env
})
prepare_host_environments(data)
prepare_context_environments(data)
prepare_host_environments(data, env_group)
prepare_context_environments(data, env_group)
# Discard avalon connection
dbcon.uninstall()
@@ -1118,7 +1266,7 @@ def _merge_env(env, current_env):
return result
def prepare_host_environments(data, implementation_envs=True):
def prepare_host_environments(data, env_group=None, implementation_envs=True):
"""Modify launch environments based on launched app and context.
Args:
@@ -1172,7 +1320,7 @@ def prepare_host_environments(data, implementation_envs=True):
continue
# Choose right platform
tool_env = acre.parse(_env_values)
tool_env = parse_environments(_env_values, env_group)
# Merge dictionaries
env_values = _merge_env(tool_env, env_values)
@@ -1204,7 +1352,9 @@ def prepare_host_environments(data, implementation_envs=True):
data["env"].pop(key, None)
def apply_project_environments_value(project_name, env, project_settings=None):
def apply_project_environments_value(
project_name, env, project_settings=None, env_group=None
):
"""Apply project specific environments on passed environments.
The environments are applied on the passed `env` argument value so it is not
@@ -1232,14 +1382,15 @@ def apply_project_environments_value(project_name, env, project_settings=None):
env_value = project_settings["global"]["project_environments"]
if env_value:
parsed_value = parse_environments(env_value, env_group)
env.update(acre.compute(
_merge_env(acre.parse(env_value), env),
_merge_env(parsed_value, env),
cleanup=False
))
return env
def prepare_context_environments(data):
def prepare_context_environments(data, env_group=None):
"""Modify launch environemnts with context data for launched host.
Args:
@@ -1269,7 +1420,7 @@ def prepare_context_environments(data):
data["project_settings"] = project_settings
# Apply project specific environments on current env value
apply_project_environments_value(
project_name, data["env"], project_settings
project_name, data["env"], project_settings, env_group
)
app = data["app"]


@@ -508,13 +508,18 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name):
Returns:
dict: Data prepared for filling workdir template.
"""
hierarchy = "/".join(asset_doc["data"]["parents"])
task_type = asset_doc['data']['tasks'].get(task_name, {}).get('type')
project_task_types = project_doc["config"]["tasks"]
task_code = project_task_types.get(task_type, {}).get("short_name")
asset_parents = asset_doc["data"]["parents"]
hierarchy = "/".join(asset_parents)
parent_name = project_doc["name"]
if asset_parents:
parent_name = asset_parents[-1]
data = {
"project": {
"name": project_doc["name"],
@@ -526,6 +531,7 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name):
"short": task_code,
},
"asset": asset_doc["name"],
"parent": parent_name,
"app": host_name,
"user": getpass.getuser(),
"hierarchy": hierarchy,
@@ -1427,7 +1433,11 @@ def get_creator_by_name(creator_name, case_sensitive=False):
@with_avalon
def change_timer_to_current_context():
"""Called after context change to change timers"""
"""Called after context change to change timers.
TODO:
- use TimersManager's static method instead of reimplementing it here
"""
webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
if not webserver_url:
log.warning("Couldn't find webserver url")
@@ -1442,8 +1452,7 @@ def change_timer_to_current_context():
data = {
"project_name": avalon.io.Session["AVALON_PROJECT"],
"asset_name": avalon.io.Session["AVALON_ASSET"],
"task_name": avalon.io.Session["AVALON_TASK"],
"hierarchy": get_hierarchy()
"task_name": avalon.io.Session["AVALON_TASK"]
}
requests.post(rest_api_url, json=data)


@@ -1,7 +1,6 @@
import os
import shlex
import subprocess
import platform
import distutils.spawn
from .log import PypeLogger as Logger
@@ -175,3 +174,46 @@ def get_pype_execute_args(*args):
pype_args.extend(args)
return pype_args
def get_linux_launcher_args(*args):
"""Path to application mid process executable.
This function is needed because the arguments differ when OpenPype is
used from code and from a build.
It is possible that this function is used in an OpenPype build which
does not yet contain the new executable. In that case 'None' is
returned.
Args:
args (iterable): List of additional arguments added after executable
argument.
Returns:
list: Executables with possible positional argument to script when
called from code.
"""
filename = "app_launcher"
openpype_executable = os.environ["OPENPYPE_EXECUTABLE"]
executable_filename = os.path.basename(openpype_executable)
if "python" in executable_filename.lower():
script_path = os.path.join(
os.environ["OPENPYPE_ROOT"],
"{}.py".format(filename)
)
launch_args = [openpype_executable, script_path]
else:
new_executable = os.path.join(
os.path.dirname(openpype_executable),
filename
)
executable_path = distutils.spawn.find_executable(new_executable)
if executable_path is None:
return None
launch_args = [executable_path]
if args:
launch_args.extend(args)
return launch_args
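The code-vs-build branching above can be sketched as a pure function of the executable and root paths (illustrative helper name; the `find_executable` availability check that returns `None` for old builds is omitted here for brevity):

```python
import os


def build_launcher_args(openpype_executable, openpype_root, extra_args=()):
    """Sketch of choosing mid-process launcher arguments.

    Running from code (a python interpreter) launches 'app_launcher.py'
    as a script; a build is expected to ship a sibling 'app_launcher'
    executable next to the OpenPype executable.
    """
    filename = "app_launcher"
    executable_filename = os.path.basename(openpype_executable)
    if "python" in executable_filename.lower():
        # From code: run the script with the same interpreter
        script_path = os.path.join(openpype_root, filename + ".py")
        launch_args = [openpype_executable, script_path]
    else:
        # From build: use the sibling launcher executable
        launch_args = [
            os.path.join(os.path.dirname(openpype_executable), filename)
        ]
    launch_args.extend(extra_args)
    return launch_args
```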


@@ -27,7 +27,7 @@ import copy
from . import Terminal
from .mongo import (
MongoEnvNotSet,
decompose_url,
get_default_components,
OpenPypeMongoConnection
)
try:
@@ -202,8 +202,9 @@ class PypeLogger:
use_mongo_logging = None
mongo_process_id = None
# Information about mongo url
log_mongo_url = None
# Backwards compatibility - was used in start.py
# TODO remove when all old builds are replaced with new one
# not using 'log_mongo_url_components'
log_mongo_url_components = None
# Database name in Mongo
@@ -282,9 +283,9 @@ class PypeLogger:
if not cls.use_mongo_logging:
return
components = cls.log_mongo_url_components
components = get_default_components()
kwargs = {
"host": cls.log_mongo_url,
"host": components["host"],
"database_name": cls.log_database_name,
"collection": cls.log_collection_name,
"username": components["username"],
@@ -324,6 +325,7 @@ class PypeLogger:
# Change initialization state to prevent runtime changes
# if is executed during runtime
cls.initialized = False
cls.log_mongo_url_components = get_default_components()
# Define if should logging to mongo be used
use_mongo_logging = bool(log4mongo is not None)
@@ -354,14 +356,8 @@ class PypeLogger:
# Define if is in OPENPYPE_DEBUG mode
cls.pype_debug = int(os.getenv("OPENPYPE_DEBUG") or "0")
# Mongo URL where logs will be stored
cls.log_mongo_url = os.environ.get("OPENPYPE_MONGO")
if not cls.log_mongo_url:
if not os.environ.get("OPENPYPE_MONGO"):
cls.use_mongo_logging = False
else:
# Decompose url
cls.log_mongo_url_components = decompose_url(cls.log_mongo_url)
# Mark as initialized
cls.initialized = True
@@ -474,7 +470,7 @@ class PypeLogger:
if not cls.initialized:
cls.initialize()
return OpenPypeMongoConnection.get_mongo_client(cls.log_mongo_url)
return OpenPypeMongoConnection.get_mongo_client()
def timeit(method):


@@ -15,7 +15,19 @@ class MongoEnvNotSet(Exception):
pass
def decompose_url(url):
def _decompose_url(url):
"""Decompose mongo url to basic components.
Used for creation of MongoHandler which expects mongo url components as
separate kwargs. The components end up unused, as we set the connection
directly; they are just dummy components so the MongoHandler validation
passes.
"""
# Use first url from passed url
# - this is because it is possible to pass multiple urls for multiple
#   replica sets which would otherwise crash urlparse
# - please don't use a comma in username or password
url = url.split(",")[0]
components = {
"scheme": None,
"host": None,
@@ -48,42 +60,13 @@ def decompose_url(url):
return components
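The decomposition itself (only the component skeleton appears in this hunk) can be approximated with `urllib.parse`. This is an illustrative sketch, not the OpenPype implementation, and it keeps the comma-split behavior described in the comments above:

```python
from urllib.parse import urlparse, parse_qs


def decompose_mongo_url(url):
    """Sketch: split a mongo URL into the components MongoHandler expects.

    Only the first host of a replica-set URL is used, mirroring the
    comma-split in the diff above.
    """
    url = url.split(",")[0]
    parsed = urlparse(url)
    query = parse_qs(parsed.query)
    return {
        "scheme": parsed.scheme,
        "host": parsed.hostname,
        "port": parsed.port,
        "username": parsed.username,
        "password": parsed.password,
        "auth_db": (query.get("authSource") or [None])[0],
    }
```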
def compose_url(scheme=None,
host=None,
username=None,
password=None,
port=None,
auth_db=None):
url = "{scheme}://"
if username and password:
url += "{username}:{password}@"
url += "{host}"
if port:
url += ":{port}"
if auth_db:
url += "?authSource={auth_db}"
return url.format(**{
"scheme": scheme,
"host": host,
"username": username,
"password": password,
"port": port,
"auth_db": auth_db
})
def get_default_components():
mongo_url = os.environ.get("OPENPYPE_MONGO")
if mongo_url is None:
raise MongoEnvNotSet(
"URL for Mongo logging connection is not set."
)
return decompose_url(mongo_url)
return _decompose_url(mongo_url)
def should_add_certificate_path_to_mongo_url(mongo_url):


@@ -1,13 +1,15 @@
import json
import logging
import os
import re
import abc
import json
import logging
import six
from openpype.settings import get_project_settings
from openpype.settings.lib import get_site_local_overrides
from .anatomy import Anatomy
from openpype.settings import get_project_settings
from .profiles_filtering import filter_profiles
log = logging.getLogger(__name__)
@@ -200,6 +202,58 @@ def get_project_basic_paths(project_name):
return _list_path_items(folder_structure)
def create_workdir_extra_folders(
workdir, host_name, task_type, task_name, project_name,
project_settings=None
):
"""Create extra folders in work directory based on context.
Args:
workdir (str): Path to workdir where workfiles is stored.
host_name (str): Name of host implementation.
task_type (str): Type of task for which extra folders should be
created.
task_name (str): Name of task for which extra folders should be
created.
project_name (str): Name of project on which task is.
project_settings (dict): Prepared project settings. Are loaded if not
passed.
"""
# Load project settings if not set
if not project_settings:
project_settings = get_project_settings(project_name)
# Load extra folders profiles
extra_folders_profiles = (
project_settings["global"]["tools"]["Workfiles"]["extra_folders"]
)
# Skip if there are no profiles
if not extra_folders_profiles:
return
# Prepare profiles filters
filter_data = {
"task_types": task_type,
"task_names": task_name,
"hosts": host_name
}
profile = filter_profiles(extra_folders_profiles, filter_data)
if profile is None:
return
for subfolder in profile["folders"]:
# Make sure backslashes are converted to forward slashes
# and that the subfolder does not start with a slash
subfolder = subfolder.replace("\\", "/").lstrip("/")
# Skip empty strings
if not subfolder:
continue
fullpath = os.path.join(workdir, subfolder)
if not os.path.exists(fullpath):
os.makedirs(fullpath)
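The normalization and creation loop at the end of `create_workdir_extra_folders` can be isolated into a small sketch (hypothetical helper name, without the settings/profile lookup):

```python
import os
import tempfile


def create_extra_folders(workdir, subfolders):
    """Sketch of the folder-creation loop: normalize separators,
    strip a leading slash, skip empty entries and create the paths."""
    created = []
    for subfolder in subfolders:
        # Backslashes become forward slashes; no leading slash allowed
        subfolder = subfolder.replace("\\", "/").lstrip("/")
        if not subfolder:
            continue
        fullpath = os.path.join(workdir, subfolder)
        if not os.path.exists(fullpath):
            os.makedirs(fullpath)
        created.append(fullpath)
    return created
```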
@six.add_metaclass(abc.ABCMeta)
class HostDirmap:
"""
@@ -307,7 +361,6 @@ class HostDirmap:
mapping = {}
if not project_settings["global"]["sync_server"]["enabled"]:
log.debug("Site Sync not enabled")
return mapping
from openpype.settings.lib import get_site_local_overrides


@@ -227,20 +227,27 @@ def filter_pyblish_plugins(plugins):
# iterate over plugins
for plugin in plugins[:]:
file = os.path.normpath(inspect.getsourcefile(plugin))
file = os.path.normpath(file)
# host determined from path
host_from_file = file.split(os.path.sep)[-4:-3][0]
plugin_kind = file.split(os.path.sep)[-2:-1][0]
# TODO: change after all plugins are moved one level up
if host_from_file == "openpype":
host_from_file = "global"
try:
config_data = presets[host]["publish"][plugin.__name__]
except KeyError:
# host determined from path
file = os.path.normpath(inspect.getsourcefile(plugin))
file = os.path.normpath(file)
split_path = file.split(os.path.sep)
if len(split_path) < 4:
log.warning(
'plugin path too short to extract host {}'.format(file)
)
continue
host_from_file = split_path[-4]
plugin_kind = split_path[-2]
# TODO: change after all plugins are moved one level up
if host_from_file == "openpype":
host_from_file = "global"
try:
config_data = presets[host_from_file][plugin_kind][plugin.__name__] # noqa: E501
except KeyError:
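The path-based host detection in the rewritten block can be sketched in isolation (hypothetical helper name; the plugin tree layout `<host>/plugins/<kind>/<file>.py` is assumed as in the code above):

```python
import os


def host_and_kind_from_path(plugin_path):
    """Derive host and plugin kind from a plugin source file path.

    Host is the 4th path component from the end, plugin kind the 2nd;
    'openpype' maps to the 'global' host, as in the diff above.
    """
    split_path = os.path.normpath(plugin_path).split(os.path.sep)
    if len(split_path) < 4:
        # Path too short to extract host
        return None, None
    host_from_file = split_path[-4]
    plugin_kind = split_path[-2]
    if host_from_file == "openpype":
        host_from_file = "global"
    return host_from_file, plugin_kind
```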


@@ -2,6 +2,7 @@ import os
from datetime import datetime
import sys
from bson.objectid import ObjectId
import collections
import pyblish.util
import pyblish.api
@@ -11,6 +12,25 @@ from openpype.lib.mongo import OpenPypeMongoConnection
from openpype.lib.plugin_tools import parse_json
def headless_publish(log, close_plugin_name=None, is_test=False):
"""Runs publish in a opened host with a context and closes Python process.
Host is being closed via ClosePS pyblish plugin which triggers 'exit'
method in ConsoleTrayApp.
"""
if not is_test:
dbcon = get_webpublish_conn()
_id = os.environ.get("BATCH_LOG_ID")
if not _id:
log.warning("Unable to store log records, "
"batch will be unfinished!")
return
publish_and_log(dbcon, _id, log, close_plugin_name)
else:
publish(log, close_plugin_name)
def get_webpublish_conn():
"""Get connection to OP 'webpublishes' collection."""
mongo_client = OpenPypeMongoConnection.get_mongo_client()
@@ -37,6 +57,33 @@ def start_webpublish_log(dbcon, batch_id, user):
}).inserted_id
def publish(log, close_plugin_name=None):
"""Loops through all plugins, logs to console. Used for tests.
Args:
log (OpenPypeLogger)
close_plugin_name (str): name of plugin with responsibility to
close host app
"""
# Error exit as soon as any error occurs.
error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}"
close_plugin = _get_close_plugin(close_plugin_name, log)
for result in pyblish.util.publish_iter():
for record in result["records"]:
log.info("{}: {}".format(
result["plugin"].label, record.msg))
if result["error"]:
log.error(error_format.format(**result))
uninstall()
if close_plugin: # close host app explicitly after error
context = pyblish.api.Context()
close_plugin().process(context)
sys.exit(1)
def publish_and_log(dbcon, _id, log, close_plugin_name=None):
"""Loops through all plugins, logs ok and fails into OP DB.
@@ -140,7 +187,9 @@ def find_variant_key(application_manager, host):
found_variant_key = None
# finds most up-to-date variant if any installed
for variant_key, variant in app_group.variants.items():
sorted_variants = collections.OrderedDict(
sorted(app_group.variants.items()))
for variant_key, variant in sorted_variants.items():
for executable in variant.executables:
if executable.exists():
found_variant_key = variant_key
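The sorted-variant selection can be sketched without the application objects; `exists` stands in for `executable.exists()` and the helper name is illustrative:

```python
import collections


def find_latest_existing_variant(variants, exists):
    """Pick the most up-to-date installed variant.

    Iterating in sorted key order means the highest variant key with an
    existing executable overwrites earlier matches and wins.
    """
    found_variant_key = None
    sorted_variants = collections.OrderedDict(sorted(variants.items()))
    for variant_key, executables in sorted_variants.items():
        if any(exists(executable) for executable in executables):
            found_variant_key = variant_key
    return found_variant_key
```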


@@ -13,14 +13,6 @@ class AvalonModule(OpenPypeModule, ITrayModule):
avalon_settings = modules_settings[self.name]
# Check if environment is already set
avalon_mongo_url = os.environ.get("AVALON_MONGO")
if not avalon_mongo_url:
avalon_mongo_url = avalon_settings["AVALON_MONGO"]
# Use pype mongo if Avalon's mongo not defined
if not avalon_mongo_url:
avalon_mongo_url = os.environ["OPENPYPE_MONGO"]
thumbnail_root = os.environ.get("AVALON_THUMBNAIL_ROOT")
if not thumbnail_root:
thumbnail_root = avalon_settings["AVALON_THUMBNAIL_ROOT"]
@@ -31,7 +23,6 @@ class AvalonModule(OpenPypeModule, ITrayModule):
avalon_mongo_timeout = avalon_settings["AVALON_TIMEOUT"]
self.thumbnail_root = thumbnail_root
self.avalon_mongo_url = avalon_mongo_url
self.avalon_mongo_timeout = avalon_mongo_timeout
# Tray attributes
@@ -51,12 +42,20 @@ class AvalonModule(OpenPypeModule, ITrayModule):
def tray_init(self):
# Add library tool
try:
from Qt import QtCore
from openpype.tools.libraryloader import LibraryLoaderWindow
self.libraryloader = LibraryLoaderWindow(
libraryloader = LibraryLoaderWindow(
show_projects=True,
show_libraries=True
)
# Remove always on top flag for tray
window_flags = libraryloader.windowFlags()
if window_flags & QtCore.Qt.WindowStaysOnTopHint:
window_flags ^= QtCore.Qt.WindowStaysOnTopHint
libraryloader.setWindowFlags(window_flags)
self.libraryloader = libraryloader
except Exception:
self.log.warning(
"Couldn't load Library loader tool for tray.",


@@ -29,6 +29,22 @@ from openpype.settings.lib import (
from openpype.lib import PypeLogger
DEFAULT_OPENPYPE_MODULES = (
"avalon_apps",
"clockify",
"log_viewer",
"muster",
"python_console_interpreter",
"slack",
"webserver",
"launcher_action",
"project_manager_action",
"settings_action",
"standalonepublish_action",
"job_queue",
)
# Inherit from `object` for Python 2 hosts
class _ModuleClass(object):
"""Fake module class for storing OpenPype modules.
@@ -272,17 +288,12 @@ def _load_modules():
log = PypeLogger.get_logger("ModulesLoader")
# Import default modules imported from 'openpype.modules'
for default_module_name in (
"settings_action",
"launcher_action",
"project_manager_action",
"standalonepublish_action",
):
for default_module_name in DEFAULT_OPENPYPE_MODULES:
try:
default_module = __import__(
"openpype.modules.{}".format(default_module_name),
fromlist=("", )
)
import_str = "openpype.modules.{}".format(default_module_name)
new_import_str = "{}.{}".format(modules_key, default_module_name)
default_module = __import__(import_str, fromlist=("", ))
sys.modules[new_import_str] = default_module
setattr(openpype_modules, default_module_name, default_module)
except Exception:
