mirror of
https://github.com/ynput/ayon-core.git
synced 2026-01-01 16:34:53 +01:00
Merge branch 'develop' into feature/OP-2361_Store-settings-by-OpenPype-version
commit 716bd5b492
272 changed files with 5678 additions and 1670 deletions
2
.github/workflows/test_build.yml
vendored
@@ -37,6 +37,7 @@ jobs:
      - name: 🔨 Build
        shell: pwsh
        run: |
          $env:SKIP_THIRD_PARTY_VALIDATION="1"
          ./tools/build.ps1

  Ubuntu-latest:
@@ -61,6 +62,7 @@ jobs:
      - name: 🔨 Build
        run: |
          export SKIP_THIRD_PARTY_VALIDATION="1"
          ./tools/build.sh

  # MacOS-latest:
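Both workflow hunks export `SKIP_THIRD_PARTY_VALIDATION="1"` before invoking the build scripts, so the third-party validation added in \#2425 is bypassed in CI test builds. As a hedged sketch (the function name is hypothetical; the actual check lives in `tools/build.ps1` / `tools/build.sh` and the build tooling), the gate such a script might implement is just an environment-variable test:

```python
import os


def should_validate_third_party(env=os.environ):
    """Return True unless validation is explicitly skipped.

    Mirrors the workflow pattern above: setting
    SKIP_THIRD_PARTY_VALIDATION="1" disables the check.
    (Hypothetical helper, not taken from the repository.)
    """
    return env.get("SKIP_THIRD_PARTY_VALIDATION") != "1"
```

Passing a mapping instead of reading `os.environ` directly keeps the check easy to exercise in tests.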
126
CHANGELOG.md
@@ -1,62 +1,84 @@
# Changelog

## [3.8.0-nightly.5](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.8.1-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.7.0...HEAD)

### 📖 Documentation

- Renamed to proper name [\#2546](https://github.com/pypeclub/OpenPype/pull/2546)
- Slack: Add review to notification message [\#2498](https://github.com/pypeclub/OpenPype/pull/2498)

**🆕 New features**

- Flame: OpenTimelineIO Export Module [\#2398](https://github.com/pypeclub/OpenPype/pull/2398)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.8.0...HEAD)

**🚀 Enhancements**

- Loader: Allow to toggle default family filters between "include" or "exclude" filtering [\#2541](https://github.com/pypeclub/OpenPype/pull/2541)

**🐛 Bug fixes**

- Webpublisher: Fix - subset names from processed .psd used wrong value for task [\#2586](https://github.com/pypeclub/OpenPype/pull/2586)
- `vrscene` creator Deadline webservice URL handling [\#2580](https://github.com/pypeclub/OpenPype/pull/2580)
- global: track name was failing if duplicated root word in name [\#2568](https://github.com/pypeclub/OpenPype/pull/2568)

**Merged pull requests:**

- build\(deps\): bump pillow from 8.4.0 to 9.0.0 [\#2523](https://github.com/pypeclub/OpenPype/pull/2523)

## [3.8.0](https://github.com/pypeclub/OpenPype/tree/3.8.0) (2022-01-24)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.8.0-nightly.7...3.8.0)
### 📖 Documentation

- Variable in docs renamed to proper name [\#2546](https://github.com/pypeclub/OpenPype/pull/2546)

**🆕 New features**

- Flame: extracting segments with trans-coding [\#2547](https://github.com/pypeclub/OpenPype/pull/2547)
- Maya: V-Ray Proxy - load all ABC files via proxy [\#2544](https://github.com/pypeclub/OpenPype/pull/2544)
- Maya to Unreal: Extended static mesh workflow [\#2537](https://github.com/pypeclub/OpenPype/pull/2537)
- Flame: collecting publishable instances [\#2519](https://github.com/pypeclub/OpenPype/pull/2519)
- Flame: create publishable clips [\#2495](https://github.com/pypeclub/OpenPype/pull/2495)

**🚀 Enhancements**

- Webpublisher: Moved error to the beginning of the log [\#2559](https://github.com/pypeclub/OpenPype/pull/2559)
- Ftrack: Use ApplicationManager to get DJV path [\#2558](https://github.com/pypeclub/OpenPype/pull/2558)
- Webpublisher: Added endpoint to reprocess batch through UI [\#2555](https://github.com/pypeclub/OpenPype/pull/2555)
- Settings: PathInput strip passed string [\#2550](https://github.com/pypeclub/OpenPype/pull/2550)
- Global: Extract Review anatomy fill data with output name [\#2548](https://github.com/pypeclub/OpenPype/pull/2548)
- Cosmetics: Clean up some cosmetics / typos [\#2542](https://github.com/pypeclub/OpenPype/pull/2542)
- General: Validate if current process OpenPype version is requested version [\#2529](https://github.com/pypeclub/OpenPype/pull/2529)
- General: Be able to use anatomy data in ffmpeg output arguments [\#2525](https://github.com/pypeclub/OpenPype/pull/2525)
- Expose toggle publish plug-in settings for Maya Look Shading Engine Naming [\#2521](https://github.com/pypeclub/OpenPype/pull/2521)
- Photoshop: Move implementation to OpenPype [\#2510](https://github.com/pypeclub/OpenPype/pull/2510)
- TimersManager: Move module one hierarchy higher [\#2501](https://github.com/pypeclub/OpenPype/pull/2501)
- Slack: notifications are sent with OpenPype logo and bot name [\#2499](https://github.com/pypeclub/OpenPype/pull/2499)
- Ftrack: Event handlers settings [\#2496](https://github.com/pypeclub/OpenPype/pull/2496)
- Flame - create publishable clips [\#2495](https://github.com/pypeclub/OpenPype/pull/2495)
- Tools: Fix style and modality of errors in loader and creator [\#2489](https://github.com/pypeclub/OpenPype/pull/2489)
- Project Manager: Remove project button cleanup [\#2482](https://github.com/pypeclub/OpenPype/pull/2482)
- Tools: Be able to change models of tasks and assets widgets [\#2475](https://github.com/pypeclub/OpenPype/pull/2475)
- Publish pype: Reduce publish process deferring [\#2464](https://github.com/pypeclub/OpenPype/pull/2464)
- Maya: Improve speed of Collect History logic [\#2460](https://github.com/pypeclub/OpenPype/pull/2460)
- Maya: Validate Rig Controllers - fix Error: in script editor [\#2459](https://github.com/pypeclub/OpenPype/pull/2459)
- Maya: Optimize Validate Locked Normals speed for dense polymeshes [\#2457](https://github.com/pypeclub/OpenPype/pull/2457)
- Fix \#2453 Refactor missing \_get\_reference\_node method [\#2455](https://github.com/pypeclub/OpenPype/pull/2455)
- Houdini: Remove broken unique name counter [\#2450](https://github.com/pypeclub/OpenPype/pull/2450)
- Maya: Improve lib.polyConstraint performance when Select tool is not the active tool context [\#2447](https://github.com/pypeclub/OpenPype/pull/2447)
- Slack: Add review to notification message [\#2498](https://github.com/pypeclub/OpenPype/pull/2498)
- Maya: Collect 'fps' animation data only for "review" instances [\#2486](https://github.com/pypeclub/OpenPype/pull/2486)
- General: Validate third party before build [\#2425](https://github.com/pypeclub/OpenPype/pull/2425)
- Maya: add option to not group reference in ReferenceLoader [\#2383](https://github.com/pypeclub/OpenPype/pull/2383)
**🐛 Bug fixes**

- AfterEffects: Fix - removed obsolete import [\#2577](https://github.com/pypeclub/OpenPype/pull/2577)
- General: OpenPype version updates [\#2575](https://github.com/pypeclub/OpenPype/pull/2575)
- Ftrack: Delete action revision [\#2563](https://github.com/pypeclub/OpenPype/pull/2563)
- Webpublisher: ftrack shows incorrect user names [\#2560](https://github.com/pypeclub/OpenPype/pull/2560)
- General: Do not validate version if build does not support it [\#2557](https://github.com/pypeclub/OpenPype/pull/2557)
- Webpublisher: Fixed progress reporting [\#2553](https://github.com/pypeclub/OpenPype/pull/2553)
- Fix Maya AssProxyLoader version switch [\#2551](https://github.com/pypeclub/OpenPype/pull/2551)
- General: Fix install thread in igniter [\#2549](https://github.com/pypeclub/OpenPype/pull/2549)
- Houdini: vdbcache family preserve frame numbers on publish integration + enable validate version for Houdini [\#2535](https://github.com/pypeclub/OpenPype/pull/2535)
- Maya: Fix Load VDB to V-Ray [\#2533](https://github.com/pypeclub/OpenPype/pull/2533)
- Maya: ReferenceLoader fix not unique group name error for attach to root [\#2532](https://github.com/pypeclub/OpenPype/pull/2532)
- Maya: namespaced context go back to original namespace when started from inside a namespace [\#2531](https://github.com/pypeclub/OpenPype/pull/2531)
- Fix create zip tool - path argument [\#2522](https://github.com/pypeclub/OpenPype/pull/2522)
- Maya: Fix Extract Look with space in names [\#2518](https://github.com/pypeclub/OpenPype/pull/2518)
- Fix published frame content for sequence starting with 0 [\#2513](https://github.com/pypeclub/OpenPype/pull/2513)
- Fix \#2497: reset empty string attributes correctly to "" instead of "None" [\#2506](https://github.com/pypeclub/OpenPype/pull/2506)
- General: Settings work if OpenPypeVersion is available [\#2494](https://github.com/pypeclub/OpenPype/pull/2494)
- General: PYTHONPATH may break OpenPype dependencies [\#2493](https://github.com/pypeclub/OpenPype/pull/2493)
- Workfiles tool: Files widget show files on first show [\#2488](https://github.com/pypeclub/OpenPype/pull/2488)
- General: Custom template paths filter fix [\#2483](https://github.com/pypeclub/OpenPype/pull/2483)
- Loader: Remove always on top flag in tray [\#2480](https://github.com/pypeclub/OpenPype/pull/2480)
- General: Anatomy does not return root envs as unicode [\#2465](https://github.com/pypeclub/OpenPype/pull/2465)
- Maya: reset empty string attributes correctly to "" instead of "None" [\#2506](https://github.com/pypeclub/OpenPype/pull/2506)
- Improve FusionPreLaunch hook errors [\#2505](https://github.com/pypeclub/OpenPype/pull/2505)
- Maya: Validate Shape Zero do not keep fixed geometry vertices selected/active after repair [\#2456](https://github.com/pypeclub/OpenPype/pull/2456)

**Merged pull requests:**

- General: Fix install thread in igniter [\#2549](https://github.com/pypeclub/OpenPype/pull/2549)
- AfterEffects: Move implementation to OpenPype [\#2543](https://github.com/pypeclub/OpenPype/pull/2543)
- Fix create zip tool - path argument [\#2522](https://github.com/pypeclub/OpenPype/pull/2522)
- General: Modules import function output fix [\#2492](https://github.com/pypeclub/OpenPype/pull/2492)
- AE: fix hiding of alert window below Publish [\#2491](https://github.com/pypeclub/OpenPype/pull/2491)
- Maya: Validate NGONs re-use polyConstraint code from openpype.host.maya.api.lib [\#2458](https://github.com/pypeclub/OpenPype/pull/2458)
- Maya: Remove Maya Look Assigner check on startup [\#2540](https://github.com/pypeclub/OpenPype/pull/2540)
- build\(deps\): bump shelljs from 0.8.4 to 0.8.5 in /website [\#2538](https://github.com/pypeclub/OpenPype/pull/2538)
- build\(deps\): bump follow-redirects from 1.14.4 to 1.14.7 in /website [\#2534](https://github.com/pypeclub/OpenPype/pull/2534)
- Nuke: Merge avalon's implementation into OpenPype [\#2514](https://github.com/pypeclub/OpenPype/pull/2514)

## [3.7.0](https://github.com/pypeclub/OpenPype/tree/3.7.0) (2022-01-04)
@@ -66,44 +88,14 @@
- General: Workdir extra folders [\#2462](https://github.com/pypeclub/OpenPype/pull/2462)
- Photoshop: New style validations for New publisher [\#2429](https://github.com/pypeclub/OpenPype/pull/2429)
- General: Environment variables groups [\#2424](https://github.com/pypeclub/OpenPype/pull/2424)
- Unreal: Dynamic menu created in Python [\#2422](https://github.com/pypeclub/OpenPype/pull/2422)
- Settings UI: Hyperlinks to settings [\#2420](https://github.com/pypeclub/OpenPype/pull/2420)
- Modules: JobQueue module moved one hierarchy level higher [\#2419](https://github.com/pypeclub/OpenPype/pull/2419)
- TimersManager: Start timer post launch hook [\#2418](https://github.com/pypeclub/OpenPype/pull/2418)
- General: Run applications as separate processes under linux [\#2408](https://github.com/pypeclub/OpenPype/pull/2408)
- Ftrack: Check existence of object type on recreation [\#2404](https://github.com/pypeclub/OpenPype/pull/2404)
- Enhancement: Global cleanup plugin that explicitly remove paths from context [\#2402](https://github.com/pypeclub/OpenPype/pull/2402)
- General: MongoDB ability to specify replica set groups [\#2401](https://github.com/pypeclub/OpenPype/pull/2401)
- Flame: moving `utility\_scripts` to api folder also with `scripts` [\#2385](https://github.com/pypeclub/OpenPype/pull/2385)
- Centos 7 dependency compatibility [\#2384](https://github.com/pypeclub/OpenPype/pull/2384)
- Enhancement: Settings: Use project settings values from another project [\#2382](https://github.com/pypeclub/OpenPype/pull/2382)
- Blender 3: Support auto install for new blender version [\#2377](https://github.com/pypeclub/OpenPype/pull/2377)
- Maya add render image path to settings [\#2375](https://github.com/pypeclub/OpenPype/pull/2375)

**🐛 Bug fixes**

- TVPaint: Create render layer dialog is in front [\#2471](https://github.com/pypeclub/OpenPype/pull/2471)
- Short Pyblish plugin path [\#2428](https://github.com/pypeclub/OpenPype/pull/2428)
- PS: Introduced settings for invalid characters to use in ValidateNaming plugin [\#2417](https://github.com/pypeclub/OpenPype/pull/2417)
- Settings UI: Breadcrumbs path does not create new entities [\#2416](https://github.com/pypeclub/OpenPype/pull/2416)
- AfterEffects: Variant 2022 is in defaults but missing in schemas [\#2412](https://github.com/pypeclub/OpenPype/pull/2412)
- Nuke: baking representations was not additive [\#2406](https://github.com/pypeclub/OpenPype/pull/2406)
- General: Fix access to environments from default settings [\#2403](https://github.com/pypeclub/OpenPype/pull/2403)
- Fix: Placeholder Input color set fix [\#2399](https://github.com/pypeclub/OpenPype/pull/2399)
- Settings: Fix state change of wrapper label [\#2396](https://github.com/pypeclub/OpenPype/pull/2396)
- Flame: fix ftrack publisher [\#2381](https://github.com/pypeclub/OpenPype/pull/2381)
- hiero: solve custom ocio path [\#2379](https://github.com/pypeclub/OpenPype/pull/2379)
- hiero: fix workio and flatten [\#2378](https://github.com/pypeclub/OpenPype/pull/2378)
- Nuke: fixing menu re-drawing during context change [\#2374](https://github.com/pypeclub/OpenPype/pull/2374)
- Webpublisher: Fix assignment of families of TVpaint instances [\#2373](https://github.com/pypeclub/OpenPype/pull/2373)

**Merged pull requests:**

- Forced cx\_freeze to include sqlite3 into build [\#2432](https://github.com/pypeclub/OpenPype/pull/2432)
- Maya: Replaced PATH usage with vendored oiio path for maketx utility [\#2405](https://github.com/pypeclub/OpenPype/pull/2405)
- \[Fix\]\[MAYA\] Handle message type attribute within CollectLook [\#2394](https://github.com/pypeclub/OpenPype/pull/2394)
- Add validator to check correct version of extension for PS and AE [\#2387](https://github.com/pypeclub/OpenPype/pull/2387)

## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)
@@ -3,7 +3,6 @@ import re
import tempfile
import attr

from avalon import aftereffects
import pyblish.api

from openpype.settings import get_project_settings

@@ -159,7 +158,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
            in url

        Returns:
            (list) of absolut urls to rendered file
            (list) of absolute urls to rendered file
        """
        start = render_instance.frameStart
        end = render_instance.frameEnd
@@ -5,11 +5,8 @@ def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
|
||||
# Prepare path to implementation script
|
||||
implementation_user_script_path = os.path.join(
|
||||
os.environ["OPENPYPE_REPOS_ROOT"],
|
||||
"repos",
|
||||
"avalon-core",
|
||||
"setup",
|
||||
"blender"
|
||||
os.path.dirname(os.path.abspath(__file__)),
|
||||
"blender_addon"
|
||||
)
|
||||
|
||||
# Add blender implementation script path to PYTHONPATH
|
||||
|
|
@@ -1,94 +1,64 @@
import os
import sys
import traceback
"""Public API

import bpy
Anything that isn't defined here is INTERNAL and unreliable for external use.

from .lib import append_user_scripts
"""

from avalon import api as avalon
from pyblish import api as pyblish
from .pipeline import (
    install,
    uninstall,
    ls,
    publish,
    containerise,
)

import openpype.hosts.blender
from .plugin import (
    Creator,
    Loader,
)

HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.blender.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
from .workio import (
    open_file,
    save_file,
    current_file,
    has_unsaved_changes,
    file_extensions,
    work_root,
)

ORIGINAL_EXCEPTHOOK = sys.excepthook
from .lib import (
    lsattr,
    lsattrs,
    read,
    maintained_selection,
    get_selection,
    # unique_name,
)


def pype_excepthook_handler(*args):
    traceback.print_exception(*args)
__all__ = [
    "install",
    "uninstall",
    "ls",
    "publish",
    "containerise",

    "Creator",
    "Loader",

def install():
    """Install Blender configuration for Avalon."""
    sys.excepthook = pype_excepthook_handler
    pyblish.register_plugin_path(str(PUBLISH_PATH))
    avalon.register_plugin_path(avalon.Loader, str(LOAD_PATH))
    avalon.register_plugin_path(avalon.Creator, str(CREATE_PATH))
    append_user_scripts()
    avalon.on("new", on_new)
    avalon.on("open", on_open)
    # Workfiles API
    "open_file",
    "save_file",
    "current_file",
    "has_unsaved_changes",
    "file_extensions",
    "work_root",


def uninstall():
    """Uninstall Blender configuration for Avalon."""
    sys.excepthook = ORIGINAL_EXCEPTHOOK
    pyblish.deregister_plugin_path(str(PUBLISH_PATH))
    avalon.deregister_plugin_path(avalon.Loader, str(LOAD_PATH))
    avalon.deregister_plugin_path(avalon.Creator, str(CREATE_PATH))


def set_start_end_frames():
    from avalon import io

    asset_name = io.Session["AVALON_ASSET"]
    asset_doc = io.find_one({
        "type": "asset",
        "name": asset_name
    })

    scene = bpy.context.scene

    # Default scene settings
    frameStart = scene.frame_start
    frameEnd = scene.frame_end
    fps = scene.render.fps
    resolution_x = scene.render.resolution_x
    resolution_y = scene.render.resolution_y

    # Check if settings are set
    data = asset_doc.get("data")

    if not data:
        return

    if data.get("frameStart"):
        frameStart = data.get("frameStart")
    if data.get("frameEnd"):
        frameEnd = data.get("frameEnd")
    if data.get("fps"):
        fps = data.get("fps")
    if data.get("resolutionWidth"):
        resolution_x = data.get("resolutionWidth")
    if data.get("resolutionHeight"):
        resolution_y = data.get("resolutionHeight")

    scene.frame_start = frameStart
    scene.frame_end = frameEnd
    scene.render.fps = fps
    scene.render.resolution_x = resolution_x
    scene.render.resolution_y = resolution_y


def on_new(arg1, arg2):
    set_start_end_frames()


def on_open(arg1, arg2):
    set_start_end_frames()
    # Utility functions
    "maintained_selection",
    "lsattr",
    "lsattrs",
    "read",
    "get_selection",
    # "unique_name",
]
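The removed `set_start_end_frames` helper above applies asset-document values over scene defaults, keeping the default wherever the document is missing or falsy. The same merge rule can be sketched as a pure, Blender-free function (names and the defaults dict are illustrative, not from the repository):

```python
def apply_asset_overrides(defaults, data):
    """Merge asset-document settings over scene defaults.

    Mirrors the logic of the removed set_start_end_frames(): an
    override is applied only when the asset document provides a
    truthy value for it. (Hypothetical helper for illustration.)
    """
    # Map of asset-document keys to scene-setting keys
    key_map = {
        "frameStart": "frame_start",
        "frameEnd": "frame_end",
        "fps": "fps",
        "resolutionWidth": "resolution_x",
        "resolutionHeight": "resolution_y",
    }
    result = dict(defaults)
    if not data:
        return result
    for doc_key, scene_key in key_map.items():
        if data.get(doc_key):
            result[scene_key] = data[doc_key]
    return result
```

Note that, as in the original, a value of `0` would be treated as "not set" because the check is truthiness, not presence.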
@@ -25,7 +25,7 @@ class SelectInvalidAction(pyblish.api.Action):
            invalid.extend(invalid_nodes)
        else:
            self.log.warning(
                "Failed plug-in doens't have any selectable objects."
                "Failed plug-in doesn't have any selectable objects."
            )

        bpy.ops.object.select_all(action='DESELECT')
BIN
openpype/hosts/blender/api/icons/pyblish-32x32.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 632 B
@@ -1,15 +1,22 @@
import os
import traceback
import importlib
import contextlib
from typing import Dict, List, Union

import bpy
import addon_utils
from openpype.api import Logger

from . import pipeline

log = Logger.get_logger(__name__)


def load_scripts(paths):
    """Copy of `load_scripts` from Blender's implementation.

    It is possible that whis function will be changed in future and usage will
    It is possible that this function will be changed in future and usage will
    be based on Blender version.
    """
    import bpy_types
@@ -125,3 +132,155 @@ def append_user_scripts():
    except Exception:
        print("Couldn't load user scripts \"{}\"".format(user_scripts))
        traceback.print_exc()


def imprint(node: bpy.types.bpy_struct_meta_idprop, data: Dict):
    r"""Write `data` to `node` as userDefined attributes

    Arguments:
        node: Long name of node
        data: Dictionary of key/value pairs

    Example:
        >>> import bpy
        >>> def compute():
        ...     return 6
        ...
        >>> bpy.ops.mesh.primitive_cube_add()
        >>> cube = bpy.context.view_layer.objects.active
        >>> imprint(cube, {
        ...     "regularString": "myFamily",
        ...     "computedValue": lambda: compute()
        ... })
        ...
        >>> cube['avalon']['computedValue']
        6
    """

    imprint_data = dict()

    for key, value in data.items():
        if value is None:
            continue

        if callable(value):
            # Support values evaluated at imprint
            value = value()

        if not isinstance(value, (int, float, bool, str, list)):
            raise TypeError(f"Unsupported type: {type(value)}")

        imprint_data[key] = value

    pipeline.metadata_update(node, imprint_data)
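The coercion step inside `imprint` (skip `None`, evaluate callables, reject unsupported types) needs no Blender at all, so it can be exercised standalone. A minimal sketch of just that step (hypothetical function name; the real code then hands the dict to `pipeline.metadata_update`):

```python
from typing import Dict


def coerce_imprint_data(data: Dict) -> Dict:
    """Evaluate callables and validate types, as imprint() does
    before writing to the node's custom properties. (Standalone
    sketch, not part of the module above.)"""
    result = {}
    for key, value in data.items():
        if value is None:
            continue  # None entries are skipped entirely
        if callable(value):
            value = value()  # support values evaluated at imprint time
        if not isinstance(value, (int, float, bool, str, list)):
            raise TypeError(f"Unsupported type: {type(value)}")
        result[key] = value
    return result
```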


def lsattr(attr: str,
           value: Union[str, int, bool, List, Dict, None] = None) -> List:
    r"""Return nodes matching `attr` and `value`

    Arguments:
        attr: Name of Blender property
        value: Value of attribute. If none
            is provided, return all nodes with this attribute.

    Example:
        >>> lsattr("id", "myId")
        ... [bpy.data.objects["myNode"]]
        >>> lsattr("id")
        ... [bpy.data.objects["myNode"], bpy.data.objects["myOtherNode"]]

    Returns:
        list
    """

    return lsattrs({attr: value})


def lsattrs(attrs: Dict) -> List:
    r"""Return nodes with the given attribute(s).

    Arguments:
        attrs: Name and value pairs of expected matches

    Example:
        >>> lsattrs({"age": 5})  # Return nodes with an `age` of 5
        # Return nodes with both `age` and `color` of 5 and blue
        >>> lsattrs({"age": 5, "color": "blue"})

    Returns:
        list
    """

    # For now return all objects, not filtered by scene/collection/view_layer.
    matches = set()
    for coll in dir(bpy.data):
        if not isinstance(
                getattr(bpy.data, coll),
                bpy.types.bpy_prop_collection,
        ):
            continue
        for node in getattr(bpy.data, coll):
            for attr, value in attrs.items():
                avalon_prop = node.get(pipeline.AVALON_PROPERTY)
                if not avalon_prop:
                    continue
                if (avalon_prop.get(attr)
                        and (value is None or avalon_prop.get(attr) == value)):
                    matches.add(node)
    return list(matches)
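The matching rule above (a node matches when its avalon metadata has the attribute and, if a value is given, the values are equal) can be sketched over plain dicts, which makes the behavior easy to verify without `bpy` (names here are illustrative; only the property key mirrors the code above):

```python
from typing import Dict, List

AVALON_PROPERTY = "avalon"  # same property name the module queries


def matching_nodes(nodes: List[Dict], attrs: Dict) -> List[Dict]:
    """Sketch of the lsattrs() matching rule over plain dicts.

    A node matches if its avalon metadata has a truthy value for the
    attribute and, when a value is given, that value compares equal.
    """
    matches = []
    for node in nodes:
        avalon_prop = node.get(AVALON_PROPERTY)
        if not avalon_prop:
            continue  # nodes without avalon metadata never match
        for attr, value in attrs.items():
            if (avalon_prop.get(attr)
                    and (value is None or avalon_prop.get(attr) == value)):
                if node not in matches:
                    matches.append(node)
    return matches
```

As in the original, passing `None` as the value returns every node that has the attribute at all.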


def read(node: bpy.types.bpy_struct_meta_idprop):
    """Return user-defined attributes from `node`"""

    data = dict(node.get(pipeline.AVALON_PROPERTY))

    # Ignore hidden/internal data
    data = {
        key: value
        for key, value in data.items() if not key.startswith("_")
    }

    return data


def get_selection() -> List[bpy.types.Object]:
    """Return the selected objects from the current scene."""
    return [obj for obj in bpy.context.scene.objects if obj.select_get()]
@contextlib.contextmanager
def maintained_selection():
    r"""Maintain selection during context

    Example:
        >>> with maintained_selection():
        ...     # Modify selection
        ...     bpy.ops.object.select_all(action='DESELECT')
        >>> # Selection restored
    """

    previous_selection = get_selection()
    previous_active = bpy.context.view_layer.objects.active
    try:
        yield
    finally:
        # Clear the selection
        for node in get_selection():
            node.select_set(state=False)
        if previous_selection:
            for node in previous_selection:
                try:
                    node.select_set(state=True)
                except ReferenceError:
                    # This could happen if a selected node was deleted during
                    # the context.
                    log.exception("Failed to reselect")
                    continue
        try:
            bpy.context.view_layer.objects.active = previous_active
        except ReferenceError:
            # This could happen if the active node was deleted during the
            # context.
            log.exception("Failed to set active object.")
410
openpype/hosts/blender/api/ops.py
Normal file
@@ -0,0 +1,410 @@
"""Blender operators and menus for use with Avalon."""
|
||||
|
||||
import os
|
||||
import sys
|
||||
import platform
|
||||
import time
|
||||
import traceback
|
||||
import collections
|
||||
from pathlib import Path
|
||||
from types import ModuleType
|
||||
from typing import Dict, List, Optional, Union
|
||||
|
||||
from Qt import QtWidgets, QtCore
|
||||
|
||||
import bpy
|
||||
import bpy.utils.previews
|
||||
|
||||
import avalon.api
|
||||
from openpype.tools.utils import host_tools
|
||||
from openpype import style
|
||||
|
||||
from .workio import OpenFileCacher
|
||||
|
||||
PREVIEW_COLLECTIONS: Dict = dict()
|
||||
|
||||
# This seems like a good value to keep the Qt app responsive and doesn't slow
|
||||
# down Blender. At least on macOS I the interace of Blender gets very laggy if
|
||||
# you make it smaller.
|
||||
TIMER_INTERVAL: float = 0.01
|
||||
|
||||
|
||||
class BlenderApplication(QtWidgets.QApplication):
|
||||
_instance = None
|
||||
blender_windows = {}
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super(BlenderApplication, self).__init__(*args, **kwargs)
|
||||
self.setQuitOnLastWindowClosed(False)
|
||||
|
||||
self.setStyleSheet(style.load_stylesheet())
|
||||
self.lastWindowClosed.connect(self.__class__.reset)
|
||||
|
||||
@classmethod
|
||||
def get_app(cls):
|
||||
if cls._instance is None:
|
||||
cls._instance = cls(sys.argv)
|
||||
return cls._instance
|
||||
|
||||
@classmethod
|
||||
def reset(cls):
|
||||
cls._instance = None
|
||||
|
||||
@classmethod
|
||||
def store_window(cls, identifier, window):
|
||||
current_window = cls.get_window(identifier)
|
||||
cls.blender_windows[identifier] = window
|
||||
if current_window:
|
||||
current_window.close()
|
||||
# current_window.deleteLater()
|
||||
|
||||
@classmethod
|
||||
def get_window(cls, identifier):
|
||||
return cls.blender_windows.get(identifier)
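`store_window`/`get_window` implement a small registry: one window per tool identifier, and storing a replacement closes whatever it displaces. The pattern isolated from Qt and Blender (class and method names are illustrative; `window` only needs a `close()` method):

```python
class WindowRegistry:
    """Sketch of the store/get pattern used by BlenderApplication:
    keep one window per tool identifier and close any previous
    window that a new one replaces."""

    def __init__(self):
        self._windows = {}

    def store_window(self, identifier, window):
        current = self._windows.get(identifier)
        self._windows[identifier] = window
        if current:
            current.close()  # displaced window is closed, not leaked

    def get_window(self, identifier):
        return self._windows.get(identifier)
```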


class MainThreadItem:
    """Structure to store information about callback in main thread.

    Item should be used to execute callback in main thread which may be needed
    for execution of Qt objects.

    Item stores callback (callable variable), arguments and keyword arguments
    for the callback. Item holds information about its process.
    """
    not_set = object()
    sleep_time = 0.1

    def __init__(self, callback, *args, **kwargs):
        self.done = False
        self.exception = self.not_set
        self.result = self.not_set
        self.callback = callback
        self.args = args
        self.kwargs = kwargs

    def execute(self):
        """Execute callback and store its result.

        Method must be called from main thread. Item is marked as `done`
        when callback execution finished. Stores output of callback or
        exception information when callback raises one.
        """
        print("Executing process in main thread")
        if self.done:
            print("- item is already processed")
            return

        callback = self.callback
        args = self.args
        kwargs = self.kwargs
        print("Running callback: {}".format(str(callback)))
        try:
            result = callback(*args, **kwargs)
            self.result = result

        except Exception:
            self.exception = sys.exc_info()

        finally:
            print("Done")
            self.done = True

    def wait(self):
        """Wait for result from main thread.

        This method stops current thread until callback is executed.

        Returns:
            object: Output of callback. May be any type or object.

        Raises:
            Exception: Reraise any exception that happened during callback
                execution.
        """
        while not self.done:
            print(self.done)
            time.sleep(self.sleep_time)

        if self.exception is self.not_set:
            return self.result
        # `exception` holds a `sys.exc_info()` tuple; re-raise the exception
        raise self.exception[1].with_traceback(self.exception[2])
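`MainThreadItem` decouples where a callback is *requested* (a worker thread) from where it is *executed* (Blender's main thread, via the timer below). The same pattern can be sketched without Blender or Qt, using a `threading.Event` instead of sleep-polling as one possible design choice (all names here are illustrative):

```python
import collections
import threading


class Task:
    """Minimal MainThreadItem-style task: a worker enqueues it, the
    main loop executes it, wait() blocks until it is done and
    re-raises any stored exception."""
    _not_set = object()

    def __init__(self, callback, *args, **kwargs):
        self._callback = callback
        self._args = args
        self._kwargs = kwargs
        self.result = self._not_set
        self.error = None
        self._done = threading.Event()

    def execute(self):
        try:
            self.result = self._callback(*self._args, **self._kwargs)
        except Exception as exc:
            self.error = exc  # stored, re-raised in the caller's thread
        finally:
            self._done.set()

    def wait(self):
        self._done.wait()
        if self.error is not None:
            raise self.error
        return self.result


queue = collections.deque()


def submit(callback, *args):
    """What execute_in_main_thread() does: append and return the task."""
    task = Task(callback, *args)
    queue.append(task)
    return task


def process_queue():
    """What the registered Blender timer does on each tick."""
    while queue:
        queue.popleft().execute()
```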


class GlobalClass:
    app = None
    main_thread_callbacks = collections.deque()
    is_windows = platform.system().lower() == "windows"


def execute_in_main_thread(main_thread_item):
    print("execute_in_main_thread")
    GlobalClass.main_thread_callbacks.append(main_thread_item)


def _process_app_events() -> Optional[float]:
    """Process the events of the Qt app if the window is still visible.

    If the app has any top level windows and at least one of them is visible
    return the time after which this function should be run again. Else return
    None, so the function is not run again and will be unregistered.
    """
    while GlobalClass.main_thread_callbacks:
        main_thread_item = GlobalClass.main_thread_callbacks.popleft()
        main_thread_item.execute()
        if main_thread_item.exception is not MainThreadItem.not_set:
            _clc, val, tb = main_thread_item.exception
            msg = str(val)
            detail = "\n".join(traceback.format_exception(_clc, val, tb))
            dialog = QtWidgets.QMessageBox(
                QtWidgets.QMessageBox.Warning,
                "Error",
                msg)
            dialog.setMinimumWidth(500)
            dialog.setDetailedText(detail)
            dialog.exec_()

    if not GlobalClass.is_windows:
        if OpenFileCacher.opening_file:
            return TIMER_INTERVAL

        app = GlobalClass.app
        if app._instance:
            app.processEvents()
            return TIMER_INTERVAL
    return TIMER_INTERVAL


class LaunchQtApp(bpy.types.Operator):
    """A Base class for operators to launch a Qt app."""

    _app: QtWidgets.QApplication
    _window = Union[QtWidgets.QDialog, ModuleType]
    _tool_name: str = None
    _init_args: Optional[List] = list()
    _init_kwargs: Optional[Dict] = dict()
    bl_idname: str = None

    def __init__(self):
        if self.bl_idname is None:
            raise NotImplementedError("Attribute `bl_idname` must be set!")
        print(f"Initialising {self.bl_idname}...")
        self._app = BlenderApplication.get_app()
        GlobalClass.app = self._app

        bpy.app.timers.register(
            _process_app_events,
            persistent=True
        )

    def execute(self, context):
        """Execute the operator.

        The child class must implement `execute()` where it only has to set
        `self._window` to the desired Qt window and then simply run
        `return super().execute(context)`.
        `self._window` is expected to have a `show` method.
        If the `show` method requires arguments, you can set `self._show_args`
        and `self._show_kwargs`. `args` should be a list, `kwargs` a
        dictionary.
        """

        if self._tool_name is None:
            if self._window is None:
                raise AttributeError("`self._window` is not set.")

        else:
            window = self._app.get_window(self.bl_idname)
            if window is None:
                window = host_tools.get_tool_by_name(self._tool_name)
                self._app.store_window(self.bl_idname, window)
            self._window = window

        if not isinstance(
            self._window,
            (QtWidgets.QMainWindow, QtWidgets.QDialog, ModuleType)
        ):
            raise AttributeError(
                "`window` should be a `QDialog or module`. Got: {}".format(
                    str(type(window))
                )
            )

        self.before_window_show()

        if isinstance(self._window, ModuleType):
            self._window.show()
            window = None
            if hasattr(self._window, "window"):
                window = self._window.window
            elif hasattr(self._window, "_window"):
                window = self._window._window

            if window:
                self._app.store_window(self.bl_idname, window)
|
||||
|
||||
else:
|
||||
origin_flags = self._window.windowFlags()
|
||||
on_top_flags = origin_flags | QtCore.Qt.WindowStaysOnTopHint
|
||||
self._window.setWindowFlags(on_top_flags)
|
||||
self._window.show()
|
||||
|
||||
if on_top_flags != origin_flags:
|
||||
self._window.setWindowFlags(origin_flags)
|
||||
self._window.show()
|
||||
|
||||
return {'FINISHED'}
|
||||
|
||||
def before_window_show(self):
|
||||
return
|
||||
|
||||
|
||||
class LaunchCreator(LaunchQtApp):
|
||||
"""Launch Avalon Creator."""
|
||||
|
||||
bl_idname = "wm.avalon_creator"
|
||||
bl_label = "Create..."
|
||||
_tool_name = "creator"
|
||||
|
||||
def before_window_show(self):
|
||||
self._window.refresh()
|
||||
|
||||
|
||||
class LaunchLoader(LaunchQtApp):
|
||||
"""Launch Avalon Loader."""
|
||||
|
||||
bl_idname = "wm.avalon_loader"
|
||||
bl_label = "Load..."
|
||||
_tool_name = "loader"
|
||||
|
||||
def before_window_show(self):
|
||||
self._window.set_context(
|
||||
{"asset": avalon.api.Session["AVALON_ASSET"]},
|
||||
refresh=True
|
||||
)
|
||||
|
||||
|
||||
class LaunchPublisher(LaunchQtApp):
|
||||
"""Launch Avalon Publisher."""
|
||||
|
||||
bl_idname = "wm.avalon_publisher"
|
||||
bl_label = "Publish..."
|
||||
|
||||
def execute(self, context):
|
||||
host_tools.show_publish()
|
||||
return {"FINISHED"}
|
||||
|
||||
|
||||
class LaunchManager(LaunchQtApp):
|
||||
"""Launch Avalon Manager."""
|
||||
|
||||
bl_idname = "wm.avalon_manager"
|
||||
bl_label = "Manage..."
|
||||
_tool_name = "sceneinventory"
|
||||
|
||||
def before_window_show(self):
|
||||
self._window.refresh()
|
||||
|
||||
|
||||
class LaunchWorkFiles(LaunchQtApp):
|
||||
"""Launch Avalon Work Files."""
|
||||
|
||||
bl_idname = "wm.avalon_workfiles"
|
||||
bl_label = "Work Files..."
|
||||
_tool_name = "workfiles"
|
||||
|
||||
def execute(self, context):
|
||||
result = super().execute(context)
|
||||
self._window.set_context({
|
||||
"asset": avalon.api.Session["AVALON_ASSET"],
|
||||
"silo": avalon.api.Session["AVALON_SILO"],
|
||||
"task": avalon.api.Session["AVALON_TASK"]
|
||||
})
|
||||
return result
|
||||
|
||||
def before_window_show(self):
|
||||
self._window.root = str(Path(
|
||||
os.environ.get("AVALON_WORKDIR", ""),
|
||||
os.environ.get("AVALON_SCENEDIR", ""),
|
||||
))
|
||||
self._window.refresh()
|
||||
|
||||
|
||||
class TOPBAR_MT_avalon(bpy.types.Menu):
|
||||
"""Avalon menu."""
|
||||
|
||||
bl_idname = "TOPBAR_MT_avalon"
|
||||
bl_label = os.environ.get("AVALON_LABEL")
|
||||
|
||||
def draw(self, context):
|
||||
"""Draw the menu in the UI."""
|
||||
|
||||
layout = self.layout
|
||||
|
||||
pcoll = PREVIEW_COLLECTIONS.get("avalon")
|
||||
if pcoll:
|
||||
pyblish_menu_icon = pcoll["pyblish_menu_icon"]
|
||||
pyblish_menu_icon_id = pyblish_menu_icon.icon_id
|
||||
else:
|
||||
pyblish_menu_icon_id = 0
|
||||
|
||||
asset = avalon.api.Session['AVALON_ASSET']
|
||||
task = avalon.api.Session['AVALON_TASK']
|
||||
context_label = f"{asset}, {task}"
|
||||
context_label_item = layout.row()
|
||||
context_label_item.operator(
|
||||
LaunchWorkFiles.bl_idname, text=context_label
|
||||
)
|
||||
context_label_item.enabled = False
|
||||
layout.separator()
|
||||
layout.operator(LaunchCreator.bl_idname, text="Create...")
|
||||
layout.operator(LaunchLoader.bl_idname, text="Load...")
|
||||
layout.operator(
|
||||
LaunchPublisher.bl_idname,
|
||||
text="Publish...",
|
||||
icon_value=pyblish_menu_icon_id,
|
||||
)
|
||||
layout.operator(LaunchManager.bl_idname, text="Manage...")
|
||||
layout.separator()
|
||||
layout.operator(LaunchWorkFiles.bl_idname, text="Work Files...")
|
||||
# TODO (jasper): maybe add 'Reload Pipeline', 'Reset Frame Range' and
|
||||
# 'Reset Resolution'?
|
||||
|
||||
|
||||
def draw_avalon_menu(self, context):
|
||||
"""Draw the Avalon menu in the top bar."""
|
||||
|
||||
self.layout.menu(TOPBAR_MT_avalon.bl_idname)
|
||||
|
||||
|
||||
classes = [
|
||||
LaunchCreator,
|
||||
LaunchLoader,
|
||||
LaunchPublisher,
|
||||
LaunchManager,
|
||||
LaunchWorkFiles,
|
||||
TOPBAR_MT_avalon,
|
||||
]
|
||||
|
||||
|
||||
def register():
|
||||
"Register the operators and menu."
|
||||
|
||||
pcoll = bpy.utils.previews.new()
|
||||
pyblish_icon_file = Path(__file__).parent / "icons" / "pyblish-32x32.png"
|
||||
pcoll.load("pyblish_menu_icon", str(pyblish_icon_file.absolute()), 'IMAGE')
|
||||
PREVIEW_COLLECTIONS["avalon"] = pcoll
|
||||
|
||||
for cls in classes:
|
||||
bpy.utils.register_class(cls)
|
||||
bpy.types.TOPBAR_MT_editor_menus.append(draw_avalon_menu)
|
||||
|
||||
|
||||
def unregister():
|
||||
"""Unregister the operators and menu."""
|
||||
|
||||
pcoll = PREVIEW_COLLECTIONS.pop("avalon")
|
||||
bpy.utils.previews.remove(pcoll)
|
||||
bpy.types.TOPBAR_MT_editor_menus.remove(draw_avalon_menu)
|
||||
for cls in reversed(classes):
|
||||
bpy.utils.unregister_class(cls)
|
||||
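The main-thread dispatch pattern above (a deque of callback wrappers, drained by a timer registered on Blender's main thread) can be sketched host-agnostically. The names below mirror the file but the sketch is simplified and illustrative, not the OpenPype API itself; in particular it stores the bare exception rather than the `sys.exc_info()` tuple the real code uses:

```python
import collections


class MainThreadItem:
    """Wrap a callback so another thread can wait on its result."""

    not_set = object()

    def __init__(self, callback, *args, **kwargs):
        self.callback = callback
        self.args = args
        self.kwargs = kwargs
        self.result = self.not_set
        self.exception = self.not_set
        self.done = False

    def execute(self):
        # Run the callback and capture its outcome for the waiting thread.
        # (The real implementation stores ``sys.exc_info()`` instead of the
        # bare exception object.)
        try:
            self.result = self.callback(*self.args, **self.kwargs)
        except Exception as exc:
            self.exception = exc
        self.done = True


main_thread_callbacks = collections.deque()


def execute_in_main_thread(item):
    """Queue an item for execution; safe to call from any thread."""
    main_thread_callbacks.append(item)


def process_callbacks():
    """Drain the queue; periodically called on the main thread by a timer."""
    while main_thread_callbacks:
        main_thread_callbacks.popleft().execute()
```

Queuing an item and draining the queue then hands the callback's result back through the item, which is what `MainThreadItem.wait()` polls for.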
openpype/hosts/blender/api/pipeline.py  (new file, 427 lines)
@@ -0,0 +1,427 @@
import os
import sys
import importlib
import traceback
from typing import Callable, Dict, Iterator, List, Optional

import bpy

from . import lib
from . import ops

import pyblish.api
import avalon.api
from avalon import io, schema
from avalon.pipeline import AVALON_CONTAINER_ID

from openpype.api import Logger
import openpype.hosts.blender

HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.blender.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")

ORIGINAL_EXCEPTHOOK = sys.excepthook

AVALON_INSTANCES = "AVALON_INSTANCES"
AVALON_CONTAINERS = "AVALON_CONTAINERS"
AVALON_PROPERTY = 'avalon'
IS_HEADLESS = bpy.app.background

log = Logger.get_logger(__name__)


def pype_excepthook_handler(*args):
    traceback.print_exception(*args)


def install():
    """Install Blender configuration for Avalon."""
    sys.excepthook = pype_excepthook_handler

    pyblish.api.register_host("blender")
    pyblish.api.register_plugin_path(str(PUBLISH_PATH))

    avalon.api.register_plugin_path(avalon.api.Loader, str(LOAD_PATH))
    avalon.api.register_plugin_path(avalon.api.Creator, str(CREATE_PATH))

    lib.append_user_scripts()

    avalon.api.on("new", on_new)
    avalon.api.on("open", on_open)
    _register_callbacks()
    _register_events()

    if not IS_HEADLESS:
        ops.register()


def uninstall():
    """Uninstall Blender configuration for Avalon."""
    sys.excepthook = ORIGINAL_EXCEPTHOOK

    pyblish.api.deregister_host("blender")
    pyblish.api.deregister_plugin_path(str(PUBLISH_PATH))

    avalon.api.deregister_plugin_path(avalon.api.Loader, str(LOAD_PATH))
    avalon.api.deregister_plugin_path(avalon.api.Creator, str(CREATE_PATH))

    if not IS_HEADLESS:
        ops.unregister()


def set_start_end_frames():
    asset_name = io.Session["AVALON_ASSET"]
    asset_doc = io.find_one({
        "type": "asset",
        "name": asset_name
    })

    scene = bpy.context.scene

    # Default scene settings
    frameStart = scene.frame_start
    frameEnd = scene.frame_end
    fps = scene.render.fps
    resolution_x = scene.render.resolution_x
    resolution_y = scene.render.resolution_y

    # Check if settings are set
    data = asset_doc.get("data")

    if not data:
        return

    if data.get("frameStart"):
        frameStart = data.get("frameStart")
    if data.get("frameEnd"):
        frameEnd = data.get("frameEnd")
    if data.get("fps"):
        fps = data.get("fps")
    if data.get("resolutionWidth"):
        resolution_x = data.get("resolutionWidth")
    if data.get("resolutionHeight"):
        resolution_y = data.get("resolutionHeight")

    scene.frame_start = frameStart
    scene.frame_end = frameEnd
    scene.render.fps = fps
    scene.render.resolution_x = resolution_x
    scene.render.resolution_y = resolution_y


def on_new(arg1, arg2):
    set_start_end_frames()


def on_open(arg1, arg2):
    set_start_end_frames()

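The defaulting logic in `set_start_end_frames` above keeps each scene value unless the asset document provides one. That rule can be exercised without `bpy` as a pure function; the function name, dict-based "scene", and key mapping here are illustrative, not part of the pipeline module:

```python
def apply_asset_settings(scene: dict, data: dict) -> dict:
    """Return scene settings, overridden by any values present in asset data."""
    # Asset-data key -> scene-setting key (mirrors set_start_end_frames).
    mapping = {
        "frameStart": "frame_start",
        "frameEnd": "frame_end",
        "fps": "fps",
        "resolutionWidth": "resolution_x",
        "resolutionHeight": "resolution_y",
    }
    updated = dict(scene)
    for src_key, dst_key in mapping.items():
        value = data.get(src_key)
        # Falsy/missing values leave the scene default untouched, just like
        # the ``if data.get(...)`` checks above.
        if value:
            updated[dst_key] = value
    return updated
```

Note that, like the original, a legitimate falsy value (e.g. `frameStart` of `0`) would be skipped; the original shares that edge case.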

@bpy.app.handlers.persistent
def _on_save_pre(*args):
    avalon.api.emit("before_save", args)


@bpy.app.handlers.persistent
def _on_save_post(*args):
    avalon.api.emit("save", args)


@bpy.app.handlers.persistent
def _on_load_post(*args):
    # Detect new file or opening an existing file
    if bpy.data.filepath:
        # Likely this was an open operation since it has a filepath
        avalon.api.emit("open", args)
    else:
        avalon.api.emit("new", args)

    ops.OpenFileCacher.post_load()


def _register_callbacks():
    """Register callbacks for certain events."""
    def _remove_handler(handlers: List, callback: Callable):
        """Remove the callback from the given handler list."""

        try:
            handlers.remove(callback)
        except ValueError:
            pass

    # TODO (jasper): implement on_init callback?

    # Be sure to remove existing ones first.
    _remove_handler(bpy.app.handlers.save_pre, _on_save_pre)
    _remove_handler(bpy.app.handlers.save_post, _on_save_post)
    _remove_handler(bpy.app.handlers.load_post, _on_load_post)

    bpy.app.handlers.save_pre.append(_on_save_pre)
    bpy.app.handlers.save_post.append(_on_save_post)
    bpy.app.handlers.load_post.append(_on_load_post)

    log.info("Installed event handler _on_save_pre...")
    log.info("Installed event handler _on_save_post...")
    log.info("Installed event handler _on_load_post...")


def _on_task_changed(*args):
    """Callback for when the task in the context is changed."""

    # TODO (jasper): Blender has no concept of projects or workspace.
    # It would be nice to override 'bpy.ops.wm.open_mainfile' so it takes the
    # workdir as starting directory. But I don't know if that is possible.
    # Another option would be to create a custom 'File Selector' and add the
    # `directory` attribute, so it opens in that directory (does it?).
    # https://docs.blender.org/api/blender2.8/bpy.types.Operator.html#calling-a-file-selector
    # https://docs.blender.org/api/blender2.8/bpy.types.WindowManager.html#bpy.types.WindowManager.fileselect_add
    workdir = avalon.api.Session["AVALON_WORKDIR"]
    log.debug("New working directory: %s", workdir)


def _register_events():
    """Install callbacks for specific events."""

    avalon.api.on("taskChanged", _on_task_changed)
    log.info("Installed event callback for 'taskChanged'...")


def reload_pipeline(*args):
    """Attempt to reload pipeline at run-time.

    Warning:
        This is primarily for development and debugging purposes and not well
        tested.

    """

    avalon.api.uninstall()

    for module in (
        "avalon.io",
        "avalon.lib",
        "avalon.pipeline",
        "avalon.tools.creator.app",
        "avalon.tools.manager.app",
        "avalon.api",
        "avalon.tools",
    ):
        module = importlib.import_module(module)
        importlib.reload(module)


def _discover_gui() -> Optional[Callable]:
    """Return the most desirable of the currently registered GUIs"""

    # Prefer last registered
    guis = reversed(pyblish.api.registered_guis())

    for gui in guis:
        try:
            gui = __import__(gui).show
        except (ImportError, AttributeError):
            continue
        else:
            return gui

    return None


def add_to_avalon_container(container: bpy.types.Collection):
    """Add the container to the Avalon container."""

    avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
    if not avalon_container:
        avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)

        # Link the container to the scene so it's easily visible to the artist
        # and can be managed easily. Otherwise it's only found in "Blender
        # File" view and it will be removed by Blender's garbage collection,
        # unless you set a 'fake user'.
        bpy.context.scene.collection.children.link(avalon_container)

    avalon_container.children.link(container)

    # Disable Avalon containers for the view layers.
    for view_layer in bpy.context.scene.view_layers:
        for child in view_layer.layer_collection.children:
            if child.collection == avalon_container:
                child.exclude = True


def metadata_update(node: bpy.types.bpy_struct_meta_idprop, data: Dict):
    """Imprint the node with metadata.

    Existing metadata will be updated.
    """

    if not node.get(AVALON_PROPERTY):
        node[AVALON_PROPERTY] = dict()
    for key, value in data.items():
        if value is None:
            continue
        node[AVALON_PROPERTY][key] = value


def containerise(name: str,
                 namespace: str,
                 nodes: List,
                 context: Dict,
                 loader: Optional[str] = None,
                 suffix: Optional[str] = "CON") -> bpy.types.Collection:
    """Bundle `nodes` into an assembly and imprint it with metadata.

    Containerisation enables tracking of version, author and origin
    for loaded assets.

    Arguments:
        name: Name of resulting assembly
        namespace: Namespace under which to host container
        nodes: Long names of nodes to containerise
        context: Asset information
        loader: Name of loader used to produce this container.
        suffix: Suffix of container, defaults to `_CON`.

    Returns:
        The container assembly

    """

    node_name = f"{context['asset']['name']}_{name}"
    if namespace:
        node_name = f"{namespace}:{node_name}"
    if suffix:
        node_name = f"{node_name}_{suffix}"
    container = bpy.data.collections.new(name=node_name)
    # Link the children nodes
    for obj in nodes:
        container.objects.link(obj)

    data = {
        "schema": "openpype:container-2.0",
        "id": AVALON_CONTAINER_ID,
        "name": name,
        "namespace": namespace or '',
        "loader": str(loader),
        "representation": str(context["representation"]["_id"]),
    }

    metadata_update(container, data)
    add_to_avalon_container(container)

    return container


def containerise_existing(
        container: bpy.types.Collection,
        name: str,
        namespace: str,
        context: Dict,
        loader: Optional[str] = None,
        suffix: Optional[str] = "CON") -> bpy.types.Collection:
    """Imprint or update container with metadata.

    Arguments:
        name: Name of resulting assembly
        namespace: Namespace under which to host container
        context: Asset information
        loader: Name of loader used to produce this container.
        suffix: Suffix of container, defaults to `_CON`.

    Returns:
        The container assembly
    """

    node_name = container.name
    if suffix:
        node_name = f"{node_name}_{suffix}"
    container.name = node_name
    data = {
        "schema": "openpype:container-2.0",
        "id": AVALON_CONTAINER_ID,
        "name": name,
        "namespace": namespace or '',
        "loader": str(loader),
        "representation": str(context["representation"]["_id"]),
    }

    metadata_update(container, data)
    add_to_avalon_container(container)

    return container


def parse_container(container: bpy.types.Collection,
                    validate: bool = True) -> Dict:
    """Return the container node's full container data.

    Args:
        container: A container node name.
        validate: turn the validation for the container on or off

    Returns:
        The container schema data for this container node.

    """

    data = lib.read(container)

    # Append transient data
    data["objectName"] = container.name

    if validate:
        schema.validate(data)

    return data


def ls() -> Iterator:
    """List containers from active Blender scene.

    This is the host-equivalent of api.ls(), but instead of listing assets on
    disk, it lists assets already loaded in Blender; once loaded they are
    called containers.
    """

    for container in lib.lsattr("id", AVALON_CONTAINER_ID):
        yield parse_container(container)


def update_hierarchy(containers):
    """Hierarchical container support.

    This is the function to support Scene Inventory to draw hierarchical
    view for containers.

    We need both parent and children to visualize the graph.

    """

    all_containers = set(ls())  # lookup set

    for container in containers:
        # Find parent
        # FIXME (jasperge): re-evaluate this. How would it be possible
        # to 'nest' assets? Collections can have several parents, for
        # now assume it has only 1 parent
        parent = [
            coll for coll in bpy.data.collections if container in coll.children
        ]
        for node in parent:
            if node in all_containers:
                container["parent"] = node
                break

        log.debug("Container: %s", container)

        yield container


def publish():
    """Shorthand to publish from within host."""

    return pyblish.util.publish()
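The merge semantics of `metadata_update` above (skip `None` values, merge everything else into the `avalon` property block) work the same on any mapping, so they can be demonstrated on a plain dict without `bpy`:

```python
AVALON_PROPERTY = "avalon"


def metadata_update(node, data):
    """Merge non-None values from data into the node's 'avalon' block."""
    if not node.get(AVALON_PROPERTY):
        node[AVALON_PROPERTY] = dict()
    for key, value in data.items():
        if value is None:
            # None is treated as "leave the existing value alone".
            continue
        node[AVALON_PROPERTY][key] = value
```

Repeated calls update existing keys in place, which is what `containerise_existing` relies on when re-imprinting a collection.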
@@ -5,10 +5,17 @@ from typing import Dict, List, Optional

 import bpy

-from avalon import api, blender
-from avalon.blender import ops
-from avalon.blender.pipeline import AVALON_CONTAINERS
+import avalon.api
+from openpype.api import PypeCreatorMixin
+from .pipeline import AVALON_CONTAINERS
+from .ops import (
+    MainThreadItem,
+    execute_in_main_thread
+)
+from .lib import (
+    imprint,
+    get_selection
+)

 VALID_EXTENSIONS = [".blend", ".json", ".abc", ".fbx"]

@@ -119,11 +126,27 @@ def deselect_all():
     bpy.context.view_layer.objects.active = active


-class Creator(PypeCreatorMixin, blender.Creator):
-    pass
+class Creator(PypeCreatorMixin, avalon.api.Creator):
+    """Base class for Creator plug-ins."""
+    def process(self):
+        collection = bpy.data.collections.new(name=self.data["subset"])
+        bpy.context.scene.collection.children.link(collection)
+        imprint(collection, self.data)
+
+        if (self.options or {}).get("useSelection"):
+            for obj in get_selection():
+                collection.objects.link(obj)
+
+        return collection
+
+
+class Loader(avalon.api.Loader):
+    """Base class for Loader plug-ins."""
+
+    hosts = ["blender"]


-class AssetLoader(api.Loader):
+class AssetLoader(avalon.api.Loader):
     """A basic AssetLoader for Blender

     This will implement the basic logic for linking/appending assets

@@ -191,8 +214,8 @@ class AssetLoader(api.Loader):
              namespace: Optional[str] = None,
              options: Optional[Dict] = None) -> Optional[bpy.types.Collection]:
         """ Run the loader on Blender main thread"""
-        mti = ops.MainThreadItem(self._load, context, name, namespace, options)
-        ops.execute_in_main_thread(mti)
+        mti = MainThreadItem(self._load, context, name, namespace, options)
+        execute_in_main_thread(mti)

     def _load(self,
               context: dict,

@@ -257,8 +280,8 @@ class AssetLoader(api.Loader):

     def update(self, container: Dict, representation: Dict):
         """ Run the update on Blender main thread"""
-        mti = ops.MainThreadItem(self.exec_update, container, representation)
-        ops.execute_in_main_thread(mti)
+        mti = MainThreadItem(self.exec_update, container, representation)
+        execute_in_main_thread(mti)

     def exec_remove(self, container: Dict) -> bool:
         """Must be implemented by a sub-class"""

@@ -266,5 +289,5 @@ class AssetLoader(api.Loader):

     def remove(self, container: Dict) -> bool:
         """ Run the remove on Blender main thread"""
-        mti = ops.MainThreadItem(self.exec_remove, container)
-        ops.execute_in_main_thread(mti)
+        mti = MainThreadItem(self.exec_remove, container)
+        execute_in_main_thread(mti)
openpype/hosts/blender/api/workio.py  (new file, 90 lines)
@@ -0,0 +1,90 @@
"""Host API required for Work Files."""
|
||||
|
||||
from pathlib import Path
|
||||
from typing import List, Optional
|
||||
|
||||
import bpy
|
||||
from avalon import api
|
||||
|
||||
|
||||
class OpenFileCacher:
|
||||
"""Store information about opening file.
|
||||
|
||||
When file is opening QApplcation events should not be processed.
|
||||
"""
|
||||
opening_file = False
|
||||
|
||||
@classmethod
|
||||
def post_load(cls):
|
||||
cls.opening_file = False
|
||||
|
||||
@classmethod
|
||||
def set_opening(cls):
|
||||
cls.opening_file = True
|
||||
|
||||
|
||||
def open_file(filepath: str) -> Optional[str]:
|
||||
"""Open the scene file in Blender."""
|
||||
OpenFileCacher.set_opening()
|
||||
|
||||
preferences = bpy.context.preferences
|
||||
load_ui = preferences.filepaths.use_load_ui
|
||||
use_scripts = preferences.filepaths.use_scripts_auto_execute
|
||||
result = bpy.ops.wm.open_mainfile(
|
||||
filepath=filepath,
|
||||
load_ui=load_ui,
|
||||
use_scripts=use_scripts,
|
||||
)
|
||||
|
||||
if result == {'FINISHED'}:
|
||||
return filepath
|
||||
return None
|
||||
|
||||
|
||||
def save_file(filepath: str, copy: bool = False) -> Optional[str]:
|
||||
"""Save the open scene file."""
|
||||
|
||||
preferences = bpy.context.preferences
|
||||
compress = preferences.filepaths.use_file_compression
|
||||
relative_remap = preferences.filepaths.use_relative_paths
|
||||
result = bpy.ops.wm.save_as_mainfile(
|
||||
filepath=filepath,
|
||||
compress=compress,
|
||||
relative_remap=relative_remap,
|
||||
copy=copy,
|
||||
)
|
||||
|
||||
if result == {'FINISHED'}:
|
||||
return filepath
|
||||
return None
|
||||
|
||||
|
||||
def current_file() -> Optional[str]:
|
||||
"""Return the path of the open scene file."""
|
||||
|
||||
current_filepath = bpy.data.filepath
|
||||
if Path(current_filepath).is_file():
|
||||
return current_filepath
|
||||
return None
|
||||
|
||||
|
||||
def has_unsaved_changes() -> bool:
|
||||
"""Does the open scene file have unsaved changes?"""
|
||||
|
||||
return bpy.data.is_dirty
|
||||
|
||||
|
||||
def file_extensions() -> List[str]:
|
||||
"""Return the supported file extensions for Blender scene files."""
|
||||
|
||||
return api.HOST_WORKFILE_EXTENSIONS["blender"]
|
||||
|
||||
|
||||
def work_root(session: dict) -> str:
|
||||
"""Return the default root to browse for work files."""
|
||||
|
||||
work_dir = session["AVALON_WORKDIR"]
|
||||
scene_dir = session.get("AVALON_SCENEDIR")
|
||||
if scene_dir:
|
||||
return str(Path(work_dir, scene_dir))
|
||||
return work_dir
|
||||
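Apart from the session dict, `work_root` above is pure Python, so its path-joining behaviour can be exercised standalone (this is the same logic extracted for illustration, not an additional API):

```python
from pathlib import Path


def work_root(session: dict) -> str:
    """Join AVALON_SCENEDIR under the work directory when it is set."""
    work_dir = session["AVALON_WORKDIR"]
    scene_dir = session.get("AVALON_SCENEDIR")
    if scene_dir:
        # Scene files live in a subdirectory of the work directory.
        return str(Path(work_dir, scene_dir))
    return work_dir
```

Without `AVALON_SCENEDIR` the work directory is returned unchanged, which keeps projects that do not use a scene subfolder working.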
openpype/hosts/blender/blender_addon/startup/init.py  (new file, 4 lines)
@@ -0,0 +1,4 @@
from avalon import pipeline
from openpype.hosts.blender import api

pipeline.install(api)
@@ -21,7 +21,7 @@ class InstallPySideToBlender(PreLaunchHook):
     platforms = ["windows"]

     def execute(self):
-        # Prelaunch hook is not crutial
+        # Prelaunch hook is not crucial
         try:
             self.inner_execute()
         except Exception:

@@ -156,7 +156,7 @@ class InstallPySideToBlender(PreLaunchHook):
         except pywintypes.error:
             pass

-        self.log.warning("Failed to instal PySide2 module to blender.")
+        self.log.warning("Failed to install PySide2 module to blender.")

     def is_pyside_installed(self, python_executable):
         """Check if PySide2 module is in blender's pip list.
|
|||
|
||||
from avalon import api
|
||||
import openpype.hosts.blender.api.plugin
|
||||
from avalon.blender import lib
|
||||
from openpype.hosts.blender.api import lib
|
||||
|
||||
|
||||
class CreateAction(openpype.hosts.blender.api.plugin.Creator):
|
||||
|
|
|
|||
|
|
@ -3,9 +3,8 @@
|
|||
import bpy
|
||||
|
||||
from avalon import api
|
||||
from avalon.blender import lib, ops
|
||||
from avalon.blender.pipeline import AVALON_INSTANCES
|
||||
from openpype.hosts.blender.api import plugin
|
||||
from openpype.hosts.blender.api import plugin, lib, ops
|
||||
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
|
||||
|
||||
|
||||
class CreateAnimation(plugin.Creator):
|
||||
|
|
@ -22,7 +21,7 @@ class CreateAnimation(plugin.Creator):
|
|||
ops.execute_in_main_thread(mti)
|
||||
|
||||
def _process(self):
|
||||
# Get Instance Containter or create it if it does not exist
|
||||
# Get Instance Container or create it if it does not exist
|
||||
instances = bpy.data.collections.get(AVALON_INSTANCES)
|
||||
if not instances:
|
||||
instances = bpy.data.collections.new(name=AVALON_INSTANCES)
|
||||
|
|
|
|||
|
|
@ -3,9 +3,8 @@
|
|||
import bpy
|
||||
|
||||
from avalon import api
|
||||
from avalon.blender import lib, ops
|
||||
from avalon.blender.pipeline import AVALON_INSTANCES
|
||||
from openpype.hosts.blender.api import plugin
|
||||
from openpype.hosts.blender.api import plugin, lib, ops
|
||||
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
|
||||
|
||||
|
||||
class CreateCamera(plugin.Creator):
|
||||
|
|
@ -22,7 +21,7 @@ class CreateCamera(plugin.Creator):
|
|||
ops.execute_in_main_thread(mti)
|
||||
|
||||
def _process(self):
|
||||
# Get Instance Containter or create it if it does not exist
|
||||
# Get Instance Container or create it if it does not exist
|
||||
instances = bpy.data.collections.get(AVALON_INSTANCES)
|
||||
if not instances:
|
||||
instances = bpy.data.collections.new(name=AVALON_INSTANCES)
|
||||
|
|
|
|||
|
|
@ -3,9 +3,8 @@
|
|||
import bpy
|
||||
|
||||
from avalon import api
|
||||
from avalon.blender import lib, ops
|
||||
from avalon.blender.pipeline import AVALON_INSTANCES
|
||||
from openpype.hosts.blender.api import plugin
|
||||
from openpype.hosts.blender.api import plugin, lib, ops
|
||||
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
|
||||
|
||||
|
||||
class CreateLayout(plugin.Creator):
|
||||
|
|
@ -22,7 +21,7 @@ class CreateLayout(plugin.Creator):
|
|||
ops.execute_in_main_thread(mti)
|
||||
|
||||
def _process(self):
|
||||
# Get Instance Containter or create it if it does not exist
|
||||
# Get Instance Container or create it if it does not exist
|
||||
instances = bpy.data.collections.get(AVALON_INSTANCES)
|
||||
if not instances:
|
||||
instances = bpy.data.collections.new(name=AVALON_INSTANCES)
|
||||
|
|
|
|||
|
|
@ -3,9 +3,8 @@
|
|||
import bpy
|
||||
|
||||
from avalon import api
|
||||
from avalon.blender import lib, ops
|
||||
from avalon.blender.pipeline import AVALON_INSTANCES
|
||||
from openpype.hosts.blender.api import plugin
|
||||
from openpype.hosts.blender.api import plugin, lib, ops
|
||||
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
|
||||
|
||||
|
||||
class CreateModel(plugin.Creator):
|
||||
|
|
@ -22,7 +21,7 @@ class CreateModel(plugin.Creator):
|
|||
ops.execute_in_main_thread(mti)
|
||||
|
||||
def _process(self):
|
||||
# Get Instance Containter or create it if it does not exist
|
||||
# Get Instance Container or create it if it does not exist
|
||||
instances = bpy.data.collections.get(AVALON_INSTANCES)
|
||||
if not instances:
|
||||
instances = bpy.data.collections.new(name=AVALON_INSTANCES)
|
||||
|
|
|
|||
|
|
@ -3,8 +3,8 @@
|
|||
import bpy
|
||||
|
||||
from avalon import api
|
||||
from avalon.blender import lib
|
||||
import openpype.hosts.blender.api.plugin
|
||||
from openpype.hosts.blender.api import lib
|
||||
|
||||
|
||||
class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):
|
||||
|
|
|
|||
|
|
@ -3,9 +3,8 @@
|
|||
import bpy
|
||||
|
||||
from avalon import api
|
||||
from avalon.blender import lib, ops
|
||||
from avalon.blender.pipeline import AVALON_INSTANCES
|
||||
from openpype.hosts.blender.api import plugin
|
||||
from openpype.hosts.blender.api import plugin, lib, ops
|
||||
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
|
||||
|
||||
|
||||
class CreateRig(plugin.Creator):
|
||||
|
|
@ -22,7 +21,7 @@ class CreateRig(plugin.Creator):
|
|||
ops.execute_in_main_thread(mti)
|
||||
|
||||
def _process(self):
|
||||
# Get Instance Containter or create it if it does not exist
|
||||
# Get Instance Container or create it if it does not exist
|
||||
instances = bpy.data.collections.get(AVALON_INSTANCES)
|
||||
if not instances:
|
||||
instances = bpy.data.collections.new(name=AVALON_INSTANCES)
|
||||
|
|
|
|||
|
|
@@ -7,11 +7,12 @@ from typing import Dict, List, Optional
 import bpy
 
 from avalon import api
-from avalon.blender import lib
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
-from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
+from openpype.hosts.blender.api import plugin, lib
 
 
 class CacheModelLoader(plugin.AssetLoader):
@@ -1,16 +1,11 @@
 """Load an animation in Blender."""
 
-import logging
 from typing import Dict, List, Optional
 
 import bpy
 
-from avalon.blender.pipeline import AVALON_PROPERTY
 from openpype.hosts.blender.api import plugin
-
-
-logger = logging.getLogger("openpype").getChild(
-    "blender").getChild("load_animation")
+from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 
 
 class BlendAnimationLoader(plugin.AssetLoader):
@@ -7,10 +7,12 @@ from typing import Dict, List, Optional
 import bpy
 
 from avalon import api
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
 from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
 
 
 class AudioLoader(plugin.AssetLoader):
@@ -8,10 +8,12 @@ from typing import Dict, List, Optional
 import bpy
 
 from avalon import api
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
 from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
 
 logger = logging.getLogger("openpype").getChild(
     "blender").getChild("load_camera")
@@ -7,11 +7,12 @@ from typing import Dict, List, Optional
 import bpy
 
 from avalon import api
-from avalon.blender import lib
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
-from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api import plugin, lib
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
 
 
 class FbxCameraLoader(plugin.AssetLoader):
@@ -7,11 +7,12 @@ from typing import Dict, List, Optional
 import bpy
 
 from avalon import api
-from avalon.blender import lib
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
-from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api import plugin, lib
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
 
 
 class FbxModelLoader(plugin.AssetLoader):
@@ -7,10 +7,12 @@ from typing import Dict, List, Optional
 import bpy
 
 from avalon import api
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
 from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
 
 
 class BlendLayoutLoader(plugin.AssetLoader):
@@ -1,18 +1,20 @@
 """Load a layout in Blender."""
 
+import json
 from pathlib import Path
 from pprint import pformat
 from typing import Dict, Optional
 
 import bpy
-import json
 
 from avalon import api
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
-from avalon.blender.pipeline import AVALON_INSTANCES
 from openpype import lib
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_INSTANCES,
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
 from openpype.hosts.blender.api import plugin
 
 
@@ -8,8 +8,12 @@ import os
 import json
 import bpy
 
-from avalon import api, blender
-import openpype.hosts.blender.api.plugin as plugin
+from avalon import api
+from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api.pipeline import (
+    containerise_existing,
+    AVALON_PROPERTY
+)
 
 
 class BlendLookLoader(plugin.AssetLoader):
@@ -105,7 +109,7 @@ class BlendLookLoader(plugin.AssetLoader):
 
         container = bpy.data.collections.new(lib_container)
         container.name = container_name
-        blender.pipeline.containerise_existing(
+        containerise_existing(
             container,
             name,
             namespace,
@@ -113,7 +117,7 @@ class BlendLookLoader(plugin.AssetLoader):
             self.__class__.__name__,
         )
 
-        metadata = container.get(blender.pipeline.AVALON_PROPERTY)
+        metadata = container.get(AVALON_PROPERTY)
 
         metadata["libpath"] = libpath
         metadata["lib_container"] = lib_container
@@ -161,7 +165,7 @@ class BlendLookLoader(plugin.AssetLoader):
                 f"Unsupported file: {libpath}"
             )
 
-        collection_metadata = collection.get(blender.pipeline.AVALON_PROPERTY)
+        collection_metadata = collection.get(AVALON_PROPERTY)
         collection_libpath = collection_metadata["libpath"]
 
         normalized_collection_libpath = (
@@ -204,7 +208,7 @@ class BlendLookLoader(plugin.AssetLoader):
         if not collection:
             return False
 
-        collection_metadata = collection.get(blender.pipeline.AVALON_PROPERTY)
+        collection_metadata = collection.get(AVALON_PROPERTY)
 
         for obj in collection_metadata['objects']:
             for child in self.get_all_children(obj):
@@ -7,10 +7,12 @@ from typing import Dict, List, Optional
 import bpy
 
 from avalon import api
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
 from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
 
 
 class BlendModelLoader(plugin.AssetLoader):
@@ -7,11 +7,13 @@ from typing import Dict, List, Optional
 import bpy
 
 from avalon import api
-from avalon.blender.pipeline import AVALON_CONTAINERS
-from avalon.blender.pipeline import AVALON_CONTAINER_ID
-from avalon.blender.pipeline import AVALON_PROPERTY
 from openpype import lib
 from openpype.hosts.blender.api import plugin
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+    AVALON_CONTAINER_ID
+)
 
 
 class BlendRigLoader(plugin.AssetLoader):
@@ -1,11 +1,13 @@
+import json
 from typing import Generator
 
 import bpy
-import json
 
 import pyblish.api
-from avalon.blender.pipeline import AVALON_PROPERTY
-from avalon.blender.pipeline import AVALON_INSTANCES
+from openpype.hosts.blender.api.pipeline import (
+    AVALON_INSTANCES,
+    AVALON_PROPERTY,
+)
 
 
 class CollectInstances(pyblish.api.ContextPlugin):
@@ -1,10 +1,10 @@
 import os
 
-import bpy
-
 from openpype import api
 from openpype.hosts.blender.api import plugin
-from avalon.blender.pipeline import AVALON_PROPERTY
 
+import bpy
+from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 
 
 class ExtractABC(api.Extractor):
@@ -2,7 +2,6 @@ import os
 
 import bpy
 
-# import avalon.blender.workio
 import openpype.api
 
 
@@ -1,10 +1,10 @@
 import os
 
-import bpy
-
 from openpype import api
 from openpype.hosts.blender.api import plugin
 
+import bpy
+
 
 class ExtractCamera(api.Extractor):
     """Extract as the camera as FBX."""
@@ -1,10 +1,10 @@
 import os
 
-import bpy
-
 from openpype import api
 from openpype.hosts.blender.api import plugin
-from avalon.blender.pipeline import AVALON_PROPERTY
 
+import bpy
+from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 
 
 class ExtractFBX(api.Extractor):
@@ -7,7 +7,7 @@ import bpy_extras.anim_utils
 
 from openpype import api
 from openpype.hosts.blender.api import plugin
-from avalon.blender.pipeline import AVALON_PROPERTY
+from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 
 
 class ExtractAnimationFBX(api.Extractor):
@@ -4,7 +4,7 @@ import json
 import bpy
 
 from avalon import io
-from avalon.blender.pipeline import AVALON_PROPERTY
+from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 import openpype.api
 
 
@@ -1,5 +1,5 @@
 import pyblish.api
-import avalon.blender.workio
+from openpype.hosts.blender.api.workio import save_file
 
 
 class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
@@ -14,12 +14,12 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
     def process(self, context):
 
         assert all(result["success"] for result in context.data["results"]), (
-            "Publishing not succesfull so version is not increased.")
+            "Publishing not successful so version is not increased.")
 
         from openpype.lib import version_up
         path = context.data["currentFile"]
         filepath = version_up(path)
 
-        avalon.blender.workio.save_file(filepath, copy=False)
+        save_file(filepath, copy=False)
 
         self.log.info('Incrementing script version')
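The hunk above swaps the module-path call for a direct `save_file` import, while the version bump itself still comes from `openpype.lib.version_up`. The general idea behind such a helper can be sketched with a toy version-bumping function (an illustrative re-implementation, not OpenPype's actual `version_up`):

```python
import re

def version_up(filepath):
    # toy sketch: bump the first "v###" token in the name, keeping zero padding
    match = re.search(r"v(\d+)", filepath)
    if not match:
        raise ValueError("no version token in {}".format(filepath))
    padding = len(match.group(1))
    new_version = int(match.group(1)) + 1
    return (
        filepath[:match.start(1)]
        + str(new_version).zfill(padding)
        + filepath[match.end(1):]
    )

print(version_up("shot010_task_v001.blend"))  # shot010_task_v002.blend
```

The real implementation also handles comments after the version token and missing versions; this sketch only shows the padded-increment core.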
@@ -1,3 +0,0 @@
-from openpype.hosts.blender import api
-
-api.install()
@@ -32,7 +32,7 @@ class AppendCelactionAudio(pyblish.api.ContextPlugin):
         repr = next((r for r in reprs), None)
         if not repr:
             raise "Missing `audioMain` representation"
-        self.log.info(f"represetation is: {repr}")
+        self.log.info(f"representation is: {repr}")
 
         audio_file = repr.get('data', {}).get('path', "")
 
@@ -56,7 +56,7 @@ class AppendCelactionAudio(pyblish.api.ContextPlugin):
             representations (list): list for all representations
 
         Returns:
-            dict: subsets with version and representaions in keys
+            dict: subsets with version and representations in keys
         """
 
         # Query all subsets for asset
@@ -23,10 +23,17 @@ from .lib import (
     get_sequence_segments,
     maintained_segment_selection,
     reset_segment_selection,
-    get_segment_attributes
+    get_segment_attributes,
+    get_clips_in_reels,
+    get_reformated_filename,
+    get_frame_from_filename,
+    get_padding_from_filename,
+    maintained_object_duplication
 )
 from .utils import (
-    setup
+    setup,
+    get_flame_version,
+    get_flame_install_root
 )
 from .pipeline import (
     install,
@@ -55,6 +62,10 @@ from .workio import (
     file_extensions,
     work_root
 )
+from .render_utils import (
+    export_clip,
+    get_preset_path_by_xml_name
+)
 
 __all__ = [
     # constants
@@ -80,6 +91,11 @@ __all__ = [
     "maintained_segment_selection",
     "reset_segment_selection",
     "get_segment_attributes",
+    "get_clips_in_reels",
+    "get_reformated_filename",
+    "get_frame_from_filename",
+    "get_padding_from_filename",
+    "maintained_object_duplication",
 
     # pipeline
     "install",
@@ -96,6 +112,8 @@ __all__ = [
 
     # utils
     "setup",
+    "get_flame_version",
+    "get_flame_install_root",
 
     # menu
     "FlameMenuProjectConnect",
@@ -111,5 +129,9 @@ __all__ = [
     "current_file",
     "has_unsaved_changes",
     "file_extensions",
-    "work_root"
+    "work_root",
+
+    # render utils
+    "export_clip",
+    "get_preset_path_by_xml_name"
 ]
@@ -16,6 +16,7 @@ from openpype.api import Logger
 
 log = Logger.get_logger(__name__)
 
+FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")
 
 class CTX:
     # singleton used for passing data between api modules
@@ -445,6 +446,8 @@ def get_sequence_segments(sequence, selected=False):
             for segment in track.segments:
                 if segment.name.get_value() == "":
                     continue
+                if segment.hidden.get_value() is True:
+                    continue
                 if (
                     selected is True
                     and segment.selected.get_value() is not True
@@ -519,7 +522,7 @@ def _get_shot_tokens_values(clip, tokens):
 
 
 def get_segment_attributes(segment):
-    if str(segment.name)[1:-1] == "":
+    if segment.name.get_value() == "":
         return None
 
     # Add timeline segment to tree
@@ -532,6 +535,12 @@ def get_segment_attributes(segment):
         "PySegment": segment
     }
 
+    # head and tail with forward compatibility
+    if segment.head:
+        clip_data["segment_head"] = int(segment.head)
+    if segment.tail:
+        clip_data["segment_tail"] = int(segment.tail)
+
     # add all available shot tokens
     shot_tokens = _get_shot_tokens_values(segment, [
         "<colour space>", "<width>", "<height>", "<depth>", "<segment>",
@@ -551,7 +560,7 @@ def get_segment_attributes(segment):
         attr = getattr(segment, attr_name)
         segment_attrs_data[attr] = str(attr).replace("+", ":")
 
-        if attr in ["record_in", "record_out"]:
+        if attr_name in ["record_in", "record_out"]:
             clip_data[attr_name] = attr.relative_frame
         else:
             clip_data[attr_name] = attr.frame
@@ -559,3 +568,127 @@ def get_segment_attributes(segment):
     clip_data["segment_timecodes"] = segment_attrs_data
 
     return clip_data
+
+
+def get_clips_in_reels(project):
+    output_clips = []
+    project_desktop = project.current_workspace.desktop
+
+    for reel_group in project_desktop.reel_groups:
+        for reel in reel_group.reels:
+            for clip in reel.clips:
+                clip_data = {
+                    "PyClip": clip,
+                    "fps": float(str(clip.frame_rate)[:-4])
+                }
+
+                attrs = [
+                    "name", "width", "height",
+                    "ratio", "sample_rate", "bit_depth"
+                ]
+
+                for attr in attrs:
+                    val = getattr(clip, attr)
+                    clip_data[attr] = val
+
+                version = clip.versions[-1]
+                track = version.tracks[-1]
+                for segment in track.segments:
+                    segment_data = get_segment_attributes(segment)
+                    clip_data.update(segment_data)
+
+                output_clips.append(clip_data)
+
+    return output_clips
+
+
+def get_reformated_filename(filename, padded=True):
+    """
+    Return fixed python expression path
+
+    Args:
+        filename (str): file name
+
+    Returns:
+        type: string with reformated path
+
+    Example:
+        get_reformated_filename("plate.1001.exr") > plate.%04d.exr
+
+    """
+    found = FRAME_PATTERN.search(filename)
+
+    if not found:
+        log.info("File name is not sequence: {}".format(filename))
+        return filename
+
+    padding = get_padding_from_filename(filename)
+
+    replacement = "%0{}d".format(padding) if padded else "%d"
+    start_idx, end_idx = found.span(1)
+
+    return replacement.join(
+        [filename[:start_idx], filename[end_idx:]]
+    )
+
+
+def get_padding_from_filename(filename):
+    """
+    Return padding number from Flame path style
+
+    Args:
+        filename (str): file name
+
+    Returns:
+        int: padding number
+
+    Example:
+        get_padding_from_filename("plate.0001.exr") > 4
+
+    """
+    found = get_frame_from_filename(filename)
+
+    return len(found) if found else None
+
+
+def get_frame_from_filename(filename):
+    """
+    Return sequence number from Flame path style
+
+    Args:
+        filename (str): file name
+
+    Returns:
+        int: sequence frame number
+
+    Example:
+        def get_frame_from_filename(path):
+            ("plate.0001.exr") > 0001
+
+    """
+    found = re.findall(FRAME_PATTERN, filename)
+
+    return found.pop() if found else None
+
+
+@contextlib.contextmanager
+def maintained_object_duplication(item):
+    """Maintain input item duplication
+
+    Attributes:
+        item (any flame.PyObject): python api object
+
+    Yield:
+        duplicate input PyObject type
+    """
+    import flame
+    # Duplicate the clip to avoid modifying the original clip
+    duplicate = flame.duplicate(item)
+
+    try:
+        # do the operation on selected segments
+        yield duplicate
+    finally:
+        # delete the item at the end
+        flame.delete(duplicate)
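The file-name helpers added in the hunk above (`get_frame_from_filename`, `get_padding_from_filename`, `get_reformated_filename`) are pure regex and string logic, so their behavior can be checked outside Flame. A minimal standalone sketch, re-stating the same bodies from the diff for illustration (no Flame or logging dependencies):

```python
import re

# same pattern the diff introduces: digits delimited by "." or "_" and a trailing "."
FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")

def get_frame_from_filename(filename):
    # last frame-number token found in the name, or None
    found = re.findall(FRAME_PATTERN, filename)
    return found.pop() if found else None

def get_padding_from_filename(filename):
    # digit count of the frame token, or None for non-sequences
    found = get_frame_from_filename(filename)
    return len(found) if found else None

def get_reformated_filename(filename, padded=True):
    # replace the frame token with a printf-style expression
    found = FRAME_PATTERN.search(filename)
    if not found:
        return filename
    padding = get_padding_from_filename(filename)
    replacement = "%0{}d".format(padding) if padded else "%d"
    start_idx, end_idx = found.span(1)
    return replacement.join([filename[:start_idx], filename[end_idx:]])

print(get_frame_from_filename("plate.0001.exr"))    # 0001
print(get_padding_from_filename("plate.0001.exr"))  # 4
print(get_reformated_filename("plate.1001.exr"))    # plate.%04d.exr
print(get_reformated_filename("movie.mov"))         # movie.mov (not a sequence)
```

Note the pattern requires the frame number to be followed by a dot, so a plain video name like `movie.mov` passes through unchanged.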
@@ -44,7 +44,7 @@ class _FlameMenuApp(object):
         self.menu_group_name = menu_group_name
         self.dynamic_menu_data = {}
 
-        # flame module is only avaliable when a
+        # flame module is only available when a
         # flame project is loaded and initialized
         self.flame = None
         try:
125
openpype/hosts/flame/api/render_utils.py
Normal file
@@ -0,0 +1,125 @@
+import os
+
+
+def export_clip(export_path, clip, preset_path, **kwargs):
+    """Flame exported wrapper
+
+    Args:
+        export_path (str): exporting directory path
+        clip (PyClip): flame api object
+        preset_path (str): full export path to xml file
+
+    Kwargs:
+        thumb_frame_number (int)[optional]: source frame number
+        in_mark (int)[optional]: cut in mark
+        out_mark (int)[optional]: cut out mark
+
+    Raises:
+        KeyError: Missing input kwarg `thumb_frame_number`
+                  in case `thumbnail` in `export_preset`
+        FileExistsError: Missing export preset in shared folder
+    """
+    import flame
+
+    in_mark = out_mark = None
+
+    # Set exporter
+    exporter = flame.PyExporter()
+    exporter.foreground = True
+    exporter.export_between_marks = True
+
+    if kwargs.get("thumb_frame_number"):
+        thumb_frame_number = kwargs["thumb_frame_number"]
+        # make sure it exists in kwargs
+        if not thumb_frame_number:
+            raise KeyError(
+                "Missing key `thumb_frame_number` in input kwargs")
+
+        in_mark = int(thumb_frame_number)
+        out_mark = int(thumb_frame_number) + 1
+
+    elif kwargs.get("in_mark") and kwargs.get("out_mark"):
+        in_mark = int(kwargs["in_mark"])
+        out_mark = int(kwargs["out_mark"])
+    else:
+        exporter.export_between_marks = False
+
+    try:
+        # set in and out marks if they are available
+        if in_mark and out_mark:
+            clip.in_mark = in_mark
+            clip.out_mark = out_mark
+
+        # export with exporter
+        exporter.export(clip, preset_path, export_path)
+    finally:
+        print('Exported: {} at {}-{}'.format(
+            clip.name.get_value(),
+            clip.in_mark,
+            clip.out_mark
+        ))
+
+
+def get_preset_path_by_xml_name(xml_preset_name):
+    def _search_path(root):
+        output = []
+        for root, _dirs, files in os.walk(root):
+            for f in files:
+                if f != xml_preset_name:
+                    continue
+                file_path = os.path.join(root, f)
+                output.append(file_path)
+        return output
+
+    def _validate_results(results):
+        if results and len(results) == 1:
+            return results.pop()
+        elif results and len(results) > 1:
+            print((
+                "More matching presets for `{}`: /n"
+                "{}").format(xml_preset_name, results))
+            return results.pop()
+        else:
+            return None
+
+    from .utils import (
+        get_flame_install_root,
+        get_flame_version
+    )
+
+    # get actual flame version and install path
+    _version = get_flame_version()["full"]
+    _install_root = get_flame_install_root()
+
+    # search path templates
+    shared_search_root = "{install_root}/shared/export/presets"
+    install_search_root = (
+        "{install_root}/presets/{version}/export/presets/flame")
+
+    # fill templates
+    shared_search_root = shared_search_root.format(
+        install_root=_install_root
+    )
+    install_search_root = install_search_root.format(
+        install_root=_install_root,
+        version=_version
+    )
+
+    # get search results
+    shared_results = _search_path(shared_search_root)
+    installed_results = _search_path(install_search_root)
+
+    # first try to return shared results
+    shared_preset_path = _validate_results(shared_results)
+
+    if shared_preset_path:
+        return os.path.dirname(shared_preset_path)
+
+    # then try installed results
+    installed_preset_path = _validate_results(installed_results)
+
+    if installed_preset_path:
+        return os.path.dirname(installed_preset_path)
+
+    # if nothing found then return False
+    return False
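The preset lookup added in `get_preset_path_by_xml_name` above is at its core a plain `os.walk` basename search, with shared presets taking precedence over installed ones. The search-and-validate core can be exercised on its own against a hypothetical preset tree (temp directory and file names here are made up for the demo; no Flame required):

```python
import os
import tempfile

def search_path(root, wanted):
    # collect every file below `root` whose basename matches `wanted`
    output = []
    for dirpath, _dirs, files in os.walk(root):
        for f in files:
            if f == wanted:
                output.append(os.path.join(dirpath, f))
    return output

# build a tiny hypothetical preset tree in a temp dir
root = tempfile.mkdtemp()
preset_dir = os.path.join(root, "shared", "export", "presets", "file_sequence")
os.makedirs(preset_dir)
with open(os.path.join(preset_dir, "OpenEXR.xml"), "w") as f:
    f.write("<preset/>")

results = search_path(root, "OpenEXR.xml")
print(len(results))                 # 1
print(os.path.dirname(results[0]))  # prints the temp preset directory
```

The real function then runs this search twice (shared root first, version-specific install root second) and returns the directory of the first unambiguous hit.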
@@ -25,7 +25,7 @@ class WireTapCom(object):
 
     This way we are able to set new project with settings and
     correct colorspace policy. Also we are able to create new user
-    or get actuall user with similar name (users are usually cloning
+    or get actual user with similar name (users are usually cloning
     their profiles and adding date stamp into suffix).
     """
 
@@ -214,7 +214,7 @@ class WireTapCom(object):
 
         volumes = []
 
-        # go trough all children and get volume names
+        # go through all children and get volume names
         child_obj = WireTapNodeHandle()
         for child_idx in range(children_num):
@@ -254,7 +254,7 @@ class WireTapCom(object):
         filtered_users = [user for user in used_names if user_name in user]
 
         if filtered_users:
-            # todo: need to find lastly created following regex patern for
+            # todo: need to find lastly created following regex pattern for
             #       date used in name
             return filtered_users.pop()
@@ -299,7 +299,7 @@ class WireTapCom(object):
 
         usernames = []
 
-        # go trough all children and get volume names
+        # go through all children and get volume names
         child_obj = WireTapNodeHandle()
         for child_idx in range(children_num):
@@ -346,7 +346,7 @@ class WireTapCom(object):
         if not requested:
             raise AttributeError((
                 "Error: Cannot request number of "
-                "childrens from the node {}. Make sure your "
+                "children from the node {}. Make sure your "
                 "wiretap service is running: {}").format(
                     parent_path, parent.lastError())
             )
@@ -234,7 +234,7 @@ class FtrackComponentCreator:
         ).first()
 
         if component_entity:
-            # overwrite existing members in component enity
+            # overwrite existing members in component entity
             # - get data for member from `ftrack.origin` location
             self._overwrite_members(component_entity, comp_data)
 
@@ -304,7 +304,7 @@ class FlameToFtrackPanel(object):
         self._resolve_project_entity()
         self._save_ui_state_to_cfg()
 
-        # get hanldes from gui input
+        # get handles from gui input
         handles = self.handles_input.text()
 
         # get frame start from gui input
@@ -517,7 +517,7 @@ class FlameToFtrackPanel(object):
         if self.temp_data_dir:
             shutil.rmtree(self.temp_data_dir)
             self.temp_data_dir = None
-            print("All Temp data were destroied ...")
+            print("All Temp data were destroyed ...")
 
     def close(self):
         self._save_ui_state_to_cfg()
@@ -13,7 +13,7 @@ def openpype_install():
     """
    openpype.install()
    avalon.api.install(opfapi)
-    print("Avalon registred hosts: {}".format(
+    print("Avalon registered hosts: {}".format(
        avalon.api.registered_host()))
@@ -101,7 +101,7 @@ def app_initialized(parent=None):
     """
     Initialisation of the hook is starting from here
 
-    First it needs to test if it can import the flame modul.
+    First it needs to test if it can import the flame module.
     This will happen only in case a project has been loaded.
     Then `app_initialized` will load main Framework which will load
     all menu objects as flame_apps.
@@ -65,7 +65,7 @@ def _sync_utility_scripts(env=None):
             if _itm not in remove_black_list:
                 skip = True
 
-            # do not skyp if pyc in extension
+            # do not skip if pyc in extension
             if not os.path.isdir(_itm) and "pyc" in os.path.splitext(_itm)[-1]:
                 skip = False
 
@@ -125,3 +125,18 @@ def setup(env=None):
     _sync_utility_scripts(env)
 
     log.info("Flame OpenPype wrapper has been installed")
+
+
+def get_flame_version():
+    import flame
+
+    return {
+        "full": flame.get_version(),
+        "major": flame.get_version_major(),
+        "minor": flame.get_version_minor(),
+        "patch": flame.get_version_patch()
+    }
+
+
+def get_flame_install_root():
+    return "/opt/Autodesk"
@@ -14,7 +14,7 @@ from pprint import pformat
 class FlamePrelaunch(PreLaunchHook):
     """ Flame prelaunch hook
 
-    Will make sure flame_script_dirs are coppied to user's folder defined
+    Will make sure flame_script_dirs are copied to user's folder defined
     in environment var FLAME_SCRIPT_DIR.
     """
     app_groups = ["flame"]
@@ -127,7 +127,7 @@ def create_time_effects(otio_clip, item):
     # # add otio effect to clip effects
     # otio_clip.effects.append(otio_effect)
 
-    # # loop trought and get all Timewarps
+    # # loop through and get all Timewarps
     # for effect in subTrackItems:
     #     if ((track_item not in effect.linkedItems())
     #             and (len(effect.linkedItems()) > 0)):
@@ -284,23 +284,20 @@ def create_otio_reference(clip_data):
     # get padding and other file infos
     log.debug("_ path: {}".format(path))
 
-    is_sequence = padding = utils.get_frame_from_path(path)
-    if is_sequence:
-        number = utils.get_frame_from_path(path)
-        file_head = file_name.split(number)[:-1]
-        frame_start = int(number)
-
+    is_sequence = frame_number = utils.get_frame_from_filename(file_name)
+    if is_sequence:
+        file_head = file_name.split(frame_number)[:-1]
+        frame_start = int(frame_number)
+        padding = len(frame_number)
+
+        metadata.update({
+            "isSequence": True,
+            "padding": padding
+        })
+
     frame_duration = clip_data["source_duration"]
     otio_ex_ref_item = None
 
     if is_sequence:
         # if it is file sequence try to create `ImageSequenceReference`
         # the OTIO might not be compatible so return nothing and do it old way
         try:
@@ -322,10 +319,12 @@ def create_otio_reference(clip_data):
             pass
 
     if not otio_ex_ref_item:
-        reformat_path = utils.get_reformated_path(path, padded=False)
+        dirname, file_name = os.path.split(path)
+        file_name = utils.get_reformated_filename(file_name, padded=False)
+        reformated_path = os.path.join(dirname, file_name)
         # in case old OTIO or video file create `ExternalReference`
         otio_ex_ref_item = otio.schema.ExternalReference(
-            target_url=reformat_path,
+            target_url=reformated_path,
             available_range=create_otio_time_range(
                 frame_start,
                 frame_duration,
@@ -346,7 +345,7 @@ def create_otio_clip(clip_data):
     media_reference = create_otio_reference(clip_data)
 
     # calculate source in
-    first_frame = utils.get_frame_from_path(clip_data["fpath"]) or 0
+    first_frame = utils.get_frame_from_filename(clip_data["fpath"]) or 0
     source_in = int(clip_data["source_in"]) - int(first_frame)
 
     # creatae source range
@@ -615,11 +614,11 @@ def create_otio_timeline(sequence):
             # Add Gap if needed
             if itemindex == 0:
                 # if it is first track item at track then add
-                # it to previouse item
+                # it to previous item
                 prev_item = segment_data
 
             else:
-                # get previouse item
+                # get previous item
                 prev_item = segments_ordered[itemindex - 1]
 
             log.debug("_ segment_data: {}".format(segment_data))
@@ -3,6 +3,8 @@ import opentimelineio as otio
 import logging
 log = logging.getLogger(__name__)
 
+FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")
+
 
 def timecode_to_frames(timecode, framerate):
     rt = otio.opentime.from_timecode(timecode, framerate)
@@ -19,77 +21,71 @@ def frames_to_seconds(frames, framerate):
     return otio.opentime.to_seconds(rt)


-def get_reformated_path(path, padded=True):
+def get_reformated_filename(filename, padded=True):
     """
     Return fixed python expression path

     Args:
-        path (str): path url or simple file name
+        filename (str): file name

     Returns:
         type: string with reformated path

     Example:
-        get_reformated_path("plate.1001.exr") > plate.%04d.exr
+        get_reformated_filename("plate.1001.exr") > plate.%04d.exr

     """
-    padding = get_padding_from_path(path)
-    found = get_frame_from_path(path)
+    found = FRAME_PATTERN.search(filename)

     if not found:
-        log.info("Path is not sequence: {}".format(path))
-        return path
+        log.info("File name is not sequence: {}".format(filename))
+        return filename

-    if padded:
-        path = path.replace(found, "%0{}d".format(padding))
-    else:
-        path = path.replace(found, "%d")
+    padding = get_padding_from_filename(filename)

-    return path
+    replacement = "%0{}d".format(padding) if padded else "%d"
+    start_idx, end_idx = found.span(1)
+
+    return replacement.join(
+        [filename[:start_idx], filename[end_idx:]]
+    )


-def get_padding_from_path(path):
+def get_padding_from_filename(filename):
     """
     Return padding number from Flame path style

     Args:
-        path (str): path url or simple file name
+        filename (str): file name

     Returns:
         int: padding number

     Example:
-        get_padding_from_path("plate.0001.exr") > 4
+        get_padding_from_filename("plate.0001.exr") > 4

     """
-    found = get_frame_from_path(path)
+    found = get_frame_from_filename(filename)

-    if found:
-        return len(found)
-    else:
-        return None
+    return len(found) if found else None


-def get_frame_from_path(path):
+def get_frame_from_filename(filename):
     """
     Return sequence number from Flame path style

     Args:
-        path (str): path url or simple file name
+        filename (str): file name

     Returns:
         int: sequence frame number

     Example:
-        def get_frame_from_path(path):
+        def get_frame_from_filename(path):
         ("plate.0001.exr") > 0001

     """
-    frame_pattern = re.compile(r"[._](\d+)[.]")
-
-    found = re.findall(frame_pattern, path)
+    found = re.findall(FRAME_PATTERN, filename)

-    if found:
-        return found.pop()
-    else:
-        return None
+    return found.pop() if found else None
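The renamed helpers above all hinge on the one shared `FRAME_PATTERN` regex. As a standalone sketch (not the repo module itself, just the same pattern logic), the padded-expression rewrite works like this:

```python
import re

# Same pattern the diff introduces as FRAME_PATTERN: a frame number
# delimited by "." or "_" on the left and "." on the right.
FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")


def get_reformated_filename(filename, padded=True):
    """Replace the frame number with a %d-style expression."""
    found = FRAME_PATTERN.search(filename)
    if not found:
        # not a sequence file name, return it unchanged
        return filename

    padding = len(found.group(1))
    replacement = "%0{}d".format(padding) if padded else "%d"
    # span(1) gives the start/end indices of just the digits
    start_idx, end_idx = found.span(1)
    return filename[:start_idx] + replacement + filename[end_idx:]


print(get_reformated_filename("plate.1001.exr"))                # plate.%04d.exr
print(get_reformated_filename("plate.1001.exr", padded=False))  # plate.%d.exr
print(get_reformated_filename("plate.exr"))                     # plate.exr
```

Using `found.span(1)` rather than `str.replace` avoids the bug the old code had when the frame digits also appear elsewhere in the name.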
@@ -16,6 +16,7 @@ class CollectTestSelection(pyblish.api.ContextPlugin):
     order = pyblish.api.CollectorOrder
     label = "test selection"
     hosts = ["flame"]
+    active = False

     def process(self, context):
         self.log.info(
@@ -0,0 +1,256 @@
import pyblish
import openpype
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export

# # developer reload modules
from pprint import pformat


class CollectTimelineInstances(pyblish.api.ContextPlugin):
    """Collect all Timeline segment selection."""

    order = pyblish.api.CollectorOrder - 0.09
    label = "Collect timeline Instances"
    hosts = ["flame"]

    audio_track_items = []

    def process(self, context):
        project = context.data["flameProject"]
        sequence = context.data["flameSequence"]
        self.otio_timeline = context.data["otioTimeline"]
        self.clips_in_reels = opfapi.get_clips_in_reels(project)
        self.fps = context.data["fps"]

        # process all selected
        with opfapi.maintained_segment_selection(sequence) as segments:
            for segment in segments:
                clip_data = opfapi.get_segment_attributes(segment)
                clip_name = clip_data["segment_name"]
                self.log.debug("clip_name: {}".format(clip_name))

                # get openpype tag data
                marker_data = opfapi.get_segment_data_marker(segment)
                self.log.debug("__ marker_data: {}".format(
                    pformat(marker_data)))

                if not marker_data:
                    continue

                if marker_data.get("id") != "pyblish.avalon.instance":
                    continue

                # get file path
                file_path = clip_data["fpath"]

                # get source clip
                source_clip = self._get_reel_clip(file_path)

                first_frame = opfapi.get_frame_from_filename(file_path) or 0

                head, tail = self._get_head_tail(clip_data, first_frame)

                # solve handles length
                marker_data["handleStart"] = min(
                    marker_data["handleStart"], head)
                marker_data["handleEnd"] = min(
                    marker_data["handleEnd"], tail)

                with_audio = bool(marker_data.pop("audio"))

                # add marker data to instance data
                inst_data = dict(marker_data.items())

                asset = marker_data["asset"]
                subset = marker_data["subset"]

                # insert family into families
                family = marker_data["family"]
                families = [str(f) for f in marker_data["families"]]
                families.insert(0, str(family))

                # form label
                label = asset
                if asset != clip_name:
                    label += " ({})".format(clip_name)
                label += " {}".format(subset)
                label += " {}".format("[" + ", ".join(families) + "]")

                inst_data.update({
                    "name": "{}_{}".format(asset, subset),
                    "label": label,
                    "asset": asset,
                    "item": segment,
                    "families": families,
                    "publish": marker_data["publish"],
                    "fps": self.fps,
                    "flameSourceClip": source_clip,
                    "sourceFirstFrame": int(first_frame),
                    "path": file_path
                })

                # get otio clip data
                otio_data = self._get_otio_clip_instance_data(clip_data) or {}
                self.log.debug("__ otio_data: {}".format(pformat(otio_data)))

                # add to instance data
                inst_data.update(otio_data)
                self.log.debug("__ inst_data: {}".format(pformat(inst_data)))

                # add resolution
                self._get_resolution_to_data(inst_data, context)

                # create instance
                instance = context.create_instance(**inst_data)

                # add colorspace data
                instance.data.update({
                    "versionData": {
                        "colorspace": clip_data["colour_space"],
                    }
                })

                # create shot instance for shot attributes create/update
                self._create_shot_instance(context, clip_name, **inst_data)

                self.log.info("Creating instance: {}".format(instance))
                self.log.info(
                    "_ instance.data: {}".format(pformat(instance.data)))

                if not with_audio:
                    continue

                # add audioReview attribute to plate instance data
                # if reviewTrack is on
                if marker_data.get("reviewTrack") is not None:
                    instance.data["reviewAudio"] = True

    def _get_head_tail(self, clip_data, first_frame):
        # calculate head and tail with forward compatibility
        head = clip_data.get("segment_head")
        tail = clip_data.get("segment_tail")

        if not head:
            head = int(clip_data["source_in"]) - int(first_frame)
        if not tail:
            tail = int(
                clip_data["source_duration"] - (
                    head + clip_data["record_duration"]
                )
            )
        return head, tail

    def _get_reel_clip(self, path):
        match_reel_clip = [
            clip for clip in self.clips_in_reels
            if clip["fpath"] == path
        ]
        if match_reel_clip:
            return match_reel_clip.pop()

    def _get_resolution_to_data(self, data, context):
        assert data.get("otioClip"), "Missing `otioClip` data"

        # solve source resolution option
        if data.get("sourceResolution", None):
            otio_clip_metadata = data[
                "otioClip"].media_reference.metadata
            data.update({
                "resolutionWidth": otio_clip_metadata[
                    "openpype.source.width"],
                "resolutionHeight": otio_clip_metadata[
                    "openpype.source.height"],
                "pixelAspect": otio_clip_metadata[
                    "openpype.source.pixelAspect"]
            })
        else:
            otio_tl_metadata = context.data["otioTimeline"].metadata
            data.update({
                "resolutionWidth": otio_tl_metadata["openpype.timeline.width"],
                "resolutionHeight": otio_tl_metadata[
                    "openpype.timeline.height"],
                "pixelAspect": otio_tl_metadata[
                    "openpype.timeline.pixelAspect"]
            })

    def _create_shot_instance(self, context, clip_name, **data):
        master_layer = data.get("heroTrack")
        hierarchy_data = data.get("hierarchyData")
        asset = data.get("asset")

        if not master_layer:
            return

        if not hierarchy_data:
            return

        asset = data["asset"]
        subset = "shotMain"

        # insert family into families
        family = "shot"

        # form label
        label = asset
        if asset != clip_name:
            label += " ({}) ".format(clip_name)
        label += " {}".format(subset)
        label += " [{}]".format(family)

        data.update({
            "name": "{}_{}".format(asset, subset),
            "label": label,
            "subset": subset,
            "asset": asset,
            "family": family,
            "families": []
        })

        instance = context.create_instance(**data)
        self.log.info("Creating instance: {}".format(instance))
        self.log.debug(
            "_ instance.data: {}".format(pformat(instance.data)))

    def _get_otio_clip_instance_data(self, clip_data):
        """
        Return otio objects for timeline, track and clip

        Args:
            timeline_item_data (dict): timeline_item_data from list returned by
                resolve.get_current_timeline_items()
            otio_timeline (otio.schema.Timeline): otio object

        Returns:
            dict: otio clip object

        """
        segment = clip_data["PySegment"]
        s_track_name = segment.parent.name.get_value()
        timeline_range = self._create_otio_time_range_from_timeline_item_data(
            clip_data)

        for otio_clip in self.otio_timeline.each_clip():
            track_name = otio_clip.parent().name
            parent_range = otio_clip.range_in_parent()
            if s_track_name not in track_name:
                continue
            if otio_clip.name not in segment.name.get_value():
                continue
            if openpype.lib.is_overlapping_otio_ranges(
                    parent_range, timeline_range, strict=True):

                # add pypedata marker to otio_clip metadata
                for marker in otio_clip.markers:
                    if opfapi.MARKER_NAME in marker.name:
                        otio_clip.metadata.update(marker.metadata)
                return {"otioClip": otio_clip}

        return None

    def _create_otio_time_range_from_timeline_item_data(self, clip_data):
        frame_start = int(clip_data["record_in"])
        frame_duration = int(clip_data["record_duration"])

        return flame_export.create_otio_time_range(
            frame_start, frame_duration, self.fps)
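The `_get_head_tail` fallback above derives available head/tail media from the source/record frame counts, and the collector then clamps the requested handles against them. A minimal sketch with hypothetical clip values (the dict keys match the diff; the numbers are illustrative only):

```python
def get_head_tail(clip_data, first_frame):
    # fallback path of _get_head_tail: derive head/tail when the
    # segment does not carry explicit head/tail attributes
    head = clip_data.get("segment_head")
    tail = clip_data.get("segment_tail")
    if not head:
        # media frames available before the cut-in point
        head = int(clip_data["source_in"]) - int(first_frame)
    if not tail:
        # media frames left over after the used range
        tail = int(
            clip_data["source_duration"]
            - (head + clip_data["record_duration"])
        )
    return head, tail


clip = {
    "source_in": 1010,       # first used frame of the media
    "source_duration": 120,  # total media frames
    "record_duration": 100,  # frames used on the timeline
}
head, tail = get_head_tail(clip, first_frame=1001)
print(head, tail)  # 9 11

# handles are then clamped so they never exceed the available media
handle_start, handle_end = 10, 10
print(min(handle_start, head), min(handle_end, tail))  # 9 10
```

The `min(...)` clamp is why a clip with only 9 spare head frames ends up with a 9-frame start handle even when 10 was requested.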
@@ -0,0 +1,57 @@
import pyblish.api
import avalon.api as avalon
import openpype.lib as oplib
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export


class CollecTimelineOTIO(pyblish.api.ContextPlugin):
    """Inject the current working context into publish context"""

    label = "Collect Timeline OTIO"
    order = pyblish.api.CollectorOrder - 0.099

    def process(self, context):
        # plugin defined
        family = "workfile"
        variant = "otioTimeline"

        # main
        asset_doc = context.data["assetEntity"]
        task_name = avalon.Session["AVALON_TASK"]
        project = opfapi.get_current_project()
        sequence = opfapi.get_current_sequence(opfapi.CTX.selection)

        # create subset name
        subset_name = oplib.get_subset_name_with_asset_doc(
            family,
            variant,
            task_name,
            asset_doc,
        )

        # adding otio timeline to context
        with opfapi.maintained_segment_selection(sequence):
            otio_timeline = flame_export.create_otio_timeline(sequence)

        instance_data = {
            "name": subset_name,
            "asset": asset_doc["name"],
            "subset": subset_name,
            "family": "workfile"
        }

        # create instance with workfile
        instance = context.create_instance(**instance_data)
        self.log.info("Creating instance: {}".format(instance))

        # update context with main project attributes
        context.data.update({
            "flameProject": project,
            "flameSequence": sequence,
            "otioTimeline": otio_timeline,
            "currentFile": "Flame/{}/{}".format(
                project.name, sequence.name
            ),
            "fps": float(str(sequence.frame_rate)[:-4])
        })
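The `fps` line above relies on Flame reporting the sequence frame rate as a string ending in a 4-character `" fps"` suffix (an assumption inferred from the `[:-4]` slice in the diff, not confirmed by the source). A sketch of that parsing:

```python
def parse_flame_frame_rate(frame_rate_str):
    # assumes the value looks like "25 fps" or "23.976 fps";
    # dropping the last 4 characters removes the " fps" suffix
    return float(str(frame_rate_str)[:-4])


print(parse_flame_frame_rate("25 fps"))      # 25.0
print(parse_flame_frame_rate("23.976 fps"))  # 23.976
```

If Flame ever changed the suffix, the fixed `[:-4]` slice would silently misparse, so a regex-based parse would be the defensive alternative.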
43  openpype/hosts/flame/plugins/publish/extract_otio_file.py  Normal file

@@ -0,0 +1,43 @@
import os
import pyblish.api
import openpype.api
import opentimelineio as otio


class ExtractOTIOFile(openpype.api.Extractor):
    """
    Extractor export OTIO file
    """

    label = "Extract OTIO file"
    order = pyblish.api.ExtractorOrder - 0.45
    families = ["workfile"]
    hosts = ["flame"]

    def process(self, instance):
        # create representation data
        if "representations" not in instance.data:
            instance.data["representations"] = []

        name = instance.data["name"]
        staging_dir = self.staging_dir(instance)

        otio_timeline = instance.context.data["otioTimeline"]
        # create otio timeline representation
        otio_file_name = name + ".otio"
        otio_file_path = os.path.join(staging_dir, otio_file_name)

        # export otio file to temp dir
        otio.adapters.write_to_file(otio_timeline, otio_file_path)

        representation_otio = {
            'name': "otio",
            'ext': "otio",
            'files': otio_file_name,
            "stagingDir": staging_dir,
        }

        instance.data["representations"].append(representation_otio)

        self.log.info("Added OTIO file representation: {}".format(
            representation_otio))
172  openpype/hosts/flame/plugins/publish/extract_subset_resources.py  Normal file

@@ -0,0 +1,172 @@
import os
from pprint import pformat
from copy import deepcopy
import pyblish.api
import openpype.api
from openpype.hosts.flame import api as opfapi


class ExtractSubsetResources(openpype.api.Extractor):
    """
    Extractor for transcoding files from Flame clip
    """

    label = "Extract subset resources"
    order = pyblish.api.ExtractorOrder
    families = ["clip"]
    hosts = ["flame"]

    # plugin defaults
    default_presets = {
        "thumbnail": {
            "ext": "jpg",
            "xml_preset_file": "Jpeg (8-bit).xml",
            "xml_preset_dir": "",
            "representation_add_range": False,
            "representation_tags": ["thumbnail"]
        },
        "ftrackpreview": {
            "ext": "mov",
            "xml_preset_file": "Apple iPad (1920x1080).xml",
            "xml_preset_dir": "",
            "representation_add_range": True,
            "representation_tags": [
                "review",
                "delete"
            ]
        }
    }
    keep_original_representation = False

    # hide publisher during exporting
    hide_ui_on_process = True

    # settings
    export_presets_mapping = {}

    def process(self, instance):
        if (
            self.keep_original_representation
            and "representations" not in instance.data
            or not self.keep_original_representation
        ):
            instance.data["representations"] = []

        frame_start = instance.data["frameStart"]
        handle_start = instance.data["handleStart"]
        frame_start_handle = frame_start - handle_start
        source_first_frame = instance.data["sourceFirstFrame"]
        source_start_handles = instance.data["sourceStartH"]
        source_end_handles = instance.data["sourceEndH"]
        source_duration_handles = (
            source_end_handles - source_start_handles) + 1

        clip_data = instance.data["flameSourceClip"]
        clip = clip_data["PyClip"]

        in_mark = (source_start_handles - source_first_frame) + 1
        out_mark = in_mark + source_duration_handles

        staging_dir = self.staging_dir(instance)

        # add default preset type for thumbnail and reviewable video
        # update them with settings and override in case the same
        # are found in there
        export_presets = deepcopy(self.default_presets)
        export_presets.update(self.export_presets_mapping)

        # with maintained duplication loop all presets
        with opfapi.maintained_object_duplication(clip) as duplclip:
            # loop all preset names and
            for unique_name, preset_config in export_presets.items():
                kwargs = {}
                preset_file = preset_config["xml_preset_file"]
                preset_dir = preset_config["xml_preset_dir"]
                repre_tags = preset_config["representation_tags"]

                # validate xml preset file is filled
                if preset_file == "":
                    raise ValueError(
                        ("Check Settings for {} preset: "
                         "`XML preset file` is not filled").format(
                            unique_name)
                    )

                # resolve xml preset dir if not filled
                if preset_dir == "":
                    preset_dir = opfapi.get_preset_path_by_xml_name(
                        preset_file)

                    if not preset_dir:
                        raise ValueError(
                            ("Check Settings for {} preset: "
                             "`XML preset file` {} is not found").format(
                                unique_name, preset_file)
                        )

                # create preset path
                preset_path = str(os.path.join(
                    preset_dir, preset_file
                ))

                # define kwargs based on preset type
                if "thumbnail" in unique_name:
                    kwargs["thumb_frame_number"] = in_mark + (
                        source_duration_handles / 2)
                else:
                    kwargs.update({
                        "in_mark": in_mark,
                        "out_mark": out_mark
                    })

                export_dir_path = str(os.path.join(
                    staging_dir, unique_name
                ))
                os.makedirs(export_dir_path)

                # export
                opfapi.export_clip(
                    export_dir_path, duplclip, preset_path, **kwargs)

                # create representation data
                representation_data = {
                    "name": unique_name,
                    "outputName": unique_name,
                    "ext": preset_config["ext"],
                    "stagingDir": export_dir_path,
                    "tags": repre_tags
                }

                files = os.listdir(export_dir_path)

                # add files to representation but add
                # imagesequence as list
                if (
                    "movie_file" in preset_path
                    or unique_name == "thumbnail"
                ):
                    representation_data["files"] = files.pop()
                else:
                    representation_data["files"] = files

                # add frame range
                if preset_config["representation_add_range"]:
                    representation_data.update({
                        "frameStart": frame_start_handle,
                        "frameEnd": (
                            frame_start_handle + source_duration_handles),
                        "fps": instance.data["fps"]
                    })

                instance.data["representations"].append(representation_data)

                # add review family if found in tags
                if "review" in repre_tags:
                    instance.data["families"].append("review")

                self.log.info("Added representation: {}".format(
                    representation_data))

        self.log.debug("All representations: {}".format(
            pformat(instance.data["representations"])))
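The in/out mark arithmetic in the extractor above converts absolute source frames into 1-based marks on the duplicated clip. A worked sketch with hypothetical frame values (the variable names follow the diff; the numbers are illustrative only):

```python
# hypothetical frame values for one publishable clip
frame_start = 1001
handle_start = 10
source_first_frame = 0
source_start_handles = 90   # sourceStartH: first source frame incl. handle
source_end_handles = 209    # sourceEndH: last source frame incl. handle

frame_start_handle = frame_start - handle_start
source_duration_handles = (source_end_handles - source_start_handles) + 1

# marks are 1-based offsets into the duplicated source clip
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles

print(frame_start_handle, source_duration_handles, in_mark, out_mark)
# 991 120 91 211
```

The thumbnail preset then picks `in_mark + source_duration_handles / 2` as its single frame, i.e. the middle of the handled range.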
@@ -52,7 +52,7 @@ def install():


 def uninstall():
-    """Uninstall all tha was installed
+    """Uninstall all that was installed

     This is where you undo everything that was done in `install()`.
     That means, removing menus, deregistering families and data
@@ -1,6 +1,6 @@
 import os
 import importlib
-from openpype.lib import PreLaunchHook
+from openpype.lib import PreLaunchHook, ApplicationLaunchFailed
 from openpype.hosts.fusion.api import utils
@@ -12,27 +12,29 @@ class FusionPrelaunch(PreLaunchHook):
     app_groups = ["fusion"]

     def execute(self):
-        # making sure pyton 3.6 is installed at provided path
+        # making sure python 3.6 is installed at provided path
         py36_dir = os.path.normpath(self.launch_context.env.get("PYTHON36", ""))
-        assert os.path.isdir(py36_dir), (
-            "Python 3.6 is not installed at the provided folder path. Either "
-            "make sure the `environments\resolve.json` is having correctly "
-            "set `PYTHON36` or make sure Python 3.6 is installed "
-            f"in given path. \nPYTHON36E: `{py36_dir}`"
-        )
-        self.log.info(f"Path to Fusion Python folder: `{py36_dir}`...")
+        if not os.path.isdir(py36_dir):
+            raise ApplicationLaunchFailed(
+                "Python 3.6 is not installed at the provided path.\n"
+                "Either make sure the 'environments/fusion.json' has "
+                "'PYTHON36' set correctly or make sure Python 3.6 is installed "
+                f"in the given path.\n\nPYTHON36: {py36_dir}"
+            )
+        self.log.info(f"Path to Fusion Python folder: '{py36_dir}'...")
         self.launch_context.env["PYTHON36"] = py36_dir

         # setting utility scripts dir for scripts syncing
         us_dir = os.path.normpath(
             self.launch_context.env.get("FUSION_UTILITY_SCRIPTS_DIR", "")
         )
-        assert os.path.isdir(us_dir), (
-            "Fusion utility script dir does not exists. Either make sure "
-            "the `environments\fusion.json` is having correctly set "
-            "`FUSION_UTILITY_SCRIPTS_DIR` or reinstall DaVinci Resolve. \n"
-            f"FUSION_UTILITY_SCRIPTS_DIR: `{us_dir}`"
-        )
+        if not os.path.isdir(us_dir):
+            raise ApplicationLaunchFailed(
+                "Fusion utility script dir does not exist. Either make sure "
+                "the 'environments/fusion.json' has "
+                "'FUSION_UTILITY_SCRIPTS_DIR' set correctly or reinstall "
+                f"Fusion.\n\nFUSION_UTILITY_SCRIPTS_DIR: '{us_dir}'"
+            )

         try:
             __import__("avalon.fusion")
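The hunk above swaps `assert` for an explicit `ApplicationLaunchFailed` raise. Beyond the nicer message, the key difference is that `assert` statements are stripped when Python runs with `-O`, so the validation would silently vanish. A minimal sketch of the pattern (the exception class here is a hypothetical stand-in for `openpype.lib.ApplicationLaunchFailed`, and the path is made up):

```python
import os


class ApplicationLaunchFailed(Exception):
    """Hypothetical stand-in for openpype.lib.ApplicationLaunchFailed."""


def validate_dir(env, key):
    # `assert` checks vanish under `python -O`; an explicit raise does not
    path = os.path.normpath(env.get(key, ""))
    if not os.path.isdir(path):
        raise ApplicationLaunchFailed(
            "{} is not a directory: {!r}".format(key, path))
    return path


try:
    validate_dir({"PYTHON36": "/nonexistent/py36"}, "PYTHON36")
except ApplicationLaunchFailed as exc:
    print("launch failed:", exc)
```

A dedicated exception type also lets the launcher catch launch failures specifically instead of a bare `AssertionError`.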
@@ -185,22 +185,22 @@ class FusionLoadSequence(api.Loader):
     - We do the same like Fusion - allow fusion to take control.

     - HoldFirstFrame: Fusion resets this to 0
-      - We preverse the value.
+      - We preserve the value.

     - HoldLastFrame: Fusion resets this to 0
-      - We preverse the value.
+      - We preserve the value.

     - Reverse: Fusion resets to disabled if "Loop" is not enabled.
       - We preserve the value.

     - Depth: Fusion resets to "Format"
-      - We preverse the value.
+      - We preserve the value.

     - KeyCode: Fusion resets to ""
-      - We preverse the value.
+      - We preserve the value.

     - TimeCodeOffset: Fusion resets to 0
-      - We preverse the value.
+      - We preserve the value.

     """
@@ -124,7 +124,7 @@ class FusionSubmitDeadline(pyblish.api.InstancePlugin):

         # Include critical variables with submission
         keys = [
-            # TODO: This won't work if the slaves don't have accesss to
+            # TODO: This won't work if the slaves don't have access to
             # these paths, such as if slaves are running Linux and the
             # submitter is on Windows.
             "PYTHONPATH",
@@ -85,7 +85,7 @@ def _format_filepath(session):
     new_filename = "{}_{}_slapcomp_v001.comp".format(project, asset)
     new_filepath = os.path.join(slapcomp_dir, new_filename)

-    # Create new unqiue filepath
+    # Create new unique filepath
     if os.path.exists(new_filepath):
         new_filepath = pype.version_up(new_filepath)
@@ -16,7 +16,7 @@ def main(env):
     # activate resolve from pype
     avalon.api.install(avalon.fusion)

-    log.info(f"Avalon registred hosts: {avalon.api.registered_host()}")
+    log.info(f"Avalon registered hosts: {avalon.api.registered_host()}")

     menu.launch_openpype_menu()
@@ -2,13 +2,13 @@

 ### Development

-#### Setting up ESLint as linter for javasript code
+#### Setting up ESLint as linter for javascript code

 You need [node.js](https://nodejs.org/en/) installed. All you need to do then
 is to run:

 ```sh
-npm intall
+npm install
 ```
 in **js** directory. This will install eslint and all requirements locally.
@@ -18,11 +18,11 @@ if (typeof $ === 'undefined'){
  * @classdesc Image Sequence loader JS code.
  */
 var ImageSequenceLoader = function() {
-    this.PNGTransparencyMode = 0;  // Premultiplied wih Black
-    this.TGATransparencyMode = 0;  // Premultiplied wih Black
-    this.SGITransparencyMode = 0;  // Premultiplied wih Black
+    this.PNGTransparencyMode = 0;  // Premultiplied with Black
+    this.TGATransparencyMode = 0;  // Premultiplied with Black
+    this.SGITransparencyMode = 0;  // Premultiplied with Black
     this.LayeredPSDTransparencyMode = 1;  // Straight
-    this.FlatPSDTransparencyMode = 2;  // Premultiplied wih White
+    this.FlatPSDTransparencyMode = 2;  // Premultiplied with White
 };
@@ -84,7 +84,7 @@ ImageSequenceLoader.getUniqueColumnName = function(columnPrefix) {
  * @return {string} Read node name
  *
  * @example
- * // Agrguments are in following order:
+ * // Arguments are in following order:
  * var args = [
  *     files,  // Files in file sequences.
  *     asset,  // Asset name.
@@ -97,11 +97,11 @@ ImageSequenceLoader.prototype.importFiles = function(args) {
     MessageLog.trace("ImageSequence:: " + typeof PypeHarmony);
     MessageLog.trace("ImageSequence $:: " + typeof $);
     MessageLog.trace("ImageSequence OH:: " + typeof PypeHarmony.OpenHarmony);
-    var PNGTransparencyMode = 0;  // Premultiplied wih Black
-    var TGATransparencyMode = 0;  // Premultiplied wih Black
-    var SGITransparencyMode = 0;  // Premultiplied wih Black
+    var PNGTransparencyMode = 0;  // Premultiplied with Black
+    var TGATransparencyMode = 0;  // Premultiplied with Black
+    var SGITransparencyMode = 0;  // Premultiplied with Black
     var LayeredPSDTransparencyMode = 1;  // Straight
-    var FlatPSDTransparencyMode = 2;  // Premultiplied wih White
+    var FlatPSDTransparencyMode = 2;  // Premultiplied with White

     var doc = $.scn;
     var files = args[0];
@@ -224,7 +224,7 @@ ImageSequenceLoader.prototype.importFiles = function(args) {
  * @return {string} Read node name
  *
  * @example
- * // Agrguments are in following order:
+ * // Arguments are in following order:
  * var args = [
  *     files,  // Files in file sequences
  *     name,   // Node name
@@ -13,11 +13,11 @@ copy_files = """function copyFile(srcFilename, dstFilename)
 }
 """

-import_files = """var PNGTransparencyMode = 1; //Premultiplied wih Black
-var TGATransparencyMode = 0; //Premultiplied wih Black
-var SGITransparencyMode = 0; //Premultiplied wih Black
+import_files = """var PNGTransparencyMode = 1; //Premultiplied with Black
+var TGATransparencyMode = 0; //Premultiplied with Black
+var SGITransparencyMode = 0; //Premultiplied with Black
 var LayeredPSDTransparencyMode = 1; //Straight
-var FlatPSDTransparencyMode = 2; //Premultiplied wih White
+var FlatPSDTransparencyMode = 2; //Premultiplied with White

 function getUniqueColumnName( column_prefix )
 {
@@ -140,11 +140,11 @@ function import_files(args)
 import_files
 """

-replace_files = """var PNGTransparencyMode = 1; //Premultiplied wih Black
-var TGATransparencyMode = 0; //Premultiplied wih Black
-var SGITransparencyMode = 0; //Premultiplied wih Black
+replace_files = """var PNGTransparencyMode = 1; //Premultiplied with Black
+var TGATransparencyMode = 0; //Premultiplied with Black
+var SGITransparencyMode = 0; //Premultiplied with Black
 var LayeredPSDTransparencyMode = 1; //Straight
-var FlatPSDTransparencyMode = 2; //Premultiplied wih White
+var FlatPSDTransparencyMode = 2; //Premultiplied with White

 function replace_files(args)
 {
@@ -31,7 +31,7 @@ def beforeNewProjectCreated(event):

 def afterNewProjectCreated(event):
     log.info("after new project created event...")
-    # sync avalon data to project properities
+    # sync avalon data to project properties
     sync_avalon_data_to_workfile()

     # add tags from preset
@@ -51,7 +51,7 @@ def beforeProjectLoad(event):

 def afterProjectLoad(event):
     log.info("after project load event...")
-    # sync avalon data to project properities
+    # sync avalon data to project properties
     sync_avalon_data_to_workfile()

     # add tags from preset
@@ -299,7 +299,7 @@ def get_track_item_pype_data(track_item):
     if not tag:
         return None

-    # get tag metadata attribut
+    # get tag metadata attribute
     tag_data = tag.metadata()
     # convert tag metadata to normal keys names and values to correct types
     for k, v in dict(tag_data).items():
@@ -402,7 +402,7 @@ def sync_avalon_data_to_workfile():
     try:
         project.setProjectDirectory(active_project_root)
     except Exception:
-        # old way of seting it
+        # old way of setting it
         project.setProjectRoot(active_project_root)

     # get project data from avalon db
@@ -614,7 +614,7 @@ def create_nuke_workfile_clips(nuke_workfiles, seq=None):
     if not seq:
         seq = hiero.core.Sequence('NewSequences')
         root.addItem(hiero.core.BinItem(seq))
-    # todo will ned to define this better
+    # todo will need to define this better
     # track = seq[1]  # lazy example to get a destination# track
     clips_lst = []
     for nk in nuke_workfiles:
@@ -838,7 +838,7 @@ def apply_colorspace_project():
     # remove the TEMP file as we don't need it
     os.remove(copy_current_file_tmp)

-    # use the code from bellow for changing xml hrox Attributes
+    # use the code from below for changing xml hrox Attributes
     presets.update({"name": os.path.basename(copy_current_file)})

     # read HROX in as QDomDocument
@@ -874,7 +874,7 @@ def apply_colorspace_clips():
     if "default" in clip_colorspace:
         continue

-    # check if any colorspace presets for read is mathing
+    # check if any colorspace presets for read is matching
     preset_clrsp = None
     for k in presets:
         if not bool(re.search(k["regex"], clip_media_source_path)):
@@ -931,7 +931,7 @@ def get_sequence_pattern_and_padding(file):
     Can find file.0001.ext, file.%02d.ext, file.####.ext

     Return:
-        string: any matching sequence patern
+        string: any matching sequence pattern
         int: padding of sequence numbering
     """
     foundall = re.findall(
@@ -950,7 +950,7 @@ def get_sequence_pattern_and_padding(file):


 def sync_clip_name_to_data_asset(track_items_list):
-    # loop trough all selected clips
+    # loop through all selected clips
     for track_item in track_items_list:
         # ignore if parent track is locked or disabled
         if track_item.parent().isLocked():
@@ -92,7 +92,7 @@ def create_time_effects(otio_clip, track_item):
     # add otio effect to clip effects
     otio_clip.effects.append(otio_effect)

-    # loop trought and get all Timewarps
+    # loop through and get all Timewarps
     for effect in subTrackItems:
         if ((track_item not in effect.linkedItems())
                 and (len(effect.linkedItems()) > 0)):
@@ -388,11 +388,11 @@ def create_otio_timeline():
         # Add Gap if needed
         if itemindex == 0:
             # if it is first track item at track then add
-            # it to previouse item
+            # it to previous item
             return track_item

         else:
-            # get previouse item
+            # get previous item
             return track_item.parent().items()[itemindex - 1]

     # get current timeline
@ -416,11 +416,11 @@ def create_otio_timeline():
|
|||
# Add Gap if needed
|
||||
if itemindex == 0:
|
||||
# if it is first track item at track then add
|
||||
# it to previouse item
|
||||
# it to previous item
|
||||
prev_item = track_item
|
||||
|
||||
else:
|
||||
# get previouse item
|
||||
# get previous item
|
||||
prev_item = track_item.parent().items()[itemindex - 1]
|
||||
|
||||
# calculate clip frame range difference from each other
|
||||
|
|
|
|||
|
|
@ -146,7 +146,7 @@ class CreatorWidget(QtWidgets.QDialog):
|
|||
# convert label text to normal capitalized text with spaces
|
||||
label_text = self.camel_case_split(text)
|
||||
|
||||
# assign the new text to lable widget
|
||||
# assign the new text to label widget
|
||||
label = QtWidgets.QLabel(label_text)
|
||||
label.setObjectName("LineLabel")
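The `camel_case_split` helper used above is not shown in this hunk; one plausible implementation of that behavior (hypothetical, for illustration only) would be:

```python
import re

def camel_case_split(text):
    """Split a camelCase identifier into space-separated capitalized words."""
    # words are: optional capital + lowercase run, all-caps runs, or digit runs
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", text)
    return " ".join(word.capitalize() for word in words)
```

This would turn a key such as `subsetName` into the "Subset Name" label text seen in the creator UI hunks below it in this commit.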

@@ -337,7 +337,7 @@ class SequenceLoader(avalon.Loader):
"Sequentially in order"
],
default="Original timing",
help="Would you like to place it at orignal timing?"
help="Would you like to place it at original timing?"
)
]

@@ -475,7 +475,7 @@ class ClipLoader:
def _get_asset_data(self):
""" Get all available asset data

joint `data` key with asset.data dict into the representaion
joint `data` key with asset.data dict into the representation

"""
asset_name = self.context["representation"]["context"]["asset"]

@@ -550,7 +550,7 @@ class ClipLoader:
(self.timeline_out - self.timeline_in + 1)
+ self.handle_start + self.handle_end) < self.media_duration)

# if slate is on then remove the slate frame from begining
# if slate is on then remove the slate frame from beginning
if slate_on:
self.media_duration -= 1
self.handle_start += 1

@@ -634,8 +634,8 @@ class PublishClip:
"track": "sequence",
}

# parents search patern
parents_search_patern = r"\{([a-z]*?)\}"
# parents search pattern
parents_search_pattern = r"\{([a-z]*?)\}"

# default templates for non-ui use
rename_default = False

@@ -719,7 +719,7 @@ class PublishClip:
return self.track_item

def _populate_track_item_default_data(self):
""" Populate default formating data from track item. """
""" Populate default formatting data from track item. """

self.track_item_default_data = {
"_folder_": "shots",

@@ -814,7 +814,7 @@ class PublishClip:
# mark review layer
if self.review_track and (
self.review_track not in self.review_track_default):
# if review layer is defined and not the same as defalut
# if review layer is defined and not the same as default
self.review_layer = self.review_track
# shot num calculate
if self.rename_index == 0:

@@ -863,7 +863,7 @@ class PublishClip:
# in case track name and subset name is the same then add
if self.subset_name == self.track_name:
hero_data["subset"] = self.subset
# assing data to return hierarchy data to tag
# assign data to return hierarchy data to tag
tag_hierarchy_data = hero_data

# add data to return data dict

@@ -897,7 +897,7 @@ class PublishClip:
type
)

# first collect formating data to use for formating template
# first collect formatting data to use for formatting template
formating_data = {}
for _k, _v in self.hierarchy_data.items():
value = _v["value"].format(

@@ -915,9 +915,9 @@ class PublishClip:
""" Create parents and return it in list. """
self.parents = []

patern = re.compile(self.parents_search_patern)
pattern = re.compile(self.parents_search_pattern)

par_split = [(patern.findall(t).pop(), t)
par_split = [(pattern.findall(t).pop(), t)
for t in self.hierarchy.split("/")]

for type, template in par_split:
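The renamed `parents_search_pattern` regex can be exercised in isolation. A minimal sketch using the exact pattern from the diff, with a hypothetical hierarchy template string:

```python
import re

# the pattern introduced in the hunk above
parents_search_pattern = r"\{([a-z]*?)\}"

pattern = re.compile(parents_search_pattern)
hierarchy = "{folder}/{sequence}/{shot}"  # hypothetical hierarchy template

# same shape as the par_split comprehension in the diff:
# one (token_name, template_segment) pair per hierarchy level
par_split = [(pattern.findall(t).pop(), t)
             for t in hierarchy.split("/")]
```

Each segment yields its lowercase token name, which the surrounding code then uses to build parent entities per hierarchy level.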

@@ -1,5 +1,5 @@
# PimpMySpreadsheet 1.0, Antony Nasce, 23/05/13.
# Adds custom spreadsheet columns and right-click menu for setting the Shot Status, and Artist Shot Assignement.
# Adds custom spreadsheet columns and right-click menu for setting the Shot Status, and Artist Shot Assignment.
# gStatusTags is a global dictionary of key(status)-value(icon) pairs, which can be overridden with custom icons if required
# Requires Hiero 1.7v2 or later.
# Install Instructions: Copy to ~/.hiero/Python/StartupUI

@@ -172,7 +172,7 @@ def add_tags_to_workfile():
}
}

# loop trough tag data dict and create deep tag structure
# loop through tag data dict and create deep tag structure
for _k, _val in nks_pres_tags.items():
# check if key is not decorated with [] so it is defined as bin
bin_find = None

@@ -139,7 +139,7 @@ class CreateShotClip(phiero.Creator):
"type": "QComboBox",
"label": "Subset Name",
"target": "ui",
"toolTip": "chose subset name patern, if <track_name> is selected, name of track layer will be used",  # noqa
"toolTip": "chose subset name pattern, if <track_name> is selected, name of track layer will be used",  # noqa
"order": 0},
"subsetFamily": {
"value": ["plate", "take"],

@@ -34,7 +34,7 @@ class PreCollectClipEffects(pyblish.api.InstancePlugin):
if clip_effect_items:
tracks_effect_items[track_index] = clip_effect_items

# process all effects and devide them to instance
# process all effects and divide them to instance
for _track_index, sub_track_items in tracks_effect_items.items():
# skip if track index is the same as review track index
if review and review_track_index == _track_index:

@@ -156,7 +156,7 @@ class PreCollectClipEffects(pyblish.api.InstancePlugin):
'postage_stamp_frame', 'maskChannel', 'export_cc',
'select_cccid', 'mix', 'version', 'matrix']

# loop trough all knobs and collect not ignored
# loop through all knobs and collect not ignored
# and any with any value
for knob in node.knobs().keys():
# skip nodes in ignore keys

@@ -264,7 +264,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
timeline_range = self.create_otio_time_range_from_timeline_item_data(
track_item)

# loop trough audio track items and search for overlaping clip
# loop through audio track items and search for overlapping clip
for otio_audio in self.audio_track_items:
parent_range = otio_audio.range_in_parent()

@@ -5,7 +5,7 @@ class CollectClipResolution(pyblish.api.InstancePlugin):
"""Collect clip geometry resolution"""

order = pyblish.api.CollectorOrder - 0.1
label = "Collect Clip Resoluton"
label = "Collect Clip Resolution"
hosts = ["hiero"]
families = ["clip"]

@@ -52,7 +52,7 @@ class PrecollectRetime(api.InstancePlugin):
handle_end
))

# loop withing subtrack items
# loop within subtrack items
time_warp_nodes = []
source_in_change = 0
source_out_change = 0

@@ -76,7 +76,7 @@ class PrecollectRetime(api.InstancePlugin):
(timeline_in - handle_start),
(timeline_out + handle_end) + 1)
]
# calculate differnce
# calculate difference
diff_in = (node["lookup"].getValueAt(
timeline_in)) - timeline_in
diff_out = (node["lookup"].getValueAt(

@@ -74,6 +74,9 @@ class CollectInstances(pyblish.api.ContextPlugin):
instance = context.create_instance(label)

# Include `families` using `family` data
instance.data["families"] = [instance.data["family"]]

instance[:] = [node]
instance.data.update(data)

@@ -37,5 +37,7 @@ class ExtractVDBCache(openpype.api.Extractor):
"ext": "vdb",
"files": output,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
}
instance.data["representations"].append(representation)

@@ -93,31 +93,20 @@ def override_toolbox_ui():
return

# Create our controls
background_color = (0.267, 0.267, 0.267)
controls = []
look_assigner = None
try:
look_assigner = host_tools.get_tool_by_name(
"lookassigner",
parent=pipeline._parent
)
except Exception:
log.warning("Couldn't create Look assigner window.", exc_info=True)

if look_assigner is not None:
controls.append(
mc.iconTextButton(
"pype_toolbox_lookmanager",
annotation="Look Manager",
label="Look Manager",
image=os.path.join(icons, "lookmanager.png"),
command=host_tools.show_look_assigner,
bgc=background_color,
width=icon_size,
height=icon_size,
parent=parent
)
controls.append(
mc.iconTextButton(
"pype_toolbox_lookmanager",
annotation="Look Manager",
label="Look Manager",
image=os.path.join(icons, "lookmanager.png"),
command=host_tools.show_look_assigner,
width=icon_size,
height=icon_size,
parent=parent
)
)

controls.append(
mc.iconTextButton(

@@ -128,7 +117,6 @@ def override_toolbox_ui():
command=lambda: host_tools.show_workfiles(
parent=pipeline._parent
),
bgc=background_color,
width=icon_size,
height=icon_size,
parent=parent

@@ -144,7 +132,6 @@ def override_toolbox_ui():
command=lambda: host_tools.show_loader(
parent=pipeline._parent, use_context=True
),
bgc=background_color,
width=icon_size,
height=icon_size,
parent=parent

@@ -160,7 +147,6 @@ def override_toolbox_ui():
command=lambda: host_tools.show_scene_inventory(
parent=pipeline._parent
),
bgc=background_color,
width=icon_size,
height=icon_size,
parent=parent

@@ -184,7 +184,7 @@ def uv_from_element(element):
parent = element.split(".", 1)[0]

# Maya is funny in that when the transform of the shape
# of the component elemen has children, the name returned
# of the component element has children, the name returned
# by that elementection is the shape. Otherwise, it is
# the transform. So lets see what type we're dealing with here.
if cmds.nodeType(parent) in supported:

@@ -280,7 +280,7 @@ def shape_from_element(element):
return node


def collect_animation_data():
def collect_animation_data(fps=False):
"""Get the basic animation data

Returns:

@@ -291,7 +291,6 @@ def collect_animation_data():
# get scene values as defaults
start = cmds.playbackOptions(query=True, animationStartTime=True)
end = cmds.playbackOptions(query=True, animationEndTime=True)
fps = mel.eval('currentTimeUnitToFPS()')

# build attributes
data = OrderedDict()

@@ -299,7 +298,9 @@ def collect_animation_data():
data["frameEnd"] = end
data["handles"] = 0
data["step"] = 1.0
data["fps"] = fps

if fps:
data["fps"] = mel.eval('currentTimeUnitToFPS()')

return data
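The effect of the new `fps` flag above is that the `fps` key becomes opt-in instead of always being queried. That shape can be mimicked outside Maya; in this sketch the scene values are hypothetical stand-ins for the `cmds`/`mel` queries, injected via a plain dict:

```python
from collections import OrderedDict

def collect_animation_data(fps=False, scene_query=None):
    """Maya-free stand-in mirroring the changed signature above.

    `scene_query` replaces the cmds.playbackOptions / mel.eval lookups
    and is purely hypothetical.
    """
    scene_query = scene_query or {"start": 1001.0, "end": 1100.0, "fps": 25.0}
    data = OrderedDict()
    data["frameStart"] = scene_query["start"]
    data["frameEnd"] = scene_query["end"]
    data["handles"] = 0
    data["step"] = 1.0
    if fps:  # fps is now opt-in instead of always present
        data["fps"] = scene_query["fps"]
    return data
```

This matches why `CreateReview` further down in this commit now calls `lib.collect_animation_data(fps=True)` explicitly.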

@@ -733,7 +734,7 @@ def namespaced(namespace, new=True):
str: The namespace that is used during the context

"""
original = cmds.namespaceInfo(cur=True)
original = cmds.namespaceInfo(cur=True, absoluteName=True)
if new:
namespace = avalon.maya.lib.unique_namespace(namespace)
cmds.namespace(add=namespace)

@@ -1630,7 +1631,7 @@ def get_container_transforms(container, members=None, root=False):
Args:
container (dict): the container
members (list): optional and convenience argument
root (bool): return highest node in hierachy if True
root (bool): return highest node in hierarchy if True

Returns:
root (list / str):

@@ -2517,7 +2518,7 @@ class shelf():
def _get_render_instances():
"""Return all 'render-like' instances.

This returns list of instance sets that needs to receive informations
This returns list of instance sets that needs to receive information
about render layer changes.

Returns:

@@ -2853,3 +2854,27 @@ def set_colorspace():
cmds.colorManagementPrefs(e=True, renderingSpaceName=renderSpace)
viewTransform = root_dict["viewTransform"]
cmds.colorManagementPrefs(e=True, viewTransformName=viewTransform)


@contextlib.contextmanager
def root_parent(nodes):
# type: (list) -> list
"""Context manager to un-parent provided nodes and return then back."""
import pymel.core as pm  # noqa

node_parents = []
for node in nodes:
n = pm.PyNode(node)
try:
root = pm.listRelatives(n, parent=1)[0]
except IndexError:
root = None
node_parents.append((n, root))
try:
for node in node_parents:
node[0].setParent(world=True)
yield
finally:
for node in node_parents:
if node[1]:
node[0].setParent(node[1])

@@ -506,8 +506,8 @@
"transforms",
"local"
],
"title": "# Copy Local Transfroms",
"tooltip": "Copy local transfroms"
"title": "# Copy Local Transforms",
"tooltip": "Copy local transforms"
},
{
"type": "action",

@@ -520,8 +520,8 @@
"transforms",
"matrix"
],
"title": "# Copy Matrix Transfroms",
"tooltip": "Copy Matrix transfroms"
"title": "# Copy Matrix Transforms",
"tooltip": "Copy Matrix transforms"
},
{
"type": "action",

@@ -842,7 +842,7 @@
"sourcetype": "file",
"tags": ["cleanup", "remove_user_defined_attributes"],
"title": "# Remove User Defined Attributes",
"tooltip": "Remove all user-defined attributs from all nodes"
"tooltip": "Remove all user-defined attributes from all nodes"
},
{
"type": "action",

@@ -794,8 +794,8 @@
"transforms",
"local"
],
"title": "Copy Local Transfroms",
"tooltip": "Copy local transfroms"
"title": "Copy Local Transforms",
"tooltip": "Copy local transforms"
},
{
"type": "action",

@@ -808,8 +808,8 @@
"transforms",
"matrix"
],
"title": "Copy Matrix Transfroms",
"tooltip": "Copy Matrix transfroms"
"title": "Copy Matrix Transforms",
"tooltip": "Copy Matrix transforms"
},
{
"type": "action",

@@ -1274,7 +1274,7 @@
"sourcetype": "file",
"tags": ["cleanup", "remove_user_defined_attributes"],
"title": "Remove User Defined Attributes",
"tooltip": "Remove all user-defined attributs from all nodes"
"tooltip": "Remove all user-defined attributes from all nodes"
},
{
"type": "action",

@@ -341,7 +341,7 @@ def update_package(set_container, representation):
def update_scene(set_container, containers, current_data, new_data, new_file):
"""Updates the hierarchy, assets and their matrix

Updates the following withing the scene:
Updates the following within the scene:
* Setdress hierarchy alembic
* Matrix
* Parenting

@@ -92,7 +92,7 @@ class ShaderDefinitionsEditor(QtWidgets.QWidget):
def _write_definition_file(self, content, force=False):
"""Write content as definition to file in database.

Before file is writen, check is made if its content has not
Before file is written, check is made if its content has not
changed. If is changed, warning is issued to user if he wants
it to overwrite. Note: GridFs doesn't allow changing file content.
You need to delete existing file and create new one.

@@ -53,8 +53,8 @@ class CreateRender(plugin.Creator):
renderer.
ass (bool): Submit as ``ass`` file for standalone Arnold renderer.
tileRendering (bool): Instance is set to tile rendering mode. We
won't submit actuall render, but we'll make publish job to wait
for Tile Assemly job done and then publish.
won't submit actual render, but we'll make publish job to wait
for Tile Assembly job done and then publish.

See Also:
https://pype.club/docs/artist_hosts_maya#creating-basic-render-setup

@@ -22,7 +22,7 @@ class CreateReview(plugin.Creator):
# get basic animation data : start / end / handles / steps
data = OrderedDict(**self.data)
animation_data = lib.collect_animation_data()
animation_data = lib.collect_animation_data(fps=True)
for key, value in animation_data.items():
data[key] = value

@@ -1,11 +1,58 @@
from openpype.hosts.maya.api import plugin
# -*- coding: utf-8 -*-
"""Creator for Unreal Static Meshes."""
from openpype.hosts.maya.api import plugin, lib
from avalon.api import Session
from openpype.api import get_project_settings
from maya import cmds  # noqa


class CreateUnrealStaticMesh(plugin.Creator):
"""Unreal Static Meshes with collisions."""
name = "staticMeshMain"
label = "Unreal - Static Mesh"
family = "unrealStaticMesh"
icon = "cube"
dynamic_subset_keys = ["asset"]

def __init__(self, *args, **kwargs):
"""Constructor."""
super(CreateUnrealStaticMesh, self).__init__(*args, **kwargs)
self._project_settings = get_project_settings(
Session["AVALON_PROJECT"])

@classmethod
def get_dynamic_data(
cls, variant, task_name, asset_id, project_name, host_name
):
dynamic_data = super(CreateUnrealStaticMesh, cls).get_dynamic_data(
variant, task_name, asset_id, project_name, host_name
)
dynamic_data["asset"] = Session.get("AVALON_ASSET")

return dynamic_data

def process(self):
with lib.undo_chunk():
instance = super(CreateUnrealStaticMesh, self).process()
content = cmds.sets(instance, query=True)

# empty set and process its former content
cmds.sets(content, rm=instance)
geometry_set = cmds.sets(name="geometry_SET", empty=True)
collisions_set = cmds.sets(name="collisions_SET", empty=True)

cmds.sets([geometry_set, collisions_set], forceElement=instance)

members = cmds.ls(content, long=True) or []
children = cmds.listRelatives(members, allDescendents=True,
fullPath=True) or []
children = cmds.ls(children, type="transform")
for node in children:
if cmds.listRelatives(node, type="shape"):
if [
n for n in self.collision_prefixes
if node.startswith(n)
]:
cmds.sets(node, forceElement=collisions_set)
else:
cmds.sets(node, forceElement=geometry_set)

@@ -4,6 +4,8 @@ import os
import json
import appdirs
import requests
import six
import sys

from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup

@@ -12,7 +14,15 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.api import get_system_settings
from openpype.api import (
get_system_settings,
get_project_settings
)

from openpype.modules import ModulesManager

from avalon.api import Session
from avalon.api import CreatorError


class CreateVRayScene(plugin.Creator):

@@ -22,11 +32,40 @@ class CreateVRayScene(plugin.Creator):
family = "vrayscene"
icon = "cubes"

_project_settings = None

def __init__(self, *args, **kwargs):
"""Entry."""
super(CreateVRayScene, self).__init__(*args, **kwargs)
self._rs = renderSetup.instance()
self.data["exportOnFarm"] = False
deadline_settings = get_system_settings()["modules"]["deadline"]
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
self._project_settings = get_project_settings(
Session["AVALON_PROJECT"])

try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
self._project_settings["deadline"]["deadline_servers"]
)
self.deadline_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}

if not self.deadline_servers:
self.deadline_servers = default_servers

except AttributeError:
# Handle situation were we had only one url for deadline.
manager = ModulesManager()
deadline_module = manager.modules_by_name["deadline"]
# get default deadline webservice url from deadline module
self.deadline_servers = deadline_module.deadline_urls

def process(self):
"""Entry point."""

@@ -37,10 +76,10 @@ class CreateVRayScene(plugin.Creator):
use_selection = self.options.get("useSelection")
with lib.undo_chunk():
self._create_vray_instance_settings()
instance = super(CreateVRayScene, self).process()
self.instance = super(CreateVRayScene, self).process()

index = 1
namespace_name = "_{}".format(str(instance))
namespace_name = "_{}".format(str(self.instance))
try:
cmds.namespace(rm=namespace_name)
except RuntimeError:

@@ -48,10 +87,19 @@ class CreateVRayScene(plugin.Creator):
pass

while(cmds.namespace(exists=namespace_name)):
namespace_name = "_{}{}".format(str(instance), index)
namespace_name = "_{}{}".format(str(self.instance), index)
index += 1

namespace = cmds.namespace(add=namespace_name)

# add Deadline server selection list
if self.deadline_servers:
cmds.scriptJob(
attributeChange=[
"{}.deadlineServers".format(self.instance),
self._deadline_webservice_changed
])

# create namespace with instance
layers = self._rs.getRenderLayers()
if use_selection:

@@ -62,7 +110,7 @@ class CreateVRayScene(plugin.Creator):
render_set = cmds.sets(
n="{}:{}".format(namespace, layer.name()))
sets.append(render_set)
cmds.sets(sets, forceElement=instance)
cmds.sets(sets, forceElement=self.instance)

# if no render layers are present, create default one with
# asterix selector

@@ -71,6 +119,52 @@ class CreateVRayScene(plugin.Creator):
collection = render_layer.createCollection("defaultCollection")
collection.getSelector().setPattern('*')

def _deadline_webservice_changed(self):
"""Refresh Deadline server dependent options."""
# get selected server
from maya import cmds
webservice = self.deadline_servers[
self.server_aliases[
cmds.getAttr("{}.deadlineServers".format(self.instance))
]
]
pools = self._get_deadline_pools(webservice)
cmds.deleteAttr("{}.primaryPool".format(self.instance))
cmds.deleteAttr("{}.secondaryPool".format(self.instance))
cmds.addAttr(self.instance, longName="primaryPool",
attributeType="enum",
enumName=":".join(pools))
cmds.addAttr(self.instance, longName="secondaryPool",
attributeType="enum",
enumName=":".join(["-"] + pools))

def _get_deadline_pools(self, webservice):
# type: (str) -> list
"""Get pools from Deadline.
Args:
webservice (str): Server url.
Returns:
list: Pools.
Throws:
RuntimeError: If deadline webservice is unreachable.

"""
argument = "{}/api/pools?NamesOnly=true".format(webservice)
try:
response = self._requests_get(argument)
except requests.exceptions.ConnectionError as exc:
msg = 'Cannot connect to deadline web service'
self.log.error(msg)
six.reraise(
CreatorError,
CreatorError('{} - {}'.format(msg, exc)),
sys.exc_info()[2])
if not response.ok:
self.log.warning("No pools retrieved")
return []

return response.json()
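The pool query added above reduces to one REST call against the Deadline Web Service. A sketch of the same logic with the HTTP getter injected so it can be exercised without a live server (the endpoint path is taken from the diff; the server URL and getter are hypothetical):

```python
def get_deadline_pools(webservice, requests_get):
    """Mirror of the `_get_deadline_pools` logic above, with the HTTP
    getter injected instead of using the plugin's `_requests_get` wrapper."""
    argument = "{}/api/pools?NamesOnly=true".format(webservice)
    response = requests_get(argument)
    if not response.ok:
        # the plugin logs "No pools retrieved" and falls back to no pools
        return []
    return response.json()
```

In production the getter would be `requests.get` (wrapped for timeouts and SSL options); a stub response object is enough to verify the ok/not-ok branches.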

def _create_vray_instance_settings(self):
# get pools
pools = []

@@ -79,31 +173,29 @@ class CreateVRayScene(plugin.Creator):
deadline_enabled = system_settings["deadline"]["enabled"]
muster_enabled = system_settings["muster"]["enabled"]
deadline_url = system_settings["deadline"]["DEADLINE_REST_URL"]
muster_url = system_settings["muster"]["MUSTER_REST_URL"]

if deadline_enabled and muster_enabled:
self.log.error(
"Both Deadline and Muster are enabled. " "Cannot support both."
)
raise RuntimeError("Both Deadline and Muster are enabled")
raise CreatorError("Both Deadline and Muster are enabled")

self.server_aliases = self.deadline_servers.keys()
self.data["deadlineServers"] = self.server_aliases

if deadline_enabled:
argument = "{}/api/pools?NamesOnly=true".format(deadline_url)
# if default server is not between selected, use first one for
# initial list of pools.
try:
response = self._requests_get(argument)
except requests.exceptions.ConnectionError as e:
msg = 'Cannot connect to deadline web service'
self.log.error(msg)
raise RuntimeError('{} - {}'.format(msg, e))
if not response.ok:
self.log.warning("No pools retrieved")
else:
pools = response.json()
self.data["primaryPool"] = pools
# We add a string "-" to allow the user to not
# set any secondary pools
self.data["secondaryPool"] = ["-"] + pools
deadline_url = self.deadline_servers["default"]
except KeyError:
deadline_url = [
self.deadline_servers[k]
for k in self.deadline_servers.keys()
][0]

pool_names = self._get_deadline_pools(deadline_url)

if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")

@@ -115,10 +207,10 @@ class CreateVRayScene(plugin.Creator):
if e.startswith("401"):
self.log.warning("access token expired")
self._show_login()
raise RuntimeError("Access token expired")
raise CreatorError("Access token expired")
except requests.exceptions.ConnectionError:
self.log.error("Cannot connect to Muster API endpoint.")
raise RuntimeError("Cannot connect to {}".format(muster_url))
raise CreatorError("Cannot connect to {}".format(muster_url))
pool_names = []
for pool in pools:
self.log.info(" - pool: {}".format(pool["name"]))

@@ -140,7 +232,7 @@ class CreateVRayScene(plugin.Creator):
``MUSTER_PASSWORD``, ``MUSTER_REST_URL`` is loaded from presets.

Raises:
RuntimeError: If loaded credentials are invalid.
CreatorError: If loaded credentials are invalid.
AttributeError: If ``MUSTER_REST_URL`` is not set.

"""

@@ -152,7 +244,7 @@ class CreateVRayScene(plugin.Creator):
self._token = muster_json.get("token", None)
if not self._token:
self._show_login()
raise RuntimeError("Invalid access token for Muster")
raise CreatorError("Invalid access token for Muster")
file.close()
self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
if not self.MUSTER_REST_URL:

@@ -162,7 +254,7 @@ class CreateVRayScene(plugin.Creator):
"""Get render pools from Muster.

Raises:
Exception: If pool list cannot be obtained from Muster.
CreatorError: If pool list cannot be obtained from Muster.

"""
params = {"authToken": self._token}

@@ -178,12 +270,12 @@ class CreateVRayScene(plugin.Creator):
("Cannot get pools from "
"Muster: {}").format(response.status_code)
)
raise Exception("Cannot get pools from Muster")
raise CreatorError("Cannot get pools from Muster")
try:
pools = response.json()["ResponseData"]["pools"]
except ValueError as e:
self.log.error("Invalid response from Muster server {}".format(e))
raise Exception("Invalid response from Muster server")
raise CreatorError("Invalid response from Muster server")

return pools

@@ -196,7 +288,7 @@ class CreateVRayScene(plugin.Creator):
login_response = self._requests_get(api_url, timeout=1)
if login_response.status_code != 200:
self.log.error("Cannot show login form to Muster")
raise Exception("Cannot show login form to Muster")
raise CreatorError("Cannot show login form to Muster")

def _requests_post(self, *args, **kwargs):
"""Wrap request post method.

@@ -1,5 +1,6 @@
from avalon import api
import openpype.hosts.maya.api.plugin
from openpype.hosts.maya.api.plugin import get_reference_node
import os
from openpype.api import get_project_settings
import clique

@@ -111,7 +112,7 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# Get reference node from container members
members = cmds.sets(node, query=True, nodesOnly=True)
reference_node = self._get_reference_node(members)
reference_node = get_reference_node(members)

assert os.path.exists(proxyPath), "%s does not exist." % proxyPath

@@ -3,16 +3,16 @@ from avalon.maya.pipeline import containerise
from avalon.maya import lib
from maya import cmds, mel


class AudioLoader(api.Loader):
"""Specific loader of audio."""

families = ["audio"]
label = "Import audio."
label = "Import audio"
representations = ["wav"]
icon = "volume-up"
color = "orange"

def load(self, context, name, namespace, data):

start_frame = cmds.playbackOptions(query=True, min=True)

@@ -2,6 +2,7 @@ import os
from avalon import api
from openpype.api import get_project_settings


class GpuCacheLoader(api.Loader):
"""Load model Alembic as gpuCache"""

@@ -77,7 +77,7 @@ class ImagePlaneLoader(api.Loader):
"""Specific loader of plate for image planes on selected camera."""

families = ["image", "plate", "render"]
label = "Load imagePlane."
label = "Load imagePlane"
representations = ["mov", "exr", "preview", "png"]
icon = "image"
color = "orange"

@@ -118,7 +118,7 @@ class ImagePlaneLoader(api.Loader):
camera = pm.createNode("camera")

if camera is None:
return
return

try:
camera.displayResolution.set(1)

@@ -63,6 +63,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
if current_namespace != ":":
group_name = current_namespace + ":" + group_name

group_name = "|" + group_name

self[:] = new_nodes

if attach_to_root:
Some files were not shown because too many files have changed in this diff.