Merge branch 'develop' into feature/OP-1539_Flame-Loading-published-clips-back

Jakub Jezek 2022-02-01 20:59:25 +01:00
commit b554c94e2e
No known key found for this signature in database
GPG key ID: D8548FBF690B100A
186 changed files with 4128 additions and 931 deletions


@@ -1,62 +1,93 @@
# Changelog
## [3.8.1](https://github.com/pypeclub/OpenPype/tree/3.8.1) (2022-02-01)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.8.0...3.8.1)
### 📖 Documentation
- Renamed to proper name [\#2546](https://github.com/pypeclub/OpenPype/pull/2546)
- Slack: Add review to notification message [\#2498](https://github.com/pypeclub/OpenPype/pull/2498)
**🆕 New features**
- Flame: OpenTimelineIO Export Modul [\#2398](https://github.com/pypeclub/OpenPype/pull/2398)
**🚀 Enhancements**
- Webpublisher: Thumbnail extractor [\#2600](https://github.com/pypeclub/OpenPype/pull/2600)
- Loader: Allow to toggle default family filters between "include" or "exclude" filtering [\#2541](https://github.com/pypeclub/OpenPype/pull/2541)
**🐛 Bug fixes**
- Release/3.8.0 [\#2619](https://github.com/pypeclub/OpenPype/pull/2619)
- hotfix: OIIO tool path - add extension on windows [\#2618](https://github.com/pypeclub/OpenPype/pull/2618)
- Settings: Enum does not store empty string if has single item to select [\#2615](https://github.com/pypeclub/OpenPype/pull/2615)
- switch distutils to sysconfig for `get\_platform\(\)` [\#2594](https://github.com/pypeclub/OpenPype/pull/2594)
- Fix poetry index and speedcopy update [\#2589](https://github.com/pypeclub/OpenPype/pull/2589)
- Webpublisher: Fix - subset names from processed .psd used wrong value for task [\#2586](https://github.com/pypeclub/OpenPype/pull/2586)
- `vrscene` creator Deadline webservice URL handling [\#2580](https://github.com/pypeclub/OpenPype/pull/2580)
- global: track name was failing if duplicated root word in name [\#2568](https://github.com/pypeclub/OpenPype/pull/2568)
- General: Do not validate version if build does not support it [\#2557](https://github.com/pypeclub/OpenPype/pull/2557)
- Validate Maya Rig produces no cycle errors [\#2484](https://github.com/pypeclub/OpenPype/pull/2484)
**Merged pull requests:**
- Bump pillow from 8.4.0 to 9.0.0 [\#2595](https://github.com/pypeclub/OpenPype/pull/2595)
- Webpublisher: Skip version collect [\#2591](https://github.com/pypeclub/OpenPype/pull/2591)
- build\(deps\): bump follow-redirects from 1.14.4 to 1.14.7 in /website [\#2534](https://github.com/pypeclub/OpenPype/pull/2534)
- build\(deps\): bump pillow from 8.4.0 to 9.0.0 [\#2523](https://github.com/pypeclub/OpenPype/pull/2523)
## [3.8.0](https://github.com/pypeclub/OpenPype/tree/3.8.0) (2022-01-24)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.8.0-nightly.7...3.8.0)
**🆕 New features**
- Flame: extracting segments with trans-coding [\#2547](https://github.com/pypeclub/OpenPype/pull/2547)
- Maya : V-Ray Proxy - load all ABC files via proxy [\#2544](https://github.com/pypeclub/OpenPype/pull/2544)
- Maya to Unreal: Extended static mesh workflow [\#2537](https://github.com/pypeclub/OpenPype/pull/2537)
- Flame: collecting publishable instances [\#2519](https://github.com/pypeclub/OpenPype/pull/2519)
- Flame: create publishable clips [\#2495](https://github.com/pypeclub/OpenPype/pull/2495)
**🚀 Enhancements**
- Webpublisher: Moved error at the beginning of the log [\#2559](https://github.com/pypeclub/OpenPype/pull/2559)
- Ftrack: Use ApplicationManager to get DJV path [\#2558](https://github.com/pypeclub/OpenPype/pull/2558)
- Webpublisher: Added endpoint to reprocess batch through UI [\#2555](https://github.com/pypeclub/OpenPype/pull/2555)
- Settings: PathInput strip passed string [\#2550](https://github.com/pypeclub/OpenPype/pull/2550)
- Global: Exctract Review anatomy fill data with output name [\#2548](https://github.com/pypeclub/OpenPype/pull/2548)
- Cosmetics: Clean up some cosmetics / typos [\#2542](https://github.com/pypeclub/OpenPype/pull/2542)
- Launcher: Added context menu to to skip opening last workfile [\#2536](https://github.com/pypeclub/OpenPype/pull/2536)
- General: Validate if current process OpenPype version is requested version [\#2529](https://github.com/pypeclub/OpenPype/pull/2529)
- General: Be able to use anatomy data in ffmpeg output arguments [\#2525](https://github.com/pypeclub/OpenPype/pull/2525)
- Expose toggle publish plug-in settings for Maya Look Shading Engine Naming [\#2521](https://github.com/pypeclub/OpenPype/pull/2521)
- Photoshop: Move implementation to OpenPype [\#2510](https://github.com/pypeclub/OpenPype/pull/2510)
- TimersManager: Move module one hierarchy higher [\#2501](https://github.com/pypeclub/OpenPype/pull/2501)
- Slack: notifications are sent with Openpype logo and bot name [\#2499](https://github.com/pypeclub/OpenPype/pull/2499)
- Ftrack: Event handlers settings [\#2496](https://github.com/pypeclub/OpenPype/pull/2496)
- Flame - create publishable clips [\#2495](https://github.com/pypeclub/OpenPype/pull/2495)
- Tools: Fix style and modality of errors in loader and creator [\#2489](https://github.com/pypeclub/OpenPype/pull/2489)
- Project Manager: Remove project button cleanup [\#2482](https://github.com/pypeclub/OpenPype/pull/2482)
- Tools: Be able to change models of tasks and assets widgets [\#2475](https://github.com/pypeclub/OpenPype/pull/2475)
- Publish pype: Reduce publish process defering [\#2464](https://github.com/pypeclub/OpenPype/pull/2464)
- Maya: Improve speed of Collect History logic [\#2460](https://github.com/pypeclub/OpenPype/pull/2460)
- Maya: Validate Rig Controllers - fix Error: in script editor [\#2459](https://github.com/pypeclub/OpenPype/pull/2459)
- Maya: Optimize Validate Locked Normals speed for dense polymeshes [\#2457](https://github.com/pypeclub/OpenPype/pull/2457)
- Fix \#2453 Refactor missing \_get\_reference\_node method [\#2455](https://github.com/pypeclub/OpenPype/pull/2455)
- Houdini: Remove broken unique name counter [\#2450](https://github.com/pypeclub/OpenPype/pull/2450)
- Maya: Improve lib.polyConstraint performance when Select tool is not the active tool context [\#2447](https://github.com/pypeclub/OpenPype/pull/2447)
- General: Validate third party before build [\#2425](https://github.com/pypeclub/OpenPype/pull/2425)
- Maya : add option to not group reference in ReferenceLoader [\#2383](https://github.com/pypeclub/OpenPype/pull/2383)
- Slack: Add review to notification message [\#2498](https://github.com/pypeclub/OpenPype/pull/2498)
- Maya: Collect 'fps' animation data only for "review" instances [\#2486](https://github.com/pypeclub/OpenPype/pull/2486)
**🐛 Bug fixes**
- AfterEffects: Fix - removed obsolete import [\#2577](https://github.com/pypeclub/OpenPype/pull/2577)
- General: OpenPype version updates [\#2575](https://github.com/pypeclub/OpenPype/pull/2575)
- Ftrack: Delete action revision [\#2563](https://github.com/pypeclub/OpenPype/pull/2563)
- Webpublisher: ftrack shows incorrect user names [\#2560](https://github.com/pypeclub/OpenPype/pull/2560)
- Webpublisher: Fixed progress reporting [\#2553](https://github.com/pypeclub/OpenPype/pull/2553)
- Fix Maya AssProxyLoader version switch [\#2551](https://github.com/pypeclub/OpenPype/pull/2551)
- General: Fix install thread in igniter [\#2549](https://github.com/pypeclub/OpenPype/pull/2549)
- Houdini: vdbcache family preserve frame numbers on publish integration + enable validate version for Houdini [\#2535](https://github.com/pypeclub/OpenPype/pull/2535)
- Maya: Fix Load VDB to V-Ray [\#2533](https://github.com/pypeclub/OpenPype/pull/2533)
- Maya: ReferenceLoader fix not unique group name error for attach to root [\#2532](https://github.com/pypeclub/OpenPype/pull/2532)
- Maya: namespaced context go back to original namespace when started from inside a namespace [\#2531](https://github.com/pypeclub/OpenPype/pull/2531)
- Fix create zip tool - path argument [\#2522](https://github.com/pypeclub/OpenPype/pull/2522)
- Maya: Fix Extract Look with space in names [\#2518](https://github.com/pypeclub/OpenPype/pull/2518)
- Fix published frame content for sequence starting with 0 [\#2513](https://github.com/pypeclub/OpenPype/pull/2513)
- Fix \#2497: reset empty string attributes correctly to "" instead of "None" [\#2506](https://github.com/pypeclub/OpenPype/pull/2506)
- General: Settings work if OpenPypeVersion is available [\#2494](https://github.com/pypeclub/OpenPype/pull/2494)
- General: PYTHONPATH may break OpenPype dependencies [\#2493](https://github.com/pypeclub/OpenPype/pull/2493)
- Workfiles tool: Files widget show files on first show [\#2488](https://github.com/pypeclub/OpenPype/pull/2488)
- General: Custom template paths filter fix [\#2483](https://github.com/pypeclub/OpenPype/pull/2483)
- Loader: Remove always on top flag in tray [\#2480](https://github.com/pypeclub/OpenPype/pull/2480)
- General: Anatomy does not return root envs as unicode [\#2465](https://github.com/pypeclub/OpenPype/pull/2465)
- Maya: Validate Shape Zero do not keep fixed geometry vertices selected/active after repair [\#2456](https://github.com/pypeclub/OpenPype/pull/2456)
- Maya: reset empty string attributes correctly to "" instead of "None" [\#2506](https://github.com/pypeclub/OpenPype/pull/2506)
- Improve FusionPreLaunch hook errors [\#2505](https://github.com/pypeclub/OpenPype/pull/2505)
- General: Modules import function output fix [\#2492](https://github.com/pypeclub/OpenPype/pull/2492)
### 📖 Documentation
- Variable in docs renamed to proper name [\#2546](https://github.com/pypeclub/OpenPype/pull/2546)
**Merged pull requests:**
- General: Fix install thread in igniter [\#2549](https://github.com/pypeclub/OpenPype/pull/2549)
- AfterEffects: Move implementation to OpenPype [\#2543](https://github.com/pypeclub/OpenPype/pull/2543)
- Fix create zip tool - path argument [\#2522](https://github.com/pypeclub/OpenPype/pull/2522)
- General: Modules import function output fix [\#2492](https://github.com/pypeclub/OpenPype/pull/2492)
- AE: fix hiding of alert window below Publish [\#2491](https://github.com/pypeclub/OpenPype/pull/2491)
- Maya: Validate NGONs re-use polyConstraint code from openpype.host.maya.api.lib [\#2458](https://github.com/pypeclub/OpenPype/pull/2458)
- Maya: Remove Maya Look Assigner check on startup [\#2540](https://github.com/pypeclub/OpenPype/pull/2540)
- build\(deps\): bump shelljs from 0.8.4 to 0.8.5 in /website [\#2538](https://github.com/pypeclub/OpenPype/pull/2538)
- Nuke: Merge avalon's implementation into OpenPype [\#2514](https://github.com/pypeclub/OpenPype/pull/2514)
## [3.7.0](https://github.com/pypeclub/OpenPype/tree/3.7.0) (2022-01-04)
@@ -65,45 +96,10 @@
**🚀 Enhancements**
- General: Workdir extra folders [\#2462](https://github.com/pypeclub/OpenPype/pull/2462)
- Photoshop: New style validations for New publisher [\#2429](https://github.com/pypeclub/OpenPype/pull/2429)
- General: Environment variables groups [\#2424](https://github.com/pypeclub/OpenPype/pull/2424)
- Unreal: Dynamic menu created in Python [\#2422](https://github.com/pypeclub/OpenPype/pull/2422)
- Settings UI: Hyperlinks to settings [\#2420](https://github.com/pypeclub/OpenPype/pull/2420)
- Modules: JobQueue module moved one hierarchy level higher [\#2419](https://github.com/pypeclub/OpenPype/pull/2419)
- TimersManager: Start timer post launch hook [\#2418](https://github.com/pypeclub/OpenPype/pull/2418)
- General: Run applications as separate processes under linux [\#2408](https://github.com/pypeclub/OpenPype/pull/2408)
- Ftrack: Check existence of object type on recreation [\#2404](https://github.com/pypeclub/OpenPype/pull/2404)
- Enhancement: Global cleanup plugin that explicitly remove paths from context [\#2402](https://github.com/pypeclub/OpenPype/pull/2402)
- General: MongoDB ability to specify replica set groups [\#2401](https://github.com/pypeclub/OpenPype/pull/2401)
- Flame: moving `utility\_scripts` to api folder also with `scripts` [\#2385](https://github.com/pypeclub/OpenPype/pull/2385)
- Centos 7 dependency compatibility [\#2384](https://github.com/pypeclub/OpenPype/pull/2384)
- Enhancement: Settings: Use project settings values from another project [\#2382](https://github.com/pypeclub/OpenPype/pull/2382)
- Blender 3: Support auto install for new blender version [\#2377](https://github.com/pypeclub/OpenPype/pull/2377)
- Maya add render image path to settings [\#2375](https://github.com/pypeclub/OpenPype/pull/2375)
**🐛 Bug fixes**
- TVPaint: Create render layer dialog is in front [\#2471](https://github.com/pypeclub/OpenPype/pull/2471)
- Short Pyblish plugin path [\#2428](https://github.com/pypeclub/OpenPype/pull/2428)
- PS: Introduced settings for invalid characters to use in ValidateNaming plugin [\#2417](https://github.com/pypeclub/OpenPype/pull/2417)
- Settings UI: Breadcrumbs path does not create new entities [\#2416](https://github.com/pypeclub/OpenPype/pull/2416)
- AfterEffects: Variant 2022 is in defaults but missing in schemas [\#2412](https://github.com/pypeclub/OpenPype/pull/2412)
- Nuke: baking representations was not additive [\#2406](https://github.com/pypeclub/OpenPype/pull/2406)
- General: Fix access to environments from default settings [\#2403](https://github.com/pypeclub/OpenPype/pull/2403)
- Fix: Placeholder Input color set fix [\#2399](https://github.com/pypeclub/OpenPype/pull/2399)
- Settings: Fix state change of wrapper label [\#2396](https://github.com/pypeclub/OpenPype/pull/2396)
- Flame: fix ftrack publisher [\#2381](https://github.com/pypeclub/OpenPype/pull/2381)
- hiero: solve custom ocio path [\#2379](https://github.com/pypeclub/OpenPype/pull/2379)
- hiero: fix workio and flatten [\#2378](https://github.com/pypeclub/OpenPype/pull/2378)
- Nuke: fixing menu re-drawing during context change [\#2374](https://github.com/pypeclub/OpenPype/pull/2374)
- Webpublisher: Fix assignment of families of TVpaint instances [\#2373](https://github.com/pypeclub/OpenPype/pull/2373)
**Merged pull requests:**
- Forced cx\_freeze to include sqlite3 into build [\#2432](https://github.com/pypeclub/OpenPype/pull/2432)
- Maya: Replaced PATH usage with vendored oiio path for maketx utility [\#2405](https://github.com/pypeclub/OpenPype/pull/2405)
- \[Fix\]\[MAYA\] Handle message type attribute within CollectLook [\#2394](https://github.com/pypeclub/OpenPype/pull/2394)
- Add validator to check correct version of extension for PS and AE [\#2387](https://github.com/pypeclub/OpenPype/pull/2387)
## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)


@@ -6,6 +6,8 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook):
     """Add last workfile path to launch arguments.

     This is not possible to do for all applications the same way.
+    Checks 'start_last_workfile', if set to False, it will not open last
+    workfile. This property is set explicitly in Launcher.
     """

     # Execute after workfile template copy


@@ -43,6 +43,7 @@ class GlobalHostDataHook(PreLaunchHook):
         "env": self.launch_context.env,
         "start_last_workfile": self.data.get("start_last_workfile"),
         "last_workfile_path": self.data.get("last_workfile_path"),
+        "log": self.log


@@ -40,7 +40,10 @@ class NonPythonHostHook(PreLaunchHook):
         )

         # Add workfile path if exists
         workfile_path = self.data["last_workfile_path"]
-        if os.path.exists(workfile_path):
+        if (
+            self.data.get("start_last_workfile")
+            and workfile_path
+            and os.path.exists(workfile_path)):
             new_launch_args.append(workfile_path)
         # Append as a whole list as these arguments should not be separated
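The guard added above can be exercised outside the hook. A minimal sketch, where the helper name `should_open_workfile` is hypothetical and only mirrors the hook's condition:

```python
import os

def should_open_workfile(data):
    """Mirror the hook's condition: only open the last workfile when the
    feature is enabled, a path was collected, and the file exists."""
    workfile_path = data.get("last_workfile_path")
    return bool(
        data.get("start_last_workfile")
        and workfile_path
        and os.path.exists(workfile_path)
    )
```

The extra checks matter because `os.path.exists(None)` raises `TypeError`, so the bare `os.path.exists(workfile_path)` call could crash the launch when no workfile was ever collected.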


@@ -3,7 +3,6 @@ import re
 import tempfile

 import attr
-from avalon import aftereffects
 import pyblish.api

 from openpype.settings import get_project_settings


@@ -5,11 +5,8 @@ def add_implementation_envs(env, _app):
     """Modify environments to contain all required for implementation."""
     # Prepare path to implementation script
     implementation_user_script_path = os.path.join(
-        os.environ["OPENPYPE_REPOS_ROOT"],
-        "repos",
-        "avalon-core",
-        "setup",
-        "blender"
+        os.path.dirname(os.path.abspath(__file__)),
+        "blender_addon"
     )

     # Add blender implementation script path to PYTHONPATH
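The change swaps an environment-variable lookup for a path derived from the module's own location. A sketch of the pattern (the function name is hypothetical):

```python
import os

def resolve_addon_dir(module_file, addon_name="blender_addon"):
    # Locate bundled resources next to the module itself, so the host
    # integration no longer depends on OPENPYPE_REPOS_ROOT being set.
    return os.path.join(
        os.path.dirname(os.path.abspath(module_file)),
        addon_name,
    )
```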


@@ -1,94 +1,64 @@
-import os
-import sys
-import traceback
-
-import bpy
-
-from avalon import api as avalon
-from pyblish import api as pyblish
-
-import openpype.hosts.blender
-from .lib import append_user_scripts
+"""Public API
+
+Anything that isn't defined here is INTERNAL and unreliable for external use.
+
+"""
+
+from .pipeline import (
+    install,
+    uninstall,
+    ls,
+    publish,
+    containerise,
+)
+
+from .plugin import (
+    Creator,
+    Loader,
+)
+
+from .workio import (
+    open_file,
+    save_file,
+    current_file,
+    has_unsaved_changes,
+    file_extensions,
+    work_root,
+)
+
+from .lib import (
+    lsattr,
+    lsattrs,
+    read,
+    maintained_selection,
+    get_selection,
+    # unique_name,
+)
-
-HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.blender.__file__))
-PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
-PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
-LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
-CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
-INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
-
-ORIGINAL_EXCEPTHOOK = sys.excepthook
-
-
-def pype_excepthook_handler(*args):
-    traceback.print_exception(*args)
-
-
-def install():
-    """Install Blender configuration for Avalon."""
-    sys.excepthook = pype_excepthook_handler
-    pyblish.register_plugin_path(str(PUBLISH_PATH))
-    avalon.register_plugin_path(avalon.Loader, str(LOAD_PATH))
-    avalon.register_plugin_path(avalon.Creator, str(CREATE_PATH))
-    append_user_scripts()
-    avalon.on("new", on_new)
-    avalon.on("open", on_open)
-
-
-def uninstall():
-    """Uninstall Blender configuration for Avalon."""
-    sys.excepthook = ORIGINAL_EXCEPTHOOK
-    pyblish.deregister_plugin_path(str(PUBLISH_PATH))
-    avalon.deregister_plugin_path(avalon.Loader, str(LOAD_PATH))
-    avalon.deregister_plugin_path(avalon.Creator, str(CREATE_PATH))
-
-
-def set_start_end_frames():
-    from avalon import io
-
-    asset_name = io.Session["AVALON_ASSET"]
-    asset_doc = io.find_one({
-        "type": "asset",
-        "name": asset_name
-    })
-
-    scene = bpy.context.scene
-
-    # Default scene settings
-    frameStart = scene.frame_start
-    frameEnd = scene.frame_end
-    fps = scene.render.fps
-    resolution_x = scene.render.resolution_x
-    resolution_y = scene.render.resolution_y
-
-    # Check if settings are set
-    data = asset_doc.get("data")
-    if not data:
-        return
-
-    if data.get("frameStart"):
-        frameStart = data.get("frameStart")
-    if data.get("frameEnd"):
-        frameEnd = data.get("frameEnd")
-    if data.get("fps"):
-        fps = data.get("fps")
-    if data.get("resolutionWidth"):
-        resolution_x = data.get("resolutionWidth")
-    if data.get("resolutionHeight"):
-        resolution_y = data.get("resolutionHeight")
-
-    scene.frame_start = frameStart
-    scene.frame_end = frameEnd
-    scene.render.fps = fps
-    scene.render.resolution_x = resolution_x
-    scene.render.resolution_y = resolution_y
-
-
-def on_new(arg1, arg2):
-    set_start_end_frames()
-
-
-def on_open(arg1, arg2):
-    set_start_end_frames()
+
+__all__ = [
+    "install",
+    "uninstall",
+    "ls",
+    "publish",
+    "containerise",
+
+    "Creator",
+    "Loader",
+
+    # Workfiles API
+    "open_file",
+    "save_file",
+    "current_file",
+    "has_unsaved_changes",
+    "file_extensions",
+    "work_root",
+
+    # Utility functions
+    "maintained_selection",
+    "lsattr",
+    "lsattrs",
+    "read",
+    "get_selection",
+    # "unique_name",
+]

Binary file not shown.



@@ -1,9 +1,16 @@
 import os
 import traceback
 import importlib
+import contextlib
+from typing import Dict, List, Union

 import bpy
 import addon_utils

+from openpype.api import Logger
+from . import pipeline
+
+log = Logger.get_logger(__name__)
+

 def load_scripts(paths):
@@ -125,3 +132,155 @@ def append_user_scripts():
    except Exception:
        print("Couldn't load user scripts \"{}\"".format(user_scripts))
        traceback.print_exc()


def imprint(node: bpy.types.bpy_struct_meta_idprop, data: Dict):
    r"""Write `data` to `node` as userDefined attributes

    Arguments:
        node: Long name of node
        data: Dictionary of key/value pairs

    Example:
        >>> import bpy
        >>> def compute():
        ...     return 6
        ...
        >>> bpy.ops.mesh.primitive_cube_add()
        >>> cube = bpy.context.view_layer.objects.active
        >>> imprint(cube, {
        ...     "regularString": "myFamily",
        ...     "computedValue": lambda: compute()
        ... })
        ...
        >>> cube['avalon']['computedValue']
        6
    """
    imprint_data = dict()

    for key, value in data.items():
        if value is None:
            continue

        if callable(value):
            # Support values evaluated at imprint
            value = value()

        if not isinstance(value, (int, float, bool, str, list)):
            raise TypeError(f"Unsupported type: {type(value)}")

        imprint_data[key] = value

    pipeline.metadata_update(node, imprint_data)
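The value handling in `imprint` can be isolated from Blender: callables are evaluated at imprint time, `None` entries are skipped, and unsupported types are rejected. A bpy-free sketch (the helper name is hypothetical):

```python
def prepare_imprint_data(data):
    # Same filtering imprint() performs before writing to the node.
    imprint_data = {}
    for key, value in data.items():
        if value is None:
            continue
        if callable(value):
            value = value()  # support values evaluated at imprint time
        if not isinstance(value, (int, float, bool, str, list)):
            raise TypeError(f"Unsupported type: {type(value)}")
        imprint_data[key] = value
    return imprint_data
```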
def lsattr(attr: str,
           value: Union[str, int, bool, List, Dict, None] = None) -> List:
    r"""Return nodes matching `attr` and `value`

    Arguments:
        attr: Name of Blender property
        value: Value of attribute. If none
            is provided, return all nodes with this attribute.

    Example:
        >>> lsattr("id", "myId")
        ... [bpy.data.objects["myNode"]]
        >>> lsattr("id")
        ... [bpy.data.objects["myNode"], bpy.data.objects["myOtherNode"]]

    Returns:
        list
    """
    return lsattrs({attr: value})


def lsattrs(attrs: Dict) -> List:
    r"""Return nodes with the given attribute(s).

    Arguments:
        attrs: Name and value pairs of expected matches

    Example:
        >>> lsattrs({"age": 5})  # Return nodes with an `age` of 5
        >>> # Return nodes with both `age` and `color` of 5 and blue
        >>> lsattrs({"age": 5, "color": "blue"})

    Returns:
        list
    """
    # For now return all objects, not filtered by scene/collection/view_layer.
    matches = set()
    for coll in dir(bpy.data):
        if not isinstance(
                getattr(bpy.data, coll),
                bpy.types.bpy_prop_collection,
        ):
            continue
        for node in getattr(bpy.data, coll):
            for attr, value in attrs.items():
                avalon_prop = node.get(pipeline.AVALON_PROPERTY)
                if not avalon_prop:
                    continue
                if (avalon_prop.get(attr)
                        and (value is None or avalon_prop.get(attr) == value)):
                    matches.add(node)
    return list(matches)


def read(node: bpy.types.bpy_struct_meta_idprop):
    """Return user-defined attributes from `node`"""
    data = dict(node.get(pipeline.AVALON_PROPERTY))

    # Ignore hidden/internal data
    data = {
        key: value
        for key, value in data.items() if not key.startswith("_")
    }

    return data
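`read` strips underscore-prefixed keys so internal bookkeeping never leaks to callers; the same filter works on any mapping (the helper name below is hypothetical):

```python
def strip_internal_keys(data):
    # Ignore hidden/internal data, as read() does for the avalon property.
    return {
        key: value
        for key, value in data.items()
        if not key.startswith("_")
    }
```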
def get_selection() -> List[bpy.types.Object]:
    """Return the selected objects from the current scene."""
    return [obj for obj in bpy.context.scene.objects if obj.select_get()]


@contextlib.contextmanager
def maintained_selection():
    r"""Maintain selection during context

    Example:
        >>> with maintained_selection():
        ...     # Modify selection
        ...     bpy.ops.object.select_all(action='DESELECT')
        >>> # Selection restored
    """
    previous_selection = get_selection()
    previous_active = bpy.context.view_layer.objects.active
    try:
        yield
    finally:
        # Clear the selection
        for node in get_selection():
            node.select_set(state=False)
        if previous_selection:
            for node in previous_selection:
                try:
                    node.select_set(state=True)
                except ReferenceError:
                    # This could happen if a selected node was deleted during
                    # the context.
                    log.exception("Failed to reselect")
                    continue
        try:
            bpy.context.view_layer.objects.active = previous_active
        except ReferenceError:
            # This could happen if the active node was deleted during the
            # context.
            log.exception("Failed to set active object.")
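The restore-in-`finally` shape of `maintained_selection` is reusable wherever mutable state must survive a block, even one that raises. A pure-Python analogue (no bpy required; the name `maintained` is hypothetical):

```python
import contextlib

@contextlib.contextmanager
def maintained(state):
    """Snapshot a mutable set before the block and restore it afterwards."""
    snapshot = set(state)
    try:
        yield state
    finally:
        # Clear, then restore the snapshot -- mirrors deselect + reselect.
        state.clear()
        state.update(snapshot)
```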


@@ -0,0 +1,410 @@
"""Blender operators and menus for use with Avalon."""
import os
import sys
import platform
import time
import traceback
import collections
from pathlib import Path
from types import ModuleType
from typing import Dict, List, Optional, Union
from Qt import QtWidgets, QtCore
import bpy
import bpy.utils.previews
import avalon.api
from openpype.tools.utils import host_tools
from openpype import style
from .workio import OpenFileCacher
PREVIEW_COLLECTIONS: Dict = dict()
# This seems like a good value to keep the Qt app responsive and doesn't slow
# down Blender. At least on macOS the interface of Blender gets very laggy if
# you make it smaller.
TIMER_INTERVAL: float = 0.01
class BlenderApplication(QtWidgets.QApplication):
_instance = None
blender_windows = {}
def __init__(self, *args, **kwargs):
super(BlenderApplication, self).__init__(*args, **kwargs)
self.setQuitOnLastWindowClosed(False)
self.setStyleSheet(style.load_stylesheet())
self.lastWindowClosed.connect(self.__class__.reset)
@classmethod
def get_app(cls):
if cls._instance is None:
cls._instance = cls(sys.argv)
return cls._instance
@classmethod
def reset(cls):
cls._instance = None
@classmethod
def store_window(cls, identifier, window):
current_window = cls.get_window(identifier)
cls.blender_windows[identifier] = window
if current_window:
current_window.close()
# current_window.deleteLater()
@classmethod
def get_window(cls, identifier):
return cls.blender_windows.get(identifier)
class MainThreadItem:
"""Structure to store information about callback in main thread.
Item should be used to execute callback in main thread which may be needed
for execution of Qt objects.
The item stores the callback, its arguments and keyword arguments, and
holds information about its processing state.
"""
not_set = object()
sleep_time = 0.1
def __init__(self, callback, *args, **kwargs):
self.done = False
self.exception = self.not_set
self.result = self.not_set
self.callback = callback
self.args = args
self.kwargs = kwargs
def execute(self):
"""Execute the callback and store its result.
Must be called from the main thread. The item is marked as `done` when
the callback finishes; its output, or exception info if it raised, is
stored on the item.
"""
print("Executing process in main thread")
if self.done:
print("- item is already processed")
return
callback = self.callback
args = self.args
kwargs = self.kwargs
print("Running callback: {}".format(str(callback)))
try:
result = callback(*args, **kwargs)
self.result = result
except Exception:
self.exception = sys.exc_info()
finally:
print("Done")
self.done = True
def wait(self):
"""Wait for result from main thread.
This method stops current thread until callback is executed.
Returns:
object: Output of callback. May be any type or object.
Raises:
Exception: Reraise any exception that happened during callback
execution.
"""
while not self.done:
time.sleep(self.sleep_time)
if self.exception is self.not_set:
return self.result
_cls, value, tb = self.exception
raise value.with_traceback(tb)
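The execute/wait pair implements a cross-thread future: a worker enqueues an item, the main thread runs `execute`, and the worker blocks in `wait` until the result (or a re-raised exception) lands. A condensed, Blender-free sketch of the same contract (class name hypothetical):

```python
import sys
import threading
import time

class FutureItem:
    """Minimal analogue of MainThreadItem: one thread executes,
    another waits for the stored result or re-raised exception."""
    _not_set = object()
    sleep_time = 0.01

    def __init__(self, callback, *args, **kwargs):
        self.done = False
        self.exception = self._not_set
        self.result = self._not_set
        self.callback = callback
        self.args = args
        self.kwargs = kwargs

    def execute(self):
        # Run the callback, capturing its output or exception info.
        try:
            self.result = self.callback(*self.args, **self.kwargs)
        except Exception:
            self.exception = sys.exc_info()
        finally:
            self.done = True

    def wait(self):
        # Poll until execute() has run, then return or re-raise.
        while not self.done:
            time.sleep(self.sleep_time)
        if self.exception is self._not_set:
            return self.result
        _cls, value, tb = self.exception
        raise value.with_traceback(tb)
```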
class GlobalClass:
app = None
main_thread_callbacks = collections.deque()
is_windows = platform.system().lower() == "windows"
def execute_in_main_thread(main_thread_item):
print("execute_in_main_thread")
GlobalClass.main_thread_callbacks.append(main_thread_item)
def _process_app_events() -> Optional[float]:
"""Process the events of the Qt app if the window is still visible.
If the app has any top level windows and at least one of them is visible
return the time after which this function should be run again. Else return
None, so the function is not run again and will be unregistered.
"""
while GlobalClass.main_thread_callbacks:
main_thread_item = GlobalClass.main_thread_callbacks.popleft()
main_thread_item.execute()
if main_thread_item.exception is not MainThreadItem.not_set:
_clc, val, tb = main_thread_item.exception
msg = str(val)
detail = "\n".join(traceback.format_exception(_clc, val, tb))
dialog = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Warning,
"Error",
msg)
dialog.setMinimumWidth(500)
dialog.setDetailedText(detail)
dialog.exec_()
if not GlobalClass.is_windows:
if OpenFileCacher.opening_file:
return TIMER_INTERVAL
app = GlobalClass.app
if app._instance:
app.processEvents()
return TIMER_INTERVAL
return TIMER_INTERVAL
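`_process_app_events` follows the `bpy.app.timers` contract: returning a float reschedules the function after that many seconds, while returning None unregisters it. A toy driver makes the contract testable without Blender (both names below are hypothetical):

```python
def drive_timer(pump, max_ticks=1000):
    """Call `pump` until it returns None (unregister) or the tick
    budget runs out; return how many times it ran."""
    ticks = 0
    while ticks < max_ticks:
        ticks += 1
        if pump() is None:
            break
    return ticks
```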
class LaunchQtApp(bpy.types.Operator):
"""A base class for operators to launch a Qt app."""
_app: QtWidgets.QApplication
_window: Union[QtWidgets.QDialog, ModuleType]
_tool_name: str = None
_init_args: Optional[List] = list()
_init_kwargs: Optional[Dict] = dict()
bl_idname: str = None
def __init__(self):
if self.bl_idname is None:
raise NotImplementedError("Attribute `bl_idname` must be set!")
print(f"Initialising {self.bl_idname}...")
self._app = BlenderApplication.get_app()
GlobalClass.app = self._app
bpy.app.timers.register(
_process_app_events,
persistent=True
)
def execute(self, context):
"""Execute the operator.
The child class must implement `execute()` where it only has to set
`self._window` to the desired Qt window and then simply run
`return super().execute(context)`.
`self._window` is expected to have a `show` method.
If the `show` method requires arguments, you can set `self._show_args`
and `self._show_kwargs`. `args` should be a list, `kwargs` a
dictionary.
"""
if self._tool_name is None:
if self._window is None:
raise AttributeError("`self._window` is not set.")
else:
window = self._app.get_window(self.bl_idname)
if window is None:
window = host_tools.get_tool_by_name(self._tool_name)
self._app.store_window(self.bl_idname, window)
self._window = window
if not isinstance(
self._window,
(QtWidgets.QMainWindow, QtWidgets.QDialog, ModuleType)
):
raise AttributeError(
"`window` should be a `QDialog` or module. Got: {}".format(
str(type(self._window))
)
)
self.before_window_show()
if isinstance(self._window, ModuleType):
self._window.show()
window = None
if hasattr(self._window, "window"):
window = self._window.window
elif hasattr(self._window, "_window"):
window = self._window._window
if window:
self._app.store_window(self.bl_idname, window)
else:
origin_flags = self._window.windowFlags()
on_top_flags = origin_flags | QtCore.Qt.WindowStaysOnTopHint
self._window.setWindowFlags(on_top_flags)
self._window.show()
if on_top_flags != origin_flags:
self._window.setWindowFlags(origin_flags)
self._window.show()
return {'FINISHED'}
def before_window_show(self):
return
class LaunchCreator(LaunchQtApp):
"""Launch Avalon Creator."""
bl_idname = "wm.avalon_creator"
bl_label = "Create..."
_tool_name = "creator"
def before_window_show(self):
self._window.refresh()
class LaunchLoader(LaunchQtApp):
"""Launch Avalon Loader."""
bl_idname = "wm.avalon_loader"
bl_label = "Load..."
_tool_name = "loader"
def before_window_show(self):
self._window.set_context(
{"asset": avalon.api.Session["AVALON_ASSET"]},
refresh=True
)
class LaunchPublisher(LaunchQtApp):
"""Launch Avalon Publisher."""
bl_idname = "wm.avalon_publisher"
bl_label = "Publish..."
def execute(self, context):
host_tools.show_publish()
return {"FINISHED"}
class LaunchManager(LaunchQtApp):
"""Launch Avalon Manager."""
bl_idname = "wm.avalon_manager"
bl_label = "Manage..."
_tool_name = "sceneinventory"
def before_window_show(self):
self._window.refresh()
class LaunchWorkFiles(LaunchQtApp):
"""Launch Avalon Work Files."""
bl_idname = "wm.avalon_workfiles"
bl_label = "Work Files..."
_tool_name = "workfiles"
def execute(self, context):
result = super().execute(context)
self._window.set_context({
"asset": avalon.api.Session["AVALON_ASSET"],
"silo": avalon.api.Session["AVALON_SILO"],
"task": avalon.api.Session["AVALON_TASK"]
})
return result
def before_window_show(self):
self._window.root = str(Path(
os.environ.get("AVALON_WORKDIR", ""),
os.environ.get("AVALON_SCENEDIR", ""),
))
self._window.refresh()
class TOPBAR_MT_avalon(bpy.types.Menu):
"""Avalon menu."""
bl_idname = "TOPBAR_MT_avalon"
bl_label = os.environ.get("AVALON_LABEL")
def draw(self, context):
"""Draw the menu in the UI."""
layout = self.layout
pcoll = PREVIEW_COLLECTIONS.get("avalon")
if pcoll:
pyblish_menu_icon = pcoll["pyblish_menu_icon"]
pyblish_menu_icon_id = pyblish_menu_icon.icon_id
else:
pyblish_menu_icon_id = 0
asset = avalon.api.Session['AVALON_ASSET']
task = avalon.api.Session['AVALON_TASK']
context_label = f"{asset}, {task}"
context_label_item = layout.row()
context_label_item.operator(
LaunchWorkFiles.bl_idname, text=context_label
)
context_label_item.enabled = False
layout.separator()
layout.operator(LaunchCreator.bl_idname, text="Create...")
layout.operator(LaunchLoader.bl_idname, text="Load...")
layout.operator(
LaunchPublisher.bl_idname,
text="Publish...",
icon_value=pyblish_menu_icon_id,
)
layout.operator(LaunchManager.bl_idname, text="Manage...")
layout.separator()
layout.operator(LaunchWorkFiles.bl_idname, text="Work Files...")
# TODO (jasper): maybe add 'Reload Pipeline', 'Reset Frame Range' and
# 'Reset Resolution'?
def draw_avalon_menu(self, context):
"""Draw the Avalon menu in the top bar."""
self.layout.menu(TOPBAR_MT_avalon.bl_idname)
classes = [
LaunchCreator,
LaunchLoader,
LaunchPublisher,
LaunchManager,
LaunchWorkFiles,
TOPBAR_MT_avalon,
]
def register():
"Register the operators and menu."
pcoll = bpy.utils.previews.new()
pyblish_icon_file = Path(__file__).parent / "icons" / "pyblish-32x32.png"
pcoll.load("pyblish_menu_icon", str(pyblish_icon_file.absolute()), 'IMAGE')
PREVIEW_COLLECTIONS["avalon"] = pcoll
for cls in classes:
bpy.utils.register_class(cls)
bpy.types.TOPBAR_MT_editor_menus.append(draw_avalon_menu)
def unregister():
"""Unregister the operators and menu."""
pcoll = PREVIEW_COLLECTIONS.pop("avalon")
bpy.utils.previews.remove(pcoll)
bpy.types.TOPBAR_MT_editor_menus.remove(draw_avalon_menu)
for cls in reversed(classes):
bpy.utils.unregister_class(cls)


@ -0,0 +1,427 @@
import os
import sys
import importlib
import traceback
from typing import Callable, Dict, Iterator, List, Optional
import bpy
from . import lib
from . import ops
import pyblish.api
import avalon.api
from avalon import io, schema
from avalon.pipeline import AVALON_CONTAINER_ID
from openpype.api import Logger
import openpype.hosts.blender
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.blender.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
ORIGINAL_EXCEPTHOOK = sys.excepthook
AVALON_INSTANCES = "AVALON_INSTANCES"
AVALON_CONTAINERS = "AVALON_CONTAINERS"
AVALON_PROPERTY = 'avalon'
IS_HEADLESS = bpy.app.background
log = Logger.get_logger(__name__)
def pype_excepthook_handler(*args):
traceback.print_exception(*args)
def install():
"""Install Blender configuration for Avalon."""
sys.excepthook = pype_excepthook_handler
pyblish.api.register_host("blender")
pyblish.api.register_plugin_path(str(PUBLISH_PATH))
avalon.api.register_plugin_path(avalon.api.Loader, str(LOAD_PATH))
avalon.api.register_plugin_path(avalon.api.Creator, str(CREATE_PATH))
lib.append_user_scripts()
avalon.api.on("new", on_new)
avalon.api.on("open", on_open)
_register_callbacks()
_register_events()
if not IS_HEADLESS:
ops.register()
def uninstall():
"""Uninstall Blender configuration for Avalon."""
sys.excepthook = ORIGINAL_EXCEPTHOOK
pyblish.api.deregister_host("blender")
pyblish.api.deregister_plugin_path(str(PUBLISH_PATH))
avalon.api.deregister_plugin_path(avalon.api.Loader, str(LOAD_PATH))
avalon.api.deregister_plugin_path(avalon.api.Creator, str(CREATE_PATH))
if not IS_HEADLESS:
ops.unregister()
def set_start_end_frames():
asset_name = io.Session["AVALON_ASSET"]
asset_doc = io.find_one({
"type": "asset",
"name": asset_name
})
if not asset_doc:
# No asset document was found; keep the current scene settings.
return
scene = bpy.context.scene
# Default scene settings
frameStart = scene.frame_start
frameEnd = scene.frame_end
fps = scene.render.fps
resolution_x = scene.render.resolution_x
resolution_y = scene.render.resolution_y
# Check if settings are set
data = asset_doc.get("data")
if not data:
return
if data.get("frameStart"):
frameStart = data.get("frameStart")
if data.get("frameEnd"):
frameEnd = data.get("frameEnd")
if data.get("fps"):
fps = data.get("fps")
if data.get("resolutionWidth"):
resolution_x = data.get("resolutionWidth")
if data.get("resolutionHeight"):
resolution_y = data.get("resolutionHeight")
scene.frame_start = frameStart
scene.frame_end = frameEnd
scene.render.fps = fps
scene.render.resolution_x = resolution_x
scene.render.resolution_y = resolution_y
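The override pattern above (take the asset-document value when present, otherwise keep the scene default) can be condensed into a small helper for illustration; `resolve_settings` and its flat key names are hypothetical, not part of the pipeline:

```python
def resolve_settings(defaults: dict, data: dict) -> dict:
    # Asset-document values win over scene defaults, but only when they
    # are present and truthy -- the same per-field check that
    # set_start_end_frames performs with data.get(...).
    return {key: data.get(key) or value for key, value in defaults.items()}
```

Note that the real function additionally maps asset keys to differently named scene attributes (for example `resolutionWidth` to `render.resolution_x`), which this flat sketch glosses over.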
def on_new(arg1, arg2):
set_start_end_frames()
def on_open(arg1, arg2):
set_start_end_frames()
@bpy.app.handlers.persistent
def _on_save_pre(*args):
avalon.api.emit("before_save", args)
@bpy.app.handlers.persistent
def _on_save_post(*args):
avalon.api.emit("save", args)
@bpy.app.handlers.persistent
def _on_load_post(*args):
# Detect new file or opening an existing file
if bpy.data.filepath:
# Likely this was an open operation since it has a filepath
avalon.api.emit("open", args)
else:
avalon.api.emit("new", args)
ops.OpenFileCacher.post_load()
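The new-versus-open detection above relies only on whether `bpy.data.filepath` is empty; as a stand-alone sketch (plain Python, no `bpy`; the helper name is illustrative):

```python
def classify_load(filepath: str) -> str:
    # Mirrors _on_load_post's heuristic: a non-empty filepath means an
    # existing file was opened, an empty one means a fresh/new file.
    return "open" if filepath else "new"
```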
def _register_callbacks():
"""Register callbacks for certain events."""
def _remove_handler(handlers: List, callback: Callable):
"""Remove the callback from the given handler list."""
try:
handlers.remove(callback)
except ValueError:
pass
# TODO (jasper): implement on_init callback?
# Be sure to remove existing ones first.
_remove_handler(bpy.app.handlers.save_pre, _on_save_pre)
_remove_handler(bpy.app.handlers.save_post, _on_save_post)
_remove_handler(bpy.app.handlers.load_post, _on_load_post)
bpy.app.handlers.save_pre.append(_on_save_pre)
bpy.app.handlers.save_post.append(_on_save_post)
bpy.app.handlers.load_post.append(_on_load_post)
log.info("Installed event handler _on_save_pre...")
log.info("Installed event handler _on_save_post...")
log.info("Installed event handler _on_load_post...")
def _on_task_changed(*args):
"""Callback for when the task in the context is changed."""
# TODO (jasper): Blender has no concept of projects or workspace.
# It would be nice to override 'bpy.ops.wm.open_mainfile' so it takes the
# workdir as starting directory. But I don't know if that is possible.
# Another option would be to create a custom 'File Selector' and add the
# `directory` attribute, so it opens in that directory (does it?).
# https://docs.blender.org/api/blender2.8/bpy.types.Operator.html#calling-a-file-selector
# https://docs.blender.org/api/blender2.8/bpy.types.WindowManager.html#bpy.types.WindowManager.fileselect_add
workdir = avalon.api.Session["AVALON_WORKDIR"]
log.debug("New working directory: %s", workdir)
def _register_events():
"""Install callbacks for specific events."""
avalon.api.on("taskChanged", _on_task_changed)
log.info("Installed event callback for 'taskChanged'...")
def reload_pipeline(*args):
"""Attempt to reload pipeline at run-time.
Warning:
This is primarily for development and debugging purposes and not well
tested.
"""
avalon.api.uninstall()
for module in (
"avalon.io",
"avalon.lib",
"avalon.pipeline",
"avalon.tools.creator.app",
"avalon.tools.manager.app",
"avalon.api",
"avalon.tools",
):
module = importlib.import_module(module)
importlib.reload(module)
def _discover_gui() -> Optional[Callable]:
"""Return the most desirable of the currently registered GUIs"""
# Prefer last registered
guis = reversed(pyblish.api.registered_guis())
for gui in guis:
try:
gui = __import__(gui).show
except (ImportError, AttributeError):
continue
else:
return gui
return None
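A sketch of the preference order in `_discover_gui` (last registered wins), using a plain registry mapping in place of real module imports; the names here are assumptions for illustration only:

```python
from typing import Callable, Dict, List, Optional

def discover_gui(registered: List[str],
                 modules: Dict[str, Callable]) -> Optional[Callable]:
    # Prefer the most recently registered GUI, mirroring the reversed()
    # iteration above; skip entries whose "show" callable is missing.
    for name in reversed(registered):
        show = modules.get(name)
        if callable(show):
            return show
    return None
```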
def add_to_avalon_container(container: bpy.types.Collection):
"""Add the container to the Avalon container."""
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
if not avalon_container:
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
# Link the container to the scene so it's easily visible to the artist
# and can be managed easily. Otherwise it's only found in "Blender
# File" view and it will be removed by Blenders garbage collection,
# unless you set a 'fake user'.
bpy.context.scene.collection.children.link(avalon_container)
avalon_container.children.link(container)
# Disable Avalon containers for the view layers.
for view_layer in bpy.context.scene.view_layers:
for child in view_layer.layer_collection.children:
if child.collection == avalon_container:
child.exclude = True
def metadata_update(node: bpy.types.bpy_struct_meta_idprop, data: Dict):
"""Imprint the node with metadata.
Existing metadata will be updated.
"""
if not node.get(AVALON_PROPERTY):
node[AVALON_PROPERTY] = dict()
for key, value in data.items():
if value is None:
continue
node[AVALON_PROPERTY][key] = value
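Because Blender ID properties behave like nested mappings here, the imprint semantics of `metadata_update` can be exercised with a plain dict standing in for a real datablock (a sketch, not production code):

```python
AVALON_PROPERTY = "avalon"

def metadata_update(node: dict, data: dict) -> None:
    """Merge `data` into the node's 'avalon' metadata, skipping None values."""
    if not node.get(AVALON_PROPERTY):
        node[AVALON_PROPERTY] = dict()
    for key, value in data.items():
        if value is None:
            continue
        node[AVALON_PROPERTY][key] = value

node = {}
metadata_update(node, {"name": "modelMain", "loader": None})
metadata_update(node, {"namespace": "hero_01"})
```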
def containerise(name: str,
namespace: str,
nodes: List,
context: Dict,
loader: Optional[str] = None,
suffix: Optional[str] = "CON") -> bpy.types.Collection:
"""Bundle `nodes` into an assembly and imprint it with metadata
Containerisation enables a tracking of version, author and origin
for loaded assets.
Arguments:
name: Name of resulting assembly
namespace: Namespace under which to host container
nodes: Long names of nodes to containerise
context: Asset information
loader: Name of loader used to produce this container.
suffix: Suffix of container, defaults to "CON".
Returns:
The container assembly
"""
node_name = f"{context['asset']['name']}_{name}"
if namespace:
node_name = f"{namespace}:{node_name}"
if suffix:
node_name = f"{node_name}_{suffix}"
container = bpy.data.collections.new(name=node_name)
# Link the children nodes
for obj in nodes:
container.objects.link(obj)
data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace or '',
"loader": str(loader),
"representation": str(context["representation"]["_id"]),
}
metadata_update(container, data)
add_to_avalon_container(container)
return container
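The container name assembled above follows the pattern `namespace:asset_name_suffix`; isolated as a pure function for clarity (`make_node_name` is a hypothetical helper, not part of the API):

```python
from typing import Optional

def make_node_name(asset: str, name: str,
                   namespace: Optional[str] = None,
                   suffix: Optional[str] = "CON") -> str:
    # Same composition order as containerise(): asset_name first, then
    # an optional namespace prefix, then an optional suffix.
    node_name = f"{asset}_{name}"
    if namespace:
        node_name = f"{namespace}:{node_name}"
    if suffix:
        node_name = f"{node_name}_{suffix}"
    return node_name
```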
def containerise_existing(
container: bpy.types.Collection,
name: str,
namespace: str,
context: Dict,
loader: Optional[str] = None,
suffix: Optional[str] = "CON") -> bpy.types.Collection:
"""Imprint or update container with metadata.
Arguments:
name: Name of resulting assembly
namespace: Namespace under which to host container
context: Asset information
loader: Name of loader used to produce this container.
suffix: Suffix of container, defaults to "CON".
Returns:
The container assembly
"""
node_name = container.name
if suffix:
node_name = f"{node_name}_{suffix}"
container.name = node_name
data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace or '',
"loader": str(loader),
"representation": str(context["representation"]["_id"]),
}
metadata_update(container, data)
add_to_avalon_container(container)
return container
def parse_container(container: bpy.types.Collection,
validate: bool = True) -> Dict:
"""Return the container node's full container data.
Args:
container: A container node name.
validate: turn the validation for the container on or off
Returns:
The container schema data for this container node.
"""
data = lib.read(container)
# Append transient data
data["objectName"] = container.name
if validate:
schema.validate(data)
return data
def ls() -> Iterator:
"""List containers from active Blender scene.
This is the host-equivalent of api.ls(), but instead of listing assets on
disk, it lists assets already loaded in Blender; once loaded they are
called containers.
"""
for container in lib.lsattr("id", AVALON_CONTAINER_ID):
yield parse_container(container)
def update_hierarchy(containers):
"""Hierarchical container support
This is the function to support Scene Inventory to draw hierarchical
view for containers.
We need both parent and children to visualize the graph.
"""
all_containers = set(ls()) # lookup set
for container in containers:
# Find parent
# FIXME (jasperge): re-evaluate this. How would it be possible
# to 'nest' assets? Collections can have several parents, for
# now assume it has only 1 parent
parent = [
coll for coll in bpy.data.collections if container in coll.children
]
for node in parent:
if node in all_containers:
container["parent"] = node
break
log.debug("Container: %s", container)
yield container
def publish():
"""Shorthand to publish from within host."""
return pyblish.util.publish()


@ -5,10 +5,17 @@ from typing import Dict, List, Optional
import bpy
from avalon import api, blender
from avalon.blender import ops
from avalon.blender.pipeline import AVALON_CONTAINERS
import avalon.api
from openpype.api import PypeCreatorMixin
from .pipeline import AVALON_CONTAINERS
from .ops import (
MainThreadItem,
execute_in_main_thread
)
from .lib import (
imprint,
get_selection
)
VALID_EXTENSIONS = [".blend", ".json", ".abc", ".fbx"]
@ -42,10 +49,13 @@ def get_unique_number(
return f"{count:0>2}"
def prepare_data(data, container_name):
def prepare_data(data, container_name=None):
name = data.name
local_data = data.make_local()
local_data.name = f"{container_name}:{name}"
if container_name:
local_data.name = f"{container_name}:{name}"
else:
local_data.name = f"{name}"
return local_data
@ -119,11 +129,27 @@ def deselect_all():
bpy.context.view_layer.objects.active = active
class Creator(PypeCreatorMixin, blender.Creator):
pass
class Creator(PypeCreatorMixin, avalon.api.Creator):
"""Base class for Creator plug-ins."""
def process(self):
collection = bpy.data.collections.new(name=self.data["subset"])
bpy.context.scene.collection.children.link(collection)
imprint(collection, self.data)
if (self.options or {}).get("useSelection"):
for obj in get_selection():
collection.objects.link(obj)
return collection
class AssetLoader(api.Loader):
class Loader(avalon.api.Loader):
"""Base class for Loader plug-ins."""
hosts = ["blender"]
class AssetLoader(avalon.api.Loader):
"""A basic AssetLoader for Blender
This will implement the basic logic for linking/appending assets
@ -191,8 +217,8 @@ class AssetLoader(api.Loader):
namespace: Optional[str] = None,
options: Optional[Dict] = None) -> Optional[bpy.types.Collection]:
""" Run the loader on Blender main thread"""
mti = ops.MainThreadItem(self._load, context, name, namespace, options)
ops.execute_in_main_thread(mti)
mti = MainThreadItem(self._load, context, name, namespace, options)
execute_in_main_thread(mti)
def _load(self,
context: dict,
@ -257,8 +283,8 @@ class AssetLoader(api.Loader):
def update(self, container: Dict, representation: Dict):
""" Run the update on Blender main thread"""
mti = ops.MainThreadItem(self.exec_update, container, representation)
ops.execute_in_main_thread(mti)
mti = MainThreadItem(self.exec_update, container, representation)
execute_in_main_thread(mti)
def exec_remove(self, container: Dict) -> bool:
"""Must be implemented by a sub-class"""
@ -266,5 +292,5 @@ class AssetLoader(api.Loader):
def remove(self, container: Dict) -> bool:
""" Run the remove on Blender main thread"""
mti = ops.MainThreadItem(self.exec_remove, container)
ops.execute_in_main_thread(mti)
mti = MainThreadItem(self.exec_remove, container)
execute_in_main_thread(mti)


@ -0,0 +1,90 @@
"""Host API required for Work Files."""
from pathlib import Path
from typing import List, Optional
import bpy
from avalon import api
class OpenFileCacher:
"""Store information about opening file.
When file is opening QApplcation events should not be processed.
"""
opening_file = False
@classmethod
def post_load(cls):
cls.opening_file = False
@classmethod
def set_opening(cls):
cls.opening_file = True
def open_file(filepath: str) -> Optional[str]:
"""Open the scene file in Blender."""
OpenFileCacher.set_opening()
preferences = bpy.context.preferences
load_ui = preferences.filepaths.use_load_ui
use_scripts = preferences.filepaths.use_scripts_auto_execute
result = bpy.ops.wm.open_mainfile(
filepath=filepath,
load_ui=load_ui,
use_scripts=use_scripts,
)
if result == {'FINISHED'}:
return filepath
return None
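Blender operators report their outcome as a set such as `{'FINISHED'}` or `{'CANCELLED'}`; the return convention shared by `open_file` and `save_file` can be sketched without `bpy` (the helper name is illustrative):

```python
from typing import Optional

def path_on_success(result: set, filepath: str) -> Optional[str]:
    # Mirrors open_file()/save_file(): hand back the path only when the
    # operator reports {'FINISHED'}, otherwise signal failure with None.
    return filepath if result == {'FINISHED'} else None
```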
def save_file(filepath: str, copy: bool = False) -> Optional[str]:
"""Save the open scene file."""
preferences = bpy.context.preferences
compress = preferences.filepaths.use_file_compression
relative_remap = preferences.filepaths.use_relative_paths
result = bpy.ops.wm.save_as_mainfile(
filepath=filepath,
compress=compress,
relative_remap=relative_remap,
copy=copy,
)
if result == {'FINISHED'}:
return filepath
return None
def current_file() -> Optional[str]:
"""Return the path of the open scene file."""
current_filepath = bpy.data.filepath
if Path(current_filepath).is_file():
return current_filepath
return None
def has_unsaved_changes() -> bool:
"""Does the open scene file have unsaved changes?"""
return bpy.data.is_dirty
def file_extensions() -> List[str]:
"""Return the supported file extensions for Blender scene files."""
return api.HOST_WORKFILE_EXTENSIONS["blender"]
def work_root(session: dict) -> str:
"""Return the default root to browse for work files."""
work_dir = session["AVALON_WORKDIR"]
scene_dir = session.get("AVALON_SCENEDIR")
if scene_dir:
return str(Path(work_dir, scene_dir))
return work_dir
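`work_root` is pure path logic, so it can be tried directly with a fabricated session mapping (the paths are illustrative):

```python
from pathlib import Path

def work_root(session: dict) -> str:
    """Join the work directory with the optional scene subdirectory."""
    work_dir = session["AVALON_WORKDIR"]
    scene_dir = session.get("AVALON_SCENEDIR")
    if scene_dir:
        return str(Path(work_dir, scene_dir))
    return work_dir

root = work_root({"AVALON_WORKDIR": "/proj/shots/sh010/work",
                  "AVALON_SCENEDIR": "scenes"})
```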


@ -0,0 +1,4 @@
from avalon import pipeline
from openpype.hosts.blender import api
pipeline.install(api)


@ -4,7 +4,7 @@ import bpy
from avalon import api
import openpype.hosts.blender.api.plugin
from avalon.blender import lib
from openpype.hosts.blender.api import lib
class CreateAction(openpype.hosts.blender.api.plugin.Creator):


@ -3,9 +3,8 @@
import bpy
from avalon import api
from avalon.blender import lib, ops
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api import plugin, lib, ops
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
class CreateAnimation(plugin.Creator):


@ -3,9 +3,8 @@
import bpy
from avalon import api
from avalon.blender import lib, ops
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api import plugin, lib, ops
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
class CreateCamera(plugin.Creator):


@ -3,9 +3,8 @@
import bpy
from avalon import api
from avalon.blender import lib, ops
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api import plugin, lib, ops
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
class CreateLayout(plugin.Creator):


@ -3,9 +3,8 @@
import bpy
from avalon import api
from avalon.blender import lib, ops
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api import plugin, lib, ops
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
class CreateModel(plugin.Creator):


@ -3,8 +3,8 @@
import bpy
from avalon import api
from avalon.blender import lib
import openpype.hosts.blender.api.plugin
from openpype.hosts.blender.api import lib
class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):


@ -3,9 +3,8 @@
import bpy
from avalon import api
from avalon.blender import lib, ops
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api import plugin, lib, ops
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
class CreateRig(plugin.Creator):


@ -7,11 +7,12 @@ from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender import lib
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
from openpype.hosts.blender.api import plugin, lib
class CacheModelLoader(plugin.AssetLoader):


@ -1,16 +1,11 @@
"""Load an animation in Blender."""
import logging
from typing import Dict, List, Optional
import bpy
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
logger = logging.getLogger("openpype").getChild(
"blender").getChild("load_animation")
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class BlendAnimationLoader(plugin.AssetLoader):


@ -7,10 +7,12 @@ from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
class AudioLoader(plugin.AssetLoader):


@ -8,10 +8,12 @@ from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
logger = logging.getLogger("openpype").getChild(
"blender").getChild("load_camera")


@ -7,11 +7,12 @@ from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender import lib
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api import plugin, lib
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
class FbxCameraLoader(plugin.AssetLoader):


@ -7,11 +7,12 @@ from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender import lib
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api import plugin, lib
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
class FbxModelLoader(plugin.AssetLoader):


@ -7,10 +7,13 @@ from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype import lib
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
class BlendLayoutLoader(plugin.AssetLoader):
@ -59,7 +62,9 @@ class BlendLayoutLoader(plugin.AssetLoader):
library = bpy.data.libraries.get(bpy.path.basename(libpath))
bpy.data.libraries.remove(library)
def _process(self, libpath, asset_group, group_name, actions):
def _process(
self, libpath, asset_group, group_name, asset, representation, actions
):
with bpy.data.libraries.load(
libpath, link=True, relative=False
) as (data_from, data_to):
@ -72,7 +77,8 @@ class BlendLayoutLoader(plugin.AssetLoader):
container = None
for empty in empties:
if empty.get(AVALON_PROPERTY):
if (empty.get(AVALON_PROPERTY) and
empty.get(AVALON_PROPERTY).get('family') == 'layout'):
container = empty
break
@ -83,12 +89,16 @@ class BlendLayoutLoader(plugin.AssetLoader):
objects = []
nodes = list(container.children)
for obj in nodes:
obj.parent = asset_group
allowed_types = ['ARMATURE', 'MESH', 'EMPTY']
for obj in nodes:
objects.append(obj)
nodes.extend(list(obj.children))
if obj.type in allowed_types:
obj.parent = asset_group
for obj in nodes:
if obj.type in allowed_types:
objects.append(obj)
nodes.extend(list(obj.children))
objects.reverse()
@ -106,7 +116,7 @@ class BlendLayoutLoader(plugin.AssetLoader):
parent.objects.link(obj)
for obj in objects:
local_obj = plugin.prepare_data(obj, group_name)
local_obj = plugin.prepare_data(obj)
action = None
@ -114,7 +124,7 @@ class BlendLayoutLoader(plugin.AssetLoader):
action = actions.get(local_obj.name, None)
if local_obj.type == 'MESH':
plugin.prepare_data(local_obj.data, group_name)
plugin.prepare_data(local_obj.data)
if obj != local_obj:
for constraint in constraints:
@ -123,15 +133,18 @@ class BlendLayoutLoader(plugin.AssetLoader):
for material_slot in local_obj.material_slots:
if material_slot.material:
plugin.prepare_data(material_slot.material, group_name)
plugin.prepare_data(material_slot.material)
elif local_obj.type == 'ARMATURE':
plugin.prepare_data(local_obj.data, group_name)
plugin.prepare_data(local_obj.data)
if action is not None:
if local_obj.animation_data is None:
local_obj.animation_data_create()
local_obj.animation_data.action = action
elif local_obj.animation_data.action is not None:
elif (local_obj.animation_data and
local_obj.animation_data.action is not None):
plugin.prepare_data(
local_obj.animation_data.action, group_name)
local_obj.animation_data.action)
# Set link the drivers to the local object
if local_obj.data.animation_data:
@ -140,6 +153,21 @@ class BlendLayoutLoader(plugin.AssetLoader):
for t in v.targets:
t.id = local_obj
elif local_obj.type == 'EMPTY':
creator_plugin = lib.get_creator_by_name("CreateAnimation")
if not creator_plugin:
raise ValueError("Creator plugin \"CreateAnimation\" was "
"not found.")
api.create(
creator_plugin,
name=local_obj.name.split(':')[-1] + "_animation",
asset=asset,
options={"useSelection": False,
"asset_group": local_obj},
data={"dependencies": representation}
)
if not local_obj.get(AVALON_PROPERTY):
local_obj[AVALON_PROPERTY] = dict()
@ -148,7 +176,63 @@ class BlendLayoutLoader(plugin.AssetLoader):
objects.reverse()
bpy.data.orphans_purge(do_local_ids=False)
armatures = [
obj for obj in bpy.data.objects
if obj.type == 'ARMATURE' and obj.library is None]
arm_act = {}
# Armatures with an animation need to be at the center of the
# scene so that the curve modifiers hook correctly.
for armature in armatures:
if armature.animation_data and armature.animation_data.action:
arm_act[armature] = armature.animation_data.action
armature.animation_data.action = None
armature.location = (0.0, 0.0, 0.0)
for bone in armature.pose.bones:
bone.location = (0.0, 0.0, 0.0)
bone.rotation_euler = (0.0, 0.0, 0.0)
curves = [obj for obj in data_to.objects if obj.type == 'CURVE']
for curve in curves:
curve_name = curve.name.split(':')[0]
curve_obj = bpy.data.objects.get(curve_name)
local_obj = plugin.prepare_data(curve)
plugin.prepare_data(local_obj.data)
# Curves need their hooks reset, and for that they have to be
# in the view layer.
parent.objects.link(local_obj)
plugin.deselect_all()
local_obj.select_set(True)
bpy.context.view_layer.objects.active = local_obj
if local_obj.library is None:
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.object.hook_reset()
bpy.ops.object.mode_set(mode='OBJECT')
parent.objects.unlink(local_obj)
local_obj.use_fake_user = True
for mod in local_obj.modifiers:
mod.object = bpy.data.objects.get(f"{mod.object.name}")
if not local_obj.get(AVALON_PROPERTY):
local_obj[AVALON_PROPERTY] = dict()
avalon_info = local_obj[AVALON_PROPERTY]
avalon_info.update({"container_name": group_name})
local_obj.parent = curve_obj
objects.append(local_obj)
for armature in armatures:
if arm_act.get(armature):
armature.animation_data.action = arm_act[armature]
while bpy.data.orphans_purge(do_local_ids=False):
pass
plugin.deselect_all()
@ -168,6 +252,7 @@ class BlendLayoutLoader(plugin.AssetLoader):
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
representation = str(context["representation"]["_id"])
asset_name = plugin.asset_name(asset, subset)
unique_number = plugin.get_unique_number(asset, subset)
@ -183,7 +268,8 @@ class BlendLayoutLoader(plugin.AssetLoader):
asset_group.empty_display_type = 'SINGLE_ARROW'
avalon_container.objects.link(asset_group)
objects = self._process(libpath, asset_group, group_name, None)
objects = self._process(
libpath, asset_group, group_name, asset, representation, None)
for child in asset_group.children:
if child.get(AVALON_PROPERTY):


@ -1,18 +1,20 @@
"""Load a layout in Blender."""
import json
from pathlib import Path
from pprint import pformat
from typing import Dict, Optional
import bpy
import json
from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype import lib
from openpype.hosts.blender.api.pipeline import (
AVALON_INSTANCES,
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
from openpype.hosts.blender.api import plugin
@ -92,6 +94,10 @@ class JsonLayoutLoader(plugin.AssetLoader):
'animation_asset': asset
}
if element.get('animation'):
options['animation_file'] = str(Path(libpath).with_suffix(
'')) + "." + element.get('animation')
# This should return the loaded asset, but the load call will be
# added to the queue to run in the Blender main thread, so
# at this time it will not return anything. The assets will be
@ -104,20 +110,22 @@ class JsonLayoutLoader(plugin.AssetLoader):
options=options
)
# Create the camera asset and the camera instance
creator_plugin = lib.get_creator_by_name("CreateCamera")
if not creator_plugin:
raise ValueError("Creator plugin \"CreateCamera\" was "
"not found.")
# Camera creation when loading a layout is not necessary for now,
# but the code is worth keeping in case we need it in the future.
# # Create the camera asset and the camera instance
# creator_plugin = lib.get_creator_by_name("CreateCamera")
# if not creator_plugin:
# raise ValueError("Creator plugin \"CreateCamera\" was "
# "not found.")
api.create(
creator_plugin,
name="camera",
# name=f"{unique_number}_{subset}_animation",
asset=asset,
options={"useSelection": False}
# data={"dependencies": str(context["representation"]["_id"])}
)
# api.create(
# creator_plugin,
# name="camera",
# # name=f"{unique_number}_{subset}_animation",
# asset=asset,
# options={"useSelection": False}
# # data={"dependencies": str(context["representation"]["_id"])}
# )
def process_asset(self,
context: dict,


@ -8,8 +8,12 @@ import os
import json
import bpy
from avalon import api, blender
import openpype.hosts.blender.api.plugin as plugin
from avalon import api
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
containerise_existing,
AVALON_PROPERTY
)
class BlendLookLoader(plugin.AssetLoader):
@ -105,7 +109,7 @@ class BlendLookLoader(plugin.AssetLoader):
container = bpy.data.collections.new(lib_container)
container.name = container_name
blender.pipeline.containerise_existing(
containerise_existing(
container,
name,
namespace,
@ -113,7 +117,7 @@ class BlendLookLoader(plugin.AssetLoader):
self.__class__.__name__,
)
metadata = container.get(blender.pipeline.AVALON_PROPERTY)
metadata = container.get(AVALON_PROPERTY)
metadata["libpath"] = libpath
metadata["lib_container"] = lib_container
@ -161,7 +165,7 @@ class BlendLookLoader(plugin.AssetLoader):
f"Unsupported file: {libpath}"
)
collection_metadata = collection.get(blender.pipeline.AVALON_PROPERTY)
collection_metadata = collection.get(AVALON_PROPERTY)
collection_libpath = collection_metadata["libpath"]
normalized_collection_libpath = (
@ -204,7 +208,7 @@ class BlendLookLoader(plugin.AssetLoader):
if not collection:
return False
collection_metadata = collection.get(blender.pipeline.AVALON_PROPERTY)
collection_metadata = collection.get(AVALON_PROPERTY)
for obj in collection_metadata['objects']:
for child in self.get_all_children(obj):


@ -7,10 +7,12 @@ from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
class BlendModelLoader(plugin.AssetLoader):
@ -81,7 +83,8 @@ class BlendModelLoader(plugin.AssetLoader):
plugin.prepare_data(local_obj.data, group_name)
for material_slot in local_obj.material_slots:
plugin.prepare_data(material_slot.material, group_name)
if material_slot.material:
plugin.prepare_data(material_slot.material, group_name)
if not local_obj.get(AVALON_PROPERTY):
local_obj[AVALON_PROPERTY] = dict()
@ -245,7 +248,8 @@ class BlendModelLoader(plugin.AssetLoader):
# If it is the last object to use that library, remove it
if count == 1:
library = bpy.data.libraries.get(bpy.path.basename(group_libpath))
bpy.data.libraries.remove(library)
if library:
bpy.data.libraries.remove(library)
self._process(str(libpath), asset_group, object_name)
@ -253,6 +257,7 @@ class BlendModelLoader(plugin.AssetLoader):
metadata["libpath"] = str(libpath)
metadata["representation"] = str(representation["_id"])
metadata["parent"] = str(representation["parent"])
def exec_remove(self, container: Dict) -> bool:
"""Remove an existing container from a Blender scene.

View file

@ -7,11 +7,14 @@ from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from avalon.blender import lib as avalon_lib
from openpype import lib
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
AVALON_CONTAINER_ID
)
class BlendRigLoader(plugin.AssetLoader):
@ -110,6 +113,8 @@ class BlendRigLoader(plugin.AssetLoader):
plugin.prepare_data(local_obj.data, group_name)
if action is not None:
if local_obj.animation_data is None:
local_obj.animation_data_create()
local_obj.animation_data.action = action
elif (local_obj.animation_data and
local_obj.animation_data.action is not None):
@ -194,12 +199,14 @@ class BlendRigLoader(plugin.AssetLoader):
plugin.deselect_all()
create_animation = False
anim_file = None
if options is not None:
parent = options.get('parent')
transform = options.get('transform')
action = options.get('action')
create_animation = options.get('create_animation')
anim_file = options.get('animation_file')
if parent and transform:
location = transform.get('translation')
@ -252,6 +259,26 @@ class BlendRigLoader(plugin.AssetLoader):
plugin.deselect_all()
if anim_file:
bpy.ops.import_scene.fbx(filepath=anim_file, anim_offset=0.0)
imported = avalon_lib.get_selection()
armature = [
o for o in asset_group.children if o.type == 'ARMATURE'][0]
imported_group = [
o for o in imported if o.type == 'EMPTY'][0]
for obj in imported:
if obj.type == 'ARMATURE':
if not armature.animation_data:
armature.animation_data_create()
armature.animation_data.action = obj.animation_data.action
self._remove(imported_group)
bpy.data.objects.remove(imported_group)
bpy.context.scene.collection.objects.link(asset_group)
asset_group[AVALON_PROPERTY] = {
@ -348,6 +375,7 @@ class BlendRigLoader(plugin.AssetLoader):
metadata["libpath"] = str(libpath)
metadata["representation"] = str(representation["_id"])
metadata["parent"] = str(representation["parent"])
def exec_remove(self, container: Dict) -> bool:
"""Remove an existing asset group from a Blender scene.

View file

@ -1,11 +1,13 @@
import json
from typing import Generator
import bpy
import json
import pyblish.api
from avalon.blender.pipeline import AVALON_PROPERTY
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype.hosts.blender.api.pipeline import (
AVALON_INSTANCES,
AVALON_PROPERTY,
)
class CollectInstances(pyblish.api.ContextPlugin):

View file

@ -1,10 +1,10 @@
import os
import bpy
from openpype import api
from openpype.hosts.blender.api import plugin
from avalon.blender.pipeline import AVALON_PROPERTY
import bpy
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractABC(api.Extractor):

View file

@ -2,7 +2,6 @@ import os
import bpy
# import avalon.blender.workio
import openpype.api

View file

@ -29,12 +29,13 @@ class ExtractBlendAnimation(openpype.api.Extractor):
if isinstance(obj, bpy.types.Object) and obj.type == 'EMPTY':
child = obj.children[0]
if child and child.type == 'ARMATURE':
if not obj.animation_data:
obj.animation_data_create()
obj.animation_data.action = child.animation_data.action
obj.animation_data_clear()
data_blocks.add(child.animation_data.action)
data_blocks.add(obj)
if child.animation_data and child.animation_data.action:
if not obj.animation_data:
obj.animation_data_create()
obj.animation_data.action = child.animation_data.action
obj.animation_data_clear()
data_blocks.add(child.animation_data.action)
data_blocks.add(obj)
bpy.data.libraries.write(filepath, data_blocks)

View file

@ -1,10 +1,10 @@
import os
import bpy
from openpype import api
from openpype.hosts.blender.api import plugin
import bpy
class ExtractCamera(api.Extractor):
"""Extract as the camera as FBX."""

View file

@ -1,10 +1,10 @@
import os
import bpy
from openpype import api
from openpype.hosts.blender.api import plugin
from avalon.blender.pipeline import AVALON_PROPERTY
import bpy
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractFBX(api.Extractor):
@ -50,6 +50,9 @@ class ExtractFBX(api.Extractor):
new_materials.append(mat)
new_materials_objs.append(obj)
scale_length = bpy.context.scene.unit_settings.scale_length
bpy.context.scene.unit_settings.scale_length = 0.01
# We export the fbx
bpy.ops.export_scene.fbx(
context,
@ -60,6 +63,8 @@ class ExtractFBX(api.Extractor):
add_leaf_bones=False
)
bpy.context.scene.unit_settings.scale_length = scale_length
plugin.deselect_all()
for mat in new_materials:

View file

@ -7,7 +7,7 @@ import bpy_extras.anim_utils
from openpype import api
from openpype.hosts.blender.api import plugin
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractAnimationFBX(api.Extractor):
@ -37,13 +37,6 @@ class ExtractAnimationFBX(api.Extractor):
armature = [
obj for obj in asset_group.children if obj.type == 'ARMATURE'][0]
asset_group_name = asset_group.name
asset_group.name = asset_group.get(AVALON_PROPERTY).get("asset_name")
armature_name = armature.name
original_name = armature_name.split(':')[1]
armature.name = original_name
object_action_pairs = []
original_actions = []
@ -66,6 +59,13 @@ class ExtractAnimationFBX(api.Extractor):
self.log.info("Object have no animation.")
return
asset_group_name = asset_group.name
asset_group.name = asset_group.get(AVALON_PROPERTY).get("asset_name")
armature_name = armature.name
original_name = armature_name.split(':')[1]
armature.name = original_name
object_action_pairs.append((armature, copy_action))
original_actions.append(curr_action)
@ -123,7 +123,7 @@ class ExtractAnimationFBX(api.Extractor):
json_path = os.path.join(stagingdir, json_filename)
json_dict = {
"instance_name": asset_group.get(AVALON_PROPERTY).get("namespace")
"instance_name": asset_group.get(AVALON_PROPERTY).get("objectName")
}
# collection = instance.data.get("name")

View file

@ -2,9 +2,12 @@ import os
import json
import bpy
import bpy_extras
import bpy_extras.anim_utils
from avalon import io
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
import openpype.api
@ -16,6 +19,99 @@ class ExtractLayout(openpype.api.Extractor):
families = ["layout"]
optional = True
def _export_animation(self, asset, instance, stagingdir, fbx_count):
n = fbx_count
for obj in asset.children:
if obj.type != "ARMATURE":
continue
object_action_pairs = []
original_actions = []
starting_frames = []
ending_frames = []
# For each armature, we make a copy of the current action
curr_action = None
copy_action = None
if obj.animation_data and obj.animation_data.action:
curr_action = obj.animation_data.action
copy_action = curr_action.copy()
curr_frame_range = curr_action.frame_range
starting_frames.append(curr_frame_range[0])
ending_frames.append(curr_frame_range[1])
else:
self.log.info("Object have no animation.")
continue
asset_group_name = asset.name
asset.name = asset.get(AVALON_PROPERTY).get("asset_name")
armature_name = obj.name
original_name = armature_name.split(':')[1]
obj.name = original_name
object_action_pairs.append((obj, copy_action))
original_actions.append(curr_action)
# We compute the starting and ending frames
min_frame = min(starting_frames)
max_frame = max(ending_frames)
# We bake the copy of the current action for each object
bpy_extras.anim_utils.bake_action_objects(
object_action_pairs,
frames=range(int(min_frame), int(max_frame)),
do_object=False,
do_clean=False
)
for o in bpy.data.objects:
o.select_set(False)
asset.select_set(True)
obj.select_set(True)
fbx_filename = f"{n:03d}.fbx"
filepath = os.path.join(stagingdir, fbx_filename)
override = plugin.create_blender_context(
active=asset, selected=[asset, obj])
bpy.ops.export_scene.fbx(
override,
filepath=filepath,
use_active_collection=False,
use_selection=True,
bake_anim_use_nla_strips=False,
bake_anim_use_all_actions=False,
add_leaf_bones=False,
armature_nodetype='ROOT',
object_types={'EMPTY', 'ARMATURE'}
)
obj.name = armature_name
asset.name = asset_group_name
asset.select_set(False)
obj.select_set(False)
# We delete the baked action and set the original one back
for i in range(0, len(object_action_pairs)):
pair = object_action_pairs[i]
action = original_actions[i]
if action:
pair[0].animation_data.action = action
if pair[1]:
pair[1].user_clear()
bpy.data.actions.remove(pair[1])
return fbx_filename, n + 1
return None, n
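A note on the frame computation in `_export_animation` above: the baked range must span the union of all collected action ranges, i.e. the smallest start and the largest end. A minimal, Maya-free sketch (the `(start, end)` tuples stand in for `Action.frame_range`):

```python
def union_frame_range(frame_ranges):
    """Return an (start, end) pair spanning all (start, end) inputs."""
    starts = [fr[0] for fr in frame_ranges]
    ends = [fr[1] for fr in frame_ranges]
    return int(min(starts)), int(max(ends))
```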
def process(self, instance):
# Define extract output file path
stagingdir = self.staging_dir(instance)
@ -23,10 +119,16 @@ class ExtractLayout(openpype.api.Extractor):
# Perform extraction
self.log.info("Performing extraction..")
if "representations" not in instance.data:
instance.data["representations"] = []
json_data = []
fbx_files = []
asset_group = bpy.data.objects[str(instance)]
fbx_count = 0
for asset in asset_group.children:
metadata = asset.get(AVALON_PROPERTY)
@ -34,6 +136,7 @@ class ExtractLayout(openpype.api.Extractor):
family = metadata["family"]
self.log.debug("Parent: {}".format(parent))
# Get blend reference
blend = io.find_one(
{
"type": "representation",
@ -41,10 +144,39 @@ class ExtractLayout(openpype.api.Extractor):
"name": "blend"
},
projection={"_id": True})
blend_id = blend["_id"]
blend_id = None
if blend:
blend_id = blend["_id"]
# Get fbx reference
fbx = io.find_one(
{
"type": "representation",
"parent": io.ObjectId(parent),
"name": "fbx"
},
projection={"_id": True})
fbx_id = None
if fbx:
fbx_id = fbx["_id"]
# Get abc reference
abc = io.find_one(
{
"type": "representation",
"parent": io.ObjectId(parent),
"name": "abc"
},
projection={"_id": True})
abc_id = None
if abc:
abc_id = abc["_id"]
json_element = {}
json_element["reference"] = str(blend_id)
if blend_id:
json_element["reference"] = str(blend_id)
if fbx_id:
json_element["reference_fbx"] = str(fbx_id)
if abc_id:
json_element["reference_abc"] = str(abc_id)
json_element["family"] = family
json_element["instance_name"] = asset.name
json_element["asset_name"] = metadata["asset_name"]
@ -67,6 +199,16 @@ class ExtractLayout(openpype.api.Extractor):
"z": asset.scale.z
}
}
# Extract the animation as well
if family == "rig":
f, n = self._export_animation(
asset, instance, stagingdir, fbx_count)
if f:
fbx_files.append(f)
json_element["animation"] = f
fbx_count = n
json_data.append(json_element)
json_filename = "{}.json".format(instance.name)
@ -75,16 +217,32 @@ class ExtractLayout(openpype.api.Extractor):
with open(json_path, "w+") as file:
json.dump(json_data, fp=file, indent=2)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
json_representation = {
'name': 'json',
'ext': 'json',
'files': json_filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
instance.data["representations"].append(json_representation)
self.log.debug(fbx_files)
if len(fbx_files) == 1:
fbx_representation = {
'name': 'fbx',
'ext': '000.fbx',
'files': fbx_files[0],
"stagingDir": stagingdir,
}
instance.data["representations"].append(fbx_representation)
elif len(fbx_files) > 1:
fbx_representation = {
'name': 'fbx',
'ext': 'fbx',
'files': fbx_files,
"stagingDir": stagingdir,
}
instance.data["representations"].append(fbx_representation)
self.log.info("Extracted instance '%s' to: %s",
instance.name, representation)
instance.name, json_representation)

View file

@ -1,5 +1,5 @@
import pyblish.api
import avalon.blender.workio
from openpype.hosts.blender.api.workio import save_file
class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
@ -9,7 +9,7 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
label = "Increment Workfile Version"
optional = True
hosts = ["blender"]
families = ["animation", "model", "rig", "action"]
families = ["animation", "model", "rig", "action", "layout"]
def process(self, context):
@ -20,6 +20,6 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
path = context.data["currentFile"]
filepath = version_up(path)
avalon.blender.workio.save_file(filepath, copy=False)
save_file(filepath, copy=False)
self.log.info('Incrementing script version')

View file

@ -5,15 +5,15 @@ import openpype.hosts.blender.api.action
class ValidateObjectIsInObjectMode(pyblish.api.InstancePlugin):
"""Validate that the current object is in Object Mode."""
"""Validate that the objects in the instance are in Object Mode."""
order = pyblish.api.ValidatorOrder - 0.01
hosts = ["blender"]
families = ["model", "rig"]
families = ["model", "rig", "layout"]
category = "geometry"
label = "Object is in Object Mode"
label = "Validate Object Mode"
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
optional = True
optional = False
@classmethod
def get_invalid(cls, instance) -> List:

View file

@ -1,3 +0,0 @@
from openpype.hosts.blender import api
api.install()

View file

@ -83,7 +83,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
staging_dir = self.staging_dir(instance)
# add default preset type for thumbnail and reviewable video
# update them with settings and overide in case the same
# update them with settings and override in case the same
# are found in there
export_presets = deepcopy(self.default_presets)
export_presets.update(self.export_presets_mapping)

View file

@ -74,6 +74,9 @@ class CollectInstances(pyblish.api.ContextPlugin):
instance = context.create_instance(label)
# Include `families` using `family` data
instance.data["families"] = [instance.data["family"]]
instance[:] = [node]
instance.data.update(data)

View file

@ -37,5 +37,7 @@ class ExtractVDBCache(openpype.api.Extractor):
"ext": "vdb",
"files": output,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
}
instance.data["representations"].append(representation)

View file

@ -218,12 +218,10 @@ def on_task_changed(*args):
)
def before_workfile_save(workfile_path):
if not workfile_path:
return
workdir = os.path.dirname(workfile_path)
copy_workspace_mel(workdir)
def before_workfile_save(event):
workdir_path = event.workdir_path
if workdir_path:
copy_workspace_mel(workdir_path)
class MayaDirmap(HostDirmap):

View file

@ -280,7 +280,7 @@ def shape_from_element(element):
return node
def collect_animation_data():
def collect_animation_data(fps=False):
"""Get the basic animation data
Returns:
@ -291,7 +291,6 @@ def collect_animation_data():
# get scene values as defaults
start = cmds.playbackOptions(query=True, animationStartTime=True)
end = cmds.playbackOptions(query=True, animationEndTime=True)
fps = mel.eval('currentTimeUnitToFPS()')
# build attributes
data = OrderedDict()
@ -299,7 +298,9 @@ def collect_animation_data():
data["frameEnd"] = end
data["handles"] = 0
data["step"] = 1.0
data["fps"] = fps
if fps:
data["fps"] = mel.eval('currentTimeUnitToFPS()')
return data
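The change above makes the fps lookup opt-in so most callers skip the `mel.eval` call. A hedged sketch of the resulting shape of the data, with the Maya queries replaced by placeholder values (the start/end/fps numbers are assumptions for illustration):

```python
from collections import OrderedDict

def collect_animation_data(fps=False, scene_fps=25.0):
    """Mirror of the attribute layout built above; placeholder values
    stand in for the cmds.playbackOptions and mel.eval queries."""
    start, end = 1001.0, 1100.0  # stand-ins for animationStartTime/EndTime
    data = OrderedDict()
    data["frameStart"] = start
    data["frameEnd"] = end
    data["handles"] = 0
    data["step"] = 1.0
    if fps:
        # only resolved when explicitly requested, as in the diff
        data["fps"] = scene_fps
    return data
```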
@ -2853,3 +2854,27 @@ def set_colorspace():
cmds.colorManagementPrefs(e=True, renderingSpaceName=renderSpace)
viewTransform = root_dict["viewTransform"]
cmds.colorManagementPrefs(e=True, viewTransformName=viewTransform)
@contextlib.contextmanager
def root_parent(nodes):
# type: (list) -> list
"""Context manager to un-parent provided nodes and return then back."""
import pymel.core as pm # noqa
node_parents = []
for node in nodes:
n = pm.PyNode(node)
try:
root = pm.listRelatives(n, parent=1)[0]
except IndexError:
root = None
node_parents.append((n, root))
try:
for node in node_parents:
node[0].setParent(world=True)
yield
finally:
for node in node_parents:
if node[1]:
node[0].setParent(node[1])
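The `root_parent` helper above records each node's parent, moves the nodes to world, and restores the original parents in `finally`. A toy, pymel-free analogue of the same record/detach/restore pattern (nodes are plain dicts here, purely for illustration):

```python
import contextlib

@contextlib.contextmanager
def root_parent(nodes):
    """Detach each node from its recorded parent, restore it on exit."""
    saved = [(node, node.get("parent")) for node in nodes]
    try:
        for node in nodes:
            node["parent"] = None  # "parent to world"
        yield
    finally:
        for node, parent in saved:
            if parent:
                node["parent"] = parent
```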

View file

@ -22,7 +22,7 @@ class CreateReview(plugin.Creator):
# get basic animation data : start / end / handles / steps
data = OrderedDict(**self.data)
animation_data = lib.collect_animation_data()
animation_data = lib.collect_animation_data(fps=True)
for key, value in animation_data.items():
data[key] = value

View file

@ -1,11 +1,58 @@
from openpype.hosts.maya.api import plugin
# -*- coding: utf-8 -*-
"""Creator for Unreal Static Meshes."""
from openpype.hosts.maya.api import plugin, lib
from avalon.api import Session
from openpype.api import get_project_settings
from maya import cmds # noqa
class CreateUnrealStaticMesh(plugin.Creator):
"""Unreal Static Meshes with collisions."""
name = "staticMeshMain"
label = "Unreal - Static Mesh"
family = "unrealStaticMesh"
icon = "cube"
dynamic_subset_keys = ["asset"]
def __init__(self, *args, **kwargs):
"""Constructor."""
super(CreateUnrealStaticMesh, self).__init__(*args, **kwargs)
self._project_settings = get_project_settings(
Session["AVALON_PROJECT"])
@classmethod
def get_dynamic_data(
cls, variant, task_name, asset_id, project_name, host_name
):
dynamic_data = super(CreateUnrealStaticMesh, cls).get_dynamic_data(
variant, task_name, asset_id, project_name, host_name
)
dynamic_data["asset"] = Session.get("AVALON_ASSET")
return dynamic_data
def process(self):
with lib.undo_chunk():
instance = super(CreateUnrealStaticMesh, self).process()
content = cmds.sets(instance, query=True)
# empty set and process its former content
cmds.sets(content, rm=instance)
geometry_set = cmds.sets(name="geometry_SET", empty=True)
collisions_set = cmds.sets(name="collisions_SET", empty=True)
cmds.sets([geometry_set, collisions_set], forceElement=instance)
members = cmds.ls(content, long=True) or []
children = cmds.listRelatives(members, allDescendents=True,
fullPath=True) or []
children = cmds.ls(children, type="transform")
for node in children:
if cmds.listRelatives(node, type="shape"):
if [
n for n in self.collision_prefixes
if node.startswith(n)
]:
cmds.sets(node, forceElement=collisions_set)
else:
cmds.sets(node, forceElement=geometry_set)
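The new creator sorts transforms into `geometry_SET` or `collisions_SET` by name prefix. A Maya-free sketch of just that classification step (the prefix list is an assumption; the creator reads `collision_prefixes` from project settings):

```python
def sort_static_mesh_members(nodes, collision_prefixes):
    """Split node names into geometry vs. collision lists, mirroring
    the startswith() check in process() above."""
    geometry, collisions = [], []
    for node in nodes:
        if any(node.startswith(p) for p in collision_prefixes):
            collisions.append(node)
        else:
            geometry.append(node)
    return geometry, collisions
```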

View file

@ -4,6 +4,8 @@ import os
import json
import appdirs
import requests
import six
import sys
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
@ -12,7 +14,15 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.api import get_system_settings
from openpype.api import (
get_system_settings,
get_project_settings
)
from openpype.modules import ModulesManager
from avalon.api import Session
from avalon.api import CreatorError
class CreateVRayScene(plugin.Creator):
@ -22,11 +32,40 @@ class CreateVRayScene(plugin.Creator):
family = "vrayscene"
icon = "cubes"
_project_settings = None
def __init__(self, *args, **kwargs):
"""Entry."""
super(CreateVRayScene, self).__init__(*args, **kwargs)
self._rs = renderSetup.instance()
self.data["exportOnFarm"] = False
deadline_settings = get_system_settings()["modules"]["deadline"]
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
self._project_settings = get_project_settings(
Session["AVALON_PROJECT"])
try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
self._project_settings["deadline"]["deadline_servers"]
)
self.deadline_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}
if not self.deadline_servers:
self.deadline_servers = default_servers
except AttributeError:
# Handle situation where we had only one url for deadline.
manager = ModulesManager()
deadline_module = manager.modules_by_name["deadline"]
# get default deadline webservice url from deadline module
self.deadline_servers = deadline_module.deadline_urls
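The constructor above intersects project-selected Deadline servers with the system-wide defaults and falls back to all defaults when nothing matches. The core of that resolution, as a standalone sketch:

```python
def resolve_deadline_servers(default_servers, project_servers):
    """Keep only project-selected servers that exist in the system
    defaults; fall back to all defaults if the intersection is empty."""
    servers = {
        k: default_servers[k]
        for k in project_servers
        if k in default_servers
    }
    return servers or dict(default_servers)
```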
def process(self):
"""Entry point."""
@ -37,10 +76,10 @@ class CreateVRayScene(plugin.Creator):
use_selection = self.options.get("useSelection")
with lib.undo_chunk():
self._create_vray_instance_settings()
instance = super(CreateVRayScene, self).process()
self.instance = super(CreateVRayScene, self).process()
index = 1
namespace_name = "_{}".format(str(instance))
namespace_name = "_{}".format(str(self.instance))
try:
cmds.namespace(rm=namespace_name)
except RuntimeError:
@ -48,10 +87,19 @@ class CreateVRayScene(plugin.Creator):
pass
while(cmds.namespace(exists=namespace_name)):
namespace_name = "_{}{}".format(str(instance), index)
namespace_name = "_{}{}".format(str(self.instance), index)
index += 1
namespace = cmds.namespace(add=namespace_name)
# add Deadline server selection list
if self.deadline_servers:
cmds.scriptJob(
attributeChange=[
"{}.deadlineServers".format(self.instance),
self._deadline_webservice_changed
])
# create namespace with instance
layers = self._rs.getRenderLayers()
if use_selection:
@ -62,7 +110,7 @@ class CreateVRayScene(plugin.Creator):
render_set = cmds.sets(
n="{}:{}".format(namespace, layer.name()))
sets.append(render_set)
cmds.sets(sets, forceElement=instance)
cmds.sets(sets, forceElement=self.instance)
# if no render layers are present, create default one with
# asterix selector
@ -71,6 +119,52 @@ class CreateVRayScene(plugin.Creator):
collection = render_layer.createCollection("defaultCollection")
collection.getSelector().setPattern('*')
def _deadline_webservice_changed(self):
"""Refresh Deadline server dependent options."""
# get selected server
from maya import cmds
webservice = self.deadline_servers[
self.server_aliases[
cmds.getAttr("{}.deadlineServers".format(self.instance))
]
]
pools = self._get_deadline_pools(webservice)
cmds.deleteAttr("{}.primaryPool".format(self.instance))
cmds.deleteAttr("{}.secondaryPool".format(self.instance))
cmds.addAttr(self.instance, longName="primaryPool",
attributeType="enum",
enumName=":".join(pools))
cmds.addAttr(self.instance, longName="secondaryPool",
attributeType="enum",
enumName=":".join(["-"] + pools))
def _get_deadline_pools(self, webservice):
# type: (str) -> list
"""Get pools from Deadline.
Args:
webservice (str): Server url.
Returns:
list: Pools.
Raises:
RuntimeError: If deadline webservice is unreachable.
"""
argument = "{}/api/pools?NamesOnly=true".format(webservice)
try:
response = self._requests_get(argument)
except requests.exceptions.ConnectionError as exc:
msg = 'Cannot connect to deadline web service'
self.log.error(msg)
six.reraise(
CreatorError,
CreatorError('{} - {}'.format(msg, exc)),
sys.exc_info()[2])
if not response.ok:
self.log.warning("No pools retrieved")
return []
return response.json()
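`six.reraise` above keeps the original traceback attached under both Python 2 and 3. On Python 3 alone, the same effect is available via exception chaining; a hedged sketch (the `CreatorError` class and `fetch` callable are stand-ins, not the real avalon API):

```python
class CreatorError(Exception):
    """Stand-in for avalon.api.CreatorError (an assumption)."""

def get_pools_or_raise(fetch):
    """Run the request callable, wrapping connection failures in a
    domain error while keeping the original exception as __cause__."""
    try:
        return fetch()
    except ConnectionError as exc:
        # Python 3 equivalent of the six.reraise() pattern above
        raise CreatorError(
            "Cannot connect to deadline web service - {}".format(exc)
        ) from exc
```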
def _create_vray_instance_settings(self):
# get pools
pools = []
@ -79,31 +173,29 @@ class CreateVRayScene(plugin.Creator):
deadline_enabled = system_settings["deadline"]["enabled"]
muster_enabled = system_settings["muster"]["enabled"]
deadline_url = system_settings["deadline"]["DEADLINE_REST_URL"]
muster_url = system_settings["muster"]["MUSTER_REST_URL"]
if deadline_enabled and muster_enabled:
self.log.error(
"Both Deadline and Muster are enabled. " "Cannot support both."
)
raise RuntimeError("Both Deadline and Muster are enabled")
raise CreatorError("Both Deadline and Muster are enabled")
self.server_aliases = self.deadline_servers.keys()
self.data["deadlineServers"] = self.server_aliases
if deadline_enabled:
argument = "{}/api/pools?NamesOnly=true".format(deadline_url)
# if default server is not among the selected, use the first one for
# initial list of pools.
try:
response = self._requests_get(argument)
except requests.exceptions.ConnectionError as e:
msg = 'Cannot connect to deadline web service'
self.log.error(msg)
raise RuntimeError('{} - {}'.format(msg, e))
if not response.ok:
self.log.warning("No pools retrieved")
else:
pools = response.json()
self.data["primaryPool"] = pools
# We add a string "-" to allow the user to not
# set any secondary pools
self.data["secondaryPool"] = ["-"] + pools
deadline_url = self.deadline_servers["default"]
except KeyError:
deadline_url = [
self.deadline_servers[k]
for k in self.deadline_servers.keys()
][0]
pool_names = self._get_deadline_pools(deadline_url)
if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")
@ -115,10 +207,10 @@ class CreateVRayScene(plugin.Creator):
if e.startswith("401"):
self.log.warning("access token expired")
self._show_login()
raise RuntimeError("Access token expired")
raise CreatorError("Access token expired")
except requests.exceptions.ConnectionError:
self.log.error("Cannot connect to Muster API endpoint.")
raise RuntimeError("Cannot connect to {}".format(muster_url))
raise CreatorError("Cannot connect to {}".format(muster_url))
pool_names = []
for pool in pools:
self.log.info(" - pool: {}".format(pool["name"]))
@ -140,7 +232,7 @@ class CreateVRayScene(plugin.Creator):
``MUSTER_PASSWORD``, ``MUSTER_REST_URL`` is loaded from presets.
Raises:
RuntimeError: If loaded credentials are invalid.
CreatorError: If loaded credentials are invalid.
AttributeError: If ``MUSTER_REST_URL`` is not set.
"""
@ -152,7 +244,7 @@ class CreateVRayScene(plugin.Creator):
self._token = muster_json.get("token", None)
if not self._token:
self._show_login()
raise RuntimeError("Invalid access token for Muster")
raise CreatorError("Invalid access token for Muster")
file.close()
self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
if not self.MUSTER_REST_URL:
@ -162,7 +254,7 @@ class CreateVRayScene(plugin.Creator):
"""Get render pools from Muster.
Raises:
Exception: If pool list cannot be obtained from Muster.
CreatorError: If pool list cannot be obtained from Muster.
"""
params = {"authToken": self._token}
@ -178,12 +270,12 @@ class CreateVRayScene(plugin.Creator):
("Cannot get pools from "
"Muster: {}").format(response.status_code)
)
raise Exception("Cannot get pools from Muster")
raise CreatorError("Cannot get pools from Muster")
try:
pools = response.json()["ResponseData"]["pools"]
except ValueError as e:
self.log.error("Invalid response from Muster server {}".format(e))
raise Exception("Invalid response from Muster server")
raise CreatorError("Invalid response from Muster server")
return pools
@ -196,7 +288,7 @@ class CreateVRayScene(plugin.Creator):
login_response = self._requests_get(api_url, timeout=1)
if login_response.status_code != 200:
self.log.error("Cannot show login form to Muster")
raise Exception("Cannot show login form to Muster")
raise CreatorError("Cannot show login form to Muster")
def _requests_post(self, *args, **kwargs):
"""Wrap request post method.

View file

@ -2,6 +2,72 @@ from avalon import api
from openpype.api import get_project_settings
import os
from maya import cmds
# List of 3rd Party Channels Mapping names for VRayVolumeGrid
# See: https://docs.chaosgroup.com/display/VRAY4MAYA/Input
# #Input-3rdPartyChannelsMapping
THIRD_PARTY_CHANNELS = {
2: "Smoke",
1: "Temperature",
10: "Fuel",
4: "Velocity.x",
5: "Velocity.y",
6: "Velocity.z",
7: "Red",
8: "Green",
9: "Blue",
14: "Wavelet Energy",
19: "Wavelet.u",
20: "Wavelet.v",
21: "Wavelet.w",
# These are not in UI or documentation but V-Ray does seem to set these.
15: "AdvectionOrigin.x",
16: "AdvectionOrigin.y",
17: "AdvectionOrigin.z",
}
def _fix_duplicate_vvg_callbacks():
"""Workaround to kill duplicate VRayVolumeGrids attribute callbacks.
This fixes a huge lag in Maya on switching 3rd Party Channels Mappings
or to different .vdb file paths because it spams an attribute changed
callback: `vvgUserChannelMappingsUpdateUI`.
ChaosGroup bug ticket: 154-008-9890
Found with:
- Maya 2019.2 on Windows 10
- V-Ray: V-Ray Next for Maya, update 1 version 4.12.01.00001
Bug still present in:
- Maya 2022.1 on Windows 10
- V-Ray 5 for Maya, Update 2.1 (v5.20.01 from Dec 16 2021)
"""
# todo(roy): Remove when new V-Ray release fixes duplicate calls
jobs = cmds.scriptJob(listJobs=True)
matched = set()
for entry in jobs:
# Remove the number
index, callback = entry.split(":", 1)
callback = callback.strip()
# Detect whether it is a `vvgUserChannelMappingsUpdateUI`
# attribute change callback
if callback.startswith('"-runOnce" 1 "-attributeChange" "'):
if '"vvgUserChannelMappingsUpdateUI(' in callback:
if callback in matched:
# If we've seen this callback before then
# delete the duplicate callback
cmds.scriptJob(kill=int(index))
else:
matched.add(callback)
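The workaround above walks `cmds.scriptJob(listJobs=True)` output and kills repeats of the same callback string. The duplicate detection itself needs no Maya; a generic sketch over the `"index: callback"` strings (the vvg-specific prefix filter is omitted here):

```python
def find_duplicate_jobs(jobs):
    """Return the indices of jobs whose callback string was already
    seen, mirroring the matched-set logic above."""
    matched, kill = set(), []
    for entry in jobs:
        index, callback = entry.split(":", 1)
        callback = callback.strip()
        if callback in matched:
            kill.append(int(index))
        else:
            matched.add(callback)
    return kill
```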
class LoadVDBtoVRay(api.Loader):
@ -14,15 +80,24 @@ class LoadVDBtoVRay(api.Loader):
def load(self, context, name, namespace, data):
from maya import cmds
import avalon.maya.lib as lib
from avalon.maya.pipeline import containerise
assert os.path.exists(self.fname), (
"Path does not exist: %s" % self.fname
)
try:
family = context["representation"]["context"]["family"]
except ValueError:
family = "vdbcache"
# Ensure V-ray is loaded with the vrayvolumegrid
if not cmds.pluginInfo("vrayformaya", query=True, loaded=True):
cmds.loadPlugin("vrayformaya")
if not cmds.pluginInfo("vrayvolumegrid", query=True, loaded=True):
cmds.loadPlugin("vrayvolumegrid")
# Check if viewport drawing engine is Open GL Core (compat)
render_engine = None
compatible = "OpenGLCoreProfileCompat"
@ -30,13 +105,11 @@ class LoadVDBtoVRay(api.Loader):
render_engine = cmds.optionVar(query="vp2RenderingEngine")
if not render_engine or render_engine != compatible:
raise RuntimeError("Current scene's settings are incompatible."
"See Preferences > Display > Viewport 2.0 to "
"set the render engine to '%s'" % compatible)
self.log.warning("Current scene's settings are incompatible."
"See Preferences > Display > Viewport 2.0 to "
"set the render engine to '%s'" % compatible)
asset = context['asset']
version = context["version"]
asset_name = asset["name"]
namespace = namespace or lib.unique_namespace(
asset_name + "_",
@ -45,7 +118,7 @@ class LoadVDBtoVRay(api.Loader):
)
# Root group
label = "{}:{}".format(namespace, name)
label = "{}:{}_VDB".format(namespace, name)
root = cmds.group(name=label, empty=True)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
@ -55,20 +128,24 @@ class LoadVDBtoVRay(api.Loader):
if c is not None:
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
(float(c[0])/255),
(float(c[1])/255),
(float(c[2])/255)
)
float(c[0]) / 255,
float(c[1]) / 255,
float(c[2]) / 255)
# Create VR
# Create VRayVolumeGrid
grid_node = cmds.createNode("VRayVolumeGrid",
name="{}VVGShape".format(label),
name="{}Shape".format(label),
parent=root)
# Set attributes
cmds.setAttr("{}.inFile".format(grid_node), self.fname, type="string")
cmds.setAttr("{}.inReadOffset".format(grid_node),
version["startFrames"])
# Ensure .currentTime is connected to time1.outTime
cmds.connectAttr("time1.outTime", grid_node + ".currentTime")
# Set path
self._set_path(grid_node, self.fname, show_preset_popup=True)
# Lock the shape node so the user can't delete the transform/shape
# as if it was referenced
cmds.lockNode(grid_node, lock=True)
nodes = [root, grid_node]
self[:] = nodes
@ -79,3 +156,132 @@ class LoadVDBtoVRay(api.Loader):
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def _set_path(self, grid_node, path, show_preset_popup=True):
from openpype.hosts.maya.api.lib import attribute_values
from maya import cmds
def _get_filename_from_folder(path):
# Using the sequence of .vdb files we check the frame range, etc.
# to set the filename with #### padding.
files = sorted(x for x in os.listdir(path) if x.endswith(".vdb"))
if not files:
raise RuntimeError("Couldn't find .vdb files in: %s" % path)
if len(files) == 1:
# Ensure check for single file is also done in folder
fname = files[0]
else:
# Sequence
from avalon.vendor import clique
# todo: check support for negative frames as input
collections, remainder = clique.assemble(files)
assert len(collections) == 1, (
"Must find a single image sequence, "
"found: %s" % (collections,)
)
collection = collections[0]
fname = collection.format('{head}{{padding}}{tail}')
padding = collection.padding
if padding == 0:
# Clique doesn't provide padding if the frame number never
# starts with a zero and thus never has any visual padding.
# So we fall back to the smallest frame number as padding.
padding = min(len(str(i)) for i in collection.indexes)
# Supply frame/padding with # signs
padding_str = "#" * padding
fname = fname.format(padding=padding_str)
return os.path.join(path, fname)
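The padding fallback described in the comments above can be sketched standalone. A minimal sketch, assuming clique-style inputs; `padded_name` and its sample values are hypothetical:

```python
def padded_name(head, tail, indexes, padding):
    # clique reports padding 0 when no frame number starts with a zero,
    # so fall back to the width of the smallest frame number
    if padding == 0:
        padding = min(len(str(i)) for i in indexes)
    # supply frame/padding with # signs, like the loader above
    return "{}{}{}".format(head, "#" * padding, tail)

print(padded_name("smoke.", ".vdb", [998, 999, 1000], 0))
```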
# The path is either a single file or sequence in a folder so
# we do a quick lookup for our files
if os.path.isfile(path):
path = os.path.dirname(path)
path = _get_filename_from_folder(path)
# Even when not applying a preset V-Ray will reset the 3rd Party
# Channels Mapping of the VRayVolumeGrid when setting the .inPath
# value. As such we try and preserve the values ourselves.
# Reported as ChaosGroup bug ticket: 154-011-2909
# todo(roy): Remove when new V-Ray release preserves values
original_user_mapping = cmds.getAttr(grid_node + ".usrchmap") or ""
# Workaround for V-Ray bug: fix lag on path change, see function
_fix_duplicate_vvg_callbacks()
# Suppress preset pop-up if we want.
popup_attr = "{0}.inDontOfferPresets".format(grid_node)
popup = {popup_attr: not show_preset_popup}
with attribute_values(popup):
cmds.setAttr(grid_node + ".inPath", path, type="string")
# Reapply the 3rd Party channels user mapping when no preset popup
# was shown to the user
if not show_preset_popup:
channels = cmds.getAttr(grid_node + ".usrchmapallch").split(";")
channels = set(channels) # optimize lookup
restored_mapping = ""
for entry in original_user_mapping.split(";"):
if not entry:
# Ignore empty entries
continue
# If 3rd Party Channels selection channel still exists then
# add it again.
index, channel = entry.split(",")
attr = THIRD_PARTY_CHANNELS.get(int(index),
# Fallback for when a mapping
# was set that is not in the
# documentation
"???")
if channel in channels:
restored_mapping += entry + ";"
else:
self.log.warning("Can't preserve '%s' mapping due to "
"missing channel '%s' on node: "
"%s" % (attr, channel, grid_node))
if restored_mapping:
cmds.setAttr(grid_node + ".usrchmap",
restored_mapping,
type="string")
def update(self, container, representation):
path = api.get_representation_path(representation)
# Find VRayVolumeGrid
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="VRayVolumeGrid", long=True)
assert len(grid_nodes) > 0, "This is a bug"
# Update the VRayVolumeGrid
for grid_node in grid_nodes:
self._set_path(grid_node, path=path, show_preset_popup=False)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass


@@ -17,8 +17,8 @@ from openpype.api import get_project_settings
class VRayProxyLoader(api.Loader):
"""Load VRay Proxy with Alembic or VrayMesh."""
families = ["vrayproxy"]
representations = ["vrmesh"]
families = ["vrayproxy", "model", "pointcache", "animation"]
representations = ["vrmesh", "abc"]
label = "Import VRay Proxy"
order = -10


@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
"""Cleanup leftover nodes."""
from maya import cmds # noqa
import pyblish.api
class CleanNodesUp(pyblish.api.InstancePlugin):
"""Cleans up the staging directory after a successful publish.
This will also clean published renders and delete their parent directories.
"""
order = pyblish.api.IntegratorOrder + 10
label = "Clean Nodes"
optional = True
active = True
def process(self, instance):
if not instance.data.get("cleanNodes"):
self.log.info("Nothing to clean.")
return
nodes_to_clean = instance.data.pop("cleanNodes", [])
self.log.info("Removing {} nodes".format(len(nodes_to_clean)))
for node in nodes_to_clean:
try:
cmds.delete(node)
except ValueError:
# object might be already deleted, don't complain about it
pass


@@ -4,25 +4,31 @@ import pyblish.api
class CollectUnrealStaticMesh(pyblish.api.InstancePlugin):
"""Collect unreal static mesh
"""Collect Unreal Static Mesh
Ensures that only a single frame (the current frame) is extracted. This
also sets the correct FBX options for later extraction.
Note:
This is a workaround so that the `pype.model` family can use the
same pointcache extractor implementation as animation and pointcaches.
This always enforces the "current" frame to be published.
"""
order = pyblish.api.CollectorOrder + 0.2
label = "Collect Model Data"
label = "Collect Unreal Static Meshes"
families = ["unrealStaticMesh"]
def process(self, instance):
# add fbx family to trigger fbx extractor
instance.data["families"].append("fbx")
# take the name from instance (without the `S_` prefix)
instance.data["staticMeshCombinedName"] = instance.name[2:]
geometry_set = [i for i in instance if i == "geometry_SET"]
instance.data["membersToCombine"] = cmds.sets(
geometry_set, query=True)
collision_set = [i for i in instance if i == "collisions_SET"]
instance.data["collisionMembers"] = cmds.sets(
collision_set, query=True)
# set fbx overrides on instance
instance.data["smoothingGroups"] = True
instance.data["smoothMesh"] = True


@@ -7,7 +7,7 @@ from maya import cmds
import pyblish.api
from avalon import api
from openpype.hosts.maya import lib
from openpype.hosts.maya.api import lib
class CollectVrayScene(pyblish.api.InstancePlugin):


@@ -1,7 +1,9 @@
# -*- coding: utf-8 -*-
import os
from maya import cmds
import maya.mel as mel
from maya import cmds # noqa
import maya.mel as mel # noqa
from openpype.hosts.maya.api.lib import root_parent
import pyblish.api
import avalon.maya
@@ -192,10 +194,7 @@ class ExtractFBX(openpype.api.Extractor):
if isinstance(value, bool):
value = str(value).lower()
template = "FBXExport{0} -v {1}"
if key == "UpAxis":
template = "FBXExport{0} {1}"
template = "FBXExport{0} {1}" if key == "UpAxis" else "FBXExport{0} -v {1}" # noqa
cmd = template.format(key, value)
self.log.info(cmd)
mel.eval(cmd)
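The template selection above maps FBX option keys to MEL commands. Sketched without Maya (the option values are hypothetical):

```python
def fbx_export_cmd(key, value):
    # booleans become lowercase strings for MEL
    if isinstance(value, bool):
        value = str(value).lower()
    # UpAxis is the one FBXExport option that takes no -v flag
    template = "FBXExport{0} {1}" if key == "UpAxis" else "FBXExport{0} -v {1}"
    return template.format(key, value)

print(fbx_export_cmd("Smoothing", True))
print(fbx_export_cmd("UpAxis", "y"))
```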
@@ -205,9 +204,16 @@ class ExtractFBX(openpype.api.Extractor):
mel.eval("FBXExportGenerateLog -v false")
# Export
with avalon.maya.maintained_selection():
cmds.select(members, r=1, noExpand=True)
mel.eval('FBXExport -f "{}" -s'.format(path))
if "unrealStaticMesh" in instance.data["families"]:
with avalon.maya.maintained_selection():
with root_parent(members):
self.log.info("Un-parenting: {}".format(members))
cmds.select(members, r=1, noExpand=True)
mel.eval('FBXExport -f "{}" -s'.format(path))
else:
with avalon.maya.maintained_selection():
cmds.select(members, r=1, noExpand=True)
mel.eval('FBXExport -f "{}" -s'.format(path))
if "representations" not in instance.data:
instance.data["representations"] = []


@@ -0,0 +1,33 @@
# -*- coding: utf-8 -*-
"""Create Unreal Static Mesh data to be extracted as FBX."""
import openpype.api
import pyblish.api
from maya import cmds # noqa
class ExtractUnrealStaticMesh(openpype.api.Extractor):
"""Extract FBX from Maya. """
order = pyblish.api.ExtractorOrder - 0.1
label = "Extract Unreal Static Mesh"
families = ["unrealStaticMesh"]
def process(self, instance):
to_combine = instance.data.get("membersToCombine")
static_mesh_name = instance.data.get("staticMeshCombinedName")
self.log.info(
"merging {} into {}".format(
" + ".join(to_combine), static_mesh_name))
duplicates = cmds.duplicate(to_combine, ic=True)
cmds.polyUnite(
*duplicates,
n=static_mesh_name, ch=False)
if not instance.data.get("cleanNodes"):
instance.data["cleanNodes"] = []
instance.data["cleanNodes"].append(static_mesh_name)
instance.data["cleanNodes"] += duplicates
instance.data["setMembers"] = [static_mesh_name]
instance.data["setMembers"] += instance.data["collisionMembers"]


@@ -30,7 +30,8 @@ class ValidateAssemblyName(pyblish.api.InstancePlugin):
descendants = cmds.listRelatives(content_instance,
allDescendents=True,
fullPath=True) or []
descendants = cmds.ls(descendants, noIntermediate=True, long=True)
descendants = cmds.ls(
descendants, noIntermediate=True, type="transform")
content_instance = list(set(content_instance + descendants))
assemblies = cmds.ls(content_instance, assemblies=True, long=True)


@@ -0,0 +1,34 @@
from maya import cmds
import pyblish.api
from avalon import maya
import openpype.api
import openpype.hosts.maya.api.action
class ValidateCycleError(pyblish.api.InstancePlugin):
"""Validate nodes produce no cycle errors."""
order = openpype.api.ValidateContentsOrder + 0.05
label = "Cycle Errors"
hosts = ["maya"]
families = ["rig"]
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
optional = True
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Nodes produce a cycle error: %s" % invalid)
@classmethod
def get_invalid(cls, instance):
with maya.maintained_selection():
cmds.select(instance[:], noExpand=True)
plugs = cmds.cycleCheck(all=False, # check selection only
list=True)
invalid = cmds.ls(plugs, objectsOnly=True, long=True)
return invalid


@@ -1,27 +1,30 @@
# -*- coding: utf-8 -*-
from maya import cmds
from maya import cmds # noqa
import pyblish.api
import openpype.api
import openpype.hosts.maya.api.action
from avalon.api import Session
from openpype.api import get_project_settings
import re
class ValidateUnrealStaticmeshName(pyblish.api.InstancePlugin):
class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin):
"""Validate name of Unreal Static Mesh
Unreals naming convention states that staticMesh should start with `SM`
prefix - SM_[Name]_## (Eg. SM_sube_01). This plugin also validates other
types of meshes - collision meshes:
prefix - SM_[Name]_## (e.g. SM_cube_01). These prefixes can be configured
in the Settings UI. This plugin also validates other types of
meshes - collision meshes:
UBX_[RenderMeshName]_##:
UBX_[RenderMeshName]*:
Boxes are created with the Box objects type in
Max or with the Cube polygonal primitive in Maya.
You cannot move the vertices around or deform it
in any way to make it something other than a
rectangular prism, or else it will not work.
UCP_[RenderMeshName]_##:
UCP_[RenderMeshName]*:
Capsules are created with the Capsule object type.
The capsule does not need to have many segments
(8 is a good number) at all because it is
@@ -29,7 +32,7 @@ class ValidateUnrealStaticmeshName(pyblish.api.InstancePlugin):
boxes, you should not move the individual
vertices around.
USP_[RenderMeshName]_##:
USP_[RenderMeshName]*:
Spheres are created with the Sphere object type.
The sphere does not need to have many segments
(8 is a good number) at all because it is
@@ -37,7 +40,7 @@ class ValidateUnrealStaticmeshName(pyblish.api.InstancePlugin):
boxes, you should not move the individual
vertices around.
UCX_[RenderMeshName]_##:
UCX_[RenderMeshName]*:
Convex objects can be any completely closed
convex 3D shape. For example, a box can also be
a convex object
@@ -52,67 +55,86 @@ class ValidateUnrealStaticmeshName(pyblish.api.InstancePlugin):
families = ["unrealStaticMesh"]
label = "Unreal StaticMesh Name"
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
regex_mesh = r"SM_(?P<renderName>.*)_(\d{2})"
regex_collision = r"((UBX)|(UCP)|(USP)|(UCX))_(?P<renderName>.*)_(\d{2})"
regex_mesh = r"(?P<renderName>.*))"
regex_collision = r"(?P<renderName>.*)"
@classmethod
def get_invalid(cls, instance):
# find out if supplied transform is group or not
def is_group(groupName):
try:
children = cmds.listRelatives(groupName, children=True)
for child in children:
if not cmds.ls(child, transforms=True):
return False
invalid = []
project_settings = get_project_settings(Session["AVALON_PROJECT"])
collision_prefixes = (
project_settings
["maya"]
["create"]
["CreateUnrealStaticMesh"]
["collision_prefixes"]
)
combined_geometry_name = instance.data.get(
"staticMeshCombinedName", None)
if cls.validate_mesh:
# compile regex for testing names
regex_mesh = "{}{}".format(
("_" + cls.static_mesh_prefix) or "", cls.regex_mesh
)
sm_r = re.compile(regex_mesh)
if not sm_r.match(combined_geometry_name):
cls.log.error("Mesh doesn't comply with name validation.")
return True
except Exception:
if cls.validate_collision:
collision_set = instance.data.get("collisionMembers", None)
# soft-fail if there are no collision objects
if not collision_set:
cls.log.warning("No collision objects to validate.")
return False
invalid = []
content_instance = instance.data.get("setMembers", None)
if not content_instance:
cls.log.error("Instance has no nodes!")
return True
pass
descendants = cmds.listRelatives(content_instance,
allDescendents=True,
fullPath=True) or []
regex_collision = "{}{}".format(
"(?P<prefix>({}))_".format(
"|".join("{0}".format(p) for p in collision_prefixes)
) or "", cls.regex_collision
)
descendants = cmds.ls(descendants, noIntermediate=True, long=True)
trns = cmds.ls(descendants, long=False, type=('transform'))
cl_r = re.compile(regex_collision)
# filter out groups
filter = [node for node in trns if not is_group(node)]
# compile regex for testing names
sm_r = re.compile(cls.regex_mesh)
cl_r = re.compile(cls.regex_collision)
sm_names = []
col_names = []
for obj in filter:
sm_m = sm_r.match(obj)
if sm_m is None:
# test if it matches collision mesh
cl_r = sm_r.match(obj)
if cl_r is None:
cls.log.error("invalid mesh name on: {}".format(obj))
for obj in collision_set:
cl_m = cl_r.match(obj)
if not cl_m:
cls.log.error("{} is invalid".format(obj))
invalid.append(obj)
else:
col_names.append((cl_r.group("renderName"), obj))
else:
sm_names.append(sm_m.group("renderName"))
expected_collision = "{}_{}".format(
cl_m.group("prefix"),
combined_geometry_name
)
for c_mesh in col_names:
if c_mesh[0] not in sm_names:
cls.log.error(("collision name {} doesn't match any "
"static mesh names.").format(obj))
invalid.append(c_mesh[1])
if not obj.startswith(expected_collision):
cls.log.error(
"Collision object name doesn't match "
"static mesh name"
)
cls.log.error("{}_{} != {}_{}".format(
cl_m.group("prefix"),
cl_m.group("renderName"),
cl_m.group("prefix"),
combined_geometry_name,
))
invalid.append(obj)
return invalid
def process(self, instance):
if not self.validate_mesh and not self.validate_collision:
self.log.info("Validation of both mesh and collision names"
"is disabled.")
return
if not instance.data.get("collisionMembers", None):
self.log.info("There are no collision objects to validate")
return
invalid = self.get_invalid(instance)
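The collision regex assembled above combines the configured prefixes into a named group. A standalone sketch (the prefix list is hypothetical, mirroring the Settings lookup above):

```python
import re

# hypothetical configured collision prefixes
collision_prefixes = ["UBX", "UCP", "USP", "UCX"]

# build "(?P<prefix>(UBX|UCP|USP|UCX))_" + the renderName pattern
regex_collision = "{}{}".format(
    "(?P<prefix>({}))_".format("|".join(collision_prefixes)),
    r"(?P<renderName>.*)",
)
cl_r = re.compile(regex_collision)

m = cl_r.match("UBX_cube_01")
print(m.group("prefix"), m.group("renderName"))
```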


@@ -4,7 +4,7 @@ import re
import pyblish.api
from openpype.lib import prepare_template_data
from openpype.lib.plugin_tools import parse_json
from openpype.lib.plugin_tools import parse_json, get_batch_asset_task_info
from openpype.hosts.photoshop import api as photoshop
@@ -29,26 +29,32 @@ class CollectRemoteInstances(pyblish.api.ContextPlugin):
def process(self, context):
self.log.info("CollectRemoteInstances")
self.log.info("mapping:: {}".format(self.color_code_mapping))
self.log.debug("mapping:: {}".format(self.color_code_mapping))
# parse variant if used in webpublishing, comes from webpublisher batch
batch_dir = os.environ.get("OPENPYPE_PUBLISH_DATA")
variant = "Main"
task_data = None
if batch_dir and os.path.exists(batch_dir):
# TODO check if batch manifest is same as tasks manifests
task_data = parse_json(os.path.join(batch_dir,
"manifest.json"))
if not task_data:
raise ValueError(
"Cannot parse batch meta in {} folder".format(batch_dir))
variant = task_data["variant"]
if not task_data:
raise ValueError(
"Cannot parse batch meta in {} folder".format(batch_dir))
variant = task_data["variant"]
stub = photoshop.stub()
layers = stub.get_layers()
asset, task_name, task_type = get_batch_asset_task_info(
task_data["context"])
if not task_name:
task_name = task_type
instance_names = []
for layer in layers:
self.log.info("Layer:: {}".format(layer))
self.log.debug("Layer:: {}".format(layer))
resolved_family, resolved_subset_template = self._resolve_mapping(
layer
)
@@ -57,7 +63,7 @@ class CollectRemoteInstances(pyblish.api.ContextPlugin):
resolved_subset_template))
if not resolved_subset_template or not resolved_family:
self.log.debug("!!! Not marked, skip")
self.log.debug("!!! Not found family or template, skip")
continue
if layer.parents:
@@ -68,8 +74,8 @@ class CollectRemoteInstances(pyblish.api.ContextPlugin):
instance.append(layer)
instance.data["family"] = resolved_family
instance.data["publish"] = layer.visible
instance.data["asset"] = context.data["assetEntity"]["name"]
instance.data["task"] = context.data["taskType"]
instance.data["asset"] = asset
instance.data["task"] = task_name
fill_pairs = {
"variant": variant,
@@ -114,7 +120,6 @@ class CollectRemoteInstances(pyblish.api.ContextPlugin):
family_list.append(mapping["family"])
subset_name_list.append(mapping["subset_template_name"])
if len(subset_name_list) > 1:
self.log.warning("Multiple mappings found for '{}'".
format(layer.name))


@@ -71,8 +71,18 @@ class AnimationFBXLoader(api.Loader):
if instance_name:
automated = True
actor_name = 'PersistentLevel.' + instance_name
actor = unreal.EditorLevelLibrary.get_actor_reference(actor_name)
# Old method to get the actor
# actor_name = 'PersistentLevel.' + instance_name
# actor = unreal.EditorLevelLibrary.get_actor_reference(actor_name)
actor = None
actors = unreal.EditorLevelLibrary.get_all_level_actors()
for a in actors:
if a.get_class().get_name() != "SkeletalMeshActor":
continue
if a.get_actor_label() == instance_name:
actor = a
break
if not actor:
raise Exception(f"Could not find actor {instance_name}")
skeleton = actor.skeletal_mesh_component.skeletal_mesh.skeleton
task.options.set_editor_property('skeleton', skeleton)
@@ -173,20 +183,35 @@ class AnimationFBXLoader(api.Loader):
task.set_editor_property('destination_name', name)
task.set_editor_property('replace_existing', True)
task.set_editor_property('automated', True)
task.set_editor_property('save', False)
task.set_editor_property('save', True)
# set import options here
task.options.set_editor_property(
'automated_import_should_detect_type', True)
'automated_import_should_detect_type', False)
task.options.set_editor_property(
'original_import_type', unreal.FBXImportType.FBXIT_ANIMATION)
'original_import_type', unreal.FBXImportType.FBXIT_SKELETAL_MESH)
task.options.set_editor_property(
'mesh_type_to_import', unreal.FBXImportType.FBXIT_ANIMATION)
task.options.set_editor_property('import_mesh', False)
task.options.set_editor_property('import_animations', True)
task.options.set_editor_property('override_full_name', True)
task.options.skeletal_mesh_import_data.set_editor_property(
'import_content_type',
unreal.FBXImportContentType.FBXICT_SKINNING_WEIGHTS
task.options.anim_sequence_import_data.set_editor_property(
'animation_length',
unreal.FBXAnimationLengthImportType.FBXALIT_EXPORTED_TIME
)
task.options.anim_sequence_import_data.set_editor_property(
'import_meshes_in_bone_hierarchy', False)
task.options.anim_sequence_import_data.set_editor_property(
'use_default_sample_rate', True)
task.options.anim_sequence_import_data.set_editor_property(
'import_custom_attribute', True)
task.options.anim_sequence_import_data.set_editor_property(
'import_bone_tracks', True)
task.options.anim_sequence_import_data.set_editor_property(
'remove_redundant_keys', True)
task.options.anim_sequence_import_data.set_editor_property(
'convert_scene', True)
skeletal_mesh = unreal.EditorAssetLibrary.load_asset(
container.get('namespace') + "/" + container.get('asset_name'))
@@ -219,7 +244,7 @@ class AnimationFBXLoader(api.Loader):
unreal.EditorAssetLibrary.delete_directory(path)
asset_content = unreal.EditorAssetLibrary.list_assets(
parent_path, recursive=False
parent_path, recursive=False, include_folder=True
)
if len(asset_content) == 0:


@@ -0,0 +1,544 @@
import os
import json
from pathlib import Path
import unreal
from unreal import EditorAssetLibrary
from unreal import EditorLevelLibrary
from unreal import AssetToolsHelpers
from unreal import FBXImportType
from unreal import MathLibrary as umath
from avalon import api, pipeline
from avalon.unreal import lib
from avalon.unreal import pipeline as unreal_pipeline
class LayoutLoader(api.Loader):
"""Load Layout from a JSON file"""
families = ["layout"]
representations = ["json"]
label = "Load Layout"
icon = "code-fork"
color = "orange"
def _get_asset_containers(self, path):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
asset_content = EditorAssetLibrary.list_assets(
path, recursive=True)
asset_containers = []
# Get all the asset containers
for a in asset_content:
obj = ar.get_asset_by_object_path(a)
if obj.get_asset().get_class().get_name() == 'AssetContainer':
asset_containers.append(obj)
return asset_containers
def _get_fbx_loader(self, loaders, family):
name = ""
if family == 'rig':
name = "SkeletalMeshFBXLoader"
elif family == 'model':
name = "StaticMeshFBXLoader"
elif family == 'camera':
name = "CameraLoader"
if name == "":
return None
for loader in loaders:
if loader.__name__ == name:
return loader
return None
def _get_abc_loader(self, loaders, family):
name = ""
if family == 'rig':
name = "SkeletalMeshAlembicLoader"
elif family == 'model':
name = "StaticMeshAlembicLoader"
if name == "":
return None
for loader in loaders:
if loader.__name__ == name:
return loader
return None
def _process_family(self, assets, classname, transform, inst_name=None):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
actors = []
for asset in assets:
obj = ar.get_asset_by_object_path(asset).get_asset()
if obj.get_class().get_name() == classname:
actor = EditorLevelLibrary.spawn_actor_from_object(
obj,
transform.get('translation')
)
if inst_name:
try:
# Rename method leads to crash
# actor.rename(name=inst_name)
# The label works, although it make it slightly more
# complicated to check for the names, as we need to
# loop through all the actors in the level
actor.set_actor_label(inst_name)
except Exception as e:
print(e)
actor.set_actor_rotation(unreal.Rotator(
umath.radians_to_degrees(
transform.get('rotation').get('x')),
-umath.radians_to_degrees(
transform.get('rotation').get('y')),
umath.radians_to_degrees(
transform.get('rotation').get('z')),
), False)
actor.set_actor_scale3d(transform.get('scale'))
actors.append(actor)
return actors
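The rotation handling in `_process_family` converts the layout's radians to degrees and negates the Y component. A minimal sketch of that conversion, assuming the same axis convention as the code above:

```python
import math

def to_unreal_rotation(rotation):
    # the layout stores radians; Unreal's Rotator takes degrees, and the
    # Y component is negated (assumption mirroring the loader above)
    return (
        math.degrees(rotation["x"]),
        -math.degrees(rotation["y"]),
        math.degrees(rotation["z"]),
    )

print(to_unreal_rotation({"x": 0.0, "y": math.pi, "z": 0.0}))
```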
def _import_animation(
self, asset_dir, path, instance_name, skeleton, actors_dict,
animation_file):
anim_file = Path(animation_file)
anim_file_name = anim_file.with_suffix('')
anim_path = f"{asset_dir}/animations/{anim_file_name}"
# Import animation
task = unreal.AssetImportTask()
task.options = unreal.FbxImportUI()
task.set_editor_property(
'filename', str(path.with_suffix(f".{animation_file}")))
task.set_editor_property('destination_path', anim_path)
task.set_editor_property(
'destination_name', f"{instance_name}_animation")
task.set_editor_property('replace_existing', False)
task.set_editor_property('automated', True)
task.set_editor_property('save', False)
# set import options here
task.options.set_editor_property(
'automated_import_should_detect_type', False)
task.options.set_editor_property(
'original_import_type', FBXImportType.FBXIT_SKELETAL_MESH)
task.options.set_editor_property(
'mesh_type_to_import', FBXImportType.FBXIT_ANIMATION)
task.options.set_editor_property('import_mesh', False)
task.options.set_editor_property('import_animations', True)
task.options.set_editor_property('override_full_name', True)
task.options.set_editor_property('skeleton', skeleton)
task.options.anim_sequence_import_data.set_editor_property(
'animation_length',
unreal.FBXAnimationLengthImportType.FBXALIT_EXPORTED_TIME
)
task.options.anim_sequence_import_data.set_editor_property(
'import_meshes_in_bone_hierarchy', False)
task.options.anim_sequence_import_data.set_editor_property(
'use_default_sample_rate', True)
task.options.anim_sequence_import_data.set_editor_property(
'import_custom_attribute', True)
task.options.anim_sequence_import_data.set_editor_property(
'import_bone_tracks', True)
task.options.anim_sequence_import_data.set_editor_property(
'remove_redundant_keys', True)
task.options.anim_sequence_import_data.set_editor_property(
'convert_scene', True)
AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
asset_content = unreal.EditorAssetLibrary.list_assets(
anim_path, recursive=False, include_folder=False
)
animation = None
for a in asset_content:
unreal.EditorAssetLibrary.save_asset(a)
imported_asset_data = unreal.EditorAssetLibrary.find_asset_data(a)
imported_asset = unreal.AssetRegistryHelpers.get_asset(
imported_asset_data)
if imported_asset.__class__ == unreal.AnimSequence:
animation = imported_asset
break
if animation:
actor = None
if actors_dict.get(instance_name):
for a in actors_dict.get(instance_name):
if a.get_class().get_name() == 'SkeletalMeshActor':
actor = a
break
animation.set_editor_property('enable_root_motion', True)
actor.skeletal_mesh_component.set_editor_property(
'animation_mode', unreal.AnimationMode.ANIMATION_SINGLE_NODE)
actor.skeletal_mesh_component.animation_data.set_editor_property(
'anim_to_play', animation)
def _process(self, libpath, asset_dir, loaded=None):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
with open(libpath, "r") as fp:
data = json.load(fp)
all_loaders = api.discover(api.Loader)
if not loaded:
loaded = []
path = Path(libpath)
skeleton_dict = {}
actors_dict = {}
for element in data:
reference = None
if element.get('reference_fbx'):
reference = element.get('reference_fbx')
elif element.get('reference_abc'):
reference = element.get('reference_abc')
# If reference is None, this element is skipped, as it cannot be
# imported in Unreal
if not reference:
continue
instance_name = element.get('instance_name')
skeleton = None
if reference not in loaded:
loaded.append(reference)
family = element.get('family')
loaders = api.loaders_from_representation(
all_loaders, reference)
loader = None
if reference == element.get('reference_fbx'):
loader = self._get_fbx_loader(loaders, family)
elif reference == element.get('reference_abc'):
loader = self._get_abc_loader(loaders, family)
if not loader:
continue
options = {
"asset_dir": asset_dir
}
assets = api.load(
loader,
reference,
namespace=instance_name,
options=options
)
instances = [
item for item in data
if (item.get('reference_fbx') == reference or
item.get('reference_abc') == reference)]
for instance in instances:
transform = instance.get('transform')
inst = instance.get('instance_name')
actors = []
if family == 'model':
actors = self._process_family(
assets, 'StaticMesh', transform, inst)
elif family == 'rig':
actors = self._process_family(
assets, 'SkeletalMesh', transform, inst)
actors_dict[inst] = actors
if family == 'rig':
# Finds skeleton among the imported assets
for asset in assets:
obj = ar.get_asset_by_object_path(asset).get_asset()
if obj.get_class().get_name() == 'Skeleton':
skeleton = obj
if skeleton:
break
if skeleton:
skeleton_dict[reference] = skeleton
else:
skeleton = skeleton_dict.get(reference)
animation_file = element.get('animation')
if animation_file and skeleton:
self._import_animation(
asset_dir, path, instance_name, skeleton,
actors_dict, animation_file)
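The element shape that `_process` expects can be sketched with hypothetical data (keys taken from the `get` calls above; the values are invented):

```python
# hypothetical layout JSON, as loaded from the representation file
layout = [
    {
        "reference_fbx": "rep-id-1",
        "instance_name": "chair_01",
        "family": "model",
        "transform": {
            "translation": [0, 0, 0],
            "rotation": {"x": 0.0, "y": 0.0, "z": 0.0},
            "scale": [1, 1, 1],
        },
    },
    {"instance_name": "no_ref"},  # skipped: no importable reference
]

# group every placed instance by the representation it references,
# like the loaded/instances bookkeeping above
by_reference = {}
for element in layout:
    reference = element.get("reference_fbx") or element.get("reference_abc")
    if not reference:
        continue
    by_reference.setdefault(reference, []).append(element["instance_name"])
print(by_reference)
```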
def _remove_family(self, assets, components, classname, propname):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
objects = []
for a in assets:
obj = ar.get_asset_by_object_path(a)
if obj.get_asset().get_class().get_name() == classname:
objects.append(obj)
for obj in objects:
for comp in components:
if comp.get_editor_property(propname) == obj.get_asset():
comp.get_owner().destroy_actor()
def _remove_actors(self, path):
asset_containers = self._get_asset_containers(path)
# Get all the static and skeletal meshes components in the level
components = EditorLevelLibrary.get_all_level_actors_components()
static_meshes_comp = [
c for c in components
if c.get_class().get_name() == 'StaticMeshComponent']
skel_meshes_comp = [
c for c in components
if c.get_class().get_name() == 'SkeletalMeshComponent']
# For all the asset containers, get the static and skeletal meshes.
# Then, check the components in the level and destroy the matching
# actors.
for asset_container in asset_containers:
package_path = asset_container.get_editor_property('package_path')
family = EditorAssetLibrary.get_metadata_tag(
asset_container.get_asset(), 'family')
assets = EditorAssetLibrary.list_assets(
str(package_path), recursive=False)
if family == 'model':
self._remove_family(
assets, static_meshes_comp, 'StaticMesh', 'static_mesh')
elif family == 'rig':
self._remove_family(
assets, skel_meshes_comp, 'SkeletalMesh', 'skeletal_mesh')
def load(self, context, name, namespace, options):
"""
Load and containerise representation into Content Browser.
This is a two-step process. First, the content is imported to a temporary
path, then `containerise()` is called on it - this moves all content to a
new directory, creates an AssetContainer there and imprints it with
metadata, marking the path as a container.
Args:
context (dict): application context
name (str): subset name
namespace (str): in Unreal this is basically a path to the container.
It is not passed here, so the namespace is set
by `containerise()` only once the real path
is known.
data (dict): Data to be imprinted. Not used for now; data are
imprinted by `containerise()`.
Returns:
list(str): list of container content
"""
# Create directory for asset and avalon container
root = "/Game/Avalon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
"{}/{}/{}".format(root, asset, name), suffix="")
container_name += suffix
EditorAssetLibrary.make_directory(asset_dir)
self._process(self.fname, asset_dir)
# Create Asset Container
lib.create_avalon_container(
container=container_name, path=asset_dir)
data = {
"schema": "openpype:container-2.0",
"id": pipeline.AVALON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(
"{}/{}".format(asset_dir, container_name), data)
asset_content = EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=False)
for a in asset_content:
EditorAssetLibrary.save_asset(a)
return asset_content
def update(self, container, representation):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
source_path = api.get_representation_path(representation)
destination_path = container["namespace"]
libpath = Path(api.get_representation_path(representation))
self._remove_actors(destination_path)
# Delete old animations
anim_path = f"{destination_path}/animations/"
EditorAssetLibrary.delete_directory(anim_path)
with open(source_path, "r") as fp:
data = json.load(fp)
references = [e.get('reference_fbx') for e in data]
asset_containers = self._get_asset_containers(destination_path)
loaded = []
# Delete all the assets imported with the previous version of the
# layout, if they're not in the new layout.
for asset_container in asset_containers:
if asset_container.get_editor_property(
'asset_name') == container["objectName"]:
continue
ref = EditorAssetLibrary.get_metadata_tag(
asset_container.get_asset(), 'representation')
ppath = asset_container.get_editor_property('package_path')
if ref not in references:
# If the asset is not in the new layout, delete it.
# Also check if the parent directory is empty, and delete that
# as well, if it is.
EditorAssetLibrary.delete_directory(ppath)
parent = os.path.dirname(str(ppath))
parent_content = EditorAssetLibrary.list_assets(
parent, recursive=False, include_folder=True
)
if len(parent_content) == 0:
EditorAssetLibrary.delete_directory(parent)
else:
# If the asset is in the new layout, search the instances in
# the JSON file, and create actors for them.
actors_dict = {}
skeleton_dict = {}
for element in data:
reference = element.get('reference_fbx')
instance_name = element.get('instance_name')
skeleton = None
if reference == ref and ref not in loaded:
loaded.append(ref)
family = element.get('family')
assets = EditorAssetLibrary.list_assets(
ppath, recursive=True, include_folder=False)
instances = [
item for item in data
if item.get('reference_fbx') == reference]
for instance in instances:
transform = instance.get('transform')
inst = instance.get('instance_name')
actors = []
if family == 'model':
actors = self._process_family(
assets, 'StaticMesh', transform, inst)
elif family == 'rig':
actors = self._process_family(
assets, 'SkeletalMesh', transform, inst)
actors_dict[inst] = actors
if family == 'rig':
# Finds skeleton among the imported assets
for asset in assets:
obj = ar.get_asset_by_object_path(
asset).get_asset()
if obj.get_class().get_name() == 'Skeleton':
skeleton = obj
if skeleton:
break
if skeleton:
skeleton_dict[reference] = skeleton
else:
skeleton = skeleton_dict.get(reference)
animation_file = element.get('animation')
if animation_file and skeleton:
self._import_animation(
destination_path, libpath,
instance_name, skeleton,
actors_dict, animation_file)
self._process(source_path, destination_path, loaded)
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
asset_content = EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=False)
for a in asset_content:
EditorAssetLibrary.save_asset(a)
def remove(self, container):
"""
First destroys all actors of the assets to be removed, then deletes
the asset's directory.
"""
path = container["namespace"]
parent_path = os.path.dirname(path)
self._remove_actors(path)
EditorAssetLibrary.delete_directory(path)
asset_content = EditorAssetLibrary.list_assets(
parent_path, recursive=False, include_folder=True
)
if len(asset_content) == 0:
EditorAssetLibrary.delete_directory(parent_path)
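Both `update` and `remove` above use the same cleanup pattern: delete an asset directory, then delete its parent as well if nothing else is left inside it. A minimal sketch of that pattern, with the Unreal `EditorAssetLibrary` calls injected as plain callables (they are only available inside the Unreal editor, so the callables here stand in for them):

```python
import os


def delete_with_empty_parent(path, delete_directory, list_assets):
    """Delete 'path', then delete its parent too if the parent is empty.

    'delete_directory' and 'list_assets' stand in for the Unreal
    EditorAssetLibrary functions used by the loader above.
    """
    parent = os.path.dirname(path)
    delete_directory(path)
    # Mirror the loader: only remove the parent when nothing remains in it.
    content = list_assets(parent, recursive=False, include_folder=True)
    if len(content) == 0:
        delete_directory(parent)
```

Inside Unreal the real `EditorAssetLibrary.delete_directory` and `EditorAssetLibrary.list_assets` would be passed in directly.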

View file

@@ -15,7 +15,7 @@ class SkeletalMeshFBXLoader(api.Loader):
icon = "cube"
color = "orange"
def load(self, context, name, namespace, data):
def load(self, context, name, namespace, options):
"""
Load and containerise representation into Content Browser.
@@ -40,6 +40,8 @@ class SkeletalMeshFBXLoader(api.Loader):
# Create directory for asset and avalon container
root = "/Game/Avalon/Assets"
if options and options.get("asset_dir"):
root = options["asset_dir"]
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:

View file

@@ -40,7 +40,7 @@ class StaticMeshFBXLoader(api.Loader):
return task
def load(self, context, name, namespace, data):
def load(self, context, name, namespace, options):
"""
Load and containerise representation into Content Browser.
@@ -65,6 +65,8 @@ class StaticMeshFBXLoader(api.Loader):
# Create directory for asset and avalon container
root = "/Game/Avalon/Assets"
if options and options.get("asset_dir"):
root = options["asset_dir"]
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:

View file

@@ -0,0 +1,139 @@
import os
import shutil
import pyblish.api
from openpype.lib import (
get_ffmpeg_tool_path,
run_subprocess,
get_transcode_temp_directory,
convert_for_ffmpeg,
should_convert_for_ffmpeg
)
class ExtractThumbnail(pyblish.api.InstancePlugin):
"""Create jpg thumbnail from input using ffmpeg."""
label = "Extract Thumbnail"
order = pyblish.api.ExtractorOrder
families = [
"render",
"image"
]
hosts = ["webpublisher"]
targets = ["filespublish"]
def process(self, instance):
self.log.info("subset {}".format(instance.data['subset']))
filtered_repres = self._get_filtered_repres(instance)
for repre in filtered_repres:
repre_files = repre["files"]
if not isinstance(repre_files, (list, tuple)):
input_file = repre_files
else:
file_index = int(float(len(repre_files)) * 0.5)
input_file = repre_files[file_index]
stagingdir = os.path.normpath(repre["stagingDir"])
full_input_path = os.path.join(stagingdir, input_file)
self.log.info("Input filepath: {}".format(full_input_path))
do_convert = should_convert_for_ffmpeg(full_input_path)
# If the result is None, the need for conversion can't be
# determined
if do_convert is None:
self.log.info((
"Can't determine if representation requires conversion."
" Skipped."
))
continue
# Do conversion if needed
# - change staging dir of source representation
# - must be set back after output definitions processing
convert_dir = None
if do_convert:
convert_dir = get_transcode_temp_directory()
filename = os.path.basename(full_input_path)
convert_for_ffmpeg(
full_input_path,
convert_dir,
None,
None,
self.log
)
full_input_path = os.path.join(convert_dir, filename)
filename = os.path.splitext(input_file)[0]
while filename.endswith("."):
filename = filename[:-1]
thumbnail_filename = filename + "_thumbnail.jpg"
full_output_path = os.path.join(stagingdir, thumbnail_filename)
self.log.info("output {}".format(full_output_path))
ffmpeg_args = [
get_ffmpeg_tool_path("ffmpeg"),
"-y",
"-i", full_input_path,
"-vframes", "1",
full_output_path
]
# run subprocess
self.log.debug("{}".format(" ".join(ffmpeg_args)))
try: # temporary until oiiotool is supported cross platform
run_subprocess(
ffmpeg_args, logger=self.log
)
except RuntimeError as exp:
if "Compression" in str(exp):
self.log.debug(
"Unsupported compression on input files. Skipping!!!"
)
return
self.log.warning("Conversion crashed", exc_info=True)
raise
new_repre = {
"name": "thumbnail",
"ext": "jpg",
"files": thumbnail_filename,
"stagingDir": stagingdir,
"thumbnail": True,
"tags": ["thumbnail"]
}
# adding representation
self.log.debug("Adding: {}".format(new_repre))
instance.data["representations"].append(new_repre)
# Cleanup temp folder
if convert_dir is not None and os.path.exists(convert_dir):
shutil.rmtree(convert_dir)
def _get_filtered_repres(self, instance):
filtered_repres = []
repres = instance.data.get("representations") or []
for repre in repres:
self.log.debug(repre)
tags = repre.get("tags") or []
# Skip instance if already has thumbnail representation
if "thumbnail" in tags:
return []
if "review" not in tags:
continue
if not repre.get("files"):
self.log.info((
"Representation \"{}\" doesn't have files. Skipping"
).format(repre["name"]))
continue
filtered_repres.append(repre)
return filtered_repres
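The frame picked for the thumbnail and the ffmpeg command assembled above can be isolated as pure helpers. A sketch, assuming the ffmpeg binary is resolved elsewhere (`get_ffmpeg_tool_path` is OpenPype-specific and not reproduced here):

```python
import os


def pick_thumbnail_source(files):
    """Single file, or roughly the middle frame of a sequence,
    mirroring file_index = int(len(files) * 0.5) above."""
    if not isinstance(files, (list, tuple)):
        return files
    return files[int(len(files) * 0.5)]


def build_thumbnail_args(staging_dir, input_file, ffmpeg_path="ffmpeg"):
    """Assemble the single-frame jpg extraction command."""
    filename = os.path.splitext(input_file)[0].rstrip(".")
    output_path = os.path.join(staging_dir, filename + "_thumbnail.jpg")
    return [
        ffmpeg_path,
        "-y",                     # overwrite existing output
        "-i", os.path.join(staging_dir, input_file),
        "-vframes", "1",          # grab exactly one frame
        output_path,
    ]
```

The `.rstrip(".")` matches the extractor's while-loop that strips trailing dots before appending the `_thumbnail.jpg` suffix.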

View file

@@ -175,7 +175,8 @@ from .openpype_version import (
get_expected_version,
is_running_from_build,
is_running_staging,
is_current_version_studio_latest
is_current_version_studio_latest,
is_current_version_higher_than_expected
)
terminal = Terminal

View file

@@ -1490,6 +1490,7 @@ def _prepare_last_workfile(data, workdir):
import avalon.api
log = data["log"]
_workdir_data = data.get("workdir_data")
if not _workdir_data:
log.info(
@@ -1503,9 +1504,15 @@ def _prepare_last_workfile(data, workdir):
project_name = data["project_name"]
task_name = data["task_name"]
task_type = data["task_type"]
start_last_workfile = should_start_last_workfile(
project_name, app.host_name, task_name, task_type
)
start_last_workfile = data.get("start_last_workfile")
if start_last_workfile is None:
start_last_workfile = should_start_last_workfile(
project_name, app.host_name, task_name, task_type
)
else:
log.info("Opening of last workfile was disabled by user")
data["start_last_workfile"] = start_last_workfile
workfile_startup = should_workfile_tool_start(

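The fallback logic in the `_prepare_last_workfile` hunk above — an explicit user choice wins, the settings profile is consulted only when the choice is `None` — can be seen in isolation. The function and argument names below are illustrative, not OpenPype API:

```python
def resolve_start_last_workfile(data, settings_lookup):
    """Return the user's explicit choice if present, otherwise ask the
    settings profile; cache the result back into 'data' either way."""
    value = data.get("start_last_workfile")
    if value is None:
        # No explicit choice: fall back to the studio settings profile.
        value = settings_lookup()
    data["start_last_workfile"] = value
    return value
```

In the real code `settings_lookup` corresponds to the `should_start_last_workfile(...)` call with project, host, task name, and task type.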
View file

@@ -195,3 +195,32 @@ def is_current_version_studio_latest():
expected_version = get_expected_version()
# Check if current version is expected version
return current_version == expected_version
def is_current_version_higher_than_expected():
"""Is current OpenPype version higher than version defined by studio.
Returns:
None: Can't be determined, e.g. when running from code or the build
is too old.
bool: True when the current version is higher than the studio version.
"""
output = None
# Skip if is not running from build or build does not support version
# control or path to folder with zip files is not accessible
if (
not is_running_from_build()
or not op_version_control_available()
or not openpype_path_is_accessible()
):
return output
# Get OpenPypeVersion class
OpenPypeVersion = get_OpenPypeVersion()
# Convert current version to OpenPypeVersion object
current_version = OpenPypeVersion(version=get_openpype_version())
# Get expected version (from settings)
expected_version = get_expected_version()
# Check if current version is expected version
return current_version > expected_version
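The final comparison works because `OpenPypeVersion` implements rich comparison that orders versions numerically. A stripped-down illustration of why that matters (the real class also understands nightly and staging suffixes, which this sketch does not):

```python
def parse_version(version):
    """Split 'major.minor.patch' into an integer tuple so that tuple
    comparison orders versions numerically instead of lexically."""
    return tuple(int(part) for part in version.split("."))
```

For example `parse_version("3.10.0") > parse_version("3.9.1")` holds numerically, even though the raw strings compare the other way around.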

View file

@@ -10,11 +10,12 @@ from .execute import get_openpype_execute_args
from .local_settings import get_local_site_id
from .openpype_version import (
is_running_from_build,
get_openpype_version
get_openpype_version,
get_build_version
)
def get_pype_info():
def get_openpype_info():
"""Information about currently used Pype process."""
executable_args = get_openpype_execute_args()
if is_running_from_build():
@@ -23,6 +24,7 @@ def get_pype_info():
version_type = "code"
return {
"build_version": get_build_version(),
"version": get_openpype_version(),
"version_type": version_type,
"executable": executable_args[-1],
@@ -51,7 +53,7 @@ def get_workstation_info():
def get_all_current_info():
"""All information about current process in one dictionary."""
return {
"pype": get_pype_info(),
"pype": get_openpype_info(),
"workstation": get_workstation_info(),
"env": os.environ.copy(),
"local_settings": get_local_settings()

View file

@@ -34,11 +34,17 @@ def get_vendor_bin_path(bin_app):
def get_oiio_tools_path(tool="oiiotool"):
"""Path to vendorized OpenImageIO tool executables.
On Windows it adds the .exe extension if missing from the tool argument.
Args:
tool (string): Tool name (oiiotool, maketx, ...).
Default is "oiiotool".
"""
oiio_dir = get_vendor_bin_path("oiio")
if platform.system().lower() == "windows" and not tool.lower().endswith(
".exe"
):
tool = "{}.exe".format(tool)
return os.path.join(oiio_dir, tool)

View file

@@ -164,7 +164,7 @@ class ProcessEventHub(SocketBaseEventHub):
sys.exit(0)
def wait(self, duration=None):
"""Overriden wait
"""Overridden wait
Events are loaded from MongoDB when the queue is empty. A handled event
is marked as processed in MongoDB.
"""

View file

@@ -95,7 +95,7 @@ class DropboxHandler(AbstractProvider):
"key": "acting_as_member",
"label": "Acting As Member"
},
# roots could be overriden only on Project level, User cannot
# roots could be overridden only on Project level, User cannot
{
"key": "root",
"label": "Roots",

View file

@@ -119,7 +119,7 @@ class GDriveHandler(AbstractProvider):
# {platform} tells that value is multiplatform and only specific OS
# should be returned
editable = [
# credentials could be overriden on Project or User level
# credentials could be overridden on Project or User level
{
"type": "path",
"key": "credentials_url",
@@ -127,7 +127,7 @@ class GDriveHandler(AbstractProvider):
"multiplatform": True,
"placeholder": "Credentials url"
},
# roots could be overriden only on Project leve, User cannot
# roots could be overridden only on Project level, User cannot
{
"key": "root",
"label": "Roots",
@@ -414,7 +414,7 @@ class GDriveHandler(AbstractProvider):
def delete_folder(self, path, force=False):
"""
Deletes folder on GDrive. Checks if folder contains any files or
subfolders. In that case raises error, could be overriden by
subfolders. In that case raises error, could be overridden by
'force' argument.
In that case deletes folder on 'path' and all its children.

View file

@@ -97,7 +97,7 @@ class SFTPHandler(AbstractProvider):
# {platform} tells that value is multiplatform and only specific OS
# should be returned
editable = [
# credentials could be overriden on Project or User level
# credentials could be overridden on Project or User level
{
'key': "sftp_host",
'label': "SFTP host name",
@@ -129,7 +129,7 @@ class SFTPHandler(AbstractProvider):
'label': "SFTP user ssh key password",
'type': 'text'
},
# roots could be overriden only on Project leve, User cannot
# roots could be overridden only on Project level, User cannot
{
"key": "root",
"label": "Roots",

View file

@@ -1073,7 +1073,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
"""
Returns settings for 'studio' and user's local site
Returns base values from setting, not overriden by Local Settings,
Returns base values from setting, not overridden by Local Settings,
eg. value used to push TO LS not to get actual value for syncing.
"""
if not project_name:

View file

@@ -115,7 +115,7 @@ class ITrayAction(ITrayModule):
Add action to tray menu which will trigger `on_action_trigger`.
It is expected to be used for showing tools.
Methods `tray_start`, `tray_exit` and `connect_with_modules` are overriden
Methods `tray_start`, `tray_exit` and `connect_with_modules` are overridden
as it's not expected that action will use them. But it is possible if
necessary.
"""

View file

@@ -72,7 +72,7 @@ class WorkerRpc(JsonRpc):
self._job_queue.remove_worker(worker)
async def handle_websocket_request(self, http_request):
"""Overide this method to catch CLOSING messages."""
"""Override this method to catch CLOSING messages."""
http_request.msg_id = 0
http_request.pending = {}

View file

@@ -1,3 +1,8 @@
from .events import (
BaseEvent,
BeforeWorkfileSave
)
from .attribute_definitions import (
AbtractAttrDef,
UnknownDef,
@@ -9,6 +14,9 @@ from .attribute_definitions import (
__all__ = (
"BaseEvent",
"BeforeWorkfileSave",
"AbtractAttrDef",
"UnknownDef",
"NumberDef",

View file

@@ -0,0 +1,51 @@
"""Events holding data about specific event."""
# Inherit from 'object' for Python 2 hosts
class BaseEvent(object):
"""Base event object.
Can be used for anything because the data is not specific. The only
required argument is the topic, which defines why the event is happening
and may be used for filtering.
Args:
topic (str): Identifier of the event.
data (Any): Data specific to the event. A dictionary is recommended.
"""
_data = {}
def __init__(self, topic, data=None):
self._topic = topic
if data is None:
data = {}
self._data = data
@property
def data(self):
return self._data
@property
def topic(self):
return self._topic
@classmethod
def emit(cls, *args, **kwargs):
"""Create object of event and emit.
Args:
Same args as '__init__' expects which may be class specific.
"""
from avalon import pipeline
obj = cls(*args, **kwargs)
pipeline.emit(obj.topic, [obj])
return obj
class BeforeWorkfileSave(BaseEvent):
"""Before workfile changes event data."""
def __init__(self, filename, workdir):
super(BeforeWorkfileSave, self).__init__("before.workfile.save")
self.filename = filename
self.workdir_path = workdir
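Creating and inspecting such an event is straightforward. A small usage sketch, constructing directly rather than via `emit` (which needs the avalon pipeline available); the topic string matches the one used by `BeforeWorkfileSave`, while the payload filename is made up for illustration:

```python
# Trimmed copy of the BaseEvent class above, enough to demonstrate usage.
class BaseEvent(object):
    def __init__(self, topic, data=None):
        self._topic = topic
        self._data = data if data is not None else {}

    @property
    def topic(self):
        return self._topic

    @property
    def data(self):
        return self._data


# Example payload: the filename here is hypothetical.
event = BaseEvent("before.workfile.save", {"filename": "shot010_v003.ma"})
```

Subscribers registered for the `"before.workfile.save"` topic would then receive this object and read its `data`.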

View file

@@ -31,7 +31,7 @@ class DiscoverResult:
def publish_plugins_discover(paths=None):
"""Find and return available pyblish plug-ins
Overriden function from `pyblish` module to be able collect crashed files
Overridden function from `pyblish` module to be able collect crashed files
and reason of their crash.
Arguments:

View file

@@ -46,7 +46,7 @@ class CollectOtioReview(pyblish.api.InstancePlugin):
# loop all tracks and match with name in `reviewTrack`
for track in otio_timeline.tracks:
if review_track_name not in track.name:
if review_track_name != track.name:
continue
# process correct track

View file

@@ -11,6 +11,7 @@ class CollectSceneVersion(pyblish.api.ContextPlugin):
order = pyblish.api.CollectorOrder
label = 'Collect Scene Version'
# configurable in Settings
hosts = [
"aftereffects",
"blender",
@@ -26,7 +27,19 @@ class CollectSceneVersion(pyblish.api.ContextPlugin):
"tvpaint"
]
# in some cases of headless publishing (for example webpublisher using PS)
# you want to ignore version from name and let integrate use next version
skip_hosts_headless_publish = []
def process(self, context):
# tests should be close to regular publish as possible
if (
os.environ.get("HEADLESS_PUBLISH")
and not os.environ.get("IS_TEST")
and context.data["hostName"] in self.skip_hosts_headless_publish):
self.log.debug("Skipping for headless publishing")
return
assert context.data.get('currentFile'), "Cannot get current file"
filename = os.path.basename(context.data.get('currentFile'))
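The collector then parses a version number out of that filename via an OpenPype helper not shown in this hunk. An illustrative regex for the common `_v###` naming convention — the pattern and function name below are assumptions, not the actual helper:

```python
import re


def version_from_filename(filename):
    """Pull the numeric version out of names like 'sh010_comp_v012.nk'."""
    match = re.search(r"_v(\d+)", filename)
    # Return None when no version token is present in the name.
    return int(match.group(1)) if match else None
```

When headless publishing skips this collector, the integrator falls back to picking the next available version instead of trusting the name.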

View file

@@ -24,7 +24,7 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
"imagesequence", "render", "render2d",
"source", "plate", "take"
]
hosts = ["shell", "fusion", "resolve", "webpublisher"]
hosts = ["shell", "fusion", "resolve"]
enabled = False
# presetable attribute

View file

@@ -273,6 +273,8 @@ class ExtractOTIOReview(openpype.api.Extractor):
src_start = int(avl_start + start)
avl_durtation = int(avl_range.duration.value)
self.need_offset = bool(avl_start != 0 and src_start != 0)
# if media start is less than the clip requires
if src_start < avl_start:
# calculate gap
@@ -408,11 +410,17 @@
"""
padding = "{{:0{}d}}".format(self.padding)
# create frame offset
offset = 0
if self.need_offset:
offset = 1
if end_offset:
new_frames = list()
start_frame = self.used_frames[-1]
for index in range((end_offset + 1),
(int(end_offset + duration) + 1)):
for index in range((end_offset + offset),
(int(end_offset + duration) + offset)):
seq_number = padding.format(start_frame + index)
self.log.debug(
"index: `{}` | seq_number: `{}`".format(index, seq_number))
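The one-frame shift introduced above can be seen in isolation: when `need_offset` holds, the generated sequence numbers start one frame later. The helper name below is illustrative; the padding and frame math mirror the extractor:

```python
def generate_seq_numbers(start_frame, end_offset, duration, need_offset,
                         padding=4):
    """Zero-padded frame names for the repair range, shifted by one
    frame when both media start and clip start are non-zero."""
    offset = 1 if need_offset else 0
    fmt = "{{:0{}d}}".format(padding)
    return [
        fmt.format(start_frame + index)
        for index in range(end_offset + offset,
                           int(end_offset + duration) + offset)
    ]
```

With `need_offset` False the range is unchanged; with it True every generated frame number moves up by one, which is exactly what the diff adds.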

View file

@@ -389,6 +389,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
repre["ext"] = ext
template_data["ext"] = ext
self.log.info(template_name)
template = os.path.normpath(
anatomy.templates[template_name]["path"])

View file

@@ -10,7 +10,7 @@ class ValidateVersion(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Version"
hosts = ["nuke", "maya", "blender", "standalonepublisher"]
hosts = ["nuke", "maya", "houdini", "blender", "standalonepublisher"]
optional = False
active = True

View file

@@ -252,7 +252,7 @@ class ModifiedBurnins(ffmpeg_burnins.Burnins):
- required IF start frame is not set when using frames or timecode burnins
On initializing class can be set General options through "options_init" arg.
General can be overriden when adding burnin
General can be overridden when adding burnin
'''
TOP_CENTERED = ffmpeg_burnins.TOP_CENTERED
@@ -549,7 +549,7 @@ def burnins_from_data(
codec_data (list): All codec related arguments in list.
options (dict): Options for burnins.
burnin_values (dict): Contain positioned values.
overwrite (bool): Output will be overriden if already exists,
overwrite (bool): Output will be overwritten if already exists,
True by default.
Presets must be set separately. Should be dict with 2 keys:

View file

@@ -2,14 +2,14 @@ import re
# Metadata keys for work with studio and project overrides
M_OVERRIDEN_KEY = "__overriden_keys__"
M_OVERRIDDEN_KEY = "__overriden_keys__"
# Metadata key for storing information about environments
M_ENVIRONMENT_KEY = "__environment_keys__"
# Metadata key for storing dynamic created labels
M_DYNAMIC_KEY_LABEL = "__dynamic_keys_labels__"
METADATA_KEYS = (
M_OVERRIDEN_KEY,
M_OVERRIDDEN_KEY,
M_ENVIRONMENT_KEY,
M_DYNAMIC_KEY_LABEL
)
@@ -32,7 +32,7 @@ KEY_REGEX = re.compile(r"^[{}]+$".format(KEY_ALLOWED_SYMBOLS))
__all__ = (
"M_OVERRIDEN_KEY",
"M_OVERRIDDEN_KEY",
"M_ENVIRONMENT_KEY",
"M_DYNAMIC_KEY_LABEL",

View file

@@ -27,5 +27,10 @@
"path": "{@folder}/{@file}"
},
"delivery": {},
"unreal": {
"folder": "{root[work]}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}/{@version}",
"file": "{subset}_{@version}<_{output}><.{@frame}>.{ext}",
"path": "{@folder}/{@file}"
},
"others": {}
}

View file

@@ -3,6 +3,24 @@
"CollectAnatomyInstanceData": {
"follow_workfile_version": false
},
"CollectSceneVersion": {
"hosts": [
"aftereffects",
"blender",
"celaction",
"fusion",
"harmony",
"hiero",
"houdini",
"maya",
"nuke",
"photoshop",
"resolve",
"tvpaint"
],
"skip_hosts_headless_publish": [
]
},
"ValidateEditorialAssetName": {
"enabled": true,
"optional": false
@@ -219,7 +237,7 @@
"hosts": [],
"task_types": [],
"tasks": [],
"template": "{family}{Variant}"
"template": "{family}{variant}"
},
{
"families": [
@@ -264,6 +282,17 @@
"task_types": [],
"tasks": [],
"template": "render{Task}{Variant}"
},
{
"families": [
"unrealStaticMesh"
],
"hosts": [
"maya"
],
"task_types": [],
"tasks": [],
"template": "S_{asset}{variant}"
}
]
},
@@ -297,6 +326,7 @@
"family_filter_profiles": [
{
"hosts": [],
"is_include": true,
"task_types": [],
"filter_families": []
}

View file

@@ -46,6 +46,20 @@
"aov_separator": "underscore",
"default_render_image_folder": "renders"
},
"CreateUnrealStaticMesh": {
"enabled": true,
"defaults": [
"",
"_Main"
],
"static_mesh_prefix": "S_",
"collision_prefixes": [
"UBX",
"UCP",
"USP",
"UCX"
]
},
"CreateAnimation": {
"enabled": true,
"defaults": [
@@ -123,12 +137,6 @@
"Anim"
]
},
"CreateUnrealStaticMesh": {
"enabled": true,
"defaults": [
"Main"
]
},
"CreateVrayProxy": {
"enabled": true,
"defaults": [
@@ -180,6 +188,18 @@
"whitelist_native_plugins": false,
"authorized_plugins": []
},
"ValidateCycleError": {
"enabled": true,
"optional": false,
"families": [
"rig"
]
},
"ValidateUnrealStaticMeshName": {
"enabled": true,
"validate_mesh": false,
"validate_collision": true
},
"ValidateRenderSettings": {
"arnold_render_attributes": [],
"vray_render_attributes": [],
@@ -197,6 +217,11 @@
"regex": "(.*)_(\\d)*_(?P<shader>.*)_(GEO)",
"top_level_regex": ".*_GRP"
},
"ValidateModelContent": {
"enabled": true,
"optional": false,
"validate_top_group": true
},
"ValidateTransformNamingSuffix": {
"enabled": true,
"SUFFIX_NAMING_TABLE": {
@@ -281,11 +306,6 @@
"optional": true,
"active": true
},
"ValidateModelContent": {
"enabled": true,
"optional": false,
"validate_top_group": true
},
"ValidateNoAnimation": {
"enabled": false,
"optional": true,

View file

@@ -12,7 +12,7 @@
{
"color_code": [],
"layer_name_regex": [],
"family": "",
"family": "image",
"subset_template_name": ""
}
]

View file

@@ -2,9 +2,6 @@
"studio_name": "Studio name",
"studio_code": "stu",
"admin_password": "",
"production_version": "",
"staging_version": "",
"version_check_interval": 5,
"environment": {
"__environment_keys__": {
"global": []
@@ -19,5 +16,8 @@
"windows": [],
"darwin": [],
"linux": []
}
},
"production_version": "",
"staging_version": "",
"version_check_interval": 5
}

View file

@@ -752,7 +752,7 @@ class BaseItemEntity(BaseEntity):
@abstractmethod
def _add_to_project_override(self, on_change_trigger):
"""Item's implementation to set values as overriden for project.
"""Item's implementation to set values as overridden for project.
Mark item and all it's children to be stored as project overrides.
"""
@@ -794,7 +794,7 @@ class BaseItemEntity(BaseEntity):
"""Item's implementation to remove project overrides.
Mark item as does not have project overrides. Must not change
`was_overriden` attribute value.
`was_overridden` attribute value.
Args:
on_change_trigger (list): Callbacks of `on_change` should be stored

Some files were not shown because too many files have changed in this diff.