Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 21:04:40 +01:00)

Commit 92838aa150: Merge branch 'develop' into feature/new_publisher_core

99 changed files with 4401 additions and 773 deletions

65 CHANGELOG.md
@@ -1,8 +1,40 @@
# Changelog

## [3.6.0-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.5.0...HEAD)

**🆕 New features**

- Flame: a host basic integration [\#2165](https://github.com/pypeclub/OpenPype/pull/2165)

**🚀 Enhancements**

- Tools: Experimental tools [\#2167](https://github.com/pypeclub/OpenPype/pull/2167)
- Loader: Refactor and use OpenPype stylesheets [\#2166](https://github.com/pypeclub/OpenPype/pull/2166)
- Add loader for linked smart objects in photoshop [\#2149](https://github.com/pypeclub/OpenPype/pull/2149)
- Burnins: DNxHD profiles handling [\#2142](https://github.com/pypeclub/OpenPype/pull/2142)
- Tools: Single access point for host tools [\#2139](https://github.com/pypeclub/OpenPype/pull/2139)

**🐛 Bug fixes**

- StandalonePublisher: Source validator don't expect representations [\#2190](https://github.com/pypeclub/OpenPype/pull/2190)
- MacOS: Launching of applications may cause Permissions error [\#2175](https://github.com/pypeclub/OpenPype/pull/2175)
- Blender: Fix 'Deselect All' with object not in 'Object Mode' [\#2163](https://github.com/pypeclub/OpenPype/pull/2163)
- Tools: Stylesheets are applied after tool show [\#2161](https://github.com/pypeclub/OpenPype/pull/2161)
- Maya: Collect render - fix UNC path support 🐛 [\#2158](https://github.com/pypeclub/OpenPype/pull/2158)
- Maya: Fix hotbox broken by scriptsmenu [\#2151](https://github.com/pypeclub/OpenPype/pull/2151)
- Ftrack: Ignore save warnings exception in Prepare project action [\#2150](https://github.com/pypeclub/OpenPype/pull/2150)
- Loader thumbnails with smooth edges [\#2147](https://github.com/pypeclub/OpenPype/pull/2147)

**Merged pull requests:**

- Bump pillow from 8.2.0 to 8.3.2 [\#2162](https://github.com/pypeclub/OpenPype/pull/2162)
- Bump axios from 0.21.1 to 0.21.4 in /website [\#2059](https://github.com/pypeclub/OpenPype/pull/2059)

## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.4.1...3.5.0)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.5.0-nightly.8...3.5.0)

**Deprecated:**
@@ -37,6 +69,7 @@

**🐛 Bug fixes**

- Added validator for source files for Standalone Publisher [\#2138](https://github.com/pypeclub/OpenPype/pull/2138)
- Maya: fix model publishing [\#2130](https://github.com/pypeclub/OpenPype/pull/2130)
- Fix - oiiotool wasn't recognized even if present [\#2129](https://github.com/pypeclub/OpenPype/pull/2129)
- General: Disk mapping group [\#2120](https://github.com/pypeclub/OpenPype/pull/2120)
@@ -66,10 +99,6 @@

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.1-nightly.1...3.4.1)

**🆕 New features**

- Settings: Flag project as deactivated and hide from tools' view [\#2008](https://github.com/pypeclub/OpenPype/pull/2008)

**🚀 Enhancements**

- General: Startup validations [\#2054](https://github.com/pypeclub/OpenPype/pull/2054)
@@ -79,11 +108,6 @@
- Loader: Families filtering [\#2043](https://github.com/pypeclub/OpenPype/pull/2043)
- Settings UI: Project view enhancements [\#2042](https://github.com/pypeclub/OpenPype/pull/2042)
- Settings for Nuke IncrementScriptVersion [\#2039](https://github.com/pypeclub/OpenPype/pull/2039)
- Loader & Library loader: Use tools from OpenPype [\#2038](https://github.com/pypeclub/OpenPype/pull/2038)
- Adding predefined project folders creation in PM [\#2030](https://github.com/pypeclub/OpenPype/pull/2030)
- WebserverModule: Removed interface of webserver module [\#2028](https://github.com/pypeclub/OpenPype/pull/2028)
- TimersManager: Removed interface of timers manager [\#2024](https://github.com/pypeclub/OpenPype/pull/2024)
- Feature Maya import asset from scene inventory [\#2018](https://github.com/pypeclub/OpenPype/pull/2018)

**🐛 Bug fixes**
@@ -101,35 +125,16 @@

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.0-nightly.6...3.4.0)

**🆕 New features**

- Nuke: Compatibility with Nuke 13 [\#2003](https://github.com/pypeclub/OpenPype/pull/2003)

**🚀 Enhancements**

- Added possibility to configure of synchronization of workfile version… [\#2041](https://github.com/pypeclub/OpenPype/pull/2041)
- Loader & Library loader: Use tools from OpenPype [\#2038](https://github.com/pypeclub/OpenPype/pull/2038)
- General: Task types in profiles [\#2036](https://github.com/pypeclub/OpenPype/pull/2036)
- Console interpreter: Handle invalid sizes on initialization [\#2022](https://github.com/pypeclub/OpenPype/pull/2022)
- Ftrack: Show OpenPype versions in event server status [\#2019](https://github.com/pypeclub/OpenPype/pull/2019)
- General: Staging icon [\#2017](https://github.com/pypeclub/OpenPype/pull/2017)
- Ftrack: Sync to avalon actions have jobs [\#2015](https://github.com/pypeclub/OpenPype/pull/2015)
- Modules: Connect method is not required [\#2009](https://github.com/pypeclub/OpenPype/pull/2009)
- Settings UI: Number with configurable steps [\#2001](https://github.com/pypeclub/OpenPype/pull/2001)
- Moving project folder structure creation out of ftrack module \#1989 [\#1996](https://github.com/pypeclub/OpenPype/pull/1996)

**🐛 Bug fixes**

- Workfiles tool: Task selection [\#2040](https://github.com/pypeclub/OpenPype/pull/2040)
- Ftrack: Delete old versions missing settings key [\#2037](https://github.com/pypeclub/OpenPype/pull/2037)
- Nuke: typo on a button [\#2034](https://github.com/pypeclub/OpenPype/pull/2034)
- Hiero: Fix "none" named tags [\#2033](https://github.com/pypeclub/OpenPype/pull/2033)
- FFmpeg: Subprocess arguments as list [\#2032](https://github.com/pypeclub/OpenPype/pull/2032)
- General: Fix Python 2 breaking line [\#2016](https://github.com/pypeclub/OpenPype/pull/2016)
- Bugfix/webpublisher task type [\#2006](https://github.com/pypeclub/OpenPype/pull/2006)

### 📖 Documentation

- Documentation: Ftrack launch argsuments update [\#2014](https://github.com/pypeclub/OpenPype/pull/2014)

## [3.3.1](https://github.com/pypeclub/OpenPype/tree/3.3.1) (2021-08-20)
@@ -158,6 +158,25 @@ def publish(debug, paths, targets):
    PypeCommands.publish(list(paths), targets)


@main.command()
@click.argument("path")
@click.option("-d", "--debug", is_flag=True, help="Print debug messages")
@click.option("-h", "--host", help="Host")
@click.option("-u", "--user", help="User email address")
@click.option("-p", "--project", help="Project")
@click.option("-t", "--targets", help="Targets", default=None,
              multiple=True)
def remotepublishfromapp(debug, project, path, host, targets=None, user=None):
    """Start CLI publishing.

    Publish collects json from paths provided as an argument.
    More than one path is allowed.
    """
    if debug:
        os.environ['OPENPYPE_DEBUG'] = '3'
    PypeCommands.remotepublishfromapp(project, path, host, user,
                                      targets=targets)


@main.command()
@click.argument("path")
@click.option("-d", "--debug", is_flag=True, help="Print debug messages")
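The `remotepublishfromapp` command above uses click's `multiple=True` so `-t/--targets` can be passed several times and arrive as a sequence. As a rough stdlib sketch of the same repeated-option behaviour (argparse's `action="append"` is the closest equivalent; only the option names mirror the real CLI, nothing else here comes from OpenPype):

```python
import argparse

# Sketch: repeated -t/--targets options accumulate into a list, much
# like click's multiple=True (which yields a tuple instead of a list).
parser = argparse.ArgumentParser(prog="remotepublishfromapp")
parser.add_argument("path")
parser.add_argument("-t", "--targets", action="append", default=None)

args = parser.parse_args(["scene.json", "-t", "deadline", "-t", "ftrack"])
print(args.path)     # scene.json
print(args.targets)  # ['deadline', 'ftrack']
```

If the option is never given, `targets` stays `None` rather than an empty list, matching the `default=None` in the command above.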
@@ -13,7 +13,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook):

     # Should be as last hook because must change launch arguments to string
     order = 1000
-    app_groups = ["nuke", "nukex", "hiero", "nukestudio"]
+    app_groups = ["nuke", "nukex", "hiero", "nukestudio", "photoshop"]
     platforms = ["windows"]

     def execute(self):
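The hook's comment explains it must run last because it converts the launch argument list into a single string for Windows. A minimal sketch of that conversion using only the stdlib (the real hook lives in OpenPype's prelaunch hooks; `subprocess.list2cmdline` is just one standard way to quote Windows arguments and is an assumption here, not the hook's actual code):

```python
from subprocess import list2cmdline

# Arguments containing spaces must be quoted once the whole command
# line is handed to Windows as a single string.
args = [
    "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe",
    "--nukex",
    "C:\\projects\\my shot\\comp_v001.nk",
]
command = list2cmdline(args)
print(command)
```

Running the hook last matters because later hooks expecting a list of arguments would break once the list has been flattened to a string.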
@@ -3,11 +3,12 @@
 import bpy

 from avalon import api
-from avalon.blender import lib
-import openpype.hosts.blender.api.plugin
+from avalon.blender import lib, ops
+from avalon.blender.pipeline import AVALON_INSTANCES
+from openpype.hosts.blender.api import plugin


-class CreateCamera(openpype.hosts.blender.api.plugin.Creator):
+class CreateCamera(plugin.Creator):
     """Polygonal static geometry"""

     name = "cameraMain"
@@ -16,17 +17,46 @@ class CreateCamera(openpype.hosts.blender.api.plugin.Creator):
     icon = "video-camera"

     def process(self):
+        """Run the creator on Blender main thread."""
+        mti = ops.MainThreadItem(self._process)
+        ops.execute_in_main_thread(mti)
+
+    def _process(self):
+        # Get the instance container or create it if it does not exist
+        instances = bpy.data.collections.get(AVALON_INSTANCES)
+        if not instances:
+            instances = bpy.data.collections.new(name=AVALON_INSTANCES)
+            bpy.context.scene.collection.children.link(instances)
+
+        # Create instance object
         asset = self.data["asset"]
         subset = self.data["subset"]
-        name = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
-        collection = bpy.data.collections.new(name=name)
-        bpy.context.scene.collection.children.link(collection)
+        name = plugin.asset_name(asset, subset)
+
+        camera = bpy.data.cameras.new(subset)
+        camera_obj = bpy.data.objects.new(subset, camera)
+
+        instances.objects.link(camera_obj)
+
+        asset_group = bpy.data.objects.new(name=name, object_data=None)
+        asset_group.empty_display_type = 'SINGLE_ARROW'
+        instances.objects.link(asset_group)
         self.data['task'] = api.Session.get('AVALON_TASK')
-        lib.imprint(collection, self.data)
+        print(f"self.data: {self.data}")
+        lib.imprint(asset_group, self.data)

         if (self.options or {}).get("useSelection"):
-            for obj in lib.get_selection():
-                collection.objects.link(obj)
+            bpy.context.view_layer.objects.active = asset_group
+            selected = lib.get_selection()
+            for obj in selected:
+                obj.select_set(True)
+            selected.append(asset_group)
+            bpy.ops.object.parent_set(keep_transform=True)
+        else:
+            plugin.deselect_all()
+            camera_obj.select_set(True)
+            asset_group.select_set(True)
+            bpy.context.view_layer.objects.active = asset_group
+            bpy.ops.object.parent_set(keep_transform=True)

-        return collection
+        return asset_group
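The new `process` defers the real work to `_process` via `ops.MainThreadItem`, because `bpy` data may only be touched from Blender's main thread. A toy, bpy-free sketch of that queue-and-drain pattern (the class and function names imitate `avalon.blender.ops` and are assumptions; in Blender the queue would be drained by a `bpy.app.timers` callback, here it is drained by hand):

```python
import queue

class MainThreadItem:
    """Wrap a callable so it can be executed later on the main thread."""
    def __init__(self, callback, *args, **kwargs):
        self.callback = callback
        self.args = args
        self.kwargs = kwargs
        self.result = None

    def execute(self):
        self.result = self.callback(*self.args, **self.kwargs)

_main_thread_queue = queue.Queue()

def execute_in_main_thread(item):
    # Called from any thread; only enqueues the work.
    _main_thread_queue.put(item)

def drain_queue():
    # In Blender this loop would live inside a bpy.app.timers callback
    # running on the main thread.
    while not _main_thread_queue.empty():
        _main_thread_queue.get().execute()

mti = MainThreadItem(lambda asset, subset: f"{asset}_{subset}", "shot010", "cameraMain")
execute_in_main_thread(mti)
drain_queue()
print(mti.result)  # shot010_cameraMain
```

The design keeps creator plugins thread-agnostic: callers enqueue work and the host decides when the main thread is free to run it.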
@@ -1,247 +0,0 @@
"""Load a camera asset in Blender."""

import logging
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional

from avalon import api, blender
import bpy
import openpype.hosts.blender.api.plugin

logger = logging.getLogger("openpype").getChild("blender").getChild("load_camera")


class BlendCameraLoader(openpype.hosts.blender.api.plugin.AssetLoader):
    """Load a camera from a .blend file.

    Warning:
        Loading the same asset more then once is not properly supported at the
        moment.
    """

    families = ["camera"]
    representations = ["blend"]

    label = "Link Camera"
    icon = "code-fork"
    color = "orange"

    def _remove(self, objects, lib_container):
        for obj in list(objects):
            bpy.data.cameras.remove(obj.data)

        bpy.data.collections.remove(bpy.data.collections[lib_container])

    def _process(self, libpath, lib_container, container_name, actions):

        relative = bpy.context.preferences.filepaths.use_relative_paths
        with bpy.data.libraries.load(
            libpath, link=True, relative=relative
        ) as (_, data_to):
            data_to.collections = [lib_container]

        scene = bpy.context.scene

        scene.collection.children.link(bpy.data.collections[lib_container])

        camera_container = scene.collection.children[lib_container].make_local()

        objects_list = []

        for obj in camera_container.objects:
            local_obj = obj.make_local()
            local_obj.data.make_local()

            if not local_obj.get(blender.pipeline.AVALON_PROPERTY):
                local_obj[blender.pipeline.AVALON_PROPERTY] = dict()

            avalon_info = local_obj[blender.pipeline.AVALON_PROPERTY]
            avalon_info.update({"container_name": container_name})

            if actions[0] is not None:
                if local_obj.animation_data is None:
                    local_obj.animation_data_create()
                local_obj.animation_data.action = actions[0]

            if actions[1] is not None:
                if local_obj.data.animation_data is None:
                    local_obj.data.animation_data_create()
                local_obj.data.animation_data.action = actions[1]

            objects_list.append(local_obj)

        camera_container.pop(blender.pipeline.AVALON_PROPERTY)

        bpy.ops.object.select_all(action='DESELECT')

        return objects_list

    def process_asset(
        self, context: dict, name: str, namespace: Optional[str] = None,
        options: Optional[Dict] = None
    ) -> Optional[List]:
        """
        Arguments:
            name: Use pre-defined name
            namespace: Use pre-defined namespace
            context: Full parenthood of representation to load
            options: Additional settings dictionary
        """

        libpath = self.fname
        asset = context["asset"]["name"]
        subset = context["subset"]["name"]
        lib_container = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
        container_name = openpype.hosts.blender.api.plugin.asset_name(
            asset, subset, namespace
        )

        container = bpy.data.collections.new(lib_container)
        container.name = container_name
        blender.pipeline.containerise_existing(
            container,
            name,
            namespace,
            context,
            self.__class__.__name__,
        )

        container_metadata = container.get(
            blender.pipeline.AVALON_PROPERTY)

        container_metadata["libpath"] = libpath
        container_metadata["lib_container"] = lib_container

        objects_list = self._process(
            libpath, lib_container, container_name, (None, None))

        # Save the list of objects in the metadata container
        container_metadata["objects"] = objects_list

        nodes = list(container.objects)
        nodes.append(container)
        self[:] = nodes
        return nodes

    def update(self, container: Dict, representation: Dict):
        """Update the loaded asset.

        This will remove all objects of the current collection, load the new
        ones and add them to the collection.
        If the objects of the collection are used in another collection they
        will not be removed, only unlinked. Normally this should not be the
        case though.

        Warning:
            No nested collections are supported at the moment!
        """

        collection = bpy.data.collections.get(
            container["objectName"]
        )

        libpath = Path(api.get_representation_path(representation))
        extension = libpath.suffix.lower()

        logger.info(
            "Container: %s\nRepresentation: %s",
            pformat(container, indent=2),
            pformat(representation, indent=2),
        )

        assert collection, (
            f"The asset is not loaded: {container['objectName']}"
        )
        assert not (collection.children), (
            "Nested collections are not supported."
        )
        assert libpath, (
            "No existing library file found for {container['objectName']}"
        )
        assert libpath.is_file(), (
            f"The file doesn't exist: {libpath}"
        )
        assert extension in openpype.hosts.blender.api.plugin.VALID_EXTENSIONS, (
            f"Unsupported file: {libpath}"
        )

        collection_metadata = collection.get(
            blender.pipeline.AVALON_PROPERTY)
        collection_libpath = collection_metadata["libpath"]
        objects = collection_metadata["objects"]
        lib_container = collection_metadata["lib_container"]

        normalized_collection_libpath = (
            str(Path(bpy.path.abspath(collection_libpath)).resolve())
        )
        normalized_libpath = (
            str(Path(bpy.path.abspath(str(libpath))).resolve())
        )
        logger.debug(
            "normalized_collection_libpath:\n %s\nnormalized_libpath:\n %s",
            normalized_collection_libpath,
            normalized_libpath,
        )
        if normalized_collection_libpath == normalized_libpath:
            logger.info("Library already loaded, not updating...")
            return

        camera = objects[0]

        camera_action = None
        camera_data_action = None

        if camera.animation_data and camera.animation_data.action:
            camera_action = camera.animation_data.action

        if camera.data.animation_data and camera.data.animation_data.action:
            camera_data_action = camera.data.animation_data.action

        actions = (camera_action, camera_data_action)

        self._remove(objects, lib_container)

        objects_list = self._process(
            str(libpath), lib_container, collection.name, actions)

        # Save the list of objects in the metadata container
        collection_metadata["objects"] = objects_list
        collection_metadata["libpath"] = str(libpath)
        collection_metadata["representation"] = str(representation["_id"])

        bpy.ops.object.select_all(action='DESELECT')

    def remove(self, container: Dict) -> bool:
        """Remove an existing container from a Blender scene.

        Arguments:
            container (openpype:container-1.0): Container to remove,
                from `host.ls()`.

        Returns:
            bool: Whether the container was deleted.

        Warning:
            No nested collections are supported at the moment!
        """

        collection = bpy.data.collections.get(
            container["objectName"]
        )
        if not collection:
            return False
        assert not (collection.children), (
            "Nested collections are not supported."
        )

        collection_metadata = collection.get(
            blender.pipeline.AVALON_PROPERTY)
        objects = collection_metadata["objects"]
        lib_container = collection_metadata["lib_container"]

        self._remove(objects, lib_container)

        bpy.data.collections.remove(collection)

        return True
252 openpype/hosts/blender/plugins/load/load_camera_blend.py Normal file
@@ -0,0 +1,252 @@
"""Load a camera asset in Blender."""

import logging
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional

import bpy

from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin

logger = logging.getLogger("openpype").getChild(
    "blender").getChild("load_camera")


class BlendCameraLoader(plugin.AssetLoader):
    """Load a camera from a .blend file.

    Warning:
        Loading the same asset more than once is not properly supported at the
        moment.
    """

    families = ["camera"]
    representations = ["blend"]

    label = "Link Camera (Blend)"
    icon = "code-fork"
    color = "orange"

    def _remove(self, asset_group):
        objects = list(asset_group.children)

        for obj in objects:
            if obj.type == 'CAMERA':
                bpy.data.cameras.remove(obj.data)

    def _process(self, libpath, asset_group, group_name):
        with bpy.data.libraries.load(
            libpath, link=True, relative=False
        ) as (data_from, data_to):
            data_to.objects = data_from.objects

        parent = bpy.context.scene.collection

        empties = [obj for obj in data_to.objects if obj.type == 'EMPTY']

        container = None

        for empty in empties:
            if empty.get(AVALON_PROPERTY):
                container = empty
                break

        assert container, "No asset group found"

        # Children must be linked before parents,
        # otherwise the hierarchy will break
        objects = []
        nodes = list(container.children)

        for obj in nodes:
            obj.parent = asset_group

        for obj in nodes:
            objects.append(obj)
            nodes.extend(list(obj.children))

        objects.reverse()

        for obj in objects:
            parent.objects.link(obj)

        for obj in objects:
            local_obj = plugin.prepare_data(obj, group_name)

            if local_obj.type != 'EMPTY':
                plugin.prepare_data(local_obj.data, group_name)

            if not local_obj.get(AVALON_PROPERTY):
                local_obj[AVALON_PROPERTY] = dict()

            avalon_info = local_obj[AVALON_PROPERTY]
            avalon_info.update({"container_name": group_name})

        objects.reverse()

        bpy.data.orphans_purge(do_local_ids=False)

        plugin.deselect_all()

        return objects

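The breadth-first loop above collects the entire child hierarchy of the container, then reverses it so that children are linked before their parents. A bpy-free sketch of that traversal (the `Node` class merely stands in for `bpy.types.Object`; all names here are illustrative):

```python
class Node:
    """Stand-in for bpy.types.Object with a children list."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def collect_children_first(container):
    # Breadth-first walk, mirroring the loop that extends `nodes`
    # with each object's children while iterating over it.
    objects = []
    nodes = list(container.children)
    for obj in nodes:
        objects.append(obj)
        nodes.extend(obj.children)
    # Reversed, the deepest children come before their parents.
    objects.reverse()
    return [obj.name for obj in objects]

camera = Node("camera")
rig = Node("rig", [camera])
root = Node("root", [rig])
print(collect_children_first(root))  # ['camera', 'rig']
```

Extending a list while a `for` loop iterates over it is what makes this a breadth-first walk without an explicit queue.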
    def process_asset(
        self, context: dict, name: str, namespace: Optional[str] = None,
        options: Optional[Dict] = None
    ) -> Optional[List]:
        """
        Arguments:
            name: Use pre-defined name
            namespace: Use pre-defined namespace
            context: Full parenthood of representation to load
            options: Additional settings dictionary
        """
        libpath = self.fname
        asset = context["asset"]["name"]
        subset = context["subset"]["name"]

        asset_name = plugin.asset_name(asset, subset)
        unique_number = plugin.get_unique_number(asset, subset)
        group_name = plugin.asset_name(asset, subset, unique_number)
        namespace = namespace or f"{asset}_{unique_number}"

        avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
        if not avalon_container:
            avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
            bpy.context.scene.collection.children.link(avalon_container)

        asset_group = bpy.data.objects.new(group_name, object_data=None)
        asset_group.empty_display_type = 'SINGLE_ARROW'
        avalon_container.objects.link(asset_group)

        objects = self._process(libpath, asset_group, group_name)

        bpy.context.scene.collection.objects.link(asset_group)

        asset_group[AVALON_PROPERTY] = {
            "schema": "openpype:container-2.0",
            "id": AVALON_CONTAINER_ID,
            "name": name,
            "namespace": namespace or '',
            "loader": str(self.__class__.__name__),
            "representation": str(context["representation"]["_id"]),
            "libpath": libpath,
            "asset_name": asset_name,
            "parent": str(context["representation"]["parent"]),
            "family": context["representation"]["context"]["family"],
            "objectName": group_name
        }

        self[:] = objects
        return objects

    def exec_update(self, container: Dict, representation: Dict):
        """Update the loaded asset.

        This will remove all children of the asset group, load the new ones
        and add them as children of the group.
        """
        object_name = container["objectName"]
        asset_group = bpy.data.objects.get(object_name)
        libpath = Path(api.get_representation_path(representation))
        extension = libpath.suffix.lower()

        self.log.info(
            "Container: %s\nRepresentation: %s",
            pformat(container, indent=2),
            pformat(representation, indent=2),
        )

        assert asset_group, (
            f"The asset is not loaded: {container['objectName']}"
        )
        assert libpath, (
            f"No existing library file found for {container['objectName']}"
        )
        assert libpath.is_file(), (
            f"The file doesn't exist: {libpath}"
        )
        assert extension in plugin.VALID_EXTENSIONS, (
            f"Unsupported file: {libpath}"
        )

        metadata = asset_group.get(AVALON_PROPERTY)
        group_libpath = metadata["libpath"]

        normalized_group_libpath = (
            str(Path(bpy.path.abspath(group_libpath)).resolve())
        )
        normalized_libpath = (
            str(Path(bpy.path.abspath(str(libpath))).resolve())
        )
        self.log.debug(
            "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
            normalized_group_libpath,
            normalized_libpath,
        )
        if normalized_group_libpath == normalized_libpath:
            self.log.info("Library already loaded, not updating...")
            return
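Before updating, both library paths are normalized so that cosmetically different strings pointing at the same file compare equal. A stripped-down sketch of that comparison (the loader also goes through `bpy.path.abspath`, which expands Blender's `//` relative prefix; plain `pathlib` is used here instead, so the `//` handling is out of scope):

```python
from pathlib import Path

def is_same_library(group_libpath: str, libpath: str) -> bool:
    # Resolve both paths to a canonical absolute form before comparing,
    # so "." segments and redundant separators do not cause false updates.
    normalized_group = str(Path(group_libpath).resolve())
    normalized_new = str(Path(libpath).resolve())
    return normalized_group == normalized_new

print(is_same_library("/work/publish/./camera_v001.blend",
                      "/work/publish/camera_v001.blend"))  # True
print(is_same_library("/work/publish/camera_v001.blend",
                      "/work/publish/camera_v002.blend"))  # False
```

Skipping the update when the paths match avoids tearing down and relinking an identical library.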

        # Check how many assets use the same library
        count = 0
        for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects:
            if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath:
                count += 1

        mat = asset_group.matrix_basis.copy()

        self._remove(asset_group)

        # If it is the last object to use that library, remove it
        if count == 1:
            library = bpy.data.libraries.get(bpy.path.basename(group_libpath))
            if library:
                bpy.data.libraries.remove(library)

        self._process(str(libpath), asset_group, object_name)

        asset_group.matrix_basis = mat

        metadata["libpath"] = str(libpath)
        metadata["representation"] = str(representation["_id"])
        metadata["parent"] = str(representation["parent"])

    def exec_remove(self, container: Dict) -> bool:
        """Remove an existing container from a Blender scene.

        Arguments:
            container (openpype:container-1.0): Container to remove,
                from `host.ls()`.

        Returns:
            bool: Whether the container was deleted.
        """
        object_name = container["objectName"]
        asset_group = bpy.data.objects.get(object_name)
        libpath = asset_group.get(AVALON_PROPERTY).get('libpath')

        # Check how many assets use the same library
        count = 0
        for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects:
            if obj.get(AVALON_PROPERTY).get('libpath') == libpath:
                count += 1

        if not asset_group:
            return False

        self._remove(asset_group)

        bpy.data.objects.remove(asset_group)

        # If it is the last object to use that library, remove it
        if count == 1:
            library = bpy.data.libraries.get(bpy.path.basename(libpath))
            bpy.data.libraries.remove(library)

        return True
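Both `exec_update` and `exec_remove` count how many containers still point at the same `.blend` library and drop the library datablock only when the last user goes away. The same reference-count idea in plain Python (plain dicts stand in for the container objects; nothing here touches bpy):

```python
def library_users(containers, libpath):
    # Count containers whose metadata points at the given library file.
    return sum(1 for meta in containers if meta.get("libpath") == libpath)

containers = [
    {"name": "camera_01", "libpath": "/publish/camera_v001.blend"},
    {"name": "camera_02", "libpath": "/publish/camera_v001.blend"},
    {"name": "props_01", "libpath": "/publish/props_v003.blend"},
]

# Removing camera_02: the camera library still has another user, keep it.
print(library_users(containers, "/publish/camera_v001.blend"))  # 2

# Removing props_01: it is the last user, so the library can be dropped.
print(library_users(containers, "/publish/props_v003.blend"))  # 1
```

Counting before removal (as the loader does) means a count of exactly 1 identifies "we are the last user", which is when the library itself is safe to delete.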
218 openpype/hosts/blender/plugins/load/load_camera_fbx.py Normal file
@@ -0,0 +1,218 @@
"""Load a camera asset in Blender from an FBX file."""

from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional

import bpy

from avalon import api
from avalon.blender import lib
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin


class FbxCameraLoader(plugin.AssetLoader):
    """Load a camera from FBX.

    Stores the imported asset in an empty named after the asset.
    """

    families = ["camera"]
    representations = ["fbx"]

    label = "Load Camera (FBX)"
    icon = "code-fork"
    color = "orange"

    def _remove(self, asset_group):
        objects = list(asset_group.children)

        for obj in objects:
            if obj.type == 'CAMERA':
                bpy.data.cameras.remove(obj.data)
            elif obj.type == 'EMPTY':
                objects.extend(obj.children)
                bpy.data.objects.remove(obj)

    def _process(self, libpath, asset_group, group_name):
        plugin.deselect_all()

        collection = bpy.context.view_layer.active_layer_collection.collection

        bpy.ops.import_scene.fbx(filepath=libpath)

        parent = bpy.context.scene.collection

        objects = lib.get_selection()

        for obj in objects:
            obj.parent = asset_group

        for obj in objects:
            parent.objects.link(obj)
            collection.objects.unlink(obj)

        for obj in objects:
            name = obj.name
            obj.name = f"{group_name}:{name}"
            if obj.type != 'EMPTY':
                name_data = obj.data.name
                obj.data.name = f"{group_name}:{name_data}"

            if not obj.get(AVALON_PROPERTY):
                obj[AVALON_PROPERTY] = dict()

            avalon_info = obj[AVALON_PROPERTY]
            avalon_info.update({"container_name": group_name})

        plugin.deselect_all()

        return objects

    def process_asset(
        self, context: dict, name: str, namespace: Optional[str] = None,
        options: Optional[Dict] = None
    ) -> Optional[List]:
        """
        Arguments:
            name: Use pre-defined name
            namespace: Use pre-defined namespace
            context: Full parenthood of representation to load
            options: Additional settings dictionary
        """
        libpath = self.fname
        asset = context["asset"]["name"]
        subset = context["subset"]["name"]

        asset_name = plugin.asset_name(asset, subset)
        unique_number = plugin.get_unique_number(asset, subset)
        group_name = plugin.asset_name(asset, subset, unique_number)
        namespace = namespace or f"{asset}_{unique_number}"

        avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
        if not avalon_container:
            avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
            bpy.context.scene.collection.children.link(avalon_container)

        asset_group = bpy.data.objects.new(group_name, object_data=None)
        avalon_container.objects.link(asset_group)

        objects = self._process(libpath, asset_group, group_name)

        objects = []
        nodes = list(asset_group.children)

        for obj in nodes:
            objects.append(obj)
            nodes.extend(list(obj.children))

        bpy.context.scene.collection.objects.link(asset_group)

        asset_group[AVALON_PROPERTY] = {
            "schema": "openpype:container-2.0",
            "id": AVALON_CONTAINER_ID,
            "name": name,
            "namespace": namespace or '',
            "loader": str(self.__class__.__name__),
            "representation": str(context["representation"]["_id"]),
            "libpath": libpath,
            "asset_name": asset_name,
            "parent": str(context["representation"]["parent"]),
            "family": context["representation"]["context"]["family"],
            "objectName": group_name
        }

        self[:] = objects
        return objects

    def exec_update(self, container: Dict, representation: Dict):
        """Update the loaded asset.

        This will remove all objects of the current collection, load the new
        ones and add them to the collection.
        If the objects of the collection are used in another collection they
        will not be removed, only unlinked. Normally this should not be the
        case though.

        Warning:
            No nested collections are supported at the moment!
        """
        object_name = container["objectName"]
        asset_group = bpy.data.objects.get(object_name)
        libpath = Path(api.get_representation_path(representation))
        extension = libpath.suffix.lower()

        self.log.info(
            "Container: %s\nRepresentation: %s",
            pformat(container, indent=2),
            pformat(representation, indent=2),
        )

        assert asset_group, (
            f"The asset is not loaded: {container['objectName']}"
        )
        assert libpath, (
            f"No existing library file found for {container['objectName']}"
        )
        assert libpath.is_file(), (
            f"The file doesn't exist: {libpath}"
        )
        assert extension in plugin.VALID_EXTENSIONS, (
            f"Unsupported file: {libpath}"
        )

        metadata = asset_group.get(AVALON_PROPERTY)
        group_libpath = metadata["libpath"]

        normalized_group_libpath = (
            str(Path(bpy.path.abspath(group_libpath)).resolve())
        )
        normalized_libpath = (
            str(Path(bpy.path.abspath(str(libpath))).resolve())
        )
        self.log.debug(
            "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
            normalized_group_libpath,
            normalized_libpath,
        )
        if normalized_group_libpath == normalized_libpath:
            self.log.info("Library already loaded, not updating...")
            return

        mat = asset_group.matrix_basis.copy()

        self._remove(asset_group)
        self._process(str(libpath), asset_group, object_name)

        asset_group.matrix_basis = mat
||||
|
||||
metadata["libpath"] = str(libpath)
|
||||
metadata["representation"] = str(representation["_id"])
|
||||
|
||||
def exec_remove(self, container: Dict) -> bool:
|
||||
"""Remove an existing container from a Blender scene.
|
||||
|
||||
Arguments:
|
||||
container (openpype:container-1.0): Container to remove,
|
||||
from `host.ls()`.
|
||||
|
||||
Returns:
|
||||
bool: Whether the container was deleted.
|
||||
|
||||
Warning:
|
||||
No nested collections are supported at the moment!
|
||||
"""
|
||||
object_name = container["objectName"]
|
||||
asset_group = bpy.data.objects.get(object_name)
|
||||
|
||||
if not asset_group:
|
||||
return False
|
||||
|
||||
self._remove(asset_group)
|
||||
|
||||
bpy.data.objects.remove(asset_group)
|
||||
|
||||
return True
|
||||
|
|
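`exec_update` above short-circuits when the already-loaded library resolves to the same file as the incoming one. Outside Blender, the same comparison can be sketched with `pathlib` alone (a plain `resolve()` stands in for `bpy.path.abspath`; the paths are illustrative):

```python
from pathlib import Path


def is_same_library(current: str, incoming: str) -> bool:
    """Compare two library paths after resolving symlinks and '..' parts."""
    return Path(current).resolve() == Path(incoming).resolve()


print(is_same_library("/tmp/assets/../assets/hero.blend",
                      "/tmp/assets/hero.blend"))  # True
```

Comparing the resolved forms rather than raw strings is what makes the "already loaded" check robust against relative segments and redundant separators.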
@@ -12,6 +12,7 @@ from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype import lib
from openpype.hosts.blender.api import plugin
@@ -103,6 +104,21 @@ class JsonLayoutLoader(plugin.AssetLoader):
            options=options
        )

        # Create the camera asset and the camera instance
        creator_plugin = lib.get_creator_by_name("CreateCamera")
        if not creator_plugin:
            raise ValueError("Creator plugin \"CreateCamera\" was "
                             "not found.")

        api.create(
            creator_plugin,
            name="camera",
            # name=f"{unique_number}_{subset}_animation",
            asset=asset,
            options={"useSelection": False}
            # data={"dependencies": str(context["representation"]["_id"])}
        )

    def process_asset(self,
                      context: dict,
                      name: str,
openpype/hosts/blender/plugins/publish/extract_camera.py (new file, 73 lines)
@@ -0,0 +1,73 @@
import os

from openpype import api
from openpype.hosts.blender.api import plugin

import bpy


class ExtractCamera(api.Extractor):
    """Extract the camera as FBX."""

    label = "Extract Camera"
    hosts = ["blender"]
    families = ["camera"]
    optional = True

    def process(self, instance):
        # Define extract output file path
        stagingdir = self.staging_dir(instance)
        filename = f"{instance.name}.fbx"
        filepath = os.path.join(stagingdir, filename)

        # Perform extraction
        self.log.info("Performing extraction..")

        plugin.deselect_all()

        selected = []
        camera = None

        for obj in instance:
            if obj.type == "CAMERA":
                obj.select_set(True)
                selected.append(obj)
                camera = obj
                break

        assert camera, "No camera found"

        context = plugin.create_blender_context(
            active=camera, selected=selected)

        scale_length = bpy.context.scene.unit_settings.scale_length
        bpy.context.scene.unit_settings.scale_length = 0.01

        # Export the FBX
        bpy.ops.export_scene.fbx(
            context,
            filepath=filepath,
            use_active_collection=False,
            use_selection=True,
            object_types={'CAMERA'},
            bake_anim_simplify_factor=0.0
        )

        bpy.context.scene.unit_settings.scale_length = scale_length

        plugin.deselect_all()

        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'fbx',
            'ext': 'fbx',
            'files': filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '%s' to: %s",
                      instance.name, representation)
@@ -0,0 +1,48 @@
from typing import List

import mathutils

import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action


class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin):
    """Camera must have a keyframe at frame 0.

    Unreal shifts the first keyframe to frame 0. Forcing the camera to have
    a keyframe at frame 0 will ensure that the animation will be the same
    in Unreal and Blender.
    """

    order = openpype.api.ValidateContentsOrder
    hosts = ["blender"]
    families = ["camera"]
    category = "geometry"
    version = (0, 1, 0)
    label = "Zero Keyframe"
    actions = [openpype.hosts.blender.api.action.SelectInvalidAction]

    _identity = mathutils.Matrix()

    @classmethod
    def get_invalid(cls, instance) -> List:
        invalid = []
        for obj in instance:
            if obj.type == "CAMERA":
                if obj.animation_data and obj.animation_data.action:
                    action = obj.animation_data.action
                    frames_set = set()
                    for fcu in action.fcurves:
                        for kp in fcu.keyframe_points:
                            frames_set.add(kp.co[0])
                    frames = sorted(frames_set)
                    if frames[0] != 0.0:
                        invalid.append(obj)
        return invalid

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise RuntimeError(
                f"Camera in instance has no keyframe at frame 0: {invalid}")
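`get_invalid` above gathers every keyframe frame across all f-curves and flags the camera when the earliest one is not 0. The collection step can be exercised without `bpy` using tiny stand-in objects (the names mirror Blender's animation API but these are plain classes, not the real types):

```python
class Keyframe:
    def __init__(self, frame, value):
        self.co = (frame, value)  # Blender stores (frame, value) pairs


class FCurve:
    def __init__(self, keyframe_points):
        self.keyframe_points = keyframe_points


def first_keyframe(fcurves):
    """Return the earliest keyframe frame across all f-curves, or None."""
    frames = {kp.co[0] for fcu in fcurves for kp in fcu.keyframe_points}
    return min(frames) if frames else None


fcurves = [FCurve([Keyframe(10.0, 1.0), Keyframe(0.0, 0.0)]),
           FCurve([Keyframe(5.0, 2.0)])]
print(first_keyframe(fcurves))  # 0.0
```

A set is used because the same frame typically appears on several channels (location X/Y/Z, rotation, …); only the distinct frames matter for the check.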
openpype/hosts/flame/__init__.py (new file, 105 lines)
@@ -0,0 +1,105 @@
from .api.utils import (
    setup
)

from .api.pipeline import (
    install,
    uninstall,
    ls,
    containerise,
    update_container,
    maintained_selection,
    remove_instance,
    list_instances,
    imprint
)

from .api.lib import (
    FlameAppFramework,
    maintain_current_timeline,
    get_project_manager,
    get_current_project,
    get_current_timeline,
    create_bin,
)

from .api.menu import (
    FlameMenuProjectConnect,
    FlameMenuTimeline
)

from .api.workio import (
    open_file,
    save_file,
    current_file,
    has_unsaved_changes,
    file_extensions,
    work_root
)

import os

HOST_DIR = os.path.dirname(
    os.path.abspath(__file__)
)
API_DIR = os.path.join(HOST_DIR, "api")
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")

app_framework = None
apps = []


__all__ = [
    "HOST_DIR",
    "API_DIR",
    "PLUGINS_DIR",
    "PUBLISH_PATH",
    "LOAD_PATH",
    "CREATE_PATH",
    "INVENTORY_PATH",

    "app_framework",
    "apps",

    # pipeline
    "install",
    "uninstall",
    "ls",
    "containerise",
    "update_container",
    "maintained_selection",
    "remove_instance",
    "list_instances",
    "imprint",

    # utils
    "setup",

    # lib
    "FlameAppFramework",
    "maintain_current_timeline",
    "get_project_manager",
    "get_current_project",
    "get_current_timeline",
    "create_bin",

    # menu
    "FlameMenuProjectConnect",
    "FlameMenuTimeline",

    # plugin

    # workio
    "open_file",
    "save_file",
    "current_file",
    "has_unsaved_changes",
    "file_extensions",
    "work_root"
]
openpype/hosts/flame/api/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
"""
OpenPype Autodesk Flame api
"""
openpype/hosts/flame/api/lib.py (new file, 276 lines)
@@ -0,0 +1,276 @@
import sys
import os
import pickle
import contextlib
from pprint import pformat

from openpype.api import Logger

log = Logger().get_logger(__name__)


@contextlib.contextmanager
def io_preferences_file(klass, filepath, write=False):
    try:
        # pickle requires the file to be opened in binary mode
        flag = "wb" if write else "rb"
        yield open(filepath, flag)

    except IOError as _error:
        klass.log.info("Unable to work with preferences `{}`: {}".format(
            filepath, _error))


class FlameAppFramework(object):
    # flameAppFramework class takes care of preferences

    class prefs_dict(dict):

        def __init__(self, master, name, **kwargs):
            self.name = name
            self.master = master
            if not self.master.get(self.name):
                self.master[self.name] = {}
            self.master[self.name].__init__()

        def __getitem__(self, k):
            return self.master[self.name].__getitem__(k)

        def __setitem__(self, k, v):
            return self.master[self.name].__setitem__(k, v)

        def __delitem__(self, k):
            return self.master[self.name].__delitem__(k)

        def get(self, k, default=None):
            return self.master[self.name].get(k, default)

        def setdefault(self, k, default=None):
            return self.master[self.name].setdefault(k, default)

        def pop(self, k, v=object()):
            if v is object():
                return self.master[self.name].pop(k)
            return self.master[self.name].pop(k, v)

        def update(self, mapping=(), **kwargs):
            self.master[self.name].update(mapping, **kwargs)

        def __contains__(self, k):
            return self.master[self.name].__contains__(k)

        def copy(self):  # don't delegate w/ super - dict.copy() -> dict :(
            return type(self)(self)

        def keys(self):
            return self.master[self.name].keys()

        @classmethod
        def fromkeys(cls, keys, v=None):
            return cls.master[cls.name].fromkeys(keys, v)

        def __repr__(self):
            return "{0}({1})".format(
                type(self).__name__, self.master[self.name].__repr__())

        def master_keys(self):
            return self.master.keys()

    def __init__(self):
        self.name = self.__class__.__name__
        self.bundle_name = "OpenPypeFlame"
        # self.prefs scope is limited to flame project and user
        self.prefs = {}
        self.prefs_user = {}
        self.prefs_global = {}
        self.log = log

        try:
            import flame
            self.flame = flame
            self.flame_project_name = self.flame.project.current_project.name
            self.flame_user_name = flame.users.current_user.name
        except Exception:
            self.flame = None
            self.flame_project_name = None
            self.flame_user_name = None

        import socket
        self.hostname = socket.gethostname()

        if sys.platform == "darwin":
            self.prefs_folder = os.path.join(
                os.path.expanduser("~"),
                "Library",
                "Caches",
                "OpenPype",
                self.bundle_name
            )
        elif sys.platform.startswith("linux"):
            self.prefs_folder = os.path.join(
                os.path.expanduser("~"),
                ".OpenPype",
                self.bundle_name)

        self.prefs_folder = os.path.join(
            self.prefs_folder,
            self.hostname,
        )

        self.log.info("[{}] waking up".format(self.__class__.__name__))
        self.load_prefs()

        # menu auto-refresh defaults
        if not self.prefs_global.get("menu_auto_refresh"):
            self.prefs_global["menu_auto_refresh"] = {
                "media_panel": True,
                "batch": True,
                "main_menu": True,
                "timeline_menu": True
            }

        self.apps = []

    def get_pref_file_paths(self):
        prefix = self.prefs_folder + os.path.sep + self.bundle_name
        prefs_file_path = "_".join([
            prefix, self.flame_user_name,
            self.flame_project_name]) + ".prefs"
        prefs_user_file_path = "_".join([
            prefix, self.flame_user_name]) + ".prefs"
        prefs_global_file_path = prefix + ".prefs"

        return (prefs_file_path, prefs_user_file_path, prefs_global_file_path)

    def load_prefs(self):
        (proj_pref_path, user_pref_path,
         glob_pref_path) = self.get_pref_file_paths()

        with io_preferences_file(self, proj_pref_path) as prefs_file:
            self.prefs = pickle.load(prefs_file)
            self.log.info(
                "Project - preferences contents:\n{}".format(
                    pformat(self.prefs)
                ))

        with io_preferences_file(self, user_pref_path) as prefs_file:
            self.prefs_user = pickle.load(prefs_file)
            self.log.info(
                "User - preferences contents:\n{}".format(
                    pformat(self.prefs_user)
                ))

        with io_preferences_file(self, glob_pref_path) as prefs_file:
            self.prefs_global = pickle.load(prefs_file)
            self.log.info(
                "Global - preferences contents:\n{}".format(
                    pformat(self.prefs_global)
                ))

        return True

    def save_prefs(self):
        # make sure the preference folder is available
        if not os.path.isdir(self.prefs_folder):
            try:
                os.makedirs(self.prefs_folder)
            except Exception:
                self.log.info("Unable to create folder {}".format(
                    self.prefs_folder))
                return False

        # get all pref file paths
        (proj_pref_path, user_pref_path,
         glob_pref_path) = self.get_pref_file_paths()

        with io_preferences_file(self, proj_pref_path, True) as prefs_file:
            pickle.dump(self.prefs, prefs_file)
            self.log.info(
                "Project - preferences contents:\n{}".format(
                    pformat(self.prefs)
                ))

        with io_preferences_file(self, user_pref_path, True) as prefs_file:
            pickle.dump(self.prefs_user, prefs_file)
            self.log.info(
                "User - preferences contents:\n{}".format(
                    pformat(self.prefs_user)
                ))

        with io_preferences_file(self, glob_pref_path, True) as prefs_file:
            pickle.dump(self.prefs_global, prefs_file)
            self.log.info(
                "Global - preferences contents:\n{}".format(
                    pformat(self.prefs_global)
                ))

        return True


@contextlib.contextmanager
def maintain_current_timeline(to_timeline, from_timeline=None):
    """Maintain current timeline selection during context

    Attributes:
        from_timeline (resolve.Timeline)[optional]:
    Example:
        >>> print(from_timeline.GetName())
        timeline1
        >>> print(to_timeline.GetName())
        timeline2

        >>> with maintain_current_timeline(to_timeline):
        ...     print(get_current_timeline().GetName())
        timeline2

        >>> print(get_current_timeline().GetName())
        timeline1
    """
    # todo: this is still Resolve's implementation
    project = get_current_project()
    working_timeline = from_timeline or project.GetCurrentTimeline()

    # switch to the input timeline
    project.SetCurrentTimeline(to_timeline)

    try:
        # do the work
        yield
    finally:
        # put the original working timeline back
        project.SetCurrentTimeline(working_timeline)


def get_project_manager():
    # TODO: get_project_manager
    return


def get_media_storage():
    # TODO: get_media_storage
    return


def get_current_project():
    # TODO: get_current_project
    return


def get_current_timeline(new=False):
    # TODO: get_current_timeline
    return


def create_bin(name, root=None):
    # TODO: create_bin
    return


def rescan_hooks():
    import flame
    try:
        flame.execute_shortcut('Rescan Python Hooks')
    except Exception:
        pass
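The `prefs_dict` class above is a view object: each instance proxies one named sub-dictionary inside a shared master dict, so several components can keep isolated preferences in a single pickled structure. A minimal standalone sketch of that pattern (the names here are illustrative, not part of the Flame API):

```python
class SectionDict(dict):
    """Proxy one named section inside a shared master dict."""

    def __init__(self, master, name):
        self.master = master
        self.name = name
        # Lazily create the section on first use
        self.master.setdefault(self.name, {})

    def __getitem__(self, k):
        return self.master[self.name][k]

    def __setitem__(self, k, v):
        self.master[self.name][k] = v

    def get(self, k, default=None):
        return self.master[self.name].get(k, default)


master = {}
menu_prefs = SectionDict(master, "FlameMenuTimeline")
menu_prefs["auto_refresh"] = True

# The write went through the proxy into the shared master dict.
print(master)  # {'FlameMenuTimeline': {'auto_refresh': True}}
```

Because every proxy writes into `master`, pickling `master` alone (as `save_prefs` does with `self.prefs` and friends) persists all sections at once.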
openpype/hosts/flame/api/menu.py (new file, 208 lines)
@@ -0,0 +1,208 @@
import os
from Qt import QtWidgets
from copy import deepcopy

from openpype.tools.utils.host_tools import HostToolsHelper


menu_group_name = 'OpenPype'

default_flame_export_presets = {
    'Publish': {
        'PresetVisibility': 2,
        'PresetType': 0,
        'PresetFile': 'OpenEXR/OpenEXR (16-bit fp PIZ).xml'
    },
    'Preview': {
        'PresetVisibility': 3,
        'PresetType': 2,
        'PresetFile': 'Generate Preview.xml'
    },
    'Thumbnail': {
        'PresetVisibility': 3,
        'PresetType': 0,
        'PresetFile': 'Generate Thumbnail.xml'
    }
}


class _FlameMenuApp(object):
    def __init__(self, framework):
        self.name = self.__class__.__name__
        self.framework = framework
        self.log = framework.log
        self.menu_group_name = menu_group_name
        self.dynamic_menu_data = {}

        # flame module is only available when a
        # flame project is loaded and initialized
        self.flame = None
        try:
            import flame
            self.flame = flame
        except ImportError:
            self.flame = None

        # `flame` may be missing when no project is loaded yet
        if self.flame:
            self.flame_project_name = self.flame.project.current_project.name
        else:
            self.flame_project_name = None
        self.prefs = self.framework.prefs_dict(self.framework.prefs, self.name)
        self.prefs_user = self.framework.prefs_dict(
            self.framework.prefs_user, self.name)
        self.prefs_global = self.framework.prefs_dict(
            self.framework.prefs_global, self.name)

        self.mbox = QtWidgets.QMessageBox()

        self.menu = {
            "actions": [{
                'name': os.getenv("AVALON_PROJECT", "project"),
                'isEnabled': False
            }],
            "name": self.menu_group_name
        }
        self.tools_helper = HostToolsHelper()

    def __getattr__(self, name):
        def method(*args, **kwargs):
            print('calling %s' % name)
        return method

    def rescan(self, *args, **kwargs):
        if not self.flame:
            try:
                import flame
                self.flame = flame
            except ImportError:
                self.flame = None

        if self.flame:
            self.flame.execute_shortcut('Rescan Python Hooks')
            self.log.info('Rescan Python Hooks')


class FlameMenuProjectConnect(_FlameMenuApp):

    # flameMenuProjectconnect app takes care of the preferences dialog as well

    def __init__(self, framework):
        _FlameMenuApp.__init__(self, framework)

    def __getattr__(self, name):
        def method(*args, **kwargs):
            project = self.dynamic_menu_data.get(name)
            if project:
                self.link_project(project)
        return method

    def build_menu(self):
        if not self.flame:
            return []

        flame_project_name = self.flame_project_name
        self.log.info("______ {} ______".format(flame_project_name))

        menu = deepcopy(self.menu)

        menu['actions'].append({
            "name": "Workfiles ...",
            "execute": lambda x: self.tools_helper.show_workfiles()
        })
        menu['actions'].append({
            "name": "Create ...",
            "execute": lambda x: self.tools_helper.show_creator()
        })
        menu['actions'].append({
            "name": "Publish ...",
            "execute": lambda x: self.tools_helper.show_publish()
        })
        menu['actions'].append({
            "name": "Load ...",
            "execute": lambda x: self.tools_helper.show_loader()
        })
        menu['actions'].append({
            "name": "Manage ...",
            "execute": lambda x: self.tools_helper.show_scene_inventory()
        })
        menu['actions'].append({
            "name": "Library ...",
            "execute": lambda x: self.tools_helper.show_library_loader()
        })
        return menu

    def get_projects(self, *args, **kwargs):
        pass

    def refresh(self, *args, **kwargs):
        self.rescan()

    def rescan(self, *args, **kwargs):
        if not self.flame:
            try:
                import flame
                self.flame = flame
            except ImportError:
                self.flame = None

        if self.flame:
            self.flame.execute_shortcut('Rescan Python Hooks')
            self.log.info('Rescan Python Hooks')


class FlameMenuTimeline(_FlameMenuApp):

    # flameMenuTimeline app takes care of the preferences dialog as well

    def __init__(self, framework):
        _FlameMenuApp.__init__(self, framework)

    def __getattr__(self, name):
        def method(*args, **kwargs):
            project = self.dynamic_menu_data.get(name)
            if project:
                self.link_project(project)
        return method

    def build_menu(self):
        if not self.flame:
            return []

        flame_project_name = self.flame_project_name
        self.log.info("______ {} ______".format(flame_project_name))

        menu = deepcopy(self.menu)

        menu['actions'].append({
            "name": "Create ...",
            "execute": lambda x: self.tools_helper.show_creator()
        })
        menu['actions'].append({
            "name": "Publish ...",
            "execute": lambda x: self.tools_helper.show_publish()
        })
        menu['actions'].append({
            "name": "Load ...",
            "execute": lambda x: self.tools_helper.show_loader()
        })
        menu['actions'].append({
            "name": "Manage ...",
            "execute": lambda x: self.tools_helper.show_scene_inventory()
        })

        return menu

    def get_projects(self, *args, **kwargs):
        pass

    def refresh(self, *args, **kwargs):
        self.rescan()

    def rescan(self, *args, **kwargs):
        if not self.flame:
            try:
                import flame
                self.flame = flame
            except ImportError:
                self.flame = None

        if self.flame:
            self.flame.execute_shortcut('Rescan Python Hooks')
            self.log.info('Rescan Python Hooks')
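`build_menu` above starts from a `deepcopy` of the template menu so that appending actions never mutates `self.menu` across rebuilds. A small standalone sketch of that pattern (no Flame or Qt dependency; the action names are illustrative):

```python
from copy import deepcopy

template = {"name": "OpenPype",
            "actions": [{"name": "project", "isEnabled": False}]}


def build_menu(tmpl):
    menu = deepcopy(tmpl)  # isolate this build from the shared template
    menu["actions"].append({"name": "Publish ..."})
    return menu


first = build_menu(template)
second = build_menu(template)

# The template still has only its original action after two builds.
print(len(template["actions"]), len(first["actions"]))  # 1 2
```

A shallow `dict(tmpl)` would not be enough here: the nested `actions` list would still be shared, and every rebuild would keep appending to it.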
openpype/hosts/flame/api/pipeline.py (new file, 155 lines)
@@ -0,0 +1,155 @@
"""
|
||||
Basic avalon integration
|
||||
"""
|
||||
import contextlib
|
||||
from avalon import api as avalon
|
||||
from pyblish import api as pyblish
|
||||
from openpype.api import Logger
|
||||
|
||||
AVALON_CONTAINERS = "AVALON_CONTAINERS"
|
||||
|
||||
log = Logger().get_logger(__name__)
|
||||
|
||||
|
||||
def install():
|
||||
from .. import (
|
||||
PUBLISH_PATH,
|
||||
LOAD_PATH,
|
||||
CREATE_PATH,
|
||||
INVENTORY_PATH
|
||||
)
|
||||
# TODO: install
|
||||
|
||||
# Disable all families except for the ones we explicitly want to see
|
||||
family_states = [
|
||||
"imagesequence",
|
||||
"render2d",
|
||||
"plate",
|
||||
"render",
|
||||
"mov",
|
||||
"clip"
|
||||
]
|
||||
avalon.data["familiesStateDefault"] = False
|
||||
avalon.data["familiesStateToggled"] = family_states
|
||||
|
||||
log.info("openpype.hosts.flame installed")
|
||||
|
||||
pyblish.register_host("flame")
|
||||
pyblish.register_plugin_path(PUBLISH_PATH)
|
||||
log.info("Registering Flame plug-ins..")
|
||||
|
||||
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
|
||||
avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
|
||||
avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
|
||||
|
||||
# register callback for switching publishable
|
||||
pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)
|
||||
|
||||
|
||||
def uninstall():
|
||||
from .. import (
|
||||
PUBLISH_PATH,
|
||||
LOAD_PATH,
|
||||
CREATE_PATH,
|
||||
INVENTORY_PATH
|
||||
)
|
||||
|
||||
# TODO: uninstall
|
||||
pyblish.deregister_host("flame")
|
||||
pyblish.deregister_plugin_path(PUBLISH_PATH)
|
||||
log.info("Deregistering DaVinci Resovle plug-ins..")
|
||||
|
||||
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
|
||||
avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
|
||||
avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
|
||||
|
||||
# register callback for switching publishable
|
||||
pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled)
|
||||
|
||||
|
||||
def containerise(tl_segment,
|
||||
name,
|
||||
namespace,
|
||||
context,
|
||||
loader=None,
|
||||
data=None):
|
||||
# TODO: containerise
|
||||
pass
|
||||
|
||||
|
||||
def ls():
|
||||
"""List available containers.
|
||||
"""
|
||||
# TODO: ls
|
||||
pass
|
||||
|
||||
|
||||
def parse_container(tl_segment, validate=True):
|
||||
"""Return container data from timeline_item's openpype tag.
|
||||
"""
|
||||
# TODO: parse_container
|
||||
pass
|
||||
|
||||
|
||||
def update_container(tl_segment, data=None):
|
||||
"""Update container data to input timeline_item's openpype tag.
|
||||
"""
|
||||
# TODO: update_container
|
||||
pass
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def maintained_selection():
|
||||
"""Maintain selection during context
|
||||
|
||||
Example:
|
||||
>>> with maintained_selection():
|
||||
... node['selected'].setValue(True)
|
||||
>>> print(node['selected'].value())
|
||||
False
|
||||
"""
|
||||
# TODO: maintained_selection + remove undo steps
|
||||
|
||||
try:
|
||||
# do the operation
|
||||
yield
|
||||
finally:
|
||||
pass
|
||||
|
||||
|
||||
def reset_selection():
|
||||
"""Deselect all selected nodes
|
||||
"""
|
||||
pass
|
||||
|
||||
|
||||
def on_pyblish_instance_toggled(instance, old_value, new_value):
|
||||
"""Toggle node passthrough states on instance toggles."""
|
||||
|
||||
log.info("instance toggle: {}, old_value: {}, new_value:{} ".format(
|
||||
instance, old_value, new_value))
|
||||
|
||||
# from openpype.hosts.resolve import (
|
||||
# set_publish_attribute
|
||||
# )
|
||||
|
||||
# # Whether instances should be passthrough based on new value
|
||||
# timeline_item = instance.data["item"]
|
||||
# set_publish_attribute(timeline_item, new_value)
|
||||
|
||||
|
||||
def remove_instance(instance):
|
||||
"""Remove instance marker from track item."""
|
||||
# TODO: remove_instance
|
||||
pass
|
||||
|
||||
|
||||
def list_instances():
|
||||
"""List all created instances from current workfile."""
|
||||
# TODO: list_instances
|
||||
pass
|
||||
|
||||
|
||||
def imprint(item, data=None):
|
||||
# TODO: imprint
|
||||
pass
|
||||
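`maintained_selection` above is still a stub; the usual shape of such a context manager is snapshot, yield, restore-in-finally. A generic sketch of that pattern with a plain list standing in for the host's selection (illustrative only, not the Flame API):

```python
import contextlib

selection = ["clipA"]  # stand-in for the host's current selection


@contextlib.contextmanager
def maintained_selection():
    saved = list(selection)  # snapshot the current selection
    try:
        yield
    finally:
        selection[:] = saved  # restore even if the body raised


with maintained_selection():
    selection.append("clipB")  # temporary change inside the context

print(selection)  # ['clipA']
```

Doing the restore in `finally` is the important part: the selection comes back even when the wrapped operation raises, which is what the docstring's example relies on.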
openpype/hosts/flame/api/plugin.py (new file, 3 lines)
@@ -0,0 +1,3 @@
# Creator plugin functions
# Publishing plugin functions
# Loader plugin functions
openpype/hosts/flame/api/utils.py (new file, 108 lines)
@@ -0,0 +1,108 @@
"""
|
||||
Flame utils for syncing scripts
|
||||
"""
|
||||
|
||||
import os
|
||||
import shutil
|
||||
from openpype.api import Logger
|
||||
log = Logger().get_logger(__name__)
|
||||
|
||||
|
||||
def _sync_utility_scripts(env=None):
|
||||
""" Synchronizing basic utlility scripts for flame.
|
||||
|
||||
To be able to run start OpenPype within Flame we have to copy
|
||||
all utility_scripts and additional FLAME_SCRIPT_DIR into
|
||||
`/opt/Autodesk/shared/python`. This will be always synchronizing those
|
||||
folders.
|
||||
"""
|
||||
from .. import HOST_DIR
|
||||
|
||||
env = env or os.environ
|
||||
|
||||
# initiate inputs
|
||||
scripts = {}
|
||||
fsd_env = env.get("FLAME_SCRIPT_DIRS", "")
|
||||
flame_shared_dir = "/opt/Autodesk/shared/python"
|
||||
|
||||
fsd_paths = [os.path.join(
|
||||
HOST_DIR,
|
||||
"utility_scripts"
|
||||
)]
|
||||
|
||||
# collect script dirs
|
||||
    log.info("FLAME_SCRIPT_DIRS: `{fsd_env}`".format(**locals()))
    log.info("fsd_paths: `{fsd_paths}`".format(**locals()))

    # add application environment setting for FLAME_SCRIPT_DIR
    # to script path search
    for _dirpath in fsd_env.split(os.pathsep):
        if not os.path.isdir(_dirpath):
            log.warning("Path is not a valid dir: `{_dirpath}`".format(
                **locals()))
            continue
        fsd_paths.append(_dirpath)

    # collect scripts from dirs
    for path in fsd_paths:
        scripts.update({path: os.listdir(path)})

    remove_black_list = []
    for _k, s_list in scripts.items():
        remove_black_list += s_list

    log.info("remove_black_list: `{remove_black_list}`".format(**locals()))
    log.info("Additional Flame script paths: `{fsd_paths}`".format(**locals()))
    log.info("Flame Scripts: `{scripts}`".format(**locals()))

    # make sure no unmanaged script file is left in the shared folder
    for _itm in os.listdir(flame_shared_dir):
        skip = False

        # skip all scripts and folders which are not maintained
        if _itm not in remove_black_list:
            skip = True

        # do not skip if `pyc` is in the extension
        _itm_path = os.path.join(flame_shared_dir, _itm)
        if not os.path.isdir(_itm_path) and "pyc" in os.path.splitext(_itm)[-1]:
            skip = False

        # continue if skip is true
        if skip:
            continue

        log.info("Removing `{_itm_path}`...".format(**locals()))
        if os.path.isdir(_itm_path):
            shutil.rmtree(_itm_path, onerror=None)
        else:
            os.remove(_itm_path)

    # copy scripts into Flame's shared utility scripts dir
    for dirpath, scriptlist in scripts.items():
        # directory and scripts list
        for _script in scriptlist:
            # script in script list
            src = os.path.join(dirpath, _script)
            dst = os.path.join(flame_shared_dir, _script)
            log.info("Copying `{src}` to `{dst}`...".format(**locals()))
            if os.path.isdir(src):
                shutil.copytree(
                    src, dst, symlinks=False,
                    ignore=None, ignore_dangling_symlinks=False
                )
            else:
                shutil.copy2(src, dst)
def setup(env=None):
    """Wrapper installer started from
    `flame/hooks/pre_flame_setup.py`
    """
    env = env or os.environ

    # synchronize Flame utility scripts
    _sync_utility_scripts(env)

    log.info("Flame OpenPype wrapper has been installed")
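The synchronization above splits a pathsep-joined environment value and keeps only paths that exist on disk. A minimal standalone sketch of that filtering step (directory names and the injectable `isdir` check are illustrative, not from the source):

```python
import os


def split_script_dirs(env_value, isdir=os.path.isdir):
    """Split a pathsep-joined env value, keeping only existing directories."""
    paths = []
    for dirpath in env_value.split(os.pathsep):
        # skip empty entries and paths that are not valid directories
        if not dirpath or not isdir(dirpath):
            continue
        paths.append(dirpath)
    return paths
```

Injecting `isdir` keeps the sketch testable without touching the filesystem.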
37
openpype/hosts/flame/api/workio.py
Normal file
@@ -0,0 +1,37 @@
"""Host API required Work Files tool"""

import os
from openpype.api import Logger
# from .. import (
#     get_project_manager,
#     get_current_project
# )


log = Logger().get_logger(__name__)

exported_project_ext = ".otoc"


def file_extensions():
    return [exported_project_ext]


def has_unsaved_changes():
    pass


def save_file(filepath):
    pass


def open_file(filepath):
    pass


def current_file():
    pass


def work_root(session):
    return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")
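The only implemented function in this stub module normalizes the work directory and forces forward slashes, which Flame expects. A quick illustration of the same normalization (the session dict value is a hypothetical example):

```python
import os


def work_root(session):
    # normalize and force forward slashes, as the Flame workio module does
    return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")


# e.g. a redundant ".." segment is collapsed
root = work_root({"AVALON_WORKDIR": "/proj/shots/../work"})
```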
132
openpype/hosts/flame/hooks/pre_flame_setup.py
Normal file
@@ -0,0 +1,132 @@
import os
import json
import tempfile
import contextlib
from openpype.lib import (
    PreLaunchHook, get_openpype_username)
from openpype.hosts import flame as opflame
import openpype
from pprint import pformat


class FlamePrelaunch(PreLaunchHook):
    """Flame prelaunch hook

    Will make sure flame_script_dirs are copied to user's folder defined
    in environment var FLAME_SCRIPT_DIR.
    """
    app_groups = ["flame"]

    # todo: replace version number with avalon launch app version
    flame_python_exe = "/opt/Autodesk/python/2021/bin/python2.7"

    wtc_script_path = os.path.join(
        opflame.HOST_DIR, "scripts", "wiretap_com.py")

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        self.signature = "( {} )".format(self.__class__.__name__)

    def execute(self):
        """Hook entry method."""
        project_doc = self.data["project_doc"]
        user_name = get_openpype_username()

        self.log.debug("Collected user \"{}\"".format(user_name))
        self.log.info(pformat(project_doc))
        _db_p_data = project_doc["data"]
        width = _db_p_data["resolutionWidth"]
        height = _db_p_data["resolutionHeight"]
        fps = int(_db_p_data["fps"])

        project_data = {
            "Name": project_doc["name"],
            "Nickname": _db_p_data["code"],
            "Description": "Created by OpenPype",
            "SetupDir": project_doc["name"],
            "FrameWidth": int(width),
            "FrameHeight": int(height),
            "AspectRatio": float((width / height) * _db_p_data["pixelAspect"]),
            "FrameRate": "{} fps".format(fps),
            "FrameDepth": "16-bit fp",
            "FieldDominance": "PROGRESSIVE"
        }

        data_to_script = {
            # from settings
            "host_name": "localhost",
            "volume_name": "stonefs",
            "group_name": "staff",
            "color_policy": "ACES 1.1",

            # from project
            "project_name": project_doc["name"],
            "user_name": user_name,
            "project_data": project_data
        }
        app_arguments = self._get_launch_arguments(data_to_script)

        self.log.info(pformat(dict(self.launch_context.env)))

        opflame.setup(self.launch_context.env)

        self.launch_context.launch_args.extend(app_arguments)

    def _get_launch_arguments(self, script_data):
        # Dump data to string
        dumped_script_data = json.dumps(script_data)

        with make_temp_file(dumped_script_data) as tmp_json_path:
            # Prepare subprocess arguments
            args = [
                self.flame_python_exe,
                self.wtc_script_path,
                tmp_json_path
            ]
            self.log.info("Executing: {}".format(" ".join(args)))

            process_kwargs = {
                "logger": self.log,
                "env": {}
            }

            openpype.api.run_subprocess(args, **process_kwargs)

            # process returned json file to pass launch args
            with open(tmp_json_path) as json_stream:
                return_json_data = json_stream.read()
            returned_data = json.loads(return_json_data)
            app_args = returned_data.get("app_args")
            self.log.info("____ app_args: `{}`".format(app_args))

            if not app_args:
                raise RuntimeError("App arguments were not solved")

        return app_args


@contextlib.contextmanager
def make_temp_file(data):
    try:
        # Store dumped json to temporary file
        temporary_json_file = tempfile.NamedTemporaryFile(
            mode="w", suffix=".json", delete=False
        )
        temporary_json_file.write(data)
        temporary_json_file.close()
        temporary_json_filepath = temporary_json_file.name.replace(
            "\\", "/"
        )

        yield temporary_json_filepath

    except IOError as _error:
        raise IOError(
            "Not able to create temp json file: {}".format(_error)
        )

    finally:
        # Remove the temporary json
        os.remove(temporary_json_filepath)
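The `make_temp_file` context manager above implements a common pattern: write data to a named temp file, yield its path to the caller, and remove the file on exit. A standalone sketch of the same pattern (names are illustrative, not the hook's own):

```python
import contextlib
import json
import os
import tempfile


@contextlib.contextmanager
def temp_json(data):
    """Write `data` as JSON to a temp file, yield its path, remove it after."""
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False)
    try:
        json.dump(data, tmp)
        tmp.close()
        # forward slashes, matching the hook's path normalization
        yield tmp.name.replace("\\", "/")
    finally:
        os.remove(tmp.name)


with temp_json({"project_name": "demo"}) as path:
    with open(path) as stream:
        loaded = json.load(stream)
    kept_path = path
```

Because cleanup lives in `finally`, the file is removed even if the caller's block raises.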
490
openpype/hosts/flame/scripts/wiretap_com.py
Normal file
@@ -0,0 +1,490 @@
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-

from __future__ import absolute_import
import os
import sys
import subprocess
import json
import xml.dom.minidom as minidom
from copy import deepcopy
import datetime

try:
    from libwiretapPythonClientAPI import (
        WireTapClientInit)
except ImportError:
    flame_python_path = "/opt/Autodesk/flame_2021/python"
    flame_exe_path = (
        "/opt/Autodesk/flame_2021/bin/flame.app"
        "/Contents/MacOS/startApp")

    sys.path.append(flame_python_path)

from libwiretapPythonClientAPI import (
    WireTapClientInit,
    WireTapClientUninit,
    WireTapNodeHandle,
    WireTapServerHandle,
    WireTapInt,
    WireTapStr
)


class WireTapCom(object):
    """
    Communicator class wrapper for talking to the WireTap db.

    This way we are able to set up a new project with settings and
    the correct colorspace policy. We are also able to create a new user
    or get an actual user with a similar name (users usually clone
    their profiles and add a date stamp as a suffix).
    """

    def __init__(self, host_name=None, volume_name=None, group_name=None):
        """Initialisation of WireTap communication class

        Args:
            host_name (str, optional): Name of host server. Defaults to None.
            volume_name (str, optional): Name of volume. Defaults to None.
            group_name (str, optional): Name of user group. Defaults to None.
        """
        # set main attributes of server
        # if none are set, use the default installation
        self.host_name = host_name or "localhost"
        self.volume_name = volume_name or "stonefs"
        self.group_name = group_name or "staff"

        # initialize WireTap client
        WireTapClientInit()

        # add the server to shared variable
        self._server = WireTapServerHandle("{}:IFFFS".format(self.host_name))
        print("WireTap connected at '{}'...".format(
            self.host_name))

    def close(self):
        self._server = None
        WireTapClientUninit()
        print("WireTap closed...")
    def get_launch_args(
            self, project_name, project_data, user_name, *args, **kwargs):
        """Forming launch arguments for OpenPype launcher.

        Args:
            project_name (str): name of project
            project_data (dict): Flame compatible project data
            user_name (str): name of user

        Returns:
            list: arguments
        """

        workspace_name = kwargs.get("workspace_name")
        color_policy = kwargs.get("color_policy")

        self._project_prep(project_name)
        self._set_project_settings(project_name, project_data)
        self._set_project_colorspace(project_name, color_policy)
        user_name = self._user_prep(user_name)

        if workspace_name is None:
            # default workspace
            print("Using a default workspace")
            return [
                "--start-project={}".format(project_name),
                "--start-user={}".format(user_name),
                "--create-workspace"
            ]

        else:
            print(
                "Using a custom workspace '{}'".format(workspace_name))

            self._workspace_prep(project_name, workspace_name)
            return [
                "--start-project={}".format(project_name),
                "--start-user={}".format(user_name),
                "--create-workspace",
                "--start-workspace={}".format(workspace_name)
            ]

    def _workspace_prep(self, project_name, workspace_name):
        """Preparing a workspace

        In case it does not exist it will create one

        Args:
            project_name (str): project name
            workspace_name (str): workspace name

        Raises:
            AttributeError: unable to create workspace
        """
        workspace_exists = self._child_is_in_parent_path(
            "/projects/{}".format(project_name), workspace_name, "WORKSPACE"
        )
        if not workspace_exists:
            project = WireTapNodeHandle(
                self._server, "/projects/{}".format(project_name))

            workspace_node = WireTapNodeHandle()
            created_workspace = project.createNode(
                workspace_name, "WORKSPACE", workspace_node)

            if not created_workspace:
                raise AttributeError(
                    "Cannot create workspace `{}` in "
                    "project `{}`: `{}`".format(
                        workspace_name, project_name, project.lastError())
                )

        print(
            "Workspace `{}` is successfully created".format(workspace_name))
    def _project_prep(self, project_name):
        """Preparing a project

        In case it does not exist it will create one

        Args:
            project_name (str): project name

        Raises:
            AttributeError: unable to create project
        """
        # test if project exists
        project_exists = self._child_is_in_parent_path(
            "/projects", project_name, "PROJECT")

        if not project_exists:
            volumes = self._get_all_volumes()

            if len(volumes) == 0:
                raise AttributeError(
                    "Not able to create new project. No volumes exist"
                )

            # check if the volume exists
            if self.volume_name not in volumes:
                raise AttributeError(
                    ("Volume '{}' does not exist in '{}'").format(
                        self.volume_name, volumes)
                )

            # form cmd arguments
            project_create_cmd = [
                os.path.join(
                    "/opt/Autodesk/",
                    "wiretap",
                    "tools",
                    "2021",
                    "wiretap_create_node",
                ),
                '-n',
                os.path.join("/volumes", self.volume_name),
                '-d',
                project_name,
                '-g',
            ]

            project_create_cmd.append(self.group_name)

            print(project_create_cmd)

            exit_code = subprocess.call(
                project_create_cmd,
                cwd=os.path.expanduser('~'))

            if exit_code != 0:
                raise RuntimeError("Cannot create project in flame db")

            print(
                "A new project '{}' is created.".format(project_name))

    def _get_all_volumes(self):
        """Request all available volumes from WireTap

        Returns:
            list: all available volumes in server

        Raises:
            AttributeError: unable to get any volume children from server
        """
        root = WireTapNodeHandle(self._server, "/volumes")
        children_num = WireTapInt(0)

        get_children_num = root.getNumChildren(children_num)
        if not get_children_num:
            raise AttributeError(
                "Cannot get number of volumes: {}".format(root.lastError())
            )

        volumes = []

        # go through all children and get volume names
        child_obj = WireTapNodeHandle()
        for child_idx in range(children_num):

            # get a child
            if not root.getChild(child_idx, child_obj):
                raise AttributeError(
                    "Unable to get child: {}".format(root.lastError()))

            node_name = WireTapStr()
            get_children_name = child_obj.getDisplayName(node_name)

            if not get_children_name:
                raise AttributeError(
                    "Unable to get child name: {}".format(
                        child_obj.lastError())
                )

            volumes.append(node_name.c_str())

        return volumes
    def _user_prep(self, user_name):
        """Ensuring user exists in user's stack

        Args:
            user_name (str): name of a user

        Raises:
            AttributeError: unable to create user
        """

        # get all used usernames in db
        used_names = self._get_usernames()
        print(">> used_names: {}".format(used_names))

        # filter only those which share the input user name
        filtered_users = [user for user in used_names if user_name in user]

        if filtered_users:
            # todo: need to find the most recently created, following a
            # regex pattern for the date used in the name
            return filtered_users.pop()

        # create new user name with date in suffix
        now = datetime.datetime.now()  # current date and time
        date = now.strftime("%Y%m%d")
        new_user_name = "{}_{}".format(user_name, date)
        print(new_user_name)

        if not self._child_is_in_parent_path("/users", new_user_name, "USER"):
            # Create the new user
            users = WireTapNodeHandle(self._server, "/users")

            user_node = WireTapNodeHandle()
            created_user = users.createNode(new_user_name, "USER", user_node)
            if not created_user:
                raise AttributeError(
                    "User {} cannot be created: {}".format(
                        new_user_name, users.lastError())
                )

            print("User `{}` is created".format(new_user_name))
        return new_user_name

    def _get_usernames(self):
        """Requesting all available users from WireTap

        Returns:
            list: all available user names

        Raises:
            AttributeError: there are no users in server
        """
        root = WireTapNodeHandle(self._server, "/users")
        children_num = WireTapInt(0)

        get_children_num = root.getNumChildren(children_num)
        if not get_children_num:
            raise AttributeError(
                "Cannot get number of users: {}".format(root.lastError())
            )

        usernames = []

        # go through all children and get user names
        child_obj = WireTapNodeHandle()
        for child_idx in range(children_num):

            # get a child
            if not root.getChild(child_idx, child_obj):
                raise AttributeError(
                    "Unable to get child: {}".format(root.lastError()))

            node_name = WireTapStr()
            get_children_name = child_obj.getDisplayName(node_name)

            if not get_children_name:
                raise AttributeError(
                    "Unable to get child name: {}".format(
                        child_obj.lastError())
                )

            usernames.append(node_name.c_str())

        return usernames
    def _child_is_in_parent_path(self, parent_path, child_name, child_type):
        """Checking if a given child is in parent path.

        Args:
            parent_path (str): db path to parent
            child_name (str): name of child
            child_type (str): type of child

        Raises:
            AttributeError: Not able to get number of children
            AttributeError: Not able to get children from parent
            AttributeError: Not able to get child name
            AttributeError: Not able to get child type

        Returns:
            bool: True if child is in parent path
        """
        parent = WireTapNodeHandle(self._server, parent_path)

        # get number of children
        children_num = WireTapInt(0)
        requested = parent.getNumChildren(children_num)
        if not requested:
            raise AttributeError((
                "Error: Cannot request number of "
                "children from the node {}. Make sure your "
                "wiretap service is running: {}").format(
                    parent_path, parent.lastError())
            )

        # iterate children
        child_obj = WireTapNodeHandle()
        for child_idx in range(children_num):
            if not parent.getChild(child_idx, child_obj):
                raise AttributeError(
                    "Cannot get child: {}".format(
                        parent.lastError()))

            node_name = WireTapStr()
            node_type = WireTapStr()

            if not child_obj.getDisplayName(node_name):
                raise AttributeError(
                    "Unable to get child name: %s" % child_obj.lastError()
                )
            if not child_obj.getNodeTypeStr(node_type):
                raise AttributeError(
                    "Unable to obtain child type: %s" % child_obj.lastError()
                )

            if (node_name.c_str() == child_name) and (
                    node_type.c_str() == child_type):
                return True

        return False
    def _set_project_settings(self, project_name, project_data):
        """Setting project attributes.

        Args:
            project_name (str): name of project
            project_data (dict): data with project attributes
                (flame compatible)

        Raises:
            AttributeError: Not able to set project attributes
        """
        # generate xml from the project_data dict
        _xml = "<Project>"
        for key, value in project_data.items():
            _xml += "<{}>{}</{}>".format(key, value, key)
        _xml += "</Project>"

        pretty_xml = minidom.parseString(_xml).toprettyxml()
        print("__ xml: {}".format(pretty_xml))

        # set project data to wiretap
        project_node = WireTapNodeHandle(
            self._server, "/projects/{}".format(project_name))

        if not project_node.setMetaData("XML", _xml):
            raise AttributeError(
                "Not able to set project attributes {}. Error: {}".format(
                    project_name, project_node.lastError())
            )

        print("Project settings successfully set.")

    def _set_project_colorspace(self, project_name, color_policy):
        """Set project's colorspace policy.

        Args:
            project_name (str): name of project
            color_policy (str): name of policy

        Raises:
            RuntimeError: Not able to set colorspace policy
        """
        color_policy = color_policy or "Legacy"
        project_colorspace_cmd = [
            os.path.join(
                "/opt/Autodesk/",
                "wiretap",
                "tools",
                "2021",
                "wiretap_duplicate_node",
            ),
            "-s",
            "/syncolor/policies/Autodesk/{}".format(color_policy),
            "-n",
            "/projects/{}/syncolor".format(project_name)
        ]

        print(project_colorspace_cmd)

        exit_code = subprocess.call(
            project_colorspace_cmd,
            cwd=os.path.expanduser('~'))

        if exit_code != 0:
            raise RuntimeError("Cannot set colorspace {} on project {}".format(
                color_policy, project_name
            ))
if __name__ == "__main__":
    # get json exchange data
    json_path = sys.argv[-1]
    with open(json_path) as json_stream:
        json_data = json_stream.read()
    in_data = json.loads(json_data)
    out_data = deepcopy(in_data)

    # get main server attributes
    host_name = in_data.pop("host_name")
    volume_name = in_data.pop("volume_name")
    group_name = in_data.pop("group_name")

    # initialize class
    wiretap_handler = WireTapCom(host_name, volume_name, group_name)

    try:
        app_args = wiretap_handler.get_launch_args(
            project_name=in_data.pop("project_name"),
            project_data=in_data.pop("project_data"),
            user_name=in_data.pop("user_name"),
            **in_data
        )
    finally:
        wiretap_handler.close()

    # set returned args back to out data
    out_data.update({
        "app_args": app_args
    })

    # write it back to the exchange json file
    with open(json_path, "w") as file_stream:
        json.dump(out_data, file_stream, indent=4)
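The `__main__` block above implements a JSON file exchange: the prelaunch hook writes a payload, this script pops the server keys, resolves launch arguments, and writes the result back under `"app_args"`. A minimal sketch of that round trip (the payload values are placeholders, not real settings):

```python
import json
from copy import deepcopy

# hypothetical payload as written by the prelaunch hook
in_data = {
    "host_name": "localhost",
    "volume_name": "stonefs",
    "group_name": "staff",
    "project_name": "demo_project",
    "user_name": "artist",
    "project_data": {},
}

# the script keeps the original payload and adds the resolved launch args
out_data = deepcopy(in_data)
out_data["app_args"] = [
    "--start-project={}".format(in_data["project_name"]),
    "--start-user={}".format(in_data["user_name"]),
    "--create-workspace",
]
payload = json.dumps(out_data, indent=4)
```

Writing the answer back into the same file keeps the exchange compatible with the Python 2.7 subprocess that Flame ships.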
191
openpype/hosts/flame/utility_scripts/openpype_in_flame.py
Normal file
@@ -0,0 +1,191 @@
from __future__ import print_function
import sys
from Qt import QtWidgets
from pprint import pformat
import atexit
import openpype
import avalon
import openpype.hosts.flame as opflame

flh = sys.modules[__name__]
flh._project = None


def openpype_install():
    """Registering OpenPype in context
    """
    openpype.install()
    avalon.api.install(opflame)
    print("Avalon registered hosts: {}".format(
        avalon.api.registered_host()))


# Exception handler
def exception_handler(exctype, value, _traceback):
    """Exception handler for improving UX

    Args:
        exctype (str): type of exception
        value (str): exception value
        _traceback (str): traceback to show
    """
    import traceback
    msg = "OpenPype: Python exception {} in {}".format(value, exctype)
    mbox = QtWidgets.QMessageBox()
    mbox.setText(msg)
    mbox.setDetailedText(
        pformat(traceback.format_exception(exctype, value, _traceback)))
    mbox.setStyleSheet('QLabel{min-width: 800px;}')
    mbox.exec_()
    sys.__excepthook__(exctype, value, _traceback)


# add exception handler into sys module
sys.excepthook = exception_handler


# register clean up logic to be called at Flame exit
def cleanup():
    """Cleaning up Flame framework context
    """
    if opflame.apps:
        print('`{}` cleaning up apps:\n {}\n'.format(
            __file__, pformat(opflame.apps)))
        while len(opflame.apps):
            app = opflame.apps.pop()
            print('`{}` removing : {}'.format(__file__, app.name))
            del app
        opflame.apps = []

    if opflame.app_framework:
        print('PYTHON\t: %s cleaning up' % opflame.app_framework.bundle_name)
        opflame.app_framework.save_prefs()
        opflame.app_framework = None


atexit.register(cleanup)
def load_apps():
    """Load available apps into Flame framework
    """
    opflame.apps.append(opflame.FlameMenuProjectConnect(opflame.app_framework))
    opflame.apps.append(opflame.FlameMenuTimeline(opflame.app_framework))
    opflame.app_framework.log.info("Apps are loaded")


def project_changed_dict(info):
    """Hook for project change action

    Args:
        info (str): info text
    """
    cleanup()


def app_initialized(parent=None):
    """Initialization of Framework

    Args:
        parent (obj, optional): Parent object. Defaults to None.
    """
    opflame.app_framework = opflame.FlameAppFramework()

    print("{} initializing".format(
        opflame.app_framework.bundle_name))

    load_apps()


"""
Initialisation of the hook starts from here.

First it needs to test if it can import the flame module.
This will happen only in case a project has been loaded.
Then `app_initialized` will load the main Framework which will load
all menu objects as apps.
"""

try:
    import flame  # noqa
    app_initialized(parent=None)
except ImportError:
    print("!!!! not able to import flame module !!!!")


def rescan_hooks():
    import flame  # noqa
    flame.execute_shortcut('Rescan Python Hooks')


def _build_app_menu(app_name):
    """Flame menu object generator

    Args:
        app_name (str): name of menu object app

    Returns:
        list: menu object
    """
    menu = []

    # first find the relevant app by its class name
    app = None
    for _app in opflame.apps:
        if _app.__class__.__name__ == app_name:
            app = _app

    if app:
        menu.append(app.build_menu())

    if opflame.app_framework:
        menu_auto_refresh = opflame.app_framework.prefs_global.get(
            'menu_auto_refresh', {})
        if menu_auto_refresh.get('timeline_menu', True):
            try:
                import flame  # noqa
                flame.schedule_idle_event(rescan_hooks)
            except ImportError:
                print("!!!! not able to import flame module !!!!")

    return menu


""" Flame hooks are starting here
"""


def project_saved(project_name, save_time, is_auto_save):
    """Hook to activate when project is saved

    Args:
        project_name (str): name of project
        save_time (str): time when it was saved
        is_auto_save (bool): autosave is on or off
    """
    if opflame.app_framework:
        opflame.app_framework.save_prefs()


def get_main_menu_custom_ui_actions():
    """Hook to create submenu in start menu

    Returns:
        list: menu object
    """
    # install openpype and the host
    openpype_install()

    return _build_app_menu("FlameMenuProjectConnect")


def get_timeline_custom_ui_actions():
    """Hook to create submenu in timeline

    Returns:
        list: menu object
    """
    # install openpype and the host
    openpype_install()

    return _build_app_menu("FlameMenuTimeline")
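`_build_app_menu` resolves the app by matching its class name against the registered app instances. A standalone sketch of that lookup pattern (the stand-in classes below are illustrative, not the real menu apps):

```python
class FlameMenuProjectConnect(object):  # stand-in for the real menu app
    def build_menu(self):
        return {"name": "OpenPype: Project connect"}


class FlameMenuTimeline(object):  # stand-in for the real menu app
    def build_menu(self):
        return {"name": "OpenPype: Timeline"}


apps = [FlameMenuProjectConnect(), FlameMenuTimeline()]


def build_app_menu(app_name):
    """Collect menu objects from apps whose class name matches `app_name`."""
    menu = []
    for app in apps:
        if app.__class__.__name__ == app_name:
            menu.append(app.build_menu())
    return menu
```

Matching by class name lets each Flame hook (`get_main_menu_custom_ui_actions`, `get_timeline_custom_ui_actions`) pick its own menu from the same shared app list.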
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
"""Houdini specific Avalon/Pyblish plugin definitions."""
import sys
from avalon.api import CreatorError
from avalon import houdini
import six

@@ -8,7 +9,7 @@ import hou
from openpype.api import PypeCreatorMixin


class OpenPypeCreatorError(Exception):
class OpenPypeCreatorError(CreatorError):
    pass

@@ -4,8 +4,8 @@ import contextlib
import logging
from Qt import QtCore, QtGui
from avalon.tools.widgets import AssetWidget
from avalon import style
from openpype.tools.utils.widgets import AssetWidget
from avalon import style, io

from pxr import Sdf

@@ -31,7 +31,7 @@ def pick_asset(node):
    # Construct the AssetWidget as a frameless popup so it automatically
    # closes when clicked outside of it.
    global tool
    tool = AssetWidget(silo_creatable=False)
    tool = AssetWidget(io)
    tool.setContentsMargins(5, 5, 5, 5)
    tool.setWindowTitle("Pick Asset")
    tool.setStyleSheet(style.load_stylesheet())

@@ -41,8 +41,6 @@ def pick_asset(node):
    # Select the current asset if there is any
    name = parm.eval()
    if name:
        from avalon import io

        db_asset = io.find_one({"name": name, "type": "asset"})
        if db_asset:
            silo = db_asset.get("silo")
96
openpype/hosts/houdini/plugins/create/create_hda.py
Normal file
@@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
from openpype.hosts.houdini.api import plugin
from avalon.houdini import lib
from avalon import io
import hou


class CreateHDA(plugin.Creator):
    """Publish Houdini Digital Asset file."""

    name = "hda"
    label = "Houdini Digital Asset (Hda)"
    family = "hda"
    icon = "gears"
    maintain_selection = False

    def __init__(self, *args, **kwargs):
        super(CreateHDA, self).__init__(*args, **kwargs)
        self.data.pop("active", None)

    def _check_existing(self, subset_name):
        # type: (str) -> bool
        """Check if versions of the subset name already exist."""
        # Get all subsets of the current asset
        asset_id = io.find_one({"name": self.data["asset"], "type": "asset"},
                               projection={"_id": True})['_id']
        subset_docs = io.find(
            {
                "type": "subset",
                "parent": asset_id
            }, {"name": 1}
        )
        existing_subset_names = set(subset_docs.distinct("name"))
        existing_subset_names_low = {
            _name.lower() for _name in existing_subset_names
        }
        return subset_name.lower() in existing_subset_names_low

    def _process(self, instance):
        subset_name = self.data["subset"]
        # get selected nodes
        out = hou.node("/obj")
        self.nodes = hou.selectedNodes()

        if (self.options or {}).get("useSelection") and self.nodes:
            # if we have `use selection` enabled and we have some
            # selected nodes ...
            to_hda = self.nodes[0]
            if len(self.nodes) > 1:
                # if there is more than one node, create a subnet first
                subnet = out.createNode(
                    "subnet", node_name="{}_subnet".format(self.name))
                to_hda = subnet
        else:
            # in case of no selection, just create subnet node
            subnet = out.createNode(
                "subnet", node_name="{}_subnet".format(self.name))
            subnet.moveToGoodPosition()
            to_hda = subnet

        if not to_hda.type().definition():
            # if the node type has no definition, it is not a user
            # created hda. We test if an hda can be created from the node.
            if not to_hda.canCreateDigitalAsset():
                raise Exception(
                    "cannot create hda from node {}".format(to_hda))

            hda_node = to_hda.createDigitalAsset(
                name=subset_name,
                hda_file_name="$HIP/{}.hda".format(subset_name)
            )
            hou.moveNodesTo(self.nodes, hda_node)
            hda_node.layoutChildren()
        else:
            if self._check_existing(subset_name):
                raise plugin.OpenPypeCreatorError(
                    ("subset {} is already published with a different HDA "
                     "definition.").format(subset_name))
            hda_node = to_hda

        hda_node.setName(subset_name)

        # delete node created by Avalon in /out
        # this needs to be addressed in future Houdini workflow refactor.
        hou.node("/out/{}".format(subset_name)).destroy()

        try:
            lib.imprint(hda_node, self.data)
        except hou.OperationFailed:
            raise plugin.OpenPypeCreatorError(
                ("Cannot set metadata on asset. Might be that it already is "
                 "an OpenPype asset.")
            )

        return hda_node
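`_check_existing` guards against case-only name collisions by lowercasing both sides before comparing. The core of that check, extracted as a standalone sketch (names are illustrative):

```python
def subset_exists(subset_name, existing_names):
    """Case-insensitive membership test over published subset names."""
    existing_low = {name.lower() for name in existing_names}
    return subset_name.lower() in existing_low
```

Lowercasing both sides matters because HDA definitions are keyed by name and a case-only variant would silently shadow an existing publish.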
62
openpype/hosts/houdini/plugins/load/load_hda.py
Normal file
@@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
from avalon import api

from avalon.houdini import pipeline


class HdaLoader(api.Loader):
    """Load Houdini Digital Asset file."""

    families = ["hda"]
    label = "Load Hda"
    representations = ["hda"]
    order = -10
    icon = "code-fork"
    color = "orange"

    def load(self, context, name=None, namespace=None, data=None):
        import os
        import hou

        # Format file name, Houdini only wants forward slashes
        file_path = os.path.normpath(self.fname)
        file_path = file_path.replace("\\", "/")

        # Get the root node
        obj = hou.node("/obj")

        # Create a unique name
        counter = 1
        namespace = namespace or context["asset"]["name"]
        formatted = "{}_{}".format(namespace, name) if namespace else name
        node_name = "{0}_{1:03d}".format(formatted, counter)

        hou.hda.installFile(file_path)
        hda_node = obj.createNode(name, node_name)

        self[:] = [hda_node]

        return pipeline.containerise(
            node_name,
            namespace,
            [hda_node],
            context,
            self.__class__.__name__,
            suffix="",
        )

    def update(self, container, representation):
        import hou

        hda_node = container["node"]
        file_path = api.get_representation_path(representation)
        file_path = file_path.replace("\\", "/")
        hou.hda.installFile(file_path)
        defs = hda_node.type().allInstalledDefinitions()
        def_paths = [d.libraryFilePath() for d in defs]
        new = def_paths.index(file_path)
        defs[new].setIsPreferred(True)

    def remove(self, container):
        node = container["node"]
        node.destroy()
|
|
@@ -23,8 +23,10 @@ class CollectInstanceActiveState(pyblish.api.InstancePlugin):
             return

         # Check bypass state and reverse
+        active = True
         node = instance[0]
-        active = not node.isBypassed()
+        if hasattr(node, "isBypassed"):
+            active = not node.isBypassed()

         # Set instance active state
         instance.data.update(
@@ -31,6 +31,7 @@ class CollectInstances(pyblish.api.ContextPlugin):
     def process(self, context):

         nodes = hou.node("/out").children()
+        nodes += hou.node("/obj").children()

         # Include instances in USD stage only when it exists so it
         # remains backwards compatible with version before houdini 18
@@ -49,9 +50,12 @@ class CollectInstances(pyblish.api.ContextPlugin):
             has_family = node.evalParm("family")
             assert has_family, "'%s' is missing 'family'" % node.name()

+            self.log.info("processing {}".format(node))
+
             data = lib.read(node)
             # Check bypass state and reverse
-            data.update({"active": not node.isBypassed()})
+            if hasattr(node, "isBypassed"):
+                data.update({"active": not node.isBypassed()})

             # temporarily translation of `active` to `publish` till issue has
             # been resolved, https://github.com/pyblish/pyblish-base/issues/307
openpype/hosts/houdini/plugins/publish/extract_hda.py (new file, +43)
@@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
import os

from pprint import pformat

import pyblish.api
import openpype.api


class ExtractHDA(openpype.api.Extractor):

    order = pyblish.api.ExtractorOrder
    label = "Extract HDA"
    hosts = ["houdini"]
    families = ["hda"]

    def process(self, instance):
        self.log.info(pformat(instance.data))
        hda_node = instance[0]
        hda_def = hda_node.type().definition()
        hda_options = hda_def.options()
        hda_options.setSaveInitialParmsAndContents(True)

        next_version = instance.data["anatomyData"]["version"]
        self.log.info("setting version: {}".format(next_version))
        hda_def.setVersion(str(next_version))
        hda_def.setOptions(hda_options)
        hda_def.save(hda_def.libraryFilePath(), hda_node, hda_options)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        file = os.path.basename(hda_def.libraryFilePath())
        staging_dir = os.path.dirname(hda_def.libraryFilePath())
        self.log.info("Using HDA from {}".format(hda_def.libraryFilePath()))

        representation = {
            'name': 'hda',
            'ext': 'hda',
            'files': file,
            "stagingDir": staging_dir,
        }
        instance.data["representations"].append(representation)
@@ -35,5 +35,5 @@ class ValidateBypassed(pyblish.api.InstancePlugin):
     def get_invalid(cls, instance):

         rop = instance[0]
-        if rop.isBypassed():
+        if hasattr(rop, "isBypassed") and rop.isBypassed():
             return [rop]
@@ -275,8 +275,7 @@ def on_open(_):

     # Show outdated pop-up
     def _on_show_inventory():
-        import avalon.tools.sceneinventory as tool
-        tool.show(parent=parent)
+        host_tools.show_scene_inventory(parent=parent)

     dialog = popup.Popup(parent=parent)
     dialog.setWindowTitle("Maya scene has outdated content")
@@ -437,7 +437,8 @@ def empty_sets(sets, force=False):
         cmds.connectAttr(src, dest)

     # Restore original members
-    for origin_set, members in original.iteritems():
+    _iteritems = getattr(original, "iteritems", original.items)
+    for origin_set, members in _iteritems():
         cmds.sets(members, forceElement=origin_set)
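The `getattr(mapping, "iteritems", mapping.items)` pattern that this and the following Maya hunks introduce is a Python 2/3 compatibility shim: on Python 2 `dict.iteritems()` is the lazy iterator, on Python 3 it no longer exists and `dict.items()` returns a lazy view. A minimal pure-Python sketch (names are illustrative, not OpenPype API):

```python
def iter_dict(mapping):
    # Pick whichever iteration method the running interpreter provides:
    # iteritems() on Python 2, items() on Python 3.
    _iteritems = getattr(mapping, "iteritems", mapping.items)
    return _iteritems()


original = {"set_a": ["nodeA"], "set_b": ["nodeB"]}
# Same loop shape as the restored-members loops in these hunks.
restored = {key: members for key, members in iter_dict(original)}
```

On Python 3 `getattr` simply falls back to `items`, so the same loop body runs unchanged on both interpreters.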
@@ -581,7 +582,7 @@ def get_shader_assignments_from_shapes(shapes, components=True):

     # Build a mapping from parent to shapes to include in lookup.
     transforms = {shape.rsplit("|", 1)[0]: shape for shape in shapes}
-    lookup = set(shapes + transforms.keys())
+    lookup = set(shapes) | set(transforms.keys())

     component_assignments = defaultdict(list)
     for shading_group in assignments.keys():
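The `lookup` fix above is another Python 3 issue: `list + dict.keys()` concatenation worked on Python 2 (where `keys()` returned a list) but raises `TypeError` on Python 3, where `keys()` is a view. Building the union of two sets works on both. A small sketch with hypothetical DAG paths:

```python
shapes = ["|grp|shapeA", "|grp|shapeB"]

# Map each shape's parent transform path to the shape (as in the hunk).
transforms = {shape.rsplit("|", 1)[0]: shape for shape in shapes}

# Python 3-safe union instead of `set(shapes + transforms.keys())`.
lookup = set(shapes) | set(transforms.keys())
```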
@@ -669,7 +670,8 @@ def displaySmoothness(nodes,
         yield
     finally:
         # Revert state
-        for node, state in originals.iteritems():
+        _iteritems = getattr(originals, "iteritems", originals.items)
+        for node, state in _iteritems():
            if state:
                cmds.displaySmoothness(node, **state)
@@ -712,7 +714,8 @@ def no_display_layers(nodes):
         yield
     finally:
         # Restore original members
-        for layer, members in original.iteritems():
+        _iteritems = getattr(original, "iteritems", original.items)
+        for layer, members in _iteritems():
             cmds.editDisplayLayerMembers(layer, members, noRecurse=True)
@@ -5,6 +5,7 @@ import os
 import contextlib
 import copy

+import six
 from maya import cmds

 from avalon import api, io
@@ -69,7 +70,8 @@ def unlocked(nodes):
         yield
     finally:
         # Reapply original states
-        for uuid, state in states.iteritems():
+        _iteritems = getattr(states, "iteritems", states.items)
+        for uuid, state in _iteritems():
             nodes_from_id = cmds.ls(uuid, long=True)
             if nodes_from_id:
                 node = nodes_from_id[0]
@@ -94,7 +96,7 @@ def load_package(filepath, name, namespace=None):
     # Define a unique namespace for the package
     namespace = os.path.basename(filepath).split(".")[0]
     unique_namespace(namespace)
-    assert isinstance(namespace, basestring)
+    assert isinstance(namespace, six.string_types)

    # Load the setdress package data
    with open(filepath, "r") as fp:
@@ -183,7 +183,8 @@ class ExtractFBX(openpype.api.Extractor):
             # Apply the FBX overrides through MEL since the commands
             # only work correctly in MEL according to online
             # available discussions on the topic
-            for option, value in options.iteritems():
+            _iteritems = getattr(options, "iteritems", options.items)
+            for option, value in _iteritems():
                 key = option[0].upper() + option[1:]  # uppercase first letter

                 # Boolean must be passed as lower-case strings
@@ -383,7 +383,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
             "attributes": {
                 "environmental_variables": {
                     "value": ", ".join("{!s}={!r}".format(k, v)
-                                       for (k, v) in env.iteritems()),
+                                       for (k, v) in env.items()),

                     "state": True,
                     "subst": False
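The Muster submitter serializes the environment mapping into one string using format conversions: `{!s}` applies `str()` to the key and `{!r}` applies `repr()` to the value. A sketch with a hypothetical environment (sorting added here only to make the output deterministic):

```python
env = {"JOB": "shot010", "PRIORITY": 50}

# {!s} -> str(key), {!r} -> repr(value); repr() quotes strings but not ints.
value = ", ".join("{!s}={!r}".format(k, v)
                  for (k, v) in sorted(env.items()))
```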
@@ -2,6 +2,8 @@ import pyblish.api
 import openpype.api
 import string

+import six
+
 # Allow only characters, numbers and underscore
 allowed = set(string.ascii_lowercase +
               string.ascii_uppercase +
@@ -29,7 +31,7 @@ class ValidateSubsetName(pyblish.api.InstancePlugin):
             raise RuntimeError("Instance is missing subset "
                                "name: {0}".format(subset))

-        if not isinstance(subset, basestring):
+        if not isinstance(subset, six.string_types):
             raise TypeError("Instance subset name must be string, "
                             "got: {0} ({1})".format(subset, type(subset)))
@@ -52,7 +52,8 @@ class ValidateNodeIdsUnique(pyblish.api.InstancePlugin):

         # Take only the ids with more than one member
         invalid = list()
-        for _ids, members in ids.iteritems():
+        _iteritems = getattr(ids, "iteritems", ids.items)
+        for _ids, members in _iteritems():
             if len(members) > 1:
                 cls.log.error("ID found on multiple nodes: '%s'" % members)
                 invalid.extend(members)
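The validator's core idea, independent of Maya, is grouping nodes by their id attribute and flagging every group with more than one member. A pure-Python sketch (node names and id values are hypothetical stand-ins for the attribute the plugin reads from the scene):

```python
from collections import defaultdict


def find_duplicate_id_members(node_ids):
    # Group node names by their id value.
    ids = defaultdict(list)
    for node, id_value in sorted(node_ids.items()):
        ids[id_value].append(node)

    # Keep only members of ids that occur on more than one node.
    invalid = []
    for _id, members in ids.items():
        if len(members) > 1:
            invalid.extend(members)
    return invalid


dupes = find_duplicate_id_members(
    {"pCube1": "abc", "pCube2": "abc", "pSphere1": "def"})
```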
@@ -32,7 +32,10 @@ class ValidateNodeNoGhosting(pyblish.api.InstancePlugin):
         nodes = cmds.ls(instance, long=True, type=['transform', 'shape'])
         invalid = []
         for node in nodes:
-            for attr, required_value in cls._attributes.iteritems():
+            _iteritems = getattr(
+                cls._attributes, "iteritems", cls._attributes.items
+            )
+            for attr, required_value in _iteritems():
                 if cmds.attributeQuery(attr, node=node, exists=True):

                     value = cmds.getAttr('{0}.{1}'.format(node, attr))
@@ -33,7 +33,8 @@ class ValidateShapeRenderStats(pyblish.api.Validator):
         shapes = cmds.ls(instance, long=True, type='surfaceShape')
         invalid = []
         for shape in shapes:
-            for attr, default_value in cls.defaults.iteritems():
+            _iteritems = getattr(cls.defaults, "iteritems", cls.defaults.items)
+            for attr, default_value in _iteritems():
                 if cmds.attributeQuery(attr, node=shape, exists=True):
                     value = cmds.getAttr('{}.{}'.format(shape, attr))
                     if value != default_value:

@@ -52,7 +53,8 @@ class ValidateShapeRenderStats(pyblish.api.Validator):
     @classmethod
     def repair(cls, instance):
         for shape in cls.get_invalid(instance):
-            for attr, default_value in cls.defaults.iteritems():
+            _iteritems = getattr(cls.defaults, "iteritems", cls.defaults.items)
+            for attr, default_value in _iteritems():

                 if cmds.attributeQuery(attr, node=shape, exists=True):
                     plug = '{0}.{1}'.format(shape, attr)
@@ -6,7 +6,6 @@ from openpype.hosts.photoshop.plugins.lib import get_unique_layer_name

 stub = photoshop.stub()

-
 class ImageLoader(api.Loader):
     """Load images

@@ -21,7 +20,7 @@ class ImageLoader(api.Loader):
                                            context["asset"]["name"],
                                            name)
         with photoshop.maintained_selection():
-            layer = stub.import_smart_object(self.fname, layer_name)
+            layer = self.import_layer(self.fname, layer_name)

         self[:] = [layer]
         namespace = namespace or layer_name
@@ -45,8 +44,9 @@ class ImageLoader(api.Loader):
         layer_name = "{}_{}".format(context["asset"], context["subset"])
         # switching assets
         if namespace_from_container != layer_name:
-            layer_name = self._get_unique_layer_name(context["asset"],
-                                                     context["subset"])
+            layer_name = get_unique_layer_name(stub.get_layers(),
+                                               context["asset"],
+                                               context["subset"])
         else:  # switching version - keep same name
             layer_name = container["namespace"]
@@ -72,3 +72,6 @@ class ImageLoader(api.Loader):

     def switch(self, container, representation):
         self.update(container, representation)
+
+    def import_layer(self, file_name, layer_name):
+        return stub.import_smart_object(file_name, layer_name)
openpype/hosts/photoshop/plugins/load/load_reference.py (new file, +82)
@@ -0,0 +1,82 @@
import re

from avalon import api, photoshop

from openpype.hosts.photoshop.plugins.lib import get_unique_layer_name

stub = photoshop.stub()


class ReferenceLoader(api.Loader):
    """Load reference images

    Stores the imported asset in a container named after the asset.

    Inheriting from 'load_image' didn't work because of
    "Cannot write to closing transport", possible refactor.
    """

    families = ["image", "render"]
    representations = ["*"]

    def load(self, context, name=None, namespace=None, data=None):
        layer_name = get_unique_layer_name(stub.get_layers(),
                                           context["asset"]["name"],
                                           name)
        with photoshop.maintained_selection():
            layer = self.import_layer(self.fname, layer_name)

        self[:] = [layer]
        namespace = namespace or layer_name

        return photoshop.containerise(
            name,
            namespace,
            layer,
            context,
            self.__class__.__name__
        )

    def update(self, container, representation):
        """ Switch asset or change version """
        layer = container.pop("layer")

        context = representation.get("context", {})

        namespace_from_container = re.sub(r'_\d{3}$', '',
                                          container["namespace"])
        layer_name = "{}_{}".format(context["asset"], context["subset"])
        # switching assets
        if namespace_from_container != layer_name:
            layer_name = get_unique_layer_name(stub.get_layers(),
                                               context["asset"],
                                               context["subset"])
        else:  # switching version - keep same name
            layer_name = container["namespace"]

        path = api.get_representation_path(representation)
        with photoshop.maintained_selection():
            stub.replace_smart_object(
                layer, path, layer_name
            )

        stub.imprint(
            layer, {"representation": str(representation["_id"])}
        )

    def remove(self, container):
        """
        Removes element from scene: deletes layer + removes from Headline
        Args:
            container (dict): container to be removed - used to get layer_id
        """
        layer = container.pop("layer")
        stub.imprint(layer, {})
        stub.delete_layer(layer.id)

    def switch(self, container, representation):
        self.update(container, representation)

    def import_layer(self, file_name, layer_name):
        return stub.import_smart_object(file_name, layer_name,
                                        as_reference=True)
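The `re.sub(r'_\d{3}$', '', ...)` call in `update` strips the three-digit counter that `get_unique_layer_name` appends (e.g. `Hero_image_001`), so the container's namespace can be compared against the incoming asset/subset pair. A standalone sketch of that stripping step (the function name is illustrative):

```python
import re


def strip_counter_suffix(namespace):
    # Remove a trailing "_NNN" counter (exactly three digits) if present;
    # names without the counter are returned unchanged.
    return re.sub(r'_\d{3}$', '', namespace)
```

Note the `{3}` quantifier: a two-digit suffix such as `_12` is deliberately left alone.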
openpype/hosts/photoshop/plugins/publish/closePS.py (new file, +30)
@@ -0,0 +1,30 @@
# -*- coding: utf-8 -*-
"""Close PS after publish. For Webpublishing only."""
import os

import pyblish.api

from avalon import photoshop


class ClosePS(pyblish.api.ContextPlugin):
    """Close PS after publish. For Webpublishing only."""

    order = pyblish.api.IntegratorOrder + 14
    label = "Close PS"
    optional = True
    active = True

    hosts = ["photoshop"]

    def process(self, context):
        self.log.info("ClosePS")
        if not os.environ.get("IS_HEADLESS"):
            return

        stub = photoshop.stub()
        self.log.info("Shutting down PS")
        stub.save()
        stub.close()
        self.log.info("PS closed")
@@ -0,0 +1,135 @@
import pyblish.api
import os
import re

from avalon import photoshop
from openpype.lib import prepare_template_data
from openpype.lib.plugin_tools import parse_json


class CollectRemoteInstances(pyblish.api.ContextPlugin):
    """Gather instances by the configured color code of a layer.

    Used in remote publishing when artists mark publishable layers by color-
    coding.

    Identifier:
        id (str): "pyblish.avalon.instance"
    """
    order = pyblish.api.CollectorOrder + 0.100

    label = "Instances"
    order = pyblish.api.CollectorOrder
    hosts = ["photoshop"]

    # configurable by Settings
    color_code_mapping = []

    def process(self, context):
        self.log.info("CollectRemoteInstances")
        self.log.info("mapping:: {}".format(self.color_code_mapping))
        if not os.environ.get("IS_HEADLESS"):
            self.log.debug("Not headless publishing, skipping.")
            return

        # parse variant if used in webpublishing, comes from webpublisher batch
        batch_dir = os.environ.get("OPENPYPE_PUBLISH_DATA")
        variant = "Main"
        if batch_dir and os.path.exists(batch_dir):
            # TODO check if batch manifest is same as tasks manifests
            task_data = parse_json(os.path.join(batch_dir,
                                                "manifest.json"))
            if not task_data:
                raise ValueError(
                    "Cannot parse batch meta in {} folder".format(batch_dir))
            variant = task_data["variant"]

        stub = photoshop.stub()
        layers = stub.get_layers()

        instance_names = []
        for layer in layers:
            self.log.info("Layer:: {}".format(layer))
            resolved_family, resolved_subset_template = self._resolve_mapping(
                layer
            )
            self.log.info("resolved_family {}".format(resolved_family))
            self.log.info("resolved_subset_template {}".format(
                resolved_subset_template))

            if not resolved_subset_template or not resolved_family:
                self.log.debug("!!! Not marked, skip")
                continue

            if layer.parents:
                self.log.debug("!!! Not a top layer, skip")
                continue

            instance = context.create_instance(layer.name)
            instance.append(layer)
            instance.data["family"] = resolved_family
            instance.data["publish"] = layer.visible
            instance.data["asset"] = context.data["assetEntity"]["name"]
            instance.data["task"] = context.data["taskType"]

            fill_pairs = {
                "variant": variant,
                "family": instance.data["family"],
                "task": instance.data["task"],
                "layer": layer.name
            }
            subset = resolved_subset_template.format(
                **prepare_template_data(fill_pairs))
            instance.data["subset"] = subset

            instance_names.append(layer.name)

            # Produce diagnostic message for any graphical
            # user interface interested in visualising it.
            self.log.info("Found: \"%s\" " % instance.data["name"])
            self.log.info("instance: {} ".format(instance.data))

        if len(instance_names) != len(set(instance_names)):
            self.log.warning("Duplicate instances found. " +
                             "Remove unwanted via SubsetManager")

    def _resolve_mapping(self, layer):
        """Matches 'layer' color code and name to mapping.

        If both color code AND name regex is configured, BOTH must be valid
        If layer matches to multiple mappings, only first is used!
        """
        family_list = []
        family = None
        subset_name_list = []
        resolved_subset_template = None
        for mapping in self.color_code_mapping:
            if mapping["color_code"] and \
                    layer.color_code not in mapping["color_code"]:
                continue

            if mapping["layer_name_regex"] and \
                    not any(re.search(pattern, layer.name)
                            for pattern in mapping["layer_name_regex"]):
                continue

            family_list.append(mapping["family"])
            subset_name_list.append(mapping["subset_template_name"])

        if len(subset_name_list) > 1:
            self.log.warning("Multiple mappings found for '{}'".
                             format(layer.name))
            self.log.warning("Only first subset name template used!")
            subset_name_list[:] = subset_name_list[:1]

        if len(family_list) > 1:
            self.log.warning("Multiple mappings found for '{}'".
                             format(layer.name))
            self.log.warning("Only first family used!")
            family_list[:] = family_list[:1]
        if subset_name_list:
            resolved_subset_template = subset_name_list.pop()
        if family_list:
            family = family_list.pop()

        return family, resolved_subset_template
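The mapping resolution above can be reduced to a small pure-Python function: a mapping matches when its configured color codes contain the layer's color AND at least one of its name regexes hits (an empty configuration means "do not filter on that criterion"), and the first match wins. A sketch with hypothetical layer data, not the plugin's actual class:

```python
import re


def resolve_mapping(layer_name, color_code, mappings):
    # First mapping whose color-code list and name-regex list both accept
    # the layer wins; empty lists act as wildcards.
    for mapping in mappings:
        if mapping["color_code"] and color_code not in mapping["color_code"]:
            continue
        if mapping["layer_name_regex"] and not any(
                re.search(pattern, layer_name)
                for pattern in mapping["layer_name_regex"]):
            continue
        return mapping["family"], mapping["subset_template_name"]
    return None, None


mappings = [{"color_code": ["red"],
             "layer_name_regex": ["^CH_"],
             "family": "image",
             "subset_template_name": "image{Layer}"}]
family, template = resolve_mapping("CH_hero", "red", mappings)
```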
@@ -12,7 +12,7 @@ class ExtractImage(openpype.api.Extractor):

     label = "Extract Image"
     hosts = ["photoshop"]
-    families = ["image"]
+    families = ["image", "background"]
     formats = ["png", "jpg"]

     def process(self, instance):
@@ -3,7 +3,7 @@ import json
 import pyblish.api

 from avalon import io
-from openpype.lib import get_subset_name
+from openpype.lib import get_subset_name_with_asset_doc


 class CollectBulkMovInstances(pyblish.api.InstancePlugin):
@@ -26,16 +26,10 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
         context = instance.context
         asset_name = instance.data["asset"]

-        asset_doc = io.find_one(
-            {
-                "type": "asset",
-                "name": asset_name
-            },
-            {
-                "_id": 1,
-                "data.tasks": 1
-            }
-        )
+        asset_doc = io.find_one({
+            "type": "asset",
+            "name": asset_name
+        })
         if not asset_doc:
             raise AssertionError((
                 "Couldn't find Asset document with name \"{}\""
@@ -53,11 +47,11 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
                 task_name = available_task_names[_task_name_low]
                 break

-        subset_name = get_subset_name(
+        subset_name = get_subset_name_with_asset_doc(
             self.new_instance_family,
             self.subset_name_variant,
             task_name,
-            asset_doc["_id"],
+            asset_doc,
             io.Session["AVALON_PROJECT"]
         )
         instance_name = f"{asset_name}_{subset_name}"
@@ -22,15 +22,15 @@ class ValidateSources(pyblish.api.InstancePlugin):
     def process(self, instance):
         self.log.info("instance {}".format(instance.data))

-        for repr in instance.data["representations"]:
+        for repre in instance.data.get("representations") or []:
             files = []
-            if isinstance(repr["files"], str):
-                files.append(repr["files"])
+            if isinstance(repre["files"], str):
+                files.append(repre["files"])
             else:
-                files = list(repr["files"])
+                files = list(repre["files"])

             for file_name in files:
-                source_file = os.path.join(repr["stagingDir"],
+                source_file = os.path.join(repre["stagingDir"],
                                            file_name)

                 if not os.path.exists(source_file):
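Besides renaming `repr` (which shadows the builtin) and tolerating a missing `representations` key, the validator normalizes the `"files"` entry, which may be a single file name or a sequence of names. That normalization step is easy to isolate (the helper name is illustrative):

```python
def normalize_files(files):
    # A representation's "files" value is either one file name (str)
    # or an iterable of names; always return a fresh list.
    if isinstance(files, str):
        return [files]
    return list(files)
```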
@@ -4,7 +4,7 @@ import copy
 import pyblish.api
 from avalon import io

-from openpype.lib import get_subset_name
+from openpype.lib import get_subset_name_with_asset_doc


 class CollectInstances(pyblish.api.ContextPlugin):
@@ -70,16 +70,10 @@ class CollectInstances(pyblish.api.ContextPlugin):
         # - not sure if it's good idea to require asset id in
         #   get_subset_name?
         asset_name = context.data["workfile_context"]["asset"]
-        asset_doc = io.find_one(
-            {
-                "type": "asset",
-                "name": asset_name
-            },
-            {"_id": 1}
-        )
-        asset_id = None
-        if asset_doc:
-            asset_id = asset_doc["_id"]
+        asset_doc = io.find_one({
+            "type": "asset",
+            "name": asset_name
+        })

         # Project name from workfile context
         project_name = context.data["workfile_context"]["project"]
@@ -88,11 +82,11 @@ class CollectInstances(pyblish.api.ContextPlugin):
         # Use empty variant value
         variant = ""
         task_name = io.Session["AVALON_TASK"]
-        new_subset_name = get_subset_name(
+        new_subset_name = get_subset_name_with_asset_doc(
             family,
             variant,
             task_name,
-            asset_id,
+            asset_doc,
             project_name,
             host_name
         )
@@ -3,7 +3,7 @@ import json
 import pyblish.api
 from avalon import io

-from openpype.lib import get_subset_name
+from openpype.lib import get_subset_name_with_asset_doc


 class CollectWorkfile(pyblish.api.ContextPlugin):
@@ -28,16 +28,10 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
         # get_subset_name?
         family = "workfile"
         asset_name = context.data["workfile_context"]["asset"]
-        asset_doc = io.find_one(
-            {
-                "type": "asset",
-                "name": asset_name
-            },
-            {"_id": 1}
-        )
-        asset_id = None
-        if asset_doc:
-            asset_id = asset_doc["_id"]
+        asset_doc = io.find_one({
+            "type": "asset",
+            "name": asset_name
+        })

         # Project name from workfile context
         project_name = context.data["workfile_context"]["project"]
@@ -46,11 +40,11 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
         # Use empty variant value
         variant = ""
         task_name = io.Session["AVALON_TASK"]
-        subset_name = get_subset_name(
+        subset_name = get_subset_name_with_asset_doc(
             family,
             variant,
             task_name,
-            asset_id,
+            asset_doc,
             project_name,
             host_name
         )
@@ -253,6 +253,7 @@ def create_unreal_project(project_name: str,
         "Plugins": [
             {"Name": "PythonScriptPlugin", "Enabled": True},
             {"Name": "EditorScriptingUtilities", "Enabled": True},
+            {"Name": "SequencerScripting", "Enabled": True},
             {"Name": "Avalon", "Enabled": True}
         ]
     }
@@ -6,7 +6,9 @@ from pathlib import Path
 from openpype.lib import (
     PreLaunchHook,
     ApplicationLaunchFailed,
-    ApplicationNotFound
+    ApplicationNotFound,
+    get_workdir_data,
+    get_workfile_template_key
 )
 from openpype.hosts.unreal.api import lib as unreal_lib
@@ -25,13 +27,46 @@ class UnrealPrelaunchHook(PreLaunchHook):

         self.signature = "( {} )".format(self.__class__.__name__)

+    def _get_work_filename(self):
+        # Use last workfile if was found
+        if self.data.get("last_workfile_path"):
+            last_workfile = Path(self.data.get("last_workfile_path"))
+            if last_workfile and last_workfile.exists():
+                return last_workfile.name
+
+        # Prepare data for fill data and for getting workfile template key
+        task_name = self.data["task_name"]
+        anatomy = self.data["anatomy"]
+        asset_doc = self.data["asset_doc"]
+        project_doc = self.data["project_doc"]
+
+        asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
+        task_info = asset_tasks.get(task_name) or {}
+        task_type = task_info.get("type")
+
+        workdir_data = get_workdir_data(
+            project_doc, asset_doc, task_name, self.host_name
+        )
+        # QUESTION raise exception if version is part of filename template?
+        workdir_data["version"] = 1
+        workdir_data["ext"] = "uproject"
+
+        # Get workfile template key for current context
+        workfile_template_key = get_workfile_template_key(
+            task_type,
+            self.host_name,
+            project_name=project_doc["name"]
+        )
+        # Fill templates
+        filled_anatomy = anatomy.format(workdir_data)
+
+        # Return filename
+        return filled_anatomy[workfile_template_key]["file"]
+
     def execute(self):
         """Hook entry method."""
         asset_name = self.data["asset_name"]
         task_name = self.data["task_name"]
         workdir = self.launch_context.env["AVALON_WORKDIR"]
         engine_version = self.app_name.split("/")[-1].replace("-", ".")
-        unreal_project_name = f"{asset_name}_{task_name}"
         try:
             if int(engine_version.split(".")[0]) < 4 and \
                     int(engine_version.split(".")[1]) < 26:
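The hook parses a version such as "4.26" out of the app name and compares major and minor parts separately. Comparing the parts as an integer tuple expresses "older than 4.26" in one step; note that this is not the same predicate as the hunk's `major < 4 and minor < 26`, which only fires when both parts are below their thresholds. A hedged sketch of the tuple approach (helper name is illustrative):

```python
def is_older_than(version_str, minimum=(4, 26)):
    # Compare (major, minor) lexicographically, so e.g. (4, 25) < (4, 26)
    # and (5, 0) is not older than (4, 26).
    parts = tuple(int(p) for p in version_str.split(".")[:2])
    return parts < minimum
```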
@@ -45,6 +80,8 @@ class UnrealPrelaunchHook(PreLaunchHook):
             # so lets keep it quite.
             ...

+        unreal_project_filename = self._get_work_filename()
+        unreal_project_name = os.path.splitext(unreal_project_filename)[0]
         # Unreal is sensitive about project names longer then 20 chars
         if len(unreal_project_name) > 20:
             self.log.warning((
@@ -89,10 +126,10 @@ class UnrealPrelaunchHook(PreLaunchHook):
         ue4_path = unreal_lib.get_editor_executable_path(
             Path(detected[engine_version]))

-        self.launch_context.launch_args.append(ue4_path.as_posix())
+        self.launch_context.launch_args = [ue4_path.as_posix()]
         project_path.mkdir(parents=True, exist_ok=True)

-        project_file = project_path / f"{unreal_project_name}.uproject"
+        project_file = project_path / unreal_project_filename
         if not project_file.is_file():
             engine_path = detected[engine_version]
             self.log.info((
openpype/hosts/unreal/plugins/create/create_camera.py (new file, +43)
@@ -0,0 +1,43 @@
import unreal
from unreal import EditorAssetLibrary as eal
from unreal import EditorLevelLibrary as ell

from openpype.hosts.unreal.api.plugin import Creator
from avalon.unreal import (
    instantiate,
)


class CreateCamera(Creator):
    """Layout output for character rigs"""

    name = "layoutMain"
    label = "Camera"
    family = "camera"
    icon = "cubes"

    root = "/Game/Avalon/Instances"
    suffix = "_INS"

    def __init__(self, *args, **kwargs):
        super(CreateCamera, self).__init__(*args, **kwargs)

    def process(self):
        data = self.data

        name = data["subset"]

        data["level"] = ell.get_editor_world().get_path_name()

        if not eal.does_directory_exist(self.root):
            eal.make_directory(self.root)

        factory = unreal.LevelSequenceFactoryNew()
        tools = unreal.AssetToolsHelpers().get_asset_tools()
        tools.create_asset(name, f"{self.root}/{name}", None, factory)

        asset_name = f"{self.root}/{name}/{name}.{name}"

        data["members"] = [asset_name]

        instantiate(f"{self.root}", name, data, None, self.suffix)
openpype/hosts/unreal/plugins/load/load_camera.py (new file, +206)
@@ -0,0 +1,206 @@
import os

from avalon import api, io, pipeline
from avalon.unreal import lib
from avalon.unreal import pipeline as unreal_pipeline
import unreal


class CameraLoader(api.Loader):
    """Load Unreal Camera from FBX"""

    families = ["camera"]
    label = "Load Camera"
    representations = ["fbx"]
    icon = "cube"
    color = "orange"

    def load(self, context, name, namespace, data):
        """
        Load and containerise representation into Content Browser.

        This is a two step process. First, import FBX to a temporary path,
        then call `containerise()` on it - this moves all content to a new
        directory, creates an AssetContainer there and imprints it
        with metadata. This marks the path as a container.

        Args:
            context (dict): application context
            name (str): subset name
            namespace (str): in Unreal this is basically path to container.
                             This is not passed here, so namespace is set
                             by `containerise()` because only then we know
                             the real path.
            data (dict): Those would be data to be imprinted. This is not used
                         now, data are imprinted by `containerise()`.

        Returns:
            list(str): list of container content
        """

        # Create directory for asset and avalon container
        root = "/Game/Avalon/Assets"
        asset = context.get('asset').get('name')
        suffix = "_CON"
        if asset:
            asset_name = "{}_{}".format(asset, name)
        else:
            asset_name = "{}".format(name)

        tools = unreal.AssetToolsHelpers().get_asset_tools()

        unique_number = 1

        if unreal.EditorAssetLibrary.does_directory_exist(f"{root}/{asset}"):
            asset_content = unreal.EditorAssetLibrary.list_assets(
                f"{root}/{asset}", recursive=False, include_folder=True
            )

            # Get highest number to make a unique name
            folders = [a for a in asset_content
                       if a[-1] == "/" and f"{name}_" in a]
            f_numbers = []
            for f in folders:
                # Get number from folder name. Splits the string by "_" and
                # removes the last element (which is a "/").
                f_numbers.append(int(f.split("_")[-1][:-1]))
            f_numbers.sort()
            if not f_numbers:
                unique_number = 1
            else:
                unique_number = f_numbers[-1] + 1

        asset_dir, container_name = tools.create_unique_asset_name(
            f"{root}/{asset}/{name}_{unique_number:02d}", suffix="")

        container_name += suffix

        unreal.EditorAssetLibrary.make_directory(asset_dir)

        sequence = tools.create_asset(
            asset_name=asset_name,
            package_path=asset_dir,
            asset_class=unreal.LevelSequence,
            factory=unreal.LevelSequenceFactoryNew()
        )

        io_asset = io.Session["AVALON_ASSET"]
        asset_doc = io.find_one({
            "type": "asset",
            "name": io_asset
        })

        data = asset_doc.get("data")

        if data:
            sequence.set_display_rate(unreal.FrameRate(data.get("fps"), 1.0))
            sequence.set_playback_start(data.get("frameStart"))
            sequence.set_playback_end(data.get("frameEnd"))

        settings = unreal.MovieSceneUserImportFBXSettings()
        settings.set_editor_property('reduce_keys', False)

        unreal.SequencerTools.import_fbx(
            unreal.EditorLevelLibrary.get_editor_world(),
            sequence,
            sequence.get_bindings(),
            settings,
            self.fname
        )

        # Create Asset Container
        lib.create_avalon_container(container=container_name, path=asset_dir)

        data = {
            "schema": "openpype:container-2.0",
            "id": pipeline.AVALON_CONTAINER_ID,
            "asset": asset,
            "namespace": asset_dir,
            "container_name": container_name,
            "asset_name": asset_name,
            "loader": str(self.__class__.__name__),
            "representation": context["representation"]["_id"],
            "parent": context["representation"]["parent"],
            "family": context["representation"]["context"]["family"]
        }
        unreal_pipeline.imprint(
            "{}/{}".format(asset_dir, container_name), data)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            asset_dir, recursive=True, include_folder=True
        )

        for a in asset_content:
            unreal.EditorAssetLibrary.save_asset(a)

        return asset_content

    def update(self, container, representation):
        path = container["namespace"]

        ar = unreal.AssetRegistryHelpers.get_asset_registry()
        tools = unreal.AssetToolsHelpers().get_asset_tools()

        asset_content = unreal.EditorAssetLibrary.list_assets(
            path, recursive=False, include_folder=False
        )
        asset_name = ""
        for a in asset_content:
            asset = ar.get_asset_by_object_path(a)
            if a.endswith("_CON"):
                loaded_asset = unreal.EditorAssetLibrary.load_asset(a)
                unreal.EditorAssetLibrary.set_metadata_tag(
                    loaded_asset, "representation", str(representation["_id"])
                )
                unreal.EditorAssetLibrary.set_metadata_tag(
                    loaded_asset, "parent", str(representation["parent"])
                )
                asset_name = unreal.EditorAssetLibrary.get_metadata_tag(
                    loaded_asset, "asset_name"
                )
            elif asset.asset_class == "LevelSequence":
                unreal.EditorAssetLibrary.delete_asset(a)

        sequence = tools.create_asset(
            asset_name=asset_name,
            package_path=path,
            asset_class=unreal.LevelSequence,
            factory=unreal.LevelSequenceFactoryNew()
        )

        io_asset = io.Session["AVALON_ASSET"]
        asset_doc = io.find_one({
            "type": "asset",
            "name": io_asset
        })

        data = asset_doc.get("data")

        if data:
            sequence.set_display_rate(unreal.FrameRate(data.get("fps"), 1.0))
            sequence.set_playback_start(data.get("frameStart"))
            sequence.set_playback_end(data.get("frameEnd"))

        settings = unreal.MovieSceneUserImportFBXSettings()
        settings.set_editor_property('reduce_keys', False)

        unreal.SequencerTools.import_fbx(
            unreal.EditorLevelLibrary.get_editor_world(),
            sequence,
            sequence.get_bindings(),
            settings,
            str(representation["data"]["path"])
        )

    def remove(self, container):
        path = container["namespace"]
        parent_path = os.path.dirname(path)

        unreal.EditorAssetLibrary.delete_directory(path)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            parent_path, recursive=False, include_folder=True
        )

        if len(asset_content) == 0:
            unreal.EditorAssetLibrary.delete_directory(parent_path)
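The folder-numbering step in `load()` above derives a unique `_XX` suffix by scanning existing folder names. Outside Unreal it can be sketched as plain Python; the sample paths below are hypothetical, not real project content:

```python
def next_unique_number(asset_content, name):
    # Mirror of the loader's logic: keep folder paths (ending in "/")
    # that contain "<name>_", parse the numeric suffix, take max + 1.
    folders = [a for a in asset_content
               if a[-1] == "/" and f"{name}_" in a]
    f_numbers = sorted(int(f.split("_")[-1][:-1]) for f in folders)
    return f_numbers[-1] + 1 if f_numbers else 1


content = [
    "/Game/Avalon/Assets/hero/cameraMain_01/",
    "/Game/Avalon/Assets/hero/cameraMain_02/",
    "/Game/Avalon/Assets/hero/cameraMain_02/cameraMain.cameraMain",
]
print(next_unique_number(content, "cameraMain"))  # -> 3
```

Note that only entries ending in "/" count as folders, so the asset path on the last line is ignored.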
54  openpype/hosts/unreal/plugins/publish/extract_camera.py  Normal file

@@ -0,0 +1,54 @@
import os

import unreal
from unreal import EditorAssetLibrary as eal
from unreal import EditorLevelLibrary as ell

import openpype.api


class ExtractCamera(openpype.api.Extractor):
    """Extract a camera."""

    label = "Extract Camera"
    hosts = ["unreal"]
    families = ["camera"]
    optional = True

    def process(self, instance):
        # Define extract output file path
        stagingdir = self.staging_dir(instance)
        fbx_filename = "{}.fbx".format(instance.name)

        # Perform extraction
        self.log.info("Performing extraction..")

        # Check if the loaded level is the same of the instance
        current_level = ell.get_editor_world().get_path_name()
        assert current_level == instance.data.get("level"), \
            "Wrong level loaded"

        for member in instance[:]:
            data = eal.find_asset_data(member)
            if data.asset_class == "LevelSequence":
                ar = unreal.AssetRegistryHelpers.get_asset_registry()
                sequence = ar.get_asset_by_object_path(member).get_asset()
                unreal.SequencerTools.export_fbx(
                    ell.get_editor_world(),
                    sequence,
                    sequence.get_bindings(),
                    unreal.FbxExportOption(),
                    os.path.join(stagingdir, fbx_filename)
                )
                break

        if "representations" not in instance.data:
            instance.data["representations"] = []

        fbx_representation = {
            'name': 'fbx',
            'ext': 'fbx',
            'files': fbx_filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(fbx_representation)
@@ -15,6 +15,7 @@ import tempfile
import pyblish.api
from avalon import io
from openpype.lib import prepare_template_data
from openpype.lib.plugin_tools import parse_json, get_batch_asset_task_info


class CollectPublishedFiles(pyblish.api.ContextPlugin):

@@ -33,22 +34,6 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
    # from Settings
    task_type_to_family = {}

    def _load_json(self, path):
        path = path.strip('\"')
        assert os.path.isfile(path), (
            "Path to json file doesn't exist. \"{}\"".format(path)
        )
        data = None
        with open(path, "r") as json_file:
            try:
                data = json.load(json_file)
            except Exception as exc:
                self.log.error(
                    "Error loading json: "
                    "{} - Exception: {}".format(path, exc)
                )
        return data

    def _process_batch(self, dir_url):
        task_subfolders = [
            os.path.join(dir_url, o)
@@ -56,22 +41,15 @@
            if os.path.isdir(os.path.join(dir_url, o))]
        self.log.info("task_sub:: {}".format(task_subfolders))
        for task_dir in task_subfolders:
            task_data = self._load_json(os.path.join(task_dir,
                                                     "manifest.json"))
            task_data = parse_json(os.path.join(task_dir,
                                                "manifest.json"))
            self.log.info("task_data:: {}".format(task_data))
            ctx = task_data["context"]
            task_type = "default_task_type"
            task_name = None

            if ctx["type"] == "task":
                items = ctx["path"].split('/')
                asset = items[-2]
                os.environ["AVALON_TASK"] = ctx["name"]
                task_name = ctx["name"]
                task_type = ctx["attributes"]["type"]
            else:
                asset = ctx["name"]
                os.environ["AVALON_TASK"] = ""
            asset, task_name, task_type = get_batch_asset_task_info(ctx)

            if task_name:
                os.environ["AVALON_TASK"] = task_name

            is_sequence = len(task_data["files"]) > 1
@@ -261,7 +239,7 @@
        assert batch_dir, (
            "Missing `OPENPYPE_PUBLISH_DATA`")

        assert batch_dir, \
        assert os.path.exists(batch_dir), \
            "Folder {} doesn't exist".format(batch_dir)

        project_name = os.environ.get("AVALON_PROJECT")
@@ -11,6 +11,7 @@ from avalon.api import AvalonMongoDB

from openpype.lib import OpenPypeMongoConnection
from openpype_modules.avalon_apps.rest_api import _RestApiEndpoint
from openpype.lib.plugin_tools import parse_json

from openpype.lib import PypeLogger


@@ -175,6 +176,9 @@ class TaskNode(Node):
class WebpublisherBatchPublishEndpoint(_RestApiEndpoint):
    """Triggers headless publishing of batch."""
    async def post(self, request) -> Response:
        # for postprocessing in host, currently only PS
        host_map = {"photoshop": [".psd", ".psb"]}

        output = {}
        log.info("WebpublisherBatchPublishEndpoint called")
        content = await request.json()
@@ -182,10 +186,44 @@
        batch_path = os.path.join(self.resource.upload_dir,
                                  content["batch"])

        add_args = {
            "host": "webpublisher",
            "project": content["project_name"],
            "user": content["user"]
        }

        command = "remotepublish"

        if content.get("studio_processing"):
            log.info("Post processing called")

            batch_data = parse_json(os.path.join(batch_path, "manifest.json"))
            if not batch_data:
                raise ValueError(
                    "Cannot parse batch meta in {} folder".format(batch_path))
            task_dir_name = batch_data["tasks"][0]
            task_data = parse_json(os.path.join(batch_path, task_dir_name,
                                                "manifest.json"))
            if not task_data:
                raise ValueError(
                    "Cannot parse batch meta in {} folder".format(task_data))

            command = "remotepublishfromapp"
            for host, extensions in host_map.items():
                for ext in extensions:
                    for file_name in task_data["files"]:
                        if ext in file_name:
                            add_args["host"] = host
                            break

            if not add_args.get("host"):
                raise ValueError(
                    "Couldn't discern host from {}".format(task_data["files"]))

        openpype_app = self.resource.executable
        args = [
            openpype_app,
            'remotepublish',
            command,
            batch_path
        ]
@@ -193,12 +231,6 @@
            msg = "Non existent OpenPype executable {}".format(openpype_app)
            raise RuntimeError(msg)

        add_args = {
            "host": "webpublisher",
            "project": content["project_name"],
            "user": content["user"]
        }

        for key, value in add_args.items():
            args.append("--{}".format(key))
            args.append(value)
@@ -461,13 +461,8 @@ class ApplicationExecutable:
        # On MacOS check if exists path to executable when ends with `.app`
        # - it is common that path will lead to "/Applications/Blender" but
        #   real path is "/Applications/Blender.app"
        if (
            platform.system().lower() == "darwin"
            and not os.path.exists(executable)
        ):
            _executable = executable + ".app"
            if os.path.exists(_executable):
                executable = _executable
        if platform.system().lower() == "darwin":
            executable = self.macos_executable_prep(executable)

        self.executable_path = executable
@@ -477,6 +472,45 @@
    def __repr__(self):
        return "<{}> {}".format(self.__class__.__name__, self.executable_path)

    @staticmethod
    def macos_executable_prep(executable):
        """Try to find full path to executable file.

        Real executable is stored in '*.app/Contents/MacOS/<executable>'.

        Having path to '*.app' gives ability to read its plist info and
        use the "CFBundleExecutable" key from the plist to know what the
        executable is.

        Plist is stored in '*.app/Contents/Info.plist'.

        This is because some '*.app' directories don't have same permissions
        as real executable.
        """
        # Try to find if there is `.app` file
        if not os.path.exists(executable):
            _executable = executable + ".app"
            if os.path.exists(_executable):
                executable = _executable

        # Try to find real executable if executable has `Contents` subfolder
        contents_dir = os.path.join(executable, "Contents")
        if os.path.exists(contents_dir):
            executable_filename = None
            # Load plist file and check for bundle executable
            plist_filepath = os.path.join(contents_dir, "Info.plist")
            if os.path.exists(plist_filepath):
                import plistlib

                parsed_plist = plistlib.readPlist(plist_filepath)
                executable_filename = parsed_plist.get("CFBundleExecutable")

            if executable_filename:
                executable = os.path.join(
                    contents_dir, "MacOS", executable_filename
                )

        return executable

    def as_args(self):
        return [self.executable_path]
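The bundle resolution in `macos_executable_prep` above can be exercised on any platform against a fake `.app` directory. A minimal sketch, using `plistlib.load` (the `readPlist` call in the diff is deprecated in newer Python) and a temporary directory instead of a real application:

```python
import os
import plistlib
import tempfile


def resolve_macos_executable(executable):
    # Same steps as above: append ".app" if the bare path is missing,
    # then read Contents/Info.plist and follow CFBundleExecutable.
    if not os.path.exists(executable):
        candidate = executable + ".app"
        if os.path.exists(candidate):
            executable = candidate
    contents_dir = os.path.join(executable, "Contents")
    plist_path = os.path.join(contents_dir, "Info.plist")
    if os.path.exists(plist_path):
        with open(plist_path, "rb") as f:
            info = plistlib.load(f)
        bundle_exe = info.get("CFBundleExecutable")
        if bundle_exe:
            executable = os.path.join(contents_dir, "MacOS", bundle_exe)
    return executable


# Build a fake bundle to demonstrate; no real .app is assumed here.
root = tempfile.mkdtemp()
contents = os.path.join(root, "Blender.app", "Contents")
os.makedirs(contents)
with open(os.path.join(contents, "Info.plist"), "wb") as f:
    plistlib.dump({"CFBundleExecutable": "Blender"}, f)

resolved = resolve_macos_executable(os.path.join(root, "Blender"))
print(resolved)  # ends with Contents/MacOS/Blender
```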
@@ -245,6 +245,27 @@ def process_sequence(
            report_items["Source file was not found"].append(msg)
            return report_items, 0

    delivery_templates = anatomy.templates.get("delivery") or {}
    delivery_template = delivery_templates.get(template_name)
    if delivery_template is None:
        msg = (
            "Delivery template \"{}\" in anatomy of project \"{}\""
            " was not found"
        ).format(template_name, anatomy.project_name)
        report_items[""].append(msg)
        return report_items, 0

    # Check if 'frame' key is available in template which is required
    # for sequence delivery
    if "{frame" not in delivery_template:
        msg = (
            "Delivery template \"{}\" in anatomy of project \"{}\""
            " does not contain '{{frame}}' key to fill. Delivery of sequence"
            " can't be processed."
        ).format(template_name, anatomy.project_name)
        report_items[""].append(msg)
        return report_items, 0

    dir_path, file_name = os.path.split(str(src_path))

    context = repre["context"]
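The two guards added in `process_sequence` above reduce to a small template check: the template must exist and must carry a `{frame` placeholder for per-frame file names. A standalone sketch with hypothetical template strings (not real project anatomy):

```python
def check_sequence_template(delivery_templates, template_name):
    # Guard 1: template must be configured for the project.
    template = delivery_templates.get(template_name)
    if template is None:
        return "not found"
    # Guard 2: sequences need a per-frame placeholder in the file name.
    if "{frame" not in template:
        return "missing {frame} key"
    return "ok"


templates = {
    "sequence": "{root}/{project}/{asset}_{frame:0>4}.{ext}",
    "single": "{root}/{project}/{asset}.{ext}",
}
print(check_sequence_template(templates, "sequence"))  # -> ok
print(check_sequence_template(templates, "single"))    # -> missing {frame} key
print(check_sequence_template(templates, "nuke"))      # -> not found
```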
@@ -28,17 +28,15 @@ class TaskNotSetError(KeyError):
        super(TaskNotSetError, self).__init__(msg)


def _get_subset_name(
def get_subset_name_with_asset_doc(
    family,
    variant,
    task_name,
    asset_id,
    asset_doc,
    project_name,
    host_name,
    default_template,
    dynamic_data,
    dbcon
    project_name=None,
    host_name=None,
    default_template=None,
    dynamic_data=None
):
    """Calculate subset name based on passed context and OpenPype settings.
@@ -54,8 +52,6 @@ def _get_subset_name(
        family (str): Instance family.
        variant (str): In most of cases it is user input during creation.
        task_name (str): Task name on which context is instance created.
        asset_id (ObjectId): Id of object. Is optional if `asset_doc` is
            passed.
        asset_doc (dict): Queried asset document with its tasks in data.
            Used to get task type.
        project_name (str): Name of project on which is instance created.
@@ -84,25 +80,6 @@

    project_name = avalon.api.Session["AVALON_PROJECT"]

    # Query asset document if was not passed
    if asset_doc is None:
        if dbcon is None:
            from avalon.api import AvalonMongoDB

            dbcon = AvalonMongoDB()
            dbcon.Session["AVALON_PROJECT"] = project_name

        dbcon.install()

        asset_doc = dbcon.find_one(
            {
                "type": "asset",
                "_id": asset_id
            },
            {
                "data.tasks": True
            }
        ) or {}
    asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
    task_info = asset_tasks.get(task_name) or {}
    task_type = task_info.get("type")
@@ -144,34 +121,6 @@
    return template.format(**prepare_template_data(fill_pairs))


def get_subset_name_with_asset_doc(
    family,
    variant,
    task_name,
    asset_doc,
    project_name=None,
    host_name=None,
    default_template=None,
    dynamic_data=None,
    dbcon=None
):
    """Calculate subset name using OpenPype settings.

    This variant of function expects already queried asset document.
    """
    return _get_subset_name(
        family, variant,
        task_name,
        None,
        asset_doc,
        project_name,
        host_name,
        default_template,
        dynamic_data,
        dbcon
    )


def get_subset_name(
    family,
    variant,
@@ -190,17 +139,28 @@ def get_subset_name(
    This is a legacy function and should be replaced with
    `get_subset_name_with_asset_doc` where asset document is expected.
    """
    return _get_subset_name(
    if dbcon is None:
        from avalon.api import AvalonMongoDB

        dbcon = AvalonMongoDB()
        dbcon.Session["AVALON_PROJECT"] = project_name

    dbcon.install()

    asset_doc = dbcon.find_one(
        {"_id": asset_id},
        {"data.tasks": True}
    ) or {}

    return get_subset_name_with_asset_doc(
        family,
        variant,
        task_name,
        asset_id,
        None,
        asset_doc,
        project_name,
        host_name,
        default_template,
        dynamic_data,
        dbcon
        dynamic_data
    )
@@ -578,3 +538,48 @@ def should_decompress(file_url):
            "compression: \"dwab\"" in output

    return False


def parse_json(path):
    """Parses json file at 'path' location

    Returns:
        (dict) or None if unparsable
    Raises:
        AssertionError if 'path' doesn't exist
    """
    path = path.strip('\"')
    assert os.path.isfile(path), (
        "Path to json file doesn't exist. \"{}\"".format(path)
    )
    data = None
    with open(path, "r") as json_file:
        try:
            data = json.load(json_file)
        except Exception as exc:
            log.error(
                "Error loading json: "
                "{} - Exception: {}".format(path, exc)
            )
    return data


def get_batch_asset_task_info(ctx):
    """Parses context data from webpublisher's batch metadata

    Returns:
        (tuple): asset, task_name (Optional), task_type
    """
    task_type = "default_task_type"
    task_name = None
    asset = None

    if ctx["type"] == "task":
        items = ctx["path"].split('/')
        asset = items[-2]
        task_name = ctx["name"]
        task_type = ctx["attributes"]["type"]
    else:
        asset = ctx["name"]

    return asset, task_name, task_type
159  openpype/lib/remote_publish.py  Normal file

@@ -0,0 +1,159 @@
import os
from datetime import datetime
import sys
from bson.objectid import ObjectId

import pyblish.util
import pyblish.api

from openpype import uninstall
from openpype.lib.mongo import OpenPypeMongoConnection


def get_webpublish_conn():
    """Get connection to OP 'webpublishes' collection."""
    mongo_client = OpenPypeMongoConnection.get_mongo_client()
    database_name = os.environ["OPENPYPE_DATABASE_NAME"]
    return mongo_client[database_name]["webpublishes"]


def start_webpublish_log(dbcon, batch_id, user):
    """Start new log record for 'batch_id'

    Args:
        dbcon (OpenPypeMongoConnection)
        batch_id (str)
        user (str)
    Returns
        (ObjectId) from DB
    """
    return dbcon.insert_one({
        "batch_id": batch_id,
        "start_date": datetime.now(),
        "user": user,
        "status": "in_progress"
    }).inserted_id


def publish_and_log(dbcon, _id, log, close_plugin_name=None):
    """Loops through all plugins, logs ok and fails into OP DB.

    Args:
        dbcon (OpenPypeMongoConnection)
        _id (str)
        log (OpenPypeLogger)
        close_plugin_name (str): name of plugin with responsibility to
            close host app
    """
    # Error exit as soon as any error occurs.
    error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}"

    close_plugin = _get_close_plugin(close_plugin_name, log)

    if isinstance(_id, str):
        _id = ObjectId(_id)

    log_lines = []
    for result in pyblish.util.publish_iter():
        for record in result["records"]:
            log_lines.append("{}: {}".format(
                result["plugin"].label, record.msg))

        if result["error"]:
            log.error(error_format.format(**result))
            uninstall()
            log_lines.append(error_format.format(**result))
            dbcon.update_one(
                {"_id": _id},
                {"$set":
                    {
                        "finish_date": datetime.now(),
                        "status": "error",
                        "log": os.linesep.join(log_lines)
                    }}
            )
            if close_plugin:  # close host app explicitly after error
                context = pyblish.api.Context()
                close_plugin().process(context)
            sys.exit(1)
        else:
            dbcon.update_one(
                {"_id": _id},
                {"$set":
                    {
                        "progress": max(result["progress"], 0.95),
                        "log": os.linesep.join(log_lines)
                    }}
            )

    # final update
    dbcon.update_one(
        {"_id": _id},
        {"$set":
            {
                "finish_date": datetime.now(),
                "status": "finished_ok",
                "progress": 1,
                "log": os.linesep.join(log_lines)
            }}
    )


def fail_batch(_id, batches_in_progress, dbcon):
    """Set current batch as failed as there are some stuck batches."""
    running_batches = [str(batch["_id"])
                       for batch in batches_in_progress
                       if batch["_id"] != _id]
    msg = "There are still running batches {}\n".\
        format("\n".join(running_batches))
    msg += "Ask admin to check them and reprocess current batch"
    dbcon.update_one(
        {"_id": _id},
        {"$set":
            {
                "finish_date": datetime.now(),
                "status": "error",
                "log": msg
            }}
    )
    raise ValueError(msg)


def find_variant_key(application_manager, host):
    """Searches for latest installed variant for 'host'

    Args:
        application_manager (ApplicationManager)
        host (str)
    Returns
        (string) (optional)
    Raises:
        (ValueError) if no variant found
    """
    app_group = application_manager.app_groups.get(host)
    if not app_group or not app_group.enabled:
        raise ValueError("No application {} configured".format(host))

    found_variant_key = None
    # finds most up-to-date variant if any installed
    for variant_key, variant in app_group.variants.items():
        for executable in variant.executables:
            if executable.exists():
                found_variant_key = variant_key

    if not found_variant_key:
        raise ValueError("No executable for {} found".format(host))

    return found_variant_key


def _get_close_plugin(close_plugin_name, log):
    if close_plugin_name:
        plugins = pyblish.api.discover()
        for plugin in plugins:
            if plugin.__name__ == close_plugin_name:
                return plugin

    log.warning("Close plugin not found, app might not close.")
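The selection rule in `find_variant_key` keeps the last variant (in iteration order) that has an existing executable. That behavior can be sketched without OpenPype's `ApplicationManager`; the `Fake*` classes below are hypothetical stand-ins modeling only the `exists()` check the function uses:

```python
from collections import OrderedDict


class FakeExecutable:
    # Hypothetical stand-in for openpype's executable objects.
    def __init__(self, exists):
        self._exists = exists

    def exists(self):
        return self._exists


class FakeVariant:
    def __init__(self, executables):
        self.executables = executables


def pick_latest_variant(variants):
    # Same rule as find_variant_key: remember the last variant whose
    # executable exists on this machine; fail if none do.
    found = None
    for key, variant in variants.items():
        if any(exe.exists() for exe in variant.executables):
            found = key
    if not found:
        raise ValueError("No executable found")
    return found


variants = OrderedDict([
    ("2020", FakeVariant([FakeExecutable(True)])),
    ("2021", FakeVariant([FakeExecutable(True)])),
    ("2022", FakeVariant([FakeExecutable(False)])),  # not installed
])
print(pick_latest_variant(variants))  # -> 2021
```

Note the assumption built into the original: variants iterate oldest to newest, so the last hit is treated as the most up-to-date installed version.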
@@ -26,14 +26,21 @@ class CollectUsername(pyblish.api.ContextPlugin):
    """
    order = pyblish.api.CollectorOrder - 0.488
    label = "Collect ftrack username"
    hosts = ["webpublisher"]
    hosts = ["webpublisher", "photoshop"]

    _context = None

    def process(self, context):
        self.log.info("CollectUsername")
        # photoshop could be triggered remotely in webpublisher fashion
        if os.environ["AVALON_APP"] == "photoshop":
            if not os.environ.get("IS_HEADLESS"):
                self.log.debug("Regular process, skipping")
                return

        os.environ["FTRACK_API_USER"] = os.environ["FTRACK_BOT_API_USER"]
        os.environ["FTRACK_API_KEY"] = os.environ["FTRACK_BOT_API_KEY"]
        self.log.info("CollectUsername")

        for instance in context:
            email = instance.data["user_email"]
            self.log.info("email:: {}".format(email))
@@ -99,7 +99,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        "camerarig",
        "redshiftproxy",
        "effect",
        "xgen"
        "xgen",
        "hda"
    ]
    exclude_families = ["clip"]
    db_representation_context_keys = [
@@ -9,9 +9,9 @@ class ShowInventory(pyblish.api.Action):
    on = "failed"

    def process(self, context, plugin):
        from avalon.tools import sceneinventory
        from openpype.tools.utils import host_tools

        sceneinventory.show()
        host_tools.show_scene_inventory()


class ValidateContainers(pyblish.api.ContextPlugin):
@@ -3,10 +3,18 @@
import os
import sys
import json
from datetime import datetime
import time

from openpype.lib import PypeLogger
from openpype.api import get_app_environments_for_context
from openpype.lib.plugin_tools import parse_json, get_batch_asset_task_info
from openpype.lib.remote_publish import (
    get_webpublish_conn,
    start_webpublish_log,
    publish_and_log,
    fail_batch,
    find_variant_key
)


class PypeCommands:
@@ -110,10 +118,116 @@ class PypeCommands:
        log.info("Publish finished.")
        uninstall()

    @staticmethod
    def remotepublishfromapp(project, batch_dir, host, user, targets=None):
        """Opens installed variant of 'host' and runs remote publish there.

        Currently implemented and tested for Photoshop where customer
        wants to process uploaded .psd file and publish collected layers
        from there.

        Checks if no other batches are running (status == 'in_progress').
        If so, it sleeps for SLEEP seconds (this is a separate process)
        and waits for WAIT_FOR seconds altogether.

        Requires installed host application on the machine.

        Runs publish process as user would, in automatic fashion.
        """
        SLEEP = 5  # seconds between checks for concurrent runs
        WAIT_FOR = 300  # seconds to wait for concurrent runs

        from openpype import install, uninstall
        from openpype.api import Logger

        log = Logger.get_logger()

        log.info("remotepublishphotoshop command")

        install()

        from openpype.lib import ApplicationManager
        application_manager = ApplicationManager()

        found_variant_key = find_variant_key(application_manager, host)

        app_name = "{}/{}".format(host, found_variant_key)

        batch_data = None
        if batch_dir and os.path.exists(batch_dir):
            batch_data = parse_json(os.path.join(batch_dir, "manifest.json"))

        if not batch_data:
            raise ValueError(
                "Cannot parse batch meta in {} folder".format(batch_dir))

        asset, task_name, _task_type = get_batch_asset_task_info(
            batch_data["context"])

        # processing from app expects JUST ONE task in batch and 1 workfile
        task_dir_name = batch_data["tasks"][0]
        task_data = parse_json(os.path.join(batch_dir, task_dir_name,
                                            "manifest.json"))

        workfile_path = os.path.join(batch_dir,
                                     task_dir_name,
                                     task_data["files"][0])

        print("workfile_path {}".format(workfile_path))

        _, batch_id = os.path.split(batch_dir)
        dbcon = get_webpublish_conn()
        # safer to start logging here, launch might be broken altogether
        _id = start_webpublish_log(dbcon, batch_id, user)

        in_progress = True
        slept_times = 0
        while in_progress:
            batches_in_progress = list(dbcon.find({
                "status": "in_progress"
            }))
            if len(batches_in_progress) > 1:
                if slept_times * SLEEP >= WAIT_FOR:
                    fail_batch(_id, batches_in_progress, dbcon)

                print("Another batch running, sleeping for a bit")
                time.sleep(SLEEP)
                slept_times += 1
            else:
                in_progress = False

        # must have for proper launch of app
        env = get_app_environments_for_context(
            project,
            asset,
            task_name,
            app_name
        )
        os.environ.update(env)

        os.environ["OPENPYPE_PUBLISH_DATA"] = batch_dir
        os.environ["IS_HEADLESS"] = "true"
        # must pass identifier to update log lines for a batch
        os.environ["BATCH_LOG_ID"] = str(_id)

        data = {
            "last_workfile_path": workfile_path,
            "start_last_workfile": True
        }

        launched_app = application_manager.launch(app_name, **data)

        while launched_app.poll() is None:
            time.sleep(0.5)

        uninstall()

    @staticmethod
    def remotepublish(project, batch_path, host, user, targets=None):
        """Start headless publishing.

        Used to publish rendered assets, workfiles etc.

        Publish uses json from passed paths argument.

        Args:
@@ -134,7 +248,6 @@ class PypeCommands:

        from openpype import install, uninstall
        from openpype.api import Logger
        from openpype.lib import OpenPypeMongoConnection

        # Register target and host
        import pyblish.api
@ -166,62 +279,11 @@ class PypeCommands:
|
|||
|
||||
log.info("Running publish ...")
|
||||
|
||||
# Error exit as soon as any error occurs.
|
||||
error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}"
|
||||
|
||||
mongo_client = OpenPypeMongoConnection.get_mongo_client()
|
||||
database_name = os.environ["OPENPYPE_DATABASE_NAME"]
|
||||
dbcon = mongo_client[database_name]["webpublishes"]
|
||||
|
||||
_, batch_id = os.path.split(batch_path)
|
||||
_id = dbcon.insert_one({
|
||||
"batch_id": batch_id,
|
||||
"start_date": datetime.now(),
|
||||
"user": user,
|
||||
"status": "in_progress"
|
||||
}).inserted_id
|
||||
dbcon = get_webpublish_conn()
|
||||
_id = start_webpublish_log(dbcon, batch_id, user)
|
||||
|
||||
log_lines = []
|
||||
for result in pyblish.util.publish_iter():
|
||||
for record in result["records"]:
|
||||
log_lines.append("{}: {}".format(
|
||||
result["plugin"].label, record.msg))
|
||||
|
||||
if result["error"]:
|
||||
log.error(error_format.format(**result))
|
||||
uninstall()
|
||||
log_lines.append(error_format.format(**result))
|
||||
dbcon.update_one(
|
||||
{"_id": _id},
|
||||
{"$set":
|
||||
{
|
||||
"finish_date": datetime.now(),
|
||||
"status": "error",
|
||||
"log": os.linesep.join(log_lines)
|
||||
|
||||
}}
|
||||
)
|
||||
sys.exit(1)
|
||||
else:
|
||||
dbcon.update_one(
|
||||
{"_id": _id},
|
||||
{"$set":
|
||||
{
|
||||
"progress": max(result["progress"], 0.95),
|
||||
"log": os.linesep.join(log_lines)
|
||||
}}
|
||||
)
|
||||
|
||||
dbcon.update_one(
|
||||
{"_id": _id},
|
||||
{"$set":
|
||||
{
|
||||
"finish_date": datetime.now(),
|
||||
"status": "finished_ok",
|
||||
"progress": 1,
|
||||
"log": os.linesep.join(log_lines)
|
||||
}}
|
||||
)
|
||||
publish_and_log(dbcon, _id, log)
|
||||
|
||||
log.info("Publish finished.")
|
||||
uninstall()
|
||||
|
|
|
|||
BIN
openpype/resources/app_icons/flame.png
Normal file
BIN
openpype/resources/app_icons/flame.png
Normal file
Binary file not shown.
|
After Width: | Height: | Size: 73 KiB |
|
|
@@ -162,9 +162,7 @@
            ]
        }
    ],
-    "customNodes": [
-
-    ]
+    "customNodes": []
    },
    "regexInputs": {
        "inputs": [
@@ -12,6 +12,16 @@
        "optional": true,
        "active": true
    },
+    "CollectRemoteInstances": {
+        "color_code_mapping": [
+            {
+                "color_code": [],
+                "layer_name_regex": [],
+                "family": "",
+                "subset_template_name": ""
+            }
+        ]
+    },
    "ExtractImage": {
        "formats": [
            "png",
@@ -97,6 +97,42 @@
            }
        }
    },
+    "flame": {
+        "enabled": true,
+        "label": "Flame",
+        "icon": "{}/app_icons/flame.png",
+        "host_name": "flame",
+        "environment": {
+            "FLAME_SCRIPT_DIRS": {
+                "windows": "",
+                "darwin": "",
+                "linux": ""
+            }
+        },
+        "variants": {
+            "2021": {
+                "use_python_2": true,
+                "executables": {
+                    "windows": [],
+                    "darwin": [
+                        "/opt/Autodesk/flame_2021/bin/flame.app/Contents/MacOS/startApp"
+                    ],
+                    "linux": [
+                        "/opt/Autodesk/flame_2021/bin/flame"
+                    ]
+                },
+                "arguments": {
+                    "windows": [],
+                    "darwin": [],
+                    "linux": []
+                },
+                "environment": {}
+            },
+            "__dynamic_keys_labels__": {
+                "2021": "2021 (Testing Only)"
+            }
+        }
+    },
    "nuke": {
        "enabled": true,
        "label": "Nuke",
@@ -620,12 +656,12 @@
    "FUSION_UTILITY_SCRIPTS_SOURCE_DIR": [],
    "FUSION_UTILITY_SCRIPTS_DIR": {
        "windows": "{PROGRAMDATA}/Blackmagic Design/Fusion/Scripts/Comp",
-        "darvin": "/Library/Application Support/Blackmagic Design/Fusion/Scripts/Comp",
+        "darwin": "/Library/Application Support/Blackmagic Design/Fusion/Scripts/Comp",
        "linux": "/opt/Fusion/Scripts/Comp"
    },
    "PYTHON36": {
        "windows": "{LOCALAPPDATA}/Programs/Python/Python36",
-        "darvin": "~/Library/Python/3.6/bin",
+        "darwin": "~/Library/Python/3.6/bin",
        "linux": "/opt/Python/3.6/bin"
    },
    "PYTHONPATH": [
@@ -686,22 +722,22 @@
    "RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR": [],
    "RESOLVE_SCRIPT_API": {
        "windows": "{PROGRAMDATA}/Blackmagic Design/DaVinci Resolve/Support/Developer/Scripting",
-        "darvin": "/Library/Application Support/Blackmagic Design/DaVinci Resolve/Developer/Scripting",
+        "darwin": "/Library/Application Support/Blackmagic Design/DaVinci Resolve/Developer/Scripting",
        "linux": "/opt/resolve/Developer/Scripting"
    },
    "RESOLVE_SCRIPT_LIB": {
        "windows": "C:/Program Files/Blackmagic Design/DaVinci Resolve/fusionscript.dll",
-        "darvin": "/Applications/DaVinci Resolve/DaVinci Resolve.app/Contents/Libraries/Fusion/fusionscript.so",
+        "darwin": "/Applications/DaVinci Resolve/DaVinci Resolve.app/Contents/Libraries/Fusion/fusionscript.so",
        "linux": "/opt/resolve/libs/Fusion/fusionscript.so"
    },
    "RESOLVE_UTILITY_SCRIPTS_DIR": {
        "windows": "{PROGRAMDATA}/Blackmagic Design/DaVinci Resolve/Fusion/Scripts/Comp",
-        "darvin": "/Library/Application Support/Blackmagic Design/DaVinci Resolve/Fusion/Scripts/Comp",
+        "darwin": "/Library/Application Support/Blackmagic Design/DaVinci Resolve/Fusion/Scripts/Comp",
        "linux": "/opt/resolve/Fusion/Scripts/Comp"
    },
    "PYTHON36_RESOLVE": {
        "windows": "{LOCALAPPDATA}/Programs/Python/Python36",
-        "darvin": "~/Library/Python/3.6/bin",
+        "darwin": "~/Library/Python/3.6/bin",
        "linux": "/opt/Python/3.6/bin"
    },
    "PYTHONPATH": [
@@ -973,8 +1009,6 @@
    },
    "variants": {
        "2020": {
-            "enabled": true,
-            "variant_label": "2020",
            "executables": {
                "windows": [
                    "C:\\Program Files\\Adobe\\Adobe Photoshop 2020\\Photoshop.exe"

@@ -990,8 +1024,6 @@
            "environment": {}
        },
        "2021": {
-            "enabled": true,
-            "variant_label": "2021",
            "executables": {
                "windows": [
                    "C:\\Program Files\\Adobe\\Adobe Photoshop 2021\\Photoshop.exe"
@@ -1005,6 +1037,21 @@
                "linux": []
            },
            "environment": {}
        },
+        "2022": {
+            "executables": {
+                "windows": [
+                    "C:\\Program Files\\Adobe\\Adobe Photoshop 2022\\Photoshop.exe"
+                ],
+                "darwin": [],
+                "linux": []
+            },
+            "arguments": {
+                "windows": [],
+                "darwin": [],
+                "linux": []
+            },
+            "environment": {}
+        }
    }
},
@@ -110,7 +110,10 @@ from .enum_entity import (
)

from .list_entity import ListEntity
-from .dict_immutable_keys_entity import DictImmutableKeysEntity
+from .dict_immutable_keys_entity import (
+    DictImmutableKeysEntity,
+    RootsDictEntity
+)
from .dict_mutable_keys_entity import DictMutableKeysEntity
from .dict_conditional import (
    DictConditionalEntity,

@@ -169,6 +172,7 @@ __all__ = (
    "ListEntity",

    "DictImmutableKeysEntity",
+    "RootsDictEntity",

    "DictMutableKeysEntity",
@@ -510,7 +510,7 @@ class BaseItemEntity(BaseEntity):
        pass

    @abstractmethod
-    def _item_initalization(self):
+    def _item_initialization(self):
        """Entity specific initialization process."""
        pass

@@ -920,7 +920,7 @@ class ItemEntity(BaseItemEntity):
            _default_label_wrap["collapsed"]
        )

-        self._item_initalization()
+        self._item_initialization()

    def save(self):
        """Call save on root item."""
@@ -9,7 +9,7 @@ from .exceptions import (
class ColorEntity(InputEntity):
    schema_types = ["color"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.valid_value_types = (list, )
        self.value_on_not_set = [0, 0, 0, 255]
        self.use_alpha = self.schema_data.get("use_alpha", True)
@@ -107,7 +107,7 @@ class DictConditionalEntity(ItemEntity):
        for _key, _value in new_value.items():
            self.non_gui_children[self.current_enum][_key].set(_value)

-    def _item_initalization(self):
+    def _item_initialization(self):
        self._default_metadata = NOT_SET
        self._studio_override_metadata = NOT_SET
        self._project_override_metadata = NOT_SET
@@ -4,7 +4,8 @@ import collections
from .lib import (
    WRAPPER_TYPES,
    OverrideState,
-    NOT_SET
+    NOT_SET,
+    STRING_TYPE
)
from openpype.settings.constants import (
    METADATA_KEYS,

@@ -18,6 +19,7 @@ from . import (
    GUIEntity
)
from .exceptions import (
    DefaultsNotDefined,
    SchemaDuplicatedKeys,
    EntitySchemaError,
    InvalidKeySymbols

@@ -172,7 +174,7 @@ class DictImmutableKeysEntity(ItemEntity):
        for child_obj in added_children:
            self.gui_layout.append(child_obj)

-    def _item_initalization(self):
+    def _item_initialization(self):
        self._default_metadata = NOT_SET
        self._studio_override_metadata = NOT_SET
        self._project_override_metadata = NOT_SET

@@ -547,3 +549,178 @@ class DictImmutableKeysEntity(ItemEntity):
        super(DictImmutableKeysEntity, self).reset_callbacks()
        for child_entity in self.children:
            child_entity.reset_callbacks()
+
+
+class RootsDictEntity(DictImmutableKeysEntity):
+    """Entity that adds ability to fill value for roots of current project.
+
+    Value schema is defined by `object_type`.
+
+    It is not possible to change override state (Studio values will always
+    contain studio overrides and same for project). That is because roots can
+    be totally different for each project.
+    """
+    _origin_schema_data = None
+    schema_types = ["dict-roots"]
+
+    def _item_initialization(self):
+        origin_schema_data = self.schema_data
+
+        self.separate_items = origin_schema_data.get("separate_items", True)
+        object_type = origin_schema_data.get("object_type")
+        if isinstance(object_type, STRING_TYPE):
+            object_type = {"type": object_type}
+        self.object_type = object_type
+
+        if not self.is_group:
+            self.is_group = True
+
+        schema_data = copy.deepcopy(self.schema_data)
+        schema_data["children"] = []
+
+        self.schema_data = schema_data
+        self._origin_schema_data = origin_schema_data
+
+        self._default_value = NOT_SET
+        self._studio_value = NOT_SET
+        self._project_value = NOT_SET
+
+        super(RootsDictEntity, self)._item_initialization()
+
+    def schema_validations(self):
+        if self.object_type is None:
+            reason = (
+                "Missing children definitions for root values"
+                " ('object_type' not filled)."
+            )
+            raise EntitySchemaError(self, reason)
+
+        if not isinstance(self.object_type, dict):
+            reason = (
+                "Children definitions for root values must be dictionary"
+                " ('object_type' is \"{}\")."
+            ).format(str(type(self.object_type)))
+            raise EntitySchemaError(self, reason)
+
+        super(RootsDictEntity, self).schema_validations()
+
+    def set_override_state(self, state, ignore_missing_defaults):
+        self.children = []
+        self.non_gui_children = {}
+        self.gui_layout = []
+
+        roots_entity = self.get_entity_from_path(
+            "project_anatomy/roots"
+        )
+        children = []
+        first = True
+        for key in roots_entity.keys():
+            if first:
+                first = False
+            elif self.separate_items:
+                children.append({"type": "separator"})
+            child = copy.deepcopy(self.object_type)
+            child["key"] = key
+            child["label"] = key
+            children.append(child)
+
+        schema_data = copy.deepcopy(self.schema_data)
+        schema_data["children"] = children
+
+        self._add_children(schema_data)
+
+        self._set_children_values(state)
+
+        super(RootsDictEntity, self).set_override_state(
+            state, True
+        )
+
+        if state == OverrideState.STUDIO:
+            self.add_to_studio_default()
+
+        elif state == OverrideState.PROJECT:
+            self.add_to_project_override()
+
+    def on_child_change(self, child_obj):
+        if self._override_state is OverrideState.STUDIO:
+            if not child_obj.has_studio_override:
+                self.add_to_studio_default()
+
+        elif self._override_state is OverrideState.PROJECT:
+            if not child_obj.has_project_override:
+                self.add_to_project_override()
+
+        return super(RootsDictEntity, self).on_child_change(child_obj)
+
+    def _set_children_values(self, state):
+        if state >= OverrideState.DEFAULTS:
+            default_value = self._default_value
+            if default_value is NOT_SET:
+                if state > OverrideState.DEFAULTS:
+                    raise DefaultsNotDefined(self)
+                else:
+                    default_value = {}
+
+            for key, child_obj in self.non_gui_children.items():
+                child_value = default_value.get(key, NOT_SET)
+                child_obj.update_default_value(child_value)
+
+        if state >= OverrideState.STUDIO:
+            value = self._studio_value
+            if value is NOT_SET:
+                value = {}
+
+            for key, child_obj in self.non_gui_children.items():
+                child_value = value.get(key, NOT_SET)
+                child_obj.update_studio_value(child_value)
+
+        if state >= OverrideState.PROJECT:
+            value = self._project_value
+            if value is NOT_SET:
+                value = {}
+
+            for key, child_obj in self.non_gui_children.items():
+                child_value = value.get(key, NOT_SET)
+                child_obj.update_project_value(child_value)
+
+    def _update_current_metadata(self):
+        """Override this method as this entity should not have metadata."""
+        self._metadata_are_modified = False
+        self._current_metadata = {}
+
+    def update_default_value(self, value):
+        """Update default values.
+
+        Not an api method, should be called by parent.
+        """
+        value = self._check_update_value(value, "default")
+        value, _ = self._prepare_value(value)
+
+        self._default_value = value
+        self._default_metadata = {}
+        self.has_default_value = value is not NOT_SET
+
+    def update_studio_value(self, value):
+        """Update studio override values.
+
+        Not an api method, should be called by parent.
+        """
+        value = self._check_update_value(value, "studio override")
+        value, _ = self._prepare_value(value)
+
+        self._studio_value = value
+        self._studio_override_metadata = {}
+        self.had_studio_override = value is not NOT_SET
+
+    def update_project_value(self, value):
+        """Update project override values.
+
+        Not an api method, should be called by parent.
+        """
+        value = self._check_update_value(value, "project override")
+        value, _metadata = self._prepare_value(value)
+
+        self._project_value = value
+        self._project_override_metadata = {}
+        self.had_project_override = value is not NOT_SET
@@ -191,7 +191,7 @@ class DictMutableKeysEntity(EndpointEntity):
        child_entity = self.children_by_key[key]
        self.set_child_label(child_entity, label)

-    def _item_initalization(self):
+    def _item_initialization(self):
        self._default_metadata = {}
        self._studio_override_metadata = {}
        self._project_override_metadata = {}
@@ -8,7 +8,7 @@ from .lib import (


class BaseEnumEntity(InputEntity):
-    def _item_initalization(self):
+    def _item_initialization(self):
        self.multiselection = True
        self.value_on_not_set = None
        self.enum_items = None

@@ -70,7 +70,7 @@ class BaseEnumEntity(InputEntity):
class EnumEntity(BaseEnumEntity):
    schema_types = ["enum"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.multiselection = self.schema_data.get("multiselection", False)
        self.enum_items = self.schema_data.get("enum_items")
        # Default is optional and non breaking attribute

@@ -143,6 +143,7 @@ class HostsEnumEntity(BaseEnumEntity):
        "aftereffects",
        "blender",
        "celaction",
+        "flame",
        "fusion",
        "harmony",
        "hiero",

@@ -156,7 +157,7 @@ class HostsEnumEntity(BaseEnumEntity):
        "standalonepublisher"
    ]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.multiselection = self.schema_data.get("multiselection", True)
        use_empty_value = False
        if not self.multiselection:

@@ -249,7 +250,7 @@ class HostsEnumEntity(BaseEnumEntity):
class AppsEnumEntity(BaseEnumEntity):
    schema_types = ["apps-enum"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.multiselection = True
        self.value_on_not_set = []
        self.enum_items = []

@@ -316,7 +317,7 @@ class AppsEnumEntity(BaseEnumEntity):
class ToolsEnumEntity(BaseEnumEntity):
    schema_types = ["tools-enum"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.multiselection = True
        self.value_on_not_set = []
        self.enum_items = []

@@ -375,7 +376,7 @@ class ToolsEnumEntity(BaseEnumEntity):
class TaskTypeEnumEntity(BaseEnumEntity):
    schema_types = ["task-types-enum"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.multiselection = self.schema_data.get("multiselection", True)
        if self.multiselection:
            self.valid_value_types = (list, )

@@ -451,7 +452,7 @@ class TaskTypeEnumEntity(BaseEnumEntity):
class DeadlineUrlEnumEntity(BaseEnumEntity):
    schema_types = ["deadline_url-enum"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.multiselection = self.schema_data.get("multiselection", True)

        self.enum_items = []

@@ -502,7 +503,7 @@ class DeadlineUrlEnumEntity(BaseEnumEntity):
class AnatomyTemplatesEnumEntity(BaseEnumEntity):
    schema_types = ["anatomy-templates-enum"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.multiselection = False

        self.enum_items = []
@@ -362,7 +362,7 @@ class NumberEntity(InputEntity):
    float_number_regex = re.compile(r"^\d+\.\d+$")
    int_number_regex = re.compile(r"^\d+$")

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.minimum = self.schema_data.get("minimum", -99999)
        self.maximum = self.schema_data.get("maximum", 99999)
        self.decimal = self.schema_data.get("decimal", 0)

@@ -420,7 +420,7 @@ class NumberEntity(InputEntity):
class BoolEntity(InputEntity):
    schema_types = ["boolean"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.valid_value_types = (bool, )
        value_on_not_set = self.convert_to_valid_type(
            self.schema_data.get("default", True)

@@ -431,7 +431,7 @@ class BoolEntity(InputEntity):
class TextEntity(InputEntity):
    schema_types = ["text"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.valid_value_types = (STRING_TYPE, )
        self.value_on_not_set = ""

@@ -449,7 +449,7 @@ class TextEntity(InputEntity):
class PathInput(InputEntity):
    schema_types = ["path-input"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.valid_value_types = (STRING_TYPE, )
        self.value_on_not_set = ""

@@ -460,7 +460,7 @@ class PathInput(InputEntity):
class RawJsonEntity(InputEntity):
    schema_types = ["raw-json"]

-    def _item_initalization(self):
+    def _item_initialization(self):
        # Schema must define if valid value is dict or list
        store_as_string = self.schema_data.get("store_as_string", False)
        is_list = self.schema_data.get("is_list", False)

@@ -48,7 +48,7 @@ class PathEntity(ItemEntity):
            raise AttributeError(self.attribute_error_msg.format("items"))
        return self.child_obj.items()

-    def _item_initalization(self):
+    def _item_initialization(self):
        if self.group_item is None and not self.is_group:
            self.is_group = True

@@ -216,7 +216,7 @@ class ListStrictEntity(ItemEntity):
            return self.children[idx]
        return default

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.valid_value_types = (list, )
        self.require_key = True

@@ -149,7 +149,7 @@ class ListEntity(EndpointEntity):
            return list(value)
        return NOT_SET

-    def _item_initalization(self):
+    def _item_initialization(self):
        self.valid_value_types = (list, )
        self.children = []
        self.value_on_not_set = []
@@ -65,7 +65,7 @@ class RootEntity(BaseItemEntity):
        super(RootEntity, self).__init__(schema_data)
        self._require_restart_callbacks = []
        self._item_ids_require_restart = set()
-        self._item_initalization()
+        self._item_initialization()
        if reset:
            self.reset()

@@ -176,7 +176,7 @@ class RootEntity(BaseItemEntity):
        for child_obj in added_children:
            self.gui_layout.append(child_obj)

-    def _item_initalization(self):
+    def _item_initialization(self):
        # Store `self` to `root_item` for children entities
        self.root_item = self
@@ -208,6 +208,25 @@
}
```

+## dict-roots
+- entity can be used only in Project settings
+- keys of the dictionary are based on the current project's roots
+- they are not updated "live"; root changes must be saved first, and only then can values on this entity be modified
+# TODO do live updates
+```
+{
+    "type": "dict-roots",
+    "key": "roots",
+    "label": "Roots",
+    "object_type": {
+        "type": "path",
+        "multiplatform": true,
+        "multipath": false
+    }
+}
+```

## dict-conditional
- is similar to `dict` but always has one enum entity available
- the enum entity is single-selection and its value defines the other children entities
@@ -43,6 +43,61 @@
                }
            ]
        },
+        {
+            "type": "dict",
+            "collapsible": true,
+            "is_group": true,
+            "key": "CollectRemoteInstances",
+            "label": "Collect Instances for Webpublish",
+            "children": [
+                {
+                    "type": "label",
+                    "label": "Set color for publishable layers, set publishable families."
+                },
+                {
+                    "type": "list",
+                    "key": "color_code_mapping",
+                    "label": "Color code mappings",
+                    "use_label_wrap": false,
+                    "collapsible": false,
+                    "object_type": {
+                        "type": "dict",
+                        "children": [
+                            {
+                                "type": "list",
+                                "key": "color_code",
+                                "label": "Color codes for layers",
+                                "object_type": "text"
+                            },
+                            {
+                                "type": "list",
+                                "key": "layer_name_regex",
+                                "label": "Layer name regex",
+                                "object_type": "text"
+                            },
+                            {
+                                "type": "splitter"
+                            },
+                            {
+                                "key": "family",
+                                "label": "Resulting family",
+                                "type": "enum",
+                                "enum_items": [
+                                    {
+                                        "image": "image"
+                                    }
+                                ]
+                            },
+                            {
+                                "type": "text",
+                                "key": "subset_template_name",
+                                "label": "Subset template name"
+                            }
+                        ]
+                    }
+                }
+            ]
+        },
        {
            "type": "dict",
            "collapsible": true,
@@ -0,0 +1,39 @@
+{
+    "type": "dict",
+    "key": "flame",
+    "label": "Autodesk Flame",
+    "collapsible": true,
+    "checkbox_key": "enabled",
+    "children": [
+        {
+            "type": "boolean",
+            "key": "enabled",
+            "label": "Enabled"
+        },
+        {
+            "type": "schema_template",
+            "name": "template_host_unchangables"
+        },
+        {
+            "key": "environment",
+            "label": "Environment",
+            "type": "raw-json"
+        },
+        {
+            "type": "dict-modifiable",
+            "key": "variants",
+            "collapsible_key": true,
+            "use_label_wrap": false,
+            "object_type": {
+                "type": "dict",
+                "collapsible": true,
+                "children": [
+                    {
+                        "type": "schema_template",
+                        "name": "template_host_variant_items"
+                    }
+                ]
+            }
+        }
+    ]
+}
@@ -20,26 +20,21 @@
            "type": "raw-json"
        },
        {
-            "type": "dict",
+            "type": "dict-modifiable",
            "key": "variants",
-            "children": [
-                {
-                    "type": "schema_template",
-                    "name": "template_host_variant",
-                    "template_data": [
-                        {
-                            "app_variant_label": "2020",
-                            "app_variant": "2020",
-                            "variant_skip_paths": ["use_python_2"]
-                        },
-                        {
-                            "app_variant_label": "2021",
-                            "app_variant": "2021",
-                            "variant_skip_paths": ["use_python_2"]
-                        }
-                    ]
-                }
-            ]
+            "collapsible_key": true,
+            "use_label_wrap": false,
+            "object_type": {
+                "type": "dict",
+                "collapsible": true,
+                "children": [
+                    {
+                        "type": "schema_template",
+                        "name": "template_host_variant_items",
+                        "skip_paths": ["use_python_2"]
+                    }
+                ]
+            }
        }
    ]
}
@@ -9,6 +9,10 @@
        {
            "type": "schema",
            "name": "schema_maya"
        },
+        {
+            "type": "schema",
+            "name": "schema_flame"
+        },
        {
            "type": "schema_template",
            "name": "template_nuke",
@@ -8,8 +8,7 @@ from avalon.api import AvalonMongoDB
from openpype import style
from openpype.api import resources

from avalon.tools import lib as tools_lib
-from avalon.tools.widgets import AssetWidget
+from openpype.tools.utils.widgets import AssetWidget
from avalon.vendor import qtawesome
from .models import ProjectModel
from .lib import get_action_label, ProjectHandler
@@ -164,8 +164,9 @@ class LoaderWindow(QtWidgets.QDialog):

        subsets_widget.load_started.connect(self._on_load_start)
        subsets_widget.load_ended.connect(self._on_load_end)
-        repres_widget.load_started.connect(self._on_load_start)
-        repres_widget.load_ended.connect(self._on_load_end)
+        if repres_widget:
+            repres_widget.load_started.connect(self._on_load_start)
+            repres_widget.load_ended.connect(self._on_load_end)

        self._sync_server_enabled = sync_server_enabled
@@ -2,20 +2,27 @@ import sys
import time
import logging

+from Qt import QtWidgets, QtCore
+
+from openpype.hosts.maya.api.lib import assign_look_by_version
+
from avalon import style, io
from avalon.tools import lib
-from avalon.vendor.Qt import QtWidgets, QtCore

from maya import cmds
# old api for MFileIO
import maya.OpenMaya
import maya.api.OpenMaya as om

-from . import widgets
-from . import commands
-from . vray_proxies import vrayproxy_assign_look
+from .widgets import (
+    AssetOutliner,
+    LookOutliner
+)
+from .commands import (
+    get_workfile,
+    remove_unused_looks
+)
+from .vray_proxies import vrayproxy_assign_look


module = sys.modules[__name__]

@@ -32,7 +39,7 @@ class App(QtWidgets.QWidget):
        # Store callback references
        self._callbacks = []

-        filename = commands.get_workfile()
+        filename = get_workfile()

        self.setObjectName("lookManager")
        self.setWindowTitle("Look Manager 1.3.0 - [{}]".format(filename))

@@ -57,13 +64,13 @@ class App(QtWidgets.QWidget):
        """Build the UI"""

        # Assets (left)
-        asset_outliner = widgets.AssetOutliner()
+        asset_outliner = AssetOutliner()

        # Looks (right)
        looks_widget = QtWidgets.QWidget()
        looks_layout = QtWidgets.QVBoxLayout(looks_widget)

-        look_outliner = widgets.LookOutliner()  # Database look overview
+        look_outliner = LookOutliner()  # Database look overview

        assign_selected = QtWidgets.QCheckBox("Assign to selected only")
        assign_selected.setToolTip("Whether to assign only to selected nodes "

@@ -124,7 +131,7 @@ class App(QtWidgets.QWidget):
            lambda: self.echo("Loaded assets.."))

        self.look_outliner.menu_apply_action.connect(self.on_process_selected)
-        self.remove_unused.clicked.connect(commands.remove_unused_looks)
+        self.remove_unused.clicked.connect(remove_unused_looks)

        # Maya renderlayer switch callback
        callback = om.MEventMessage.addEventCallback(
@@ -9,7 +9,7 @@ from openpype.hosts.maya.api import lib
from avalon import io, api


-import vray_proxies
+from .vray_proxies import get_alembic_ids_cache

log = logging.getLogger(__name__)

@@ -146,7 +146,7 @@ def create_items_from_nodes(nodes):
    vray_proxy_nodes = cmds.ls(nodes, type="VRayProxy")
    for vp in vray_proxy_nodes:
        path = cmds.getAttr("{}.fileName".format(vp))
-        ids = vray_proxies.get_alembic_ids_cache(path)
+        ids = get_alembic_ids_cache(path)
        parent_id = {}
        for k, _ in ids.items():
            pid = k.split(":")[0]
@@ -1,7 +1,8 @@
from collections import defaultdict
-from avalon.tools import models

-from avalon.vendor.Qt import QtCore
+from Qt import QtCore

+from avalon.tools import models
from avalon.vendor import qtawesome
from avalon.style import colors
@@ -1,4 +1,4 @@
-from avalon.vendor.Qt import QtWidgets, QtCore
+from Qt import QtWidgets, QtCore


DEFAULT_COLOR = "#fb9c15"
@@ -1,13 +1,16 @@
import logging
from collections import defaultdict

-from avalon.vendor.Qt import QtWidgets, QtCore
+from Qt import QtWidgets, QtCore

# TODO: expose this better in avalon core
from avalon.tools import lib
from avalon.tools.models import TreeModel

-from . import models
+from .models import (
+    AssetModel,
+    LookModel
+)
from . import commands
from . import views

@@ -30,7 +33,7 @@ class AssetOutliner(QtWidgets.QWidget):
        title.setAlignment(QtCore.Qt.AlignCenter)
        title.setStyleSheet("font-weight: bold; font-size: 12px")

-        model = models.AssetModel()
+        model = AssetModel()
        view = views.View()
        view.setModel(model)
        view.customContextMenuRequested.connect(self.right_mouse_menu)

@@ -201,7 +204,7 @@ class LookOutliner(QtWidgets.QWidget):
        title.setStyleSheet("font-weight: bold; font-size: 12px")
        title.setAlignment(QtCore.Qt.AlignCenter)

-        model = models.LookModel()
+        model = LookModel()

        # Proxy for dynamic sorting
        proxy = QtCore.QSortFilterProxyModel()

@@ -257,5 +260,3 @@ class LookOutliner(QtWidgets.QWidget):
        menu.addAction(apply_action)

        menu.exec_(globalpos)
-
-
@@ -1456,7 +1456,11 @@ class HierarchyModel(QtCore.QAbstractItemModel):
            return

        raw_data = mime_data.data("application/copy_task")
-        encoded_data = QtCore.QByteArray.fromRawData(raw_data)
+        if isinstance(raw_data, QtCore.QByteArray):
+            # Raw data is already a QByteArray and does not need loading
+            encoded_data = raw_data
+        else:
+            encoded_data = QtCore.QByteArray.fromRawData(raw_data)
        stream = QtCore.QDataStream(encoded_data, QtCore.QIODevice.ReadOnly)
        text = stream.readQString()
        try:
@@ -7,7 +7,6 @@ an active window manager; such as via Travis-CI.
"""
import os
import sys
import traceback
import inspect
import logging
@@ -142,18 +142,23 @@ class ConsoleTrayApp:
            self.tray_reconnect = False
            ConsoleTrayApp.webserver_client.close()

-    def _send_text(self, new_text):
+    def _send_text_queue(self):
+        """Sends lines and purges queue"""
+        lines = tuple(self.new_text)
+        self.new_text.clear()
+
+        if lines:
+            self._send_lines(lines)
+
+    def _send_lines(self, lines):
        """ Send console content. """
        if not ConsoleTrayApp.webserver_client:
            return

-        if isinstance(new_text, str):
-            new_text = collections.deque(new_text.split("\n"))
-
        payload = {
            "host": self.host_id,
            "action": host_console_listener.MsgAction.ADD,
-            "text": "\n".join(new_text)
+            "text": "\n".join(lines)
        }

        self._send(payload)
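The `_send_text_queue` refactor above snapshots the pending lines and clears the deque before sending, so anything appended while a send is in flight is kept for the next timer tick. A minimal sketch of that pattern (class and attribute names are illustrative, not the OpenPype API):

```python
import collections


class LineQueue:
    """Accumulates log lines and flushes them in batches."""

    def __init__(self):
        self.new_text = collections.deque()
        self.sent = []  # stand-in for the websocket payload

    def add(self, text):
        # Stash incoming text line by line, like the tray app does.
        self.new_text.extend(text.split("\n"))

    def flush(self):
        # Snapshot first, then purge, so nothing is lost between the two.
        lines = tuple(self.new_text)
        self.new_text.clear()
        if lines:
            self.sent.append("\n".join(lines))
```

Flushing an empty queue is a no-op, which keeps the periodic timer callback cheap when no output arrived.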
@@ -174,14 +179,7 @@ class ConsoleTrayApp:
            if self.tray_reconnect:
                self._connect()  # reconnect

-            if ConsoleTrayApp.webserver_client and self.new_text:
-                self._send_text(self.new_text)
-                self.new_text = collections.deque()
-
-            if self.new_text:  # no webserver_client, text keeps stashing
-                start = max(len(self.new_text) - self.MAX_LINES, 0)
-                self.new_text = itertools.islice(self.new_text,
-                                                 start, self.MAX_LINES)
+            self._send_text_queue()

            if not self.initialized:
                if self.initializing:
@@ -191,7 +189,7 @@ class ConsoleTrayApp:
            elif not host_connected:
                text = "{} process is not alive. Exiting".format(self.host)
                print(text)
-                self._send_text([text])
+                self._send_lines([text])
                ConsoleTrayApp.websocket_server.stop()
                sys.exit(1)
            elif host_connected:
@@ -205,14 +203,15 @@ class ConsoleTrayApp:
                self.initializing = True

                self.launch_method(*self.subprocess_args)
-            elif ConsoleTrayApp.process.poll() is not None:
-                self.exit()
-            elif ConsoleTrayApp.callback_queue:
+            elif ConsoleTrayApp.callback_queue and \
+                    not ConsoleTrayApp.callback_queue.empty():
                try:
                    callback = ConsoleTrayApp.callback_queue.get(block=False)
                    callback()
                except queue.Empty:
                    pass
+            elif ConsoleTrayApp.process.poll() is not None:
+                self.exit()

        @classmethod
        def execute_in_main_thread(cls, func_to_call_from_main_thread):
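The reordered branch above drains at most one queued callback per timer tick without ever blocking the GUI thread: it checks `empty()` first and still guards `get(block=False)` against `queue.Empty`, since the check and the get can race with another consumer. A hedged sketch of that polling pattern (the function name is made up for illustration):

```python
import queue


def drain_one(callback_queue):
    """Run at most one queued callback; never block the caller."""
    if callback_queue is None or callback_queue.empty():
        return False
    try:
        # empty() above is only a hint; the queue may be drained by
        # another thread between the check and this non-blocking get.
        callback = callback_queue.get(block=False)
    except queue.Empty:
        return False
    callback()
    return True
```

Returning a boolean lets the caller decide whether to keep draining or yield back to the event loop.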
@@ -232,8 +231,9 @@ class ConsoleTrayApp:
        self._close()
        if ConsoleTrayApp.websocket_server:
            ConsoleTrayApp.websocket_server.stop()
-        ConsoleTrayApp.process.kill()
-        ConsoleTrayApp.process.wait()
+        if ConsoleTrayApp.process:
+            ConsoleTrayApp.process.kill()
+            ConsoleTrayApp.process.wait()
        if self.timer:
            self.timer.stop()
        QtCore.QCoreApplication.exit()
@@ -35,26 +35,6 @@ def application():
    yield app


-def defer(delay, func):
-    """Append artificial delay to `func`
-
-    This aids in keeping the GUI responsive, but complicates logic
-    when producing tests. To combat this, the environment variable ensures
-    that every operation is synchronous.
-
-    Arguments:
-        delay (float): Delay multiplier; default 1, 0 means no delay
-        func (callable): Any callable
-
-    """
-
-    delay *= float(os.getenv("PYBLISH_DELAY", 1))
-    if delay > 0:
-        return QtCore.QTimer.singleShot(delay, func)
-    else:
-        return func()
-
-
class SharedObjects:
    jobs = {}
@@ -82,18 +62,6 @@ def schedule(func, time, channel="default"):
    SharedObjects.jobs[channel] = timer


-@contextlib.contextmanager
-def dummy():
-    """Dummy context manager
-
-    Usage:
-        >> with some_context() if False else dummy():
-        ..    pass
-
-    """
-    yield
-
-
def iter_model_rows(model, column, include_root=False):
    """Iterate over all row indices in a model"""
    indices = [QtCore.QModelIndex()]  # start iteration at root
@@ -111,76 +79,6 @@ def iter_model_rows(model, column, include_root=False):
            yield index


-@contextlib.contextmanager
-def preserve_states(tree_view,
-                    column=0,
-                    role=None,
-                    preserve_expanded=True,
-                    preserve_selection=True,
-                    expanded_role=QtCore.Qt.DisplayRole,
-                    selection_role=QtCore.Qt.DisplayRole):
-    """Preserves row selection in QTreeView by column's data role.
-
-    This function is created to maintain the selection status of
-    the model items. When refresh is triggered the items which are expanded
-    will stay expanded and vice versa.
-
-    tree_view (QWidgets.QTreeView): the tree view nested in the application
-    column (int): the column to retrieve the data from
-    role (int): the role which dictates what will be returned
-
-    Returns:
-        None
-    """
-    # When `role` is set then override both expanded and selection roles
-    if role:
-        expanded_role = role
-        selection_role = role
-
-    model = tree_view.model()
-    selection_model = tree_view.selectionModel()
-    flags = selection_model.Select | selection_model.Rows
-
-    expanded = set()
-
-    if preserve_expanded:
-        for index in iter_model_rows(
-            model, column=column, include_root=False
-        ):
-            if tree_view.isExpanded(index):
-                value = index.data(expanded_role)
-                expanded.add(value)
-
-    selected = None
-
-    if preserve_selection:
-        selected_rows = selection_model.selectedRows()
-        if selected_rows:
-            selected = set(row.data(selection_role) for row in selected_rows)
-
-    try:
-        yield
-    finally:
-        if expanded:
-            for index in iter_model_rows(
-                model, column=0, include_root=False
-            ):
-                value = index.data(expanded_role)
-                is_expanded = value in expanded
-                # skip if new index was created meanwhile
-                if is_expanded is None:
-                    continue
-                tree_view.setExpanded(index, is_expanded)
-
-        if selected:
-            # Go through all indices, select the ones with similar data
-            for index in iter_model_rows(
-                model, column=column, include_root=False
-            ):
-                value = index.data(selection_role)
-                state = value in selected
-                if state:
-                    tree_view.scrollTo(index)  # Ensure item is visible
-                    selection_model.select(index, flags)
-
-
@contextlib.contextmanager
def preserve_expanded_rows(tree_view, column=0, role=None):
    """Preserves expanded row in QTreeView by column's data role.
104 openpype/vendor/python/common/capture.py (vendored)
@@ -161,37 +161,62 @@ def capture(camera=None,
    cmds.currentTime(cmds.currentTime(query=True))

    padding = 10  # Extend panel to accommodate for OS window manager

    with _independent_panel(width=width + padding,
                            height=height + padding,
                            off_screen=off_screen) as panel:
        cmds.setFocus(panel)

-        with contextlib.nested(
-                _disabled_inview_messages(),
-                _maintain_camera(panel, camera),
-                _applied_viewport_options(viewport_options, panel),
-                _applied_camera_options(camera_options, panel),
-                _applied_display_options(display_options),
-                _applied_viewport2_options(viewport2_options),
-                _isolated_nodes(isolate, panel),
-                _maintained_time()):
+        all_playblast_kwargs = {
+            "compression": compression,
+            "format": format,
+            "percent": 100,
+            "quality": quality,
+            "viewer": viewer,
+            "startTime": start_frame,
+            "endTime": end_frame,
+            "offScreen": off_screen,
+            "showOrnaments": show_ornaments,
+            "forceOverwrite": overwrite,
+            "filename": filename,
+            "widthHeight": [width, height],
+            "rawFrameNumbers": raw_frame_numbers,
+            "framePadding": frame_padding
+        }
+        all_playblast_kwargs.update(playblast_kwargs)

-            output = cmds.playblast(
-                compression=compression,
-                format=format,
-                percent=100,
-                quality=quality,
-                viewer=viewer,
-                startTime=start_frame,
-                endTime=end_frame,
-                offScreen=off_screen,
-                showOrnaments=show_ornaments,
-                forceOverwrite=overwrite,
-                filename=filename,
-                widthHeight=[width, height],
-                rawFrameNumbers=raw_frame_numbers,
-                framePadding=frame_padding,
-                **playblast_kwargs)
+        if getattr(contextlib, "nested", None):
+            with contextlib.nested(
+                _disabled_inview_messages(),
+                _maintain_camera(panel, camera),
+                _applied_viewport_options(viewport_options, panel),
+                _applied_camera_options(camera_options, panel),
+                _applied_display_options(display_options),
+                _applied_viewport2_options(viewport2_options),
+                _isolated_nodes(isolate, panel),
+                _maintained_time()
+            ):
+                output = cmds.playblast(**all_playblast_kwargs)
+        else:
+            with contextlib.ExitStack() as stack:
+                stack.enter_context(_disabled_inview_messages())
+                stack.enter_context(_maintain_camera(panel, camera))
+                stack.enter_context(
+                    _applied_viewport_options(viewport_options, panel)
+                )
+                stack.enter_context(
+                    _applied_camera_options(camera_options, panel)
+                )
+                stack.enter_context(
+                    _applied_display_options(display_options)
+                )
+                stack.enter_context(
+                    _applied_viewport2_options(viewport2_options)
+                )
+                stack.enter_context(_isolated_nodes(isolate, panel))
+                stack.enter_context(_maintained_time())
+
+                output = cmds.playblast(**all_playblast_kwargs)

        return output
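The hunk above keeps the vendored capture.py working on both Python 2 and 3: `contextlib.nested` was removed in Python 3, and `contextlib.ExitStack` (added in 3.3) is its replacement for entering a variable number of context managers. A small self-contained sketch of the equivalence, using dummy context managers instead of the Maya ones:

```python
import contextlib


@contextlib.contextmanager
def tag(log, name):
    """Dummy context manager that records enter/exit order."""
    log.append("enter " + name)
    try:
        yield
    finally:
        log.append("exit " + name)


def run_with_contexts(log):
    # ExitStack enters any number of context managers and unwinds
    # them in reverse order on exit, like contextlib.nested did.
    with contextlib.ExitStack() as stack:
        for name in ("messages", "camera", "viewport"):
            stack.enter_context(tag(log, name))
        log.append("playblast")
```

The `getattr(contextlib, "nested", None)` probe in the diff is what lets a single vendored file pick the right branch at runtime.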
@@ -364,7 +389,8 @@ def apply_view(panel, **options):

    # Display options
    display_options = options.get("display_options", {})
-    for key, value in display_options.iteritems():
+    _iteritems = getattr(display_options, "iteritems", display_options.items)
+    for key, value in _iteritems():
        if key in _DisplayOptionsRGB:
            cmds.displayRGBColor(key, *value)
        else:
@@ -372,16 +398,21 @@ def apply_view(panel, **options):

    # Camera options
    camera_options = options.get("camera_options", {})
-    for key, value in camera_options.iteritems():
+    _iteritems = getattr(camera_options, "iteritems", camera_options.items)
+    for key, value in _iteritems():
        cmds.setAttr("{0}.{1}".format(camera, key), value)

    # Viewport options
    viewport_options = options.get("viewport_options", {})
-    for key, value in viewport_options.iteritems():
+    _iteritems = getattr(viewport_options, "iteritems", viewport_options.items)
+    for key, value in _iteritems():
        cmds.modelEditor(panel, edit=True, **{key: value})

    viewport2_options = options.get("viewport2_options", {})
-    for key, value in viewport2_options.iteritems():
+    _iteritems = getattr(
+        viewport2_options, "iteritems", viewport2_options.items
+    )
+    for key, value in _iteritems():
        attr = "hardwareRenderingGlobals.{0}".format(key)
        cmds.setAttr(attr, value)
@@ -629,14 +660,16 @@ def _applied_camera_options(options, panel):
                     "for capture: %s" % opt)
            options.pop(opt)

-    for opt, value in options.iteritems():
+    _iteritems = getattr(options, "iteritems", options.items)
+    for opt, value in _iteritems():
        cmds.setAttr(camera + "." + opt, value)

    try:
        yield
    finally:
        if old_options:
-            for opt, value in old_options.iteritems():
+            _iteritems = getattr(old_options, "iteritems", old_options.items)
+            for opt, value in _iteritems():
                cmds.setAttr(camera + "." + opt, value)
@@ -722,14 +755,16 @@ def _applied_viewport2_options(options):
            options.pop(opt)

    # Apply settings
-    for opt, value in options.iteritems():
+    _iteritems = getattr(options, "iteritems", options.items)
+    for opt, value in _iteritems():
        cmds.setAttr("hardwareRenderingGlobals." + opt, value)

    try:
        yield
    finally:
        # Restore previous settings
-        for opt, value in original.iteritems():
+        _iteritems = getattr(original, "iteritems", original.items)
+        for opt, value in _iteritems():
            cmds.setAttr("hardwareRenderingGlobals." + opt, value)
@@ -769,7 +804,8 @@ def _maintain_camera(panel, camera):
    try:
        yield
    finally:
-        for camera, renderable in state.iteritems():
+        _iteritems = getattr(state, "iteritems", state.items)
+        for camera, renderable in _iteritems():
            cmds.setAttr(camera + ".rnd", renderable)
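All of the capture.py hunks above apply the same Python 2/3 compatibility idiom: `dict.iteritems` exists only on Python 2, so `getattr(d, "iteritems", d.items)` picks whichever items iterator the running interpreter provides. A standalone sketch of the idiom (the helper name is illustrative):

```python
def iter_items(mapping):
    """Return an items iterator on both Python 2 and Python 3 dicts.

    On Python 2 getattr finds the bound ``iteritems`` method; on
    Python 3 it falls back to ``items``, which is already lazy.
    """
    _iteritems = getattr(mapping, "iteritems", mapping.items)
    return _iteritems()


# Example: applying viewport-style key/value options from a dict.
applied = {}
for key, value in iter_items({"displayGradient": True, "wireframeOnShaded": 0}):
    applied[key] = value
```

Note the fallback must be the unbound-looking attribute `mapping.items` (not `mapping.items()`), so the method is only called once per loop, exactly as the diff does.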
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
-__version__ = "3.5.0"
+__version__ = "3.6.0-nightly.2"
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
-version = "3.0.0"
+version = "3.6.0-nightly.2" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"
@@ -0,0 +1,94 @@
+import pytest
+import os
+import shutil
+
+from tests.lib.testing_classes import PublishTest
+
+
+class TestPublishInPhotoshop(PublishTest):
+    """Basic test case for publishing in Photoshop
+
+    Uses generic TestCase to prepare fixtures for test data, testing DBs,
+    env vars.
+
+    Opens Photoshop, runs publish on prepared workfile.
+
+    Then checks content of DB (if subset, version and representations were
+    created).
+    Checks tmp folder if all expected files were published.
+
+    """
+    PERSIST = True
+
+    TEST_FILES = [
+        ("1Bciy2pCwMKl1UIpxuPnlX_LHMo_Xkq0K", "test_photoshop_publish.zip", "")
+    ]
+
+    APP = "photoshop"
+    APP_VARIANT = "2020"
+
+    APP_NAME = "{}/{}".format(APP, APP_VARIANT)
+
+    TIMEOUT = 120  # publish timeout
+
+    @pytest.fixture(scope="module")
+    def last_workfile_path(self, download_test_data):
+        """Get last_workfile_path from source data.
+
+        The host expects the workfile in the proper folder, so a copy is
+        done first.
+        """
+        src_path = os.path.join(download_test_data,
+                                "input",
+                                "workfile",
+                                "test_project_test_asset_TestTask_v001.psd")
+        dest_folder = os.path.join(download_test_data,
+                                   self.PROJECT,
+                                   self.ASSET,
+                                   "work",
+                                   self.TASK)
+        os.makedirs(dest_folder)
+        dest_path = os.path.join(dest_folder,
+                                 "test_project_test_asset_TestTask_v001.psd")
+        shutil.copy(src_path, dest_path)
+
+        yield dest_path
+
+    @pytest.fixture(scope="module")
+    def startup_scripts(self, monkeypatch_session, download_test_data):
+        """Points the host to startup scripts from input data"""
+        os.environ["IS_HEADLESS"] = "true"
+
+    def test_db_asserts(self, dbcon, publish_finished):
+        """Host and input data dependent expected results in DB."""
+        print("test_db_asserts")
+        assert 5 == dbcon.count_documents({"type": "version"}), \
+            "Not expected no of versions"
+
+        assert 0 == dbcon.count_documents({"type": "version",
+                                           "name": {"$ne": 1}}), \
+            "Only versions with 1 expected"
+
+        assert 1 == dbcon.count_documents({"type": "subset",
+                                           "name": "modelMain"}), \
+            "modelMain subset must be present"
+
+        assert 1 == dbcon.count_documents({"type": "subset",
+                                           "name": "workfileTest_task"}), \
+            "workfileTest_task subset must be present"
+
+        assert 11 == dbcon.count_documents({"type": "representation"}), \
+            "Not expected no of representations"
+
+        assert 2 == dbcon.count_documents({"type": "representation",
+                                           "context.subset": "modelMain",
+                                           "context.ext": "abc"}), \
+            "Not expected no of representations with ext 'abc'"
+
+        assert 2 == dbcon.count_documents({"type": "representation",
+                                           "context.subset": "modelMain",
+                                           "context.ext": "ma"}), \
+            "Not expected no of representations with ext 'ma'"
+
+
+if __name__ == "__main__":
+    test_case = TestPublishInPhotoshop()
@@ -228,6 +228,7 @@ class PublishTest(ModuleUnitTest):
        while launched_app.poll() is None:
            time.sleep(0.5)
            if time.time() - time_start > self.TIMEOUT:
+                launched_app.terminate()
                raise ValueError("Timeout reached")

        # some clean exit test possible?
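The one-line fix above makes sure the launched application is actually terminated before the timeout error is raised, instead of being left running. The poll-and-terminate loop can be sketched as a standalone helper (the function name is made up; the test classes wrap this differently):

```python
import subprocess
import sys
import time


def wait_or_kill(proc, timeout):
    """Poll a child process; terminate it if it outlives `timeout` seconds."""
    start = time.time()
    while proc.poll() is None:
        time.sleep(0.05)
        if time.time() - start > timeout:
            # Kill the child first, then signal the failure to the caller,
            # so a timed-out test does not leak a running application.
            proc.terminate()
            proc.wait()
            raise ValueError("Timeout reached")
    return proc.returncode
```

`proc.wait()` after `terminate()` reaps the child so no zombie process survives the raised exception.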
@@ -76,3 +76,28 @@ I've selected `vdb1` and went **OpenPype -> Create** and selected **VDB Cache**.
geometry ROP in `/out` and sets its paths to output vdb files. During the publishing process
whole dops are cooked.

+## Publishing Houdini Digital Assets (HDA)
+
+You can publish most Houdini nodes as an HDA for easy interchange of data between Houdini instances, or even
+other DCCs with Houdini Engine.
+
+## Creating HDA
+
+Simply select the nodes you want to include in the HDA, go to **OpenPype -> Create** and select **Houdini digital asset (hda)**.
+You can even use an already existing HDA as the selected node, and it will be published (see below for a limitation).
+
+:::caution HDA Workflow limitations
+As long as the HDA stays the same type - even when it is created from different nodes but under the same (subset) name - everything
+is fine. But once you have published a version of an HDA subset, you cannot change its type. For example, you create HDA **Foo**
+from *Cube* and *Sphere* - it will create an HDA subset named `hdaFoo` with the same type. You publish it as version 1.
+Then you create version 2 with an added *Torus*. Then you create version 3 from scratch from completely different nodes,
+but still under the resulting subset name `hdaFoo`. Everything still works as expected. But then you use an already
+existing HDA as a base, for example from a different artist. Its type cannot be changed from what it was, so even if
+it is named `hdaFoo` it has a different type. It could be published, but you could never load it and retain the ability to
+switch versions between different HDA types.
+:::
+
+## Loading HDA
+
+When you load an HDA, it will install its type in your hip file and add the published version as its definition file. When
+you switch versions via Scene Manager, it will add the new definition and set it as preferred.