Merge remote-tracking branch 'origin/develop' into feature/PYPE-1120-nk-geo-workflow

Author: karimmozlia
Date: 2021-11-18 08:24:09 +02:00
Commit: 23748e9100
308 changed files with 29174 additions and 4341 deletions

View file

@@ -33,7 +33,7 @@ jobs:
id: version
if: steps.version_type.outputs.type != 'skip'
run: |
RESULT=$(python ./tools/ci_tools.py --nightly)
RESULT=$(python ./tools/ci_tools.py --nightly --github_token ${{ secrets.GITHUB_TOKEN }})
echo ::set-output name=next_tag::$RESULT

View file

@@ -1,39 +1,99 @@
# Changelog
## [3.6.2-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.1...HEAD)
**🚀 Enhancements**
- Tools: SceneInventory in OpenPype [\#2255](https://github.com/pypeclub/OpenPype/pull/2255)
- Tools: Tasks widget [\#2251](https://github.com/pypeclub/OpenPype/pull/2251)
- Added endpoint for configured extensions [\#2221](https://github.com/pypeclub/OpenPype/pull/2221)
**🐛 Bug fixes**
- Burnins: Support mxf metadata [\#2247](https://github.com/pypeclub/OpenPype/pull/2247)
## [3.6.1](https://github.com/pypeclub/OpenPype/tree/3.6.1) (2021-11-16)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.1-nightly.1...3.6.1)
**🐛 Bug fixes**
- Loader doesn't allow changing of version before loading [\#2254](https://github.com/pypeclub/OpenPype/pull/2254)
## [3.6.0](https://github.com/pypeclub/OpenPype/tree/3.6.0) (2021-11-15)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.0-nightly.6...3.6.0)
### 📖 Documentation
- Add alternative sites for Site Sync [\#2206](https://github.com/pypeclub/OpenPype/pull/2206)
- Add command line way of running site sync server [\#2188](https://github.com/pypeclub/OpenPype/pull/2188)
**🆕 New features**
- Add validate active site button to sync queue on a project [\#2176](https://github.com/pypeclub/OpenPype/pull/2176)
- Maya : Colorspace configuration [\#2170](https://github.com/pypeclub/OpenPype/pull/2170)
- Blender: Added support for audio [\#2168](https://github.com/pypeclub/OpenPype/pull/2168)
**🚀 Enhancements**
- Tools: Subset manager in OpenPype [\#2243](https://github.com/pypeclub/OpenPype/pull/2243)
- General: Skip module directories without init file [\#2239](https://github.com/pypeclub/OpenPype/pull/2239)
- General: Static interfaces [\#2238](https://github.com/pypeclub/OpenPype/pull/2238)
- Style: Fix transparent image in style [\#2235](https://github.com/pypeclub/OpenPype/pull/2235)
- Add a "following workfile versioning" option on publish [\#2225](https://github.com/pypeclub/OpenPype/pull/2225)
- Modules: Module can add cli commands [\#2224](https://github.com/pypeclub/OpenPype/pull/2224)
- Webpublisher: Separate webpublisher logic [\#2222](https://github.com/pypeclub/OpenPype/pull/2222)
- Add both side availability on Site Sync sites to Loader [\#2220](https://github.com/pypeclub/OpenPype/pull/2220)
- Tools: Center loader and library loader on show [\#2219](https://github.com/pypeclub/OpenPype/pull/2219)
- Maya : Validate shape zero [\#2212](https://github.com/pypeclub/OpenPype/pull/2212)
- Maya : validate unique names [\#2211](https://github.com/pypeclub/OpenPype/pull/2211)
- Tools: OpenPype stylesheet in workfiles tool [\#2208](https://github.com/pypeclub/OpenPype/pull/2208)
- Ftrack: Replace Queue with deque in event handlers logic [\#2204](https://github.com/pypeclub/OpenPype/pull/2204)
- Tools: New select context dialog [\#2200](https://github.com/pypeclub/OpenPype/pull/2200)
- Maya : Validate mesh ngons [\#2199](https://github.com/pypeclub/OpenPype/pull/2199)
- Dirmap in Nuke [\#2198](https://github.com/pypeclub/OpenPype/pull/2198)
- Delivery: Check 'frame' key in template for sequence delivery [\#2196](https://github.com/pypeclub/OpenPype/pull/2196)
- Settings: Site sync project settings improvement [\#2193](https://github.com/pypeclub/OpenPype/pull/2193)
- Usage of tools code [\#2185](https://github.com/pypeclub/OpenPype/pull/2185)
- Settings: Dictionary based on project roots [\#2184](https://github.com/pypeclub/OpenPype/pull/2184)
- Subset name: Be able to pass asset document to get subset name [\#2179](https://github.com/pypeclub/OpenPype/pull/2179)
**🐛 Bug fixes**
- Ftrack: Sync project ftrack id cache issue [\#2250](https://github.com/pypeclub/OpenPype/pull/2250)
- Ftrack: Session creation and Prepare project [\#2245](https://github.com/pypeclub/OpenPype/pull/2245)
- Added queue for studio processing in PS [\#2237](https://github.com/pypeclub/OpenPype/pull/2237)
- Python 2: Unicode to string conversion [\#2236](https://github.com/pypeclub/OpenPype/pull/2236)
- Fix - enum for color coding in PS [\#2234](https://github.com/pypeclub/OpenPype/pull/2234)
- Pyblish Tool: Fix targets handling [\#2232](https://github.com/pypeclub/OpenPype/pull/2232)
- Ftrack: Base event fix of 'get\_project\_from\_entity' method [\#2214](https://github.com/pypeclub/OpenPype/pull/2214)
- Maya : multiple subsets review broken [\#2210](https://github.com/pypeclub/OpenPype/pull/2210)
- Fix - different command used for Linux and Mac OS [\#2207](https://github.com/pypeclub/OpenPype/pull/2207)
- Tools: Workfiles tool don't use avalon widgets [\#2205](https://github.com/pypeclub/OpenPype/pull/2205)
- Ftrack: Fill missing ftrack id on mongo project [\#2203](https://github.com/pypeclub/OpenPype/pull/2203)
- Project Manager: Fix copying of tasks [\#2191](https://github.com/pypeclub/OpenPype/pull/2191)
- Blender: Fix trying to pack an image when the shader node has no texture [\#2183](https://github.com/pypeclub/OpenPype/pull/2183)
- Maya: review viewport settings [\#2177](https://github.com/pypeclub/OpenPype/pull/2177)
- Maya: Aspect ratio [\#2174](https://github.com/pypeclub/OpenPype/pull/2174)
## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.4.1...3.5.0)
**Deprecated:**
- Maya: Change mayaAscii family to mayaScene [\#2106](https://github.com/pypeclub/OpenPype/pull/2106)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.5.0-nightly.8...3.5.0)
**🆕 New features**
- Added project and task into context change message in Maya [\#2131](https://github.com/pypeclub/OpenPype/pull/2131)
- Add ExtractBurnin to photoshop review [\#2124](https://github.com/pypeclub/OpenPype/pull/2124)
- PYPE-1218 - changed namespace to contain subset name in Maya [\#2114](https://github.com/pypeclub/OpenPype/pull/2114)
- Added running configurable disk mapping command before start of OP [\#2091](https://github.com/pypeclub/OpenPype/pull/2091)
- SFTP provider [\#2073](https://github.com/pypeclub/OpenPype/pull/2073)
- Maya: Validate setdress top group [\#2068](https://github.com/pypeclub/OpenPype/pull/2068)
**🚀 Enhancements**
- Maya: make rig validators configurable in settings [\#2137](https://github.com/pypeclub/OpenPype/pull/2137)
- Settings: Updated readme for entity types in settings [\#2132](https://github.com/pypeclub/OpenPype/pull/2132)
- Nuke: unified clip loader [\#2128](https://github.com/pypeclub/OpenPype/pull/2128)
- Settings UI: Project model refreshing and sorting [\#2104](https://github.com/pypeclub/OpenPype/pull/2104)
- Create Read From Rendered - Disable Relative paths by default [\#2093](https://github.com/pypeclub/OpenPype/pull/2093)
- Added choosing different dirmap mapping if workfile synched locally [\#2088](https://github.com/pypeclub/OpenPype/pull/2088)
- General: Remove IdleManager module [\#2084](https://github.com/pypeclub/OpenPype/pull/2084)
- Tray UI: Message box about missing settings defaults [\#2080](https://github.com/pypeclub/OpenPype/pull/2080)
- Tray UI: Show menu where first click happened [\#2079](https://github.com/pypeclub/OpenPype/pull/2079)
- Global: add global validators to settings [\#2078](https://github.com/pypeclub/OpenPype/pull/2078)
- Use CRF for burnin when available [\#2070](https://github.com/pypeclub/OpenPype/pull/2070)
- Project manager: Filter first item after selection of project [\#2069](https://github.com/pypeclub/OpenPype/pull/2069)
- Nuke: Adding `still` image family workflow [\#2064](https://github.com/pypeclub/OpenPype/pull/2064)
- Maya: validate authorized loaded plugins [\#2062](https://github.com/pypeclub/OpenPype/pull/2062)
- Tools: add support for pyenv on windows [\#2051](https://github.com/pypeclub/OpenPype/pull/2051)
**🐛 Bug fixes**
@@ -41,96 +101,15 @@
- Fix - oiiotool wasn't recognized even if present [\#2129](https://github.com/pypeclub/OpenPype/pull/2129)
- General: Disk mapping group [\#2120](https://github.com/pypeclub/OpenPype/pull/2120)
- Hiero: publishing effect first time makes wrong resources path [\#2115](https://github.com/pypeclub/OpenPype/pull/2115)
- Add startup script for Houdini Core. [\#2110](https://github.com/pypeclub/OpenPype/pull/2110)
- TVPaint: Behavior name of loop also accept repeat [\#2109](https://github.com/pypeclub/OpenPype/pull/2109)
- Ftrack: Project settings save custom attributes skip unknown attributes [\#2103](https://github.com/pypeclub/OpenPype/pull/2103)
- Blender: Fix NoneType error when animation\_data is missing for a rig [\#2101](https://github.com/pypeclub/OpenPype/pull/2101)
- Fix broken import in sftp provider [\#2100](https://github.com/pypeclub/OpenPype/pull/2100)
- Global: Fix docstring on publish plugin extract review [\#2097](https://github.com/pypeclub/OpenPype/pull/2097)
- Delivery Action Files Sequence fix [\#2096](https://github.com/pypeclub/OpenPype/pull/2096)
- General: Cloud mongo ca certificate issue [\#2095](https://github.com/pypeclub/OpenPype/pull/2095)
- TVPaint: Creator use context from workfile [\#2087](https://github.com/pypeclub/OpenPype/pull/2087)
- Blender: fix texture missing when publishing blend files [\#2085](https://github.com/pypeclub/OpenPype/pull/2085)
- General: Startup validations oiio tool path fix on linux [\#2083](https://github.com/pypeclub/OpenPype/pull/2083)
- Deadline: Collect deadline server does not check existence of deadline key [\#2082](https://github.com/pypeclub/OpenPype/pull/2082)
- Blender: fixed Curves with modifiers in Rigs [\#2081](https://github.com/pypeclub/OpenPype/pull/2081)
- Nuke UI scaling [\#2077](https://github.com/pypeclub/OpenPype/pull/2077)
- Maya: Fix multi-camera renders [\#2065](https://github.com/pypeclub/OpenPype/pull/2065)
- Fix Sync Queue when project disabled [\#2063](https://github.com/pypeclub/OpenPype/pull/2063)
**Merged pull requests:**
- Bump pywin32 from 300 to 301 [\#2086](https://github.com/pypeclub/OpenPype/pull/2086)
## [3.4.1](https://github.com/pypeclub/OpenPype/tree/3.4.1) (2021-09-23)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.1-nightly.1...3.4.1)
**🆕 New features**
- Settings: Flag project as deactivated and hide from tools' view [\#2008](https://github.com/pypeclub/OpenPype/pull/2008)
**🚀 Enhancements**
- General: Startup validations [\#2054](https://github.com/pypeclub/OpenPype/pull/2054)
- Nuke: proxy mode validator [\#2052](https://github.com/pypeclub/OpenPype/pull/2052)
- Ftrack: Removed ftrack interface [\#2049](https://github.com/pypeclub/OpenPype/pull/2049)
- Settings UI: Deffered set value on entity [\#2044](https://github.com/pypeclub/OpenPype/pull/2044)
- Loader: Families filtering [\#2043](https://github.com/pypeclub/OpenPype/pull/2043)
- Settings UI: Project view enhancements [\#2042](https://github.com/pypeclub/OpenPype/pull/2042)
- Settings for Nuke IncrementScriptVersion [\#2039](https://github.com/pypeclub/OpenPype/pull/2039)
- Loader & Library loader: Use tools from OpenPype [\#2038](https://github.com/pypeclub/OpenPype/pull/2038)
- Adding predefined project folders creation in PM [\#2030](https://github.com/pypeclub/OpenPype/pull/2030)
- WebserverModule: Removed interface of webserver module [\#2028](https://github.com/pypeclub/OpenPype/pull/2028)
- TimersManager: Removed interface of timers manager [\#2024](https://github.com/pypeclub/OpenPype/pull/2024)
- Feature Maya import asset from scene inventory [\#2018](https://github.com/pypeclub/OpenPype/pull/2018)
**🐛 Bug fixes**
- Timers manager: Typo fix [\#2058](https://github.com/pypeclub/OpenPype/pull/2058)
- Hiero: Editorial fixes [\#2057](https://github.com/pypeclub/OpenPype/pull/2057)
- Differentiate jpg sequences from thumbnail [\#2056](https://github.com/pypeclub/OpenPype/pull/2056)
- FFmpeg: Split command to list does not work [\#2046](https://github.com/pypeclub/OpenPype/pull/2046)
- Removed shell flag in subprocess call [\#2045](https://github.com/pypeclub/OpenPype/pull/2045)
**Merged pull requests:**
- Bump prismjs from 1.24.0 to 1.25.0 in /website [\#2050](https://github.com/pypeclub/OpenPype/pull/2050)
## [3.4.0](https://github.com/pypeclub/OpenPype/tree/3.4.0) (2021-09-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.0-nightly.6...3.4.0)
**🆕 New features**
- Nuke: Compatibility with Nuke 13 [\#2003](https://github.com/pypeclub/OpenPype/pull/2003)
**🚀 Enhancements**
- Added possibility to configure of synchronization of workfile version… [\#2041](https://github.com/pypeclub/OpenPype/pull/2041)
- General: Task types in profiles [\#2036](https://github.com/pypeclub/OpenPype/pull/2036)
- Console interpreter: Handle invalid sizes on initialization [\#2022](https://github.com/pypeclub/OpenPype/pull/2022)
- Ftrack: Show OpenPype versions in event server status [\#2019](https://github.com/pypeclub/OpenPype/pull/2019)
- General: Staging icon [\#2017](https://github.com/pypeclub/OpenPype/pull/2017)
- Ftrack: Sync to avalon actions have jobs [\#2015](https://github.com/pypeclub/OpenPype/pull/2015)
- Modules: Connect method is not required [\#2009](https://github.com/pypeclub/OpenPype/pull/2009)
- Settings UI: Number with configurable steps [\#2001](https://github.com/pypeclub/OpenPype/pull/2001)
- Moving project folder structure creation out of ftrack module \#1989 [\#1996](https://github.com/pypeclub/OpenPype/pull/1996)
**🐛 Bug fixes**
- Workfiles tool: Task selection [\#2040](https://github.com/pypeclub/OpenPype/pull/2040)
- Ftrack: Delete old versions missing settings key [\#2037](https://github.com/pypeclub/OpenPype/pull/2037)
- Nuke: typo on a button [\#2034](https://github.com/pypeclub/OpenPype/pull/2034)
- Hiero: Fix "none" named tags [\#2033](https://github.com/pypeclub/OpenPype/pull/2033)
- FFmpeg: Subprocess arguments as list [\#2032](https://github.com/pypeclub/OpenPype/pull/2032)
- General: Fix Python 2 breaking line [\#2016](https://github.com/pypeclub/OpenPype/pull/2016)
- Bugfix/webpublisher task type [\#2006](https://github.com/pypeclub/OpenPype/pull/2006)
### 📖 Documentation
- Documentation: Ftrack launch arguments update [\#2014](https://github.com/pypeclub/OpenPype/pull/2014)
## [3.3.1](https://github.com/pypeclub/OpenPype/tree/3.3.1) (2021-08-20)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.3.1-nightly.1...3.3.1)

View file

@@ -57,6 +57,17 @@ def tray(debug=False):
PypeCommands().launch_tray(debug)
@PypeCommands.add_modules
@main.group(help="Run command line arguments of OpenPype modules")
@click.pass_context
def module(ctx):
"""Module specific commands created dynamically.
These commands are generated dynamically by currently loaded addon/modules.
"""
pass
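For illustration, a hedged sketch of invoking such dynamically generated commands (`openpype_console` is OpenPype's usual CLI entry point; the concrete subcommand names depend on which addons are loaded, so the second line is a placeholder, not a real command):

```shell
# List the commands contributed by currently loaded modules
openpype_console module --help

# Run one of them (name and arguments depend on the addon)
openpype_console module <module_command> [ARGS]...
```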
@main.command()
@click.option("-d", "--debug", is_flag=True, help="Print debug messages")
@click.option("--ftrack-url", envvar="FTRACK_SERVER",
@@ -147,7 +158,9 @@ def extractenvironments(output_json_path, project, asset, task, app):
@click.option("-d", "--debug", is_flag=True, help="Print debug messages")
@click.option("-t", "--targets", help="Targets module", default=None,
multiple=True)
def publish(debug, paths, targets):
@click.option("-g", "--gui", is_flag=True,
help="Show Publish UI", default=False)
def publish(debug, paths, targets, gui):
"""Start CLI publishing.
Publish collects json from paths provided as an argument.
@@ -155,7 +168,7 @@ def publish(debug, paths, targets):
"""
if debug:
os.environ['OPENPYPE_DEBUG'] = '3'
PypeCommands.publish(list(paths), targets)
PypeCommands.publish(list(paths), targets, gui)
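A usage sketch for the extended command (the JSON path and the target name are illustrative assumptions, not values confirmed by this diff):

```shell
# Publish a collected JSON batch and show the Publish UI instead of running headless
openpype_console publish /path/to/collected_job.json --gui --targets deadline
```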
@main.command()
@@ -166,7 +179,7 @@ def publish(debug, paths, targets):
@click.option("-p", "--project", help="Project")
@click.option("-t", "--targets", help="Targets", default=None,
multiple=True)
def remotepublish(debug, project, path, host, targets=None, user=None):
def remotepublishfromapp(debug, project, path, host, user=None, targets=None):
"""Start CLI publishing.
Publish collects json from paths provided as an argument.
@@ -174,7 +187,27 @@ def remotepublish(debug, project, path, host, targets=None, user=None):
"""
if debug:
os.environ['OPENPYPE_DEBUG'] = '3'
PypeCommands.remotepublish(project, path, host, user, targets=targets)
PypeCommands.remotepublishfromapp(
project, path, host, user, targets=targets
)
@main.command()
@click.argument("path")
@click.option("-d", "--debug", is_flag=True, help="Print debug messages")
@click.option("-u", "--user", help="User email address")
@click.option("-p", "--project", help="Project")
@click.option("-t", "--targets", help="Targets", default=None,
multiple=True)
def remotepublish(debug, project, path, user=None, targets=None):
"""Start CLI publishing.
Publish collects json from paths provided as an argument.
More than one path is allowed.
"""
if debug:
os.environ['OPENPYPE_DEBUG'] = '3'
PypeCommands.remotepublish(project, path, user, targets=targets)
@main.command()
@@ -263,6 +296,34 @@ def projectmanager():
PypeCommands().launch_project_manager()
@main.command()
@click.argument("output_path")
@click.option("--project", help="Define project context")
@click.option("--asset", help="Define asset in project (project must be set)")
@click.option(
"--strict",
is_flag=True,
help="Full context must be set otherwise dialog can't be closed."
)
def contextselection(
output_path,
project,
asset,
strict
):
"""Show Qt dialog to select context.
Context is project name, asset name and task name. The result is stored
into a json file whose path is passed as the first argument.
"""
PypeCommands.contextselection(
output_path,
project,
asset,
strict
)
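The docstring implies the resulting JSON carries the three selected names; a plausible shape, assuming the obvious key names (the writer side is not shown in this diff):

```python
# Hypothetical contents of the file written to output_path
# (key names are assumed; values are examples)
{"project": "MyProject", "asset": "sh010", "task": "animation"}
```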
@main.command(
context_settings=dict(
ignore_unknown_options=True,
@@ -298,3 +359,28 @@ def run(script):
def runtests(folder, mark, pyargs):
"""Run all automatic tests after proper initialization via start.py"""
PypeCommands().run_tests(folder, mark, pyargs)
@main.command()
@click.option("-d", "--debug",
is_flag=True, help=("Run process in debug mode"))
@click.option("-a", "--active_site", required=True,
help="Name of active stie")
def syncserver(debug, active_site):
"""Run sync site server in background.
Some Site Sync use cases need to expose site to another one.
For example if majority of artists work in studio, they are not using
SS at all, but if you want to expose published assets to 'studio' site
to SFTP for only a couple of artists, some background process must
mark published assets to live on multiple sites (they might be
physically in same location - mounted shared disk).
Process mimics OP Tray with specific 'active_site' name, all
configuration for this "dummy" user comes from Setting or Local
Settings (configured by starting OP Tray with env
var OPENPYPE_LOCAL_ID set to 'active_site'.
"""
if debug:
os.environ['OPENPYPE_DEBUG'] = '3'
PypeCommands().syncserver(active_site)
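A usage sketch for the new command (the site name is an example; the actual configuration comes from Settings as described in the docstring):

```shell
# Run the Site Sync server in the background for the configured site
openpype_console syncserver --active_site studio_sftp
```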

View file

@@ -13,7 +13,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook):
# Should be the last hook because it must change launch arguments to a string
order = 1000
app_groups = ["nuke", "nukex", "hiero", "nukestudio"]
app_groups = ["nuke", "nukex", "hiero", "nukestudio", "photoshop"]
platforms = ["windows"]
def execute(self):

View file

@@ -95,6 +95,30 @@ def get_local_collection_with_name(name):
return None
def deselect_all():
"""Deselect all objects in the scene.
Blender raises a context error when trying to deselect an object that
is not in Object Mode.
"""
modes = []
active = bpy.context.view_layer.objects.active
for obj in bpy.data.objects:
if obj.mode != 'OBJECT':
modes.append((obj, obj.mode))
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
for p in modes:
bpy.context.view_layer.objects.active = p[0]
bpy.ops.object.mode_set(mode=p[1])
bpy.context.view_layer.objects.active = active
class Creator(PypeCreatorMixin, blender.Creator):
pass
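A minimal sketch of why the helper exists (assumes a running Blender session; the direct operator call is shown only to illustrate the failure mode it avoids):

```python
import bpy
from openpype.hosts.blender.api import plugin

# Direct call raises a context error if any object is left in, e.g., Edit Mode:
# bpy.ops.object.select_all(action='DESELECT')

# The helper switches offending objects to Object Mode first, deselects
# everything, then restores each object's previous mode:
plugin.deselect_all()
```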

View file

@@ -3,11 +3,12 @@
import bpy
from avalon import api
from avalon.blender import lib
import openpype.hosts.blender.api.plugin
from avalon.blender import lib, ops
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype.hosts.blender.api import plugin
class CreateCamera(openpype.hosts.blender.api.plugin.Creator):
class CreateCamera(plugin.Creator):
"""Polygonal static geometry"""
name = "cameraMain"
@@ -16,17 +17,46 @@ class CreateCamera(openpype.hosts.blender.api.plugin.Creator):
icon = "video-camera"
def process(self):
""" Run the creator on Blender main thread"""
mti = ops.MainThreadItem(self._process)
ops.execute_in_main_thread(mti)
def _process(self):
# Get the instance container, or create it if it does not exist
instances = bpy.data.collections.get(AVALON_INSTANCES)
if not instances:
instances = bpy.data.collections.new(name=AVALON_INSTANCES)
bpy.context.scene.collection.children.link(instances)
# Create instance object
asset = self.data["asset"]
subset = self.data["subset"]
name = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
collection = bpy.data.collections.new(name=name)
bpy.context.scene.collection.children.link(collection)
name = plugin.asset_name(asset, subset)
camera = bpy.data.cameras.new(subset)
camera_obj = bpy.data.objects.new(subset, camera)
instances.objects.link(camera_obj)
asset_group = bpy.data.objects.new(name=name, object_data=None)
asset_group.empty_display_type = 'SINGLE_ARROW'
instances.objects.link(asset_group)
self.data['task'] = api.Session.get('AVALON_TASK')
lib.imprint(collection, self.data)
print(f"self.data: {self.data}")
lib.imprint(asset_group, self.data)
if (self.options or {}).get("useSelection"):
for obj in lib.get_selection():
collection.objects.link(obj)
bpy.context.view_layer.objects.active = asset_group
selected = lib.get_selection()
for obj in selected:
obj.select_set(True)
selected.append(asset_group)
bpy.ops.object.parent_set(keep_transform=True)
else:
plugin.deselect_all()
camera_obj.select_set(True)
asset_group.select_set(True)
bpy.context.view_layer.objects.active = asset_group
bpy.ops.object.parent_set(keep_transform=True)
return collection
return asset_group

View file

@@ -47,7 +47,7 @@ class CacheModelLoader(plugin.AssetLoader):
bpy.data.objects.remove(empty)
def _process(self, libpath, asset_group, group_name):
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
collection = bpy.context.view_layer.active_layer_collection.collection
@@ -109,7 +109,7 @@ class CacheModelLoader(plugin.AssetLoader):
avalon_info = obj[AVALON_PROPERTY]
avalon_info.update({"container_name": group_name})
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
return objects

View file

@@ -0,0 +1,217 @@
"""Load audio in Blender."""
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
class AudioLoader(plugin.AssetLoader):
"""Load audio in Blender."""
families = ["audio"]
representations = ["wav"]
label = "Load Audio"
icon = "volume-up"
color = "orange"
def process_asset(
self, context: dict, name: str, namespace: Optional[str] = None,
options: Optional[Dict] = None
) -> Optional[List]:
"""
Arguments:
name: Use pre-defined name
namespace: Use pre-defined namespace
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
asset_name = plugin.asset_name(asset, subset)
unique_number = plugin.get_unique_number(asset, subset)
group_name = plugin.asset_name(asset, subset, unique_number)
namespace = namespace or f"{asset}_{unique_number}"
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
if not avalon_container:
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
bpy.context.scene.collection.children.link(avalon_container)
asset_group = bpy.data.objects.new(group_name, object_data=None)
avalon_container.objects.link(asset_group)
# Blender needs the Sequence Editor in the current window, to be able
# to load the audio. We take one of the areas in the window, save its
# type, and switch to the Sequence Editor. After loading the audio,
# we switch back to the previous area.
window_manager = bpy.context.window_manager
old_type = window_manager.windows[-1].screen.areas[0].type
window_manager.windows[-1].screen.areas[0].type = "SEQUENCE_EDITOR"
# We override the context to load the audio in the sequence editor.
oc = bpy.context.copy()
oc["area"] = window_manager.windows[-1].screen.areas[0]
bpy.ops.sequencer.sound_strip_add(oc, filepath=libpath, frame_start=1)
window_manager.windows[-1].screen.areas[0].type = old_type
p = Path(libpath)
audio = p.name
asset_group[AVALON_PROPERTY] = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace or '',
"loader": str(self.__class__.__name__),
"representation": str(context["representation"]["_id"]),
"libpath": libpath,
"asset_name": asset_name,
"parent": str(context["representation"]["parent"]),
"family": context["representation"]["context"]["family"],
"objectName": group_name,
"audio": audio
}
objects = []
self[:] = objects
return [objects]
def exec_update(self, container: Dict, representation: Dict):
"""Update an audio strip in the sequence editor.
Arguments:
container (openpype:container-1.0): Container to update,
from `host.ls()`.
representation (openpype:representation-1.0): Representation to
update, from `host.ls()`.
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
libpath = Path(api.get_representation_path(representation))
self.log.info(
"Container: %s\nRepresentation: %s",
pformat(container, indent=2),
pformat(representation, indent=2),
)
assert asset_group, (
f"The asset is not loaded: {container['objectName']}"
)
assert libpath, (
"No existing library file found for {container['objectName']}"
)
assert libpath.is_file(), (
f"The file doesn't exist: {libpath}"
)
metadata = asset_group.get(AVALON_PROPERTY)
group_libpath = metadata["libpath"]
normalized_group_libpath = (
str(Path(bpy.path.abspath(group_libpath)).resolve())
)
normalized_libpath = (
str(Path(bpy.path.abspath(str(libpath))).resolve())
)
self.log.debug(
"normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
normalized_group_libpath,
normalized_libpath,
)
if normalized_group_libpath == normalized_libpath:
self.log.info("Library already loaded, not updating...")
return
old_audio = container["audio"]
p = Path(libpath)
new_audio = p.name
# Blender needs the Sequence Editor in the current window, to be able
# to update the audio. We take one of the areas in the window, save its
# type, and switch to the Sequence Editor. After updating the audio,
# we switch back to the previous area.
window_manager = bpy.context.window_manager
old_type = window_manager.windows[-1].screen.areas[0].type
window_manager.windows[-1].screen.areas[0].type = "SEQUENCE_EDITOR"
# We override the context to load the audio in the sequence editor.
oc = bpy.context.copy()
oc["area"] = window_manager.windows[-1].screen.areas[0]
# We deselect all sequencer strips, and then select the one we
# need to remove.
bpy.ops.sequencer.select_all(oc, action='DESELECT')
scene = bpy.context.scene
scene.sequence_editor.sequences_all[old_audio].select = True
bpy.ops.sequencer.delete(oc)
bpy.data.sounds.remove(bpy.data.sounds[old_audio])
bpy.ops.sequencer.sound_strip_add(
oc, filepath=str(libpath), frame_start=1)
window_manager.windows[-1].screen.areas[0].type = old_type
metadata["libpath"] = str(libpath)
metadata["representation"] = str(representation["_id"])
metadata["parent"] = str(representation["parent"])
metadata["audio"] = new_audio
def exec_remove(self, container: Dict) -> bool:
"""Remove an audio strip from the sequence editor and the container.
Arguments:
container (openpype:container-1.0): Container to remove,
from `host.ls()`.
Returns:
bool: Whether the container was deleted.
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
if not asset_group:
return False
audio = container["audio"]
# Blender needs the Sequence Editor in the current window, to be able
# to remove the audio. We take one of the areas in the window, save its
# type, and switch to the Sequence Editor. After removing the audio,
# we switch back to the previous area.
window_manager = bpy.context.window_manager
old_type = window_manager.windows[-1].screen.areas[0].type
window_manager.windows[-1].screen.areas[0].type = "SEQUENCE_EDITOR"
# We override the context to load the audio in the sequence editor.
oc = bpy.context.copy()
oc["area"] = window_manager.windows[-1].screen.areas[0]
# We deselect all sequencer strips, and then select the one we
# need to remove.
bpy.ops.sequencer.select_all(oc, action='DESELECT')
bpy.context.scene.sequence_editor.sequences_all[audio].select = True
bpy.ops.sequencer.delete(oc)
window_manager.windows[-1].screen.areas[0].type = old_type
bpy.data.sounds.remove(bpy.data.sounds[audio])
bpy.data.objects.remove(asset_group)
return True

View file

@@ -1,247 +0,0 @@
"""Load a camera asset in Blender."""
import logging
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional
from avalon import api, blender
import bpy
import openpype.hosts.blender.api.plugin
logger = logging.getLogger("openpype").getChild("blender").getChild("load_camera")
class BlendCameraLoader(openpype.hosts.blender.api.plugin.AssetLoader):
"""Load a camera from a .blend file.
Warning:
Loading the same asset more then once is not properly supported at the
moment.
"""
families = ["camera"]
representations = ["blend"]
label = "Link Camera"
icon = "code-fork"
color = "orange"
def _remove(self, objects, lib_container):
for obj in list(objects):
bpy.data.cameras.remove(obj.data)
bpy.data.collections.remove(bpy.data.collections[lib_container])
def _process(self, libpath, lib_container, container_name, actions):
relative = bpy.context.preferences.filepaths.use_relative_paths
with bpy.data.libraries.load(
libpath, link=True, relative=relative
) as (_, data_to):
data_to.collections = [lib_container]
scene = bpy.context.scene
scene.collection.children.link(bpy.data.collections[lib_container])
camera_container = scene.collection.children[lib_container].make_local()
objects_list = []
for obj in camera_container.objects:
local_obj = obj.make_local()
local_obj.data.make_local()
if not local_obj.get(blender.pipeline.AVALON_PROPERTY):
local_obj[blender.pipeline.AVALON_PROPERTY] = dict()
avalon_info = local_obj[blender.pipeline.AVALON_PROPERTY]
avalon_info.update({"container_name": container_name})
if actions[0] is not None:
if local_obj.animation_data is None:
local_obj.animation_data_create()
local_obj.animation_data.action = actions[0]
if actions[1] is not None:
if local_obj.data.animation_data is None:
local_obj.data.animation_data_create()
local_obj.data.animation_data.action = actions[1]
objects_list.append(local_obj)
camera_container.pop(blender.pipeline.AVALON_PROPERTY)
bpy.ops.object.select_all(action='DESELECT')
return objects_list
def process_asset(
self, context: dict, name: str, namespace: Optional[str] = None,
options: Optional[Dict] = None
) -> Optional[List]:
"""
Arguments:
name: Use pre-defined name
namespace: Use pre-defined namespace
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
lib_container = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
container_name = openpype.hosts.blender.api.plugin.asset_name(
asset, subset, namespace
)
container = bpy.data.collections.new(lib_container)
container.name = container_name
blender.pipeline.containerise_existing(
container,
name,
namespace,
context,
self.__class__.__name__,
)
container_metadata = container.get(
blender.pipeline.AVALON_PROPERTY)
container_metadata["libpath"] = libpath
container_metadata["lib_container"] = lib_container
objects_list = self._process(
libpath, lib_container, container_name, (None, None))
# Save the list of objects in the metadata container
container_metadata["objects"] = objects_list
nodes = list(container.objects)
nodes.append(container)
self[:] = nodes
return nodes
def update(self, container: Dict, representation: Dict):
"""Update the loaded asset.
This will remove all objects of the current collection, load the new
ones and add them to the collection.
If the objects of the collection are used in another collection they
will not be removed, only unlinked. Normally this should not be the
case though.
Warning:
No nested collections are supported at the moment!
"""
collection = bpy.data.collections.get(
container["objectName"]
)
libpath = Path(api.get_representation_path(representation))
extension = libpath.suffix.lower()
logger.info(
"Container: %s\nRepresentation: %s",
pformat(container, indent=2),
pformat(representation, indent=2),
)
assert collection, (
f"The asset is not loaded: {container['objectName']}"
)
assert not (collection.children), (
"Nested collections are not supported."
)
assert libpath, (
"No existing library file found for {container['objectName']}"
)
assert libpath.is_file(), (
f"The file doesn't exist: {libpath}"
)
assert extension in openpype.hosts.blender.api.plugin.VALID_EXTENSIONS, (
f"Unsupported file: {libpath}"
)
collection_metadata = collection.get(
blender.pipeline.AVALON_PROPERTY)
collection_libpath = collection_metadata["libpath"]
objects = collection_metadata["objects"]
lib_container = collection_metadata["lib_container"]
normalized_collection_libpath = (
str(Path(bpy.path.abspath(collection_libpath)).resolve())
)
normalized_libpath = (
str(Path(bpy.path.abspath(str(libpath))).resolve())
)
logger.debug(
"normalized_collection_libpath:\n %s\nnormalized_libpath:\n %s",
normalized_collection_libpath,
normalized_libpath,
)
if normalized_collection_libpath == normalized_libpath:
logger.info("Library already loaded, not updating...")
return
camera = objects[0]
camera_action = None
camera_data_action = None
if camera.animation_data and camera.animation_data.action:
camera_action = camera.animation_data.action
if camera.data.animation_data and camera.data.animation_data.action:
camera_data_action = camera.data.animation_data.action
actions = (camera_action, camera_data_action)
self._remove(objects, lib_container)
objects_list = self._process(
str(libpath), lib_container, collection.name, actions)
# Save the list of objects in the metadata container
collection_metadata["objects"] = objects_list
collection_metadata["libpath"] = str(libpath)
collection_metadata["representation"] = str(representation["_id"])
bpy.ops.object.select_all(action='DESELECT')
def remove(self, container: Dict) -> bool:
"""Remove an existing container from a Blender scene.
Arguments:
container (openpype:container-1.0): Container to remove,
from `host.ls()`.
Returns:
bool: Whether the container was deleted.
Warning:
No nested collections are supported at the moment!
"""
collection = bpy.data.collections.get(
container["objectName"]
)
if not collection:
return False
assert not (collection.children), (
"Nested collections are not supported."
)
collection_metadata = collection.get(
blender.pipeline.AVALON_PROPERTY)
objects = collection_metadata["objects"]
lib_container = collection_metadata["lib_container"]
self._remove(objects, lib_container)
bpy.data.collections.remove(collection)
return True

View file

@@ -0,0 +1,252 @@
"""Load a camera asset in Blender."""
import logging
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
logger = logging.getLogger("openpype").getChild(
"blender").getChild("load_camera")
class BlendCameraLoader(plugin.AssetLoader):
"""Load a camera from a .blend file.
Warning:
Loading the same asset more than once is not properly supported at the
moment.
"""
families = ["camera"]
representations = ["blend"]
label = "Link Camera (Blend)"
icon = "code-fork"
color = "orange"
def _remove(self, asset_group):
objects = list(asset_group.children)
for obj in objects:
if obj.type == 'CAMERA':
bpy.data.cameras.remove(obj.data)
def _process(self, libpath, asset_group, group_name):
with bpy.data.libraries.load(
libpath, link=True, relative=False
) as (data_from, data_to):
data_to.objects = data_from.objects
parent = bpy.context.scene.collection
empties = [obj for obj in data_to.objects if obj.type == 'EMPTY']
container = None
for empty in empties:
if empty.get(AVALON_PROPERTY):
container = empty
break
assert container, "No asset group found"
# Children must be linked before parents,
# otherwise the hierarchy will break
objects = []
nodes = list(container.children)
for obj in nodes:
obj.parent = asset_group
for obj in nodes:
objects.append(obj)
nodes.extend(list(obj.children))
objects.reverse()
for obj in objects:
parent.objects.link(obj)
for obj in objects:
local_obj = plugin.prepare_data(obj, group_name)
if local_obj.type != 'EMPTY':
plugin.prepare_data(local_obj.data, group_name)
if not local_obj.get(AVALON_PROPERTY):
local_obj[AVALON_PROPERTY] = dict()
avalon_info = local_obj[AVALON_PROPERTY]
avalon_info.update({"container_name": group_name})
objects.reverse()
bpy.data.orphans_purge(do_local_ids=False)
plugin.deselect_all()
return objects
def process_asset(
self, context: dict, name: str, namespace: Optional[str] = None,
options: Optional[Dict] = None
) -> Optional[List]:
"""
Arguments:
name: Use pre-defined name
namespace: Use pre-defined namespace
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
asset_name = plugin.asset_name(asset, subset)
unique_number = plugin.get_unique_number(asset, subset)
group_name = plugin.asset_name(asset, subset, unique_number)
namespace = namespace or f"{asset}_{unique_number}"
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
if not avalon_container:
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
bpy.context.scene.collection.children.link(avalon_container)
asset_group = bpy.data.objects.new(group_name, object_data=None)
asset_group.empty_display_type = 'SINGLE_ARROW'
avalon_container.objects.link(asset_group)
objects = self._process(libpath, asset_group, group_name)
bpy.context.scene.collection.objects.link(asset_group)
asset_group[AVALON_PROPERTY] = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace or '',
"loader": str(self.__class__.__name__),
"representation": str(context["representation"]["_id"]),
"libpath": libpath,
"asset_name": asset_name,
"parent": str(context["representation"]["parent"]),
"family": context["representation"]["context"]["family"],
"objectName": group_name
}
self[:] = objects
return objects
def exec_update(self, container: Dict, representation: Dict):
"""Update the loaded asset.
This will remove all children of the asset group, load the new ones
and add them as children of the group.
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
libpath = Path(api.get_representation_path(representation))
extension = libpath.suffix.lower()
self.log.info(
"Container: %s\nRepresentation: %s",
pformat(container, indent=2),
pformat(representation, indent=2),
)
assert asset_group, (
f"The asset is not loaded: {container['objectName']}"
)
assert libpath, (
"No existing library file found for {container['objectName']}"
)
assert libpath.is_file(), (
f"The file doesn't exist: {libpath}"
)
assert extension in plugin.VALID_EXTENSIONS, (
f"Unsupported file: {libpath}"
)
metadata = asset_group.get(AVALON_PROPERTY)
group_libpath = metadata["libpath"]
normalized_group_libpath = (
str(Path(bpy.path.abspath(group_libpath)).resolve())
)
normalized_libpath = (
str(Path(bpy.path.abspath(str(libpath))).resolve())
)
self.log.debug(
"normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
normalized_group_libpath,
normalized_libpath,
)
if normalized_group_libpath == normalized_libpath:
self.log.info("Library already loaded, not updating...")
return
# Check how many assets use the same library
count = 0
for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects:
if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath:
count += 1
mat = asset_group.matrix_basis.copy()
self._remove(asset_group)
# If it is the last object to use that library, remove it
if count == 1:
library = bpy.data.libraries.get(bpy.path.basename(group_libpath))
if library:
bpy.data.libraries.remove(library)
self._process(str(libpath), asset_group, object_name)
asset_group.matrix_basis = mat
metadata["libpath"] = str(libpath)
metadata["representation"] = str(representation["_id"])
metadata["parent"] = str(representation["parent"])
def exec_remove(self, container: Dict) -> bool:
"""Remove an existing container from a Blender scene.
Arguments:
container (openpype:container-1.0): Container to remove,
from `host.ls()`.
Returns:
bool: Whether the container was deleted.
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
if not asset_group:
return False
libpath = asset_group.get(AVALON_PROPERTY).get('libpath')
# Check how many assets use the same library
count = 0
for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects:
if obj.get(AVALON_PROPERTY).get('libpath') == libpath:
count += 1
self._remove(asset_group)
bpy.data.objects.remove(asset_group)
# If it is the last object to use that library, remove it
if count == 1:
library = bpy.data.libraries.get(bpy.path.basename(libpath))
if library:
bpy.data.libraries.remove(library)
return True

View file

@@ -0,0 +1,218 @@
"""Load an asset in Blender from an Alembic file."""
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional
import bpy
from avalon import api
from avalon.blender import lib
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin
class FbxCameraLoader(plugin.AssetLoader):
"""Load a camera from FBX.
Stores the imported asset in an empty named after the asset.
"""
families = ["camera"]
representations = ["fbx"]
label = "Load Camera (FBX)"
icon = "code-fork"
color = "orange"
def _remove(self, asset_group):
objects = list(asset_group.children)
for obj in objects:
if obj.type == 'CAMERA':
bpy.data.cameras.remove(obj.data)
elif obj.type == 'EMPTY':
objects.extend(obj.children)
bpy.data.objects.remove(obj)
def _process(self, libpath, asset_group, group_name):
plugin.deselect_all()
collection = bpy.context.view_layer.active_layer_collection.collection
bpy.ops.import_scene.fbx(filepath=libpath)
parent = bpy.context.scene.collection
objects = lib.get_selection()
for obj in objects:
obj.parent = asset_group
for obj in objects:
parent.objects.link(obj)
collection.objects.unlink(obj)
for obj in objects:
name = obj.name
obj.name = f"{group_name}:{name}"
if obj.type != 'EMPTY':
name_data = obj.data.name
obj.data.name = f"{group_name}:{name_data}"
if not obj.get(AVALON_PROPERTY):
obj[AVALON_PROPERTY] = dict()
avalon_info = obj[AVALON_PROPERTY]
avalon_info.update({"container_name": group_name})
plugin.deselect_all()
return objects
def process_asset(
self, context: dict, name: str, namespace: Optional[str] = None,
options: Optional[Dict] = None
) -> Optional[List]:
"""
Arguments:
name: Use pre-defined name
namespace: Use pre-defined namespace
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
asset_name = plugin.asset_name(asset, subset)
unique_number = plugin.get_unique_number(asset, subset)
group_name = plugin.asset_name(asset, subset, unique_number)
namespace = namespace or f"{asset}_{unique_number}"
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
if not avalon_container:
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
bpy.context.scene.collection.children.link(avalon_container)
asset_group = bpy.data.objects.new(group_name, object_data=None)
avalon_container.objects.link(asset_group)
self._process(libpath, asset_group, group_name)
objects = []
nodes = list(asset_group.children)
for obj in nodes:
objects.append(obj)
nodes.extend(list(obj.children))
bpy.context.scene.collection.objects.link(asset_group)
asset_group[AVALON_PROPERTY] = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace or '',
"loader": str(self.__class__.__name__),
"representation": str(context["representation"]["_id"]),
"libpath": libpath,
"asset_name": asset_name,
"parent": str(context["representation"]["parent"]),
"family": context["representation"]["context"]["family"],
"objectName": group_name
}
self[:] = objects
return objects
def exec_update(self, container: Dict, representation: Dict):
"""Update the loaded asset.
This will remove all children of the asset group, load the new
ones and add them as children of the group.
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
libpath = Path(api.get_representation_path(representation))
extension = libpath.suffix.lower()
self.log.info(
"Container: %s\nRepresentation: %s",
pformat(container, indent=2),
pformat(representation, indent=2),
)
assert asset_group, (
f"The asset is not loaded: {container['objectName']}"
)
assert libpath, (
"No existing library file found for {container['objectName']}"
)
assert libpath.is_file(), (
f"The file doesn't exist: {libpath}"
)
assert extension in plugin.VALID_EXTENSIONS, (
f"Unsupported file: {libpath}"
)
metadata = asset_group.get(AVALON_PROPERTY)
group_libpath = metadata["libpath"]
normalized_group_libpath = (
str(Path(bpy.path.abspath(group_libpath)).resolve())
)
normalized_libpath = (
str(Path(bpy.path.abspath(str(libpath))).resolve())
)
self.log.debug(
"normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
normalized_group_libpath,
normalized_libpath,
)
if normalized_group_libpath == normalized_libpath:
self.log.info("Library already loaded, not updating...")
return
mat = asset_group.matrix_basis.copy()
self._remove(asset_group)
self._process(str(libpath), asset_group, object_name)
asset_group.matrix_basis = mat
metadata["libpath"] = str(libpath)
metadata["representation"] = str(representation["_id"])
def exec_remove(self, container: Dict) -> bool:
"""Remove an existing container from a Blender scene.
Arguments:
container (openpype:container-1.0): Container to remove,
from `host.ls()`.
Returns:
bool: Whether the container was deleted.
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
if not asset_group:
return False
self._remove(asset_group)
bpy.data.objects.remove(asset_group)
return True

View file

@@ -46,7 +46,7 @@ class FbxModelLoader(plugin.AssetLoader):
bpy.data.objects.remove(obj)
def _process(self, libpath, asset_group, group_name, action):
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
collection = bpy.context.view_layer.active_layer_collection.collection
@@ -112,7 +112,7 @@ class FbxModelLoader(plugin.AssetLoader):
avalon_info = obj[AVALON_PROPERTY]
avalon_info.update({"container_name": group_name})
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
return objects

View file

@@ -150,7 +150,7 @@ class BlendLayoutLoader(plugin.AssetLoader):
bpy.data.orphans_purge(do_local_ids=False)
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
return objects

View file

@@ -12,6 +12,7 @@ from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype import lib
from openpype.hosts.blender.api import plugin
@@ -59,7 +60,7 @@ class JsonLayoutLoader(plugin.AssetLoader):
return None
def _process(self, libpath, asset, asset_group, actions):
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
with open(libpath, "r") as fp:
data = json.load(fp)
@@ -103,6 +104,21 @@
options=options
)
# Create the camera asset and the camera instance
creator_plugin = lib.get_creator_by_name("CreateCamera")
if not creator_plugin:
raise ValueError("Creator plugin \"CreateCamera\" was "
"not found.")
api.create(
creator_plugin,
name="camera",
# name=f"{unique_number}_{subset}_animation",
asset=asset,
options={"useSelection": False}
# data={"dependencies": str(context["representation"]["_id"])}
)
def process_asset(self,
context: dict,
name: str,

View file

@@ -93,7 +93,7 @@ class BlendModelLoader(plugin.AssetLoader):
bpy.data.orphans_purge(do_local_ids=False)
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
return objects
@@ -126,7 +126,7 @@ class BlendModelLoader(plugin.AssetLoader):
asset_group.empty_display_type = 'SINGLE_ARROW'
avalon_container.objects.link(asset_group)
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
if options is not None:
parent = options.get('parent')
@@ -158,7 +158,7 @@ class BlendModelLoader(plugin.AssetLoader):
bpy.ops.object.parent_set(keep_transform=True)
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
objects = self._process(libpath, asset_group, group_name)

View file

@@ -156,7 +156,7 @@ class BlendRigLoader(plugin.AssetLoader):
while bpy.data.orphans_purge(do_local_ids=False):
pass
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
return objects
@@ -191,7 +191,7 @@ class BlendRigLoader(plugin.AssetLoader):
action = None
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
create_animation = False
@@ -227,7 +227,7 @@ class BlendRigLoader(plugin.AssetLoader):
bpy.ops.object.parent_set(keep_transform=True)
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
objects = self._process(libpath, asset_group, group_name, action)
@@ -250,7 +250,7 @@ class BlendRigLoader(plugin.AssetLoader):
data={"dependencies": str(context["representation"]["_id"])}
)
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
bpy.context.scene.collection.objects.link(asset_group)

View file

@@ -28,7 +28,7 @@ class ExtractABC(api.Extractor):
# Perform extraction
self.log.info("Performing extraction..")
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
selected = []
asset_group = None
@@ -50,7 +50,7 @@ class ExtractABC(api.Extractor):
flatten=False
)
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
if "representations" not in instance.data:
instance.data["representations"] = []

View file

@@ -37,7 +37,8 @@ class ExtractBlend(openpype.api.Extractor):
if tree.type == 'SHADER':
for node in tree.nodes:
if node.bl_idname == 'ShaderNodeTexImage':
node.image.pack()
if node.image:
node.image.pack()
bpy.data.libraries.write(filepath, data_blocks)

View file

@@ -0,0 +1,73 @@
import os
from openpype import api
from openpype.hosts.blender.api import plugin
import bpy
class ExtractCamera(api.Extractor):
"""Extract as the camera as FBX."""
label = "Extract Camera"
hosts = ["blender"]
families = ["camera"]
optional = True
def process(self, instance):
# Define extract output file path
stagingdir = self.staging_dir(instance)
filename = f"{instance.name}.fbx"
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
plugin.deselect_all()
selected = []
camera = None
for obj in instance:
if obj.type == "CAMERA":
obj.select_set(True)
selected.append(obj)
camera = obj
break
assert camera, "No camera found"
context = plugin.create_blender_context(
active=camera, selected=selected)
scale_length = bpy.context.scene.unit_settings.scale_length
bpy.context.scene.unit_settings.scale_length = 0.01
# We export the fbx
bpy.ops.export_scene.fbx(
context,
filepath=filepath,
use_active_collection=False,
use_selection=True,
object_types={'CAMERA'},
bake_anim_simplify_factor=0.0
)
bpy.context.scene.unit_settings.scale_length = scale_length
plugin.deselect_all()
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'fbx',
'ext': 'fbx',
'files': filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s",
instance.name, representation)

View file

@@ -24,7 +24,7 @@ class ExtractFBX(api.Extractor):
# Perform extraction
self.log.info("Performing extraction..")
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
selected = []
asset_group = None
@@ -60,7 +60,7 @@ class ExtractFBX(api.Extractor):
add_leaf_bones=False
)
bpy.ops.object.select_all(action='DESELECT')
plugin.deselect_all()
for mat in new_materials:
bpy.data.materials.remove(mat)

View file

@@ -0,0 +1,48 @@
from typing import List
import mathutils
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin):
"""Camera must have a keyframe at frame 0.
Unreal shifts the first keyframe to frame 0. Forcing the camera to have
a keyframe at frame 0 will ensure that the animation will be the same
in Unreal and Blender.
"""
order = openpype.api.ValidateContentsOrder
hosts = ["blender"]
families = ["camera"]
category = "geometry"
version = (0, 1, 0)
label = "Zero Keyframe"
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
_identity = mathutils.Matrix()
@classmethod
def get_invalid(cls, instance) -> List:
invalid = []
for obj in instance:
if obj.type == "CAMERA":
if obj.animation_data and obj.animation_data.action:
action = obj.animation_data.action
frames_set = set()
for fcu in action.fcurves:
for kp in fcu.keyframe_points:
frames_set.add(kp.co[0])
frames = list(frames_set)
frames.sort()
if frames and frames[0] != 0.0:
invalid.append(obj)
return invalid
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError(
f"Camera in instance has no keyframe at frame 0: {invalid}")

View file

@@ -0,0 +1,105 @@
from .api.utils import (
setup
)
from .api.pipeline import (
install,
uninstall,
ls,
containerise,
update_container,
maintained_selection,
remove_instance,
list_instances,
imprint
)
from .api.lib import (
FlameAppFramework,
maintain_current_timeline,
get_project_manager,
get_current_project,
get_current_timeline,
create_bin,
)
from .api.menu import (
FlameMenuProjectConnect,
FlameMenuTimeline
)
from .api.workio import (
open_file,
save_file,
current_file,
has_unsaved_changes,
file_extensions,
work_root
)
import os
HOST_DIR = os.path.dirname(
os.path.abspath(__file__)
)
API_DIR = os.path.join(HOST_DIR, "api")
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
app_framework = None
apps = []
__all__ = [
"HOST_DIR",
"API_DIR",
"PLUGINS_DIR",
"PUBLISH_PATH",
"LOAD_PATH",
"CREATE_PATH",
"INVENTORY_PATH",
"INVENTORY_PATH",
"app_framework",
"apps",
# pipeline
"install",
"uninstall",
"ls",
"containerise",
"update_container",
"reload_pipeline",
"maintained_selection",
"remove_instance",
"list_instances",
"imprint",
# utils
"setup",
# lib
"FlameAppFramework",
"maintain_current_timeline",
"get_project_manager",
"get_current_project",
"get_current_timeline",
"create_bin",
# menu
"FlameMenuProjectConnect",
"FlameMenuTimeline",
# plugin
# workio
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root"
]

View file

@@ -0,0 +1,3 @@
"""
OpenPype Autodesk Flame api
"""

View file

@@ -0,0 +1,276 @@
import sys
import os
import pickle
import contextlib
from pprint import pformat
from openpype.api import Logger
log = Logger().get_logger(__name__)
@contextlib.contextmanager
def io_preferences_file(klass, filepath, write=False):
try:
flag = "w" if write else "r"
yield open(filepath, flag)
except IOError as _error:
klass.log.info("Unable to work with preferences `{}`: {}".format(
filepath, _error))
class FlameAppFramework(object):
# flameAppFramework class takes care of preferences
class prefs_dict(dict):
def __init__(self, master, name, **kwargs):
self.name = name
self.master = master
if not self.master.get(self.name):
self.master[self.name] = {}
self.master[self.name].__init__()
def __getitem__(self, k):
return self.master[self.name].__getitem__(k)
def __setitem__(self, k, v):
return self.master[self.name].__setitem__(k, v)
def __delitem__(self, k):
return self.master[self.name].__delitem__(k)
def get(self, k, default=None):
return self.master[self.name].get(k, default)
def setdefault(self, k, default=None):
return self.master[self.name].setdefault(k, default)
def pop(self, k, v=object()):
if v is object():
return self.master[self.name].pop(k)
return self.master[self.name].pop(k, v)
def update(self, mapping=(), **kwargs):
self.master[self.name].update(mapping, **kwargs)
def __contains__(self, k):
return self.master[self.name].__contains__(k)
def copy(self):  # don't delegate with super - dict.copy() -> dict :(
return type(self)(self)
def keys(self):
return self.master[self.name].keys()
@classmethod
def fromkeys(cls, keys, v=None):
return cls.master[cls.name].fromkeys(keys, v)
def __repr__(self):
return "{0}({1})".format(
type(self).__name__, self.master[self.name].__repr__())
def master_keys(self):
return self.master.keys()
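# Usage sketch (illustrative): prefs_dict proxies a named sub-dict of the
# master preferences, so writes land in the shared structure:
#     master = {}
#     ui_prefs = FlameAppFramework.prefs_dict(master, "ui")
#     ui_prefs["theme"] = "dark"  # -> master["ui"]["theme"] == "dark"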
def __init__(self):
self.name = self.__class__.__name__
self.bundle_name = "OpenPypeFlame"
# self.prefs scope is limited to flame project and user
self.prefs = {}
self.prefs_user = {}
self.prefs_global = {}
self.log = log
try:
import flame
self.flame = flame
self.flame_project_name = self.flame.project.current_project.name
self.flame_user_name = flame.users.current_user.name
except Exception:
self.flame = None
self.flame_project_name = None
self.flame_user_name = None
import socket
self.hostname = socket.gethostname()
if sys.platform == "darwin":
self.prefs_folder = os.path.join(
os.path.expanduser("~"),
"Library",
"Caches",
"OpenPype",
self.bundle_name
)
elif sys.platform.startswith("linux"):
self.prefs_folder = os.path.join(
os.path.expanduser("~"),
".OpenPype",
self.bundle_name)
self.prefs_folder = os.path.join(
self.prefs_folder,
self.hostname,
)
self.log.info("[{}] waking up".format(self.__class__.__name__))
self.load_prefs()
# menu auto-refresh defaults
if not self.prefs_global.get("menu_auto_refresh"):
self.prefs_global["menu_auto_refresh"] = {
"media_panel": True,
"batch": True,
"main_menu": True,
"timeline_menu": True
}
self.apps = []
def get_pref_file_paths(self):
prefix = self.prefs_folder + os.path.sep + self.bundle_name
prefs_file_path = "_".join([
prefix, self.flame_user_name,
self.flame_project_name]) + ".prefs"
prefs_user_file_path = "_".join([
prefix, self.flame_user_name]) + ".prefs"
prefs_global_file_path = prefix + ".prefs"
return (prefs_file_path, prefs_user_file_path, prefs_global_file_path)
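# Illustrative result for user "jdoe" on project "demo":
#   <prefs_folder>/OpenPypeFlame_jdoe_demo.prefs  (project scope)
#   <prefs_folder>/OpenPypeFlame_jdoe.prefs       (user scope)
#   <prefs_folder>/OpenPypeFlame.prefs            (global scope)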
def load_prefs(self):
(proj_pref_path, user_pref_path,
glob_pref_path) = self.get_pref_file_paths()
with io_preferences_file(self, proj_pref_path) as prefs_file:
self.prefs = pickle.load(prefs_file)
self.log.info(
"Project - preferences contents:\n{}".format(
pformat(self.prefs)
))
with io_preferences_file(self, user_pref_path) as prefs_file:
self.prefs_user = pickle.load(prefs_file)
self.log.info(
"User - preferences contents:\n{}".format(
pformat(self.prefs_user)
))
with io_preferences_file(self, glob_pref_path) as prefs_file:
self.prefs_global = pickle.load(prefs_file)
self.log.info(
"Global - preferences contents:\n{}".format(
pformat(self.prefs_global)
))
return True
def save_prefs(self):
# make sure the preference folder is available
if not os.path.isdir(self.prefs_folder):
try:
os.makedirs(self.prefs_folder)
except Exception:
self.log.info("Unable to create folder {}".format(
self.prefs_folder))
return False
# get all pref file paths
(proj_pref_path, user_pref_path,
glob_pref_path) = self.get_pref_file_paths()
with io_preferences_file(self, proj_pref_path, True) as prefs_file:
pickle.dump(self.prefs, prefs_file)
self.log.info(
"Project - preferences contents:\n{}".format(
pformat(self.prefs)
))
with io_preferences_file(self, user_pref_path, True) as prefs_file:
pickle.dump(self.prefs_user, prefs_file)
self.log.info(
"User - preferences contents:\n{}".format(
pformat(self.prefs_user)
))
with io_preferences_file(self, glob_pref_path, True) as prefs_file:
pickle.dump(self.prefs_global, prefs_file)
self.log.info(
"Global - preferences contents:\n{}".format(
pformat(self.prefs_global)
))
return True
@contextlib.contextmanager
def maintain_current_timeline(to_timeline, from_timeline=None):
"""Maintain current timeline selection during context
Args:
to_timeline (resolve.Timeline): timeline to switch to
from_timeline (resolve.Timeline, optional): timeline to restore afterwards
Example:
>>> print(from_timeline.GetName())
timeline1
>>> print(to_timeline.GetName())
timeline2
>>> with maintain_current_timeline(to_timeline):
... print(get_current_timeline().GetName())
timeline2
>>> print(get_current_timeline().GetName())
timeline1
"""
# todo: this is still Resolve's implementation
project = get_current_project()
working_timeline = from_timeline or project.GetCurrentTimeline()
# switch to the input timeline
project.SetCurrentTimeline(to_timeline)
try:
# do the work
yield
finally:
# put the original working timeline to context
project.SetCurrentTimeline(working_timeline)
def get_project_manager():
# TODO: get_project_manager
return
def get_media_storage():
# TODO: get_media_storage
return
def get_current_project():
# TODO: get_current_project
return
def get_current_timeline(new=False):
# TODO: get_current_timeline
return
def create_bin(name, root=None):
# TODO: create_bin
return
def rescan_hooks():
import flame
try:
flame.execute_shortcut('Rescan Python Hooks')
except Exception:
pass

View file

@ -0,0 +1,208 @@
import os
from Qt import QtWidgets
from copy import deepcopy
from openpype.tools.utils.host_tools import HostToolsHelper
menu_group_name = 'OpenPype'
default_flame_export_presets = {
'Publish': {
'PresetVisibility': 2,
'PresetType': 0,
'PresetFile': 'OpenEXR/OpenEXR (16-bit fp PIZ).xml'
},
'Preview': {
'PresetVisibility': 3,
'PresetType': 2,
'PresetFile': 'Generate Preview.xml'
},
'Thumbnail': {
'PresetVisibility': 3,
'PresetType': 0,
'PresetFile': 'Generate Thumbnail.xml'
}
}
class _FlameMenuApp(object):
def __init__(self, framework):
self.name = self.__class__.__name__
self.framework = framework
self.log = framework.log
self.menu_group_name = menu_group_name
self.dynamic_menu_data = {}
# flame module is only available when a
# flame project is loaded and initialized
self.flame = None
try:
import flame
self.flame = flame
self.flame_project_name = flame.project.current_project.name
except ImportError:
self.flame = None
self.flame_project_name = None
self.prefs = self.framework.prefs_dict(self.framework.prefs, self.name)
self.prefs_user = self.framework.prefs_dict(
self.framework.prefs_user, self.name)
self.prefs_global = self.framework.prefs_dict(
self.framework.prefs_global, self.name)
self.mbox = QtWidgets.QMessageBox()
self.menu = {
"actions": [{
'name': os.getenv("AVALON_PROJECT", "project"),
'isEnabled': False
}],
"name": self.menu_group_name
}
self.tools_helper = HostToolsHelper()
def __getattr__(self, name):
def method(*args, **kwargs):
print('calling %s' % name)
return method
def rescan(self, *args, **kwargs):
if not self.flame:
try:
import flame
self.flame = flame
except ImportError:
self.flame = None
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')
class FlameMenuProjectConnect(_FlameMenuApp):
# flameMenuProjectconnect app takes care of the preferences dialog as well
def __init__(self, framework):
_FlameMenuApp.__init__(self, framework)
def __getattr__(self, name):
def method(*args, **kwargs):
project = self.dynamic_menu_data.get(name)
if project:
self.link_project(project)
return method
def build_menu(self):
if not self.flame:
return []
flame_project_name = self.flame_project_name
self.log.info("______ {} ______".format(flame_project_name))
menu = deepcopy(self.menu)
menu['actions'].append({
"name": "Workfiles ...",
"execute": lambda x: self.tools_helper.show_workfiles()
})
menu['actions'].append({
"name": "Create ...",
"execute": lambda x: self.tools_helper.show_creator()
})
menu['actions'].append({
"name": "Publish ...",
"execute": lambda x: self.tools_helper.show_publish()
})
menu['actions'].append({
"name": "Load ...",
"execute": lambda x: self.tools_helper.show_loader()
})
menu['actions'].append({
"name": "Manage ...",
"execute": lambda x: self.tools_helper.show_scene_inventory()
})
menu['actions'].append({
"name": "Library ...",
"execute": lambda x: self.tools_helper.show_library_loader()
})
return menu
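# The returned structure (illustrative) is what the Flame menu hooks
# consume, e.g.:
#   {"name": "OpenPype",
#    "actions": [{"name": "<AVALON_PROJECT>", "isEnabled": False},
#                {"name": "Workfiles ...", "execute": <callable>}]}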
def get_projects(self, *args, **kwargs):
pass
def refresh(self, *args, **kwargs):
self.rescan()
def rescan(self, *args, **kwargs):
if not self.flame:
try:
import flame
self.flame = flame
except ImportError:
self.flame = None
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')
class FlameMenuTimeline(_FlameMenuApp):
# flameMenuTimeline app takes care of the preferences dialog as well
def __init__(self, framework):
_FlameMenuApp.__init__(self, framework)
def __getattr__(self, name):
def method(*args, **kwargs):
project = self.dynamic_menu_data.get(name)
if project:
self.link_project(project)
return method
def build_menu(self):
if not self.flame:
return []
flame_project_name = self.flame_project_name
self.log.info("______ {} ______".format(flame_project_name))
menu = deepcopy(self.menu)
menu['actions'].append({
"name": "Create ...",
"execute": lambda x: self.tools_helper.show_creator()
})
menu['actions'].append({
"name": "Publish ...",
"execute": lambda x: self.tools_helper.show_publish()
})
menu['actions'].append({
"name": "Load ...",
"execute": lambda x: self.tools_helper.show_loader()
})
menu['actions'].append({
"name": "Manage ...",
"execute": lambda x: self.tools_helper.show_scene_inventory()
})
return menu
def get_projects(self, *args, **kwargs):
pass
def refresh(self, *args, **kwargs):
self.rescan()
def rescan(self, *args, **kwargs):
if not self.flame:
try:
import flame
self.flame = flame
except ImportError:
self.flame = None
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')

View file

@ -0,0 +1,155 @@
"""
Basic avalon integration
"""
import contextlib
from avalon import api as avalon
from pyblish import api as pyblish
from openpype.api import Logger
AVALON_CONTAINERS = "AVALON_CONTAINERS"
log = Logger().get_logger(__name__)
def install():
from .. import (
PUBLISH_PATH,
LOAD_PATH,
CREATE_PATH,
INVENTORY_PATH
)
# TODO: install
# Disable all families except for the ones we explicitly want to see
family_states = [
"imagesequence",
"render2d",
"plate",
"render",
"mov",
"clip"
]
avalon.data["familiesStateDefault"] = False
avalon.data["familiesStateToggled"] = family_states
log.info("openpype.hosts.flame installed")
pyblish.register_host("flame")
pyblish.register_plugin_path(PUBLISH_PATH)
log.info("Registering Flame plug-ins..")
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
# register callback for switching publishable
pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)
def uninstall():
from .. import (
PUBLISH_PATH,
LOAD_PATH,
CREATE_PATH,
INVENTORY_PATH
)
# TODO: uninstall
pyblish.deregister_host("flame")
pyblish.deregister_plugin_path(PUBLISH_PATH)
log.info("Deregistering DaVinci Resovle plug-ins..")
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
# deregister callback for switching publishable
pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled)
def containerise(tl_segment,
name,
namespace,
context,
loader=None,
data=None):
# TODO: containerise
pass
def ls():
"""List available containers.
"""
# TODO: ls
pass
def parse_container(tl_segment, validate=True):
"""Return container data from timeline_item's openpype tag.
"""
# TODO: parse_container
pass
def update_container(tl_segment, data=None):
"""Update container data to input timeline_item's openpype tag.
"""
# TODO: update_container
pass
@contextlib.contextmanager
def maintained_selection():
"""Maintain selection during context
Example:
>>> with maintained_selection():
... node['selected'].setValue(True)
>>> print(node['selected'].value())
False
"""
# TODO: maintained_selection + remove undo steps
try:
# do the operation
yield
finally:
pass
def reset_selection():
"""Deselect all selected nodes
"""
pass
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle node passthrough states on instance toggles."""
log.info("instance toggle: {}, old_value: {}, new_value:{} ".format(
instance, old_value, new_value))
# from openpype.hosts.resolve import (
# set_publish_attribute
# )
# # Whether instances should be passthrough based on new value
# timeline_item = instance.data["item"]
# set_publish_attribute(timeline_item, new_value)
def remove_instance(instance):
"""Remove instance marker from track item."""
# TODO: remove_instance
pass
def list_instances():
"""List all created instances from current workfile."""
# TODO: list_instances
pass
def imprint(item, data=None):
# TODO: imprint
pass

View file

@ -0,0 +1,3 @@
# Creator plugin functions
# Publishing plugin functions
# Loader plugin functions

View file

@ -0,0 +1,108 @@
"""
Flame utils for syncing scripts
"""
import os
import shutil
from openpype.api import Logger
log = Logger().get_logger(__name__)
def _sync_utility_scripts(env=None):
""" Synchronizing basic utlility scripts for flame.
To be able to run start OpenPype within Flame we have to copy
all utility_scripts and additional FLAME_SCRIPT_DIR into
`/opt/Autodesk/shared/python`. This will be always synchronizing those
folders.
"""
from .. import HOST_DIR
env = env or os.environ
# initiate inputs
scripts = {}
fsd_env = env.get("FLAME_SCRIPT_DIRS", "")
flame_shared_dir = "/opt/Autodesk/shared/python"
fsd_paths = [os.path.join(
HOST_DIR,
"utility_scripts"
)]
# collect script dirs
log.info("FLAME_SCRIPT_DIRS: `{fsd_env}`".format(**locals()))
log.info("fsd_paths: `{fsd_paths}`".format(**locals()))
# add application environment setting for FLAME_SCRIPT_DIR
# to script path search
for _dirpath in fsd_env.split(os.pathsep):
if not os.path.isdir(_dirpath):
log.warning("Path is not a valid dir: `{_dirpath}`".format(
**locals()))
continue
fsd_paths.append(_dirpath)
# collect scripts from dirs
for path in fsd_paths:
scripts.update({path: os.listdir(path)})
remove_black_list = []
for _k, s_list in scripts.items():
remove_black_list += s_list
log.info("remove_black_list: `{remove_black_list}`".format(**locals()))
log.info("Additional Flame script paths: `{fsd_paths}`".format(**locals()))
log.info("Flame Scripts: `{scripts}`".format(**locals()))
# make sure no script file is in folder
if next(iter(os.listdir(flame_shared_dir)), None):
for _itm in os.listdir(flame_shared_dir):
skip = False
# skip all scripts and folders which are not maintained
if _itm not in remove_black_list:
skip = True
# do not skip if pyc in extension
if not os.path.isdir(os.path.join(flame_shared_dir, _itm)) \
and "pyc" in os.path.splitext(_itm)[-1]:
skip = False
# continue if skip is true
if skip:
continue
path = os.path.join(flame_shared_dir, _itm)
log.info("Removing `{path}`...".format(**locals()))
if os.path.isdir(path):
shutil.rmtree(path, onerror=None)
else:
os.remove(path)
# copy scripts into Flame's shared python dir
for dirpath, scriptlist in scripts.items():
# directory and scripts list
for _script in scriptlist:
# script in script list
src = os.path.join(dirpath, _script)
dst = os.path.join(flame_shared_dir, _script)
log.info("Copying `{src}` to `{dst}`...".format(**locals()))
if os.path.isdir(src):
shutil.copytree(
src, dst, symlinks=False,
ignore=None, ignore_dangling_symlinks=False
)
else:
shutil.copy2(src, dst)
def setup(env=None):
""" Wrapper installer started from
`flame/hooks/pre_flame_setup.py`
"""
env = env or os.environ
# synchronize flame utility scripts
_sync_utility_scripts(env)
log.info("Flame OpenPype wrapper has been installed")

View file

@ -0,0 +1,37 @@
"""Host API required Work Files tool"""
import os
from openpype.api import Logger
# from .. import (
# get_project_manager,
# get_current_project
# )
log = Logger().get_logger(__name__)
exported_project_ext = ".otoc"
def file_extensions():
return [exported_project_ext]
def has_unsaved_changes():
pass
def save_file(filepath):
pass
def open_file(filepath):
pass
def current_file():
pass
def work_root(session):
return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")

View file

@ -0,0 +1,132 @@
import os
import json
import tempfile
import contextlib
from openpype.lib import (
PreLaunchHook, get_openpype_username)
from openpype.hosts import flame as opflame
import openpype
from pprint import pformat
class FlamePrelaunch(PreLaunchHook):
""" Flame prelaunch hook
Will make sure flame_script_dirs are copied to the user's folder defined
in the environment variable FLAME_SCRIPT_DIRS.
"""
app_groups = ["flame"]
# todo: replace version number with avalon launch app version
flame_python_exe = "/opt/Autodesk/python/2021/bin/python2.7"
wtc_script_path = os.path.join(
opflame.HOST_DIR, "scripts", "wiretap_com.py")
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.signature = "( {} )".format(self.__class__.__name__)
def execute(self):
"""Hook entry method."""
project_doc = self.data["project_doc"]
user_name = get_openpype_username()
self.log.debug("Collected user \"{}\"".format(user_name))
self.log.info(pformat(project_doc))
_db_p_data = project_doc["data"]
width = _db_p_data["resolutionWidth"]
height = _db_p_data["resolutionHeight"]
fps = int(_db_p_data["fps"])
project_data = {
"Name": project_doc["name"],
"Nickname": _db_p_data["code"],
"Description": "Created by OpenPype",
"SetupDir": project_doc["name"],
"FrameWidth": int(width),
"FrameHeight": int(height),
"AspectRatio": float((width / height) * _db_p_data["pixelAspect"]),
"FrameRate": "{} fps".format(fps),
"FrameDepth": "16-bit fp",
"FieldDominance": "PROGRESSIVE"
}
data_to_script = {
# from settings
"host_name": "localhost",
"volume_name": "stonefs",
"group_name": "staff",
"color_policy": "ACES 1.1",
# from project
"project_name": project_doc["name"],
"user_name": user_name,
"project_data": project_data
}
app_arguments = self._get_launch_arguments(data_to_script)
self.log.info(pformat(dict(self.launch_context.env)))
opflame.setup(self.launch_context.env)
self.launch_context.launch_args.extend(app_arguments)
def _get_launch_arguments(self, script_data):
# Dump data to string
dumped_script_data = json.dumps(script_data)
with make_temp_file(dumped_script_data) as tmp_json_path:
# Prepare subprocess arguments
args = [
self.flame_python_exe,
self.wtc_script_path,
tmp_json_path
]
self.log.info("Executing: {}".format(" ".join(args)))
process_kwargs = {
"logger": self.log,
"env": {}
}
openpype.api.run_subprocess(args, **process_kwargs)
# process returned json file to pass launch args
return_json_data = open(tmp_json_path).read()
returned_data = json.loads(return_json_data)
app_args = returned_data.get("app_args")
self.log.info("____ app_args: `{}`".format(app_args))
if not app_args:
RuntimeError("App arguments were not solved")
return app_args
@contextlib.contextmanager
def make_temp_file(data):
try:
# Store dumped json to temporary file
temporary_json_file = tempfile.NamedTemporaryFile(
mode="w", suffix=".json", delete=False
)
temporary_json_file.write(data)
temporary_json_file.close()
temporary_json_filepath = temporary_json_file.name.replace(
"\\", "/"
)
yield temporary_json_filepath
except IOError as _error:
raise IOError(
"Not able to create temp json file: {}".format(
_error
)
)
finally:
# Remove the temporary json
os.remove(temporary_json_filepath)
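# Usage sketch: the context manager guarantees removal of the temp file
# even when the subprocess fails, e.g.:
#     with make_temp_file(json.dumps(payload)) as tmp_path:
#         openpype.api.run_subprocess([exe, script, tmp_path])
# (`payload`, `exe` and `script` are illustrative names.)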

View file

@ -0,0 +1,490 @@
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import os
import sys
import subprocess
import json
import xml.dom.minidom as minidom
from copy import deepcopy
import datetime
try:
from libwiretapPythonClientAPI import (
WireTapClientInit)
except ImportError:
flame_python_path = "/opt/Autodesk/flame_2021/python"
flame_exe_path = (
"/opt/Autodesk/flame_2021/bin/flame.app"
"/Contents/MacOS/startApp")
sys.path.append(flame_python_path)
from libwiretapPythonClientAPI import (
WireTapClientInit,
WireTapClientUninit,
WireTapNodeHandle,
WireTapServerHandle,
WireTapInt,
WireTapStr
)
class WireTapCom(object):
"""
Communicator class wrapper for talking to the WireTap db.
This way we are able to set up a new project with settings and
the correct colorspace policy. We are also able to create a new user
or get an actual user with a similar name (users usually clone
their profiles and add a date stamp as suffix).
"""
def __init__(self, host_name=None, volume_name=None, group_name=None):
"""Initialisation of WireTap communication class
Args:
host_name (str, optional): Name of host server. Defaults to None.
volume_name (str, optional): Name of volume. Defaults to None.
group_name (str, optional): Name of user group. Defaults to None.
"""
# set main attributes of server
# if there are none set the default installation
self.host_name = host_name or "localhost"
self.volume_name = volume_name or "stonefs"
self.group_name = group_name or "staff"
# initialize WireTap client
WireTapClientInit()
# add the server to shared variable
self._server = WireTapServerHandle("{}:IFFFS".format(self.host_name))
print("WireTap connected at '{}'...".format(
self.host_name))
def close(self):
self._server = None
WireTapClientUninit()
print("WireTap closed...")
def get_launch_args(
self, project_name, project_data, user_name, *args, **kwargs):
"""Forming launch arguments for OpenPype launcher.
Args:
project_name (str): name of project
project_data (dict): Flame compatible project data
user_name (str): name of user
Returns:
list: arguments
"""
workspace_name = kwargs.get("workspace_name")
color_policy = kwargs.get("color_policy")
self._project_prep(project_name)
self._set_project_settings(project_name, project_data)
self._set_project_colorspace(project_name, color_policy)
user_name = self._user_prep(user_name)
if workspace_name is None:
# default workspace
print("Using a default workspace")
return [
"--start-project={}".format(project_name),
"--start-user={}".format(user_name),
"--create-workspace"
]
else:
print(
"Using a custom workspace '{}'".format(workspace_name))
self._workspace_prep(project_name, workspace_name)
return [
"--start-project={}".format(project_name),
"--start-user={}".format(user_name),
"--create-workspace",
"--start-workspace={}".format(workspace_name)
]
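# Illustrative result:
#   ["--start-project=demo", "--start-user=jdoe_20211118",
#    "--create-workspace", "--start-workspace=compositing"]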
def _workspace_prep(self, project_name, workspace_name):
"""Preparing a workspace
In case it does not exist it will create one
Args:
project_name (str): project name
workspace_name (str): workspace name
Raises:
AttributeError: unable to create workspace
"""
workspace_exists = self._child_is_in_parent_path(
"/projects/{}".format(project_name), workspace_name, "WORKSPACE"
)
if not workspace_exists:
project = WireTapNodeHandle(
self._server, "/projects/{}".format(project_name))
workspace_node = WireTapNodeHandle()
created_workspace = project.createNode(
workspace_name, "WORKSPACE", workspace_node)
if not created_workspace:
raise AttributeError(
"Cannot create workspace `{}` in "
"project `{}`: `{}`".format(
workspace_name, project_name, project.lastError())
)
print(
"Workspace `{}` is successfully created".format(workspace_name))
def _project_prep(self, project_name):
"""Preparing a project
In case it does not exist it will create one
Args:
project_name (str): project name
Raises:
AttributeError: unable to create project
"""
# test if project exists
project_exists = self._child_is_in_parent_path(
"/projects", project_name, "PROJECT")
if not project_exists:
volumes = self._get_all_volumes()
if len(volumes) == 0:
raise AttributeError(
"Not able to create new project. No Volumes existing"
)
# check if volumes exists
if self.volume_name not in volumes:
raise AttributeError(
("Volume '{}' does not exist '{}'").format(
self.volume_name, volumes)
)
# form cmd arguments
project_create_cmd = [
os.path.join(
"/opt/Autodesk/",
"wiretap",
"tools",
"2021",
"wiretap_create_node",
),
'-n',
os.path.join("/volumes", self.volume_name),
'-d',
project_name,
'-g',
]
project_create_cmd.append(self.group_name)
print(project_create_cmd)
exit_code = subprocess.call(
project_create_cmd,
cwd=os.path.expanduser('~'))
if exit_code != 0:
RuntimeError("Cannot create project in flame db")
print(
"A new project '{}' is created.".format(project_name))
def _get_all_volumes(self):
"""Request all available volumens from WireTap
Returns:
list: all available volumes in server
Raises:
AttributeError: unable to get any volume children from server
"""
root = WireTapNodeHandle(self._server, "/volumes")
children_num = WireTapInt(0)
get_children_num = root.getNumChildren(children_num)
if not get_children_num:
raise AttributeError(
"Cannot get number of volumes: {}".format(root.lastError())
)
volumes = []
# go through all children and get volume names
child_obj = WireTapNodeHandle()
for child_idx in range(children_num):
# get a child
if not root.getChild(child_idx, child_obj):
raise AttributeError(
"Unable to get child: {}".format(root.lastError()))
node_name = WireTapStr()
get_children_name = child_obj.getDisplayName(node_name)
if not get_children_name:
raise AttributeError(
"Unable to get child name: {}".format(
child_obj.lastError())
)
volumes.append(node_name.c_str())
return volumes
def _user_prep(self, user_name):
"""Ensuring user does exists in user's stack
Args:
user_name (str): name of a user
Raises:
AttributeError: unable to create user
"""
# get all used usernames in db
used_names = self._get_usernames()
print(">> used_names: {}".format(used_names))
# filter only those which are sharing input user name
filtered_users = [user for user in used_names if user_name in user]
if filtered_users:
# todo: need to find the most recently created one, following the
# regex pattern for the date used in the name
return filtered_users.pop()
# create new user name with date in suffix
now = datetime.datetime.now() # current date and time
date = now.strftime("%Y%m%d")
new_user_name = "{}_{}".format(user_name, date)
print(new_user_name)
if not self._child_is_in_parent_path("/users", new_user_name, "USER"):
# Create the new user
users = WireTapNodeHandle(self._server, "/users")
user_node = WireTapNodeHandle()
created_user = users.createNode(new_user_name, "USER", user_node)
if not created_user:
raise AttributeError(
"User {} cannot be created: {}".format(
new_user_name, users.lastError())
)
print("User `{}` is created".format(new_user_name))
return new_user_name
def _get_usernames(self):
"""Requesting all available users from WireTap
Returns:
list: all available user names
Raises:
AttributeError: there are no users in server
"""
root = WireTapNodeHandle(self._server, "/users")
children_num = WireTapInt(0)
get_children_num = root.getNumChildren(children_num)
if not get_children_num:
raise AttributeError(
"Cannot get number of volumes: {}".format(root.lastError())
)
usernames = []
# go through all children and get user names
child_obj = WireTapNodeHandle()
for child_idx in range(children_num):
# get a child
if not root.getChild(child_idx, child_obj):
raise AttributeError(
"Unable to get child: {}".format(root.lastError()))
node_name = WireTapStr()
get_children_name = child_obj.getDisplayName(node_name)
if not get_children_name:
raise AttributeError(
"Unable to get child name: {}".format(
child_obj.lastError())
)
usernames.append(node_name.c_str())
return usernames
def _child_is_in_parent_path(self, parent_path, child_name, child_type):
"""Checking if a given child is in parent path.
Args:
parent_path (str): db path to parent
child_name (str): name of child
child_type (str): type of child
Raises:
AttributeError: Not able to get number of children
AttributeError: Not able to get children form parent
AttributeError: Not able to get children name
AttributeError: Not able to get children type
Returns:
bool: True if child is in parent path
"""
parent = WireTapNodeHandle(self._server, parent_path)
# iterate number of children
children_num = WireTapInt(0)
requested = parent.getNumChildren(children_num)
if not requested:
raise AttributeError((
"Error: Cannot request number of "
"childrens from the node {}. Make sure your "
"wiretap service is running: {}").format(
parent_path, parent.lastError())
)
# iterate children
child_obj = WireTapNodeHandle()
for child_idx in range(children_num):
if not parent.getChild(child_idx, child_obj):
raise AttributeError(
"Cannot get child: {}".format(
parent.lastError()))
node_name = WireTapStr()
node_type = WireTapStr()
if not child_obj.getDisplayName(node_name):
raise AttributeError(
"Unable to get child name: %s" % child_obj.lastError()
)
if not child_obj.getNodeTypeStr(node_type):
raise AttributeError(
"Unable to obtain child type: %s" % child_obj.lastError()
)
if (node_name.c_str() == child_name) and (
node_type.c_str() == child_type):
return True
return False
def _set_project_settings(self, project_name, project_data):
"""Setting project attributes.
Args:
project_name (str): name of project
project_data (dict): data with project attributes
(flame compatible)
Raises:
AttributeError: Not able to set project attributes
"""
# generated xml from project_data dict
_xml = "<Project>"
for key, value in project_data.items():
_xml += "<{}>{}</{}>".format(key, value, key)
_xml += "</Project>"
pretty_xml = minidom.parseString(_xml).toprettyxml()
print("__ xml: {}".format(pretty_xml))
# set project data to wiretap
project_node = WireTapNodeHandle(
self._server, "/projects/{}".format(project_name))
if not project_node.setMetaData("XML", _xml):
raise AttributeError(
"Not able to set project attributes {}. Error: {}".format(
project_name, project_node.lastError())
)
print("Project settings successfully set.")
def _set_project_colorspace(self, project_name, color_policy):
"""Set project's colorspace policy.
Args:
project_name (str): name of project
color_policy (str): name of policy
Raises:
RuntimeError: Not able to set colorspace policy
"""
color_policy = color_policy or "Legacy"
project_colorspace_cmd = [
os.path.join(
"/opt/Autodesk/",
"wiretap",
"tools",
"2021",
"wiretap_duplicate_node",
),
"-s",
"/syncolor/policies/Autodesk/{}".format(color_policy),
"-n",
"/projects/{}/syncolor".format(project_name)
]
print(project_colorspace_cmd)
exit_code = subprocess.call(
project_colorspace_cmd,
cwd=os.path.expanduser('~'))
if exit_code != 0:
RuntimeError("Cannot set colorspace {} on project {}".format(
color_policy, project_name
))
if __name__ == "__main__":
# get json exchange data
json_path = sys.argv[-1]
json_data = open(json_path).read()
in_data = json.loads(json_data)
out_data = deepcopy(in_data)
# get main server attributes
host_name = in_data.pop("host_name")
volume_name = in_data.pop("volume_name")
group_name = in_data.pop("group_name")
# initialize class
wiretap_handler = WireTapCom(host_name, volume_name, group_name)
try:
app_args = wiretap_handler.get_launch_args(
project_name=in_data.pop("project_name"),
project_data=in_data.pop("project_data"),
user_name=in_data.pop("user_name"),
**in_data
)
finally:
wiretap_handler.close()
# set returned args back to out data
out_data.update({
"app_args": app_args
})
# write it out back to the exchange json file
with open(json_path, "w") as file_stream:
json.dump(out_data, file_stream, indent=4)
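# Illustrative exchange (values are assumptions): the prelaunch hook writes
#   {"host_name": "localhost", "volume_name": "stonefs",
#    "group_name": "staff", "project_name": "demo", ...}
# and this script adds {"app_args": ["--start-project=demo", ...]} so the
# hook can extend the Flame launch arguments.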

View file

@ -0,0 +1,191 @@
from __future__ import print_function
import sys
from Qt import QtWidgets
from pprint import pformat
import atexit
import openpype
import avalon
import openpype.hosts.flame as opflame
flh = sys.modules[__name__]
flh._project = None
def openpype_install():
"""Registering OpenPype in context
"""
openpype.install()
avalon.api.install(opflame)
print("Avalon registred hosts: {}".format(
avalon.api.registered_host()))
# Exception handler
def exception_handler(exctype, value, _traceback):
"""Exception handler for improving UX
Args:
exctype (type): type of exception
value (Exception): exception value
_traceback (traceback): traceback to show
"""
import traceback
msg = "OpenPype: Python exception {} in {}".format(value, exctype)
mbox = QtWidgets.QMessageBox()
mbox.setText(msg)
mbox.setDetailedText(
pformat(traceback.format_exception(exctype, value, _traceback)))
mbox.setStyleSheet('QLabel{min-width: 800px;}')
mbox.exec_()
sys.__excepthook__(exctype, value, _traceback)
# add exception handler into sys module
sys.excepthook = exception_handler
# register clean up logic to be called at Flame exit
def cleanup():
"""Cleaning up Flame framework context
"""
if opflame.apps:
print('`{}` cleaning up apps:\n {}\n'.format(
__file__, pformat(opflame.apps)))
while len(opflame.apps):
app = opflame.apps.pop()
print('`{}` removing : {}'.format(__file__, app.name))
del app
opflame.apps = []
if opflame.app_framework:
print('PYTHON\t: %s cleaning up' % opflame.app_framework.bundle_name)
opflame.app_framework.save_prefs()
opflame.app_framework = None
atexit.register(cleanup)
def load_apps():
"""Load available apps into Flame framework
"""
opflame.apps.append(opflame.FlameMenuProjectConnect(opflame.app_framework))
opflame.apps.append(opflame.FlameMenuTimeline(opflame.app_framework))
opflame.app_framework.log.info("Apps are loaded")
def project_changed_dict(info):
"""Hook for project change action
Args:
info (str): info text
"""
cleanup()
def app_initialized(parent=None):
"""Inicialization of Framework
Args:
parent (obj, optional): Parent object. Defaults to None.
"""
opflame.app_framework = opflame.FlameAppFramework()
print("{} initializing".format(
opflame.app_framework.bundle_name))
load_apps()
"""
Initialisation of the hook starts from here.
First it needs to test if it can import the flame module.
This will happen only in case a project has been loaded.
Then `app_initialized` will load the main Framework which will load
all menu objects as apps.
"""
try:
import flame # noqa
app_initialized(parent=None)
except ImportError:
print("!!!! not able to import flame module !!!!")
def rescan_hooks():
import flame # noqa
flame.execute_shortcut('Rescan Python Hooks')
def _build_app_menu(app_name):
"""Flame menu object generator
Args:
app_name (str): name of menu object app
Returns:
list: menu object
"""
menu = []
# first find the relative appname
app = None
for _app in opflame.apps:
if _app.__class__.__name__ == app_name:
app = _app
if app:
menu.append(app.build_menu())
if opflame.app_framework:
menu_auto_refresh = opflame.app_framework.prefs_global.get(
'menu_auto_refresh', {})
if menu_auto_refresh.get('timeline_menu', True):
try:
import flame # noqa
flame.schedule_idle_event(rescan_hooks)
except ImportError:
print("!-!!! not able to import flame module !!!!")
return menu
""" Flame hooks are starting here
"""
def project_saved(project_name, save_time, is_auto_save):
"""Hook to activate when project is saved
Args:
project_name (str): name of project
save_time (str): time when it was saved
is_auto_save (bool): autosave is on or off
"""
if opflame.app_framework:
opflame.app_framework.save_prefs()
def get_main_menu_custom_ui_actions():
"""Hook to create submenu in start menu
Returns:
list: menu object
"""
# install openpype and the host
openpype_install()
return _build_app_menu("FlameMenuProjectConnect")
def get_timeline_custom_ui_actions():
"""Hook to create submenu in timeline
Returns:
list: menu object
"""
# install openpype and the host
openpype_install()
return _build_app_menu("FlameMenuTimeline")

View file

@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
"""Houdini specific Avalon/Pyblish plugin definitions."""
import sys
from avalon.api import CreatorError
from avalon import houdini
import six
@ -8,7 +9,7 @@ import hou
from openpype.api import PypeCreatorMixin
class OpenPypeCreatorError(Exception):
class OpenPypeCreatorError(CreatorError):
pass

View file

@ -4,8 +4,8 @@ import contextlib
import logging
from Qt import QtCore, QtGui
from avalon.tools.widgets import AssetWidget
from avalon import style
from openpype.tools.utils.widgets import AssetWidget
from avalon import style, io
from pxr import Sdf
@ -31,7 +31,7 @@ def pick_asset(node):
# Construct the AssetWidget as a frameless popup so it automatically
# closes when clicked outside of it.
global tool
tool = AssetWidget(silo_creatable=False)
tool = AssetWidget(io)
tool.setContentsMargins(5, 5, 5, 5)
tool.setWindowTitle("Pick Asset")
tool.setStyleSheet(style.load_stylesheet())
@ -41,8 +41,6 @@ def pick_asset(node):
# Select the current asset if there is any
name = parm.eval()
if name:
from avalon import io
db_asset = io.find_one({"name": name, "type": "asset"})
if db_asset:
silo = db_asset.get("silo")

View file

@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
from openpype.hosts.houdini.api import plugin
from avalon.houdini import lib
from avalon import io
import hou
class CreateHDA(plugin.Creator):
"""Publish Houdini Digital Asset file."""
name = "hda"
label = "Houdini Digital Asset (Hda)"
family = "hda"
icon = "gears"
maintain_selection = False
def __init__(self, *args, **kwargs):
super(CreateHDA, self).__init__(*args, **kwargs)
self.data.pop("active", None)
def _check_existing(self, subset_name):
# type: (str) -> bool
"""Check if existing subset name versions already exists."""
# Get all subsets of the current asset
asset_id = io.find_one({"name": self.data["asset"], "type": "asset"},
projection={"_id": True})['_id']
subset_docs = io.find(
{
"type": "subset",
"parent": asset_id
}, {"name": 1}
)
existing_subset_names = set(subset_docs.distinct("name"))
existing_subset_names_low = {
_name.lower() for _name in existing_subset_names
}
return subset_name.lower() in existing_subset_names_low
def _process(self, instance):
subset_name = self.data["subset"]
# get selected nodes
out = hou.node("/obj")
self.nodes = hou.selectedNodes()
if (self.options or {}).get("useSelection") and self.nodes:
# if we have `use selection` enabled and we have some
# selected nodes ...
to_hda = self.nodes[0]
if len(self.nodes) > 1:
# if there is more than one node, create a subnet first
subnet = out.createNode(
"subnet", node_name="{}_subnet".format(self.name))
to_hda = subnet
else:
# in case of no selection, just create subnet node
subnet = out.createNode(
"subnet", node_name="{}_subnet".format(self.name))
subnet.moveToGoodPosition()
to_hda = subnet
if not to_hda.type().definition():
# if the node type has no definition, it is not a user
# created hda. We test if an hda can be created from the node.
if not to_hda.canCreateDigitalAsset():
raise Exception(
"cannot create hda from node {}".format(to_hda))
hda_node = to_hda.createDigitalAsset(
name=subset_name,
hda_file_name="$HIP/{}.hda".format(subset_name)
)
hou.moveNodesTo(self.nodes, hda_node)
hda_node.layoutChildren()
else:
if self._check_existing(subset_name):
raise plugin.OpenPypeCreatorError(
("subset {} is already published with different HDA"
"definition.").format(subset_name))
hda_node = to_hda
hda_node.setName(subset_name)
# delete node created by Avalon in /out
# this needs to be addressed in future Houdini workflow refactor.
hou.node("/out/{}".format(subset_name)).destroy()
try:
lib.imprint(hda_node, self.data)
except hou.OperationFailed:
raise plugin.OpenPypeCreatorError(
("Cannot set metadata on asset. Might be that it already is "
"OpenPype asset.")
)
return hda_node

View file

@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
from avalon import api
from avalon.houdini import pipeline
class HdaLoader(api.Loader):
"""Load Houdini Digital Asset file."""
families = ["hda"]
label = "Load Hda"
representations = ["hda"]
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, data=None):
import os
import hou
# Format file name, Houdini only wants forward slashes
file_path = os.path.normpath(self.fname)
file_path = file_path.replace("\\", "/")
# Get the root node
obj = hou.node("/obj")
# Create a unique name
counter = 1
namespace = namespace or context["asset"]["name"]
formatted = "{}_{}".format(namespace, name) if namespace else name
node_name = "{0}_{1:03d}".format(formatted, counter)
hou.hda.installFile(file_path)
hda_node = obj.createNode(name, node_name)
self[:] = [hda_node]
return pipeline.containerise(
node_name,
namespace,
[hda_node],
context,
self.__class__.__name__,
suffix="",
)
def update(self, container, representation):
import hou
hda_node = container["node"]
file_path = api.get_representation_path(representation)
file_path = file_path.replace("\\", "/")
hou.hda.installFile(file_path)
defs = hda_node.type().allInstalledDefinitions()
def_paths = [d.libraryFilePath() for d in defs]
new = def_paths.index(file_path)
defs[new].setIsPreferred(True)
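# Illustrative effect: with v1.hda and v2.hda both installed, pointing
# `file_path` at v2.hda marks that definition as preferred, so existing
# node instances switch to the new definition in place.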
def remove(self, container):
node = container["node"]
node.destroy()

View file

@ -23,8 +23,10 @@ class CollectInstanceActiveState(pyblish.api.InstancePlugin):
return
# Check bypass state and reverse
active = True
node = instance[0]
active = not node.isBypassed()
if hasattr(node, "isBypassed"):
active = not node.isBypassed()
# Set instance active state
instance.data.update(

View file

@ -31,6 +31,7 @@ class CollectInstances(pyblish.api.ContextPlugin):
def process(self, context):
nodes = hou.node("/out").children()
nodes += hou.node("/obj").children()
# Include instances in USD stage only when it exists so it
# remains backwards compatible with version before houdini 18
@ -49,9 +50,12 @@ class CollectInstances(pyblish.api.ContextPlugin):
has_family = node.evalParm("family")
assert has_family, "'%s' is missing 'family'" % node.name()
self.log.info("processing {}".format(node))
data = lib.read(node)
# Check bypass state and reverse
data.update({"active": not node.isBypassed()})
if hasattr(node, "isBypassed"):
data.update({"active": not node.isBypassed()})
# temporary translation of `active` to `publish` till issue has
# been resolved, https://github.com/pyblish/pyblish-base/issues/307

View file

@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
import os
from pprint import pformat
import pyblish.api
import openpype.api
class ExtractHDA(openpype.api.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract HDA"
hosts = ["houdini"]
families = ["hda"]
def process(self, instance):
self.log.info(pformat(instance.data))
hda_node = instance[0]
hda_def = hda_node.type().definition()
hda_options = hda_def.options()
hda_options.setSaveInitialParmsAndContents(True)
next_version = instance.data["anatomyData"]["version"]
self.log.info("setting version: {}".format(next_version))
hda_def.setVersion(str(next_version))
hda_def.setOptions(hda_options)
hda_def.save(hda_def.libraryFilePath(), hda_node, hda_options)
if "representations" not in instance.data:
instance.data["representations"] = []
file = os.path.basename(hda_def.libraryFilePath())
staging_dir = os.path.dirname(hda_def.libraryFilePath())
self.log.info("Using HDA from {}".format(hda_def.libraryFilePath()))
representation = {
'name': 'hda',
'ext': 'hda',
'files': file,
"stagingDir": staging_dir,
}
instance.data["representations"].append(representation)

View file

@ -35,5 +35,5 @@ class ValidateBypassed(pyblish.api.InstancePlugin):
def get_invalid(cls, instance):
rop = instance[0]
if rop.isBypassed():
if hasattr(rop, "isBypassed") and rop.isBypassed():
return [rop]

View file

@ -13,6 +13,7 @@ from pyblish import api as pyblish
from openpype.lib import any_outdated
import openpype.hosts.maya
from openpype.hosts.maya.lib import copy_workspace_mel
from openpype.lib.path_tools import HostDirmap
from . import menu, lib
log = logging.getLogger("openpype.hosts.maya")
@ -30,7 +31,8 @@ def install():
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
process_dirmap(project_settings)
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
pyblish.register_plugin_path(PUBLISH_PATH)
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
@ -60,110 +62,6 @@ def install():
avalon.data["familiesStateToggled"] = ["imagesequence"]
def process_dirmap(project_settings):
# type: (dict) -> None
"""Go through all paths in Settings and set them using `dirmap`.
If the artist has Site Sync enabled, take the dirmap mapping directly from
Local Settings when the artist is syncing a workfile locally.
Args:
project_settings (dict): Settings for current project.
"""
local_mapping = _get_local_sync_dirmap(project_settings)
if not project_settings["maya"].get("maya-dirmap") and not local_mapping:
return
mapping = local_mapping or \
project_settings["maya"]["maya-dirmap"]["paths"] \
or {}
mapping_enabled = project_settings["maya"]["maya-dirmap"]["enabled"] \
or bool(local_mapping)
if not mapping or not mapping_enabled:
return
if mapping.get("source-path") and mapping_enabled is True:
log.info("Processing directory mapping ...")
cmds.dirmap(en=True)
for k, sp in enumerate(mapping["source-path"]):
try:
print("{} -> {}".format(sp, mapping["destination-path"][k]))
cmds.dirmap(m=(sp, mapping["destination-path"][k]))
cmds.dirmap(m=(mapping["destination-path"][k], sp))
except IndexError:
# missing corresponding destination path
log.error(("invalid dirmap mapping, missing corresponding"
" destination directory."))
break
except RuntimeError:
log.error("invalid path {} -> {}, mapping not registered".format(
sp, mapping["destination-path"][k]
))
continue
def _get_local_sync_dirmap(project_settings):
"""
Returns dirmap if sync to local project is enabled.
Only valid mapping is from roots of remote site to local site set in
Local Settings.
Args:
project_settings (dict)
Returns:
dict : { "source-path": [XXX], "destination-path": [YYYY]}
"""
import json
mapping = {}
if not project_settings["global"]["sync_server"]["enabled"]:
log.debug("Site Sync not enabled")
return mapping
from openpype.settings.lib import get_site_local_overrides
from openpype.modules import ModulesManager
manager = ModulesManager()
sync_module = manager.modules_by_name["sync_server"]
project_name = os.getenv("AVALON_PROJECT")
sync_settings = sync_module.get_sync_project_setting(
os.getenv("AVALON_PROJECT"), exclude_locals=False, cached=False)
log.debug(json.dumps(sync_settings, indent=4))
active_site = sync_module.get_local_normalized_site(
sync_module.get_active_site(project_name))
remote_site = sync_module.get_local_normalized_site(
sync_module.get_remote_site(project_name))
log.debug("active {} - remote {}".format(active_site, remote_site))
if active_site == "local" \
and project_name in sync_module.get_enabled_projects()\
and active_site != remote_site:
overrides = get_site_local_overrides(os.getenv("AVALON_PROJECT"),
active_site)
for root_name, value in overrides.items():
if os.path.isdir(value):
try:
mapping["destination-path"] = [value]
mapping["source-path"] = [sync_settings["sites"]\
[remote_site]\
["root"]\
[root_name]]
except IndexError:
# missing corresponding destination path
log.debug("overrides".format(overrides))
log.error(
("invalid dirmap mapping, missing corresponding"
" destination directory."))
break
log.debug("local sync mapping:: {}".format(mapping))
return mapping
def uninstall():
pyblish.deregister_plugin_path(PUBLISH_PATH)
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
@ -275,8 +173,7 @@ def on_open(_):
# Show outdated pop-up
def _on_show_inventory():
import avalon.tools.sceneinventory as tool
tool.show(parent=parent)
host_tools.show_scene_inventory(parent=parent)
dialog = popup.Popup(parent=parent)
dialog.setWindowTitle("Maya scene has outdated content")
@ -327,3 +224,12 @@ def before_workfile_save(workfile_path):
workdir = os.path.dirname(workfile_path)
copy_workspace_mel(workdir)
class MayaDirmap(HostDirmap):
def on_enable_dirmap(self):
cmds.dirmap(en=True)
def dirmap_routine(self, source_path, destination_path):
cmds.dirmap(m=(source_path, destination_path))
cmds.dirmap(m=(destination_path, source_path))
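# Effect sketch (paths are illustrative): once both directions are
# registered, Maya resolves mapped file paths transparently, e.g.
#   cmds.dirmap(cd="P:/projects/demo/tex.png")  # -> "/local/demo/tex.png"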

View file

@ -2,6 +2,7 @@
import re
import os
import platform
import uuid
import math
@ -22,6 +23,7 @@ import avalon.maya.lib
import avalon.maya.interactive
from openpype import lib
from openpype.api import get_anatomy_settings
log = logging.getLogger(__name__)
@ -437,7 +439,8 @@ def empty_sets(sets, force=False):
cmds.connectAttr(src, dest)
# Restore original members
for origin_set, members in original.iteritems():
_iteritems = getattr(original, "iteritems", original.items)
for origin_set, members in _iteritems():
cmds.sets(members, forceElement=origin_set)
@ -581,7 +584,7 @@ def get_shader_assignments_from_shapes(shapes, components=True):
# Build a mapping from parent to shapes to include in lookup.
transforms = {shape.rsplit("|", 1)[0]: shape for shape in shapes}
lookup = set(shapes + transforms.keys())
lookup = set(shapes) | set(transforms.keys())
component_assignments = defaultdict(list)
for shading_group in assignments.keys():
@ -669,7 +672,8 @@ def displaySmoothness(nodes,
yield
finally:
# Revert state
for node, state in originals.iteritems():
_iteritems = getattr(originals, "iteritems", originals.items)
for node, state in _iteritems():
if state:
cmds.displaySmoothness(node, **state)
@ -712,7 +716,8 @@ def no_display_layers(nodes):
yield
finally:
# Restore original members
for layer, members in original.iteritems():
_iteritems = getattr(original, "iteritems", original.items)
for layer, members in _iteritems():
cmds.editDisplayLayerMembers(layer, members, noRecurse=True)
@ -1819,7 +1824,7 @@ def set_scene_fps(fps, update=True):
cmds.file(modified=True)
def set_scene_resolution(width, height):
def set_scene_resolution(width, height, pixelAspect):
"""Set the render resolution
Args:
@ -1847,6 +1852,36 @@ def set_scene_resolution(width, height):
cmds.setAttr("%s.width" % control_node, width)
cmds.setAttr("%s.height" % control_node, height)
deviceAspectRatio = ((float(width) / float(height)) * float(pixelAspect))
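# e.g. width=1920, height=1080, pixelAspect=1.0
#   -> deviceAspectRatio = (1920.0 / 1080.0) * 1.0 ~= 1.778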
cmds.setAttr("%s.deviceAspectRatio" % control_node, deviceAspectRatio)
cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)
def reset_scene_resolution():
"""Apply the scene resolution from the project definition
Scene resolution can be overwritten by an asset if the asset.data contains
any information regarding scene resolution.
Returns:
None
"""
project_doc = io.find_one({"type": "project"})
project_data = project_doc["data"]
asset_data = lib.get_asset()["data"]
# Set project resolution
width_key = "resolutionWidth"
height_key = "resolutionHeight"
pixelAspect_key = "pixelAspect"
width = asset_data.get(width_key, project_data.get(width_key, 1920))
height = asset_data.get(height_key, project_data.get(height_key, 1080))
pixelAspect = asset_data.get(pixelAspect_key,
project_data.get(pixelAspect_key, 1))
set_scene_resolution(width, height, pixelAspect)
def set_context_settings():
"""Apply the project settings from the project definition
@ -1873,18 +1908,14 @@ def set_context_settings():
api.Session["AVALON_FPS"] = str(fps)
set_scene_fps(fps)
# Set project resolution
width_key = "resolutionWidth"
height_key = "resolutionHeight"
width = asset_data.get(width_key, project_data.get(width_key, 1920))
height = asset_data.get(height_key, project_data.get(height_key, 1080))
set_scene_resolution(width, height)
reset_scene_resolution()
# Set frame range.
avalon.maya.interactive.reset_frame_range()
# Set colorspace
set_colorspace()
# Valid FPS
def validate_fps():
@ -2152,10 +2183,11 @@ def load_capture_preset(data=None):
for key in preset['Display Options']:
if key.startswith('background'):
disp_options[key] = preset['Display Options'][key]
disp_options[key][0] = (float(disp_options[key][0])/255)
disp_options[key][1] = (float(disp_options[key][1])/255)
disp_options[key][2] = (float(disp_options[key][2])/255)
disp_options[key].pop()
if len(disp_options[key]) == 4:
disp_options[key][0] = (float(disp_options[key][0])/255)
disp_options[key][1] = (float(disp_options[key][1])/255)
disp_options[key][2] = (float(disp_options[key][2])/255)
disp_options[key].pop()
else:
disp_options['displayGradient'] = True
@ -2740,3 +2772,49 @@ def iter_shader_edits(relationships, shader_nodes, nodes_by_id, label=None):
"uuid": data["uuid"],
"nodes": nodes,
"attributes": attr_value}
def set_colorspace():
"""Set Colorspace from project configuration
"""
project_name = os.getenv("AVALON_PROJECT")
imageio = get_anatomy_settings(project_name)["imageio"]["maya"]
root_dict = imageio["colorManagementPreference"]
if not isinstance(root_dict, dict):
msg = "set_colorspace(): argument should be dictionary"
log.error(msg)
log.debug(">> root_dict: {}".format(root_dict))
# first enable color management
cmds.colorManagementPrefs(e=True, cmEnabled=True)
cmds.colorManagementPrefs(e=True, ocioRulesEnabled=True)
# second set config path
if root_dict.get("configFilePath"):
unresolved_path = root_dict["configFilePath"]
ocio_paths = unresolved_path[platform.system().lower()]
resolved_path = None
for ocio_p in ocio_paths:
candidate_path = str(ocio_p).format(**os.environ)
if not os.path.exists(candidate_path):
continue
resolved_path = candidate_path
break
if resolved_path:
filepath = str(resolved_path).replace("\\", "/")
cmds.colorManagementPrefs(e=True, configFilePath=filepath)
cmds.colorManagementPrefs(e=True, cmConfigFileEnabled=True)
log.debug("maya '{}' changed to: {}".format(
"configFilePath", resolved_path))
root_dict.pop("configFilePath")
else:
cmds.colorManagementPrefs(e=True, cmConfigFileEnabled=False)
cmds.colorManagementPrefs(e=True, configFilePath="" )
# third set rendering space and view transform
renderSpace = root_dict["renderSpace"]
cmds.colorManagementPrefs(e=True, renderingSpaceName=renderSpace)
viewTransform = root_dict["viewTransform"]
cmds.colorManagementPrefs(e=True, viewTransformName=viewTransform)

View file

@ -11,6 +11,7 @@ from avalon.maya import pipeline
from openpype.api import BuildWorkfile
from openpype.settings import get_project_settings
from openpype.tools.utils import host_tools
from openpype.hosts.maya.api import lib
log = logging.getLogger(__name__)
@ -21,10 +22,8 @@ def _get_menu(menu_name=None):
if menu_name is None:
menu_name = pipeline._menu
widgets = dict((
w.objectName(), w) for w in QtWidgets.QApplication.allWidgets())
menu = widgets.get(menu_name)
return menu
widgets = {w.objectName(): w for w in QtWidgets.QApplication.allWidgets()}
return widgets.get(menu_name)
def deferred():
@ -46,6 +45,43 @@ def deferred():
)
)
def add_experimental_item():
cmds.menuItem(
"Experimental tools...",
parent=pipeline._menu,
command=lambda *args: host_tools.show_experimental_tools_dialog(
pipeline._parent
)
)
def add_scripts_menu():
try:
import scriptsmenu.launchformaya as launchformaya
except ImportError:
log.warning(
"Skipping studio.menu install, because "
"'scriptsmenu' module seems unavailable."
)
return
# load configuration of custom menu
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
config = project_settings["maya"]["scriptsmenu"]["definition"]
_menu = project_settings["maya"]["scriptsmenu"]["name"]
if not config:
log.warning("Skipping studio menu, no definition found.")
return
# run the launcher for Maya menu
studio_menu = launchformaya.main(
title=_menu.title(),
objectName=_menu.title().lower().replace(" ", "_")
)
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)
def modify_workfiles():
# Find the pipeline menu
top_menu = _get_menu()
@ -75,6 +111,35 @@ def deferred():
if workfile_action:
top_menu.removeAction(workfile_action)
def modify_resolution():
# Find the pipeline menu
top_menu = _get_menu()
# Try to find resolution tool action in the menu
resolution_action = None
for action in top_menu.actions():
if action.text() == "Reset Resolution":
resolution_action = action
break
# Add at the top of menu if "Work Files" action was not found
after_action = ""
if resolution_action:
# Use action's object name for `insertAfter` argument
after_action = resolution_action.objectName()
# Insert action to menu
cmds.menuItem(
"Reset Resolution",
parent=pipeline._menu,
command=lambda *args: lib.reset_scene_resolution(),
insertAfter=after_action
)
# Remove replaced action
if resolution_action:
top_menu.removeAction(resolution_action)
def remove_project_manager():
top_menu = _get_menu()
@ -99,40 +164,42 @@ def deferred():
if project_manager_action is not None:
system_menu.menu().removeAction(project_manager_action)
def add_colorspace():
# Find the pipeline menu
top_menu = _get_menu()
# Try to find "Reset Resolution" action in the menu
workfile_action = None
for action in top_menu.actions():
if action.text() == "Reset Resolution":
workfile_action = action
break
# Add at the top of menu if "Work Files" action was not found
after_action = ""
if workfile_action:
# Use action's object name for `insertAfter` argument
after_action = workfile_action.objectName()
# Insert action to menu
cmds.menuItem(
"Set Colorspace",
parent=pipeline._menu,
command=lambda *args: lib.set_colorspace(),
insertAfter=after_action
)
log.info("Attempting to install scripts menu ...")
# add_scripts_menu()
add_build_workfiles_item()
add_look_assigner_item()
add_experimental_item()
modify_workfiles()
modify_resolution()
remove_project_manager()
try:
import scriptsmenu.launchformaya as launchformaya
import scriptsmenu.scriptsmenu as scriptsmenu
except ImportError:
log.warning(
"Skipping studio.menu install, because "
"'scriptsmenu' module seems unavailable."
)
return
# load configuration of custom menu
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
config = project_settings["maya"]["scriptsmenu"]["definition"]
_menu = project_settings["maya"]["scriptsmenu"]["name"]
if not config:
log.warning("Skipping studio menu, no definition found.")
return
# run the launcher for Maya menu
studio_menu = launchformaya.main(
title=_menu.title(),
objectName=_menu.title().lower().replace(" ", "_")
)
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)
add_colorspace()
add_scripts_menu()
def uninstall():
@ -153,7 +220,7 @@ def install():
return
# Allow time for uninstallation to finish.
cmds.evalDeferred(deferred)
cmds.evalDeferred(deferred, lowestPriority=True)
def popup():

View file

@ -5,6 +5,7 @@ import os
import contextlib
import copy
import six
from maya import cmds
from avalon import api, io
@ -69,7 +70,8 @@ def unlocked(nodes):
yield
finally:
# Reapply original states
for uuid, state in states.iteritems():
_iteritems = getattr(states, "iteritems", states.items)
for uuid, state in _iteritems():
nodes_from_id = cmds.ls(uuid, long=True)
if nodes_from_id:
node = nodes_from_id[0]
@ -94,7 +96,7 @@ def load_package(filepath, name, namespace=None):
# Define a unique namespace for the package
namespace = os.path.basename(filepath).split(".")[0]
unique_namespace(namespace)
assert isinstance(namespace, basestring)
assert isinstance(namespace, six.string_types)
# Load the setdress package data
with open(filepath, "r") as fp:
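The `getattr(states, "iteritems", states.items)` fallback above (repeated in several plugins in this commit) is a Python 2/3 compatibility shim: dicts only have `iteritems()` on Python 2. A standalone sketch of the pattern:

```python
import six

data = {"uuid-1": "locked", "uuid-2": "unlocked"}

# getattr returns the bound iteritems method on Python 2 and falls
# back to items on Python 3, where iteritems no longer exists.
_iteritems = getattr(data, "iteritems", data.items)
for key, value in _iteritems():
    print(key, value)

# six, already imported in these files, provides the same shim:
for key, value in six.iteritems(data):
    print(key, value)
```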

View file

@ -183,7 +183,8 @@ class ExtractFBX(openpype.api.Extractor):
# Apply the FBX overrides through MEL since the commands
# only work correctly in MEL according to online
# available discussions on the topic
for option, value in options.iteritems():
_iteritems = getattr(options, "iteritems", options.items)
for option, value in _iteritems():
key = option[0].upper() + option[1:] # uppercase first letter
# Boolean must be passed as lower-case strings

View file

@ -45,9 +45,12 @@ class ExtractPlayblast(openpype.api.Extractor):
# get cameras
camera = instance.data['review_camera']
override_viewport_options = (
self.capture_preset['Viewport Options']
['override_viewport_options']
)
preset = lib.load_capture_preset(data=self.capture_preset)
preset['camera'] = camera
preset['start_frame'] = start
preset['end_frame'] = end
@ -92,6 +95,12 @@ class ExtractPlayblast(openpype.api.Extractor):
self.log.info('using viewport preset: {}'.format(preset))
# Update preset with current panel setting
# if override_viewport_options is turned off
if not override_viewport_options:
panel_preset = capture.parse_active_view()
preset.update(panel_preset)
path = capture.capture(**preset)
self.log.debug("playblast path {}".format(path))

View file

@ -32,6 +32,9 @@ class ExtractThumbnail(openpype.api.Extractor):
capture_preset = (
instance.context.data["project_settings"]['maya']['publish']['ExtractPlayblast']['capture_preset']
)
override_viewport_options = (
capture_preset['Viewport Options']['override_viewport_options']
)
try:
preset = lib.load_capture_preset(data=capture_preset)
@ -86,6 +89,12 @@ class ExtractThumbnail(openpype.api.Extractor):
# playblast and viewer
preset['viewer'] = False
# Update preset with current panel setting
# if override_viewport_options is turned off
if not override_viewport_options:
panel_preset = capture.parse_active_view()
preset.update(panel_preset)
path = capture.capture(**preset)
playblast = self._fix_playblast_output_path(path)

View file

@ -383,7 +383,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
"attributes": {
"environmental_variables": {
"value": ", ".join("{!s}={!r}".format(k, v)
for (k, v) in env.iteritems()),
for (k, v) in env.items()),
"state": True,
"subst": False

View file

@ -2,6 +2,8 @@ import pyblish.api
import openpype.api
import string
import six
# Allow only characters, numbers and underscore
allowed = set(string.ascii_lowercase +
string.ascii_uppercase +
@ -29,7 +31,7 @@ class ValidateSubsetName(pyblish.api.InstancePlugin):
raise RuntimeError("Instance is missing subset "
"name: {0}".format(subset))
if not isinstance(subset, basestring):
if not isinstance(subset, six.string_types):
raise TypeError("Instance subset name must be string, "
"got: {0} ({1})".format(subset, type(subset)))

View file

@ -0,0 +1,53 @@
from maya import cmds
import pyblish.api
import openpype.api
import openpype.hosts.maya.api.action
from avalon import maya
from openpype.hosts.maya.api import lib
def polyConstraint(objects, *args, **kwargs):
kwargs.pop('mode', None)
with lib.no_undo(flush=False):
with maya.maintained_selection():
with lib.reset_polySelectConstraint():
cmds.select(objects, r=1, noExpand=True)
# Acting as 'polyCleanupArgList' for n-sided polygon selection
cmds.polySelectConstraint(*args, mode=3, **kwargs)
result = cmds.ls(selection=True)
cmds.select(clear=True)
return result
class ValidateMeshNgons(pyblish.api.Validator):
"""Ensure that meshes don't have ngons
Ngon are faces with more than 4 sides.
To debug the problem on the meshes you can use Maya's modeling
tool: "Mesh > Cleanup..."
"""
order = openpype.api.ValidateContentsOrder
hosts = ["maya"]
families = ["model"]
label = "Mesh ngons"
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
@staticmethod
def get_invalid(instance):
meshes = cmds.ls(instance, type='mesh')
return polyConstraint(meshes, type=8, size=3)
def process(self, instance):
"""Process all the nodes in the instance "objectSet"""
invalid = self.get_invalid(instance)
if invalid:
raise ValueError("Meshes found with n-gons"
"values: {0}".format(invalid))

View file

@ -52,7 +52,8 @@ class ValidateNodeIdsUnique(pyblish.api.InstancePlugin):
# Take only the ids with more than one member
invalid = list()
for _ids, members in ids.iteritems():
_iteritems = getattr(ids, "iteritems", ids.items)
for _ids, members in _iteritems():
if len(members) > 1:
cls.log.error("ID found on multiple nodes: '%s'" % members)
invalid.extend(members)

View file

@ -32,7 +32,10 @@ class ValidateNodeNoGhosting(pyblish.api.InstancePlugin):
nodes = cmds.ls(instance, long=True, type=['transform', 'shape'])
invalid = []
for node in nodes:
for attr, required_value in cls._attributes.iteritems():
_iteritems = getattr(
cls._attributes, "iteritems", cls._attributes.items
)
for attr, required_value in _iteritems():
if cmds.attributeQuery(attr, node=node, exists=True):
value = cmds.getAttr('{0}.{1}'.format(node, attr))

View file

@ -33,7 +33,8 @@ class ValidateShapeRenderStats(pyblish.api.Validator):
shapes = cmds.ls(instance, long=True, type='surfaceShape')
invalid = []
for shape in shapes:
for attr, default_value in cls.defaults.iteritems():
_iteritems = getattr(cls.defaults, "iteritems", cls.defaults.items)
for attr, default_value in _iteritems():
if cmds.attributeQuery(attr, node=shape, exists=True):
value = cmds.getAttr('{}.{}'.format(shape, attr))
if value != default_value:
@ -52,7 +53,8 @@ class ValidateShapeRenderStats(pyblish.api.Validator):
@classmethod
def repair(cls, instance):
for shape in cls.get_invalid(instance):
for attr, default_value in cls.defaults.iteritems():
_iteritems = getattr(cls.defaults, "iteritems", cls.defaults.items)
for attr, default_value in _iteritems():
if cmds.attributeQuery(attr, node=shape, exists=True):
plug = '{0}.{1}'.format(shape, attr)

View file

@ -0,0 +1,59 @@
from maya import cmds
import pyblish.api
import openpype.api
import openpype.hosts.maya.api.action
class ValidateShapeZero(pyblish.api.Validator):
"""shape can't have any values
To solve this issue, try freezing the shapes. So long
as the translation, rotation and scaling values are zero,
you're all good.
"""
order = openpype.api.ValidateContentsOrder
hosts = ["maya"]
families = ["model"]
label = "Shape Zero (Freeze)"
actions = [
openpype.hosts.maya.api.action.SelectInvalidAction,
openpype.api.RepairAction
]
@staticmethod
def get_invalid(instance):
"""Returns the invalid shapes in the instance.
This is the same as checking:
- all(pnt == [0,0,0] for pnt in shape.pnts[:])
Returns:
list: Shapes with non-frozen vertices
"""
shapes = cmds.ls(instance, type="shape")
invalid = []
for shape in shapes:
if cmds.polyCollapseTweaks(shape, q=True, hasVertexTweaks=True):
invalid.append(shape)
return invalid
@classmethod
def repair(cls, instance):
invalid_shapes = cls.get_invalid(instance)
for shape in invalid_shapes:
cmds.polyCollapseTweaks(shape)
def process(self, instance):
"""Process all the nodes in the instance "objectSet"""
invalid = self.get_invalid(instance)
if invalid:
raise ValueError("Nodes found with shape or vertices not freezed"
"values: {0}".format(invalid))

View file

@ -24,6 +24,10 @@ from openpype.api import (
ApplicationManager
)
from openpype.tools.utils import host_tools
from openpype.lib.path_tools import HostDirmap
from openpype.settings import get_project_settings
from openpype.modules import ModulesManager
import nuke
from .utils import set_context_favorites
@ -1795,3 +1799,69 @@ def recreate_instance(origin_node, avalon_data=None):
dn.setInput(0, new_node)
return new_node
class NukeDirmap(HostDirmap):
def __init__(self, host_name, project_settings, sync_module, file_name):
"""
Args:
host_name (str): Nuke
project_settings (dict): settings of current project
sync_module (SyncServerModule): to limit reinitialization
file_name (str): full path of referenced file from workfiles
"""
self.host_name = host_name
self.project_settings = project_settings
self.file_name = file_name
self.sync_module = sync_module
self._mapping = None # cache mapping
def on_enable_dirmap(self):
pass
def dirmap_routine(self, source_path, destination_path):
log.debug("{}: {}->{}".format(self.file_name,
source_path, destination_path))
source_path = source_path.lower().replace(os.sep, '/')
destination_path = destination_path.lower().replace(os.sep, '/')
if platform.system().lower() == "windows":
self.file_name = self.file_name.lower().replace(
source_path, destination_path)
else:
self.file_name = self.file_name.replace(
source_path, destination_path)
class DirmapCache:
"""Caching class to get settings and sync_module easily and only once."""
_project_settings = None
_sync_module = None
@classmethod
def project_settings(cls):
if cls._project_settings is None:
cls._project_settings = get_project_settings(
os.getenv("AVALON_PROJECT"))
return cls._project_settings
@classmethod
def sync_module(cls):
if cls._sync_module is None:
cls._sync_module = ModulesManager().modules_by_name["sync_server"]
return cls._sync_module
def dirmap_file_name_filter(file_name):
"""Nuke callback function with single full path argument.
Checks project settings for potential mapping from source to dest.
"""
dirmap_processor = NukeDirmap("nuke",
DirmapCache.project_settings(),
DirmapCache.sync_module(),
file_name)
dirmap_processor.process_dirmap()
if os.path.exists(dirmap_processor.file_name):
return dirmap_processor.file_name
return file_name
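The Nuke startup script a few files below registers this with `nuke.addFilenameFilter(dirmap_file_name_filter)`; the callback contract is simply path in, path out. A minimal sketch with a hypothetical mapping:

```python
import nuke

def demo_filename_filter(file_name):
    # Nuke passes every file path through registered filters and uses
    # the returned string; returning the argument unchanged is a no-op.
    return file_name.replace("P:/projects/", "/mnt/projects/")

nuke.addFilenameFilter(demo_filename_filter)
```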

View file

@ -84,6 +84,12 @@ def install():
)
log.debug("Adding menu item: {}".format(name))
# Add experimental tools action
menu.addSeparator()
menu.addCommand(
"Experimental tools...",
host_tools.show_experimental_tools_dialog
)
# adding shortcuts
add_shortcuts_from_presets()

View file

@ -6,10 +6,10 @@ from openpype.hosts.nuke.api.lib import (
import nuke
from openpype.api import Logger
from openpype.hosts.nuke.api.lib import dirmap_file_name_filter
log = Logger().get_logger(__name__)
# fix ffmpeg settings on script
nuke.addOnScriptLoad(on_script_load)
@ -20,4 +20,6 @@ nuke.addOnScriptSave(check_inventory_versions)
# # set apply all workfile settings on script load and save
nuke.addOnScriptLoad(WorkfileSettings().set_context_settings)
nuke.addFilenameFilter(dirmap_file_name_filter)
log.info('Automatic syncing of write file knob to script version')

View file

@ -6,7 +6,6 @@ from openpype.hosts.photoshop.plugins.lib import get_unique_layer_name
stub = photoshop.stub()
class ImageLoader(api.Loader):
"""Load images
@ -21,7 +20,7 @@ class ImageLoader(api.Loader):
context["asset"]["name"],
name)
with photoshop.maintained_selection():
layer = stub.import_smart_object(self.fname, layer_name)
layer = self.import_layer(self.fname, layer_name)
self[:] = [layer]
namespace = namespace or layer_name
@ -45,8 +44,9 @@ class ImageLoader(api.Loader):
layer_name = "{}_{}".format(context["asset"], context["subset"])
# switching assets
if namespace_from_container != layer_name:
layer_name = self._get_unique_layer_name(context["asset"],
context["subset"])
layer_name = get_unique_layer_name(stub.get_layers(),
context["asset"],
context["subset"])
else: # switching version - keep same name
layer_name = container["namespace"]
@ -72,3 +72,6 @@ class ImageLoader(api.Loader):
def switch(self, container, representation):
self.update(container, representation)
def import_layer(self, file_name, layer_name):
return stub.import_smart_object(file_name, layer_name)

View file

@ -0,0 +1,82 @@
import re
from avalon import api, photoshop
from openpype.hosts.photoshop.plugins.lib import get_unique_layer_name
stub = photoshop.stub()
class ReferenceLoader(api.Loader):
"""Load reference images
Stores the imported asset in a container named after the asset.
Inheriting from 'load_image' didn't work because of a
"Cannot write to closing transport" error; possible refactor.
"""
families = ["image", "render"]
representations = ["*"]
def load(self, context, name=None, namespace=None, data=None):
layer_name = get_unique_layer_name(stub.get_layers(),
context["asset"]["name"],
name)
with photoshop.maintained_selection():
layer = self.import_layer(self.fname, layer_name)
self[:] = [layer]
namespace = namespace or layer_name
return photoshop.containerise(
name,
namespace,
layer,
context,
self.__class__.__name__
)
def update(self, container, representation):
""" Switch asset or change version """
layer = container.pop("layer")
context = representation.get("context", {})
namespace_from_container = re.sub(r'_\d{3}$', '',
container["namespace"])
layer_name = "{}_{}".format(context["asset"], context["subset"])
# switching assets
if namespace_from_container != layer_name:
layer_name = get_unique_layer_name(stub.get_layers(),
context["asset"],
context["subset"])
else: # switching version - keep same name
layer_name = container["namespace"]
path = api.get_representation_path(representation)
with photoshop.maintained_selection():
stub.replace_smart_object(
layer, path, layer_name
)
stub.imprint(
layer, {"representation": str(representation["_id"])}
)
def remove(self, container):
"""
Removes element from scene: deletes layer + removes from Headline
Args:
container (dict): container to be removed - used to get layer_id
"""
layer = container.pop("layer")
stub.imprint(layer, {})
stub.delete_layer(layer.id)
def switch(self, container, representation):
self.update(container, representation)
def import_layer(self, file_name, layer_name):
return stub.import_smart_object(file_name, layer_name,
as_reference=True)
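The only functional difference from ImageLoader's `import_layer` earlier in this commit is the `as_reference` flag; a side-by-side sketch (the file path and layer names are hypothetical, the stub calls are the ones used above):

```python
from avalon import photoshop

stub = photoshop.stub()

# Embedded copy, as in ImageLoader:
layer = stub.import_smart_object("C:/demo/image.png", "demo_layer")

# Linked reference, as in ReferenceLoader here:
ref_layer = stub.import_smart_object(
    "C:/demo/image.png", "demo_layer_ref", as_reference=True
)
```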

View file

@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
"""Close PS after publish. For Webpublishing only."""
import os
import pyblish.api
from avalon import photoshop
class ClosePS(pyblish.api.ContextPlugin):
"""Close PS after publish. For Webpublishing only.
"""
order = pyblish.api.IntegratorOrder + 14
label = "Close PS"
optional = True
active = True
hosts = ["photoshop"]
targets = ["remotepublish"]
def process(self, context):
self.log.info("ClosePS")
stub = photoshop.stub()
self.log.info("Shutting down PS")
stub.save()
stub.close()
self.log.info("PS closed")

View file

@ -0,0 +1,133 @@
import pyblish.api
import os
import re
from avalon import photoshop
from openpype.lib import prepare_template_data
from openpype.lib.plugin_tools import parse_json
class CollectRemoteInstances(pyblish.api.ContextPlugin):
"""Gather instances configured color code of a layer.
Used in remote publishing when artists marks publishable layers by color-
coding.
Identifier:
id (str): "pyblish.avalon.instance"
"""
order = pyblish.api.CollectorOrder
label = "Instances"
hosts = ["photoshop"]
targets = ["remotepublish"]
# configurable by Settings
color_code_mapping = []
def process(self, context):
self.log.info("CollectRemoteInstances")
self.log.info("mapping:: {}".format(self.color_code_mapping))
# parse variant if used in webpublishing, comes from webpublisher batch
batch_dir = os.environ.get("OPENPYPE_PUBLISH_DATA")
variant = "Main"
if batch_dir and os.path.exists(batch_dir):
# TODO check if batch manifest is same as tasks manifests
task_data = parse_json(os.path.join(batch_dir,
"manifest.json"))
if not task_data:
raise ValueError(
"Cannot parse batch meta in {} folder".format(batch_dir))
variant = task_data["variant"]
stub = photoshop.stub()
layers = stub.get_layers()
instance_names = []
for layer in layers:
self.log.info("Layer:: {}".format(layer))
resolved_family, resolved_subset_template = self._resolve_mapping(
layer
)
self.log.info("resolved_family {}".format(resolved_family))
self.log.info("resolved_subset_template {}".format(
resolved_subset_template))
if not resolved_subset_template or not resolved_family:
self.log.debug("!!! Not marked, skip")
continue
if layer.parents:
self.log.debug("!!! Not a top layer, skip")
continue
instance = context.create_instance(layer.name)
instance.append(layer)
instance.data["family"] = resolved_family
instance.data["publish"] = layer.visible
instance.data["asset"] = context.data["assetEntity"]["name"]
instance.data["task"] = context.data["taskType"]
fill_pairs = {
"variant": variant,
"family": instance.data["family"],
"task": instance.data["task"],
"layer": layer.name
}
subset = resolved_subset_template.format(
**prepare_template_data(fill_pairs))
instance.data["subset"] = subset
instance_names.append(layer.name)
# Produce diagnostic message for any graphical
# user interface interested in visualising it.
self.log.info("Found: \"%s\" " % instance.data["name"])
self.log.info("instance: {} ".format(instance.data))
if len(instance_names) != len(set(instance_names)):
self.log.warning("Duplicate instances found. " +
"Remove unwanted via SubsetManager")
def _resolve_mapping(self, layer):
"""Matches 'layer' color code and name to mapping.
If both color code AND name regex are configured, BOTH must match.
If a layer matches multiple mappings, only the first is used!
"""
family_list = []
family = None
subset_name_list = []
resolved_subset_template = None
for mapping in self.color_code_mapping:
if mapping["color_code"] and \
layer.color_code not in mapping["color_code"]:
continue
if mapping["layer_name_regex"] and \
not any(re.search(pattern, layer.name)
for pattern in mapping["layer_name_regex"]):
continue
family_list.append(mapping["family"])
subset_name_list.append(mapping["subset_template_name"])
if len(subset_name_list) > 1:
self.log.warning("Multiple mappings found for '{}'".
format(layer.name))
self.log.warning("Only first subset name template used!")
subset_name_list[:] = subset_name_list[:1]
if len(family_list) > 1:
self.log.warning("Multiple mappings found for '{}'".
format(layer.name))
self.log.warning("Only first family used!")
family_list[:] = family_list[:1]
if subset_name_list:
resolved_subset_template = subset_name_list.pop()
if family_list:
family = family_list.pop()
return family, resolved_subset_template
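The shape of one `color_code_mapping` entry follows from the keys read above; the entry below is hypothetical (only the key names come from the code, all values are made up):

```python
color_code_mapping = [
    {
        # layer must carry one of these Photoshop color codes...
        "color_code": ["red", "orange"],
        # ...AND its name must match one of these regexes
        # (both conditions apply when both are configured)
        "layer_name_regex": ["^BG_", "^bg_"],
        "family": "image",
        "subset_template_name": "image{Variant}{Layer}",
    },
]
```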

View file

@ -12,7 +12,7 @@ class ExtractImage(openpype.api.Extractor):
label = "Extract Image"
hosts = ["photoshop"]
families = ["image"]
families = ["image", "background"]
formats = ["png", "jpg"]
def process(self, instance):

View file

@ -3,7 +3,7 @@ import json
import pyblish.api
from avalon import io
from openpype.lib import get_subset_name
from openpype.lib import get_subset_name_with_asset_doc
class CollectBulkMovInstances(pyblish.api.InstancePlugin):
@ -26,16 +26,10 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
context = instance.context
asset_name = instance.data["asset"]
asset_doc = io.find_one(
{
"type": "asset",
"name": asset_name
},
{
"_id": 1,
"data.tasks": 1
}
)
asset_doc = io.find_one({
"type": "asset",
"name": asset_name
})
if not asset_doc:
raise AssertionError((
"Couldn't find Asset document with name \"{}\""
@ -53,11 +47,11 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
task_name = available_task_names[_task_name_low]
break
subset_name = get_subset_name(
subset_name = get_subset_name_with_asset_doc(
self.new_instance_family,
self.subset_name_variant,
task_name,
asset_doc["_id"],
asset_doc,
io.Session["AVALON_PROJECT"]
)
instance_name = f"{asset_name}_{subset_name}"

View file

@ -0,0 +1,37 @@
import pyblish.api
import openpype.api
import os
class ValidateSources(pyblish.api.InstancePlugin):
"""Validates source files.
Loops through all 'files' in 'stagingDir' and checks that they actually
exist. They might have been deleted between the start of SP and now.
"""
order = openpype.api.ValidateContentsOrder
label = "Check source files"
optional = True # only for unforeseeable cases
hosts = ["standalonepublisher"]
def process(self, instance):
self.log.info("instance {}".format(instance.data))
for repre in instance.data.get("representations") or []:
files = []
if isinstance(repre["files"], str):
files.append(repre["files"])
else:
files = list(repre["files"])
for file_name in files:
source_file = os.path.join(repre["stagingDir"],
file_name)
if not os.path.exists(source_file):
raise ValueError("File {} not found".format(source_file))

View file

@ -0,0 +1,16 @@
# What is `testhost`
Host `testhost` was created as a fake running host for testing the publisher.
It does not have any proper launch mechanism at the moment. There is a python script `./run_publish.py` which will show the publisher window. The script requires a few variables to be set before it can run. Execution will register the host `testhost`, register global publish plugins and register creator and publish plugins from `./plugins`.
## Data
Created instances and context data are stored in json files inside the `./api` folder. The code can be easily modified to save them to a different place.
## Plugins
Test host has a few plugins to be able to test publishing.
### Creators
They are just example plugins using functions from `api` to create/remove/update data. One of them is an auto creator, which means it is triggered on each reset of the create context. The others are manual creators, both creating the same family.
### Publishers
Collectors are example plugins using `get_attribute_defs` to define attributes for specific families or for the context. Validators are there to test `PublishValidationError`.

View file

@ -0,0 +1,43 @@
import os
import logging
import pyblish.api
import avalon.api
from openpype.pipeline import BaseCreator
from .pipeline import (
ls,
list_instances,
update_instances,
remove_instances,
get_context_data,
update_context_data,
get_context_title
)
HOST_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
log = logging.getLogger(__name__)
def install():
log.info("OpenPype - Installing TestHost integration")
pyblish.api.register_host("testhost")
pyblish.api.register_plugin_path(PUBLISH_PATH)
avalon.api.register_plugin_path(BaseCreator, CREATE_PATH)
__all__ = (
"ls",
"list_instances",
"update_instances",
"remove_instances",
"get_context_data",
"update_context_data",
"get_context_title",
"install"
)

View file

@ -0,0 +1 @@
{}

View file

@ -0,0 +1,108 @@
[
{
"id": "pyblish.avalon.instance",
"active": true,
"family": "test",
"subset": "testMyVariant",
"version": 1,
"asset": "sq01_sh0010",
"task": "Compositing",
"variant": "myVariant",
"uuid": "a485f148-9121-46a5-8157-aa64df0fb449",
"creator_attributes": {
"number_key": 10,
"ha": 10
},
"publish_attributes": {
"CollectFtrackApi": {
"add_ftrack_family": false
}
},
"creator_identifier": "test_one"
},
{
"id": "pyblish.avalon.instance",
"active": true,
"family": "test",
"subset": "testMyVariant2",
"version": 1,
"asset": "sq01_sh0010",
"task": "Compositing",
"variant": "myVariant2",
"uuid": "a485f148-9121-46a5-8157-aa64df0fb444",
"creator_attributes": {},
"publish_attributes": {
"CollectFtrackApi": {
"add_ftrack_family": true
}
},
"creator_identifier": "test_one"
},
{
"id": "pyblish.avalon.instance",
"active": true,
"family": "test",
"subset": "testMain",
"version": 1,
"asset": "sq01_sh0010",
"task": "Compositing",
"variant": "Main",
"uuid": "3607bc95-75f6-4648-a58d-e699f413d09f",
"creator_attributes": {},
"publish_attributes": {
"CollectFtrackApi": {
"add_ftrack_family": true
}
},
"creator_identifier": "test_two"
},
{
"id": "pyblish.avalon.instance",
"active": true,
"family": "test",
"subset": "testMain2",
"version": 1,
"asset": "sq01_sh0020",
"task": "Compositing",
"variant": "Main2",
"uuid": "4ccf56f6-9982-4837-967c-a49695dbe8eb",
"creator_attributes": {},
"publish_attributes": {
"CollectFtrackApi": {
"add_ftrack_family": true
}
},
"creator_identifier": "test_two"
},
{
"id": "pyblish.avalon.instance",
"family": "test_three",
"subset": "test_threeMain2",
"active": true,
"version": 1,
"asset": "sq01_sh0020",
"task": "Compositing",
"variant": "Main2",
"uuid": "4ccf56f6-9982-4837-967c-a49695dbe8ec",
"creator_attributes": {},
"publish_attributes": {
"CollectFtrackApi": {
"add_ftrack_family": true
}
}
},
{
"id": "pyblish.avalon.instance",
"family": "workfile",
"subset": "workfileMain",
"active": true,
"creator_identifier": "workfile",
"version": 1,
"asset": "Alpaca_01",
"task": "modeling",
"variant": "Main",
"uuid": "7c9ddfc7-9f9c-4c1c-b233-38c966735fb6",
"creator_attributes": {},
"publish_attributes": {}
}
]

View file

@ -0,0 +1,156 @@
import os
import json
class HostContext:
instances_json_path = None
context_json_path = None
@classmethod
def get_context_title(cls):
project_name = os.environ.get("AVALON_PROJECT")
if not project_name:
return "TestHost"
asset_name = os.environ.get("AVALON_ASSET")
if not asset_name:
return project_name
from avalon import io
asset_doc = io.find_one(
{"type": "asset", "name": asset_name},
{"data.parents": 1}
)
parents = asset_doc.get("data", {}).get("parents") or []
hierarchy = [project_name]
hierarchy.extend(parents)
hierarchy.append("<b>{}</b>".format(asset_name))
task_name = os.environ.get("AVALON_TASK")
if task_name:
hierarchy.append(task_name)
return "/".join(hierarchy)
@classmethod
def get_current_dir_filepath(cls, filename):
return os.path.join(
os.path.dirname(os.path.abspath(__file__)),
filename
)
@classmethod
def get_instances_json_path(cls):
if cls.instances_json_path is None:
cls.instances_json_path = cls.get_current_dir_filepath(
"instances.json"
)
return cls.instances_json_path
@classmethod
def get_context_json_path(cls):
if cls.context_json_path is None:
cls.context_json_path = cls.get_current_dir_filepath(
"context.json"
)
return cls.context_json_path
@classmethod
def add_instance(cls, instance):
instances = cls.get_instances()
instances.append(instance)
cls.save_instances(instances)
@classmethod
def save_instances(cls, instances):
json_path = cls.get_instances_json_path()
with open(json_path, "w") as json_stream:
json.dump(instances, json_stream, indent=4)
@classmethod
def get_instances(cls):
json_path = cls.get_instances_json_path()
if not os.path.exists(json_path):
instances = []
with open(json_path, "w") as json_stream:
json.dump(instances, json_stream)
else:
with open(json_path, "r") as json_stream:
instances = json.load(json_stream)
return instances
@classmethod
def get_context_data(cls):
json_path = cls.get_context_json_path()
if not os.path.exists(json_path):
data = {}
with open(json_path, "w") as json_stream:
json.dump(data, json_stream)
else:
with open(json_path, "r") as json_stream:
data = json.load(json_stream)
return data
@classmethod
def save_context_data(cls, data):
json_path = cls.get_context_json_path()
with open(json_path, "w") as json_stream:
json.dump(data, json_stream, indent=4)
def ls():
return []
def list_instances():
return HostContext.get_instances()
def update_instances(update_list):
updated_instances = {}
for instance, _changes in update_list:
updated_instances[instance.id] = instance.data_to_store()
instances = HostContext.get_instances()
for instance_data in instances:
instance_id = instance_data["uuid"]
if instance_id in updated_instances:
new_instance_data = updated_instances[instance_id]
old_keys = set(instance_data.keys())
new_keys = set(new_instance_data.keys())
instance_data.update(new_instance_data)
for key in (old_keys - new_keys):
instance_data.pop(key)
HostContext.save_instances(instances)
def remove_instances(instances):
if not isinstance(instances, (tuple, list)):
instances = [instances]
current_instances = HostContext.get_instances()
for instance in instances:
instance_id = instance.data["uuid"]
found_idx = None
for idx, _instance in enumerate(current_instances):
if instance_id == _instance["uuid"]:
found_idx = idx
break
if found_idx is not None:
current_instances.pop(found_idx)
HostContext.save_instances(current_instances)
def get_context_data():
return HostContext.get_context_data()
def update_context_data(data, changes):
HostContext.save_context_data(data)
def get_context_title():
return HostContext.get_context_title()
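A quick usage sketch of the JSON-backed store defined above (instance values are hypothetical; the json files are created next to this module on first use):

```python
# Add a new instance and read it back through the module functions.
instance_data = {"uuid": "demo-0001", "family": "test", "subset": "testMain"}
HostContext.add_instance(instance_data)

for data in list_instances():
    if data["uuid"] == "demo-0001":
        print("stored:", data)

# Context data is persisted the same way.
update_context_data({"comment": "hello"}, changes=None)
print(get_context_data())
```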

View file

@ -0,0 +1,74 @@
from openpype.hosts.testhost.api import pipeline
from openpype.pipeline import (
AutoCreator,
CreatedInstance,
lib
)
from avalon import io
class MyAutoCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
def get_attribute_defs(self):
output = [
lib.NumberDef("number_key", label="Number")
]
return output
def collect_instances(self):
for instance_data in pipeline.list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
subset_name = instance_data["subset"]
instance = CreatedInstance(
self.family, subset_name, instance_data, self
)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
pipeline.update_instances(update_list)
def create(self, options=None):
existing_instance = None
for instance in self.create_context.instances:
if instance.family == self.family:
existing_instance = instance
break
variant = "Main"
project_name = io.Session["AVALON_PROJECT"]
asset_name = io.Session["AVALON_ASSET"]
task_name = io.Session["AVALON_TASK"]
host_name = io.Session["AVALON_APP"]
if existing_instance is None:
asset_doc = io.find_one({"type": "asset", "name": asset_name})
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)
data = {
"asset": asset_name,
"task": task_name,
"variant": variant
}
data.update(self.get_dynamic_data(
variant, task_name, asset_doc, project_name, host_name
))
new_instance = CreatedInstance(
self.family, subset_name, data, self
)
self._add_instance_to_context(new_instance)
elif (
existing_instance["asset"] != asset_name
or existing_instance["task"] != task_name
):
asset_doc = io.find_one({"type": "asset", "name": asset_name})
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)
existing_instance["asset"] = asset_name
existing_instance["task"] = task_name

View file

@ -0,0 +1,70 @@
from openpype import resources
from openpype.hosts.testhost.api import pipeline
from openpype.pipeline import (
Creator,
CreatedInstance,
lib
)
class TestCreatorOne(Creator):
identifier = "test_one"
label = "test"
family = "test"
description = "Testing creator of testhost"
def get_icon(self):
return resources.get_openpype_splash_filepath()
def collect_instances(self):
for instance_data in pipeline.list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
pipeline.update_instances(update_list)
def remove_instances(self, instances):
pipeline.remove_instances(instances)
for instance in instances:
self._remove_instance_from_context(instance)
def create(self, subset_name, data, options=None):
new_instance = CreatedInstance(self.family, subset_name, data, self)
pipeline.HostContext.add_instance(new_instance.data_to_store())
self.log.info(new_instance.data)
self._add_instance_to_context(new_instance)
def get_default_variants(self):
return [
"myVariant",
"variantTwo",
"different_variant"
]
def get_attribute_defs(self):
output = [
lib.NumberDef("number_key", label="Number")
]
return output
def get_detail_description(self):
return """# Relictus funes est Nyseides currusque nunc oblita
## Causa sed
Lorem markdownum posito consumptis, *plebe Amorque*, abstitimus rogatus fictaque
gladium Circe, nos? Bos aeternum quae. Utque me, si aliquem cladis, et vestigia
arbor, sic mea ferre lacrimae agantur prospiciens hactenus. Amanti dentes pete,
vos quid laudemque rastrorumque terras in gratantibus **radix** erat cedemus?
Pudor tu ponderibus verbaque illa; ire ergo iam Venus patris certe longae
cruentum lecta, et quaeque. Sit doce nox. Anteit ad tempora magni plenaque et
videres mersit sibique auctor in tendunt mittit cunctos ventisque gravitate
volucris quemquam Aeneaden. Pectore Mensis somnus; pectora
[ferunt](http://www.mox.org/oculosbracchia)? Fertilitatis bella dulce et suum?
"""

View file

@ -0,0 +1,74 @@
from openpype.hosts.testhost.api import pipeline
from openpype.pipeline import (
Creator,
CreatedInstance,
lib
)
class TestCreatorTwo(Creator):
identifier = "test_two"
label = "test"
family = "test"
description = "A second testing creator"
def get_icon(self):
return "cube"
def create(self, subset_name, data, options=None):
new_instance = CreatedInstance(self.family, subset_name, data, self)
pipeline.HostContext.add_instance(new_instance.data_to_store())
self.log.info(new_instance.data)
self._add_instance_to_context(new_instance)
def collect_instances(self):
for instance_data in pipeline.list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
pipeline.update_instances(update_list)
def remove_instances(self, instances):
pipeline.remove_instances(instances)
for instance in instances:
self._remove_instance_from_context(instance)
def get_attribute_defs(self):
output = [
lib.NumberDef("number_key"),
lib.TextDef("text_key")
]
return output
def get_detail_description(self):
return """# Lorem ipsum, dolor sit amet. [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
> A curated list of awesome lorem ipsum generators.
Inspired by the [awesome](https://github.com/sindresorhus/awesome) list thing.
## Table of Contents
- [Legend](#legend)
- [Practical](#briefcase-practical)
- [Whimsical](#roller_coaster-whimsical)
- [Animals](#rabbit-animals)
- [Eras](#tophat-eras)
- [Famous Individuals](#sunglasses-famous-individuals)
- [Music](#microphone-music)
- [Food and Drink](#pizza-food-and-drink)
- [Geographic and Dialects](#earth_africa-geographic-and-dialects)
- [Literature](#books-literature)
- [Miscellaneous](#cyclone-miscellaneous)
- [Sports and Fitness](#bicyclist-sports-and-fitness)
- [TV and Film](#movie_camera-tv-and-film)
- [Tools, Apps, and Extensions](#wrench-tools-apps-and-extensions)
- [Contribute](#contribute)
- [TODO](#todo)
"""

View file

@ -0,0 +1,34 @@
import pyblish.api
from openpype.pipeline import (
OpenPypePyblishPluginMixin,
attribute_definitions
)
class CollectContextDataTestHost(
pyblish.api.ContextPlugin, OpenPypePyblishPluginMixin
):
"""
Collecting temp json data sent from a host context
and the path for returning json data back to the host itself.
"""
label = "Collect Source - Test Host"
order = pyblish.api.CollectorOrder - 0.4
hosts = ["testhost"]
@classmethod
def get_attribute_defs(cls):
return [
attribute_definitions.BoolDef(
"test_bool",
True,
label="Bool input"
)
]
def process(self, context):
# get json paths from os and load them
for instance in context:
instance.data["source"] = "testhost"

View file

@ -0,0 +1,54 @@
import json
import pyblish.api
from openpype.pipeline import (
OpenPypePyblishPluginMixin,
attribute_definitions
)
class CollectInstanceOneTestHost(
pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
"""
Collecting temp json data sent from a host context
and the path for returning json data back to the host itself.
"""
label = "Collect Instance 1 - Test Host"
order = pyblish.api.CollectorOrder - 0.3
hosts = ["testhost"]
@classmethod
def get_attribute_defs(cls):
return [
attribute_definitions.NumberDef(
"version",
default=1,
minimum=1,
maximum=999,
decimals=0,
label="Version"
)
]
def process(self, instance):
self._debug_log(instance)
publish_attributes = instance.data.get("publish_attributes")
if not publish_attributes:
return
values = publish_attributes.get(self.__class__.__name__)
if not values:
return
instance.data["version"] = values["version"]
def _debug_log(self, instance):
def _default_json(value):
return str(value)
self.log.info(
json.dumps(instance.data, indent=4, default=_default_json)
)

View file

@ -0,0 +1,57 @@
import pyblish.api
from openpype.pipeline import PublishValidationError
class ValidateInstanceAssetRepair(pyblish.api.Action):
"""Repair the instance asset."""
label = "Repair"
icon = "wrench"
on = "failed"
def process(self, context, plugin):
pass
description = """
## Publish plugins
### Validate Scene Settings
#### Skip Resolution Check for Tasks
Set regex pattern(s) to look for in a Task name to skip resolution check against values from DB.
#### Skip Timeline Check for Tasks
Set regex pattern(s) to look for in a Task name to skip `frameStart`, `frameEnd` check against values from DB.
### AfterEffects Submit to Deadline
* `Use Published scene` - Set to True (green) when Deadline should take the published scene as a source instead of the uploaded local one.
* `Priority` - priority of the job on the farm
* `Primary Pool` - a list of pools fetched from the server that you can select from.
* `Secondary Pool`
* `Frames Per Task` - number of frames per chunk (task) the sequence is divided
into, the chunks together making up one job on the farm.
"""
class ValidateContextWithError(pyblish.api.ContextPlugin):
"""Validate the instance asset is the current selected context asset.
As it might happen that multiple worfiles are opened, switching
between them would mess with selected context.
In that case outputs might be output under wrong asset!
Repair action will use Context asset value (from Workfiles or Launcher)
Closing and reopening with Workfiles will refresh Context value.
"""
label = "Validate Context With Error"
hosts = ["testhost"]
actions = [ValidateInstanceAssetRepair]
order = pyblish.api.ValidatorOrder
def process(self, context):
raise PublishValidationError("Crashing", "Context error", description)

View file

@ -0,0 +1,57 @@
import pyblish.api
from openpype.pipeline import PublishValidationError
class ValidateInstanceAssetRepair(pyblish.api.Action):
"""Repair the instance asset."""
label = "Repair"
icon = "wrench"
on = "failed"
def process(self, context, plugin):
pass
description = """
## Publish plugins
### Validate Scene Settings
#### Skip Resolution Check for Tasks
Set regex pattern(s) to look for in a Task name to skip resolution check against values from DB.
#### Skip Timeline Check for Tasks
Set regex pattern(s) to look for in a Task name to skip `frameStart`, `frameEnd` check against values from DB.
### AfterEffects Submit to Deadline
* `Use Published scene` - Set to True (green) when Deadline should take the published scene as a source instead of the uploaded local one.
* `Priority` - priority of the job on the farm
* `Primary Pool` - a list of pools fetched from the server that you can select from.
* `Secondary Pool`
* `Frames Per Task` - number of frames per chunk (task) the sequence is divided
into, the chunks together making up one job on the farm.
"""
class ValidateWithError(pyblish.api.InstancePlugin):
"""Validate the instance asset is the current selected context asset.
As it might happen that multiple worfiles are opened, switching
between them would mess with selected context.
In that case outputs might be output under wrong asset!
Repair action will use Context asset value (from Workfiles or Launcher)
Closing and reopening with Workfiles will refresh Context value.
"""
label = "Validate With Error"
hosts = ["testhost"]
actions = [ValidateInstanceAssetRepair]
order = pyblish.api.ValidatorOrder
def process(self, instance):
raise PublishValidationError("Crashing", "Instance error", description)

View file

@ -0,0 +1,70 @@
import os
import sys
mongo_url = ""
project_name = ""
asset_name = ""
task_name = ""
ftrack_url = ""
ftrack_username = ""
ftrack_api_key = ""
def multi_dirname(path, times=1):
for _ in range(times):
path = os.path.dirname(path)
return path
host_name = "testhost"
current_file = os.path.abspath(__file__)
openpype_dir = multi_dirname(current_file, 4)
os.environ["OPENPYPE_MONGO"] = mongo_url
os.environ["OPENPYPE_ROOT"] = openpype_dir
os.environ["AVALON_MONGO"] = mongo_url
os.environ["AVALON_PROJECT"] = project_name
os.environ["AVALON_ASSET"] = asset_name
os.environ["AVALON_TASK"] = task_name
os.environ["AVALON_APP"] = host_name
os.environ["OPENPYPE_DATABASE_NAME"] = "openpype"
os.environ["AVALON_CONFIG"] = "openpype"
os.environ["AVALON_TIMEOUT"] = "1000"
os.environ["AVALON_DB"] = "avalon"
os.environ["FTRACK_SERVER"] = ftrack_url
os.environ["FTRACK_API_USER"] = ftrack_username
os.environ["FTRACK_API_KEY"] = ftrack_api_key
for path in [
openpype_dir,
r"{}\repos\avalon-core".format(openpype_dir),
r"{}\.venv\Lib\site-packages".format(openpype_dir)
]:
sys.path.append(path)
from Qt import QtWidgets, QtCore
from openpype.tools.publisher.window import PublisherWindow
def main():
"""Main function for testing purposes."""
import avalon.api
import pyblish.api
from openpype.modules import ModulesManager
from openpype.hosts.testhost import api as testhost
manager = ModulesManager()
for plugin_path in manager.collect_plugin_paths()["publish"]:
pyblish.api.register_plugin_path(plugin_path)
avalon.api.install(testhost)
QtWidgets.QApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling)
app = QtWidgets.QApplication([])
window = PublisherWindow()
window.show()
app.exec_()
if __name__ == "__main__":
main()

View file

@ -4,7 +4,7 @@ import copy
import pyblish.api
from avalon import io
from openpype.lib import get_subset_name
from openpype.lib import get_subset_name_with_asset_doc
class CollectInstances(pyblish.api.ContextPlugin):
@ -70,16 +70,10 @@ class CollectInstances(pyblish.api.ContextPlugin):
# - not sure if it's good idea to require asset id in
# get_subset_name?
asset_name = context.data["workfile_context"]["asset"]
asset_doc = io.find_one(
{
"type": "asset",
"name": asset_name
},
{"_id": 1}
)
asset_id = None
if asset_doc:
asset_id = asset_doc["_id"]
asset_doc = io.find_one({
"type": "asset",
"name": asset_name
})
# Project name from workfile context
project_name = context.data["workfile_context"]["project"]
@ -88,11 +82,11 @@ class CollectInstances(pyblish.api.ContextPlugin):
# Use empty variant value
variant = ""
task_name = io.Session["AVALON_TASK"]
new_subset_name = get_subset_name(
new_subset_name = get_subset_name_with_asset_doc(
family,
variant,
task_name,
asset_id,
asset_doc,
project_name,
host_name
)

View file

@ -3,7 +3,7 @@ import json
import pyblish.api
from avalon import io
from openpype.lib import get_subset_name
from openpype.lib import get_subset_name_with_asset_doc
class CollectWorkfile(pyblish.api.ContextPlugin):
@ -28,16 +28,10 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
# get_subset_name?
family = "workfile"
asset_name = context.data["workfile_context"]["asset"]
asset_doc = io.find_one(
{
"type": "asset",
"name": asset_name
},
{"_id": 1}
)
asset_id = None
if asset_doc:
asset_id = asset_doc["_id"]
asset_doc = io.find_one({
"type": "asset",
"name": asset_name
})
# Project name from workfile context
project_name = context.data["workfile_context"]["project"]
@ -46,11 +40,11 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
# Use empty variant value
variant = ""
task_name = io.Session["AVALON_TASK"]
subset_name = get_subset_name(
subset_name = get_subset_name_with_asset_doc(
family,
variant,
task_name,
asset_id,
asset_doc,
project_name,
host_name
)

View file

@ -253,6 +253,7 @@ def create_unreal_project(project_name: str,
"Plugins": [
{"Name": "PythonScriptPlugin", "Enabled": True},
{"Name": "EditorScriptingUtilities", "Enabled": True},
{"Name": "SequencerScripting", "Enabled": True},
{"Name": "Avalon", "Enabled": True}
]
}

View file

@ -6,7 +6,9 @@ from pathlib import Path
from openpype.lib import (
PreLaunchHook,
ApplicationLaunchFailed,
ApplicationNotFound
ApplicationNotFound,
get_workdir_data,
get_workfile_template_key
)
from openpype.hosts.unreal.api import lib as unreal_lib
@ -25,13 +27,46 @@ class UnrealPrelaunchHook(PreLaunchHook):
self.signature = "( {} )".format(self.__class__.__name__)
def _get_work_filename(self):
# Use last workfile if one was found
if self.data.get("last_workfile_path"):
last_workfile = Path(self.data.get("last_workfile_path"))
if last_workfile and last_workfile.exists():
return last_workfile.name
# Prepare data for fill data and for getting workfile template key
task_name = self.data["task_name"]
anatomy = self.data["anatomy"]
asset_doc = self.data["asset_doc"]
project_doc = self.data["project_doc"]
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
task_info = asset_tasks.get(task_name) or {}
task_type = task_info.get("type")
workdir_data = get_workdir_data(
project_doc, asset_doc, task_name, self.host_name
)
# QUESTION raise exception if version is part of filename template?
workdir_data["version"] = 1
workdir_data["ext"] = "uproject"
# Get workfile template key for current context
workfile_template_key = get_workfile_template_key(
task_type,
self.host_name,
project_name=project_doc["name"]
)
# Fill templates
filled_anatomy = anatomy.format(workdir_data)
# Return filename
return filled_anatomy[workfile_template_key]["file"]
def execute(self):
"""Hook entry method."""
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
workdir = self.launch_context.env["AVALON_WORKDIR"]
engine_version = self.app_name.split("/")[-1].replace("-", ".")
unreal_project_name = f"{asset_name}_{task_name}"
try:
if int(engine_version.split(".")[0]) < 4 and \
int(engine_version.split(".")[1]) < 26:
@ -45,6 +80,8 @@ class UnrealPrelaunchHook(PreLaunchHook):
# so lets keep it quite.
...
unreal_project_filename = self._get_work_filename()
unreal_project_name = os.path.splitext(unreal_project_filename)[0]
# Unreal is sensitive about project names longer than 20 chars
if len(unreal_project_name) > 20:
self.log.warning((
@ -89,10 +126,10 @@ class UnrealPrelaunchHook(PreLaunchHook):
ue4_path = unreal_lib.get_editor_executable_path(
Path(detected[engine_version]))
self.launch_context.launch_args.append(ue4_path.as_posix())
self.launch_context.launch_args = [ue4_path.as_posix()]
project_path.mkdir(parents=True, exist_ok=True)
project_file = project_path / f"{unreal_project_name}.uproject"
project_file = project_path / unreal_project_filename
if not project_file.is_file():
engine_path = detected[engine_version]
self.log.info((

View file

@ -0,0 +1,43 @@
import unreal
from unreal import EditorAssetLibrary as eal
from unreal import EditorLevelLibrary as ell
from openpype.hosts.unreal.api.plugin import Creator
from avalon.unreal import (
instantiate,
)
class CreateCamera(Creator):
"""Layout output for character rigs"""
name = "layoutMain"
label = "Camera"
family = "camera"
icon = "cubes"
root = "/Game/Avalon/Instances"
suffix = "_INS"
def __init__(self, *args, **kwargs):
super(CreateCamera, self).__init__(*args, **kwargs)
def process(self):
data = self.data
name = data["subset"]
data["level"] = ell.get_editor_world().get_path_name()
if not eal.does_directory_exist(self.root):
eal.make_directory(self.root)
factory = unreal.LevelSequenceFactoryNew()
tools = unreal.AssetToolsHelpers().get_asset_tools()
tools.create_asset(name, f"{self.root}/{name}", None, factory)
asset_name = f"{self.root}/{name}/{name}.{name}"
data["members"] = [asset_name]
instantiate(f"{self.root}", name, data, None, self.suffix)

View file

@ -0,0 +1,206 @@
import os
from avalon import api, io, pipeline
from avalon.unreal import lib
from avalon.unreal import pipeline as unreal_pipeline
import unreal
class CameraLoader(api.Loader):
"""Load Unreal StaticMesh from FBX"""
families = ["camera"]
label = "Load Camera"
representations = ["fbx"]
icon = "cube"
color = "orange"
def load(self, context, name, namespace, data):
"""
Load and containerise representation into Content Browser.
This is a two-step process. First, import the FBX to a temporary path and
then call `containerise()` on it - this moves all content to new
directory and then it will create AssetContainer there and imprint it
with metadata. This will mark this path as container.
Args:
context (dict): application context
name (str): subset name
namespace (str): in Unreal this is basically path to container.
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Those would be data to be imprinted. This is not used
now, data are imprinted by `containerise()`.
Returns:
list(str): list of container content
"""
# Create directory for asset and avalon container
root = "/Game/Avalon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
tools = unreal.AssetToolsHelpers().get_asset_tools()
unique_number = 1
if unreal.EditorAssetLibrary.does_directory_exist(f"{root}/{asset}"):
asset_content = unreal.EditorAssetLibrary.list_assets(
f"{root}/{asset}", recursive=False, include_folder=True
)
# Get highest number to make a unique name
folders = [a for a in asset_content
if a[-1] == "/" and f"{name}_" in a]
f_numbers = []
for f in folders:
# Get number from folder name. Splits the string by "_" and
# removes the last element (which is a "/").
f_numbers.append(int(f.split("_")[-1][:-1]))
f_numbers.sort()
if not f_numbers:
unique_number = 1
else:
unique_number = f_numbers[-1] + 1
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name}_{unique_number:02d}", suffix="")
container_name += suffix
unreal.EditorAssetLibrary.make_directory(asset_dir)
sequence = tools.create_asset(
asset_name=asset_name,
package_path=asset_dir,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew()
)
io_asset = io.Session["AVALON_ASSET"]
asset_doc = io.find_one({
"type": "asset",
"name": io_asset
})
data = asset_doc.get("data")
if data:
sequence.set_display_rate(unreal.FrameRate(data.get("fps"), 1.0))
sequence.set_playback_start(data.get("frameStart"))
sequence.set_playback_end(data.get("frameEnd"))
settings = unreal.MovieSceneUserImportFBXSettings()
settings.set_editor_property('reduce_keys', False)
unreal.SequencerTools.import_fbx(
unreal.EditorLevelLibrary.get_editor_world(),
sequence,
sequence.get_bindings(),
settings,
self.fname
)
# Create Asset Container
lib.create_avalon_container(container=container_name, path=asset_dir)
data = {
"schema": "openpype:container-2.0",
"id": pipeline.AVALON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(
"{}/{}".format(asset_dir, container_name), data)
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
)
for a in asset_content:
unreal.EditorAssetLibrary.save_asset(a)
return asset_content
def update(self, container, representation):
path = container["namespace"]
ar = unreal.AssetRegistryHelpers.get_asset_registry()
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_content = unreal.EditorAssetLibrary.list_assets(
path, recursive=False, include_folder=False
)
asset_name = ""
for a in asset_content:
asset = ar.get_asset_by_object_path(a)
if a.endswith("_CON"):
loaded_asset = unreal.EditorAssetLibrary.load_asset(a)
unreal.EditorAssetLibrary.set_metadata_tag(
loaded_asset, "representation", str(representation["_id"])
)
unreal.EditorAssetLibrary.set_metadata_tag(
loaded_asset, "parent", str(representation["parent"])
)
asset_name = unreal.EditorAssetLibrary.get_metadata_tag(
loaded_asset, "asset_name"
)
elif asset.asset_class == "LevelSequence":
unreal.EditorAssetLibrary.delete_asset(a)
sequence = tools.create_asset(
asset_name=asset_name,
package_path=path,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew()
)
io_asset = io.Session["AVALON_ASSET"]
asset_doc = io.find_one({
"type": "asset",
"name": io_asset
})
data = asset_doc.get("data")
if data:
sequence.set_display_rate(unreal.FrameRate(data.get("fps"), 1.0))
sequence.set_playback_start(data.get("frameStart"))
sequence.set_playback_end(data.get("frameEnd"))
settings = unreal.MovieSceneUserImportFBXSettings()
settings.set_editor_property('reduce_keys', False)
unreal.SequencerTools.import_fbx(
unreal.EditorLevelLibrary.get_editor_world(),
sequence,
sequence.get_bindings(),
settings,
str(representation["data"]["path"])
)
def remove(self, container):
path = container["namespace"]
parent_path = os.path.dirname(path)
unreal.EditorAssetLibrary.delete_directory(path)
asset_content = unreal.EditorAssetLibrary.list_assets(
parent_path, recursive=False, include_folder=True
)
if len(asset_content) == 0:
unreal.EditorAssetLibrary.delete_directory(parent_path)

View file

@ -0,0 +1,54 @@
import os
import unreal
from unreal import EditorAssetLibrary as eal
from unreal import EditorLevelLibrary as ell
import openpype.api
class ExtractCamera(openpype.api.Extractor):
"""Extract a camera."""
label = "Extract Camera"
hosts = ["unreal"]
families = ["camera"]
optional = True
def process(self, instance):
# Define extract output file path
stagingdir = self.staging_dir(instance)
fbx_filename = "{}.fbx".format(instance.name)
# Perform extraction
self.log.info("Performing extraction..")
# Check if the loaded level is the same of the instance
current_level = ell.get_editor_world().get_path_name()
assert current_level == instance.data.get("level"), \
"Wrong level loaded"
for member in instance[:]:
data = eal.find_asset_data(member)
if data.asset_class == "LevelSequence":
ar = unreal.AssetRegistryHelpers.get_asset_registry()
sequence = ar.get_asset_by_object_path(member).get_asset()
unreal.SequencerTools.export_fbx(
ell.get_editor_world(),
sequence,
sequence.get_bindings(),
unreal.FbxExportOption(),
os.path.join(stagingdir, fbx_filename)
)
break
if "representations" not in instance.data:
instance.data["representations"] = []
fbx_representation = {
'name': 'fbx',
'ext': 'fbx',
'files': fbx_filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(fbx_representation)

View file

@ -0,0 +1,84 @@
"""Loads batch context from json and continues in publish process.
Provides:
context -> Loaded batch file.
"""
import os
import pyblish.api
from avalon import io
from openpype.lib.plugin_tools import (
parse_json,
get_batch_asset_task_info
)
from openpype.lib.remote_publish import get_webpublish_conn
class CollectBatchData(pyblish.api.ContextPlugin):
"""Collect batch data from json stored in 'OPENPYPE_PUBLISH_DATA' env dir.
The directory must contain a 'manifest.json' file where the batch data is
stored.
"""
# must be really early, context values are only in json file
order = pyblish.api.CollectorOrder - 0.495
label = "Collect batch data"
host = ["webpublisher"]
def process(self, context):
batch_dir = os.environ.get("OPENPYPE_PUBLISH_DATA")
assert batch_dir, (
"Missing `OPENPYPE_PUBLISH_DATA`")
assert os.path.exists(batch_dir), \
"Folder {} doesn't exist".format(batch_dir)
project_name = os.environ.get("AVALON_PROJECT")
if project_name is None:
raise AssertionError(
"Environment `AVALON_PROJECT` was not found."
"Could not set project `root` which may cause issues."
)
batch_data = parse_json(os.path.join(batch_dir, "manifest.json"))
context.data["batchDir"] = batch_dir
context.data["batchData"] = batch_data
asset_name, task_name, task_type = get_batch_asset_task_info(
batch_data["context"]
)
os.environ["AVALON_ASSET"] = asset_name
io.Session["AVALON_ASSET"] = asset_name
os.environ["AVALON_TASK"] = task_name
io.Session["AVALON_TASK"] = task_name
context.data["asset"] = asset_name
context.data["task"] = task_name
context.data["taskType"] = task_type
self._set_ctx_path(batch_data)
def _set_ctx_path(self, batch_data):
dbcon = get_webpublish_conn()
batch_id = batch_data["batch"]
ctx_path = batch_data["context"]["path"]
self.log.info("ctx_path: {}".format(ctx_path))
self.log.info("batch_id: {}".format(batch_id))
if ctx_path and batch_id:
self.log.info("Updating log record")
dbcon.update_one(
{
"batch_id": batch_id,
"status": "in_progress"
},
{
"$set": {
"path": ctx_path
}
}
)
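For reference, a minimal batch 'manifest.json' that satisfies this collector could look like the parsed sketch below. The field names come from this plugin and from 'get_batch_asset_task_info'; the concrete values are hypothetical:

# Hypothetical content of <OPENPYPE_PUBLISH_DATA>/manifest.json after
# 'parse_json' - values are examples only
batch_data = {
    "batch": "60f7c1b2e4a5d6001c9d4b1a",   # id pairing the DB log record
    "tasks": ["task_modeling"],            # subfolders with task manifests
    "context": {
        "type": "task",
        "path": "/demo_project/characters/bob/modeling",
        "name": "modeling",
        "attributes": {"type": "Modeling"}
    }
}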

View file

@ -20,9 +20,8 @@ class CollectFPS(pyblish.api.InstancePlugin):
hosts = ["webpublisher"]
def process(self, instance):
fps = instance.context.data["fps"]
instance_fps = instance.data.get("fps")
if instance_fps is None:
instance.data["fps"] = instance.context.data["fps"]
instance.data.update({
"fps": fps
})
self.log.debug(f"instance.data: {pformat(instance.data)}")

View file

@ -1,20 +1,19 @@
"""Loads publishing context from json and continues in publish process.
"""Create instances from batch data and continues in publish process.
Requires:
anatomy -> context["anatomy"] *(pyblish.api.CollectorOrder - 0.11)
CollectBatchData
Provides:
context, instances -> All data from previous publishing process.
"""
import os
import json
import clique
import tempfile
import pyblish.api
from avalon import io
import pyblish.api
from openpype.lib import prepare_template_data
from openpype.lib.plugin_tools import parse_json
class CollectPublishedFiles(pyblish.api.ContextPlugin):
@ -27,51 +26,28 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
order = pyblish.api.CollectorOrder - 0.490
label = "Collect rendered frames"
hosts = ["webpublisher"]
_context = None
targets = ["filespublish"]
# from Settings
task_type_to_family = {}
def _load_json(self, path):
path = path.strip('\"')
assert os.path.isfile(path), (
"Path to json file doesn't exist. \"{}\"".format(path)
)
data = None
with open(path, "r") as json_file:
try:
data = json.load(json_file)
except Exception as exc:
self.log.error(
"Error loading json: "
"{} - Exception: {}".format(path, exc)
)
return data
def process(self, context):
batch_dir = context.data["batchDir"]
task_subfolders = []
for folder_name in os.listdir(batch_dir):
full_path = os.path.join(batch_dir, folder_name)
if os.path.isdir(full_path):
task_subfolders.append(full_path)
def _process_batch(self, dir_url):
task_subfolders = [
os.path.join(dir_url, o)
for o in os.listdir(dir_url)
if os.path.isdir(os.path.join(dir_url, o))]
self.log.info("task_sub:: {}".format(task_subfolders))
for task_dir in task_subfolders:
task_data = self._load_json(os.path.join(task_dir,
"manifest.json"))
self.log.info("task_data:: {}".format(task_data))
ctx = task_data["context"]
task_type = "default_task_type"
task_name = None
if ctx["type"] == "task":
items = ctx["path"].split('/')
asset = items[-2]
os.environ["AVALON_TASK"] = ctx["name"]
task_name = ctx["name"]
task_type = ctx["attributes"]["type"]
else:
asset = ctx["name"]
os.environ["AVALON_TASK"] = ""
asset_name = context.data["asset"]
task_name = context.data["task"]
task_type = context.data["taskType"]
for task_dir in task_subfolders:
task_data = parse_json(os.path.join(task_dir,
"manifest.json"))
self.log.info("task_data:: {}".format(task_data))
is_sequence = len(task_data["files"]) > 1
@ -82,26 +58,20 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
is_sequence,
extension.replace(".", ''))
subset = self._get_subset_name(family, subset_template, task_name,
task_data["variant"])
subset = self._get_subset_name(
family, subset_template, task_name, task_data["variant"]
)
version = self._get_last_version(asset_name, subset) + 1
os.environ["AVALON_ASSET"] = asset
io.Session["AVALON_ASSET"] = asset
instance = self._context.create_instance(subset)
instance.data["asset"] = asset
instance = context.create_instance(subset)
instance.data["asset"] = asset_name
instance.data["subset"] = subset
instance.data["family"] = family
instance.data["families"] = families
instance.data["version"] = \
self._get_last_version(asset, subset) + 1
instance.data["version"] = version
instance.data["stagingDir"] = tempfile.mkdtemp()
instance.data["source"] = "webpublisher"
# to store logging info into DB openpype.webpublishes
instance.data["ctx_path"] = ctx["path"]
instance.data["batch_id"] = task_data["batch"]
# to convert the provided email into an Ftrack username
instance.data["user_email"] = task_data["user"]
@ -252,23 +222,3 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
return version[0].get("version") or 0
else:
return 0
def process(self, context):
self._context = context
batch_dir = os.environ.get("OPENPYPE_PUBLISH_DATA")
assert batch_dir, (
"Missing `OPENPYPE_PUBLISH_DATA`")
assert os.path.exists(batch_dir), \
"Folder {} doesn't exist".format(batch_dir)
project_name = os.environ.get("AVALON_PROJECT")
if project_name is None:
raise AssertionError(
"Environment `AVALON_PROJECT` was not found."
"Could not set project `root` which may cause issues."
)
self._process_batch(batch_dir)

View file

@ -1,38 +0,0 @@
import os
import pyblish.api
from openpype.lib import OpenPypeMongoConnection
class IntegrateContextToLog(pyblish.api.ContextPlugin):
""" Adds context information to log document for displaying in front end"""
label = "Integrate Context to Log"
order = pyblish.api.IntegratorOrder - 0.1
hosts = ["webpublisher"]
def process(self, context):
self.log.info("Integrate Context to Log")
mongo_client = OpenPypeMongoConnection.get_mongo_client()
database_name = os.environ["OPENPYPE_DATABASE_NAME"]
dbcon = mongo_client[database_name]["webpublishes"]
for instance in context:
self.log.info("ctx_path: {}".format(instance.data.get("ctx_path")))
self.log.info("batch_id: {}".format(instance.data.get("batch_id")))
if instance.data.get("ctx_path") and instance.data.get("batch_id"):
self.log.info("Updating log record")
dbcon.update_one(
{
"batch_id": instance.data.get("batch_id"),
"status": "in_progress"
},
{"$set":
{
"path": instance.data.get("ctx_path")
}}
)
return

View file

@ -11,6 +11,8 @@ from avalon.api import AvalonMongoDB
from openpype.lib import OpenPypeMongoConnection
from openpype_modules.avalon_apps.rest_api import _RestApiEndpoint
from openpype.lib.remote_publish import get_task_data
from openpype.settings import get_project_settings
from openpype.lib import PypeLogger
@ -19,11 +21,16 @@ log = PypeLogger.get_logger("WebServer")
class RestApiResource:
"""Resource carrying needed info and Avalon DB connection for publish."""
def __init__(self, server_manager, executable, upload_dir):
def __init__(self, server_manager, executable, upload_dir,
studio_task_queue=None):
self.server_manager = server_manager
self.upload_dir = upload_dir
self.executable = executable
if studio_task_queue is None:
studio_task_queue = collections.deque()
self.studio_task_queue = studio_task_queue
self.dbcon = AvalonMongoDB()
self.dbcon.install()
@ -33,6 +40,8 @@ class RestApiResource:
return value.isoformat()
if isinstance(value, ObjectId):
return str(value)
if isinstance(value, set):
return list(value)
raise TypeError(value)
@classmethod
@ -175,40 +184,95 @@ class TaskNode(Node):
class WebpublisherBatchPublishEndpoint(_RestApiEndpoint):
"""Triggers headless publishing of batch."""
async def post(self, request) -> Response:
output = {}
log.info("WebpublisherBatchPublishEndpoint called")
content = await request.json()
batch_path = os.path.join(self.resource.upload_dir,
content["batch"])
# Validate existence of openpype executable
openpype_app = self.resource.executable
args = [
openpype_app,
'remotepublish',
batch_path
]
if not openpype_app or not os.path.exists(openpype_app):
msg = "Non existent OpenPype executable {}".format(openpype_app)
raise RuntimeError(msg)
log.info("WebpublisherBatchPublishEndpoint called")
content = await request.json()
# Each filter has extensions which are checked on the first task item
# - the first filter whose extensions match the first task is used
# - a filter defines the command and can extend the arguments dictionary
# This is used only if 'studio_processing' is enabled on the batch
studio_processing_filters = [
# Photoshop filter
{
"extensions": [".psd", ".psb"],
"command": "remotepublishfromapp",
"arguments": {
# Command 'remotepublishfromapp' requires --host argument
"host": "photoshop",
# Override the default 'filespublish' target
# - 'remotepublishfromapp' should publish with the
# 'remotepublish' target instead
"targets": ["remotepublish"]
},
# Should the publish be handled by a queue, i.e. only a
# single process running at a time?
"add_to_queue": True
}
]
batch_dir = os.path.join(self.resource.upload_dir, content["batch"])
# Default command and arguments
command = "remotepublish"
add_args = {
"host": "webpublisher",
# All commands need 'project' and 'user'
"project": content["project_name"],
"user": content["user"]
"user": content["user"],
"targets": ["filespublish"]
}
add_to_queue = False
if content.get("studio_processing"):
log.info("Post processing called for {}".format(batch_dir))
task_data = get_task_data(batch_dir)
for process_filter in studio_processing_filters:
filter_extensions = process_filter.get("extensions") or []
for file_name in task_data["files"]:
file_ext = os.path.splitext(file_name)[-1].lower()
if file_ext in filter_extensions:
# Change command
command = process_filter["command"]
# Update arguments
add_args.update(
process_filter.get("arguments") or {}
)
add_to_queue = process_filter["add_to_queue"]
break
args = [
openpype_app,
command,
batch_dir
]
for key, value in add_args.items():
args.append("--{}".format(key))
args.append(value)
# Skip key values where value is None
if value is not None:
args.append("--{}".format(key))
# Extend list into arguments (targets can be a list)
if isinstance(value, (tuple, list)):
args.extend(value)
else:
args.append(value)
log.info("args:: {}".format(args))
if add_to_queue:
log.debug("Adding to queue")
self.resource.studio_task_queue.append(args)
else:
subprocess.call(args)
subprocess.call(args)
return Response(
status=200,
body=self.resource.encode(output),
content_type="application/json"
)
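As a concrete illustration of the argument building above, a Photoshop batch with 'studio_processing' enabled would spawn roughly the following command; the executable path, batch id, project and user are made-up values:

# Resulting 'args' list passed to subprocess.call / the studio queue
args = [
    "/opt/openpype/openpype_console",   # hypothetical executable
    "remotepublishfromapp",             # command from the matched filter
    "/upload_dir/60f7c1b2e4a5d6001c9d4b1a",
    "--host", "photoshop",
    "--project", "demo_project",
    "--user", "artist@example.com",
    "--targets", "remotepublish",       # list values are extended
]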
@ -245,3 +309,36 @@ class PublishesStatusEndpoint(_RestApiEndpoint):
body=self.resource.encode(output),
content_type="application/json"
)
class ConfiguredExtensionsEndpoint(_RestApiEndpoint):
"""Returns dict of extensions which have mapping to family.
Returns:
{
"file_exts": [],
"sequence_exts": []
}
"""
async def get(self, project_name=None) -> Response:
sett = get_project_settings(project_name)
configured = {
"file_exts": set(),
"sequence_exts": set(),
# workfile extensions that could use "Studio Processing", hardcoded for now
"studio_exts": set(["psd", "psb", "tvpp", "tvp"])
}
collect_conf = sett["webpublisher"]["publish"]["CollectPublishedFiles"]
for _, mapping in collect_conf.get("task_type_to_family", {}).items():
for _family, config in mapping.items():
if config["is_sequence"]:
configured["sequence_exts"].update(config["extensions"])
else:
configured["file_exts"].update(config["extensions"])
return Response(
status=200,
body=self.resource.encode(dict(configured)),
content_type="application/json"
)
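A client can query this endpoint over plain HTTP; a sketch using the 'requests' library, assuming the default port 8079 used by the webserver below and a hypothetical project name:

import requests

resp = requests.get(
    "http://localhost:8079/api/webpublish/configured_ext/demo_project"
)
# e.g. {"file_exts": [...], "sequence_exts": [...],
#       "studio_exts": ["psd", "psb", "tvpp", "tvp"]}
print(resp.json())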

View file

@ -1,8 +1,10 @@
import collections
import time
import os
from datetime import datetime
import requests
import json
import subprocess
from openpype.lib import PypeLogger
@ -14,7 +16,8 @@ from .webpublish_routes import (
WebpublisherHiearchyEndpoint,
WebpublisherProjectsEndpoint,
BatchStatusEndpoint,
PublishesStatusEndpoint
PublishesStatusEndpoint,
ConfiguredExtensionsEndpoint
)
@ -31,10 +34,13 @@ def run_webserver(*args, **kwargs):
port = kwargs.get("port") or 8079
server_manager = webserver_module.create_new_server_manager(port, host)
webserver_url = server_manager.url
# queue for remotepublishfromapp tasks
studio_task_queue = collections.deque()
resource = RestApiResource(server_manager,
upload_dir=kwargs["upload_dir"],
executable=kwargs["executable"])
executable=kwargs["executable"],
studio_task_queue=studio_task_queue)
projects_endpoint = WebpublisherProjectsEndpoint(resource)
server_manager.add_route(
"GET",
@ -49,6 +55,13 @@ def run_webserver(*args, **kwargs):
hiearchy_endpoint.dispatch
)
configured_ext_endpoint = ConfiguredExtensionsEndpoint(resource)
server_manager.add_route(
"GET",
"/api/webpublish/configured_ext/{project_name}",
configured_ext_endpoint.dispatch
)
# triggers publish
webpublisher_task_publish_endpoint = \
WebpublisherBatchPublishEndpoint(resource)
@ -88,6 +101,10 @@ def run_webserver(*args, **kwargs):
if time.time() - last_reprocessed > 20:
reprocess_failed(kwargs["upload_dir"], webserver_url)
last_reprocessed = time.time()
if studio_task_queue:
args = studio_task_queue.popleft()
subprocess.call(args) # blocking call
time.sleep(1.0)

View file

@ -130,6 +130,7 @@ from .applications import (
from .plugin_tools import (
TaskNotSetError,
get_subset_name,
get_subset_name_with_asset_doc,
prepare_template_data,
filter_pyblish_plugins,
set_plugin_attributes_from_settings,
@ -249,6 +250,7 @@ __all__ = [
"TaskNotSetError",
"get_subset_name",
"get_subset_name_with_asset_doc",
"filter_pyblish_plugins",
"set_plugin_attributes_from_settings",
"source_hash",

View file

@ -461,13 +461,8 @@ class ApplicationExecutable:
# On MacOS check if exists path to executable when ends with `.app`
# - it is common that path will lead to "/Applications/Blender" but
# real path is "/Applications/Blender.app"
if (
platform.system().lower() == "darwin"
and not os.path.exists(executable)
):
_executable = executable + ".app"
if os.path.exists(_executable):
executable = _executable
if platform.system().lower() == "darwin":
executable = self.macos_executable_prep(executable)
self.executable_path = executable
@ -477,6 +472,45 @@ class ApplicationExecutable:
def __repr__(self):
return "<{}> {}".format(self.__class__.__name__, self.executable_path)
@staticmethod
def macos_executable_prep(executable):
"""Try to find full path to executable file.
Real executable is stored in '*.app/Contents/MacOS/<executable>'.
Having path to '*.app' gives ability to read it's plist info and
use "CFBundleExecutable" key from plist to know what is "executable."
Plist is stored in '*.app/Contents/Info.plist'.
This is because some '*.app' directories don't have same permissions
as real executable.
"""
# Try to find if there is `.app` file
if not os.path.exists(executable):
_executable = executable + ".app"
if os.path.exists(_executable):
executable = _executable
# Try to find real executable if executable has `Contents` subfolder
contents_dir = os.path.join(executable, "Contents")
if os.path.exists(contents_dir):
executable_filename = None
# Load plist file and check for bundle executable
plist_filepath = os.path.join(contents_dir, "Info.plist")
if os.path.exists(plist_filepath):
import plistlib
# 'plistlib.readPlist' was removed in Python 3.9;
# 'plistlib.load' is the forward compatible replacement
with open(plist_filepath, "rb") as plist_stream:
parsed_plist = plistlib.load(plist_stream)
executable_filename = parsed_plist.get("CFBundleExecutable")
if executable_filename:
executable = os.path.join(
contents_dir, "MacOS", executable_filename
)
return executable
def as_args(self):
return [self.executable_path]
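A quick sketch of what the helper resolves for the Blender example from the comment above (the exact bundle layout is an assumption):

# "/Applications/Blender" does not exist as a file
# -> "/Applications/Blender.app" exists
# -> "Contents/Info.plist" says CFBundleExecutable == "Blender"
# -> resolves to "/Applications/Blender.app/Contents/MacOS/Blender"
executable = ApplicationExecutable.macos_executable_prep(
    "/Applications/Blender"
)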

View file

@ -245,6 +245,27 @@ def process_sequence(
report_items["Source file was not found"].append(msg)
return report_items, 0
delivery_templates = anatomy.templates.get("delivery") or {}
delivery_template = delivery_templates.get(template_name)
if delivery_template is None:
msg = (
"Delivery template \"{}\" in anatomy of project \"{}\""
" was not found"
).format(template_name, anatomy.project_name)
report_items[""].append(msg)
return report_items, 0
# Check if 'frame' key is available in template which is required
# for sequence delivery
if "{frame" not in delivery_template:
msg = (
"Delivery template \"{}\" in anatomy of project \"{}\""
"does not contain '{{frame}}' key to fill. Delivery of sequence"
" can't be processed."
).format(template_name, anatomy.project_name)
report_items[""].append(msg)
return report_items, 0
dir_path, file_name = os.path.split(str(src_path))
context = repre["context"]
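For illustration, a delivery template that passes the '{frame}' check above could look like this hypothetical anatomy entry:

# Hypothetical 'delivery' template from project anatomy
delivery_template = (
    "{root[work]}/{project[name]}/delivery/"
    "{asset}_{subset}_v{version:0>3}.{frame:0>4}.{ext}"
)
# the check only looks for the "{frame" prefix, so padded
# variants such as "{frame:0>4}" are accepted as well
assert "{frame" in delivery_template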

View file

@ -522,6 +522,11 @@ def get_local_site_id():
Identifier is created if it does not exist yet.
"""
# override local id from environment
# used for background syncing
if os.environ.get("OPENPYPE_LOCAL_ID"):
return os.environ["OPENPYPE_LOCAL_ID"]
registry = OpenPypeSettingsRegistry()
try:
return registry.get_item("localId")

View file

@ -2,6 +2,8 @@ import json
import logging
import os
import re
import abc
import six
from .anatomy import Anatomy
@ -196,3 +198,159 @@ def get_project_basic_paths(project_name):
if isinstance(folder_structure, str):
folder_structure = json.loads(folder_structure)
return _list_path_items(folder_structure)
@six.add_metaclass(abc.ABCMeta)
class HostDirmap:
"""
Abstract class for running dirmap on a workfile in a host.
Dirmap is used to translate paths inside of host workfile from one
OS to another. (Eg. arstist created workfile on Win, different artists
opens same file on Linux.)
Expects methods to be implemented inside of host:
on_dirmap_enabled: run host code for enabling dirmap
do_dirmap: run host code to do actual remapping
"""
def __init__(self, host_name, project_settings, sync_module=None):
self.host_name = host_name
self.project_settings = project_settings
self.sync_module = sync_module # to limit reinit of Modules
self._mapping = None # cache mapping
@abc.abstractmethod
def on_enable_dirmap(self):
"""
Run host dependent operation for enabling dirmap if necessary.
"""
@abc.abstractmethod
def dirmap_routine(self, source_path, destination_path):
"""
Run host dependent remapping from source_path to destination_path
"""
def process_dirmap(self):
"""Go through all paths in Settings and set them using `dirmap`.
If the artist has Site Sync enabled, the dirmap mapping is taken
directly from Local Settings when the artist syncs the workfile
locally. Settings are read from 'self.project_settings'.
"""
if not self._mapping:
self._mapping = self.get_mappings(self.project_settings)
if not self._mapping:
return
log.info("Processing directory mapping ...")
self.on_enable_dirmap()
log.info("mapping:: {}".format(self._mapping))
for k, sp in enumerate(self._mapping["source-path"]):
try:
print("{} -> {}".format(sp,
self._mapping["destination-path"][k]))
self.dirmap_routine(sp,
self._mapping["destination-path"][k])
except IndexError:
# missing corresponding destination path
log.error(("invalid dirmap mapping, missing corresponding"
" destination directory."))
break
except RuntimeError:
log.error("invalid path {} -> {}, mapping not registered".format( # noqa: E501
sp, self._mapping["destination-path"][k]
))
continue
def get_mappings(self, project_settings):
"""Get translation from source-path to destination-path.
It checks if Site Sync is enabled and the user chose to use a local
site; in that case the configuration in Local Settings takes precedence.
"""
local_mapping = self._get_local_sync_dirmap(project_settings)
dirmap_label = "{}-dirmap".format(self.host_name)
if not self.project_settings[self.host_name].get(dirmap_label) and \
not local_mapping:
return []
mapping = local_mapping or \
self.project_settings[self.host_name][dirmap_label]["paths"] or {}
enabled = self.project_settings[self.host_name][dirmap_label]["enabled"]
mapping_enabled = enabled or bool(local_mapping)
if not mapping or not mapping_enabled or \
not mapping.get("destination-path") or \
not mapping.get("source-path"):
return []
return mapping
def _get_local_sync_dirmap(self, project_settings):
"""
Returns dirmap if sync to a local project is enabled.
The only valid mapping is from the roots of the remote site to the
local site set in Local Settings.
Args:
project_settings (dict)
Returns:
dict : { "source-path": [XXX], "destination-path": [YYYY]}
"""
mapping = {}
if not project_settings["global"]["sync_server"]["enabled"]:
log.debug("Site Sync not enabled")
return mapping
from openpype.settings.lib import get_site_local_overrides
if not self.sync_module:
from openpype.modules import ModulesManager
manager = ModulesManager()
self.sync_module = manager.modules_by_name["sync_server"]
project_name = os.getenv("AVALON_PROJECT")
active_site = self.sync_module.get_local_normalized_site(
self.sync_module.get_active_site(project_name))
remote_site = self.sync_module.get_local_normalized_site(
self.sync_module.get_remote_site(project_name))
log.debug("active {} - remote {}".format(active_site, remote_site))
if active_site == "local" \
and project_name in self.sync_module.get_enabled_projects()\
and active_site != remote_site:
sync_settings = self.sync_module.get_sync_project_setting(
os.getenv("AVALON_PROJECT"), exclude_locals=False,
cached=False)
active_overrides = get_site_local_overrides(
os.getenv("AVALON_PROJECT"), active_site)
remote_overrides = get_site_local_overrides(
os.getenv("AVALON_PROJECT"), remote_site)
log.debug("local overrides".format(active_overrides))
log.debug("remote overrides".format(remote_overrides))
for root_name, active_site_dir in active_overrides.items():
remote_site_dir = remote_overrides.get(root_name) or\
sync_settings["sites"][remote_site]["root"][root_name]
if os.path.isdir(active_site_dir):
if not mapping.get("destination-path"):
mapping["destination-path"] = []
mapping["destination-path"].append(active_site_dir)
if not mapping.get("source-path"):
mapping["source-path"] = []
mapping["source-path"].append(remote_site_dir)
log.debug("local sync mapping:: {}".format(mapping))
return mapping
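A minimal host implementation only has to provide the two abstract methods. A hypothetical Maya variant is sketched below; 'cmds.dirmap' is Maya's standard path-mapping command, but treat this as an illustration rather than the shipped plugin:

from maya import cmds  # assumes the code runs inside Maya


class MayaDirmap(HostDirmap):
    def on_enable_dirmap(self):
        # Maya's dirmap feature has to be switched on first
        cmds.dirmap(enable=True)

    def dirmap_routine(self, source_path, destination_path):
        cmds.dirmap(mapDirectory=(source_path, destination_path))


# MayaDirmap("maya", project_settings).process_dirmap()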

View file

@ -28,17 +28,44 @@ class TaskNotSetError(KeyError):
super(TaskNotSetError, self).__init__(msg)
def get_subset_name(
def get_subset_name_with_asset_doc(
family,
variant,
task_name,
asset_id,
asset_doc,
project_name=None,
host_name=None,
default_template=None,
dynamic_data=None,
dbcon=None
dynamic_data=None
):
"""Calculate subset name based on passed context and OpenPype settings.
Subset name templates are defined in `project_settings/global/tools/creator
/subset_name_profiles` where profiles are filtered by host name, family,
task name and task type. If the context does not match any profile then
`DEFAULT_SUBSET_TEMPLATE` is used as the default template.
That's the main reason why so many arguments are required to calculate the
subset name.
Args:
family (str): Instance family.
variant (str): In most cases it is user input during creation.
task_name (str): Name of the task in which context the instance is
created.
asset_doc (dict): Queried asset document with its tasks in data.
Used to get the task type.
project_name (str): Name of the project in which the instance is
created. Important for which project settings are loaded.
host_name (str): One of the filtering criteria for template profile
filters.
default_template (str): Default template used when no profile matches
the passed context. Constant 'DEFAULT_SUBSET_TEMPLATE' is used when
not passed.
dynamic_data (dict): Dynamic data specific for the creator which
creates the instance.
"""
if not family:
return ""
@ -53,25 +80,6 @@ def get_subset_name(
project_name = avalon.api.Session["AVALON_PROJECT"]
# Function should expect asset document instead of asset id
# - that way `dbcon` is not needed
if dbcon is None:
from avalon.api import AvalonMongoDB
dbcon = AvalonMongoDB()
dbcon.Session["AVALON_PROJECT"] = project_name
dbcon.install()
asset_doc = dbcon.find_one(
{
"type": "asset",
"_id": asset_id
},
{
"data.tasks": True
}
)
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
task_info = asset_tasks.get(task_name) or {}
task_type = task_info.get("type")
@ -113,6 +121,49 @@ def get_subset_name(
return template.format(**prepare_template_data(fill_pairs))
def get_subset_name(
family,
variant,
task_name,
asset_id,
project_name=None,
host_name=None,
default_template=None,
dynamic_data=None,
dbcon=None
):
"""Calculate subset name using OpenPype settings.
This variant of the function expects an asset id as argument.
This is a legacy function and should be replaced with
`get_subset_name_with_asset_doc`, which expects an asset document.
"""
if dbcon is None:
from avalon.api import AvalonMongoDB
dbcon = AvalonMongoDB()
dbcon.Session["AVALON_PROJECT"] = project_name
dbcon.install()
asset_doc = dbcon.find_one(
{"_id": asset_id},
{"data.tasks": True}
) or {}
return get_subset_name_with_asset_doc(
family,
variant,
task_name,
asset_doc,
project_name,
host_name,
default_template,
dynamic_data
)
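Side by side, the two entry points would be called like this; the ids, names and host are placeholder values:

# Legacy variant - resolves the asset document itself via 'asset_id'
subset = get_subset_name(
    "render", "Main", "compositing", asset_id,
    project_name="demo_project", host_name="nuke"
)

# Preferred variant - the caller already queried the asset document
subset = get_subset_name_with_asset_doc(
    "render", "Main", "compositing", asset_doc,
    project_name="demo_project", host_name="nuke"
)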
def prepare_template_data(fill_pairs):
"""
Prepares formatted data for filling template.
@ -487,3 +538,48 @@ def should_decompress(file_url):
"compression: \"dwab\"" in output
return False
def parse_json(path):
"""Parses json file at 'path' location
Returns:
(dict) or None if unparsable
Raises:
AssertionError if 'path' doesn't exist
"""
path = path.strip('\"')
assert os.path.isfile(path), (
"Path to json file doesn't exist. \"{}\"".format(path)
)
data = None
with open(path, "r") as json_file:
try:
data = json.load(json_file)
except Exception as exc:
log.error(
"Error loading json: "
"{} - Exception: {}".format(path, exc)
)
return data
def get_batch_asset_task_info(ctx):
"""Parses context data from webpublisher's batch metadata
Returns:
(tuple): asset, task_name (Optional), task_type
"""
task_type = "default_task_type"
task_name = None
asset = None
if ctx["type"] == "task":
items = ctx["path"].split('/')
asset = items[-2]
task_name = ctx["name"]
task_type = ctx["attributes"]["type"]
else:
asset = ctx["name"]
return asset, task_name, task_type
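For example, a 'task' type context resolves like this (the path is hypothetical):

ctx = {
    "type": "task",
    "path": "/demo_project/characters/bob/modeling",
    "name": "modeling",
    "attributes": {"type": "Modeling"},
}
# the asset is the second-to-last item of the path
assert get_batch_asset_task_info(ctx) == ("bob", "modeling", "Modeling")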

View file

@ -0,0 +1,41 @@
import weakref
class _weak_callable:
def __init__(self, obj, func):
self.im_self = obj
self.im_func = func
def __call__(self, *args, **kws):
if self.im_self is None:
return self.im_func(*args, **kws)
else:
return self.im_func(self.im_self, *args, **kws)
class WeakMethod:
""" Wraps a function or, more importantly, a bound method in
a way that allows a bound method's object to be GCed, while
providing the same interface as a normal weak reference. """
def __init__(self, fn):
try:
# Python 3 bound methods expose '__self__' and '__func__'
self._obj = weakref.ref(fn.__self__)
self._meth = fn.__func__
except AttributeError:
# It's not a bound method
self._obj = None
self._meth = fn
def __call__(self):
if self._dead():
return None
return _weak_callable(self._getobj(), self._meth)
def _dead(self):
return self._obj is not None and self._obj() is None
def _getobj(self):
if self._obj is None:
return None
return self._obj()
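A short sketch of the intended behaviour (relying on CPython's reference counting to collect the object immediately):

class Sender:
    def send(self):
        return "sent"


obj = Sender()
ref = WeakMethod(obj.send)
assert ref()() == "sent"   # proxy works while 'obj' is alive
del obj
assert ref() is None       # object collected, reference is dead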

View file

@ -22,6 +22,9 @@ def import_filepath(filepath, module_name=None):
if module_name is None:
module_name = os.path.splitext(os.path.basename(filepath))[0]
# Make sure it is not 'unicode' in Python 2
module_name = str(module_name)
# Prepare module object where content of file will be parsed
module = types.ModuleType(module_name)

View file

@ -0,0 +1,186 @@
import os
from datetime import datetime
import sys
from bson.objectid import ObjectId
import pyblish.util
import pyblish.api
from openpype import uninstall
from openpype.lib.mongo import OpenPypeMongoConnection
from openpype.lib.plugin_tools import parse_json
def get_webpublish_conn():
"""Get connection to OP 'webpublishes' collection."""
mongo_client = OpenPypeMongoConnection.get_mongo_client()
database_name = os.environ["OPENPYPE_DATABASE_NAME"]
return mongo_client[database_name]["webpublishes"]
def start_webpublish_log(dbcon, batch_id, user):
"""Start new log record for 'batch_id'
Args:
dbcon (OpenPypeMongoConnection)
batch_id (str)
user (str)
Returns
(ObjectId) from DB
"""
return dbcon.insert_one({
"batch_id": batch_id,
"start_date": datetime.now(),
"user": user,
"status": "in_progress",
"progress": 0.0
}).inserted_id
def publish_and_log(dbcon, _id, log, close_plugin_name=None):
"""Loops through all plugins, logs ok and fails into OP DB.
Args:
dbcon (OpenPypeMongoConnection)
_id (str)
log (OpenPypeLogger)
close_plugin_name (str): name of plugin with responsibility to
close host app
"""
# Error exit as soon as any error occurs.
error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}"
close_plugin = _get_close_plugin(close_plugin_name, log)
if isinstance(_id, str):
_id = ObjectId(_id)
log_lines = []
for result in pyblish.util.publish_iter():
for record in result["records"]:
log_lines.append("{}: {}".format(
result["plugin"].label, record.msg))
if result["error"]:
log.error(error_format.format(**result))
uninstall()
log_lines.append(error_format.format(**result))
dbcon.update_one(
{"_id": _id},
{"$set":
{
"finish_date": datetime.now(),
"status": "error",
"log": os.linesep.join(log_lines)
}}
)
if close_plugin: # close host app explicitly after error
context = pyblish.api.Context()
close_plugin().process(context)
sys.exit(1)
else:
dbcon.update_one(
{"_id": _id},
{"$set":
{
"progress": max(result["progress"], 0.95),
"log": os.linesep.join(log_lines)
}}
)
# final update
dbcon.update_one(
{"_id": _id},
{"$set":
{
"finish_date": datetime.now(),
"status": "finished_ok",
"progress": 1,
"log": os.linesep.join(log_lines)
}}
)
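Put together, a headless publish run would use these helpers roughly as follows; the batch id, user and close plugin name are placeholders:

from openpype.lib import PypeLogger

log = PypeLogger.get_logger("remotepublish")
dbcon = get_webpublish_conn()
_id = start_webpublish_log(
    dbcon, "60f7c1b2e4a5d6001c9d4b1a", "artist@example.com"
)
publish_and_log(dbcon, _id, log, close_plugin_name="ClosePS")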
def fail_batch(_id, batches_in_progress, dbcon):
"""Set current batch as failed as there are some stuck batches."""
running_batches = [str(batch["_id"])
for batch in batches_in_progress
if batch["_id"] != _id]
msg = "There are still running batches {}\n". \
format("\n".join(running_batches))
msg += "Ask admin to check them and reprocess current batch"
dbcon.update_one(
{"_id": _id},
{"$set":
{
"finish_date": datetime.now(),
"status": "error",
"log": msg
}}
)
raise ValueError(msg)
def find_variant_key(application_manager, host):
"""Searches for latest installed variant for 'host'
Args:
application_manager (ApplicationManager)
host (str)
Returns
(string) (optional)
Raises:
(ValueError) if no variant found
"""
app_group = application_manager.app_groups.get(host)
if not app_group or not app_group.enabled:
raise ValueError("No application {} configured".format(host))
found_variant_key = None
# finds most up-to-date variant if any installed
for variant_key, variant in app_group.variants.items():
for executable in variant.executables:
if executable.exists():
found_variant_key = variant_key
if not found_variant_key:
raise ValueError("No executable for {} found".format(host))
return found_variant_key
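For instance, resolving the full application name to launch might look like this; 'ApplicationManager' lives in 'openpype.lib' and the host name is an example:

from openpype.lib import ApplicationManager

application_manager = ApplicationManager()
variant = find_variant_key(application_manager, "photoshop")
app_name = "photoshop/{}".format(variant)   # e.g. "photoshop/2021"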
def _get_close_plugin(close_plugin_name, log):
if close_plugin_name:
plugins = pyblish.api.discover()
for plugin in plugins:
if plugin.__name__ == close_plugin_name:
return plugin
log.warning("Close plugin not found, app might not close.")
def get_task_data(batch_dir):
"""Return parsed data from first task manifest.json
Used for the `remotepublishfromapp` command where the batch contains
only a single task with a publishable workfile.
Returns:
(dict)
Raises:
(ValueError) if batch or task manifest not found or broken
"""
batch_data = parse_json(os.path.join(batch_dir, "manifest.json"))
if not batch_data:
raise ValueError(
"Cannot parse batch meta in {} folder".format(batch_dir))
task_dir_name = batch_data["tasks"][0]
task_data = parse_json(os.path.join(batch_dir, task_dir_name,
"manifest.json"))
if not task_data:
raise ValueError(
"Cannot parse batch meta in {} folder".format(task_data))
return task_data

Some files were not shown because too many files have changed in this diff.