Merge branch 'develop' into enhancement/OP-6951_Resolve-load-clip-to-timeline-at-set-time

This commit is contained in:
Jakub Ježek 2023-11-03 12:20:26 +01:00 committed by GitHub
commit 4e25d4fa52
GPG key ID: 4AEE18F83AFDEB23
177 changed files with 9084 additions and 1838 deletions


@ -35,6 +35,13 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
- 3.17.5-nightly.2
- 3.17.5-nightly.1
- 3.17.4
- 3.17.4-nightly.2
- 3.17.4-nightly.1
- 3.17.3
- 3.17.3-nightly.2
- 3.17.3-nightly.1
- 3.17.2
- 3.17.2-nightly.4
@ -128,13 +135,6 @@ body:
- 3.15.1-nightly.4
- 3.15.1-nightly.3
- 3.15.1-nightly.2
- 3.15.1-nightly.1
- 3.15.0
- 3.15.0-nightly.1
- 3.14.11-nightly.4
- 3.14.11-nightly.3
- 3.14.11-nightly.2
- 3.14.11-nightly.1
validations:
required: true
- type: dropdown


@ -1,6 +1,647 @@
# Changelog
## [3.17.4](https://github.com/ynput/OpenPype/tree/3.17.4)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.3...3.17.4)
### **🆕 New features**
<details>
<summary>Add Support for Husk-AYON Integration <a href="https://github.com/ynput/OpenPype/pull/5816">#5816</a></summary>
This draft pull request introduces support for integrating Husk with AYON within the OpenPype repository.
___
</details>
<details>
<summary>Push to project tool: Prepare push to project tool for AYON <a href="https://github.com/ynput/OpenPype/pull/5770">#5770</a></summary>
Cloned Push to project tool for AYON and modified it.
___
</details>
### **🚀 Enhancements**
<details>
<summary>Max: tycache family support <a href="https://github.com/ynput/OpenPype/pull/5624">#5624</a></summary>
Tycache family support for the tyFlow plugin in Max.
___
</details>
<details>
<summary>Unreal: Changed behaviour for updating assets <a href="https://github.com/ynput/OpenPype/pull/5670">#5670</a></summary>
Changed how assets are updated in Unreal.
___
</details>
<details>
<summary>Unreal: Improved error reporting for Sequence Frame Validator <a href="https://github.com/ynput/OpenPype/pull/5730">#5730</a></summary>
Improved error reporting for Sequence Frame Validator.
___
</details>
<details>
<summary>Max: Setting tweaks on Review Family <a href="https://github.com/ynput/OpenPype/pull/5744">#5744</a></summary>
- Bug fix of not being able to publish the preferred visual style when creating preview animation
- Exposes the parameters after creating instance
- Add the Quality settings and viewport texture settings for preview animation
- add use selection for create review
___
</details>
<details>
<summary>Max: Add families with frame range extractions back to the frame range validator <a href="https://github.com/ynput/OpenPype/pull/5757">#5757</a></summary>
In 3ds Max, some instances export files over a frame range but were not covered by the optional frame range validator. This PR adds the optional frame range validator to those instances, allowing users to check that the frame range aligns with the context data from the DB. The following families now have the optional frame range validator:
- maxrender
- review
- camera
- redshift proxy
- pointcache
- point cloud (tyFlow PRT)
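A minimal sketch of what such an optional frame-range check might do (hypothetical `instance`/`context` dicts; the real validator is a pyblish plugin working on OpenPype context data):

```python
# Hypothetical optional validator: compare an instance's frame range
# against the context (DB) frame range, as described above.
def validate_frame_range(instance, context):
    mismatches = [
        key for key in ("frameStart", "frameEnd")
        if instance.get(key) != context.get(key)
    ]
    if mismatches:
        raise ValueError(f"Frame range mismatch on: {', '.join(mismatches)}")

context = {"frameStart": 1001, "frameEnd": 1050}
validate_frame_range({"frameStart": 1001, "frameEnd": 1050}, context)  # passes
try:
    validate_frame_range({"frameStart": 1001, "frameEnd": 1060}, context)
except ValueError as exc:
    print(exc)  # → Frame range mismatch on: frameEnd
```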
___
</details>
<details>
<summary>TimersManager: Use available data to get context info <a href="https://github.com/ynput/OpenPype/pull/5804">#5804</a></summary>
Get context information from pyblish context data instead of using `legacy_io`.
___
</details>
<details>
<summary>Chore: Removed unused variable from `AbstractCollectRender` <a href="https://github.com/ynput/OpenPype/pull/5805">#5805</a></summary>
Removed unused `_asset` variable from `RenderInstance`.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Bugfix/houdini: wrong frame calculation with handles <a href="https://github.com/ynput/OpenPype/pull/5698">#5698</a></summary>
This PR makes collect plugins consider `handleStart` and `handleEnd` when collecting the frame range. It affects three parts:
- get frame range in collect plugins
- expected file in render plugins
- submit houdini job deadline plugin
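The handle arithmetic behind the fix can be sketched as follows (hypothetical data keys, mirroring the `handleStart`/`handleEnd` naming used in the PR):

```python
def frame_range_with_handles(data):
    """Widen the publish frame range by its handles (illustrative only)."""
    start = data["frameStart"] - data.get("handleStart", 0)
    end = data["frameEnd"] + data.get("handleEnd", 0)
    return start, end

print(frame_range_with_handles(
    {"frameStart": 1001, "frameEnd": 1050, "handleStart": 10, "handleEnd": 10}
))  # → (991, 1060)
```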
___
</details>
<details>
<summary>Nuke: ayon server settings improvements <a href="https://github.com/ynput/OpenPype/pull/5746">#5746</a></summary>
Nuke settings were not aligned with OpenPype settings, and labels needed improvement.
___
</details>
<details>
<summary>Blender: Fix pointcache family and fix alembic extractor <a href="https://github.com/ynput/OpenPype/pull/5747">#5747</a></summary>
Fixed `pointcache` family and fixed behaviour of the alembic extractor.
___
</details>
<details>
<summary>AYON: Remove 'shotgun_api3' from dependencies <a href="https://github.com/ynput/OpenPype/pull/5803">#5803</a></summary>
Removed `shotgun_api3` dependency from openpype dependencies for AYON launcher. The dependency is already defined in shotgrid addon and change of version causes clashes.
___
</details>
<details>
<summary>Chore: Fix typo in filename <a href="https://github.com/ynput/OpenPype/pull/5807">#5807</a></summary>
Move content of `contants.py` into `constants.py`.
___
</details>
<details>
<summary>Chore: Create context respects instance changes <a href="https://github.com/ynput/OpenPype/pull/5809">#5809</a></summary>
Fix issue with unrespected change propagation in `CreateContext`. All successfully saved instances are marked as saved so they have no changes. Origin data of an instance are explicitly not handled directly by the object but by the attribute wrappers.
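A self-contained sketch of the idea (the `TrackedInstance` class is illustrative, not the actual `CreateContext` API): an instance remembers its origin data, changes are whatever differs from it, and a successful save resets the origin so the instance reports no changes.

```python
# Hypothetical change tracking for a created instance.
class TrackedInstance:
    def __init__(self, data):
        self._origin = dict(data)  # snapshot of last-saved state
        self.data = dict(data)

    def changes(self):
        # Keys whose current value differs from the saved origin
        return {k: v for k, v in self.data.items() if self._origin.get(k) != v}

    def mark_saved(self):
        # Saving resets the origin, so the instance has no changes
        self._origin = dict(self.data)

inst = TrackedInstance({"subset": "modelMain", "active": True})
inst.data["active"] = False
print(inst.changes())  # → {'active': False}
inst.mark_saved()
print(inst.changes())  # → {}
```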
___
</details>
<details>
<summary>Blender: Fix tools handling in AYON mode <a href="https://github.com/ynput/OpenPype/pull/5811">#5811</a></summary>
Skip logic in `before_window_show` in Blender when in AYON mode. Most of what is called there happens automatically on show.
___
</details>
<details>
<summary>Blender: Include Grease Pencil in review and thumbnails <a href="https://github.com/ynput/OpenPype/pull/5812">#5812</a></summary>
Include Grease Pencil in review and thumbnails.
___
</details>
<details>
<summary>Workfiles tool AYON: Fix double click of workfile <a href="https://github.com/ynput/OpenPype/pull/5813">#5813</a></summary>
Fix double click on workfiles in workfiles tool to open the file.
___
</details>
<details>
<summary>Webpublisher: removal of usage of no_of_frames in error message <a href="https://github.com/ynput/OpenPype/pull/5819">#5819</a></summary>
If an exception is thrown, the `no_of_frames` value won't be available, so it doesn't make sense to log it.
___
</details>
<details>
<summary>Attribute Defs: Hide multivalue widget in Number by default <a href="https://github.com/ynput/OpenPype/pull/5821">#5821</a></summary>
Fixed default look of `NumberAttrWidget` by hiding its multiselection widget.
___
</details>
### **Merged pull requests**
<details>
<summary>Corrected a typo in Readme.md (Top -> To) <a href="https://github.com/ynput/OpenPype/pull/5800">#5800</a></summary>
___
</details>
<details>
<summary>Photoshop: Removed redundant copy of extension.zxp <a href="https://github.com/ynput/OpenPype/pull/5802">#5802</a></summary>
`extension.zxp` shouldn't be inside of extension folder.
___
</details>
## [3.17.3](https://github.com/ynput/OpenPype/tree/3.17.3)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.2...3.17.3)
### **🆕 New features**
<details>
<summary>Maya: Multi-shot Layout Creator <a href="https://github.com/ynput/OpenPype/pull/5710">#5710</a></summary>
The new Multi-shot Layout creator automates creation of new Layout instances in Maya, associated with the correct shots, frame ranges, and the Camera Sequencer.
___
</details>
<details>
<summary>Colorspace: ociolook file product type workflow <a href="https://github.com/ynput/OpenPype/pull/5541">#5541</a></summary>
Traypublisher support for publishing colorspace look files (ociolook), which are JSON files holding any LUT files. This new product is currently available for loading in the Nuke host. Added a colorspace selector to the publisher attributes with better labeling. Roles and aliases are also supported (v2 configs only).
___
</details>
<details>
<summary>Scene Inventory tool: Refactor Scene Inventory tool (for AYON) <a href="https://github.com/ynput/OpenPype/pull/5758">#5758</a></summary>
Modified scene inventory tool for AYON. The main difference is in how project name is defined and replacement of assets combobox with folders dialog.
___
</details>
<details>
<summary>AYON: Support dev bundles <a href="https://github.com/ynput/OpenPype/pull/5783">#5783</a></summary>
Modules can be loaded from a different location in AYON dev mode.
___
</details>
### **🚀 Enhancements**
<details>
<summary>Testing: Ingest Maya userSetup <a href="https://github.com/ynput/OpenPype/pull/5734">#5734</a></summary>
Suggesting to ingest `userSetup.py` startup script for easier collaboration and transparency of testing.
___
</details>
<details>
<summary>Fusion: Work with pathmaps <a href="https://github.com/ynput/OpenPype/pull/5329">#5329</a></summary>
Path maps are a big part of our Fusion workflow. We map the project folder to a path map within Fusion so all loaders and savers point to the path map variable. This way any computer on any OS can open any comp no matter where the project folder is located.
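A rough illustration of the idea (the mapping table and `resolve` helper are hypothetical; Fusion resolves its path maps internally):

```python
# Hypothetical path-map resolution, mimicking Fusion-style prefix mappings:
# each machine maps the same prefix to its local project root.
PATH_MAPS = {"Project:": "/mnt/projects/show_a/"}

def resolve(path):
    for prefix, root in PATH_MAPS.items():
        if path.startswith(prefix):
            return root + path[len(prefix):]
    return path

print(resolve("Project:renders/shot010/comp.exr"))
# → /mnt/projects/show_a/renders/shot010/comp.exr
```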
___
</details>
<details>
<summary>Maya: Add Maya 2024 and remove pre 2022. <a href="https://github.com/ynput/OpenPype/pull/5674">#5674</a></summary>
Adding Maya 2024 as a default application variant. Removing Maya 2020 and older, as these are not supported anymore.
___
</details>
<details>
<summary>Enhancement: Houdini: Allow using template keys in Houdini shelves manager <a href="https://github.com/ynput/OpenPype/pull/5727">#5727</a></summary>
Allow using Template keys in Houdini shelves manager.
___
</details>
<details>
<summary>Houdini: Fix Show in usdview loader action <a href="https://github.com/ynput/OpenPype/pull/5737">#5737</a></summary>
Fix the "Show in USD View" loader to show up in Houdini
___
</details>
<details>
<summary>Nuke: validator of asset context with repair actions <a href="https://github.com/ynput/OpenPype/pull/5749">#5749</a></summary>
Instance nodes with a different asset and task context can now be validated and repaired via a repair action.
___
</details>
<details>
<summary>AYON: Tools enhancements <a href="https://github.com/ynput/OpenPype/pull/5753">#5753</a></summary>
A few enhancements and tweaks of AYON-related tools.
___
</details>
<details>
<summary>Max: Tweaks on ValidateMaxContents <a href="https://github.com/ynput/OpenPype/pull/5759">#5759</a></summary>
This PR provides the following enhancements to ValidateMaxContents:
- Rename `ValidateMaxContents` to `ValidateContainers`
- Add the related families which are required to pass the validation (all families except `Render`, as the render instance is the only one that allows an empty container)
___
</details>
<details>
<summary>Enhancement: Nuke refactor `SelectInvalidAction` <a href="https://github.com/ynput/OpenPype/pull/5762">#5762</a></summary>
Refactor `SelectInvalidAction` to behave like similar actions in other hosts, and create `SelectInstanceNodeAction` as a dedicated action to select the instance node for a failed plugin.
- Note: Selecting the instance node will still select it even if the user has since 'fixed' the problem.
___
</details>
<details>
<summary>Enhancement: Tweak logging for Nuke for artist facing reports <a href="https://github.com/ynput/OpenPype/pull/5763">#5763</a></summary>
Tweak logs that are not artist-facing to debug level + in some cases clarify what the logged value is.
___
</details>
<details>
<summary>AYON Settings: Disk mapping <a href="https://github.com/ynput/OpenPype/pull/5786">#5786</a></summary>
Added disk mapping settings to core addon settings.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Maya: add colorspace argument to redshiftTextureProcessor <a href="https://github.com/ynput/OpenPype/pull/5645">#5645</a></summary>
In color-managed Maya, texture processing during look extraction wasn't passing the texture colorspaces set on textures to the `redshiftTextureProcessor` tool. This caused the tool to produce a non-zero exit code (even though the texture was converted, into the wrong colorspace) and therefore crash the extractor. This PR passes the colorspace to the tool if color management is enabled.
___
</details>
<details>
<summary>Maya: don't call `cmds.ogs()` in headless mode <a href="https://github.com/ynput/OpenPype/pull/5769">#5769</a></summary>
`cmds.ogs()` is a call that will crash if Maya is running in headless mode (mayabatch, mayapy). This handles that case.
___
</details>
<details>
<summary>Resolve: inventory management fix <a href="https://github.com/ynput/OpenPype/pull/5673">#5673</a></summary>
Loaded timeline item containers are now updating correctly and version management works as it is supposed to.
- [x] updating loaded timeline items
- [x] Removing of loaded timeline items
___
</details>
<details>
<summary>Blender: Remove 'update_hierarchy' <a href="https://github.com/ynput/OpenPype/pull/5756">#5756</a></summary>
Remove `update_hierarchy` function which is causing crashes in scene inventory tool.
___
</details>
<details>
<summary>Max: bug fix on the settings in pointcloud family <a href="https://github.com/ynput/OpenPype/pull/5768">#5768</a></summary>
Bug fix for the settings erroring out in the point cloud validator (see https://github.com/ynput/OpenPype/pull/5759#pullrequestreview-1676681705) and possibly in the point cloud extractor.
___
</details>
<details>
<summary>AYON settings: Fix default factory of tools <a href="https://github.com/ynput/OpenPype/pull/5773">#5773</a></summary>
Fix default factory of application tools.
___
</details>
<details>
<summary>Fusion: added missing OPENPYPE_VERSION <a href="https://github.com/ynput/OpenPype/pull/5776">#5776</a></summary>
Fusion submission to Deadline was missing the OPENPYPE_VERSION env var when submitting from a build (not directly from source code). This missing env var might break rendering on Deadline if the path to the OP executable (openpype_console.exe) is not set explicitly, and might cause issues when different versions of OP are deployed. This PR adds the environment variable.
___
</details>
<details>
<summary>Ftrack: Skip tasks when looking for asset equivalent entity <a href="https://github.com/ynput/OpenPype/pull/5777">#5777</a></summary>
Skip tasks when looking for asset equivalent entity.
___
</details>
<details>
<summary>Nuke: loading gizmos fixes <a href="https://github.com/ynput/OpenPype/pull/5779">#5779</a></summary>
The gizmo product is now offered in the Loader as a plugin. It also updates as expected.
___
</details>
<details>
<summary>General: thumbnail extractor as last extractor <a href="https://github.com/ynput/OpenPype/pull/5780">#5780</a></summary>
Fixing issue with the order of the `ExtractOIIOTranscode` and `ExtractThumbnail` plugins. The problem was that the `ExtractThumbnail` plugin was processed before the `ExtractOIIOTranscode` plugin. As a result, the `ExtractThumbnail` plugin did not inherit the `review` tag into the representation data. This caused the `ExtractThumbnail` plugin to fail in processing and creating thumbnails.
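The fix comes down to pyblish's numeric `order` sorting; a minimal stand-alone model (the `EXTRACTOR_ORDER` constant and empty class bodies are illustrative, not the actual plugins):

```python
# Hypothetical model of pyblish-style plugin ordering:
# plugins run sorted by their numeric "order" attribute.
EXTRACTOR_ORDER = 2.0  # stand-in for pyblish.api.ExtractorOrder

class ExtractOIIOTranscode:
    order = EXTRACTOR_ORDER + 0.01  # must run first: adds the "review" tag

class ExtractThumbnail:
    order = EXTRACTOR_ORDER + 0.02  # runs after, so it inherits the tag

plugins = sorted([ExtractThumbnail, ExtractOIIOTranscode], key=lambda p: p.order)
print([p.__name__ for p in plugins])
# → ['ExtractOIIOTranscode', 'ExtractThumbnail']
```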
___
</details>
<details>
<summary>Bug: fix key in application json <a href="https://github.com/ynput/OpenPype/pull/5787">#5787</a></summary>
In PR #5705 `maya` was wrongly used instead of `mayapy`, breaking AYON defaults in AYON Application Addon.
___
</details>
<details>
<summary>'NumberAttrWidget' shows 'Multiselection' label on multiselection <a href="https://github.com/ynput/OpenPype/pull/5792">#5792</a></summary>
Attribute definition widget 'NumberAttrWidget' shows `< Multiselection >` label on multiselection.
___
</details>
<details>
<summary>Publisher: Selection change by enabled checkbox on instance update attributes <a href="https://github.com/ynput/OpenPype/pull/5793">#5793</a></summary>
Changing an instance by clicking its enabled checkbox now updates the attributes on the right side to match the selection.
___
</details>
<details>
<summary>Houdini: Remove `setParms` call since it's responsibility of `self.imprint` to set the values <a href="https://github.com/ynput/OpenPype/pull/5796">#5796</a></summary>
Reverts a recent change made in #5621 due to a review comment; the change was faulty, as noted in the linked discussion.
___
</details>
<details>
<summary>AYON loader: Fix SubsetLoader functionality <a href="https://github.com/ynput/OpenPype/pull/5799">#5799</a></summary>
Fix SubsetLoader plugin processing in AYON loader tool.
___
</details>
### **Merged pull requests**
<details>
<summary>Houdini: Add self publish button <a href="https://github.com/ynput/OpenPype/pull/5621">#5621</a></summary>
This PR enables single publishing by adding a publish button to created ROP nodes in Houdini. Admins are welcome to enable it from Houdini general settings. The publish button also includes all input publish instances. In this screenshot, the alembic instance is ignored because the switch is turned off.
___
</details>
<details>
<summary>Nuke: fixing UNC support for OCIO path <a href="https://github.com/ynput/OpenPype/pull/5771">#5771</a></summary>
UNC paths were broken on Windows for a custom OCIO path; this solves the issue with the double slash being removed at the start of the path.
___
</details>
## [3.17.2](https://github.com/ynput/OpenPype/tree/3.17.2)


@ -279,7 +279,7 @@ arguments and it will create zip file that OpenPype can use.
Building documentation
----------------------
Top build API documentation, run `.\tools\make_docs(.ps1|.sh)`. It will create html documentation
To build API documentation, run `.\tools\make_docs(.ps1|.sh)`. It will create html documentation
from current sources in `.\docs\build`.
**Note that it needs existing virtual environment.**


@ -148,13 +148,14 @@ def applied_view(window, camera, isolate=None, options=None):
area.ui_type = "VIEW_3D"
meshes = [obj for obj in window.scene.objects if obj.type == "MESH"]
types = {"MESH", "GPENCIL"}
objects = [obj for obj in window.scene.objects if obj.type in types]
if camera == "AUTO":
space.region_3d.view_perspective = "ORTHO"
isolate_objects(window, isolate or meshes)
isolate_objects(window, isolate or objects)
else:
isolate_objects(window, isolate or meshes)
isolate_objects(window, isolate or objects)
space.camera = window.scene.objects.get(camera)
space.region_3d.view_perspective = "CAMERA"


@ -284,6 +284,8 @@ class LaunchLoader(LaunchQtApp):
_tool_name = "loader"
def before_window_show(self):
if AYON_SERVER_ENABLED:
return
self._window.set_context(
{"asset": get_current_asset_name()},
refresh=True
@ -309,6 +311,8 @@ class LaunchManager(LaunchQtApp):
_tool_name = "sceneinventory"
def before_window_show(self):
if AYON_SERVER_ENABLED:
return
self._window.refresh()
@ -320,6 +324,8 @@ class LaunchLibrary(LaunchQtApp):
_tool_name = "libraryloader"
def before_window_show(self):
if AYON_SERVER_ENABLED:
return
self._window.refresh()
@ -340,6 +346,8 @@ class LaunchWorkFiles(LaunchQtApp):
return result
def before_window_show(self):
if AYON_SERVER_ENABLED:
return
self._window.root = str(Path(
os.environ.get("AVALON_WORKDIR", ""),
os.environ.get("AVALON_SCENEDIR", ""),


@ -3,11 +3,11 @@
import bpy
from openpype.pipeline import get_current_task_name
import openpype.hosts.blender.api.plugin
from openpype.hosts.blender.api import lib
from openpype.hosts.blender.api import plugin, lib, ops
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):
class CreatePointcache(plugin.Creator):
"""Polygonal static geometry"""
name = "pointcacheMain"
@ -16,20 +16,36 @@ class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):
icon = "gears"
def process(self):
""" Run the creator on Blender main thread"""
mti = ops.MainThreadItem(self._process)
ops.execute_in_main_thread(mti)
def _process(self):
# Get Instance Container or create it if it does not exist
instances = bpy.data.collections.get(AVALON_INSTANCES)
if not instances:
instances = bpy.data.collections.new(name=AVALON_INSTANCES)
bpy.context.scene.collection.children.link(instances)
# Create instance object
asset = self.data["asset"]
subset = self.data["subset"]
name = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
collection = bpy.data.collections.new(name=name)
bpy.context.scene.collection.children.link(collection)
name = plugin.asset_name(asset, subset)
asset_group = bpy.data.objects.new(name=name, object_data=None)
asset_group.empty_display_type = 'SINGLE_ARROW'
instances.objects.link(asset_group)
self.data['task'] = get_current_task_name()
lib.imprint(collection, self.data)
lib.imprint(asset_group, self.data)
# Add selected objects to instance
if (self.options or {}).get("useSelection"):
objects = lib.get_selection()
for obj in objects:
collection.objects.link(obj)
if obj.type == 'EMPTY':
objects.extend(obj.children)
bpy.context.view_layer.objects.active = asset_group
selected = lib.get_selection()
for obj in selected:
if obj.parent in selected:
obj.select_set(False)
continue
selected.append(asset_group)
bpy.ops.object.parent_set(keep_transform=True)
return collection
return asset_group


@ -60,18 +60,29 @@ class CacheModelLoader(plugin.AssetLoader):
imported = lib.get_selection()
# Children must be linked before parents,
# otherwise the hierarchy will break
# Use first EMPTY without parent as container
container = next(
(obj for obj in imported
if obj.type == "EMPTY" and not obj.parent),
None
)
objects = []
if container:
nodes = list(container.children)
for obj in imported:
obj.parent = asset_group
for obj in nodes:
obj.parent = asset_group
for obj in imported:
objects.append(obj)
imported.extend(list(obj.children))
bpy.data.objects.remove(container)
objects.reverse()
objects.extend(nodes)
for obj in nodes:
objects.extend(obj.children_recursive)
else:
for obj in imported:
obj.parent = asset_group
objects = imported
for obj in objects:
# Unlink the object from all collections
@ -137,6 +148,7 @@ class CacheModelLoader(plugin.AssetLoader):
bpy.context.scene.collection.children.link(containers)
asset_group = bpy.data.objects.new(group_name, object_data=None)
asset_group.empty_display_type = 'SINGLE_ARROW'
containers.objects.link(asset_group)
objects = self._process(libpath, asset_group, group_name)


@ -19,85 +19,51 @@ class CollectInstances(pyblish.api.ContextPlugin):
@staticmethod
def get_asset_groups() -> Generator:
"""Return all 'model' collections.
Check if the family is 'model' and if it doesn't have the
representation set. If the representation is set, it is a loaded model
and we don't want to publish it.
"""Return all instances that are empty objects asset groups.
"""
instances = bpy.data.collections.get(AVALON_INSTANCES)
for obj in instances.objects:
avalon_prop = obj.get(AVALON_PROPERTY) or dict()
for obj in list(instances.objects) + list(instances.children):
avalon_prop = obj.get(AVALON_PROPERTY) or {}
if avalon_prop.get('id') == 'pyblish.avalon.instance':
yield obj
@staticmethod
def get_collections() -> Generator:
"""Return all 'model' collections.
Check if the family is 'model' and if it doesn't have the
representation set. If the representation is set, it is a loaded model
and we don't want to publish it.
"""
for collection in bpy.data.collections:
avalon_prop = collection.get(AVALON_PROPERTY) or dict()
if avalon_prop.get('id') == 'pyblish.avalon.instance':
yield collection
def create_instance(context, group):
avalon_prop = group[AVALON_PROPERTY]
asset = avalon_prop['asset']
family = avalon_prop['family']
subset = avalon_prop['subset']
task = avalon_prop['task']
name = f"{asset}_{subset}"
return context.create_instance(
name=name,
family=family,
families=[family],
subset=subset,
asset=asset,
task=task,
)
def process(self, context):
"""Collect the models from the current Blender scene."""
asset_groups = self.get_asset_groups()
collections = self.get_collections()
for group in asset_groups:
avalon_prop = group[AVALON_PROPERTY]
asset = avalon_prop['asset']
family = avalon_prop['family']
subset = avalon_prop['subset']
task = avalon_prop['task']
name = f"{asset}_{subset}"
instance = context.create_instance(
name=name,
family=family,
families=[family],
subset=subset,
asset=asset,
task=task,
)
objects = list(group.children)
members = set()
for obj in objects:
objects.extend(list(obj.children))
members.add(obj)
members.add(group)
instance[:] = list(members)
self.log.debug(json.dumps(instance.data, indent=4))
for obj in instance:
self.log.debug(obj)
instance = self.create_instance(context, group)
members = []
if isinstance(group, bpy.types.Collection):
members = list(group.objects)
family = instance.data["family"]
if family == "animation":
for obj in group.objects:
if obj.type == 'EMPTY' and obj.get(AVALON_PROPERTY):
members.extend(
child for child in obj.children
if child.type == 'ARMATURE')
else:
members = group.children_recursive
for collection in collections:
avalon_prop = collection[AVALON_PROPERTY]
asset = avalon_prop['asset']
family = avalon_prop['family']
subset = avalon_prop['subset']
task = avalon_prop['task']
name = f"{asset}_{subset}"
instance = context.create_instance(
name=name,
family=family,
families=[family],
subset=subset,
asset=asset,
task=task,
)
members = list(collection.objects)
if family == "animation":
for obj in collection.objects:
if obj.type == 'EMPTY' and obj.get(AVALON_PROPERTY):
for child in obj.children:
if child.type == 'ARMATURE':
members.append(child)
members.append(collection)
members.append(group)
instance[:] = members
self.log.debug(json.dumps(instance.data, indent=4))
for obj in instance:


@ -31,11 +31,12 @@ class CollectReview(pyblish.api.InstancePlugin):
focal_length = cameras[0].data.lens
# get isolate objects list from meshes instance members .
# get isolate objects list from meshes instance members.
types = {"MESH", "GPENCIL"}
isolate_objects = [
obj
for obj in instance
if isinstance(obj, bpy.types.Object) and obj.type == "MESH"
if isinstance(obj, bpy.types.Object) and obj.type in types
]
if not instance.data.get("remove"):


@ -12,8 +12,7 @@ class ExtractABC(publish.Extractor):
label = "Extract ABC"
hosts = ["blender"]
families = ["model", "pointcache"]
optional = True
families = ["pointcache"]
def process(self, instance):
# Define extract output file path
@ -22,7 +21,7 @@ class ExtractABC(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()
@ -62,3 +61,12 @@ class ExtractABC(publish.Extractor):
self.log.info("Extracted instance '%s' to: %s",
instance.name, representation)
class ExtractModelABC(ExtractABC):
"""Extract model as ABC."""
label = "Extract Model ABC"
hosts = ["blender"]
families = ["model"]
optional = True


@ -21,7 +21,7 @@ class ExtractAnimationABC(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()


@ -21,7 +21,7 @@ class ExtractBlend(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
data_blocks = set()


@ -21,7 +21,7 @@ class ExtractBlendAnimation(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
data_blocks = set()


@ -22,7 +22,7 @@ class ExtractCameraABC(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()


@ -21,7 +21,7 @@ class ExtractCamera(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()


@ -22,7 +22,7 @@ class ExtractFBX(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()


@ -23,7 +23,7 @@ class ExtractAnimationFBX(publish.Extractor):
stagingdir = self.staging_dir(instance)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
# The first collection object in the instance is taken, as there
# should be only one that contains the asset group.


@ -117,7 +117,7 @@ class ExtractLayout(publish.Extractor):
stagingdir = self.staging_dir(instance)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
if "representations" not in instance.data:
instance.data["representations"] = []


@ -24,9 +24,7 @@ class ExtractPlayblast(publish.Extractor):
order = pyblish.api.ExtractorOrder + 0.01
def process(self, instance):
self.log.info("Extracting capture..")
self.log.info(instance.data)
self.log.debug("Extracting capture..")
# get scene fps
fps = instance.data.get("fps")
@ -34,14 +32,14 @@ class ExtractPlayblast(publish.Extractor):
fps = bpy.context.scene.render.fps
instance.data["fps"] = fps
self.log.info(f"fps: {fps}")
self.log.debug(f"fps: {fps}")
# If start and end frames cannot be determined,
# get them from Blender timeline.
start = instance.data.get("frameStart", bpy.context.scene.frame_start)
end = instance.data.get("frameEnd", bpy.context.scene.frame_end)
self.log.info(f"start: {start}, end: {end}")
self.log.debug(f"start: {start}, end: {end}")
assert end > start, "Invalid time range !"
# get cameras
@ -55,7 +53,7 @@ class ExtractPlayblast(publish.Extractor):
filename = instance.name
path = os.path.join(stagingdir, filename)
self.log.info(f"Outputting images to {path}")
self.log.debug(f"Outputting images to {path}")
project_settings = instance.context.data["project_settings"]["blender"]
presets = project_settings["publish"]["ExtractPlayblast"]["presets"]
@ -100,7 +98,7 @@ class ExtractPlayblast(publish.Extractor):
frame_collection = collections[0]
self.log.info(f"We found collection of interest {frame_collection}")
self.log.debug(f"Found collection of interest {frame_collection}")
instance.data.setdefault("representations", [])


@ -24,13 +24,13 @@ class ExtractThumbnail(publish.Extractor):
presets = {}
def process(self, instance):
self.log.info("Extracting capture..")
self.log.debug("Extracting capture..")
stagingdir = self.staging_dir(instance)
filename = instance.name
path = os.path.join(stagingdir, filename)
self.log.info(f"Outputting images to {path}")
self.log.debug(f"Outputting images to {path}")
camera = instance.data.get("review_camera", "AUTO")
start = instance.data.get("frameStart", bpy.context.scene.frame_start)
@ -61,7 +61,7 @@ class ExtractThumbnail(publish.Extractor):
thumbnail = os.path.basename(self._fix_output_path(path))
self.log.info(f"thumbnail: {thumbnail}")
self.log.debug(f"thumbnail: {thumbnail}")
instance.data.setdefault("representations", [])


@ -10,7 +10,7 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
optional = True
hosts = ["blender"]
families = ["animation", "model", "rig", "action", "layout", "blendScene",
"render"]
"pointcache", "render"]
def process(self, context):


@ -280,7 +280,11 @@ def get_current_comp():
@contextlib.contextmanager
def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD"):
def comp_lock_and_undo_chunk(
comp,
undo_queue_name="Script CMD",
keep_undo=True,
):
"""Lock comp and open an undo chunk during the context"""
try:
comp.Lock()
@ -288,4 +292,4 @@ def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD"):
yield
finally:
comp.Unlock()
comp.EndUndo()
comp.EndUndo(keep_undo)
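A self-contained sketch of the updated context manager against a fake comp object (the real `comp` comes from Fusion's scripting API via `get_current_comp()`; the `StartUndo` call between locking and yielding is an assumption based on Fusion's undo API):

```python
import contextlib

class FakeComp:
    """Stand-in for a Fusion composition object (illustration only)."""
    def __init__(self):
        self.calls = []

    def Lock(self):
        self.calls.append("Lock")

    def StartUndo(self, name):
        self.calls.append(f"StartUndo:{name}")

    def Unlock(self):
        self.calls.append("Unlock")

    def EndUndo(self, keep):
        self.calls.append(f"EndUndo:{keep}")

@contextlib.contextmanager
def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD", keep_undo=True):
    """Lock comp and open an undo chunk during the context."""
    try:
        comp.Lock()
        comp.StartUndo(undo_queue_name)
        yield
    finally:
        comp.Unlock()
        comp.EndUndo(keep_undo)

comp = FakeComp()
with comp_lock_and_undo_chunk(comp, "Create tool", keep_undo=False):
    pass  # tool creation would go here
print(comp.calls)
# → ['Lock', 'StartUndo:Create tool', 'Unlock', 'EndUndo:False']
```

Passing `keep_undo=False` lets callers discard the undo chunk instead of keeping it on the queue, which is the new behavior the diff adds.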


@ -69,8 +69,6 @@ class CreateSaver(NewCreator):
# TODO Is this needed?
saver[file_format]["SaveAlpha"] = 1
self._imprint(saver, instance_data)
# Register the CreatedInstance
instance = CreatedInstance(
family=self.family,
@ -78,6 +76,8 @@ class CreateSaver(NewCreator):
data=instance_data,
creator=self,
)
data = instance.data_to_store()
self._imprint(saver, data)
# Insert the transient data
instance.transient_data["tool"] = saver

View file

@ -11,6 +11,7 @@ class FusionSetFrameRangeLoader(load.LoaderPlugin):
families = ["animation",
"camera",
"imagesequence",
"render",
"yeticache",
"pointcache",
"render"]
@ -46,6 +47,7 @@ class FusionSetFrameRangeWithHandlesLoader(load.LoaderPlugin):
families = ["animation",
"camera",
"imagesequence",
"render",
"yeticache",
"pointcache",
"render"]

View file

@ -0,0 +1,87 @@
from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.hosts.fusion.api import (
imprint_container,
get_current_comp,
comp_lock_and_undo_chunk
)
from openpype.hosts.fusion.api.lib import get_fusion_module
class FusionLoadUSD(load.LoaderPlugin):
"""Load USD into Fusion
Support for USD was added in Fusion 18.5
"""
families = ["*"]
representations = ["*"]
extensions = {"usd", "usda", "usdz"}
label = "Load USD"
order = -10
icon = "code-fork"
color = "orange"
tool_type = "uLoader"
@classmethod
def apply_settings(cls, project_settings, system_settings):
super(FusionLoadUSD, cls).apply_settings(project_settings,
system_settings)
if cls.enabled:
# Enable only in Fusion 18.5+
fusion = get_fusion_module()
version = fusion.GetVersion()
major = version[1]
minor = version[2]
is_usd_supported = (major, minor) >= (18, 5)
cls.enabled = is_usd_supported
def load(self, context, name, namespace, data):
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
# Create the Loader with the filename path set
comp = get_current_comp()
with comp_lock_and_undo_chunk(comp, "Create tool"):
path = self.fname
args = (-32768, -32768)
tool = comp.AddTool(self.tool_type, *args)
tool["Filename"] = path
imprint_container(tool,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__)
def switch(self, container, representation):
self.update(container, representation)
def update(self, container, representation):
tool = container["_tool"]
assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
comp = tool.Comp()
path = get_representation_path(representation)
with comp_lock_and_undo_chunk(comp, "Update tool"):
tool["Filename"] = path
# Update the imprinted representation
tool.SetData("avalon.representation", str(representation["_id"]))
def remove(self, container):
tool = container["_tool"]
assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
comp = tool.Comp()
with comp_lock_and_undo_chunk(comp, "Remove tool"):
tool.Delete()

View file

@ -0,0 +1,105 @@
import pyblish.api
from openpype.pipeline import (
PublishValidationError,
OptionalPyblishPluginMixin,
)
from openpype.hosts.fusion.api.action import SelectInvalidAction
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
def get_tool_resolution(tool, frame):
"""Return the 2D input resolution to a Fusion tool
If the current tool hasn't been rendered, its input resolution
hasn't been saved. To work around this, an expression is
temporarily set in the comments field to read the resolution.
Args:
tool (Fusion Tool): The tool to query input resolution
frame (int): The frame to query the resolution on.
Returns:
tuple: width, height as 2-tuple of integers
"""
comp = tool.Composition
# False undo removes the undo-stack from the undo list
with comp_lock_and_undo_chunk(comp, "Read resolution", False):
# Save old comment
old_comment = ""
has_expression = False
if tool["Comments"][frame] != "":
if tool["Comments"].GetExpression() is not None:
has_expression = True
old_comment = tool["Comments"].GetExpression()
tool["Comments"].SetExpression(None)
else:
old_comment = tool["Comments"][frame]
tool["Comments"][frame] = ""
# Get input width
tool["Comments"].SetExpression("self.Input.OriginalWidth")
width = int(tool["Comments"][frame])
# Get input height
tool["Comments"].SetExpression("self.Input.OriginalHeight")
height = int(tool["Comments"][frame])
# Reset old comment
tool["Comments"].SetExpression(None)
if has_expression:
tool["Comments"].SetExpression(old_comment)
else:
tool["Comments"][frame] = old_comment
return width, height
class ValidateSaverResolution(
pyblish.api.InstancePlugin, OptionalPyblishPluginMixin
):
"""Validate that the saver input resolution matches the asset resolution"""
order = pyblish.api.ValidatorOrder
label = "Validate Asset Resolution"
families = ["render"]
hosts = ["fusion"]
optional = True
actions = [SelectInvalidAction]
def process(self, instance):
if not self.is_active(instance.data):
return
resolution = self.get_resolution(instance)
expected_resolution = self.get_expected_resolution(instance)
if resolution != expected_resolution:
raise PublishValidationError(
"The input's resolution does not match "
"the asset's resolution {}x{}.\n\n"
"The input's resolution is {}x{}.".format(
expected_resolution[0], expected_resolution[1],
resolution[0], resolution[1]
)
)
@classmethod
def get_invalid(cls, instance):
resolution = cls.get_resolution(instance)
expected_resolution = cls.get_expected_resolution(instance)
if resolution != expected_resolution:
saver = instance.data["tool"]
return [saver]
@classmethod
def get_resolution(cls, instance):
saver = instance.data["tool"]
first_frame = instance.data["frameStartHandle"]
return get_tool_resolution(saver, frame=first_frame)
@classmethod
def get_expected_resolution(cls, instance):
data = instance.data["assetEntity"]["data"]
return data["resolutionWidth"], data["resolutionHeight"]

View file

@ -11,14 +11,22 @@ import json
import six
from openpype.lib import StringTemplate
from openpype.client import get_asset_by_name
from openpype.client import get_project, get_asset_by_name
from openpype.settings import get_current_project_settings
from openpype.pipeline import get_current_project_name, get_current_asset_name
from openpype.pipeline.context_tools import (
get_current_context_template_data,
get_current_project_asset
from openpype.pipeline import (
Anatomy,
get_current_project_name,
get_current_asset_name,
registered_host,
get_current_context,
get_current_host_name,
)
from openpype.pipeline.create import CreateContext
from openpype.pipeline.template_data import get_template_data
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.widgets import popup
from openpype.tools.utils.host_tools import get_tool_by_name
import hou
@ -325,52 +333,61 @@ def imprint(node, data, update=False):
return
current_parms = {p.name(): p for p in node.spareParms()}
update_parms = []
templates = []
update_parm_templates = []
new_parm_templates = []
for key, value in data.items():
if value is None:
continue
parm = get_template_from_value(key, value)
parm_template = get_template_from_value(key, value)
if key in current_parms:
if node.evalParm(key) == data[key]:
if node.evalParm(key) == value:
continue
if not update:
log.debug(f"{key} already exists on {node}")
else:
log.debug(f"replacing {key}")
update_parms.append(parm)
update_parm_templates.append(parm_template)
continue
templates.append(parm)
new_parm_templates.append(parm_template)
parm_group = node.parmTemplateGroup()
parm_folder = parm_group.findFolder("Extra")
# if folder doesn't exist yet, create one and append to it,
# else append to existing one
if not parm_folder:
parm_folder = hou.FolderParmTemplate("folder", "Extra")
parm_folder.setParmTemplates(templates)
parm_group.append(parm_folder)
else:
for template in templates:
parm_group.appendToFolder(parm_folder, template)
# this is needed because the pointer to folder
# is for some reason lost every call to `appendToFolder()`
parm_folder = parm_group.findFolder("Extra")
node.setParmTemplateGroup(parm_group)
# TODO: Updating is done here, by calling probably deprecated functions.
# This needs to be addressed in the future.
if not update_parms:
if not new_parm_templates and not update_parm_templates:
return
for parm in update_parms:
node.replaceSpareParmTuple(parm.name(), parm)
parm_group = node.parmTemplateGroup()
# Add new parm templates
if new_parm_templates:
parm_folder = parm_group.findFolder("Extra")
# if folder doesn't exist yet, create one and append to it,
# else append to existing one
if not parm_folder:
parm_folder = hou.FolderParmTemplate("folder", "Extra")
parm_folder.setParmTemplates(new_parm_templates)
parm_group.append(parm_folder)
else:
# Add to parm template folder instance then replace with updated
# one in parm template group
for template in new_parm_templates:
parm_folder.addParmTemplate(template)
parm_group.replace(parm_folder.name(), parm_folder)
# Update existing parm templates
for parm_template in update_parm_templates:
parm_group.replace(parm_template.name(), parm_template)
# When replacing a parm with a parm of the same name it preserves its
# value if before the replacement the parm was not at the default,
# because it has a value override set. Since we're trying to update the
# parm by using the new value as `default` we enforce the parm is at
# default state
node.parm(parm_template.name()).revertToDefaults()
node.setParmTemplateGroup(parm_group)
def lsattr(attr, value=None, root="/"):
@ -552,29 +569,64 @@ def get_template_from_value(key, value):
return parm
def get_frame_data(node):
"""Get the frame data: start frame, end frame and steps.
def get_frame_data(node, handle_start=0, handle_end=0, log=None):
"""Get the frame data: start frame, end frame, steps,
start frame with start handle and end frame with end handle.
This function uses Houdini node's `trange`, `t1, `t2` and `t3`
parameters as the source of truth for the full inclusive frame
range to render, as such these are considered as the frame
range including the handles.
The non-inclusive frame start and frame end without handles
are computed by subtracting the handles from the inclusive
frame range.
Args:
node(hou.Node)
node (hou.Node): ROP node to retrieve frame range from,
the frame range is assumed to be the frame range
*including* the start and end handles.
handle_start (int): Start handles.
handle_end (int): End handles.
log (logging.Logger): Logger to log to.
Returns:
dict: frame data for star, end and steps.
dict: frame data for start, end, steps,
start with handle and end with handle
"""
if log is None:
log = logging.getLogger(__name__)
data = {}
if node.parm("trange") is None:
log.debug(
"Node has no 'trange' parameter: {}".format(node.path())
)
return data
if node.evalParm("trange") == 0:
self.log.debug("trange is 0")
return data
data["frameStartHandle"] = hou.intFrame()
data["frameEndHandle"] = hou.intFrame()
data["byFrameStep"] = 1.0
data["frameStart"] = node.evalParm("f1")
data["frameEnd"] = node.evalParm("f2")
data["steps"] = node.evalParm("f3")
log.info(
"Node '{}' has 'Render current frame' set.\n"
"Asset Handles are ignored.\n"
"frameStart and frameEnd are set to the "
"current frame.".format(node.path())
)
else:
data["frameStartHandle"] = int(node.evalParm("f1"))
data["frameEndHandle"] = int(node.evalParm("f2"))
data["byFrameStep"] = node.evalParm("f3")
data["handleStart"] = handle_start
data["handleEnd"] = handle_end
data["frameStart"] = data["frameStartHandle"] + data["handleStart"]
data["frameEnd"] = data["frameEndHandle"] - data["handleEnd"]
return data
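The handle arithmetic above is easy to sanity-check in isolation: the ROP's `f1`/`f2` are treated as the inclusive range *including* handles, and the handle-less range is derived from it by subtraction. A minimal sketch of just that arithmetic, where `compute_frame_data` is a hypothetical stand-in for the Houdini-dependent `get_frame_data`:

```python
def compute_frame_data(f1, f2, f3, handle_start=0, handle_end=0):
    """Derive the handle-less frame range from an inclusive ROP range.

    f1/f2 are treated as the full range *including* handles, mirroring
    the logic in the Houdini host's get_frame_data().
    """
    data = {
        "frameStartHandle": int(f1),
        "frameEndHandle": int(f2),
        "byFrameStep": f3,
        "handleStart": handle_start,
        "handleEnd": handle_end,
    }
    # The handle-less range lies inside the ROP range
    data["frameStart"] = data["frameStartHandle"] + handle_start
    data["frameEnd"] = data["frameEndHandle"] - handle_end
    return data


data = compute_frame_data(1001, 1100, 1.0, handle_start=5, handle_end=5)
```

With a ROP range of 1001-1100 and 5-frame handles on both ends, the publishable range becomes 1006-1095.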
@ -753,6 +805,45 @@ def get_camera_from_container(container):
return cameras[0]
def get_current_context_template_data_with_asset_data():
"""
TODOs:
Support both 'assetData' and 'folderData' in future.
"""
context = get_current_context()
project_name = context["project_name"]
asset_name = context["asset_name"]
task_name = context["task_name"]
host_name = get_current_host_name()
anatomy = Anatomy(project_name)
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, asset_name)
# get context specific vars
asset_data = asset_doc["data"]
# compute `frameStartHandle` and `frameEndHandle`
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
handle_start = asset_data.get("handleStart")
handle_end = asset_data.get("handleEnd")
if frame_start is not None and handle_start is not None:
asset_data["frameStartHandle"] = frame_start - handle_start
if frame_end is not None and handle_end is not None:
asset_data["frameEndHandle"] = frame_end + handle_end
template_data = get_template_data(
project_doc, asset_doc, task_name, host_name
)
template_data["root"] = anatomy.roots
template_data["assetData"] = asset_data
return template_data
def get_context_var_changes():
"""get context var changes."""
@ -772,7 +863,7 @@ def get_context_var_changes():
return houdini_vars_to_update
# Get Template data
template_data = get_current_context_template_data()
template_data = get_current_context_template_data_with_asset_data()
# Set Houdini Vars
for item in houdini_vars:
@ -847,3 +938,97 @@ def update_houdini_vars_context_dialog():
dialog.on_clicked.connect(update_houdini_vars_context)
dialog.show()
def publisher_show_and_publish(comment=None):
"""Open publisher window and trigger publishing action.
Args:
comment (Optional[str]): Comment to set in publisher window.
"""
main_window = get_main_window()
publisher_window = get_tool_by_name(
tool_name="publisher",
parent=main_window,
)
publisher_window.show_and_publish(comment)
def find_rop_input_dependencies(input_tuple):
"""Self publish from ROP nodes.
Arguments:
tuple (hou.RopNode.inputDependencies) which can be a nested tuples
represents the input dependencies of the ROP node, consisting of ROPs,
and the frames that need to be be rendered prior to rendering the ROP.
Returns:
list of the RopNode.path() that can be found inside
the input tuple.
"""
out_list = []
if isinstance(input_tuple[0], hou.RopNode):
return input_tuple[0].path()
if isinstance(input_tuple[0], tuple):
for item in input_tuple:
out_list.append(find_rop_input_dependencies(item))
return out_list
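The recursion above can be exercised without Houdini by letting plain strings stand in for `hou.RopNode` (the `.path()` call is simulated by the string itself); this sketch mirrors the logic one-to-one:

```python
def find_rop_input_dependencies(input_tuple):
    """Flatten hou.RopNode.inputDependencies()-style nested tuples.

    Stand-in sketch: each leaf is a (node, frames) pair and a string
    plays the role of hou.RopNode, so node.path() becomes the string.
    """
    out_list = []
    if isinstance(input_tuple[0], str):  # hou.RopNode in the real code
        return input_tuple[0]
    if isinstance(input_tuple[0], tuple):
        for item in input_tuple:
            out_list.append(find_rop_input_dependencies(item))
    return out_list


# Two leaf dependencies: (node_path, frame_range) pairs
deps = (("/out/rop_a", (1, 10)), ("/out/rop_b", (1, 10)))
paths = find_rop_input_dependencies(deps)
```

Note the asymmetry carried over from the source: a leaf returns a bare path while branches return lists, so deeply nested inputs yield nested lists.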
def self_publish():
"""Self publish from ROP nodes.
First, it gets the node and its dependencies.
Then, it deactivates all other ROPs,
and finally it triggers the publishing action.
"""
result, comment = hou.ui.readInput(
"Add Publish Comment",
buttons=("Publish", "Cancel"),
title="Publish comment",
close_choice=1
)
if result:
return
current_node = hou.node(".")
inputs_paths = find_rop_input_dependencies(
current_node.inputDependencies()
)
inputs_paths.append(current_node.path())
host = registered_host()
context = CreateContext(host, reset=True)
for instance in context.instances:
node_path = instance.data.get("instance_node")
instance["active"] = node_path and node_path in inputs_paths
context.save_changes()
publisher_show_and_publish(comment)
def add_self_publish_button(node):
"""Adds a self publish button to the rop node."""
label = os.environ.get("AVALON_LABEL") or "OpenPype"
button_parm = hou.ButtonParmTemplate(
"ayon_self_publish",
"{} Publish".format(label),
script_callback="from openpype.hosts.houdini.api.lib import "
"self_publish; self_publish()",
script_callback_language=hou.scriptLanguage.Python,
join_with_next=True
)
template = node.parmTemplateGroup()
template.insertBefore((0,), button_parm)
node.setParmTemplateGroup(template)

View file

@ -13,7 +13,7 @@ from openpype.pipeline import (
CreatedInstance
)
from openpype.lib import BoolDef
from .lib import imprint, read, lsattr
from .lib import imprint, read, lsattr, add_self_publish_button
class OpenPypeCreatorError(CreatorError):
@ -168,6 +168,7 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
"""Base class for most of the Houdini creator plugins."""
selected_nodes = []
settings_name = None
add_publish_button = False
def create(self, subset_name, instance_data, pre_create_data):
try:
@ -195,6 +196,10 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
self)
self._add_instance_to_context(instance)
self.imprint(instance_node, instance.data_to_store())
if self.add_publish_button:
add_self_publish_button(instance_node)
return instance
except hou.Error as er:
@ -245,6 +250,7 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
key: changes[key].new_value
for key in changes.changed_keys
}
# Update parm templates and values
self.imprint(
instance_node,
new_values,
@ -316,6 +322,12 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
def apply_settings(self, project_settings):
"""Method called on initialization of plugin to apply settings."""
# Apply General Settings
houdini_general_settings = project_settings["houdini"]["general"]
self.add_publish_button = houdini_general_settings.get(
"add_self_publish_button", False)
# Apply Creator Settings
settings_name = self.settings_name
if settings_name is None:
settings_name = self.__class__.__name__

View file

@ -6,8 +6,12 @@ import platform
from openpype.settings import get_project_settings
from openpype.pipeline import get_current_project_name
from openpype.lib import StringTemplate
import hou
from .lib import get_current_context_template_data_with_asset_data
log = logging.getLogger("openpype.hosts.houdini.shelves")
@ -20,23 +24,33 @@ def generate_shelves():
# load configuration of houdini shelves
project_name = get_current_project_name()
project_settings = get_project_settings(project_name)
shelves_set_config = project_settings["houdini"]["shelves"]
shelves_configs = project_settings["houdini"]["shelves"]
if not shelves_set_config:
if not shelves_configs:
log.debug("No custom shelves found in project settings.")
return
for shelf_set_config in shelves_set_config:
shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
shelf_set_os_filepath = shelf_set_filepath[current_os]
if shelf_set_os_filepath:
if not os.path.isfile(shelf_set_os_filepath):
log.error("Shelf path doesn't exist - "
"{}".format(shelf_set_os_filepath))
continue
# Get Template data
template_data = get_current_context_template_data_with_asset_data()
hou.shelves.newShelfSet(file_path=shelf_set_os_filepath)
continue
for config in shelves_configs:
selected_option = config["options"]
shelf_set_config = config[selected_option]
shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
if shelf_set_filepath:
shelf_set_os_filepath = shelf_set_filepath[current_os]
if shelf_set_os_filepath:
shelf_set_os_filepath = get_path_using_template_data(
shelf_set_os_filepath, template_data
)
if not os.path.isfile(shelf_set_os_filepath):
log.error("Shelf path doesn't exist - "
"{}".format(shelf_set_os_filepath))
continue
hou.shelves.loadFile(shelf_set_os_filepath)
continue
shelf_set_name = shelf_set_config.get('shelf_set_name')
if not shelf_set_name:
@ -81,7 +95,9 @@ def generate_shelves():
"script path of the tool.")
continue
tool = get_or_create_tool(tool_definition, shelf)
tool = get_or_create_tool(
tool_definition, shelf, template_data
)
if not tool:
continue
@ -144,7 +160,7 @@ def get_or_create_shelf(shelf_label):
return new_shelf
def get_or_create_tool(tool_definition, shelf):
def get_or_create_tool(tool_definition, shelf, template_data):
"""This function verifies if the tool exists and updates it. If not, creates
a new one.
@ -162,10 +178,16 @@ def get_or_create_tool(tool_definition, shelf):
return
script_path = tool_definition["script"]
script_path = get_path_using_template_data(script_path, template_data)
if not script_path or not os.path.exists(script_path):
log.warning("This path doesn't exist - {}".format(script_path))
return
icon_path = tool_definition["icon"]
if icon_path:
icon_path = get_path_using_template_data(icon_path, template_data)
tool_definition["icon"] = icon_path
existing_tools = shelf.tools()
existing_tool = next(
(tool for tool in existing_tools if tool.label() == tool_label),
@ -184,3 +206,10 @@ def get_or_create_tool(tool_definition, shelf):
tool_name = re.sub(r"[^\w\d]+", "_", tool_label).lower()
return hou.shelves.newTool(name=tool_name, **tool_definition)
def get_path_using_template_data(path, template_data):
path = StringTemplate.format_template(path, template_data)
path = path.replace("\\", "/")
return path
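`get_path_using_template_data` fills template tokens such as `{project[name]}` via OpenPype's `StringTemplate` and normalizes the slashes. For simple keys the behavior is close to plain `str.format`, which this hedged stand-in uses instead of the OpenPype helper (the directory layout and keys are illustrative only):

```python
def get_path_using_template_data(path, template_data):
    # Stand-in for openpype.lib.StringTemplate.format_template:
    # plain str.format resolves simple {key} and {key[sub]} tokens.
    path = path.format(**template_data)
    return path.replace("\\", "/")


template_data = {"project": {"name": "demo"}, "asset": "shot010"}
path = get_path_using_template_data(
    "X:\\shelves\\{project[name]}\\{asset}\\tools.shelf", template_data
)
```

This is what lets shelf set, script, and icon paths in the settings carry per-project tokens instead of hardcoded locations.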

View file

@ -1,4 +1,5 @@
import os
import platform
import subprocess
from openpype.lib.vendor_bin_utils import find_executable
@ -8,17 +9,31 @@ from openpype.pipeline import load
class ShowInUsdview(load.LoaderPlugin):
"""Open USD file in usdview"""
families = ["colorbleed.usd"]
label = "Show in usdview"
representations = ["usd", "usda", "usdlc", "usdnc"]
order = 10
representations = ["*"]
families = ["*"]
extensions = {"usd", "usda", "usdlc", "usdnc", "abc"}
order = 15
icon = "code-fork"
color = "white"
def load(self, context, name=None, namespace=None, data=None):
from pathlib import Path
usdview = find_executable("usdview")
if platform.system() == "Windows":
executable = "usdview.bat"
else:
executable = "usdview"
usdview = find_executable(executable)
if not usdview:
raise RuntimeError("Unable to find usdview")
# For some reason Windows can return the path like:
# C:/PROGRA~1/SIDEEF~1/HOUDIN~1.435/bin/usdview
# convert to resolved path so `subprocess` can take it
usdview = str(Path(usdview).resolve().as_posix())
filepath = self.filepath_from_context(context)
filepath = os.path.normpath(filepath)
@ -30,14 +45,4 @@ class ShowInUsdview(load.LoaderPlugin):
self.log.info("Start houdini variant of usdview...")
# For now avoid some pipeline environment variables that initialize
# Avalon in Houdini as it is redundant for usdview and slows boot time
env = os.environ.copy()
env.pop("PYTHONPATH", None)
env.pop("HOUDINI_SCRIPT_PATH", None)
env.pop("HOUDINI_MENU_PATH", None)
# Force string to avoid unicode issues
env = {str(key): str(value) for key, value in env.items()}
subprocess.Popen([usdview, filepath, "--renderer", "GL"], env=env)
subprocess.Popen([usdview, filepath, "--renderer", "GL"])

View file

@ -20,7 +20,9 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin):
"""
label = "Arnold ROP Render Products"
order = pyblish.api.CollectorOrder + 0.4
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
hosts = ["houdini"]
families = ["arnold_rop"]
@ -126,8 +128,9 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin):
return path
expected_files = []
start = instance.data["frameStart"]
end = instance.data["frameEnd"]
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
for i in range(int(start), (int(end) + 1)):
expected_files.append(
os.path.join(dir, (file % i)).replace("\\", "/"))
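The same expected-files loop appears in all the render-product collectors (Arnold, Karma, Mantra, Redshift, V-Ray), now driven by the handle-inclusive range. A self-contained sketch of that loop, with illustrative directory and printf-style pattern values:

```python
import os


def expected_render_files(dir, file_pattern, start, end):
    """Build per-frame output paths, inclusive of both ends,
    as the Houdini render-product collectors do."""
    expected_files = []
    for i in range(int(start), int(end) + 1):
        expected_files.append(
            os.path.join(dir, (file_pattern % i)).replace("\\", "/"))
    return expected_files


files = expected_render_files("/renders/sh010", "sh010.%04d.exr", 1001, 1003)
```

Using `frameStartHandle`/`frameEndHandle` here means handle frames are also expected on disk, so a render that skipped them fails collection instead of silently publishing a short range.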

View file

@ -1,56 +0,0 @@
import hou
import pyblish.api
class CollectInstanceNodeFrameRange(pyblish.api.InstancePlugin):
"""Collect time range frame data for the instance node."""
order = pyblish.api.CollectorOrder + 0.001
label = "Instance Node Frame Range"
hosts = ["houdini"]
def process(self, instance):
node_path = instance.data.get("instance_node")
node = hou.node(node_path) if node_path else None
if not node_path or not node:
self.log.debug("No instance node found for instance: "
"{}".format(instance))
return
frame_data = self.get_frame_data(node)
if not frame_data:
return
self.log.info("Collected time data: {}".format(frame_data))
instance.data.update(frame_data)
def get_frame_data(self, node):
"""Get the frame data: start frame, end frame and steps
Args:
node(hou.Node)
Returns:
dict
"""
data = {}
if node.parm("trange") is None:
self.log.debug("Node has no 'trange' parameter: "
"{}".format(node.path()))
return data
if node.evalParm("trange") == 0:
# Ignore 'render current frame'
self.log.debug("Node '{}' has 'Render current frame' set. "
"Time range data ignored.".format(node.path()))
return data
data["frameStart"] = node.evalParm("f1")
data["frameEnd"] = node.evalParm("f2")
data["byFrameStep"] = node.evalParm("f3")
return data

View file

@ -91,27 +91,3 @@ class CollectInstances(pyblish.api.ContextPlugin):
context[:] = sorted(context, key=sort_by_family)
return context
def get_frame_data(self, node):
"""Get the frame data: start frame, end frame and steps
Args:
node(hou.Node)
Returns:
dict
"""
data = {}
if node.parm("trange") is None:
return data
if node.evalParm("trange") == 0:
return data
data["frameStart"] = node.evalParm("f1")
data["frameEnd"] = node.evalParm("f2")
data["byFrameStep"] = node.evalParm("f3")
return data

View file

@ -24,7 +24,9 @@ class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin):
"""
label = "Karma ROP Render Products"
order = pyblish.api.CollectorOrder + 0.4
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
hosts = ["houdini"]
families = ["karma_rop"]
@ -95,8 +97,9 @@ class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin):
return path
expected_files = []
start = instance.data["frameStart"]
end = instance.data["frameEnd"]
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
for i in range(int(start), (int(end) + 1)):
expected_files.append(
os.path.join(dir, (file % i)).replace("\\", "/"))

View file

@ -24,7 +24,9 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin):
"""
label = "Mantra ROP Render Products"
order = pyblish.api.CollectorOrder + 0.4
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
hosts = ["houdini"]
families = ["mantra_rop"]
@ -118,8 +120,9 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin):
return path
expected_files = []
start = instance.data["frameStart"]
end = instance.data["frameEnd"]
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
for i in range(int(start), (int(end) + 1)):
expected_files.append(
os.path.join(dir, (file % i)).replace("\\", "/"))

View file

@ -24,7 +24,9 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
"""
label = "Redshift ROP Render Products"
order = pyblish.api.CollectorOrder + 0.4
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
hosts = ["houdini"]
families = ["redshift_rop"]
@ -132,8 +134,9 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
return path
expected_files = []
start = instance.data["frameStart"]
end = instance.data["frameEnd"]
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
for i in range(int(start), (int(end) + 1)):
expected_files.append(
os.path.join(dir, (file % i)).replace("\\", "/"))

View file

@ -2,40 +2,106 @@
"""Collector plugin for frames data on ROP instances."""
import hou # noqa
import pyblish.api
from openpype.lib import BoolDef
from openpype.hosts.houdini.api import lib
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectRopFrameRange(pyblish.api.InstancePlugin):
class CollectRopFrameRange(pyblish.api.InstancePlugin,
OpenPypePyblishPluginMixin):
"""Collect all frames which would be saved from the ROP nodes"""
order = pyblish.api.CollectorOrder
hosts = ["houdini"]
# This specific order value is used so that
# this plugin runs after CollectAnatomyInstanceData
order = pyblish.api.CollectorOrder + 0.499
label = "Collect RopNode Frame Range"
use_asset_handles = True
def process(self, instance):
node_path = instance.data.get("instance_node")
if node_path is None:
# Instance without instance node like a workfile instance
self.log.debug(
"No instance node found for instance: {}".format(instance)
)
return
ropnode = hou.node(node_path)
frame_data = lib.get_frame_data(ropnode)
if "frameStart" in frame_data and "frameEnd" in frame_data:
attr_values = self.get_attr_values_from_data(instance.data)
# Log artist friendly message about the collected frame range
message = (
"Frame range {0[frameStart]} - {0[frameEnd]}"
).format(frame_data)
if frame_data.get("step", 1.0) != 1.0:
message += " with step {0[step]}".format(frame_data)
self.log.info(message)
if attr_values.get("use_handles", self.use_asset_handles):
asset_data = instance.data["assetEntity"]["data"]
handle_start = asset_data.get("handleStart", 0)
handle_end = asset_data.get("handleEnd", 0)
else:
handle_start = 0
handle_end = 0
instance.data.update(frame_data)
frame_data = lib.get_frame_data(
ropnode, handle_start, handle_end, self.log
)
# Add frame range to label if the instance has a frame range.
label = instance.data.get("label", instance.data["name"])
instance.data["label"] = (
"{0} [{1[frameStart]} - {1[frameEnd]}]".format(label,
frame_data)
if not frame_data:
return
# Log debug message about the collected frame range
frame_start = frame_data["frameStart"]
frame_end = frame_data["frameEnd"]
if attr_values.get("use_handles", self.use_asset_handles):
self.log.debug(
"Full Frame range with Handles "
"[{frame_start_handle} - {frame_end_handle}]"
.format(
frame_start_handle=frame_data["frameStartHandle"],
frame_end_handle=frame_data["frameEndHandle"]
)
)
else:
self.log.debug(
"Use handles is deactivated for this instance, "
"start and end handles are set to 0."
)
# Log collected frame range to the user
message = "Frame range [{frame_start} - {frame_end}]".format(
frame_start=frame_start,
frame_end=frame_end
)
if handle_start or handle_end:
message += " with handles [{handle_start}]-[{handle_end}]".format(
handle_start=handle_start,
handle_end=handle_end
)
self.log.info(message)
if frame_data.get("byFrameStep", 1.0) != 1.0:
self.log.info("Frame steps {}".format(frame_data["byFrameStep"]))
instance.data.update(frame_data)
# Add frame range to label if the instance has a frame range.
label = instance.data.get("label", instance.data["name"])
instance.data["label"] = (
"{label} [{frame_start} - {frame_end}]"
.format(
label=label,
frame_start=frame_start,
frame_end=frame_end
)
)
@classmethod
def get_attribute_defs(cls):
return [
BoolDef("use_handles",
tooltip="Disable this if you want the publisher to"
" ignore start and end handles specified in the"
" asset data for this publish instance",
default=cls.use_asset_handles,
label="Use asset handles")
]

View file

@ -24,7 +24,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
"""
label = "VRay ROP Render Products"
order = pyblish.api.CollectorOrder + 0.4
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
hosts = ["houdini"]
families = ["vray_rop"]
@ -115,8 +117,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
return path
expected_files = []
start = instance.data["frameStart"]
end = instance.data["frameEnd"]
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
for i in range(int(start), (int(end) + 1)):
expected_files.append(
os.path.join(dir, (file % i)).replace("\\", "/"))

View file

@ -0,0 +1,106 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
from openpype.pipeline.publish import RepairAction
from openpype.hosts.houdini.api.action import SelectInvalidAction
import hou
class DisableUseAssetHandlesAction(RepairAction):
label = "Disable use asset handles"
icon = "mdi.toggle-switch-off"
class ValidateFrameRange(pyblish.api.InstancePlugin):
"""Validate Frame Range.
Due to the usage of start and end handles,
the frame range must be >= (start handle + end handle),
otherwise frameEnd ends up smaller than frameStart.
"""
order = pyblish.api.ValidatorOrder - 0.1
hosts = ["houdini"]
label = "Validate Frame Range"
actions = [DisableUseAssetHandlesAction, SelectInvalidAction]
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
raise PublishValidationError(
title="Invalid Frame Range",
message=(
"Invalid frame range because the instance "
"start frame ({0[frameStart]}) is higher than "
"the end frame ({0[frameEnd]})"
.format(instance.data)
),
description=(
"## Invalid Frame Range\n"
"The frame range for the instance is invalid because "
"the start frame is higher than the end frame.\n\nThis "
"is likely due to asset handles being applied to your "
"instance or the ROP node's start frame "
"is set higher than the end frame.\n\nIf your ROP frame "
"range is correct and you do not want to apply asset "
"handles make sure to disable Use asset handles on the "
"publish instance."
)
)
@classmethod
def get_invalid(cls, instance):
if not instance.data.get("instance_node"):
return
rop_node = hou.node(instance.data["instance_node"])
frame_start = instance.data.get("frameStart")
frame_end = instance.data.get("frameEnd")
if frame_start is None or frame_end is None:
cls.log.debug(
"Skipping frame range validation for "
"instance without frame data: {}".format(rop_node.path())
)
return
if frame_start > frame_end:
cls.log.info(
"The ROP node render range is set to "
"{0[frameStartHandle]} - {0[frameEndHandle]} "
"The asset handles applied to the instance are start handle "
"{0[handleStart]} and end handle {0[handleEnd]}"
.format(instance.data)
)
return [rop_node]
@classmethod
def repair(cls, instance):
if not cls.get_invalid(instance):
# Already fixed
return
# Disable use asset handles
context = instance.context
create_context = context.data["create_context"]
instance_id = instance.data.get("instance_id")
if not instance_id:
cls.log.debug("'{}' must have instance id"
.format(instance))
return
created_instance = create_context.get_instance_by_id(instance_id)
if not created_instance:
cls.log.debug("Unable to find instance '{}' by id"
.format(instance))
return
created_instance.publish_attributes["CollectRopFrameRange"]["use_handles"] = False # noqa
create_context.save_changes()
cls.log.debug("use asset handles is turned off for '{}'"
.format(instance))


@ -234,27 +234,40 @@ def reset_scene_resolution():
set_scene_resolution(width, height)
def get_frame_range() -> Union[Dict[str, Any], None]:
def get_frame_range(asset_doc=None) -> Union[Dict[str, Any], None]:
"""Get the current assets frame range and handles.
Args:
asset_doc (dict): Asset Entity Data
Returns:
dict: with frame start, frame end, handle start, handle end.
"""
# Set frame start/end
asset = get_current_project_asset()
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
if asset_doc is None:
asset_doc = get_current_project_asset()
data = asset_doc["data"]
frame_start = data.get("frameStart")
frame_end = data.get("frameEnd")
if frame_start is None or frame_end is None:
return
return {}
frame_start = int(frame_start)
frame_end = int(frame_end)
handle_start = int(data.get("handleStart", 0))
handle_end = int(data.get("handleEnd", 0))
frame_start_handle = frame_start - handle_start
frame_end_handle = frame_end + handle_end
handle_start = asset["data"].get("handleStart", 0)
handle_end = asset["data"].get("handleEnd", 0)
return {
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end
"handleEnd": handle_end,
"frameStartHandle": frame_start_handle,
"frameEndHandle": frame_end_handle,
}
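The helper above folds handles into the returned range (`frameStartHandle = frameStart - handleStart`, `frameEndHandle = frameEnd + handleEnd`). A minimal standalone sketch of that arithmetic, independent of OpenPype:

```python
def frame_range_with_handles(data):
    """Handle-inclusive frame range from asset data (mirrors the diff above)."""
    frame_start = data.get("frameStart")
    frame_end = data.get("frameEnd")
    if frame_start is None or frame_end is None:
        return {}
    frame_start = int(frame_start)
    frame_end = int(frame_end)
    handle_start = int(data.get("handleStart", 0))
    handle_end = int(data.get("handleEnd", 0))
    return {
        "frameStart": frame_start,
        "frameEnd": frame_end,
        "handleStart": handle_start,
        "handleEnd": handle_end,
        # handles widen the range on both sides
        "frameStartHandle": frame_start - handle_start,
        "frameEndHandle": frame_end + handle_end,
    }

# frames 1001-1100 with 10-frame handles -> 991-1110
ranges = frame_range_with_handles(
    {"frameStart": 1001, "frameEnd": 1100,
     "handleStart": 10, "handleEnd": 10})
```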
@ -274,12 +287,11 @@ def reset_frame_range(fps: bool = True):
fps_number = float(data_fps["data"]["fps"])
rt.frameRate = fps_number
frame_range = get_frame_range()
frame_start_handle = frame_range["frameStart"] - int(
frame_range["handleStart"]
)
frame_end_handle = frame_range["frameEnd"] + int(frame_range["handleEnd"])
set_timeline(frame_start_handle, frame_end_handle)
set_render_frame_range(frame_start_handle, frame_end_handle)
set_timeline(
frame_range["frameStartHandle"], frame_range["frameEndHandle"])
set_render_frame_range(
frame_range["frameStartHandle"], frame_range["frameEndHandle"])
def set_context_setting():
@ -321,21 +333,6 @@ def is_headless():
return rt.maxops.isInNonInteractiveMode()
@contextlib.contextmanager
def viewport_camera(camera):
original = rt.viewport.getCamera()
if not original:
# if there is no original camera
# use the current camera as original
original = rt.getNodeByName(camera)
review_camera = rt.getNodeByName(camera)
try:
rt.viewport.setCamera(review_camera)
yield
finally:
rt.viewport.setCamera(original)
def set_timeline(frameStart, frameEnd):
"""Set frame range for timeline editor in Max
"""
@ -497,3 +494,22 @@ def get_plugins() -> list:
plugin_info_list.append(plugin_info)
return plugin_info_list
@contextlib.contextmanager
def render_resolution(width, height):
"""Set render resolution option during context
Args:
width (int): render width
height (int): render height
"""
current_renderWidth = rt.renderWidth
current_renderHeight = rt.renderHeight
try:
rt.renderWidth = width
rt.renderHeight = height
yield
finally:
rt.renderWidth = current_renderWidth
rt.renderHeight = current_renderHeight


@ -0,0 +1,309 @@
import logging
import contextlib
from pymxs import runtime as rt
from .lib import get_max_version, render_resolution
log = logging.getLogger("openpype.hosts.max")
@contextlib.contextmanager
def play_preview_when_done(has_autoplay):
"""Set preview playback option during context
Args:
has_autoplay (bool): autoplay during creating
preview animation
"""
current_playback = rt.preferences.playPreviewWhenDone
try:
rt.preferences.playPreviewWhenDone = has_autoplay
yield
finally:
rt.preferences.playPreviewWhenDone = current_playback
@contextlib.contextmanager
def viewport_camera(camera):
"""Set viewport camera during context
***For 3dsMax 2024+
Args:
camera (str): viewport camera
"""
original = rt.viewport.getCamera()
if not original:
# if there is no original camera
# use the current camera as original
original = rt.getNodeByName(camera)
review_camera = rt.getNodeByName(camera)
try:
rt.viewport.setCamera(review_camera)
yield
finally:
rt.viewport.setCamera(original)
@contextlib.contextmanager
def viewport_preference_setting(general_viewport,
nitrous_viewport,
vp_button_mgr):
"""Function to set viewport setting during context
***For Max Version < 2024
Args:
general_viewport (dict): General viewport setting
nitrous_viewport (dict): Nitrous setting for
preview animation
vp_button_mgr (dict): Viewport button manager setting
"""
orig_vp_grid = rt.viewport.getGridVisibility(1)
orig_vp_bkg = rt.viewport.IsSolidBackgroundColorMode()
nitrousGraphicMgr = rt.NitrousGraphicsManager
viewport_setting = nitrousGraphicMgr.GetActiveViewportSetting()
vp_button_mgr_original = {
key: getattr(rt.ViewportButtonMgr, key) for key in vp_button_mgr
}
nitrous_viewport_original = {
key: getattr(viewport_setting, key) for key in nitrous_viewport
}
try:
rt.viewport.setGridVisibility(1, general_viewport["dspGrid"])
rt.viewport.EnableSolidBackgroundColorMode(general_viewport["dspBkg"])
for key, value in vp_button_mgr.items():
setattr(rt.ViewportButtonMgr, key, value)
for key, value in nitrous_viewport.items():
if nitrous_viewport[key] != nitrous_viewport_original[key]:
setattr(viewport_setting, key, value)
yield
finally:
rt.viewport.setGridVisibility(1, orig_vp_grid)
rt.viewport.EnableSolidBackgroundColorMode(orig_vp_bkg)
for key, value in vp_button_mgr_original.items():
setattr(rt.ViewportButtonMgr, key, value)
for key, value in nitrous_viewport_original.items():
setattr(viewport_setting, key, value)
def _render_preview_animation_max_2024(
filepath, start, end, percentSize, ext, viewport_options):
"""Render viewport preview with MaxScript using `CreateAnimation`.
****For 3dsMax 2024+
Args:
filepath (str): filepath for render output without frame number and
extension, for example: /path/to/file
start (int): startFrame
end (int): endFrame
percentSize (float): render resolution multiplier by 100
e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x
viewport_options (dict): viewport setting options, e.g.
{"vpStyle": "defaultshading", "vpPreset": "highquality"}
Returns:
list: Created files
"""
# the percentSize argument must be integer
percent = int(percentSize)
filepath = filepath.replace("\\", "/")
preview_output = f"{filepath}..{ext}"
frame_template = f"{filepath}.{{:04d}}.{ext}"
job_args = []
for key, value in viewport_options.items():
if isinstance(value, bool):
if value:
job_args.append(f"{key}:{value}")
elif isinstance(value, str):
if key == "vpStyle":
if value == "Realistic":
value = "defaultshading"
elif value == "Shaded":
log.warning(
"'Shaded' Mode not supported in "
"preview animation in Max 2024.\n"
"Using 'defaultshading' instead.")
value = "defaultshading"
elif value == "ConsistentColors":
value = "flatcolor"
else:
value = value.lower()
elif key == "vpPreset":
if value == "Quality":
value = "highquality"
elif value == "Customize":
value = "userdefined"
else:
value = value.lower()
job_args.append(f"{key}: #{value}")
job_str = (
f'CreatePreview filename:"{preview_output}" outputAVI:false '
f"percentSize:{percent} start:{start} end:{end} "
f"{' '.join(job_args)} "
"autoPlay:false"
)
rt.completeRedraw()
rt.execute(job_str)
# Return the created files
return [frame_template.format(frame) for frame in range(start, end + 1)]
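The `frame_template` above yields four-digit zero-padded frame names (`/path/to/file.0001.png` style); a quick sketch of the expected file list:

```python
def expected_frames(filepath, ext, start, end):
    # Matches the template used above: /path/to/file.0001.png etc.
    frame_template = f"{filepath}.{{:04d}}.{ext}"
    return [frame_template.format(frame) for frame in range(start, end + 1)]

print(expected_frames("/tmp/review", "png", 1, 3))
# ['/tmp/review.0001.png', '/tmp/review.0002.png', '/tmp/review.0003.png']
```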
def _render_preview_animation_max_pre_2024(
filepath, startFrame, endFrame, percentSize, ext):
"""Render viewport animation by creating bitmaps
***For 3dsMax Version <2024
Args:
filepath (str): filepath without frame numbers and extension
startFrame (int): start frame
endFrame (int): end frame
percentSize (float): render resolution multiplier by 100
e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x
ext (str): image extension
Returns:
list: Created filepaths
"""
# get the screenshot
percent = percentSize / 100.0
res_width = int(round(rt.renderWidth * percent))
res_height = int(round(rt.renderHeight * percent))
viewportRatio = float(res_width / res_height)
frame_template = "{}.{{:04}}.{}".format(filepath, ext)
frame_template = frame_template.replace("\\", "/")
files = []
user_cancelled = False
for frame in range(startFrame, endFrame + 1):
rt.sliderTime = frame
filepath = frame_template.format(frame)
preview_res = rt.bitmap(
res_width, res_height, filename=filepath
)
dib = rt.gw.getViewportDib()
dib_width = float(dib.width)
dib_height = float(dib.height)
renderRatio = float(dib_width / dib_height)
if viewportRatio <= renderRatio:
heightCrop = (dib_width / renderRatio)
topEdge = int((dib_height - heightCrop) / 2.0)
tempImage_bmp = rt.bitmap(dib_width, heightCrop)
src_box_value = rt.Box2(0, topEdge, dib_width, heightCrop)
else:
widthCrop = dib_height * renderRatio
leftEdge = int((dib_width - widthCrop) / 2.0)
tempImage_bmp = rt.bitmap(widthCrop, dib_height)
src_box_value = rt.Box2(0, leftEdge, dib_width, dib_height)
rt.pasteBitmap(dib, tempImage_bmp, src_box_value, rt.Point2(0, 0))
# copy the bitmap and close it
rt.copy(tempImage_bmp, preview_res)
rt.close(tempImage_bmp)
rt.save(preview_res)
rt.close(preview_res)
rt.close(dib)
files.append(filepath)
if rt.keyboard.escPressed:
user_cancelled = True
break
# clean up the cache
rt.gc(delayed=True)
if user_cancelled:
raise RuntimeError("User cancelled rendering of viewport animation.")
return files
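The branch above center-crops the viewport grab to the render aspect ratio. A standalone sketch of that center-crop arithmetic (written from the intent rather than copied verbatim from the branch, so treat the exact expressions as an assumption):

```python
def center_crop(src_w, src_h, target_ratio):
    """Crop box (left, top, width, height) matching target_ratio.

    Crops width when the source is wider than the target ratio,
    height when it is taller, keeping the crop centered.
    """
    src_ratio = src_w / src_h
    if src_ratio > target_ratio:
        crop_w = src_h * target_ratio
        return ((src_w - crop_w) / 2.0, 0.0, crop_w, float(src_h))
    crop_h = src_w / target_ratio
    return (0.0, (src_h - crop_h) / 2.0, float(src_w), crop_h)

# 1000x500 grab cropped to 16:9 -> width shrinks to ~889
print(center_crop(1000, 500, 16 / 9))
```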
def render_preview_animation(
filepath,
ext,
camera,
start_frame=None,
end_frame=None,
percentSize=100.0,
width=1920,
height=1080,
viewport_options=None):
"""Render camera review animation
Args:
filepath (str): filepath to render to, without frame number and
extension
ext (str): output file extension
camera (str): viewport camera for preview render
start_frame (int): start frame
end_frame (int): end frame
percentSize (float): render resolution multiplier by 100
e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x
width (int): render resolution width
height (int): render resolution height
viewport_options (dict): viewport setting options
Returns:
list: Rendered output files
"""
if start_frame is None:
start_frame = int(rt.animationRange.start)
if end_frame is None:
end_frame = int(rt.animationRange.end)
if viewport_options is None:
viewport_options = viewport_options_for_preview_animation()
with play_preview_when_done(False):
with viewport_camera(camera):
with render_resolution(width, height):
if int(get_max_version()) < 2024:
with viewport_preference_setting(
viewport_options["general_viewport"],
viewport_options["nitrous_viewport"],
viewport_options["vp_btn_mgr"]
):
return _render_preview_animation_max_pre_2024(
filepath,
start_frame,
end_frame,
percentSize,
ext
)
else:
return _render_preview_animation_max_2024(
filepath,
start_frame,
end_frame,
percentSize,
ext,
viewport_options
)
def viewport_options_for_preview_animation():
"""Get default viewport options for `render_preview_animation`.
Returns:
dict: viewport setting options
"""
# viewport_options should be the dictionary
if int(get_max_version()) < 2024:
return {
"visualStyleMode": "defaultshading",
"viewportPreset": "highquality",
"vpTexture": False,
"dspGeometry": True,
"dspShapes": False,
"dspLights": False,
"dspCameras": False,
"dspHelpers": False,
"dspParticles": True,
"dspBones": False,
"dspBkg": True,
"dspGrid": False,
"dspSafeFrame": False,
"dspFrameNums": False
}
else:
viewport_options = {}
viewport_options["general_viewport"] = {
"dspBkg": True,
"dspGrid": False
}
viewport_options["nitrous_viewport"] = {
"VisualStyleMode": "defaultshading",
"ViewportPreset": "highquality",
"UseTextureEnabled": False
}
viewport_options["vp_btn_mgr"] = {
"EnableButtons": False}
return viewport_options


@ -13,31 +13,50 @@ class CreateReview(plugin.MaxCreator):
icon = "video-camera"
def create(self, subset_name, instance_data, pre_create_data):
instance_data["imageFormat"] = pre_create_data.get("imageFormat")
instance_data["keepImages"] = pre_create_data.get("keepImages")
instance_data["percentSize"] = pre_create_data.get("percentSize")
instance_data["rndLevel"] = pre_create_data.get("rndLevel")
# Transfer settings from pre create to instance
creator_attributes = instance_data.setdefault(
"creator_attributes", dict())
for key in ["imageFormat",
"keepImages",
"review_width",
"review_height",
"percentSize",
"visualStyleMode",
"viewportPreset",
"vpTexture"]:
if key in pre_create_data:
creator_attributes[key] = pre_create_data[key]
super(CreateReview, self).create(
subset_name,
instance_data,
pre_create_data)
def get_pre_create_attr_defs(self):
attrs = super(CreateReview, self).get_pre_create_attr_defs()
def get_instance_attr_defs(self):
image_format_enum = ["exr", "jpg", "png"]
image_format_enum = [
"bmp", "cin", "exr", "jpg", "hdr", "rgb", "png",
"rla", "rpf", "dds", "sgi", "tga", "tif", "vrimg"
visual_style_preset_enum = [
"Realistic", "Shaded", "Facets",
"ConsistentColors", "HiddenLine",
"Wireframe", "BoundingBox", "Ink",
"ColorInk", "Acrylic", "Tech", "Graphite",
"ColorPencil", "Pastel", "Clay", "ModelAssist"
]
preview_preset_enum = [
"Quality", "Standard", "Performance",
"DXMode", "Customize"]
rndLevel_enum = [
"smoothhighlights", "smooth", "facethighlights",
"facet", "flat", "litwireframe", "wireframe", "box"
]
return attrs + [
return [
NumberDef("review_width",
label="Review width",
decimals=0,
minimum=0,
default=1920),
NumberDef("review_height",
label="Review height",
decimals=0,
minimum=0,
default=1080),
BoolDef("keepImages",
label="Keep Image Sequences",
default=False),
@ -50,8 +69,20 @@ class CreateReview(plugin.MaxCreator):
default=100,
minimum=1,
decimals=0),
EnumDef("rndLevel",
rndLevel_enum,
default="smoothhighlights",
label="Preference")
EnumDef("visualStyleMode",
visual_style_preset_enum,
default="Realistic",
label="Preference"),
EnumDef("viewportPreset",
preview_preset_enum,
default="Quality",
label="Pre-View Preset"),
BoolDef("vpTexture",
label="Viewport Texture",
default=False)
]
def get_pre_create_attr_defs(self):
# Use same attributes as for instance attributes
attrs = super().get_pre_create_attr_defs()
return attrs + self.get_instance_attr_defs()


@ -0,0 +1,11 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating TyCache."""
from openpype.hosts.max.api import plugin
class CreateTyCache(plugin.MaxCreator):
"""Creator plugin for TyCache."""
identifier = "io.openpype.creators.max.tycache"
label = "TyCache"
family = "tycache"
icon = "gear"


@ -0,0 +1,64 @@
import os
from openpype.hosts.max.api import lib, maintained_selection
from openpype.hosts.max.api.lib import (
unique_namespace,
)
from openpype.hosts.max.api.pipeline import (
containerise,
get_previous_loaded_object,
update_custom_attribute_data
)
from openpype.pipeline import get_representation_path, load
class TyCacheLoader(load.LoaderPlugin):
"""TyCache Loader."""
families = ["tycache"]
representations = ["tyc"]
order = -8
icon = "code-fork"
color = "green"
def load(self, context, name=None, namespace=None, data=None):
"""Load tyCache"""
from pymxs import runtime as rt
filepath = os.path.normpath(self.filepath_from_context(context))
obj = rt.tyCache()
obj.filename = filepath
namespace = unique_namespace(
name + "_",
suffix="_",
)
obj.name = f"{namespace}:{obj.name}"
return containerise(
name, [obj], context,
namespace, loader=self.__class__.__name__)
def update(self, container, representation):
"""update the container"""
from pymxs import runtime as rt
path = get_representation_path(representation)
node = rt.GetNodeByName(container["instance_node"])
node_list = get_previous_loaded_object(node)
update_custom_attribute_data(node, node_list)
with maintained_selection():
for tyc in node_list:
tyc.filename = path
lib.imprint(container["instance_node"], {
"representation": str(representation["_id"])
})
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
"""remove the container"""
from pymxs import runtime as rt
node = rt.GetNodeByName(container["instance_node"])
rt.Delete(node)


@ -0,0 +1,22 @@
# -*- coding: utf-8 -*-
import pyblish.api
from pymxs import runtime as rt
class CollectFrameRange(pyblish.api.InstancePlugin):
"""Collect Frame Range."""
order = pyblish.api.CollectorOrder + 0.01
label = "Collect Frame Range"
hosts = ['max']
families = ["camera", "maxrender",
"pointcache", "pointcloud",
"review", "redshiftproxy"]
def process(self, instance):
if instance.data["family"] == "maxrender":
instance.data["frameStartHandle"] = int(rt.rendStart)
instance.data["frameEndHandle"] = int(rt.rendEnd)
else:
instance.data["frameStartHandle"] = int(rt.animationRange.start)
instance.data["frameEndHandle"] = int(rt.animationRange.end)


@ -4,17 +4,15 @@ import os
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline import get_current_asset_name
from openpype.hosts.max.api import colorspace
from openpype.hosts.max.api.lib import get_max_version, get_current_renderer
from openpype.hosts.max.api.lib_renderproducts import RenderProducts
from openpype.client import get_last_version_by_subset_name
class CollectRender(pyblish.api.InstancePlugin):
"""Collect Render for Deadline"""
order = pyblish.api.CollectorOrder + 0.01
order = pyblish.api.CollectorOrder + 0.02
label = "Collect 3dsmax Render Layers"
hosts = ['max']
families = ["maxrender"]
@ -27,7 +25,6 @@ class CollectRender(pyblish.api.InstancePlugin):
filepath = current_file.replace("\\", "/")
context.data['currentFile'] = current_file
asset = get_current_asset_name()
files_by_aov = RenderProducts().get_beauty(instance.name)
aovs = RenderProducts().get_aovs(instance.name)
@ -49,19 +46,6 @@ class CollectRender(pyblish.api.InstancePlugin):
instance.data["files"].append(files_by_aov)
img_format = RenderProducts().image_format()
project_name = context.data["projectName"]
asset_doc = context.data["assetEntity"]
asset_id = asset_doc["_id"]
version_doc = get_last_version_by_subset_name(project_name,
instance.name,
asset_id)
self.log.debug("version_doc: {0}".format(version_doc))
version_int = 1
if version_doc:
version_int += int(version_doc["name"])
self.log.debug(f"Setting {version_int} to context.")
context.data["version"] = version_int
# OCIO config not support in
# most of the 3dsmax renderers
# so this is currently hard coded
@ -87,7 +71,7 @@ class CollectRender(pyblish.api.InstancePlugin):
renderer = str(renderer_class).split(":")[0]
# also need to get the render dir for conversion
data = {
"asset": asset,
"asset": instance.data["asset"],
"subset": str(instance.name),
"publish": True,
"maxversion": str(get_max_version()),
@ -97,9 +81,8 @@ class CollectRender(pyblish.api.InstancePlugin):
"renderer": renderer,
"source": filepath,
"plugin": "3dsmax",
"frameStart": int(rt.rendStart),
"frameEnd": int(rt.rendEnd),
"version": version_int,
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"],
"farm": True
}
instance.data.update(data)


@ -5,7 +5,10 @@ import pyblish.api
from pymxs import runtime as rt
from openpype.lib import BoolDef
from openpype.hosts.max.api.lib import get_max_version
from openpype.pipeline.publish import OpenPypePyblishPluginMixin
from openpype.pipeline.publish import (
OpenPypePyblishPluginMixin,
KnownPublishError
)
class CollectReview(pyblish.api.InstancePlugin,
@ -19,30 +22,41 @@ class CollectReview(pyblish.api.InstancePlugin,
def process(self, instance):
nodes = instance.data["members"]
focal_length = None
camera_name = None
for node in nodes:
if rt.classOf(node) in rt.Camera.classes:
camera_name = node.name
focal_length = node.fov
def is_camera(node):
is_camera_class = rt.classOf(node) in rt.Camera.classes
return is_camera_class and rt.isProperty(node, "fov")
# Use first camera in instance
cameras = [node for node in nodes if is_camera(node)]
if cameras:
if len(cameras) > 1:
self.log.warning(
"Found more than one camera in instance, using first "
f"one found: {cameras[0]}"
)
camera = cameras[0]
camera_name = camera.name
focal_length = camera.fov
else:
raise KnownPublishError(
"Unable to find a valid camera in 'Review' container."
" Only native max Camera supported. "
f"Found objects: {nodes}"
)
creator_attrs = instance.data["creator_attributes"]
attr_values = self.get_attr_values_from_data(instance.data)
data = {
general_preview_data = {
"review_camera": camera_name,
"frameStart": instance.context.data["frameStart"],
"frameEnd": instance.context.data["frameEnd"],
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"],
"percentSize": creator_attrs["percentSize"],
"imageFormat": creator_attrs["imageFormat"],
"keepImages": creator_attrs["keepImages"],
"fps": instance.context.data["fps"],
"dspGeometry": attr_values.get("dspGeometry"),
"dspShapes": attr_values.get("dspShapes"),
"dspLights": attr_values.get("dspLights"),
"dspCameras": attr_values.get("dspCameras"),
"dspHelpers": attr_values.get("dspHelpers"),
"dspParticles": attr_values.get("dspParticles"),
"dspBones": attr_values.get("dspBones"),
"dspBkg": attr_values.get("dspBkg"),
"dspGrid": attr_values.get("dspGrid"),
"dspSafeFrame": attr_values.get("dspSafeFrame"),
"dspFrameNums": attr_values.get("dspFrameNums")
"review_width": creator_attrs["review_width"],
"review_height": creator_attrs["review_height"],
}
if int(get_max_version()) >= 2024:
@ -55,14 +69,46 @@ class CollectReview(pyblish.api.InstancePlugin,
instance.data["colorspaceDisplay"] = display
instance.data["colorspaceView"] = view_transform
preview_data = {
"vpStyle": creator_attrs["visualStyleMode"],
"vpPreset": creator_attrs["viewportPreset"],
"vpTextures": creator_attrs["vpTexture"],
"dspGeometry": attr_values.get("dspGeometry"),
"dspShapes": attr_values.get("dspShapes"),
"dspLights": attr_values.get("dspLights"),
"dspCameras": attr_values.get("dspCameras"),
"dspHelpers": attr_values.get("dspHelpers"),
"dspParticles": attr_values.get("dspParticles"),
"dspBones": attr_values.get("dspBones"),
"dspBkg": attr_values.get("dspBkg"),
"dspGrid": attr_values.get("dspGrid"),
"dspSafeFrame": attr_values.get("dspSafeFrame"),
"dspFrameNums": attr_values.get("dspFrameNums")
}
else:
general_viewport = {
"dspBkg": attr_values.get("dspBkg"),
"dspGrid": attr_values.get("dspGrid")
}
nitrous_viewport = {
"VisualStyleMode": creator_attrs["visualStyleMode"],
"ViewportPreset": creator_attrs["viewportPreset"],
"UseTextureEnabled": creator_attrs["vpTexture"]
}
preview_data = {
"general_viewport": general_viewport,
"nitrous_viewport": nitrous_viewport,
"vp_btn_mgr": {"EnableButtons": False}
}
# Enable ftrack functionality
instance.data.setdefault("families", []).append('ftrack')
burnin_members = instance.data.setdefault("burninDataMembers", {})
burnin_members["focalLength"] = focal_length
self.log.debug(f"data:{data}")
instance.data.update(data)
instance.data.update(general_preview_data)
instance.data["viewport_options"] = preview_data
@classmethod
def get_attribute_defs(cls):


@ -0,0 +1,76 @@
import pyblish.api
from openpype.lib import EnumDef, TextDef
from openpype.pipeline.publish import OpenPypePyblishPluginMixin
class CollectTyCacheData(pyblish.api.InstancePlugin,
OpenPypePyblishPluginMixin):
"""Collect Channel Attributes for TyCache Export"""
order = pyblish.api.CollectorOrder + 0.02
label = "Collect tyCache attribute Data"
hosts = ['max']
families = ["tycache"]
def process(self, instance):
attr_values = self.get_attr_values_from_data(instance.data)
attributes = {}
for attr_key in attr_values.get("tycacheAttributes", []):
attributes[attr_key] = True
for key in ["tycacheLayer", "tycacheObjectName"]:
attributes[key] = attr_values.get(key, "")
# Collect the selected channel data before exporting
instance.data["tyc_attrs"] = attributes
self.log.debug(
f"Found tycache attributes: {attributes}"
)
@classmethod
def get_attribute_defs(cls):
# TODO: Support the attributes with maxObject array
tyc_attr_enum = ["tycacheChanAge", "tycacheChanGroups",
"tycacheChanPos", "tycacheChanRot",
"tycacheChanScale", "tycacheChanVel",
"tycacheChanSpin", "tycacheChanShape",
"tycacheChanMatID", "tycacheChanMapping",
"tycacheChanMaterials", "tycacheChanCustomFloat"
"tycacheChanCustomVector", "tycacheChanCustomTM",
"tycacheChanPhysX", "tycacheMeshBackup",
"tycacheCreateObject",
"tycacheCreateObjectIfNotCreated",
"tycacheAdditionalCloth",
"tycacheAdditionalSkin",
"tycacheAdditionalSkinID",
"tycacheAdditionalSkinIDValue",
"tycacheAdditionalTerrain",
"tycacheAdditionalVDB",
"tycacheAdditionalSplinePaths",
"tycacheAdditionalGeo",
"tycacheAdditionalGeoActivateModifiers",
"tycacheSplines",
"tycacheSplinesAdditionalSplines"
]
tyc_default_attrs = ["tycacheChanGroups", "tycacheChanPos",
"tycacheChanRot", "tycacheChanScale",
"tycacheChanVel", "tycacheChanShape",
"tycacheChanMatID", "tycacheChanMapping",
"tycacheChanMaterials",
"tycacheCreateObjectIfNotCreated"]
return [
EnumDef("tycacheAttributes",
tyc_attr_enum,
default=tyc_default_attrs,
multiselection=True,
label="TyCache Attributes"),
TextDef("tycacheLayer",
label="TyCache Layer",
tooltip="Name of tycache layer",
default="$(tyFlowLayer)"),
TextDef("tycacheObjectName",
label="TyCache Object Name",
tooltip="TyCache Object Name",
default="$(tyFlowName)_tyCache")
]
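In `process` above, the multi-selection enum values become `True` flags and the two text fields are copied through. A standalone sketch of that collection step (`build_tyc_attrs` is a hypothetical name):

```python
def build_tyc_attrs(attr_values):
    """Selected enum keys -> True; text fields copied (empty when missing)."""
    attributes = {}
    for attr_key in attr_values.get("tycacheAttributes", []):
        attributes[attr_key] = True
    for key in ["tycacheLayer", "tycacheObjectName"]:
        attributes[key] = attr_values.get(key, "")
    return attributes

print(build_tyc_attrs({
    "tycacheAttributes": ["tycacheChanPos", "tycacheChanRot"],
    "tycacheLayer": "$(tyFlowLayer)",
}))
```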


@ -19,8 +19,8 @@ class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
def process(self, instance):
if not self.is_active(instance.data):
return
start = float(instance.data.get("frameStartHandle", 1))
end = float(instance.data.get("frameEndHandle", 1))
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
self.log.info("Extracting Camera ...")


@ -51,8 +51,8 @@ class ExtractAlembic(publish.Extractor):
families = ["pointcache"]
def process(self, instance):
start = float(instance.data.get("frameStartHandle", 1))
end = float(instance.data.get("frameEndHandle", 1))
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
self.log.debug("Extracting pointcache ...")


@ -40,8 +40,8 @@ class ExtractPointCloud(publish.Extractor):
def process(self, instance):
self.settings = self.get_setting(instance)
start = int(instance.context.data.get("frameStart"))
end = int(instance.context.data.get("frameEnd"))
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
self.log.info("Extracting PRT...")
stagingdir = self.staging_dir(instance)


@ -16,8 +16,8 @@ class ExtractRedshiftProxy(publish.Extractor):
families = ["redshiftproxy"]
def process(self, instance):
start = int(instance.context.data.get("frameStart"))
end = int(instance.context.data.get("frameEnd"))
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
self.log.debug("Extracting Redshift Proxy...")
stagingdir = self.staging_dir(instance)


@ -1,8 +1,9 @@
import os
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline import publish
from openpype.hosts.max.api.lib import viewport_camera, get_max_version
from openpype.hosts.max.api.preview_animation import (
render_preview_animation
)
class ExtractReviewAnimation(publish.Extractor):
@ -18,24 +19,26 @@ class ExtractReviewAnimation(publish.Extractor):
def process(self, instance):
staging_dir = self.staging_dir(instance)
ext = instance.data.get("imageFormat")
filename = "{0}..{1}".format(instance.name, ext)
start = int(instance.data["frameStart"])
end = int(instance.data["frameEnd"])
fps = int(instance.data["fps"])
filepath = os.path.join(staging_dir, filename)
filepath = filepath.replace("\\", "/")
filenames = self.get_files(
instance.name, start, end, ext)
filepath = os.path.join(staging_dir, instance.name)
self.log.debug(
"Writing Review Animation to"
" '%s' to '%s'" % (filename, staging_dir))
"Writing Review Animation to '{}'".format(filepath))
review_camera = instance.data["review_camera"]
with viewport_camera(review_camera):
preview_arg = self.set_preview_arg(
instance, filepath, start, end, fps)
rt.execute(preview_arg)
viewport_options = instance.data.get("viewport_options", {})
files = render_preview_animation(
filepath,
ext,
review_camera,
start,
end,
percentSize=instance.data["percentSize"],
width=instance.data["review_width"],
height=instance.data["review_height"],
viewport_options=viewport_options)
filenames = [os.path.basename(path) for path in files]
tags = ["review"]
if not instance.data.get("keepImages"):
@ -48,8 +51,8 @@ class ExtractReviewAnimation(publish.Extractor):
"ext": instance.data["imageFormat"],
"files": filenames,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"],
"tags": tags,
"preview": True,
"camera_name": review_camera
@ -59,44 +62,3 @@ class ExtractReviewAnimation(publish.Extractor):
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(representation)
def get_files(self, filename, start, end, ext):
file_list = []
for frame in range(int(start), int(end) + 1):
actual_name = "{}.{:04}.{}".format(
filename, frame, ext)
file_list.append(actual_name)
return file_list
def set_preview_arg(self, instance, filepath,
start, end, fps):
job_args = list()
default_option = f'CreatePreview filename:"{filepath}"'
job_args.append(default_option)
frame_option = f"outputAVI:false start:{start} end:{end} fps:{fps}" # noqa
job_args.append(frame_option)
rndLevel = instance.data.get("rndLevel")
if rndLevel:
option = f"rndLevel:#{rndLevel}"
job_args.append(option)
options = [
"percentSize", "dspGeometry", "dspShapes",
"dspLights", "dspCameras", "dspHelpers", "dspParticles",
"dspBones", "dspBkg", "dspGrid", "dspSafeFrame", "dspFrameNums"
]
for key in options:
enabled = instance.data.get(key)
if enabled:
job_args.append(f"{key}:{enabled}")
if get_max_version() == 2024:
# hardcoded for current stage
auto_play_option = "autoPlay:false"
job_args.append(auto_play_option)
job_str = " ".join(job_args)
self.log.debug(job_str)
return job_str


@ -1,14 +1,11 @@
import os
import tempfile
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline import publish
from openpype.hosts.max.api.lib import viewport_camera, get_max_version
from openpype.hosts.max.api.preview_animation import render_preview_animation
class ExtractThumbnail(publish.Extractor):
"""
Extract Thumbnail for Review
"""Extract Thumbnail for Review
"""
order = pyblish.api.ExtractorOrder
@ -17,34 +14,33 @@ class ExtractThumbnail(publish.Extractor):
families = ["review"]
def process(self, instance):
# TODO: Create temp directory for thumbnail
# - this is to avoid "override" of source file
tmp_staging = tempfile.mkdtemp(prefix="pyblish_tmp_")
self.log.debug(
f"Create temp directory {tmp_staging} for thumbnail"
)
fps = int(instance.data["fps"])
ext = instance.data.get("imageFormat")
frame = int(instance.data["frameStart"])
instance.context.data["cleanupFullPaths"].append(tmp_staging)
filename = "{name}_thumbnail..png".format(**instance.data)
filepath = os.path.join(tmp_staging, filename)
filepath = filepath.replace("\\", "/")
thumbnail = self.get_filename(instance.name, frame)
staging_dir = self.staging_dir(instance)
filepath = os.path.join(
staging_dir, f"{instance.name}_thumbnail")
self.log.debug("Writing Thumbnail to '{}'".format(filepath))
self.log.debug(
"Writing Thumbnail to"
" '%s' to '%s'" % (filename, tmp_staging))
review_camera = instance.data["review_camera"]
with viewport_camera(review_camera):
preview_arg = self.set_preview_arg(
instance, filepath, fps, frame)
rt.execute(preview_arg)
viewport_options = instance.data.get("viewport_options", {})
files = render_preview_animation(
filepath,
ext,
review_camera,
start_frame=frame,
end_frame=frame,
percentSize=instance.data["percentSize"],
width=instance.data["review_width"],
height=instance.data["review_height"],
viewport_options=viewport_options)
thumbnail = next(os.path.basename(path) for path in files)
representation = {
"name": "thumbnail",
"ext": "png",
"ext": ext,
"files": thumbnail,
"stagingDir": tmp_staging,
"stagingDir": staging_dir,
"thumbnail": True
}
@ -53,39 +49,3 @@ class ExtractThumbnail(publish.Extractor):
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(representation)
def get_filename(self, filename, target_frame):
thumbnail_name = "{}_thumbnail.{:04}.png".format(
filename, target_frame
)
return thumbnail_name
def set_preview_arg(self, instance, filepath, fps, frame):
job_args = list()
default_option = f'CreatePreview filename:"{filepath}"'
job_args.append(default_option)
frame_option = f"outputAVI:false start:{frame} end:{frame} fps:{fps}" # noqa
job_args.append(frame_option)
rndLevel = instance.data.get("rndLevel")
if rndLevel:
option = f"rndLevel:#{rndLevel}"
job_args.append(option)
options = [
"percentSize", "dspGeometry", "dspShapes",
"dspLights", "dspCameras", "dspHelpers", "dspParticles",
"dspBones", "dspBkg", "dspGrid", "dspSafeFrame", "dspFrameNums"
]
for key in options:
enabled = instance.data.get(key)
if enabled:
job_args.append(f"{key}:{enabled}")
if get_max_version() == 2024:
# hardcoded for current stage
auto_play_option = "autoPlay:false"
job_args.append(auto_play_option)
job_str = " ".join(job_args)
self.log.debug(job_str)
return job_str

View file

@ -0,0 +1,157 @@
import os
import pyblish.api
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection
from openpype.pipeline import publish
class ExtractTyCache(publish.Extractor):
"""Extract tycache format with tyFlow operators.
Notes:
- TyCache only works for TyFlow Pro Plugin.
Methods:
self.get_export_particles_job_args(): sets up all job arguments
for attributes to be exported in MAXscript
self.get_operators(): get the export_particle operator
self.get_files(): get the files with tyFlow naming convention
before publishing
"""
order = pyblish.api.ExtractorOrder - 0.2
label = "Extract TyCache"
hosts = ["max"]
families = ["tycache"]
def process(self, instance):
# TODO: let user decide the param
start = int(instance.context.data["frameStart"])
end = int(instance.context.data.get("frameEnd"))
self.log.debug("Extracting Tycache...")
stagingdir = self.staging_dir(instance)
filename = "{name}.tyc".format(**instance.data)
path = os.path.join(stagingdir, filename)
filenames = self.get_files(instance, start, end)
additional_attributes = instance.data.get("tyc_attrs", {})
with maintained_selection():
job_args = self.get_export_particles_job_args(
instance.data["members"],
start, end, path,
additional_attributes)
for job in job_args:
rt.Execute(job)
representations = instance.data.setdefault("representations", [])
representation = {
'name': 'tyc',
'ext': 'tyc',
'files': filenames if len(filenames) > 1 else filenames[0],
"stagingDir": stagingdir,
}
representations.append(representation)
# Get the tyMesh filename for extraction
mesh_filename = f"{instance.name}__tyMesh.tyc"
mesh_repres = {
'name': 'tyMesh',
'ext': 'tyc',
'files': mesh_filename,
"stagingDir": stagingdir,
"outputName": '__tyMesh'
}
representations.append(mesh_repres)
self.log.debug(f"Extracted instance '{instance.name}' to: {filenames}")
def get_files(self, instance, start_frame, end_frame):
"""Get file names for tyFlow in tyCache format.
Set the filenames according to the tyCache file
naming extension (.tyc) for publishing purposes.
Actual File Output from tyFlow in tyCache format:
<InstanceName>__tyPart_<frame>.tyc
e.g. tycacheMain__tyPart_00000.tyc
Args:
instance (pyblish.api.Instance): instance.
start_frame (int): Start frame.
end_frame (int): End frame.
Returns:
filenames(list): list of filenames
"""
filenames = []
for frame in range(int(start_frame), int(end_frame) + 1):
filename = f"{instance.name}__tyPart_{frame:05}.tyc"
filenames.append(filename)
return filenames
def get_export_particles_job_args(self, members, start, end,
filepath, additional_attributes):
"""Sets up all job arguments for attributes.
Those attributes are to be exported in MAX Script.
Args:
members (list): Member nodes of the instance.
start (int): Start frame.
end (int): End frame.
filepath (str): Output path of the TyCache file.
additional_attributes (dict): channel attributes data
which need to be exported
Returns:
list of arguments for MAX Script.
"""
settings = {
"exportMode": 2,
"frameStart": start,
"frameEnd": end,
"tyCacheFilename": filepath.replace("\\", "/")
}
settings.update(additional_attributes)
job_args = []
for operator in self.get_operators(members):
for key, value in settings.items():
if isinstance(value, str):
# embed in quotes
value = f'"{value}"'
job_args.append(f"{operator}.{key}={value}")
job_args.append(f"{operator}.exportTyCache()")
return job_args
@staticmethod
def get_operators(members):
"""Get Export Particles Operator.
Args:
members (list): Instance members.
Returns:
list of particle operators
"""
opt_list = []
for member in members:
obj = member.baseobject
# TODO: see if it can use maxscript instead
anim_names = rt.GetSubAnimNames(obj)
for anim_name in anim_names:
sub_anim = rt.GetSubAnim(obj, anim_name)
boolean = rt.IsProperty(sub_anim, "Export_Particles")
if boolean:
event_name = sub_anim.Name
opt = f"${member.Name}.{event_name}.export_particles"
opt_list.append(opt)
return opt_list
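The naming convention and MAXScript argument assembly described in the docstrings above can be sketched as plain Python, independent of 3ds Max. These helpers are illustrative stand-ins, not part of the OpenPype API:

```python
# Hypothetical sketch of the conventions used by ExtractTyCache above.

def tycache_filenames(instance_name, start_frame, end_frame):
    # tyFlow writes one file per frame: <InstanceName>__tyPart_<frame>.tyc,
    # with the frame number zero-padded to five digits.
    return [
        "{}__tyPart_{:05}.tyc".format(instance_name, frame)
        for frame in range(int(start_frame), int(end_frame) + 1)
    ]


def tycache_job_args(operator, settings):
    # Build "operator.key=value" MAXScript assignments; string values are
    # embedded in quotes, and the export call is appended last.
    args = []
    for key, value in settings.items():
        if isinstance(value, str):
            value = '"{}"'.format(value)
        args.append("{}.{}={}".format(operator, key, value))
    args.append("{}.exportTyCache()".format(operator))
    return args


print(tycache_filenames("tycacheMain", 0, 1))
# ['tycacheMain__tyPart_00000.tyc', 'tycacheMain__tyPart_00001.tyc']
```

Each resulting string is handed to `rt.Execute` one at a time, which is why the settings are flattened into individual assignments rather than a single script.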

View file

@ -1,48 +0,0 @@
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
PublishValidationError
)
from openpype.hosts.max.api.lib import get_frame_range, set_timeline
class ValidateAnimationTimeline(pyblish.api.InstancePlugin):
"""
Validates Animation Timeline for Preview Animation in Max
"""
label = "Animation Timeline for Review"
order = ValidateContentsOrder
families = ["review"]
hosts = ["max"]
actions = [RepairAction]
def process(self, instance):
frame_range = get_frame_range()
frame_start_handle = frame_range["frameStart"] - int(
frame_range["handleStart"]
)
frame_end_handle = frame_range["frameEnd"] + int(
frame_range["handleEnd"]
)
if rt.animationRange.start != frame_start_handle or (
rt.animationRange.end != frame_end_handle
):
raise PublishValidationError("Incorrect animation timeline "
"set for preview animation.. "
"\nYou can use repair action to "
"the correct animation timeline")
@classmethod
def repair(cls, instance):
frame_range = get_frame_range()
frame_start_handle = frame_range["frameStart"] - int(
frame_range["handleStart"]
)
frame_end_handle = frame_range["frameEnd"] + int(
frame_range["handleEnd"]
)
set_timeline(frame_start_handle, frame_end_handle)

View file

@ -7,8 +7,10 @@ from openpype.pipeline import (
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
PublishValidationError
PublishValidationError,
KnownPublishError
)
from openpype.hosts.max.api.lib import get_frame_range, set_timeline
class ValidateFrameRange(pyblish.api.InstancePlugin,
@ -27,38 +29,60 @@ class ValidateFrameRange(pyblish.api.InstancePlugin,
label = "Validate Frame Range"
order = ValidateContentsOrder
families = ["maxrender"]
families = ["camera", "maxrender",
"pointcache", "pointcloud",
"review", "redshiftproxy"]
hosts = ["max"]
optional = True
actions = [RepairAction]
def process(self, instance):
if not self.is_active(instance.data):
self.log.info("Skipping validation...")
self.log.debug("Skipping Validate Frame Range...")
return
context = instance.context
frame_start = int(context.data.get("frameStart"))
frame_end = int(context.data.get("frameEnd"))
inst_frame_start = int(instance.data.get("frameStart"))
inst_frame_end = int(instance.data.get("frameEnd"))
frame_range = get_frame_range(
asset_doc=instance.data["assetEntity"])
inst_frame_start = instance.data.get("frameStartHandle")
inst_frame_end = instance.data.get("frameEndHandle")
if inst_frame_start is None or inst_frame_end is None:
raise KnownPublishError(
"Missing frame start and frame end on "
"instance to validate."
)
frame_start_handle = frame_range["frameStartHandle"]
frame_end_handle = frame_range["frameEndHandle"]
errors = []
if frame_start != inst_frame_start:
if frame_start_handle != inst_frame_start:
errors.append(
f"Start frame ({inst_frame_start}) on instance does not match " # noqa
f"with the start frame ({frame_start}) set on the asset data. ") # noqa
if frame_end != inst_frame_end:
f"with the start frame ({frame_start_handle}) set on the asset data. ") # noqa
if frame_end_handle != inst_frame_end:
errors.append(
f"End frame ({inst_frame_end}) on instance does not match "
f"with the end frame ({frame_start}) from the asset data. ")
f"with the end frame ({frame_end_handle}) "
"from the asset data. ")
if errors:
errors.append("You can use repair action to fix it.")
raise PublishValidationError("\n".join(errors))
bullet_point_errors = "\n".join(
"- {}".format(error) for error in errors
)
report = (
"Frame range settings are incorrect.\n\n"
f"{bullet_point_errors}\n\n"
"You can use repair action to fix it."
)
raise PublishValidationError(report, title="Frame Range incorrect")
@classmethod
def repair(cls, instance):
rt.rendStart = instance.context.data.get("frameStart")
rt.rendEnd = instance.context.data.get("frameEnd")
frame_range = get_frame_range()
frame_start_handle = frame_range["frameStartHandle"]
frame_end_handle = frame_range["frameEndHandle"]
if instance.data["family"] == "maxrender":
rt.rendStart = frame_start_handle
rt.rendEnd = frame_end_handle
else:
set_timeline(frame_start_handle, frame_end_handle)
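The handle arithmetic this validator relies on can be sketched on its own. The function below is a hypothetical stand-in for the `frameStartHandle`/`frameEndHandle` values returned by `get_frame_range`, assuming handles simply pad the asset frame range outward:

```python
# Hypothetical helper mirroring the handle-inclusive range the validator
# compares against; not part of the OpenPype API.

def frame_range_with_handles(frame_start, frame_end, handle_start, handle_end):
    # Handles extend the asset range on both ends; the instance's
    # frameStartHandle / frameEndHandle must match these values.
    return {
        "frameStartHandle": int(frame_start) - int(handle_start),
        "frameEndHandle": int(frame_end) + int(handle_end),
    }


print(frame_range_with_handles(1001, 1050, 5, 10))
# {'frameStartHandle': 996, 'frameEndHandle': 1060}
```

For `maxrender` instances the repair action writes these values to the render range (`rt.rendStart`/`rt.rendEnd`); for all other families it resets the scene timeline instead.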

View file

@ -14,29 +14,16 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
def process(self, instance):
"""
Notes:
1. Validate the container only include tyFlow objects
2. Validate if tyFlow operator Export Particle exists
3. Validate if the export mode of Export Particle is at PRT format
4. Validate the partition count and range set as default value
1. Validate if the export mode of Export Particle is at PRT format
2. Validate the partition count and range set as default value
Partition Count : 100
Partition Range : 1 to 1
5. Validate if the custom attribute(s) exist as parameter(s)
3. Validate if the custom attribute(s) exist as parameter(s)
of export_particle operator
"""
report = []
invalid_object = self.get_tyflow_object(instance)
if invalid_object:
report.append(f"Non tyFlow object found: {invalid_object}")
invalid_operator = self.get_tyflow_operator(instance)
if invalid_operator:
report.append((
"tyFlow ExportParticle operator not "
f"found: {invalid_operator}"))
if self.validate_export_mode(instance):
report.append("The export mode is not at PRT")
@ -52,46 +39,6 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):
if report:
raise PublishValidationError(f"{report}")
def get_tyflow_object(self, instance):
invalid = []
container = instance.data["instance_node"]
self.log.info(f"Validating tyFlow container for {container}")
selection_list = instance.data["members"]
for sel in selection_list:
sel_tmp = str(sel)
if rt.ClassOf(sel) in [rt.tyFlow,
rt.Editable_Mesh]:
if "tyFlow" not in sel_tmp:
invalid.append(sel)
else:
invalid.append(sel)
return invalid
def get_tyflow_operator(self, instance):
invalid = []
container = instance.data["instance_node"]
self.log.info(f"Validating tyFlow object for {container}")
selection_list = instance.data["members"]
bool_list = []
for sel in selection_list:
obj = sel.baseobject
anim_names = rt.GetSubAnimNames(obj)
for anim_name in anim_names:
# get all the names of the related tyFlow nodes
sub_anim = rt.GetSubAnim(obj, anim_name)
# check if there is export particle operator
boolean = rt.IsProperty(sub_anim, "Export_Particles")
bool_list.append(str(boolean))
# if the export_particles property is not there
# it means there is not a "Export Particle" operator
if "True" not in bool_list:
self.log.error("Operator 'Export Particles' not found!")
invalid.append(sel)
return invalid
def validate_custom_attribute(self, instance):
invalid = []
container = instance.data["instance_node"]

View file

@ -21,7 +21,7 @@ class ValidateResolutionSetting(pyblish.api.InstancePlugin,
if not self.is_active(instance.data):
return
width, height = self.get_db_resolution(instance)
current_width = rt.renderwidth
current_width = rt.renderWidth
current_height = rt.renderHeight
if current_width != width and current_height != height:
raise PublishValidationError("Resolution Setting "

View file

@ -0,0 +1,88 @@
import pyblish.api
from openpype.pipeline import PublishValidationError
from pymxs import runtime as rt
class ValidateTyFlowData(pyblish.api.InstancePlugin):
"""Validate TyFlow plugins or relevant operators are set correctly."""
order = pyblish.api.ValidatorOrder
families = ["pointcloud", "tycache"]
hosts = ["max"]
label = "TyFlow Data"
def process(self, instance):
"""
Notes:
1. Validate the container only include tyFlow objects
2. Validate if tyFlow operator Export Particle exists
"""
invalid_object = self.get_tyflow_object(instance)
if invalid_object:
self.log.error(f"Non tyFlow object found: {invalid_object}")
invalid_operator = self.get_tyflow_operator(instance)
if invalid_operator:
self.log.error(
"Operator 'Export Particles' not found in tyFlow editor.")
if invalid_object or invalid_operator:
raise PublishValidationError(
"issues occurred",
description="Container should only include tyFlow object "
"and tyflow operator 'Export Particle' should be in "
"the tyFlow editor.")
def get_tyflow_object(self, instance):
"""Get the nodes which are not tyFlow object(s)
and editable mesh(es)
Args:
instance (pyblish.api.Instance): instance
Returns:
list: invalid nodes which are not tyFlow
object(s) and editable mesh(es).
"""
container = instance.data["instance_node"]
self.log.debug(f"Validating tyFlow container for {container}")
allowed_classes = [rt.tyFlow, rt.Editable_Mesh]
return [
member for member in instance.data["members"]
if rt.ClassOf(member) not in allowed_classes
]
def get_tyflow_operator(self, instance):
"""Check whether Export Particles operators exist in the node
connections.
Args:
instance (pyblish.api.Instance): instance
Returns:
invalid (list): nodes whose connections do not
include an Export Particles operator
"""
invalid = []
members = instance.data["members"]
for member in members:
obj = member.baseobject
# There must be at least one animation with export
# particles enabled
has_export_particles = False
anim_names = rt.GetSubAnimNames(obj)
for anim_name in anim_names:
# get name of the related tyFlow node
sub_anim = rt.GetSubAnim(obj, anim_name)
# check if there is export particle operator
if rt.IsProperty(sub_anim, "Export_Particles"):
has_export_particles = True
break
if not has_export_particles:
invalid.append(member)
return invalid

View file

@ -244,8 +244,14 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
return self.get_load_plugin_options(options)
def post_placeholder_process(self, placeholder, failed):
"""Hide placeholder, add them to placeholder set
"""Cleanup placeholder after load of its corresponding representations.
Args:
placeholder (PlaceholderItem): Item which was just used to load
representation.
failed (bool): Loading of representation failed.
"""
# Hide placeholder and add them to placeholder set
node = placeholder.scene_identifier
cmds.sets(node, addElement=PLACEHOLDER_SET)

View file

@ -40,7 +40,6 @@ from openpype.settings import (
from openpype.modules import ModulesManager
from openpype.pipeline.template_data import get_template_data_with_names
from openpype.pipeline import (
get_current_project_name,
discover_legacy_creator_plugins,
Anatomy,
get_current_host_name,
@ -48,20 +47,15 @@ from openpype.pipeline import (
get_current_asset_name,
)
from openpype.pipeline.context_tools import (
get_current_project_asset,
get_custom_workfile_template_from_session
)
from openpype.pipeline.colorspace import (
get_imageio_config
)
from openpype.pipeline.colorspace import get_imageio_config
from openpype.pipeline.workfile import BuildWorkfile
from . import gizmo_menu
from .constants import ASSIST
from .workio import (
save_file,
open_file
)
from .workio import save_file
from .utils import get_node_outputs
log = Logger.get_logger(__name__)
@ -1104,26 +1098,6 @@ def check_subsetname_exists(nodes, subset_name):
False)
def get_render_path(node):
''' Generate Render path from presets regarding avalon knob data
'''
avalon_knob_data = read_avalon_data(node)
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
)
data = {
"avalon": avalon_knob_data,
"nuke_imageio_writes": nuke_imageio_writes
}
anatomy_filled = format_anatomy(data)
return anatomy_filled["render"]["path"].replace("\\", "/")
def format_anatomy(data):
''' Helping function for formatting of anatomy paths
@ -2222,7 +2196,6 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
"""
# replace path with env var if possible
ocio_path = self._replace_ocio_path_with_env_var(config_data)
ocio_path = ocio_path.replace("\\", "/")
log.info("Setting OCIO config path to: `{}`".format(
ocio_path))
@ -2802,16 +2775,28 @@ def find_free_space_to_paste_nodes(
@contextlib.contextmanager
def maintained_selection():
def maintained_selection(exclude_nodes=None):
"""Maintain selection during context
Maintain selection during context and unselect
all nodes after context is done.
Arguments:
exclude_nodes (list[nuke.Node]): list of nodes to be unselected
before context is done
Example:
>>> with maintained_selection():
... node["selected"].setValue(True)
>>> print(node["selected"].value())
False
"""
if exclude_nodes:
for node in exclude_nodes:
node["selected"].setValue(False)
previous_selection = nuke.selectedNodes()
try:
yield
finally:
@ -2823,6 +2808,51 @@ def maintained_selection():
select_nodes(previous_selection)
@contextlib.contextmanager
def swap_node_with_dependency(old_node, new_node):
"""Swap node with dependency
Swap node with dependency and reconnect all inputs and outputs.
It removes old node.
Arguments:
old_node (nuke.Node): node to be replaced
new_node (nuke.Node): node to replace with
Example:
>>> old_node_name = old_node["name"].value()
>>> print(old_node_name)
old_node_name_01
>>> with swap_node_with_dependency(old_node, new_node) as node_name:
... new_node["name"].setValue(node_name)
>>> print(new_node["name"].value())
old_node_name_01
"""
# preserve position
xpos, ypos = old_node.xpos(), old_node.ypos()
# preserve selection after all is done
outputs = get_node_outputs(old_node)
inputs = old_node.dependencies()
node_name = old_node["name"].value()
try:
nuke.delete(old_node)
yield node_name
finally:
# Reconnect inputs
for i, node in enumerate(inputs):
new_node.setInput(i, node)
# Reconnect outputs
if outputs:
for n, pipes in outputs.items():
for i in pipes:
n.setInput(i, new_node)
# return to original position
new_node.setXYpos(xpos, ypos)
def reset_selection():
"""Deselect all selected nodes"""
for node in nuke.selectedNodes():
@ -2920,13 +2950,13 @@ def process_workfile_builder():
"workfile_builder", {})
# get settings
createfv_on = workfile_builder.get("create_first_version") or None
create_fv_on = workfile_builder.get("create_first_version") or None
builder_on = workfile_builder.get("builder_on_start") or None
last_workfile_path = os.environ.get("AVALON_LAST_WORKFILE")
# generate first version in file not existing and feature is enabled
if createfv_on and not os.path.exists(last_workfile_path):
if create_fv_on and not os.path.exists(last_workfile_path):
# get custom template path if any
custom_template_path = get_custom_workfile_template_from_session(
project_settings=project_settings

View file

@ -163,8 +163,10 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
)
return loaded_representation_ids
def _before_repre_load(self, placeholder, representation):
def _before_placeholder_load(self, placeholder):
placeholder.data["nodes_init"] = nuke.allNodes()
def _before_repre_load(self, placeholder, representation):
placeholder.data["last_repre_id"] = str(representation["_id"])
def collect_placeholders(self):
@ -197,6 +199,13 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
return self.get_load_plugin_options(options)
def post_placeholder_process(self, placeholder, failed):
"""Cleanup placeholder after load of its corresponding representations.
Args:
placeholder (PlaceholderItem): Item which was just used to load
representation.
failed (bool): Loading of representation failed.
"""
# deselect all selected nodes
placeholder_node = nuke.toNode(placeholder.scene_identifier)
@ -603,6 +612,13 @@ class NukePlaceholderCreatePlugin(
return self.get_create_plugin_options(options)
def post_placeholder_process(self, placeholder, failed):
"""Cleanup placeholder after load of its corresponding representations.
Args:
placeholder (PlaceholderItem): Item which was just used to load
representation.
failed (bool): Loading of representation failed.
"""
# deselect all selected nodes
placeholder_node = nuke.toNode(placeholder.scene_identifier)

View file

@ -12,7 +12,8 @@ from openpype.pipeline import (
from openpype.hosts.nuke.api.lib import (
maintained_selection,
get_avalon_knob_data,
set_avalon_knob_data
set_avalon_knob_data,
swap_node_with_dependency,
)
from openpype.hosts.nuke.api import (
containerise,
@ -26,7 +27,7 @@ class LoadGizmo(load.LoaderPlugin):
families = ["gizmo"]
representations = ["*"]
extensions = {"gizmo"}
extensions = {"nk"}
label = "Load Gizmo"
order = 0
@ -45,7 +46,7 @@ class LoadGizmo(load.LoaderPlugin):
data (dict): compulsory attribute > not used
Returns:
nuke node: containerised nuke node object
nuke node: containerized nuke node object
"""
# get main variables
@ -83,12 +84,12 @@ class LoadGizmo(load.LoaderPlugin):
# add group from nk
nuke.nodePaste(file)
GN = nuke.selectedNode()
group_node = nuke.selectedNode()
GN["name"].setValue(object_name)
group_node["name"].setValue(object_name)
return containerise(
node=GN,
node=group_node,
name=name,
namespace=namespace,
context=context,
@ -110,7 +111,7 @@ class LoadGizmo(load.LoaderPlugin):
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
group_node = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
@ -135,22 +136,24 @@ class LoadGizmo(load.LoaderPlugin):
for k in add_keys:
data_imprint.update({k: version_data[k]})
# capture pipeline metadata
avalon_data = get_avalon_knob_data(group_node)
# adding nodes to node graph
# just in case we are in group lets jump out of it
nuke.endGroup()
with maintained_selection():
xpos = GN.xpos()
ypos = GN.ypos()
avalon_data = get_avalon_knob_data(GN)
nuke.delete(GN)
# add group from nk
with maintained_selection([group_node]):
# insert nuke script to the script
nuke.nodePaste(file)
GN = nuke.selectedNode()
set_avalon_knob_data(GN, avalon_data)
GN.setXYpos(xpos, ypos)
GN["name"].setValue(object_name)
# convert imported to selected node
new_group_node = nuke.selectedNode()
# swap nodes with maintained connections
with swap_node_with_dependency(
group_node, new_group_node) as node_name:
new_group_node["name"].setValue(node_name)
# set updated pipeline metadata
set_avalon_knob_data(new_group_node, avalon_data)
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
@ -161,11 +164,12 @@ class LoadGizmo(load.LoaderPlugin):
color_value = self.node_color
else:
color_value = "0xd88467ff"
GN["tile_color"].setValue(int(color_value, 16))
new_group_node["tile_color"].setValue(int(color_value, 16))
self.log.info("updated to version: {}".format(version_doc.get("name")))
return update_container(GN, data_imprint)
return update_container(new_group_node, data_imprint)
def switch(self, container, representation):
self.update(container, representation)

View file

@ -14,7 +14,8 @@ from openpype.hosts.nuke.api.lib import (
maintained_selection,
create_backdrop,
get_avalon_knob_data,
set_avalon_knob_data
set_avalon_knob_data,
swap_node_with_dependency,
)
from openpype.hosts.nuke.api import (
containerise,
@ -28,7 +29,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
families = ["gizmo"]
representations = ["*"]
extensions = {"gizmo"}
extensions = {"nk"}
label = "Load Gizmo - Input Process"
order = 0
@ -47,7 +48,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
data (dict): compulsory attribute > not used
Returns:
nuke node: containerised nuke node object
nuke node: containerized nuke node object
"""
# get main variables
@ -85,17 +86,17 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
# add group from nk
nuke.nodePaste(file)
GN = nuke.selectedNode()
group_node = nuke.selectedNode()
GN["name"].setValue(object_name)
group_node["name"].setValue(object_name)
# try to place it under Viewer1
if not self.connect_active_viewer(GN):
nuke.delete(GN)
if not self.connect_active_viewer(group_node):
nuke.delete(group_node)
return
return containerise(
node=GN,
node=group_node,
name=name,
namespace=namespace,
context=context,
@ -117,7 +118,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
group_node = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
@ -142,22 +143,24 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
for k in add_keys:
data_imprint.update({k: version_data[k]})
# capture pipeline metadata
avalon_data = get_avalon_knob_data(group_node)
# adding nodes to node graph
# just in case we are in group lets jump out of it
nuke.endGroup()
with maintained_selection():
xpos = GN.xpos()
ypos = GN.ypos()
avalon_data = get_avalon_knob_data(GN)
nuke.delete(GN)
# add group from nk
with maintained_selection([group_node]):
# insert nuke script to the script
nuke.nodePaste(file)
GN = nuke.selectedNode()
set_avalon_knob_data(GN, avalon_data)
GN.setXYpos(xpos, ypos)
GN["name"].setValue(object_name)
# convert imported to selected node
new_group_node = nuke.selectedNode()
# swap nodes with maintained connections
with swap_node_with_dependency(
group_node, new_group_node) as node_name:
new_group_node["name"].setValue(node_name)
# set updated pipeline metadata
set_avalon_knob_data(new_group_node, avalon_data)
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
@ -168,11 +171,11 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
color_value = self.node_color
else:
color_value = "0xd88467ff"
GN["tile_color"].setValue(int(color_value, 16))
new_group_node["tile_color"].setValue(int(color_value, 16))
self.log.info("updated to version: {}".format(version_doc.get("name")))
return update_container(GN, data_imprint)
return update_container(new_group_node, data_imprint)
def connect_active_viewer(self, group_node):
"""

View file

@ -204,8 +204,6 @@ class LoadImage(load.LoaderPlugin):
last = first = int(frame_number)
# Set the global in to the start frame of the sequence
read_name = self._get_node_name(representation)
node["name"].setValue(read_name)
node["file"].setValue(file)
node["origfirst"].setValue(first)
node["first"].setValue(first)

View file

@ -0,0 +1,350 @@
import os
import json
import secrets
import nuke
import six
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id
)
from openpype.pipeline import (
load,
get_current_project_name,
get_representation_path,
)
from openpype.hosts.nuke.api import (
containerise,
viewer_update_and_undo_stop,
update_container,
)
class LoadOcioLookNodes(load.LoaderPlugin):
"""Loading Ocio look to the nuke.Node graph"""
families = ["ociolook"]
representations = ["*"]
extensions = {"json"}
label = "Load OcioLook [nodes]"
order = 0
icon = "cc"
color = "white"
ignore_attr = ["useLifetime"]
# plugin attributes
current_node_color = "0x4ecd91ff"
old_node_color = "0xd88467ff"
# json file variables
schema_version = 1
def load(self, context, name, namespace, data):
"""
Loading function creating the OCIO look nodes in the node graph
Arguments:
context (dict): context of version
name (str): name of the version
namespace (str): asset name
data (dict): compulsory attribute > not used
Returns:
nuke.Node: containerized nuke.Node object
"""
namespace = namespace or context['asset']['name']
suffix = secrets.token_hex(nbytes=4)
object_name = "{}_{}_{}".format(
name, namespace, suffix)
# getting file path
filepath = self.filepath_from_context(context)
json_f = self._load_json_data(filepath)
group_node = self._create_group_node(
object_name, filepath, json_f["data"])
self._node_version_color(context["version"], group_node)
self.log.info(
"Loaded lut setup: `{}`".format(group_node["name"].value()))
return containerise(
node=group_node,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__,
data={
"objectName": object_name,
}
)
def _create_group_node(
self,
object_name,
filepath,
data
):
"""Creates group node with all the nodes inside.
Creating mainly `OCIOFileTransform` nodes with `OCIOColorSpace` nodes
in between - in case those are needed.
Arguments:
object_name (str): name of the group node
filepath (str): path to json file
data (dict): data from json file
Returns:
nuke.Node: group node with all the nodes inside
"""
# get corresponding node
root_working_colorspace = nuke.root()["workingSpaceLUT"].value()
dir_path = os.path.dirname(filepath)
all_files = os.listdir(dir_path)
ocio_working_colorspace = _colorspace_name_by_type(
data["ocioLookWorkingSpace"])
# adding nodes to node graph
# just in case we are in group lets jump out of it
nuke.endGroup()
input_node = None
output_node = None
group_node = nuke.toNode(object_name)
if group_node:
# remove all nodes between Input and Output nodes
for node in group_node.nodes():
if node.Class() not in ["Input", "Output"]:
nuke.delete(node)
elif node.Class() == "Input":
input_node = node
elif node.Class() == "Output":
output_node = node
else:
group_node = nuke.createNode(
"Group",
"name {}_1".format(object_name),
inpanel=False
)
# adding content to the group node
with group_node:
pre_colorspace = root_working_colorspace
# reusing input node if it exists during update
if input_node:
pre_node = input_node
else:
pre_node = nuke.createNode("Input")
pre_node["name"].setValue("rgb")
# Compare script working colorspace with ocio working colorspace
# found in json file and convert to json's if needed
if pre_colorspace != ocio_working_colorspace:
pre_node = _add_ocio_colorspace_node(
pre_node,
pre_colorspace,
ocio_working_colorspace
)
pre_colorspace = ocio_working_colorspace
for ocio_item in data["ocioLookItems"]:
input_space = _colorspace_name_by_type(
ocio_item["input_colorspace"])
output_space = _colorspace_name_by_type(
ocio_item["output_colorspace"])
# make sure we are in the correct colorspace for the ocio item
if pre_colorspace != input_space:
pre_node = _add_ocio_colorspace_node(
pre_node,
pre_colorspace,
input_space
)
node = nuke.createNode("OCIOFileTransform")
# file path from lut representation
extension = ocio_item["ext"]
item_name = ocio_item["name"]
item_lut_file = next(
(
file for file in all_files
if file.endswith(extension)
),
None
)
if not item_lut_file:
raise ValueError(
"File with extension '{}' not "
"found in directory".format(extension)
)
item_lut_path = os.path.join(
dir_path, item_lut_file).replace("\\", "/")
node["file"].setValue(item_lut_path)
node["name"].setValue(item_name)
node["direction"].setValue(ocio_item["direction"])
node["interpolation"].setValue(ocio_item["interpolation"])
node["working_space"].setValue(input_space)
pre_node.autoplace()
node.setInput(0, pre_node)
node.autoplace()
# pass output space into pre_colorspace for next iteration
# or for output node comparison
pre_colorspace = output_space
pre_node = node
# making sure we are back in script working colorspace
if pre_colorspace != root_working_colorspace:
pre_node = _add_ocio_colorspace_node(
pre_node,
pre_colorspace,
root_working_colorspace
)
# reusing output node if it exists during update
if not output_node:
output = nuke.createNode("Output")
else:
output = output_node
output.setInput(0, pre_node)
return group_node
def update(self, container, representation):
project_name = get_current_project_name()
version_doc = get_version_by_id(project_name, representation["parent"])
object_name = container['objectName']
filepath = get_representation_path(representation)
json_f = self._load_json_data(filepath)
group_node = self._create_group_node(
object_name,
filepath,
json_f["data"]
)
self._node_version_color(version_doc, group_node)
self.log.info("Updated lut setup: `{}`".format(
group_node["name"].value()))
return update_container(
group_node, {"representation": str(representation["_id"])})
def _load_json_data(self, filepath):
# getting data from json file with unicode conversion
with open(filepath, "r") as _file:
json_f = {self._bytify(key): self._bytify(value)
for key, value in json.load(_file).items()}
# check if the version in json_f is the same as plugin version
if json_f["version"] != self.schema_version:
raise KeyError(
"Version of json file is not the same as plugin version")
return json_f
def _bytify(self, input):
"""
Recursively converts unicode strings to plain strings.
It walks through nested dictionaries and lists.
Arguments:
input (dict/str): input
Returns:
dict: with fixed values and keys
"""
if isinstance(input, dict):
return {self._bytify(key): self._bytify(value)
for key, value in input.items()}
elif isinstance(input, list):
return [self._bytify(element) for element in input]
elif isinstance(input, six.text_type):
return str(input)
else:
return input
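The `_bytify` recursion above can be exercised stand-alone; a stdlib-only sketch (assuming Python 3, where `six.text_type` is `str`, and invented sample data):

```python
# Stdlib-only mirror of _bytify: walk dicts and lists and coerce text
# values (and keys) to plain str. Under Python 3 this is effectively a
# normalization no-op, which is what the loader relies on.
def bytify(value):
    if isinstance(value, dict):
        return {bytify(key): bytify(val) for key, val in value.items()}
    if isinstance(value, list):
        return [bytify(element) for element in value]
    if isinstance(value, str):
        return str(value)
    return value


print(bytify({u"key": [u"a", 1, {u"b": u"c"}]}))
# → {'key': ['a', 1, {'b': 'c'}]}
```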
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
node = nuke.toNode(container['objectName'])
with viewer_update_and_undo_stop():
nuke.delete(node)
def _node_version_color(self, version, node):
""" Coloring a node by correct color by actual version"""
project_name = get_current_project_name()
last_version_doc = get_last_version_by_subset_id(
project_name, version["parent"], fields=["_id"]
)
# change color of node
if version["_id"] == last_version_doc["_id"]:
color_value = self.current_node_color
else:
color_value = self.old_node_color
node["tile_color"].setValue(int(color_value, 16))
def _colorspace_name_by_type(colorspace_data):
"""
Returns colorspace name by type
Arguments:
colorspace_data (dict): colorspace data
Returns:
str: colorspace name
"""
if colorspace_data["type"] == "colorspaces":
return colorspace_data["name"]
elif colorspace_data["type"] == "roles":
return colorspace_data["colorspace"]
else:
raise KeyError("Unknown colorspace type: {}".format(
colorspace_data["type"]))
def _add_ocio_colorspace_node(pre_node, input_space, output_space):
"""
Adds OCIOColorSpace node to the node graph
Arguments:
pre_node (nuke.Node): node to connect to
input_space (str): input colorspace
output_space (str): output colorspace
Returns:
nuke.Node: node with OCIOColorSpace node
"""
node = nuke.createNode("OCIOColorSpace")
node.setInput(0, pre_node)
node["in_colorspace"].setValue(input_space)
node["out_colorspace"].setValue(output_space)
pre_node.autoplace()
node.setInput(0, pre_node)
node.autoplace()
return node
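Outside Nuke, the name resolution in `_colorspace_name_by_type` can be run as plain Python; a sketch with made-up colorspace entries:

```python
# Standalone mirror of _colorspace_name_by_type: an ociolook entry either
# names a colorspace directly or references an OCIO role that points to one.
def colorspace_name_by_type(colorspace_data):
    if colorspace_data["type"] == "colorspaces":
        return colorspace_data["name"]
    if colorspace_data["type"] == "roles":
        return colorspace_data["colorspace"]
    raise KeyError(
        "Unknown colorspace type: {}".format(colorspace_data["type"]))


print(colorspace_name_by_type({"type": "colorspaces", "name": "ACEScg"}))
# → ACEScg
print(colorspace_name_by_type({"type": "roles",
                               "colorspace": "scene_linear"}))
# → scene_linear
```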
View file
@ -0,0 +1,173 @@
# -*- coding: utf-8 -*-
"""Creator of colorspace look files.
This creator is used to publish colorspace look files thanks to
production type `ociolook`. All files are published as representation.
"""
from pathlib import Path
from openpype.client import get_asset_by_name
from openpype.lib.attribute_definitions import (
FileDef, EnumDef, TextDef, UISeparatorDef
)
from openpype.pipeline import (
CreatedInstance,
CreatorError
)
from openpype.pipeline import colorspace
from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
class CreateColorspaceLook(TrayPublishCreator):
"""Creates colorspace look files."""
identifier = "io.openpype.creators.traypublisher.colorspace_look"
label = "Colorspace Look"
family = "ociolook"
description = "Publishes color space look file."
extensions = [".cc", ".cube", ".3dl", ".spi1d", ".spi3d", ".csp", ".lut"]
enabled = False
colorspace_items = [
(None, "Not set")
]
colorspace_attr_show = False
config_items = None
config_data = None
def get_detail_description(self):
return """# Colorspace Look
This creator publishes color space look file (LUT).
"""
def get_icon(self):
return "mdi.format-color-fill"
def create(self, subset_name, instance_data, pre_create_data):
repr_file = pre_create_data.get("luts_file")
if not repr_file:
raise CreatorError("No files specified")
files = repr_file.get("filenames")
if not files:
# this should never happen
raise CreatorError("Missing files from representation")
asset_doc = get_asset_by_name(
self.project_name, instance_data["asset"])
subset_name = self.get_subset_name(
variant=instance_data["variant"],
task_name=instance_data["task"] or "Not set",
project_name=self.project_name,
asset_doc=asset_doc,
)
instance_data["creator_attributes"] = {
"abs_lut_path": (
Path(repr_file["directory"]) / files[0]).as_posix()
}
# Create new instance
new_instance = CreatedInstance(self.family, subset_name,
instance_data, self)
new_instance.transient_data["config_items"] = self.config_items
new_instance.transient_data["config_data"] = self.config_data
self._store_new_instance(new_instance)
def collect_instances(self):
super().collect_instances()
for instance in self.create_context.instances:
if instance.creator_identifier == self.identifier:
instance.transient_data["config_items"] = self.config_items
instance.transient_data["config_data"] = self.config_data
def get_instance_attr_defs(self):
return [
EnumDef(
"working_colorspace",
self.colorspace_items,
default="Not set",
label="Working Colorspace",
),
UISeparatorDef(
label="Advanced1"
),
TextDef(
"abs_lut_path",
label="LUT Path",
),
EnumDef(
"input_colorspace",
self.colorspace_items,
default="Not set",
label="Input Colorspace",
),
EnumDef(
"direction",
[
(None, "Not set"),
("forward", "Forward"),
("inverse", "Inverse")
],
default="Not set",
label="Direction"
),
EnumDef(
"interpolation",
[
(None, "Not set"),
("linear", "Linear"),
("tetrahedral", "Tetrahedral"),
("best", "Best"),
("nearest", "Nearest")
],
default="Not set",
label="Interpolation"
),
EnumDef(
"output_colorspace",
self.colorspace_items,
default="Not set",
label="Output Colorspace",
),
]
def get_pre_create_attr_defs(self):
return [
FileDef(
"luts_file",
folders=False,
extensions=self.extensions,
allow_sequences=False,
single_item=True,
label="Look Files",
)
]
def apply_settings(self, project_settings, system_settings):
host = self.create_context.host
host_name = host.name
project_name = host.get_current_project_name()
config_data = colorspace.get_imageio_config(
project_name, host_name,
project_settings=project_settings
)
if not config_data:
self.enabled = False
return
filepath = config_data["path"]
config_items = colorspace.get_ocio_config_colorspaces(filepath)
labeled_colorspaces = colorspace.get_colorspaces_enumerator_items(
config_items,
include_aliases=True,
include_roles=True
)
self.config_items = config_items
self.config_data = config_data
self.colorspace_items.extend(labeled_colorspaces)
self.enabled = True
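The `abs_lut_path` stored by `create()` is the picked directory joined with the first filename, normalized to POSIX separators; a minimal sketch with a hypothetical pick:

```python
from pathlib import Path

# Hypothetical pre-create data in the shape the FileDef picker produces;
# directory and filename are invented for illustration.
repr_file = {"directory": "C:/luts", "filenames": ["grade_v001.cube"]}

abs_lut_path = (
    Path(repr_file["directory"]) / repr_file["filenames"][0]).as_posix()
print(abs_lut_path)
# → C:/luts/grade_v001.cube
```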
View file
@ -0,0 +1,86 @@
import os
from pprint import pformat
import pyblish.api
from openpype.pipeline import publish
from openpype.pipeline import colorspace
class CollectColorspaceLook(pyblish.api.InstancePlugin,
publish.OpenPypePyblishPluginMixin):
"""Collect OCIO colorspace look from LUT file
"""
label = "Collect Colorspace Look"
order = pyblish.api.CollectorOrder
hosts = ["traypublisher"]
families = ["ociolook"]
def process(self, instance):
creator_attrs = instance.data["creator_attributes"]
lut_repre_name = "LUTfile"
file_url = creator_attrs["abs_lut_path"]
file_name = os.path.basename(file_url)
base_name, ext = os.path.splitext(file_name)
# build the output name from base_name: turn separator symbols into
# spaces, capitalize each part, then strip the spaces
output_name = (base_name.replace("_", " ")
.replace(".", " ")
.replace("-", " ")
.title()
.replace(" ", ""))
# get config items
config_items = instance.data["transientData"]["config_items"]
config_data = instance.data["transientData"]["config_data"]
# get colorspace items
converted_color_data = {}
for colorspace_key in [
"working_colorspace",
"input_colorspace",
"output_colorspace"
]:
if creator_attrs[colorspace_key]:
color_data = colorspace.convert_colorspace_enumerator_item(
creator_attrs[colorspace_key], config_items)
converted_color_data[colorspace_key] = color_data
else:
converted_color_data[colorspace_key] = None
# add colorspace to config data
if converted_color_data["working_colorspace"]:
config_data["colorspace"] = (
converted_color_data["working_colorspace"]["name"]
)
# create lut representation data
lut_repre = {
"name": lut_repre_name,
"output": output_name,
"ext": ext.lstrip("."),
"files": file_name,
"stagingDir": os.path.dirname(file_url),
"tags": []
}
instance.data.update({
"representations": [lut_repre],
"source": file_url,
"ocioLookWorkingSpace": converted_color_data["working_colorspace"],
"ocioLookItems": [
{
"name": lut_repre_name,
"ext": ext.lstrip("."),
"input_colorspace": converted_color_data[
"input_colorspace"],
"output_colorspace": converted_color_data[
"output_colorspace"],
"direction": creator_attrs["direction"],
"interpolation": creator_attrs["interpolation"],
"config_data": config_data
}
],
})
self.log.debug(pformat(instance.data))
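The output-name cleanup the collector performs can be isolated like this (same chained replaces, applied to a made-up LUT filename):

```python
import os

# Mirror of the collector's output-name normalization: drop the extension,
# turn separators into spaces, title-case the parts, remove the spaces.
def output_name_from_file(file_url):
    base_name, _ext = os.path.splitext(os.path.basename(file_url))
    return (base_name.replace("_", " ")
                     .replace(".", " ")
                     .replace("-", " ")
                     .title()
                     .replace(" ", ""))


print(output_name_from_file("/luts/show_grade-v001.cube"))
# → ShowGradeV001
```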
View file
@ -1,6 +1,8 @@
import pyblish.api
from openpype.pipeline import registered_host
from openpype.pipeline import publish
from openpype.pipeline import (
publish,
registered_host
)
from openpype.lib import EnumDef
from openpype.pipeline import colorspace
@ -13,11 +15,14 @@ class CollectColorspace(pyblish.api.InstancePlugin,
label = "Choose representation colorspace"
order = pyblish.api.CollectorOrder + 0.49
hosts = ["traypublisher"]
families = ["render", "plate", "reference", "image", "online"]
enabled = False
colorspace_items = [
(None, "Don't override")
]
colorspace_attr_show = False
config_items = None
def process(self, instance):
values = self.get_attr_values_from_data(instance.data)
@ -48,10 +53,14 @@ class CollectColorspace(pyblish.api.InstancePlugin,
if config_data:
filepath = config_data["path"]
config_items = colorspace.get_ocio_config_colorspaces(filepath)
cls.colorspace_items.extend((
(name, name) for name in config_items.keys()
))
cls.colorspace_attr_show = True
labeled_colorspaces = colorspace.get_colorspaces_enumerator_items(
config_items,
include_aliases=True,
include_roles=True
)
cls.config_items = config_items
cls.colorspace_items.extend(labeled_colorspaces)
cls.enabled = True
@classmethod
def get_attribute_defs(cls):
@ -60,7 +69,6 @@ class CollectColorspace(pyblish.api.InstancePlugin,
"colorspace",
cls.colorspace_items,
default="Don't override",
label="Override Colorspace",
hidden=not cls.colorspace_attr_show
label="Override Colorspace"
)
]
View file
@ -0,0 +1,45 @@
import os
import json
import pyblish.api
from openpype.pipeline import publish
class ExtractColorspaceLook(publish.Extractor,
publish.OpenPypePyblishPluginMixin):
"""Extract OCIO colorspace look from LUT file
"""
label = "Extract Colorspace Look"
order = pyblish.api.ExtractorOrder
hosts = ["traypublisher"]
families = ["ociolook"]
def process(self, instance):
ociolook_items = instance.data["ocioLookItems"]
ociolook_working_color = instance.data["ocioLookWorkingSpace"]
staging_dir = self.staging_dir(instance)
# create ociolook file attributes
ociolook_file_name = "ocioLookFile.json"
ociolook_file_content = {
"version": 1,
"data": {
"ocioLookItems": ociolook_items,
"ocioLookWorkingSpace": ociolook_working_color
}
}
# write ociolook content into json file saved in staging dir
file_url = os.path.join(staging_dir, ociolook_file_name)
with open(file_url, "w") as f_:
json.dump(ociolook_file_content, f_, indent=4)
# create lut representation data
ociolook_repre = {
"name": "ocioLookFile",
"ext": "json",
"files": ociolook_file_name,
"stagingDir": staging_dir,
"tags": []
}
instance.data["representations"].append(ociolook_repre)
View file
@ -18,6 +18,7 @@ class ValidateColorspace(pyblish.api.InstancePlugin,
label = "Validate representation colorspace"
order = pyblish.api.ValidatorOrder
hosts = ["traypublisher"]
families = ["render", "plate", "reference", "image", "online"]
def process(self, instance):
View file
@ -0,0 +1,89 @@
import pyblish.api
from openpype.pipeline import (
publish,
PublishValidationError
)
class ValidateColorspaceLook(pyblish.api.InstancePlugin,
publish.OpenPypePyblishPluginMixin):
"""Validate colorspace look attributes"""
label = "Validate colorspace look attributes"
order = pyblish.api.ValidatorOrder
hosts = ["traypublisher"]
families = ["ociolook"]
def process(self, instance):
create_context = instance.context.data["create_context"]
created_instance = create_context.get_instance_by_id(
instance.data["instance_id"])
creator_defs = created_instance.creator_attribute_defs
ociolook_working_color = instance.data.get("ocioLookWorkingSpace")
ociolook_items = instance.data.get("ocioLookItems", [])
creator_defs_by_key = {_def.key: _def.label for _def in creator_defs}
not_set_keys = {}
if not ociolook_working_color:
not_set_keys["working_colorspace"] = creator_defs_by_key[
"working_colorspace"]
for ociolook_item in ociolook_items:
item_not_set_keys = self.validate_colorspace_set_attrs(
ociolook_item, creator_defs_by_key)
if item_not_set_keys:
not_set_keys[ociolook_item["name"]] = item_not_set_keys
if not_set_keys:
message = (
"Colorspace look attributes are not set: \n"
)
for key, value in not_set_keys.items():
if isinstance(value, list):
values_string = "\n\t- ".join(value)
message += f"\n\t{key}:\n\t- {values_string}"
else:
message += f"\n\t{value}"
raise PublishValidationError(
title="Colorspace Look attributes",
message=message,
description=message
)
def validate_colorspace_set_attrs(
self,
ociolook_item,
creator_defs_by_key
):
"""Validate colorspace look attributes"""
self.log.debug(f"Validate colorspace look attributes: {ociolook_item}")
check_keys = [
"input_colorspace",
"output_colorspace",
"direction",
"interpolation"
]
not_set_keys = []
for key in check_keys:
if ociolook_item[key]:
# key is set and it is correct
continue
def_label = creator_defs_by_key.get(key)
if not def_label:
# raise since key is not recognized by creator defs
raise KeyError(
f"Colorspace look attribute '{key}' is not "
f"recognized by creator attributes: {creator_defs_by_key}"
)
not_set_keys.append(def_label)
return not_set_keys
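The per-item check in `validate_colorspace_set_attrs` reduces to collecting labels for unset keys; a stand-alone sketch (labels are assumed here, the real ones come from the creator's attribute definitions):

```python
# Mirror of the validator's per-item check: any falsy value among the four
# required keys is reported via its human-readable label.
CHECK_KEYS = ["input_colorspace", "output_colorspace",
              "direction", "interpolation"]


def not_set_labels(ociolook_item, labels_by_key):
    missing = []
    for key in CHECK_KEYS:
        if ociolook_item.get(key):
            continue
        missing.append(labels_by_key[key])
    return missing


labels = {
    "input_colorspace": "Input Colorspace",
    "output_colorspace": "Output Colorspace",
    "direction": "Direction",
    "interpolation": "Interpolation",
}
print(not_set_labels({"input_colorspace": "ACEScg"}, labels))
# → ['Output Colorspace', 'Direction', 'Interpolation']
```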
View file
@ -21,6 +21,7 @@ from aiohttp_json_rpc.protocol import (
)
from aiohttp_json_rpc.exceptions import RpcError
from openpype import AYON_SERVER_ENABLED
from openpype.lib import emit_event
from openpype.hosts.tvpaint.tvpaint_plugin import get_plugin_files_path
@ -834,8 +835,12 @@ class BaseCommunicator:
class QtCommunicator(BaseCommunicator):
label = os.getenv("AVALON_LABEL")
if not label:
label = "AYON" if AYON_SERVER_ENABLED else "OpenPype"
title = "{} Tools".format(label)
menu_definitions = {
"title": "OpenPype Tools",
"title": title,
"menu_items": [
{
"callback": "workfiles_tool",
View file
@ -7,7 +7,7 @@ import requests
import pyblish.api
from openpype.client import get_project, get_asset_by_name
from openpype.client import get_asset_by_name
from openpype.host import HostBase, IWorkfileHost, ILoadHost, IPublishHost
from openpype.hosts.tvpaint import TVPAINT_ROOT_DIR
from openpype.settings import get_current_project_settings
View file
@ -69,7 +69,6 @@ class CollectWorkfileData(pyblish.api.ContextPlugin):
"asset_name": context.data["asset"],
"task_name": context.data["task"]
}
context.data["previous_context"] = current_context
self.log.debug("Current context is: {}".format(current_context))
# Collect context from workfile metadata
View file
@ -13,8 +13,10 @@ from openpype.client import get_asset_by_name, get_assets
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
register_inventory_action_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
deregister_inventory_action_path,
AYON_CONTAINER_ID,
legacy_io,
)
@ -28,6 +30,7 @@ import unreal # noqa
logger = logging.getLogger("openpype.hosts.unreal")
AYON_CONTAINERS = "AyonContainers"
AYON_ASSET_DIR = "/Game/Ayon/Assets"
CONTEXT_CONTAINER = "Ayon/context.json"
UNREAL_VERSION = semver.VersionInfo(
*os.getenv("AYON_UNREAL_VERSION").split(".")
@ -127,6 +130,7 @@ def install():
pyblish.api.register_plugin_path(str(PUBLISH_PATH))
register_loader_plugin_path(str(LOAD_PATH))
register_creator_plugin_path(str(CREATE_PATH))
register_inventory_action_path(str(INVENTORY_PATH))
_register_callbacks()
_register_events()
@ -136,6 +140,7 @@ def uninstall():
pyblish.api.deregister_plugin_path(str(PUBLISH_PATH))
deregister_loader_plugin_path(str(LOAD_PATH))
deregister_creator_plugin_path(str(CREATE_PATH))
deregister_inventory_action_path(str(INVENTORY_PATH))
def _register_callbacks():
@ -649,6 +654,141 @@ def generate_sequence(h, h_dir):
return sequence, (min_frame, max_frame)
def _get_comps_and_assets(
component_class, asset_class, old_assets, new_assets, selected
):
eas = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
components = []
if selected:
sel_actors = eas.get_selected_level_actors()
for actor in sel_actors:
comps = actor.get_components_by_class(component_class)
components.extend(comps)
else:
comps = eas.get_all_level_actors_components()
components = [
c for c in comps if isinstance(c, component_class)
]
# Get all the static meshes among the old assets in a dictionary with
# the name as key
selected_old_assets = {}
for a in old_assets:
asset = unreal.EditorAssetLibrary.load_asset(a)
if isinstance(asset, asset_class):
selected_old_assets[asset.get_name()] = asset
# Get all the static meshes among the new assets in a dictionary with
# the name as key
selected_new_assets = {}
for a in new_assets:
asset = unreal.EditorAssetLibrary.load_asset(a)
if isinstance(asset, asset_class):
selected_new_assets[asset.get_name()] = asset
return components, selected_old_assets, selected_new_assets
def replace_static_mesh_actors(old_assets, new_assets, selected):
smes = unreal.get_editor_subsystem(unreal.StaticMeshEditorSubsystem)
static_mesh_comps, old_meshes, new_meshes = _get_comps_and_assets(
unreal.StaticMeshComponent,
unreal.StaticMesh,
old_assets,
new_assets,
selected
)
for old_name, old_mesh in old_meshes.items():
new_mesh = new_meshes.get(old_name)
if not new_mesh:
continue
smes.replace_mesh_components_meshes(
static_mesh_comps, old_mesh, new_mesh)
def replace_skeletal_mesh_actors(old_assets, new_assets, selected):
skeletal_mesh_comps, old_meshes, new_meshes = _get_comps_and_assets(
unreal.SkeletalMeshComponent,
unreal.SkeletalMesh,
old_assets,
new_assets,
selected
)
for old_name, old_mesh in old_meshes.items():
new_mesh = new_meshes.get(old_name)
if not new_mesh:
continue
for comp in skeletal_mesh_comps:
if comp.get_skeletal_mesh_asset() == old_mesh:
comp.set_skeletal_mesh_asset(new_mesh)
def replace_geometry_cache_actors(old_assets, new_assets, selected):
geometry_cache_comps, old_caches, new_caches = _get_comps_and_assets(
unreal.GeometryCacheComponent,
unreal.GeometryCache,
old_assets,
new_assets,
selected
)
for old_name, old_mesh in old_caches.items():
new_mesh = new_caches.get(old_name)
if not new_mesh:
continue
for comp in geometry_cache_comps:
if comp.get_editor_property("geometry_cache") == old_mesh:
comp.set_geometry_cache(new_mesh)
def delete_asset_if_unused(container, asset_content):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
references = set()
for asset_path in asset_content:
asset = ar.get_asset_by_object_path(asset_path)
refs = ar.get_referencers(
asset.package_name,
unreal.AssetRegistryDependencyOptions(
include_soft_package_references=False,
include_hard_package_references=True,
include_searchable_names=False,
include_soft_management_references=False,
include_hard_management_references=False
))
if not refs:
continue
references = references.union(set(refs))
# Filter out references that are in the Temp folder
cleaned_references = {
ref for ref in references if not str(ref).startswith("/Temp/")}
# Check which of the references are Levels
for ref in cleaned_references:
loaded_asset = unreal.EditorAssetLibrary.load_asset(ref)
if isinstance(loaded_asset, unreal.World):
# If there is at least a level, we can stop, we don't want to
# delete the container
return
unreal.log("Previous version unused, deleting...")
# No levels, delete the asset
unreal.EditorAssetLibrary.delete_directory(container["namespace"])
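The reference filtering in `delete_asset_if_unused` is plain set work once the asset registry has answered; a sketch with invented package names:

```python
# Referencers as the asset registry might report them (names invented);
# anything under /Temp/ is an unsaved transient package and is ignored
# before checking whether any referencing level remains.
references = {"/Temp/Untitled_1", "/Game/Levels/Main", "/Game/Props/Chair"}

cleaned_references = {
    ref for ref in references if not str(ref).startswith("/Temp/")}
print(sorted(cleaned_references))
# → ['/Game/Levels/Main', '/Game/Props/Chair']
```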
@contextmanager
def maintained_selection():
"""Stub to be either implemented or replaced.
View file
@ -0,0 +1,66 @@
import unreal
from openpype.hosts.unreal.api.tools_ui import qt_app_context
from openpype.hosts.unreal.api.pipeline import delete_asset_if_unused
from openpype.pipeline import InventoryAction
class DeleteUnusedAssets(InventoryAction):
"""Delete all the assets that are not used in any level.
"""
label = "Delete Unused Assets"
icon = "trash"
color = "red"
order = 1
dialog = None
def _delete_unused_assets(self, containers):
allowed_families = ["model", "rig"]
for container in containers:
container_dir = container.get("namespace")
if container.get("family") not in allowed_families:
unreal.log_warning(
f"Container {container_dir} is not supported.")
continue
asset_content = unreal.EditorAssetLibrary.list_assets(
container_dir, recursive=True, include_folder=False
)
delete_asset_if_unused(container, asset_content)
def _show_confirmation_dialog(self, containers):
from qtpy import QtCore
from openpype.widgets import popup
from openpype.style import load_stylesheet
dialog = popup.Popup()
dialog.setWindowFlags(
QtCore.Qt.Window
| QtCore.Qt.WindowStaysOnTopHint
)
dialog.setFocusPolicy(QtCore.Qt.StrongFocus)
dialog.setWindowTitle("Delete all unused assets")
dialog.setMessage(
"You are about to delete all the assets in the project that \n"
"are not used in any level. Are you sure you want to continue?"
)
dialog.setButtonText("Delete")
dialog.on_clicked.connect(
lambda: self._delete_unused_assets(containers)
)
dialog.show()
dialog.raise_()
dialog.activateWindow()
dialog.setStyleSheet(load_stylesheet())
self.dialog = dialog
def process(self, containers):
with qt_app_context():
self._show_confirmation_dialog(containers)
View file
@ -0,0 +1,84 @@
import unreal
from openpype.hosts.unreal.api.pipeline import (
ls,
replace_static_mesh_actors,
replace_skeletal_mesh_actors,
replace_geometry_cache_actors,
)
from openpype.pipeline import InventoryAction
def update_assets(containers, selected):
allowed_families = ["model", "rig"]
# Get all the containers in the Unreal Project
all_containers = ls()
for container in containers:
container_dir = container.get("namespace")
if container.get("family") not in allowed_families:
unreal.log_warning(
f"Container {container_dir} is not supported.")
continue
# Get all containers with same asset_name but different objectName.
# These are the containers that need to be updated in the level.
sa_containers = [
i
for i in all_containers
if (
i.get("asset_name") == container.get("asset_name") and
i.get("objectName") != container.get("objectName")
)
]
asset_content = unreal.EditorAssetLibrary.list_assets(
container_dir, recursive=True, include_folder=False
)
# Update all actors in level
for sa_cont in sa_containers:
sa_dir = sa_cont.get("namespace")
old_content = unreal.EditorAssetLibrary.list_assets(
sa_dir, recursive=True, include_folder=False
)
if container.get("family") == "rig":
replace_skeletal_mesh_actors(
old_content, asset_content, selected)
replace_static_mesh_actors(
old_content, asset_content, selected)
elif container.get("family") == "model":
if container.get("loader") == "PointCacheAlembicLoader":
replace_geometry_cache_actors(
old_content, asset_content, selected)
else:
replace_static_mesh_actors(
old_content, asset_content, selected)
unreal.EditorLevelLibrary.save_current_level()
class UpdateAllActors(InventoryAction):
"""Update all the Actors in the current level to the version of the asset
selected in the scene manager.
"""
label = "Replace all Actors in level to this version"
icon = "arrow-up"
def process(self, containers):
update_assets(containers, False)
class UpdateSelectedActors(InventoryAction):
"""Update only the selected Actors in the current level to the version
of the asset selected in the scene manager.
"""
label = "Replace selected Actors in level to this version"
icon = "arrow-up"
def process(self, containers):
update_assets(containers, True)
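The container matching inside `update_assets` (same `asset_name`, different `objectName`) selects the stale copies still placed in the level; a minimal sketch with invented containers:

```python
all_containers = [
    {"asset_name": "hero_model", "objectName": "hero_model_v001_CON"},
    {"asset_name": "hero_model", "objectName": "hero_model_v002_CON"},
    {"asset_name": "tree_model", "objectName": "tree_model_v001_CON"},
]
# The freshly updated container; anything sharing its asset_name but with
# a different objectName is an older version that still needs replacing.
container = {"asset_name": "hero_model",
             "objectName": "hero_model_v002_CON"}

sa_containers = [
    c for c in all_containers
    if (c["asset_name"] == container["asset_name"]
        and c["objectName"] != container["objectName"])
]
print([c["objectName"] for c in sa_containers])
# → ['hero_model_v001_CON']
```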
View file
@ -69,7 +69,7 @@ class AnimationAlembicLoader(plugin.Loader):
"""
# Create directory for asset and ayon container
root = "/Game/Ayon/Assets"
root = unreal_pipeline.AYON_ASSET_DIR
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
View file
@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@ -21,8 +25,11 @@ class PointCacheAlembicLoader(plugin.Loader):
icon = "cube"
color = "orange"
root = AYON_ASSET_DIR
@staticmethod
def get_task(
self, filename, asset_dir, asset_name, replace,
filename, asset_dir, asset_name, replace,
frame_start=None, frame_end=None
):
task = unreal.AssetImportTask()
@ -38,8 +45,6 @@ class PointCacheAlembicLoader(plugin.Loader):
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
# set import options here
# Unreal 4.24 ignores the settings. It works with Unreal 4.26
options.set_editor_property(
'import_type', unreal.AlembicImportType.GEOMETRY_CACHE)
@ -64,13 +69,42 @@ class PointCacheAlembicLoader(plugin.Loader):
return task
def load(self, context, name, namespace, data):
"""Load and containerise representation into Content Browser.
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name,
frame_start, frame_end
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
This is a two-step process. First, import the file to a temporary path and
then call `containerise()` on it - this moves all content to new
directory and then it will create AssetContainer there and imprint it
with metadata. This will mark this path as container.
task = self.get_task(
filepath, asset_dir, asset_name, False, frame_start, frame_end)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation,
frame_start, frame_end
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"],
"frame_start": frame_start,
"frame_end": frame_end
}
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
Args:
context (dict): application context
@ -79,30 +113,28 @@ class PointCacheAlembicLoader(plugin.Loader):
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Those would be data to be imprinted. This is not used
now, data are imprinted by `containerise()`.
data (dict): Those would be data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
name_version = f"{name}_hero"
else:
asset_name = "{}".format(name)
name_version = f"{name}_v{version.get('name'):03d}"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
"{}/{}/{}".format(root, asset, name), suffix="")
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
unreal.EditorAssetLibrary.make_directory(asset_dir)
frame_start = context.get('asset').get('data').get('frameStart')
frame_end = context.get('asset').get('data').get('frameEnd')
@ -111,30 +143,16 @@ class PointCacheAlembicLoader(plugin.Loader):
if frame_start == frame_end:
frame_end += 1
path = self.filepath_from_context(context)
task = self.get_task(
path, asset_dir, asset_name, False, frame_start, frame_end)
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = self.filepath_from_context(context)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
self.import_and_containerize(
path, asset_dir, asset_name, container_name,
frame_start, frame_end)
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(
"{}/{}".format(asset_dir, container_name), data)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"], frame_start, frame_end)
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@ -146,27 +164,43 @@ class PointCacheAlembicLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
representation["context"]
context = representation.get("context", {})
task = self.get_task(source_path, destination_path, name, False)
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
unreal.log_warning(context)
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
if not context:
raise RuntimeError("No context found in representation")
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
frame_start = int(container.get("frame_start"))
frame_end = int(container.get("frame_end"))
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
self.import_and_containerize(
path, asset_dir, asset_name, container_name,
frame_start, frame_end)
self.imprint(
asset, asset_dir, container_name, asset_name, representation,
frame_start, frame_end)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:


@@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@@ -20,10 +24,12 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
icon = "cube"
color = "orange"
def get_task(self, filename, asset_dir, asset_name, replace):
root = AYON_ASSET_DIR
@staticmethod
def get_task(filename, asset_dir, asset_name, replace, default_conversion):
task = unreal.AssetImportTask()
options = unreal.AbcImportSettings()
sm_settings = unreal.AbcStaticMeshSettings()
conversion_settings = unreal.AbcConversionSettings(
preset=unreal.AbcConversionPreset.CUSTOM,
flip_u=False, flip_v=False,
@@ -37,72 +43,38 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
# set import options here
# Unreal 4.24 ignores the settings. It works with Unreal 4.26
options.set_editor_property(
'import_type', unreal.AlembicImportType.SKELETAL)
options.static_mesh_settings = sm_settings
options.conversion_settings = conversion_settings
if not default_conversion:
conversion_settings = unreal.AbcConversionSettings(
preset=unreal.AbcConversionPreset.CUSTOM,
flip_u=False, flip_v=False,
rotation=[0.0, 0.0, 0.0],
scale=[1.0, 1.0, 1.0])
options.conversion_settings = conversion_settings
task.options = options
return task
def load(self, context, name, namespace, data):
"""Load and containerise representation into Content Browser.
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name,
default_conversion=False
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
This is a two-step process. First, import FBX to a temporary path and
then call `containerise()` on it - this moves all content to new
directory and then it will create AssetContainer there and imprint it
with metadata. This will mark this path as container.
task = self.get_task(
filepath, asset_dir, asset_name, False, default_conversion)
Args:
context (dict): application context
name (str): subset name
namespace (str): in Unreal this is basically path to container.
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Those would be data to be imprinted. This is not used
now, data are imprinted by `containerise()`.
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
Returns:
list(str): list of container content
"""
# Create directory for asset and ayon container
root = "/Game/Ayon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
name_version = f"{name}_hero"
else:
name_version = f"{name}_v{version.get('name'):03d}"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name_version}", suffix="")
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
unreal.EditorAssetLibrary.make_directory(asset_dir)
path = self.filepath_from_context(context)
task = self.get_task(path, asset_dir, asset_name, False)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
@@ -111,12 +83,57 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"]
}
unreal_pipeline.imprint(
f"{asset_dir}/{container_name}", data)
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
Args:
context (dict): application context
name (str): subset name
namespace (str): in Unreal this is basically path to container.
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Those would be data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and ayon container
asset = context.get('asset').get('name')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
name_version = f"{name}_hero"
else:
name_version = f"{name}_v{version.get('name'):03d}"
default_conversion = False
if options.get("default_conversion"):
default_conversion = options.get("default_conversion")
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = self.filepath_from_context(context)
self.import_and_containerize(path, asset_dir, asset_name,
container_name, default_conversion)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"])
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@@ -128,26 +145,36 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
context = representation.get("context", {})
task = self.get_task(source_path, destination_path, name, True)
if not context:
raise RuntimeError("No context found in representation")
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
self.import_and_containerize(path, asset_dir, asset_name,
container_name)
self.imprint(
asset, asset_dir, container_name, asset_name, representation)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:


@@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@@ -20,14 +24,79 @@ class SkeletalMeshFBXLoader(plugin.Loader):
icon = "cube"
color = "orange"
root = AYON_ASSET_DIR
@staticmethod
def get_task(filename, asset_dir, asset_name, replace):
task = unreal.AssetImportTask()
options = unreal.FbxImportUI()
task.set_editor_property('filename', filename)
task.set_editor_property('destination_path', asset_dir)
task.set_editor_property('destination_name', asset_name)
task.set_editor_property('replace_existing', replace)
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
options.set_editor_property(
'automated_import_should_detect_type', False)
options.set_editor_property('import_as_skeletal', True)
options.set_editor_property('import_animations', False)
options.set_editor_property('import_mesh', True)
options.set_editor_property('import_materials', False)
options.set_editor_property('import_textures', False)
options.set_editor_property('skeleton', None)
options.set_editor_property('create_physics_asset', False)
options.set_editor_property(
'mesh_type_to_import',
unreal.FBXImportType.FBXIT_SKELETAL_MESH)
options.skeletal_mesh_import_data.set_editor_property(
'import_content_type',
unreal.FBXImportContentType.FBXICT_ALL)
options.skeletal_mesh_import_data.set_editor_property(
'normal_import_method',
unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS)
task.options = options
return task
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = self.get_task(
filepath, asset_dir, asset_name, False)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"]
}
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
This is a two-step process. First, import FBX to a temporary path and
then call `containerise()` on it - this moves all content to new
directory and then it will create AssetContainer there and imprint it
with metadata. This will mark this path as container.
Args:
context (dict): application context
name (str): subset name
@@ -35,23 +104,15 @@ class SkeletalMeshFBXLoader(plugin.Loader):
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
options (dict): Those would be data to be imprinted. This is not
used now, data are imprinted by `containerise()`.
data (dict): Those would be data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
if options and options.get("asset_dir"):
root = options["asset_dir"]
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
@@ -61,67 +122,20 @@ class SkeletalMeshFBXLoader(plugin.Loader):
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name_version}", suffix="")
f"{self.root}/{asset}/{name_version}", suffix=""
)
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = unreal.AssetImportTask()
path = self.filepath_from_context(context)
task.set_editor_property('filename', path)
task.set_editor_property('destination_path', asset_dir)
task.set_editor_property('destination_name', asset_name)
task.set_editor_property('replace_existing', False)
task.set_editor_property('automated', True)
task.set_editor_property('save', False)
# set import options here
options = unreal.FbxImportUI()
options.set_editor_property('import_as_skeletal', True)
options.set_editor_property('import_animations', False)
options.set_editor_property('import_mesh', True)
options.set_editor_property('import_materials', False)
options.set_editor_property('import_textures', False)
options.set_editor_property('skeleton', None)
options.set_editor_property('create_physics_asset', False)
self.import_and_containerize(
path, asset_dir, asset_name, container_name)
options.set_editor_property(
'mesh_type_to_import',
unreal.FBXImportType.FBXIT_SKELETAL_MESH)
options.skeletal_mesh_import_data.set_editor_property(
'import_content_type',
unreal.FBXImportContentType.FBXICT_ALL)
# set to import normals, otherwise Unreal will compute them
# and it will take a long time, depending on the size of the mesh
options.skeletal_mesh_import_data.set_editor_property(
'normal_import_method',
unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS)
task.options = options
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(
f"{asset_dir}/{container_name}", data)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"])
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@@ -133,58 +147,36 @@ class SkeletalMeshFBXLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
context = representation.get("context", {})
task = unreal.AssetImportTask()
if not context:
raise RuntimeError("No context found in representation")
task.set_editor_property('filename', source_path)
task.set_editor_property('destination_path', destination_path)
task.set_editor_property('destination_name', name)
task.set_editor_property('replace_existing', True)
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
# set import options here
options = unreal.FbxImportUI()
options.set_editor_property('import_as_skeletal', True)
options.set_editor_property('import_animations', False)
options.set_editor_property('import_mesh', True)
options.set_editor_property('import_materials', True)
options.set_editor_property('import_textures', True)
options.set_editor_property('skeleton', None)
options.set_editor_property('create_physics_asset', False)
container_name += suffix
options.set_editor_property('mesh_type_to_import',
unreal.FBXImportType.FBXIT_SKELETAL_MESH)
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
options.skeletal_mesh_import_data.set_editor_property(
'import_content_type',
unreal.FBXImportContentType.FBXICT_ALL
)
# set to import normals, otherwise Unreal will compute them
# and it will take a long time, depending on the size of the mesh
options.skeletal_mesh_import_data.set_editor_property(
'normal_import_method',
unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS
)
self.import_and_containerize(
path, asset_dir, asset_name, container_name)
task.options = options
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
self.imprint(
asset, asset_dir, container_name, asset_name, representation)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:


@@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@@ -20,6 +24,8 @@ class StaticMeshAlembicLoader(plugin.Loader):
icon = "cube"
color = "orange"
root = AYON_ASSET_DIR
@staticmethod
def get_task(filename, asset_dir, asset_name, replace, default_conversion):
task = unreal.AssetImportTask()
@@ -53,14 +59,40 @@
return task
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name,
default_conversion=False
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = self.get_task(
filepath, asset_dir, asset_name, False, default_conversion)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"]
}
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
This is a two-step process. First, import FBX to a temporary path and
then call `containerise()` on it - this moves all content to new
directory and then it will create AssetContainer there and imprint it
with metadata. This will mark this path as container.
Args:
context (dict): application context
name (str): subset name
@@ -68,15 +100,12 @@ class StaticMeshAlembicLoader(plugin.Loader):
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Those would be data to be imprinted. This is not used
now, data are imprinted by `containerise()`.
data (dict): Those would be data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
@@ -93,39 +122,22 @@
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name_version}", suffix="")
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
unreal.EditorAssetLibrary.make_directory(asset_dir)
path = self.filepath_from_context(context)
task = self.get_task(
path, asset_dir, asset_name, False, default_conversion)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
self.import_and_containerize(path, asset_dir, asset_name,
container_name, default_conversion)
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"])
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:
@@ -134,27 +146,36 @@ class StaticMeshAlembicLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
context = representation.get("context", {})
task = self.get_task(source_path, destination_path, name, True, False)
if not context:
raise RuntimeError("No context found in representation")
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
self.import_and_containerize(path, asset_dir, asset_name,
container_name)
self.imprint(
asset, asset_dir, container_name, asset_name, representation)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:


@@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@@ -20,6 +24,8 @@ class StaticMeshFBXLoader(plugin.Loader):
icon = "cube"
color = "orange"
root = AYON_ASSET_DIR
@staticmethod
def get_task(filename, asset_dir, asset_name, replace):
task = unreal.AssetImportTask()
@@ -46,14 +52,39 @@
return task
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = self.get_task(
filepath, asset_dir, asset_name, False)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"]
}
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
This is a two-step process. First, import FBX to a temporary path and
then call `containerise()` on it - this moves all content to new
directory and then it will create AssetContainer there and imprint it
with metadata. This will mark this path as container.
Args:
context (dict): application context
name (str): subset name
@@ -61,23 +92,15 @@ class StaticMeshFBXLoader(plugin.Loader):
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
options (dict): Those would be data to be imprinted. This is not
used now, data are imprinted by `containerise()`.
options (dict): Those would be data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
if options and options.get("asset_dir"):
root = options["asset_dir"]
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
@@ -87,35 +110,20 @@ class StaticMeshFBXLoader(plugin.Loader):
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name_version}", suffix=""
f"{self.root}/{asset}/{name_version}", suffix=""
)
container_name += suffix
unreal.EditorAssetLibrary.make_directory(asset_dir)
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = self.filepath_from_context(context)
path = self.filepath_from_context(context)
task = self.get_task(path, asset_dir, asset_name, False)
self.import_and_containerize(
path, asset_dir, asset_name, container_name)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"])
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@@ -127,27 +135,36 @@ class StaticMeshFBXLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
context = representation.get("context", {})
task = self.get_task(source_path, destination_path, name, True)
if not context:
raise RuntimeError("No context found in representation")
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
self.import_and_containerize(
path, asset_dir, asset_name, container_name)
self.imprint(
asset, asset_dir, container_name, asset_name, representation)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:


@@ -41,7 +41,7 @@ class UAssetLoader(plugin.Loader):
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
root = unreal_pipeline.AYON_ASSET_DIR
asset = context.get('asset').get('name')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"


@@ -86,7 +86,7 @@ class YetiLoader(plugin.Loader):
raise RuntimeError("Groom plugin is not activated.")
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
root = unreal_pipeline.AYON_ASSET_DIR
asset = context.get('asset').get('name')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"


@@ -3,6 +3,7 @@ import os
import re
import pyblish.api
from openpype.pipeline.publish import PublishValidationError
class ValidateSequenceFrames(pyblish.api.InstancePlugin):
@@ -39,8 +40,22 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
collections, remainder = clique.assemble(
repr["files"], minimum_items=1, patterns=patterns)
assert not remainder, "Must not have remainder"
assert len(collections) == 1, "Must detect single collection"
if remainder:
raise PublishValidationError(
"Some files have been found outside a sequence. "
f"Invalid files: {remainder}")
if not collections:
raise PublishValidationError(
"We have been unable to find a sequence in the "
"files. Please ensure the files are named "
"appropriately. "
f"Files: {repr_files}")
if len(collections) > 1:
raise PublishValidationError(
"Multiple collections detected. There should be a single "
"collection per representation. "
f"Collections identified: {collections}")
collection = collections[0]
frames = list(collection.indexes)
@@ -53,8 +68,12 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
data["clipOut"])
if current_range != required_range:
raise ValueError(f"Invalid frame range: {current_range} - "
f"expected: {required_range}")
raise PublishValidationError(
f"Invalid frame range: {current_range} - "
f"expected: {required_range}")
missing = collection.holes().indexes
assert not missing, "Missing frames: %s" % (missing,)
if missing:
raise PublishValidationError(
"Missing frames have been detected. "
f"Missing frames: {missing}")
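The refactor above replaces bare `assert` statements with explicit `PublishValidationError` raises, one per failure mode. The overall checking logic can be sketched in isolation like this (a minimal stand-in that uses `re` in place of the `clique` library; the helper name and file-name pattern are hypothetical):

```python
import re


def validate_sequence(files, required_range):
    """Check that files form one contiguous frame sequence.

    Mirrors the four checks from ValidateSequenceFrames: no stray
    files, a sequence exists, the range matches, and no holes.
    """
    frames, remainder = [], []
    for name in files:
        # Expect names like "shot.1001.exr" -> frame number 1001
        match = re.search(r"\.(\d+)\.\w+$", name)
        if match:
            frames.append(int(match.group(1)))
        else:
            remainder.append(name)

    if remainder:
        raise ValueError(f"Files found outside a sequence: {remainder}")
    if not frames:
        raise ValueError(f"No frame sequence found in: {files}")

    current_range = (min(frames), max(frames))
    if current_range != required_range:
        raise ValueError(
            f"Invalid frame range: {current_range} - "
            f"expected: {required_range}")

    # range(*required_range) excludes the end frame, which is fine here
    # because max(frames) == required_range[1] was just verified above.
    missing = sorted(set(range(*required_range)) - set(frames))
    if missing:
        raise ValueError(f"Missing frames: {missing}")
```

Raising a distinct error per failure mode, as the committed plugin does, gives artists an actionable message instead of a bare `AssertionError`.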


@@ -156,8 +156,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
self.log.debug("frameEnd:: {}".format(
instance.data["frameEnd"]))
except Exception:
self.log.warning("Unable to count frames "
"duration {}".format(no_of_frames))
self.log.warning("Unable to count frames duration.")
instance.data["handleStart"] = asset_doc["data"]["handleStart"]
instance.data["handleEnd"] = asset_doc["data"]["handleEnd"]

View file

@ -237,8 +237,13 @@ class UISeparatorDef(UIDef):
class UILabelDef(UIDef):
type = "label"
def __init__(self, label):
super(UILabelDef, self).__init__(label=label)
def __init__(self, label, key=None):
super(UILabelDef, self).__init__(label=label, key=key)
def __eq__(self, other):
if not super(UILabelDef, self).__eq__(other):
return False
return self.label == other.label
# ---------------------------------------

View file

@ -611,6 +611,12 @@ def get_openpype_username():
settings and last option is to use `getpass.getuser()` which returns
machine username.
"""
if AYON_SERVER_ENABLED:
import ayon_api
return ayon_api.get_user()["name"]
username = os.environ.get("OPENPYPE_USERNAME")
if not username:
local_settings = get_local_settings()

View file

@ -31,13 +31,13 @@ from openpype.settings.lib import (
get_studio_system_settings_overrides,
load_json_file
)
from openpype.settings.ayon_settings import is_dev_mode_enabled
from openpype.lib import (
Logger,
import_filepath,
import_module_from_dirpath,
)
from openpype.lib.openpype_version import is_staging_enabled
from .interfaces import (
OpenPypeInterface,
@ -66,6 +66,7 @@ IGNORED_FILENAMES_IN_AYON = {
"shotgrid",
"sync_server",
"slack",
"kitsu",
}
@ -317,21 +318,10 @@ def load_modules(force=False):
time.sleep(0.1)
def _get_ayon_addons_information():
"""Receive information about addons to use from server.
Todos:
Actually ask server for the information.
Allow project name as optional argument to be able to query information
about used addons for specific project.
Returns:
List[Dict[str, Any]]: List of addon information to use.
"""
output = []
def _get_ayon_bundle_data():
bundle_name = os.getenv("AYON_BUNDLE_NAME")
bundles = ayon_api.get_bundles()["bundles"]
final_bundle = next(
return next(
(
bundle
for bundle in bundles
@ -339,10 +329,22 @@ def _get_ayon_addons_information():
),
None
)
if final_bundle is None:
return output
bundle_addons = final_bundle["addons"]
def _get_ayon_addons_information(bundle_info):
"""Receive information about addons to use from server.
Todos:
Actually ask server for the information.
Allow project name as optional argument to be able to query information
about used addons for specific project.
Returns:
List[Dict[str, Any]]: List of addon information to use.
"""
output = []
bundle_addons = bundle_info["addons"]
addons = ayon_api.get_addons_info()["addons"]
for addon in addons:
name = addon["name"]
@ -378,38 +380,73 @@ def _load_ayon_addons(openpype_modules, modules_key, log):
v3_addons_to_skip = []
addons_info = _get_ayon_addons_information()
bundle_info = _get_ayon_bundle_data()
addons_info = _get_ayon_addons_information(bundle_info)
if not addons_info:
return v3_addons_to_skip
addons_dir = os.environ.get("AYON_ADDONS_DIR")
if not addons_dir:
addons_dir = os.path.join(
appdirs.user_data_dir("AYON", "Ynput"),
"addons"
)
if not os.path.exists(addons_dir):
dev_mode_enabled = is_dev_mode_enabled()
dev_addons_info = {}
if dev_mode_enabled:
# Get dev addons info only when dev mode is enabled
dev_addons_info = bundle_info.get("addonDevelopment", dev_addons_info)
addons_dir_exists = os.path.exists(addons_dir)
if not addons_dir_exists:
log.warning("Addons directory does not exists. Path \"{}\"".format(
addons_dir
))
return v3_addons_to_skip
for addon_info in addons_info:
addon_name = addon_info["name"]
addon_version = addon_info["version"]
folder_name = "{}_{}".format(addon_name, addon_version)
addon_dir = os.path.join(addons_dir, folder_name)
if not os.path.exists(addon_dir):
log.debug((
"No localized client code found for addon {} {}."
).format(addon_name, addon_version))
dev_addon_info = dev_addons_info.get(addon_name, {})
use_dev_path = dev_addon_info.get("enabled", False)
addon_dir = None
if use_dev_path:
addon_dir = dev_addon_info["path"]
if not addon_dir or not os.path.exists(addon_dir):
log.warning((
"Dev addon {} {} path does not exists. Path \"{}\""
).format(addon_name, addon_version, addon_dir))
continue
elif addons_dir_exists:
folder_name = "{}_{}".format(addon_name, addon_version)
addon_dir = os.path.join(addons_dir, folder_name)
if not os.path.exists(addon_dir):
log.debug((
"No localized client code found for addon {} {}."
).format(addon_name, addon_version))
continue
if not addon_dir:
continue
sys.path.insert(0, addon_dir)
imported_modules = []
for name in os.listdir(addon_dir):
# Ignore of files is implemented to be able to run code from code
# where usually is more files than just the addon
# Ignore start and setup scripts
if name in ("setup.py", "start.py"):
continue
path = os.path.join(addon_dir, name)
basename, ext = os.path.splitext(name)
# Ignore folders/files with dot in name
# - dot names cannot be imported in Python
if "." in basename:
continue
is_dir = os.path.isdir(path)
is_py_file = ext.lower() == ".py"
if not is_py_file and not is_dir:

View file

@ -65,9 +65,11 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
job_info.BatchName += datetime.now().strftime("%d%m%Y%H%M%S")
# Deadline requires integers in frame range
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
frames = "{start}-{end}x{step}".format(
start=int(instance.data["frameStart"]),
end=int(instance.data["frameEnd"]),
start=int(start),
end=int(end),
step=int(instance.data["byFrameStep"]),
)
job_info.Frames = frames
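The Houdini Deadline hunk switches the submitted range from `frameStart`/`frameEnd` to the handle-inclusive `frameStartHandle`/`frameEndHandle`. Deadline expects an integer `start-endxstep` string, which can be sketched as (hypothetical helper name):

```python
def deadline_frames(frame_start_handle, frame_end_handle, by_frame_step):
    """Format a handle-inclusive frame range the way Deadline's
    Frames job-info field expects: 'start-endxstep' with integers."""
    return "{start}-{end}x{step}".format(
        start=int(frame_start_handle),
        end=int(frame_end_handle),
        step=int(by_frame_step),
    )
```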

View file

@ -48,6 +48,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
use_gpu = False
env_allowed_keys = []
env_search_replace_values = {}
workfile_dependency = True
@classmethod
def get_attribute_defs(cls):
@ -83,6 +84,11 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
"suspend_publish",
default=False,
label="Suspend publish"
),
BoolDef(
"workfile_dependency",
default=True,
label="Workfile Dependency"
)
]
@ -313,6 +319,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
"AuxFiles": []
}
# Add workfile dependency.
workfile_dependency = instance.data["attributeValues"].get(
"workfile_dependency", self.workfile_dependency
)
if workfile_dependency:
payload["JobInfo"].update({"AssetDependency0": script_path})
# TODO: rewrite for baking with sequences
if baking_submission:
payload["JobInfo"].update({

View file

@ -40,7 +40,7 @@ class LauncherAction(OpenPypeModule, ITrayAction):
actions_paths = self.manager.collect_plugin_paths()["actions"]
for path in actions_paths:
if path and os.path.exists(path):
register_launcher_action_path(actions_dir)
register_launcher_action_path(path)
paths_str = os.environ.get("AVALON_ACTIONS") or ""
if paths_str:

View file

@ -6,8 +6,6 @@ Requires:
import pyblish.api
from openpype.pipeline import legacy_io
class StartTimer(pyblish.api.ContextPlugin):
label = "Start Timer"
@ -25,9 +23,9 @@ class StartTimer(pyblish.api.ContextPlugin):
self.log.debug("Publish is not affecting running timers.")
return
project_name = legacy_io.active_project()
asset_name = legacy_io.Session.get("AVALON_ASSET")
task_name = legacy_io.Session.get("AVALON_TASK")
project_name = context.data["projectName"]
asset_name = context.data.get("asset")
task_name = context.data.get("task")
if not project_name or not asset_name or not task_name:
self.log.info((
"Current context does not contain all"

View file

@ -1,4 +1,3 @@
from copy import deepcopy
import re
import os
import json
@ -7,6 +6,7 @@ import functools
import platform
import tempfile
import warnings
from copy import deepcopy
from openpype import PACKAGE_DIR
from openpype.settings import get_project_settings
@ -356,7 +356,10 @@ def parse_colorspace_from_filepath(
"Must provide `config_path` if `colorspaces` is not provided."
)
colorspaces = colorspaces or get_ocio_config_colorspaces(config_path)
colorspaces = (
colorspaces
or get_ocio_config_colorspaces(config_path)["colorspaces"]
)
underscored_colorspaces = {
key.replace(" ", "_"): key for key in colorspaces
if " " in key
@ -393,7 +396,7 @@ def validate_imageio_colorspace_in_config(config_path, colorspace_name):
Returns:
bool: True if exists
"""
colorspaces = get_ocio_config_colorspaces(config_path)
colorspaces = get_ocio_config_colorspaces(config_path)["colorspaces"]
if colorspace_name not in colorspaces:
raise KeyError(
"Missing colorspace '{}' in config file '{}'".format(
@ -530,6 +533,157 @@ def get_ocio_config_colorspaces(config_path):
return CachedData.ocio_config_colorspaces[config_path]
def convert_colorspace_enumerator_item(
colorspace_enum_item,
config_items
):
"""Convert colorspace enumerator item to dictionary
Args:
colorspace_item (str): colorspace and family in couple
config_items (dict[str,dict]): colorspace data
Returns:
dict: colorspace data
"""
if "::" not in colorspace_enum_item:
return None
# split string with `::` separator and set first as key and second as value
item_type, item_name = colorspace_enum_item.split("::")
item_data = None
if item_type == "aliases":
# loop through all colorspaces and find matching alias
for name, _data in config_items.get("colorspaces", {}).items():
if item_name in _data.get("aliases", []):
item_data = deepcopy(_data)
item_data.update({
"name": name,
"type": "colorspace"
})
break
else:
# find matching colorspace item found in labeled_colorspaces
item_data = config_items.get(item_type, {}).get(item_name)
if item_data:
item_data = deepcopy(item_data)
item_data.update({
"name": item_name,
"type": item_type
})
# raise exception if item is not found
if not item_data:
message_config_keys = ", ".join(
"'{}':{}".format(
key,
set(config_items.get(key, {}).keys())
) for key in config_items.keys()
)
raise KeyError(
"Missing colorspace item '{}' in config data: [{}]".format(
colorspace_enum_item, message_config_keys
)
)
return item_data
def get_colorspaces_enumerator_items(
config_items,
include_aliases=False,
include_looks=False,
include_roles=False,
include_display_views=False
):
"""Get all colorspace data with labels
Wrapper function for aggregating all names and its families.
Families can be used for building menu and submenus in gui.
Args:
config_items (dict[str,dict]): colorspace data coming from
`get_ocio_config_colorspaces` function
include_aliases (bool): include aliases in result
include_looks (bool): include looks in result
include_roles (bool): include roles in result
Returns:
list[tuple[str,str]]: colorspace and family in couple
"""
labeled_colorspaces = []
aliases = set()
colorspaces = set()
looks = set()
roles = set()
display_views = set()
for items_type, colorspace_items in config_items.items():
if items_type == "colorspaces":
for color_name, color_data in colorspace_items.items():
if color_data.get("aliases"):
aliases.update([
(
"aliases::{}".format(alias_name),
"[alias] {} ({})".format(alias_name, color_name)
)
for alias_name in color_data["aliases"]
])
colorspaces.add((
"{}::{}".format(items_type, color_name),
"[colorspace] {}".format(color_name)
))
elif items_type == "looks":
looks.update([
(
"{}::{}".format(items_type, name),
"[look] {} ({})".format(name, role_data["process_space"])
)
for name, role_data in colorspace_items.items()
])
elif items_type == "displays_views":
display_views.update([
(
"{}::{}".format(items_type, name),
"[view (display)] {}".format(name)
)
for name, _ in colorspace_items.items()
])
elif items_type == "roles":
roles.update([
(
"{}::{}".format(items_type, name),
"[role] {} ({})".format(name, role_data["colorspace"])
)
for name, role_data in colorspace_items.items()
])
if roles and include_roles:
roles = sorted(roles, key=lambda x: x[0])
labeled_colorspaces.extend(roles)
# add colorspaces as second so it is not first in menu
colorspaces = sorted(colorspaces, key=lambda x: x[0])
labeled_colorspaces.extend(colorspaces)
if aliases and include_aliases:
aliases = sorted(aliases, key=lambda x: x[0])
labeled_colorspaces.extend(aliases)
if looks and include_looks:
looks = sorted(looks, key=lambda x: x[0])
labeled_colorspaces.extend(looks)
if display_views and include_display_views:
display_views = sorted(display_views, key=lambda x: x[0])
labeled_colorspaces.extend(display_views)
return labeled_colorspaces
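The two colorspace helpers added above pack an item type and an item name into one enumerator string with a `::` separator (`roles::scene_linear`, `aliases::lin_srgb`), so a single flat menu value can later be resolved back into typed config data. The packing convention can be sketched as a hypothetical helper:

```python
def split_colorspace_item(colorspace_enum_item):
    """Split an enumerator string of the form 'type::name' into a
    (type, name) tuple; return None for values without the separator,
    mirroring the early-out in convert_colorspace_enumerator_item."""
    if "::" not in colorspace_enum_item:
        return None
    item_type, item_name = colorspace_enum_item.split("::")
    return item_type, item_name
```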
# TODO: remove this in future - backward compatibility
@deprecated("_get_wrapped_with_subprocess")
def get_colorspace_data_subprocess(config_path):

View file

@ -25,10 +25,7 @@ from openpype.tests.lib import is_in_tests
from .publish.lib import filter_pyblish_plugins
from .anatomy import Anatomy
from .template_data import (
get_template_data_with_names,
get_template_data
)
from .template_data import get_template_data_with_names
from .workfile import (
get_workfile_template_key,
get_custom_workfile_template_by_string_context,
@ -483,6 +480,27 @@ def get_template_data_from_session(session=None, system_settings=None):
)
def get_current_context_template_data(system_settings=None):
"""Prepare template data for current context.
Args:
system_settings (Optional[Dict[str, Any]]): Prepared system settings.
Returns:
Dict[str, Any] Template data for current context.
"""
context = get_current_context()
project_name = context["project_name"]
asset_name = context["asset_name"]
task_name = context["task_name"]
host_name = get_current_host_name()
return get_template_data_with_names(
project_name, asset_name, task_name, host_name, system_settings
)
def get_workdir_from_session(session=None, template_key=None):
"""Template data for template fill from session keys.
@ -661,70 +679,3 @@ def get_process_id():
if _process_id is None:
_process_id = str(uuid.uuid4())
return _process_id
def get_current_context_template_data():
"""Template data for template fill from current context
Returns:
Dict[str, Any] of the following tokens and their values
Supported Tokens:
- Regular Tokens
- app
- user
- asset
- parent
- hierarchy
- folder[name]
- root[work, ...]
- studio[code, name]
- project[code, name]
- task[type, name, short]
- Context Specific Tokens
- assetData[frameStart]
- assetData[frameEnd]
- assetData[handleStart]
- assetData[handleEnd]
- assetData[frameStartHandle]
- assetData[frameEndHandle]
- assetData[resolutionHeight]
- assetData[resolutionWidth]
"""
# pre-prepare get_template_data args
current_context = get_current_context()
project_name = current_context["project_name"]
asset_name = current_context["asset_name"]
anatomy = Anatomy(project_name)
# prepare get_template_data args
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, asset_name)
task_name = current_context["task_name"]
host_name = get_current_host_name()
# get regular template data
template_data = get_template_data(
project_doc, asset_doc, task_name, host_name
)
template_data["root"] = anatomy.roots
# get context specific vars
asset_data = asset_doc["data"].copy()
# compute `frameStartHandle` and `frameEndHandle`
if "frameStart" in asset_data and "handleStart" in asset_data:
asset_data["frameStartHandle"] = \
asset_data["frameStart"] - asset_data["handleStart"]
if "frameEnd" in asset_data and "handleEnd" in asset_data:
asset_data["frameEndHandle"] = \
asset_data["frameEnd"] + asset_data["handleEnd"]
# add assetData
template_data["assetData"] = asset_data
return template_data
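The removed `get_current_context_template_data` above derived `frameStartHandle` and `frameEndHandle` from the asset's frame range plus handles before exposing them as `assetData` tokens. That arithmetic, isolated as a hypothetical helper, is:

```python
def with_handle_ranges(asset_data):
    """Return a copy of asset data with handle-inclusive frame bounds:
    frameStartHandle = frameStart - handleStart,
    frameEndHandle   = frameEnd + handleEnd."""
    data = dict(asset_data)
    if "frameStart" in data and "handleStart" in data:
        data["frameStartHandle"] = data["frameStart"] - data["handleStart"]
    if "frameEnd" in data and "handleEnd" in data:
        data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]
    return data
```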

View file

@ -758,7 +758,7 @@ class PublishAttributes:
yield name
def mark_as_stored(self):
self._origin_data = copy.deepcopy(self._data)
self._origin_data = copy.deepcopy(self.data_to_store())
def data_to_store(self):
"""Convert attribute values to "data to store"."""
@ -912,6 +912,12 @@ class CreatedInstance:
# Create a copy of passed data to avoid changing them on the fly
data = copy.deepcopy(data or {})
# Pop dictionary values that will be converted to objects to be able
# catch changes
orig_creator_attributes = data.pop("creator_attributes", None) or {}
orig_publish_attributes = data.pop("publish_attributes", None) or {}
# Store original value of passed data
self._orig_data = copy.deepcopy(data)
@ -919,10 +925,6 @@ class CreatedInstance:
data.pop("family", None)
data.pop("subset", None)
# Pop dictionary values that will be converted to objects to be able
# catch changes
orig_creator_attributes = data.pop("creator_attributes", None) or {}
orig_publish_attributes = data.pop("publish_attributes", None) or {}
# QUESTION Does it make sense to have data stored as ordered dict?
self._data = collections.OrderedDict()
@ -1039,7 +1041,10 @@ class CreatedInstance:
@property
def origin_data(self):
return copy.deepcopy(self._orig_data)
output = copy.deepcopy(self._orig_data)
output["creator_attributes"] = self.creator_attributes.origin_data
output["publish_attributes"] = self.publish_attributes.origin_data
return output
@property
def creator_identifier(self):
@ -1095,7 +1100,7 @@ class CreatedInstance:
def changes(self):
"""Calculate and return changes."""
return TrackChangesItem(self._orig_data, self.data_to_store())
return TrackChangesItem(self.origin_data, self.data_to_store())
def mark_as_stored(self):
"""Should be called when instance data are stored.
@ -1211,7 +1216,7 @@ class CreatedInstance:
publish_attributes = self.publish_attributes.serialize_attributes()
return {
"data": self.data_to_store(),
"orig_data": copy.deepcopy(self._orig_data),
"orig_data": self.origin_data,
"creator_attr_defs": creator_attr_defs,
"publish_attributes": publish_attributes,
"creator_label": self._creator_label,
@ -1251,7 +1256,7 @@ class CreatedInstance:
creator_identifier=creator_identifier,
creator_label=creator_label,
group_label=group_label,
creator_attributes=creator_attr_defs
creator_attr_defs=creator_attr_defs
)
obj._orig_data = serialized_data["orig_data"]
obj.publish_attributes.deserialize_attributes(publish_attributes)
@ -2331,6 +2336,10 @@ class CreateContext:
identifier, label, exc_info, add_traceback
)
)
else:
for update_data in update_list:
instance = update_data.instance
instance.mark_as_stored()
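The `CreatedInstance` hunks above fix change tracking by comparing against a deep-copied snapshot (`origin_data`) and refreshing that snapshot in `mark_as_stored` after a successful save. The underlying pattern, reduced to a minimal sketch (class and method names here are illustrative):

```python
import copy

class Tracked:
    """Minimal origin-data pattern: keep a deep-copied snapshot of the
    data at load/store time and diff against it to detect changes."""

    def __init__(self, data):
        self._data = dict(data)
        self._origin = copy.deepcopy(self._data)

    def changes(self):
        # Map of key -> (old, new) for every value that drifted from
        # the stored snapshot.
        return {
            key: (self._origin.get(key), value)
            for key, value in self._data.items()
            if self._origin.get(key) != value
        }

    def mark_as_stored(self):
        # After persisting, the current state becomes the new baseline,
        # as the save loop in the diff now does per instance.
        self._origin = copy.deepcopy(self._data)
```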
if failed_info:
raise CreatorsSaveFailed(failed_info)

Some files were not shown because too many files have changed in this diff.