resolve conflict

Kayla Man 2023-11-06 15:39:54 +08:00
commit ad36379837
124 changed files with 3758 additions and 1402 deletions

View file

@ -35,6 +35,11 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
- 3.17.5-nightly.3
- 3.17.5-nightly.2
- 3.17.5-nightly.1
- 3.17.4
- 3.17.4-nightly.2
- 3.17.4-nightly.1
- 3.17.3
- 3.17.3-nightly.2
@ -130,11 +135,6 @@ body:
- 3.15.1-nightly.5
- 3.15.1-nightly.4
- 3.15.1-nightly.3
- 3.15.1-nightly.2
- 3.15.1-nightly.1
- 3.15.0
- 3.15.0-nightly.1
- 3.14.11-nightly.4
validations:
required: true
- type: dropdown

View file

@ -1,6 +1,274 @@
# Changelog
## [3.17.4](https://github.com/ynput/OpenPype/tree/3.17.4)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.3...3.17.4)
### **🆕 New features**
<details>
<summary>Add Support for Husk-AYON Integration <a href="https://github.com/ynput/OpenPype/pull/5816">#5816</a></summary>
This draft pull request introduces support for integrating Husk with AYON within the OpenPype repository.
___
</details>
<details>
<summary>Push to project tool: Prepare push to project tool for AYON <a href="https://github.com/ynput/OpenPype/pull/5770">#5770</a></summary>
Cloned the Push to project tool for AYON and modified it.
___
</details>
### **🚀 Enhancements**
<details>
<summary>Max: tycache family support <a href="https://github.com/ynput/OpenPype/pull/5624">#5624</a></summary>
TyCache family support for the TyFlow plugin in Max.
___
</details>
<details>
<summary>Unreal: Changed behaviour for updating assets <a href="https://github.com/ynput/OpenPype/pull/5670">#5670</a></summary>
Changed how assets are updated in Unreal.
___
</details>
<details>
<summary>Unreal: Improved error reporting for Sequence Frame Validator <a href="https://github.com/ynput/OpenPype/pull/5730">#5730</a></summary>
Improved error reporting for Sequence Frame Validator.
___
</details>
<details>
<summary>Max: Setting tweaks on Review Family <a href="https://github.com/ynput/OpenPype/pull/5744">#5744</a></summary>
- Fixes a bug where the preferred visual style could not be published when creating a preview animation
- Exposes the parameters after creating the instance
- Adds quality settings and viewport texture settings for the preview animation
- Adds "use selection" for review creation
___
</details>
<details>
<summary>Max: Add families with frame range extractions back to the frame range validator <a href="https://github.com/ynput/OpenPype/pull/5757">#5757</a></summary>
In 3ds Max, some instances export files over a frame range but were not covered by the optional frame range validator. This PR adds the optional frame range validator to these instances so users can check whether the frame range aligns with the context data from the DB (see the sketch after this list). The following families now have the optional frame range validator:
- maxrender
- review
- camera
- redshift proxy
- pointcache
- point cloud (tyFlow PRT)
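As a rough illustration, a validator of this kind compares the instance's collected frame range against the asset's frame range from the database. A minimal, hedged sketch (simplified names; the actual plugin's attributes, actions, and messages differ):

```python
import pyblish.api
from openpype.pipeline import PublishValidationError


class ValidateFrameRangeSketch(pyblish.api.InstancePlugin):
    """Sketch: check the instance frame range against asset data."""
    order = pyblish.api.ValidatorOrder
    optional = True  # users may toggle the check off

    def process(self, instance):
        asset_data = instance.data["assetEntity"]["data"]
        # Compare the collected range with the asset range from the DB.
        if (instance.data["frameStart"] != asset_data["frameStart"]
                or instance.data["frameEnd"] != asset_data["frameEnd"]):
            raise PublishValidationError(
                "Instance frame range does not match the asset's range.")
```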
___
</details>
<details>
<summary>TimersManager: Use available data to get context info <a href="https://github.com/ynput/OpenPype/pull/5804">#5804</a></summary>
Get context information from pyblish context data instead of using `legacy_io`.
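For illustration, a minimal sketch of reading that information from the pyblish context data (the `projectName` key appears elsewhere in this commit; `asset` and `task` are assumed key names):

```python
# Sketch: context info pulled from pyblish context data, not legacy_io.
def get_context_info(context):
    return {
        "project_name": context.data["projectName"],
        "asset_name": context.data["asset"],  # assumed key name
        "task_name": context.data["task"],    # assumed key name
    }
```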
___
</details>
<details>
<summary>Chore: Removed unused variable from `AbstractCollectRender` <a href="https://github.com/ynput/OpenPype/pull/5805">#5805</a></summary>
Removed unused `_asset` variable from `RenderInstance`.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Bugfix/houdini: wrong frame calculation with handles <a href="https://github.com/ynput/OpenPype/pull/5698">#5698</a></summary>
This PR makes collect plugins consider `handleStart` and `handleEnd` when collecting the frame range (see the sketch after this list). It affects three parts:
- frame range collection in collect plugins
- expected files in render plugins
- the Houdini Deadline job submission plugin
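The handle arithmetic these plugins share is the one used by the Houdini collectors later in this commit; a minimal sketch:

```python
def frame_range_without_handles(frame_start_handle, frame_end_handle,
                                handle_start, handle_end):
    """Sketch: derive frameStart/frameEnd from the inclusive ROP range.

    frameStartHandle/frameEndHandle is the full range including handles;
    subtracting the handles yields the range without them.
    """
    return (frame_start_handle + handle_start,
            frame_end_handle - handle_end)
```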
___
</details>
<details>
<summary>Nuke: ayon server settings improvements <a href="https://github.com/ynput/OpenPype/pull/5746">#5746</a></summary>
Nuke settings were not aligned with OpenPype settings, and some labels needed improvement.
___
</details>
<details>
<summary>Blender: Fix pointcache family and fix alembic extractor <a href="https://github.com/ynput/OpenPype/pull/5747">#5747</a></summary>
Fixed the `pointcache` family and the behaviour of the Alembic extractor.
___
</details>
<details>
<summary>AYON: Remove 'shotgun_api3' from dependencies <a href="https://github.com/ynput/OpenPype/pull/5803">#5803</a></summary>
Removed the `shotgun_api3` dependency from OpenPype dependencies for the AYON launcher. The dependency is already defined in the ShotGrid addon, and a version change causes clashes.
___
</details>
<details>
<summary>Chore: Fix typo in filename <a href="https://github.com/ynput/OpenPype/pull/5807">#5807</a></summary>
Moved the content of `contants.py` into `constants.py`.
___
</details>
<details>
<summary>Chore: Create context respects instance changes <a href="https://github.com/ynput/OpenPype/pull/5809">#5809</a></summary>
Fixed an issue where changes were not propagated in `CreateContext`. All successfully saved instances are marked as saved, so they report no changes. An instance's origin data is explicitly handled by the attribute wrappers rather than directly by the object.
___
</details>
<details>
<summary>Blender: Fix tools handling in AYON mode <a href="https://github.com/ynput/OpenPype/pull/5811">#5811</a></summary>
Skip logic in `before_window_show` in Blender when in AYON mode. Most of what is called there happens automatically on show.
___
</details>
<details>
<summary>Blender: Include Grease Pencil in review and thumbnails <a href="https://github.com/ynput/OpenPype/pull/5812">#5812</a></summary>
Include Grease Pencil in review and thumbnails.
___
</details>
<details>
<summary>Workfiles tool AYON: Fix double click of workfile <a href="https://github.com/ynput/OpenPype/pull/5813">#5813</a></summary>
Fixed double-click on a workfile in the Workfiles tool so it opens the file.
___
</details>
<details>
<summary>Webpublisher: removal of usage of no_of_frames in error message <a href="https://github.com/ynput/OpenPype/pull/5819">#5819</a></summary>
If an exception is thrown, the `no_of_frames` value won't be available, so it doesn't make sense to log it.
___
</details>
<details>
<summary>Attribute Defs: Hide multivalue widget in Number by default <a href="https://github.com/ynput/OpenPype/pull/5821">#5821</a></summary>
Fixed default look of `NumberAttrWidget` by hiding its multiselection widget.
___
</details>
### **Merged pull requests**
<details>
<summary>Corrected a typo in Readme.md (Top -> To) <a href="https://github.com/ynput/OpenPype/pull/5800">#5800</a></summary>
___
</details>
<details>
<summary>Photoshop: Removed redundant copy of extension.zxp <a href="https://github.com/ynput/OpenPype/pull/5802">#5802</a></summary>
`extension.zxp` shouldn't be inside the extension folder.
___
</details>
## [3.17.3](https://github.com/ynput/OpenPype/tree/3.17.3)

View file

@ -32,7 +32,7 @@ class BlendLoader(plugin.AssetLoader):
empties = [obj for obj in objects if obj.type == 'EMPTY']
for empty in empties:
if empty.get(AVALON_PROPERTY):
if empty.get(AVALON_PROPERTY) and empty.parent is None:
return empty
return None
@ -90,6 +90,7 @@ class BlendLoader(plugin.AssetLoader):
members.append(data)
container = self._get_asset_container(data_to.objects)
print(container)
assert container, "No asset group found"
container.name = group_name
@ -100,8 +101,11 @@ class BlendLoader(plugin.AssetLoader):
# Link all the container children to the collection
for obj in container.children_recursive:
print(obj)
bpy.context.scene.collection.objects.link(obj)
print("")
# Remove the library from the blend file
library = bpy.data.libraries.get(bpy.path.basename(libpath))
bpy.data.libraries.remove(library)

View file

@ -31,11 +31,12 @@ class CollectReview(pyblish.api.InstancePlugin):
focal_length = cameras[0].data.lens
# get isolate objects list from meshes instance members .
# get isolate objects list from meshes instance members.
types = {"MESH", "GPENCIL"}
isolate_objects = [
obj
for obj in instance
if isinstance(obj, bpy.types.Object) and obj.type == "MESH"
if isinstance(obj, bpy.types.Object) and obj.type in types
]
if not instance.data.get("remove"):

View file

@ -21,7 +21,7 @@ class ExtractABC(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()

View file

@ -21,7 +21,7 @@ class ExtractAnimationABC(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()

View file

@ -21,7 +21,7 @@ class ExtractBlend(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
data_blocks = set()

View file

@ -21,7 +21,7 @@ class ExtractBlendAnimation(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
data_blocks = set()

View file

@ -22,7 +22,7 @@ class ExtractCameraABC(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()

View file

@ -21,7 +21,7 @@ class ExtractCamera(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()

View file

@ -22,7 +22,7 @@ class ExtractFBX(publish.Extractor):
filepath = os.path.join(stagingdir, filename)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
plugin.deselect_all()

View file

@ -23,7 +23,7 @@ class ExtractAnimationFBX(publish.Extractor):
stagingdir = self.staging_dir(instance)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
# The first collection object in the instance is taken, as there
# should be only one that contains the asset group.

View file

@ -117,7 +117,7 @@ class ExtractLayout(publish.Extractor):
stagingdir = self.staging_dir(instance)
# Perform extraction
self.log.info("Performing extraction..")
self.log.debug("Performing extraction..")
if "representations" not in instance.data:
instance.data["representations"] = []

View file

@ -24,9 +24,7 @@ class ExtractPlayblast(publish.Extractor):
order = pyblish.api.ExtractorOrder + 0.01
def process(self, instance):
self.log.info("Extracting capture..")
self.log.info(instance.data)
self.log.debug("Extracting capture..")
# get scene fps
fps = instance.data.get("fps")
@ -34,14 +32,14 @@ class ExtractPlayblast(publish.Extractor):
fps = bpy.context.scene.render.fps
instance.data["fps"] = fps
self.log.info(f"fps: {fps}")
self.log.debug(f"fps: {fps}")
# If start and end frames cannot be determined,
# get them from Blender timeline.
start = instance.data.get("frameStart", bpy.context.scene.frame_start)
end = instance.data.get("frameEnd", bpy.context.scene.frame_end)
self.log.info(f"start: {start}, end: {end}")
self.log.debug(f"start: {start}, end: {end}")
assert end > start, "Invalid time range !"
# get cameras
@ -55,7 +53,7 @@ class ExtractPlayblast(publish.Extractor):
filename = instance.name
path = os.path.join(stagingdir, filename)
self.log.info(f"Outputting images to {path}")
self.log.debug(f"Outputting images to {path}")
project_settings = instance.context.data["project_settings"]["blender"]
presets = project_settings["publish"]["ExtractPlayblast"]["presets"]
@ -100,7 +98,7 @@ class ExtractPlayblast(publish.Extractor):
frame_collection = collections[0]
self.log.info(f"We found collection of interest {frame_collection}")
self.log.debug(f"Found collection of interest {frame_collection}")
instance.data.setdefault("representations", [])

View file

@ -24,13 +24,13 @@ class ExtractThumbnail(publish.Extractor):
presets = {}
def process(self, instance):
self.log.info("Extracting capture..")
self.log.debug("Extracting capture..")
stagingdir = self.staging_dir(instance)
filename = instance.name
path = os.path.join(stagingdir, filename)
self.log.info(f"Outputting images to {path}")
self.log.debug(f"Outputting images to {path}")
camera = instance.data.get("review_camera", "AUTO")
start = instance.data.get("frameStart", bpy.context.scene.frame_start)
@ -61,7 +61,7 @@ class ExtractThumbnail(publish.Extractor):
thumbnail = os.path.basename(self._fix_output_path(path))
self.log.info(f"thumbnail: {thumbnail}")
self.log.debug(f"thumbnail: {thumbnail}")
instance.data.setdefault("representations", [])

View file

@ -280,7 +280,11 @@ def get_current_comp():
@contextlib.contextmanager
def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD"):
def comp_lock_and_undo_chunk(
comp,
undo_queue_name="Script CMD",
keep_undo=True,
):
"""Lock comp and open an undo chunk during the context"""
try:
comp.Lock()
@ -288,4 +292,4 @@ def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD"):
yield
finally:
comp.Unlock()
comp.EndUndo()
comp.EndUndo(keep_undo)
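A minimal usage sketch of the new `keep_undo` flag, matching how the resolution validator added later in this commit discards its temporary undo event:

```python
comp = get_current_comp()
# keep_undo=False drops the chunk from the undo list when the context
# exits, so temporary queries leave no trace for the artist to undo.
with comp_lock_and_undo_chunk(comp, "Read resolution", keep_undo=False):
    pass  # temporary comp edits / queries here
```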

View file

@ -69,8 +69,6 @@ class CreateSaver(NewCreator):
# TODO Is this needed?
saver[file_format]["SaveAlpha"] = 1
self._imprint(saver, instance_data)
# Register the CreatedInstance
instance = CreatedInstance(
family=self.family,
@ -78,6 +76,8 @@ class CreateSaver(NewCreator):
data=instance_data,
creator=self,
)
data = instance.data_to_store()
self._imprint(saver, data)
# Insert the transient data
instance.transient_data["tool"] = saver

View file

@ -11,6 +11,7 @@ class FusionSetFrameRangeLoader(load.LoaderPlugin):
families = ["animation",
"camera",
"imagesequence",
"render",
"yeticache",
"pointcache",
"render"]
@ -46,6 +47,7 @@ class FusionSetFrameRangeWithHandlesLoader(load.LoaderPlugin):
families = ["animation",
"camera",
"imagesequence",
"render",
"yeticache",
"pointcache",
"render"]

View file

@ -0,0 +1,87 @@
from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.hosts.fusion.api import (
imprint_container,
get_current_comp,
comp_lock_and_undo_chunk
)
from openpype.hosts.fusion.api.lib import get_fusion_module
class FusionLoadUSD(load.LoaderPlugin):
"""Load USD into Fusion
Support for USD was added in Fusion 18.5
"""
families = ["*"]
representations = ["*"]
extensions = {"usd", "usda", "usdz"}
label = "Load USD"
order = -10
icon = "code-fork"
color = "orange"
tool_type = "uLoader"
@classmethod
def apply_settings(cls, project_settings, system_settings):
super(FusionLoadUSD, cls).apply_settings(project_settings,
system_settings)
if cls.enabled:
# Enable only in Fusion 18.5+
fusion = get_fusion_module()
version = fusion.GetVersion()
major = version[1]
minor = version[2]
is_usd_supported = (major, minor) >= (18, 5)
cls.enabled = is_usd_supported
def load(self, context, name, namespace, data):
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
# Create the Loader with the filename path set
comp = get_current_comp()
with comp_lock_and_undo_chunk(comp, "Create tool"):
path = self.fname
args = (-32768, -32768)
tool = comp.AddTool(self.tool_type, *args)
tool["Filename"] = path
imprint_container(tool,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__)
def switch(self, container, representation):
self.update(container, representation)
def update(self, container, representation):
tool = container["_tool"]
assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
comp = tool.Comp()
path = get_representation_path(representation)
with comp_lock_and_undo_chunk(comp, "Update tool"):
tool["Filename"] = path
# Update the imprinted representation
tool.SetData("avalon.representation", str(representation["_id"]))
def remove(self, container):
tool = container["_tool"]
assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
comp = tool.Comp()
with comp_lock_and_undo_chunk(comp, "Remove tool"):
tool.Delete()

View file

@ -0,0 +1,105 @@
import pyblish.api
from openpype.pipeline import (
PublishValidationError,
OptionalPyblishPluginMixin,
)
from openpype.hosts.fusion.api.action import SelectInvalidAction
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
def get_tool_resolution(tool, frame):
"""Return the 2D input resolution to a Fusion tool
If the current tool hasn't been rendered its input resolution
hasn't been saved. To combat this, add an expression in
the comments field to read the resolution
Args:
tool (Fusion Tool): The tool to query input resolution
frame (int): The frame to query the resolution on.
Returns:
tuple: width, height as 2-tuple of integers
"""
comp = tool.Composition
# False undo removes the undo-stack from the undo list
with comp_lock_and_undo_chunk(comp, "Read resolution", False):
# Save old comment
old_comment = ""
has_expression = False
if tool["Comments"][frame] != "":
if tool["Comments"].GetExpression() is not None:
has_expression = True
old_comment = tool["Comments"].GetExpression()
tool["Comments"].SetExpression(None)
else:
old_comment = tool["Comments"][frame]
tool["Comments"][frame] = ""
# Get input width
tool["Comments"].SetExpression("self.Input.OriginalWidth")
width = int(tool["Comments"][frame])
# Get input height
tool["Comments"].SetExpression("self.Input.OriginalHeight")
height = int(tool["Comments"][frame])
# Reset old comment
tool["Comments"].SetExpression(None)
if has_expression:
tool["Comments"].SetExpression(old_comment)
else:
tool["Comments"][frame] = old_comment
return width, height
class ValidateSaverResolution(
pyblish.api.InstancePlugin, OptionalPyblishPluginMixin
):
"""Validate that the saver input resolution matches the asset resolution"""
order = pyblish.api.ValidatorOrder
label = "Validate Asset Resolution"
families = ["render"]
hosts = ["fusion"]
optional = True
actions = [SelectInvalidAction]
def process(self, instance):
if not self.is_active(instance.data):
return
resolution = self.get_resolution(instance)
expected_resolution = self.get_expected_resolution(instance)
if resolution != expected_resolution:
raise PublishValidationError(
"The input's resolution does not match "
"the asset's resolution {}x{}.\n\n"
"The input's resolution is {}x{}.".format(
expected_resolution[0], expected_resolution[1],
resolution[0], resolution[1]
)
)
@classmethod
def get_invalid(cls, instance):
resolution = cls.get_resolution(instance)
expected_resolution = cls.get_expected_resolution(instance)
if resolution != expected_resolution:
saver = instance.data["tool"]
return [saver]
@classmethod
def get_resolution(cls, instance):
saver = instance.data["tool"]
first_frame = instance.data["frameStartHandle"]
return get_tool_resolution(saver, frame=first_frame)
@classmethod
def get_expected_resolution(cls, instance):
data = instance.data["assetEntity"]["data"]
return data["resolutionWidth"], data["resolutionHeight"]

View file

@ -11,20 +11,21 @@ import json
import six
from openpype.lib import StringTemplate
from openpype.client import get_asset_by_name
from openpype.client import get_project, get_asset_by_name
from openpype.settings import get_current_project_settings
from openpype.pipeline import (
Anatomy,
get_current_project_name,
get_current_asset_name,
registered_host
)
from openpype.pipeline.context_tools import (
get_current_context_template_data,
get_current_project_asset
registered_host,
get_current_context,
get_current_host_name,
)
from openpype.pipeline.create import CreateContext
from openpype.pipeline.template_data import get_template_data
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.widgets import popup
from openpype.tools.utils.host_tools import get_tool_by_name
from openpype.pipeline.create import CreateContext
import hou
@ -568,9 +569,9 @@ def get_template_from_value(key, value):
return parm
def get_frame_data(node, handle_start=0, handle_end=0, log=None):
"""Get the frame data: start frame, end frame, steps,
start frame with start handle and end frame with end handle.
def get_frame_data(node, log=None):
"""Get the frame data: `frameStartHandle`, `frameEndHandle`
and `byFrameStep`.
This function uses Houdini node's `trange`, `t1`, `t2` and `t3`
parameters as the source of truth for the full inclusive frame
@ -578,20 +579,17 @@ def get_frame_data(node, handle_start=0, handle_end=0, log=None):
range including the handles.
The non-inclusive frame start and frame end without handles
are computed by subtracting the handles from the inclusive
can be computed by subtracting the handles from the inclusive
frame range.
Args:
node (hou.Node): ROP node to retrieve frame range from,
the frame range is assumed to be the frame range
*including* the start and end handles.
handle_start (int): Start handles.
handle_end (int): End handles.
log (logging.Logger): Logger to log to.
Returns:
dict: frame data for start, end, steps,
start with handle and end with handle
dict: frame data for `frameStartHandle`, `frameEndHandle`
and `byFrameStep`.
"""
@ -622,11 +620,6 @@ def get_frame_data(node, handle_start=0, handle_end=0, log=None):
data["frameEndHandle"] = int(node.evalParm("f2"))
data["byFrameStep"] = node.evalParm("f3")
data["handleStart"] = handle_start
data["handleEnd"] = handle_end
data["frameStart"] = data["frameStartHandle"] + data["handleStart"]
data["frameEnd"] = data["frameEndHandle"] - data["handleEnd"]
return data
@ -804,6 +797,45 @@ def get_camera_from_container(container):
return cameras[0]
def get_current_context_template_data_with_asset_data():
"""
TODOs:
Support both 'assetData' and 'folderData' in future.
"""
context = get_current_context()
project_name = context["project_name"]
asset_name = context["asset_name"]
task_name = context["task_name"]
host_name = get_current_host_name()
anatomy = Anatomy(project_name)
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, asset_name)
# get context specific vars
asset_data = asset_doc["data"]
# compute `frameStartHandle` and `frameEndHandle`
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
handle_start = asset_data.get("handleStart")
handle_end = asset_data.get("handleEnd")
if frame_start is not None and handle_start is not None:
asset_data["frameStartHandle"] = frame_start - handle_start
if frame_end is not None and handle_end is not None:
asset_data["frameEndHandle"] = frame_end + handle_end
template_data = get_template_data(
project_doc, asset_doc, task_name, host_name
)
template_data["root"] = anatomy.roots
template_data["assetData"] = asset_data
return template_data
def get_context_var_changes():
"""get context var changes."""
@ -823,7 +855,7 @@ def get_context_var_changes():
return houdini_vars_to_update
# Get Template data
template_data = get_current_context_template_data()
template_data = get_current_context_template_data_with_asset_data()
# Set Houdini Vars
for item in houdini_vars:

View file

@ -3,7 +3,6 @@
import os
import sys
import logging
import contextlib
import hou # noqa
@ -66,10 +65,6 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
register_event_callback("open", on_open)
register_event_callback("new", on_new)
pyblish.api.register_callback(
"instanceToggled", on_pyblish_instance_toggled
)
self._has_been_setup = True
# add houdini vendor packages
hou_pythonpath = os.path.join(HOUDINI_HOST_DIR, "vendor")
@ -406,54 +401,3 @@ def _set_context_settings():
lib.reset_framerange()
lib.update_houdini_vars_context()
def on_pyblish_instance_toggled(instance, new_value, old_value):
"""Toggle saver tool passthrough states on instance toggles."""
@contextlib.contextmanager
def main_take(no_update=True):
"""Enter root take during context"""
original_take = hou.takes.currentTake()
original_update_mode = hou.updateModeSetting()
root = hou.takes.rootTake()
has_changed = False
try:
if original_take != root:
has_changed = True
if no_update:
hou.setUpdateMode(hou.updateMode.Manual)
hou.takes.setCurrentTake(root)
yield
finally:
if has_changed:
if no_update:
hou.setUpdateMode(original_update_mode)
hou.takes.setCurrentTake(original_take)
if not instance.data.get("_allowToggleBypass", True):
return
nodes = instance[:]
if not nodes:
return
# Assume instance node is first node
instance_node = nodes[0]
if not hasattr(instance_node, "isBypassed"):
# Likely not a node that can actually be bypassed
log.debug("Can't bypass node: %s", instance_node.path())
return
if instance_node.isBypassed() != (not old_value):
print("%s old bypass state didn't match old instance state, "
"updating anyway.." % instance_node.path())
try:
# Go into the main take, because when in another take changing
# the bypass state of a node cannot be done due to it being locked
# by default.
with main_take(no_update=True):
instance_node.bypass(not new_value)
except hou.PermissionError as exc:
log.warning("%s - %s", instance_node.path(), exc)

View file

@ -7,10 +7,11 @@ from openpype.settings import get_project_settings
from openpype.pipeline import get_current_project_name
from openpype.lib import StringTemplate
from openpype.pipeline.context_tools import get_current_context_template_data
import hou
from .lib import get_current_context_template_data_with_asset_data
log = logging.getLogger("openpype.hosts.houdini.shelves")
@ -23,29 +24,33 @@ def generate_shelves():
# load configuration of houdini shelves
project_name = get_current_project_name()
project_settings = get_project_settings(project_name)
shelves_set_config = project_settings["houdini"]["shelves"]
shelves_configs = project_settings["houdini"]["shelves"]
if not shelves_set_config:
if not shelves_configs:
log.debug("No custom shelves found in project settings.")
return
# Get Template data
template_data = get_current_context_template_data()
template_data = get_current_context_template_data_with_asset_data()
for config in shelves_configs:
selected_option = config["options"]
shelf_set_config = config[selected_option]
for shelf_set_config in shelves_set_config:
shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
shelf_set_os_filepath = shelf_set_filepath[current_os]
if shelf_set_os_filepath:
shelf_set_os_filepath = get_path_using_template_data(
shelf_set_os_filepath, template_data
)
if not os.path.isfile(shelf_set_os_filepath):
log.error("Shelf path doesn't exist - "
"{}".format(shelf_set_os_filepath))
continue
if shelf_set_filepath:
shelf_set_os_filepath = shelf_set_filepath[current_os]
if shelf_set_os_filepath:
shelf_set_os_filepath = get_path_using_template_data(
shelf_set_os_filepath, template_data
)
if not os.path.isfile(shelf_set_os_filepath):
log.error("Shelf path doesn't exist - "
"{}".format(shelf_set_os_filepath))
continue
hou.shelves.newShelfSet(file_path=shelf_set_os_filepath)
continue
hou.shelves.loadFile(shelf_set_os_filepath)
continue
shelf_set_name = shelf_set_config.get('shelf_set_name')
if not shelf_set_name:

View file

@ -45,6 +45,11 @@ class CreateCompositeSequence(plugin.HoudiniCreator):
instance_node.setParms(parms)
# Manually set f1 & f2 to $FSTART and $FEND respectively
# to match other Houdini nodes default.
instance_node.parm("f1").setExpression("$FSTART")
instance_node.parm("f2").setExpression("$FEND")
# Lock any parameters in this list
to_lock = ["prim_to_detail_pattern"]
self.lock_parameters(instance_node, to_lock)

View file

@ -119,7 +119,8 @@ class ImageLoader(load.LoaderPlugin):
if not parent.children():
parent.destroy()
def _get_file_sequence(self, root):
def _get_file_sequence(self, file_path):
root = os.path.dirname(file_path)
files = sorted(os.listdir(root))
first_fname = files[0]

View file

@ -21,8 +21,8 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin):
label = "Arnold ROP Render Products"
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
# this plugin runs after CollectFrames
order = pyblish.api.CollectorOrder + 0.11
hosts = ["houdini"]
families = ["arnold_rop"]

View file

@ -0,0 +1,124 @@
# -*- coding: utf-8 -*-
"""Collector plugin for frames data on ROP instances."""
import hou # noqa
import pyblish.api
from openpype.lib import BoolDef
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectAssetHandles(pyblish.api.InstancePlugin,
OpenPypePyblishPluginMixin):
"""Apply asset handles.
If instance does not have:
- frameStart
- frameEnd
- handleStart
- handleEnd
But it does have:
- frameStartHandle
- frameEndHandle
Then we will retrieve the asset's handles to compute
the exclusive frame range and actual handle ranges.
"""
hosts = ["houdini"]
# This specific order value is used so that
# this plugin runs after CollectAnatomyInstanceData
order = pyblish.api.CollectorOrder + 0.499
label = "Collect Asset Handles"
use_asset_handles = True
def process(self, instance):
# Only process instances without already existing handles data
# but that do have frameStartHandle and frameEndHandle defined
# like the data collected from CollectRopFrameRange
if "frameStartHandle" not in instance.data:
return
if "frameEndHandle" not in instance.data:
return
has_existing_data = {
"handleStart",
"handleEnd",
"frameStart",
"frameEnd"
}.issubset(instance.data)
if has_existing_data:
return
attr_values = self.get_attr_values_from_data(instance.data)
if attr_values.get("use_handles", self.use_asset_handles):
asset_data = instance.data["assetEntity"]["data"]
handle_start = asset_data.get("handleStart", 0)
handle_end = asset_data.get("handleEnd", 0)
else:
handle_start = 0
handle_end = 0
frame_start = instance.data["frameStartHandle"] + handle_start
frame_end = instance.data["frameEndHandle"] - handle_end
instance.data.update({
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": frame_start,
"frameEnd": frame_end
})
# Log debug message about the collected frame range
if attr_values.get("use_handles", self.use_asset_handles):
self.log.debug(
"Full Frame range with Handles "
"[{frame_start_handle} - {frame_end_handle}]"
.format(
frame_start_handle=instance.data["frameStartHandle"],
frame_end_handle=instance.data["frameEndHandle"]
)
)
else:
self.log.debug(
"Use handles is deactivated for this instance, "
"start and end handles are set to 0."
)
# Log collected frame range to the user
message = "Frame range [{frame_start} - {frame_end}]".format(
frame_start=frame_start,
frame_end=frame_end
)
if handle_start or handle_end:
message += " with handles [{handle_start}]-[{handle_end}]".format(
handle_start=handle_start,
handle_end=handle_end
)
self.log.info(message)
if instance.data.get("byFrameStep", 1.0) != 1.0:
self.log.info(
"Frame steps {}".format(instance.data["byFrameStep"]))
# Add frame range to label if the instance has a frame range.
label = instance.data.get("label", instance.data["name"])
instance.data["label"] = (
"{label} [{frame_start_handle} - {frame_end_handle}]"
.format(
label=label,
frame_start_handle=instance.data["frameStartHandle"],
frame_end_handle=instance.data["frameEndHandle"]
)
)
@classmethod
def get_attribute_defs(cls):
return [
BoolDef("use_handles",
tooltip="Disable this if you want the publisher to"
" ignore start and end handles specified in the"
" asset data for this publish instance",
default=cls.use_asset_handles,
label="Use asset handles")
]

View file

@ -11,7 +11,9 @@ from openpype.hosts.houdini.api import lib
class CollectFrames(pyblish.api.InstancePlugin):
"""Collect all frames which would be saved from the ROP nodes"""
order = pyblish.api.CollectorOrder + 0.01
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.1
label = "Collect Frames"
families = ["vdbcache", "imagesequence", "ass",
"redshiftproxy", "review", "bgeo"]
@ -20,8 +22,8 @@ class CollectFrames(pyblish.api.InstancePlugin):
ropnode = hou.node(instance.data["instance_node"])
start_frame = instance.data.get("frameStart", None)
end_frame = instance.data.get("frameEnd", None)
start_frame = instance.data.get("frameStartHandle", None)
end_frame = instance.data.get("frameEndHandle", None)
output_parm = lib.get_output_parameter(ropnode)
if start_frame is not None:

View file

@ -122,10 +122,6 @@ class CollectInstancesUsdLayered(pyblish.api.ContextPlugin):
instance.data.update(save_data)
instance.data["usdLayer"] = layer
# Don't allow the Pyblish `instanceToggled` we have installed
# to set this node to bypass.
instance.data["_allowToggleBypass"] = False
instances.append(instance)
# Store the collected ROP node dependencies

View file

@ -25,8 +25,8 @@ class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin):
label = "Karma ROP Render Products"
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
# this plugin runs after CollectFrames
order = pyblish.api.CollectorOrder + 0.11
hosts = ["houdini"]
families = ["karma_rop"]

View file

@ -25,8 +25,8 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin):
label = "Mantra ROP Render Products"
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
# this plugin runs after CollectFrames
order = pyblish.api.CollectorOrder + 0.11
hosts = ["houdini"]
families = ["mantra_rop"]

View file

@ -25,8 +25,8 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
label = "Redshift ROP Render Products"
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
# this plugin runs after CollectFrames
order = pyblish.api.CollectorOrder + 0.11
hosts = ["houdini"]
families = ["redshift_rop"]

View file

@ -6,6 +6,8 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
"""Collect Review Data."""
label = "Collect Review Data"
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.1
hosts = ["houdini"]
families = ["review"]
@ -41,8 +43,8 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
return
if focal_length_parm.isTimeDependent():
start = instance.data["frameStart"]
end = instance.data["frameEnd"] + 1
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"] + 1
focal_length = [
focal_length_parm.evalAsFloatAtFrame(t)
for t in range(int(start), int(end))

View file

@ -2,22 +2,15 @@
"""Collector plugin for frames data on ROP instances."""
import hou # noqa
import pyblish.api
from openpype.lib import BoolDef
from openpype.hosts.houdini.api import lib
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectRopFrameRange(pyblish.api.InstancePlugin,
OpenPypePyblishPluginMixin):
class CollectRopFrameRange(pyblish.api.InstancePlugin):
"""Collect all frames which would be saved from the ROP nodes"""
hosts = ["houdini"]
# This specific order value is used so that
# this plugin runs after CollectAnatomyInstanceData
order = pyblish.api.CollectorOrder + 0.499
order = pyblish.api.CollectorOrder
label = "Collect RopNode Frame Range"
use_asset_handles = True
def process(self, instance):
@ -30,78 +23,16 @@ class CollectRopFrameRange(pyblish.api.InstancePlugin,
return
ropnode = hou.node(node_path)
attr_values = self.get_attr_values_from_data(instance.data)
if attr_values.get("use_handles", self.use_asset_handles):
asset_data = instance.data["assetEntity"]["data"]
handle_start = asset_data.get("handleStart", 0)
handle_end = asset_data.get("handleEnd", 0)
else:
handle_start = 0
handle_end = 0
frame_data = lib.get_frame_data(
ropnode, handle_start, handle_end, self.log
ropnode, self.log
)
if not frame_data:
return
# Log debug message about the collected frame range
frame_start = frame_data["frameStart"]
frame_end = frame_data["frameEnd"]
if attr_values.get("use_handles", self.use_asset_handles):
self.log.debug(
"Full Frame range with Handles "
"[{frame_start_handle} - {frame_end_handle}]"
.format(
frame_start_handle=frame_data["frameStartHandle"],
frame_end_handle=frame_data["frameEndHandle"]
)
)
else:
self.log.debug(
"Use handles is deactivated for this instance, "
"start and end handles are set to 0."
)
# Log collected frame range to the user
message = "Frame range [{frame_start} - {frame_end}]".format(
frame_start=frame_start,
frame_end=frame_end
self.log.debug(
"Collected frame_data: {}".format(frame_data)
)
if handle_start or handle_end:
message += " with handles [{handle_start}]-[{handle_end}]".format(
handle_start=handle_start,
handle_end=handle_end
)
self.log.info(message)
if frame_data.get("byFrameStep", 1.0) != 1.0:
self.log.info("Frame steps {}".format(frame_data["byFrameStep"]))
instance.data.update(frame_data)
# Add frame range to label if the instance has a frame range.
label = instance.data.get("label", instance.data["name"])
instance.data["label"] = (
"{label} [{frame_start} - {frame_end}]"
.format(
label=label,
frame_start=frame_start,
frame_end=frame_end
)
)
@classmethod
def get_attribute_defs(cls):
return [
BoolDef("use_handles",
tooltip="Disable this if you want the publisher to"
" ignore start and end handles specified in the"
" asset data for this publish instance",
default=cls.use_asset_handles,
label="Use asset handles")
]

View file

@ -25,8 +25,8 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
label = "VRay ROP Render Products"
# This specific order value is used so that
# this plugin runs after CollectRopFrameRange
order = pyblish.api.CollectorOrder + 0.4999
# this plugin runs after CollectFrames
order = pyblish.api.CollectorOrder + 0.11
hosts = ["houdini"]
families = ["vray_rop"]

View file

@ -56,7 +56,7 @@ class ExtractAss(publish.Extractor):
'ext': ext,
"files": files,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"],
}
instance.data["representations"].append(representation)

View file

@ -47,7 +47,7 @@ class ExtractBGEO(publish.Extractor):
"ext": ext.lstrip("."),
"files": output,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"]
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"]
}
instance.data["representations"].append(representation)

View file

@ -41,8 +41,8 @@ class ExtractComposite(publish.Extractor):
"ext": ext,
"files": output,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"],
}
from pprint import pformat

View file

@ -40,9 +40,9 @@ class ExtractFBX(publish.Extractor):
}
# A single frame may also be rendered without start/end frame.
if "frameStart" in instance.data and "frameEnd" in instance.data:
representation["frameStart"] = instance.data["frameStart"]
representation["frameEnd"] = instance.data["frameEnd"]
if "frameStartHandle" in instance.data and "frameEndHandle" in instance.data: # noqa
representation["frameStart"] = instance.data["frameStartHandle"]
representation["frameEnd"] = instance.data["frameEndHandle"]
# set value type for 'representations' key to list
if "representations" not in instance.data:

View file

@ -39,8 +39,8 @@ class ExtractOpenGL(publish.Extractor):
"ext": instance.data["imageFormat"],
"files": output,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"],
"tags": tags,
"preview": True,
"camera_name": instance.data.get("review_camera")

View file

@ -44,8 +44,8 @@ class ExtractRedshiftProxy(publish.Extractor):
}
# A single frame may also be rendered without start/end frame.
if "frameStart" in instance.data and "frameEnd" in instance.data:
representation["frameStart"] = instance.data["frameStart"]
representation["frameEnd"] = instance.data["frameEnd"]
if "frameStartHandle" in instance.data and "frameEndHandle" in instance.data: # noqa
representation["frameStart"] = instance.data["frameStartHandle"]
representation["frameEnd"] = instance.data["frameEndHandle"]
instance.data["representations"].append(representation)

View file

@ -40,7 +40,7 @@ class ExtractVDBCache(publish.Extractor):
"ext": "vdb",
"files": output,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"],
}
instance.data["representations"].append(representation)

View file

@ -57,7 +57,17 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
return
rop_node = hou.node(instance.data["instance_node"])
if instance.data["frameStart"] > instance.data["frameEnd"]:
frame_start = instance.data.get("frameStart")
frame_end = instance.data.get("frameEnd")
if frame_start is None or frame_end is None:
cls.log.debug(
"Skipping frame range validation for "
"instance without frame data: {}".format(rop_node.path())
)
return
if frame_start > frame_end:
cls.log.info(
"The ROP node render range is set to "
"{0[frameStartHandle]} - {0[frameEndHandle]} "
@ -89,7 +99,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
.format(instance))
return
created_instance.publish_attributes["CollectRopFrameRange"]["use_handles"] = False # noqa
created_instance.publish_attributes["CollectAssetHandles"]["use_handles"] = False # noqa
create_context.save_changes()
cls.log.debug("use asset handles is turned off for '{}'"

View file

@ -4,11 +4,9 @@ import os
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline import get_current_asset_name
from openpype.hosts.max.api import colorspace
from openpype.hosts.max.api.lib import get_max_version, get_current_renderer
from openpype.hosts.max.api.lib_renderproducts import RenderProducts
from openpype.client import get_last_version_by_subset_name
class CollectRender(pyblish.api.InstancePlugin):
@ -27,7 +25,6 @@ class CollectRender(pyblish.api.InstancePlugin):
filepath = current_file.replace("\\", "/")
context.data['currentFile'] = current_file
asset = get_current_asset_name()
files_by_aov = RenderProducts().get_beauty(instance.name)
aovs = RenderProducts().get_aovs(instance.name)
@ -49,19 +46,6 @@ class CollectRender(pyblish.api.InstancePlugin):
instance.data["files"].append(files_by_aov)
img_format = RenderProducts().image_format()
project_name = context.data["projectName"]
asset_doc = context.data["assetEntity"]
asset_id = asset_doc["_id"]
version_doc = get_last_version_by_subset_name(project_name,
instance.name,
asset_id)
self.log.debug("version_doc: {0}".format(version_doc))
version_int = 1
if version_doc:
version_int += int(version_doc["name"])
self.log.debug(f"Setting {version_int} to context.")
context.data["version"] = version_int
# OCIO config not support in
# most of the 3dsmax renderers
# so this is currently hard coded
@ -87,7 +71,7 @@ class CollectRender(pyblish.api.InstancePlugin):
renderer = str(renderer_class).split(":")[0]
# also need to get the render dir for conversion
data = {
"asset": asset,
"asset": instance.data["asset"],
"subset": str(instance.name),
"publish": True,
"maxversion": str(get_max_version()),
@ -99,7 +83,6 @@ class CollectRender(pyblish.api.InstancePlugin):
"plugin": "3dsmax",
"frameStart": instance.data["frameStartHandle"],
"frameEnd": instance.data["frameEndHandle"],
"version": version_int,
"farm": True
}
instance.data.update(data)

View file

@ -0,0 +1,131 @@
# -*- coding: utf-8 -*-
"""Validator for Attributes."""
from pyblish.api import ContextPlugin, ValidatorOrder
from pymxs import runtime as rt
from openpype.pipeline.publish import (
OptionalPyblishPluginMixin,
PublishValidationError,
RepairContextAction
)
def has_property(object_name, property_name):
"""Return whether an object has a property with given name"""
return rt.Execute(f'isProperty {object_name} "{property_name}"')
def is_matching_value(object_name, property_name, value):
"""Return whether an existing property matches value `value"""
property_value = rt.Execute(f"{object_name}.{property_name}")
# Wrap property value if value is a string valued attributes
# starting with a `#`
if (
isinstance(value, str) and
value.startswith("#") and
not value.endswith(")")
):
# prefix value with `#`
# not applicable for #() array value type
# and only applicable for enum i.e. #bob, #sally
property_value = f"#{property_value}"
return property_value == value
class ValidateAttributes(OptionalPyblishPluginMixin,
ContextPlugin):
"""Validates attributes in the project setting are consistent
with the nodes from MaxWrapper Class in 3ds max.
E.g. "renderers.current.separateAovFiles",
"renderers.production.PrimaryGIEngine"
Admin(s) need to put the dict below and enable this validator for a check:
{
"renderers.current":{
"separateAovFiles" : True
},
"renderers.production":{
"PrimaryGIEngine": "#RS_GIENGINE_BRUTE_FORCE"
}
....
}
"""
order = ValidatorOrder
hosts = ["max"]
label = "Attributes"
actions = [RepairContextAction]
optional = True
@classmethod
def get_invalid(cls, context):
attributes = (
context.data["project_settings"]["max"]["publish"]
["ValidateAttributes"]["attributes"]
)
if not attributes:
return
invalid = []
for object_name, required_properties in attributes.items():
if not rt.Execute(f"isValidValue {object_name}"):
# Skip checking if the node does not
# exist in MaxWrapper Class
cls.log.debug(f"Unable to find '{object_name}'."
" Skipping validation of attributes.")
continue
for property_name, value in required_properties.items():
if not has_property(object_name, property_name):
cls.log.error(
"Non-existing property: "
f"{object_name}.{property_name}")
invalid.append((object_name, property_name))
if not is_matching_value(object_name, property_name, value):
cls.log.error(
f"Invalid value for: {object_name}.{property_name}"
f" should be: {value}")
invalid.append((object_name, property_name))
return invalid
def process(self, context):
if not self.is_active(context.data):
self.log.debug("Skipping Validate Attributes...")
return
invalid_attributes = self.get_invalid(context)
if invalid_attributes:
bullet_point_invalid_statement = "\n".join(
"- {}".format(invalid) for invalid
in invalid_attributes
)
report = (
"Required Attribute(s) have invalid value(s).\n\n"
f"{bullet_point_invalid_statement}\n\n"
"You can use repair action to fix them if they are not\n"
"unknown property value(s)."
)
raise PublishValidationError(
report, title="Invalid Value(s) for Required Attribute(s)")
@classmethod
def repair(cls, context):
attributes = (
context.data["project_settings"]["max"]["publish"]
["ValidateAttributes"]["attributes"]
)
invalid_attributes = cls.get_invalid(context)
for attrs in invalid_attributes:
prop, attr = attrs
value = attributes[prop][attr]
if isinstance(value, str) and not value.startswith("#"):
attribute_fix = '{}.{}="{}"'.format(
prop, attr, value
)
else:
attribute_fix = "{}.{}={}".format(
prop, attr, value
)
rt.Execute(attribute_fix)

View file

@ -244,8 +244,14 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
return self.get_load_plugin_options(options)
def post_placeholder_process(self, placeholder, failed):
"""Hide placeholder, add them to placeholder set
"""Cleanup placeholder after load of its corresponding representations.
Args:
placeholder (PlaceholderItem): Item which was just used to load
representation.
failed (bool): Loading of representation failed.
"""
# Hide placeholder and add them to placeholder set
node = placeholder.scene_identifier
cmds.sets(node, addElement=PLACEHOLDER_SET)

View file

@ -40,7 +40,6 @@ from openpype.settings import (
from openpype.modules import ModulesManager
from openpype.pipeline.template_data import get_template_data_with_names
from openpype.pipeline import (
get_current_project_name,
discover_legacy_creator_plugins,
Anatomy,
get_current_host_name,
@ -1099,26 +1098,6 @@ def check_subsetname_exists(nodes, subset_name):
False)
def get_render_path(node):
''' Generate Render path from presets regarding avalon knob data
'''
avalon_knob_data = read_avalon_data(node)
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
)
data = {
"avalon": avalon_knob_data,
"nuke_imageio_writes": nuke_imageio_writes
}
anatomy_filled = format_anatomy(data)
return anatomy_filled["render"]["path"].replace("\\", "/")
def format_anatomy(data):
''' Helping function for formatting of anatomy paths

View file

@ -478,8 +478,6 @@ def parse_container(node):
"""
data = read_avalon_data(node)
# (TODO) Remove key validation when `ls` has re-implemented.
#
# If not all required data return the empty container
required = ["schema", "id", "name",
"namespace", "loader", "representation"]
@ -487,7 +485,10 @@ def parse_container(node):
return
# Store the node's name
data["objectName"] = node["name"].value()
data.update({
"objectName": node.fullName(),
"node": node,
})
return data

View file

@ -537,6 +537,7 @@ class NukeLoader(LoaderPlugin):
node.addKnob(knob)
def clear_members(self, parent_node):
parent_class = parent_node.Class()
members = self.get_members(parent_node)
dependent_nodes = None
@ -549,6 +550,8 @@ class NukeLoader(LoaderPlugin):
break
for member in members:
if member.Class() == parent_class:
continue
self.log.info("removing node: `{}".format(member.name()))
nuke.delete(member)

View file

@ -163,8 +163,10 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
)
return loaded_representation_ids
def _before_repre_load(self, placeholder, representation):
def _before_placeholder_load(self, placeholder):
placeholder.data["nodes_init"] = nuke.allNodes()
def _before_repre_load(self, placeholder, representation):
placeholder.data["last_repre_id"] = str(representation["_id"])
def collect_placeholders(self):
@ -197,6 +199,13 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
return self.get_load_plugin_options(options)
def post_placeholder_process(self, placeholder, failed):
"""Cleanup placeholder after load of its corresponding representations.
Args:
placeholder (PlaceholderItem): Item which was just used to load
representation.
failed (bool): Loading of representation failed.
"""
# deselect all selected nodes
placeholder_node = nuke.toNode(placeholder.scene_identifier)
@ -603,6 +612,13 @@ class NukePlaceholderCreatePlugin(
return self.get_create_plugin_options(options)
def post_placeholder_process(self, placeholder, failed):
"""Cleanup placeholder after load of its corresponding representations.
Args:
placeholder (PlaceholderItem): Item which was just used to load
representation.
failed (bool): Loading of representation failed.
"""
# deselect all selected nodes
placeholder_node = nuke.toNode(placeholder.scene_identifier)

View file

@ -64,8 +64,7 @@ class LoadBackdropNodes(load.LoaderPlugin):
data_imprint = {
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name
"colorspaceInput": colorspace
}
for k in add_keys:
@ -194,7 +193,7 @@ class LoadBackdropNodes(load.LoaderPlugin):
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
GN = container["node"]
file = get_representation_path(representation).replace("\\", "/")
@ -207,10 +206,11 @@ class LoadBackdropNodes(load.LoaderPlugin):
add_keys = ["source", "author", "fps"]
data_imprint = {"representation": str(representation["_id"]),
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"representation": str(representation["_id"]),
"version": vname,
"colorspaceInput": colorspace,
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -252,6 +252,6 @@ class LoadBackdropNodes(load.LoaderPlugin):
self.update(container, representation)
def remove(self, container):
node = nuke.toNode(container['objectName'])
node = container["node"]
with viewer_update_and_undo_stop():
nuke.delete(node)

View file

@ -48,10 +48,11 @@ class AlembicCameraLoader(load.LoaderPlugin):
# add additional metadata from the version to imprint to Avalon knob
add_keys = ["source", "author", "fps"]
data_imprint = {"frameStart": first,
"frameEnd": last,
"version": vname,
"objectName": object_name}
data_imprint = {
"frameStart": first,
"frameEnd": last,
"version": vname,
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -111,7 +112,7 @@ class AlembicCameraLoader(load.LoaderPlugin):
project_name = get_current_project_name()
version_doc = get_version_by_id(project_name, representation["parent"])
object_name = container['objectName']
object_name = container["node"]
# get main variables
version_data = version_doc.get("data", {})
@ -124,11 +125,12 @@ class AlembicCameraLoader(load.LoaderPlugin):
# add additional metadata from the version to imprint to Avalon knob
add_keys = ["source", "author", "fps"]
data_imprint = {"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"objectName": object_name}
data_imprint = {
"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -194,6 +196,6 @@ class AlembicCameraLoader(load.LoaderPlugin):
self.update(container, representation)
def remove(self, container):
node = nuke.toNode(container['objectName'])
node = container["node"]
with viewer_update_and_undo_stop():
nuke.delete(node)

View file

@ -189,8 +189,6 @@ class LoadClip(plugin.NukeLoader):
value_ = value_.replace("\\", "/")
data_imprint[key] = value_
data_imprint["objectName"] = read_name
if add_retime and version_data.get("retime", None):
data_imprint["addRetime"] = True
@ -254,7 +252,7 @@ class LoadClip(plugin.NukeLoader):
is_sequence = len(representation["files"]) > 1
read_node = nuke.toNode(container['objectName'])
read_node = container["node"]
if is_sequence:
representation = self._representation_with_hash_in_frame(
@ -299,9 +297,6 @@ class LoadClip(plugin.NukeLoader):
"Representation id `{}` is failing to load".format(repre_id))
return
read_name = self._get_node_name(representation)
read_node["name"].setValue(read_name)
read_node["file"].setValue(filepath)
# to avoid multiple undo steps for rest of process
@ -356,7 +351,7 @@ class LoadClip(plugin.NukeLoader):
self.set_as_member(read_node)
def remove(self, container):
read_node = nuke.toNode(container['objectName'])
read_node = container["node"]
assert read_node.Class() == "Read", "Must be Read"
with viewer_update_and_undo_stop():

View file

@ -62,11 +62,12 @@ class LoadEffects(load.LoaderPlugin):
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
data_imprint = {"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -159,7 +160,7 @@ class LoadEffects(load.LoaderPlugin):
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
GN = container["node"]
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
@ -175,12 +176,13 @@ class LoadEffects(load.LoaderPlugin):
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
data_imprint = {"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -212,7 +214,7 @@ class LoadEffects(load.LoaderPlugin):
pre_node = nuke.createNode("Input")
pre_node["name"].setValue("rgb")
for ef_name, ef_val in nodes_order.items():
for _, ef_val in nodes_order.items():
node = nuke.createNode(ef_val["class"])
for k, v in ef_val["node"].items():
if k in self.ignore_attr:
@ -346,6 +348,6 @@ class LoadEffects(load.LoaderPlugin):
self.update(container, representation)
def remove(self, container):
node = nuke.toNode(container['objectName'])
node = container["node"]
with viewer_update_and_undo_stop():
nuke.delete(node)

View file

@ -63,11 +63,12 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
data_imprint = {"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -98,7 +99,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
pre_node = nuke.createNode("Input")
pre_node["name"].setValue("rgb")
for ef_name, ef_val in nodes_order.items():
for _, ef_val in nodes_order.items():
node = nuke.createNode(ef_val["class"])
for k, v in ef_val["node"].items():
if k in self.ignore_attr:
@ -164,28 +165,26 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
GN = container["node"]
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
workfile_first_frame = int(nuke.root()["first_frame"].getValue())
namespace = container['namespace']
colorspace = version_data.get("colorspace", None)
object_name = "{}_{}".format(name, namespace)
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
data_imprint = {"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -217,7 +216,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
pre_node = nuke.createNode("Input")
pre_node["name"].setValue("rgb")
for ef_name, ef_val in nodes_order.items():
for _, ef_val in nodes_order.items():
node = nuke.createNode(ef_val["class"])
for k, v in ef_val["node"].items():
if k in self.ignore_attr:
@ -251,11 +250,6 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
output = nuke.createNode("Output")
output.setInput(0, pre_node)
# # try to place it under Viewer1
# if not self.connect_active_viewer(GN):
# nuke.delete(GN)
# return
# get all versions in list
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
@ -365,6 +359,6 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
self.update(container, representation)
def remove(self, container):
node = nuke.toNode(container['objectName'])
node = container["node"]
with viewer_update_and_undo_stop():
nuke.delete(node)

View file

@ -64,11 +64,12 @@ class LoadGizmo(load.LoaderPlugin):
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
data_imprint = {"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -111,7 +112,7 @@ class LoadGizmo(load.LoaderPlugin):
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
group_node = nuke.toNode(container['objectName'])
group_node = container["node"]
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
@ -126,12 +127,13 @@ class LoadGizmo(load.LoaderPlugin):
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
data_imprint = {"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -175,6 +177,6 @@ class LoadGizmo(load.LoaderPlugin):
self.update(container, representation)
def remove(self, container):
node = nuke.toNode(container['objectName'])
node = container["node"]
with viewer_update_and_undo_stop():
nuke.delete(node)

View file

@ -66,11 +66,12 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
data_imprint = {"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -118,7 +119,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
group_node = nuke.toNode(container['objectName'])
group_node = container["node"]
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
@ -133,12 +134,13 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
data_imprint = {"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -256,6 +258,6 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
self.update(container, representation)
def remove(self, container):
node = nuke.toNode(container['objectName'])
node = container["node"]
with viewer_update_and_undo_stop():
nuke.delete(node)

View file

@ -146,8 +146,6 @@ class LoadImage(load.LoaderPlugin):
data_imprint.update(
{k: context["version"]['data'].get(k, str(None))})
data_imprint.update({"objectName": read_name})
r["tile_color"].setValue(int("0x4ecd25ff", 16))
return containerise(r,
@ -168,7 +166,7 @@ class LoadImage(load.LoaderPlugin):
inputs:
"""
node = nuke.toNode(container["objectName"])
node = container["node"]
frame_number = node["first"].value()
assert node.Class() == "Read", "Must be Read"
@ -204,8 +202,6 @@ class LoadImage(load.LoaderPlugin):
last = first = int(frame_number)
# Set the global in to the start frame of the sequence
read_name = self._get_node_name(representation)
node["name"].setValue(read_name)
node["file"].setValue(file)
node["origfirst"].setValue(first)
node["first"].setValue(first)
@ -239,7 +235,7 @@ class LoadImage(load.LoaderPlugin):
self.log.info("updated to version: {}".format(version_doc.get("name")))
def remove(self, container):
node = nuke.toNode(container['objectName'])
node = container["node"]
assert node.Class() == "Read", "Must be Read"
with viewer_update_and_undo_stop():

View file

@ -46,10 +46,11 @@ class AlembicModelLoader(load.LoaderPlugin):
# add additional metadata from the version to imprint to Avalon knob
add_keys = ["source", "author", "fps"]
data_imprint = {"frameStart": first,
"frameEnd": last,
"version": vname,
"objectName": object_name}
data_imprint = {
"frameStart": first,
"frameEnd": last,
"version": vname
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -114,9 +115,9 @@ class AlembicModelLoader(load.LoaderPlugin):
# Get version from io
project_name = get_current_project_name()
version_doc = get_version_by_id(project_name, representation["parent"])
object_name = container['objectName']
# get corresponding node
model_node = nuke.toNode(object_name)
model_node = container["node"]
# get main variables
version_data = version_doc.get("data", {})
@ -129,11 +130,12 @@ class AlembicModelLoader(load.LoaderPlugin):
# add additional metadata from the version to imprint to Avalon knob
add_keys = ["source", "author", "fps"]
data_imprint = {"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"objectName": object_name}
data_imprint = {
"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -142,7 +144,6 @@ class AlembicModelLoader(load.LoaderPlugin):
file = get_representation_path(representation).replace("\\", "/")
with maintained_selection():
model_node = nuke.toNode(object_name)
model_node['selected'].setValue(True)
# collect input output dependencies
@ -163,8 +164,10 @@ class AlembicModelLoader(load.LoaderPlugin):
ypos = model_node.ypos()
nuke.nodeCopy("%clipboard%")
nuke.delete(model_node)
# paste the node back and set the position
nuke.nodePaste("%clipboard%")
model_node = nuke.toNode(object_name)
model_node = nuke.selectedNode()
model_node.setXYpos(xpos, ypos)
# link to original input nodes

View file

@ -55,7 +55,7 @@ class LoadOcioLookNodes(load.LoaderPlugin):
"""
namespace = namespace or context['asset']['name']
suffix = secrets.token_hex(nbytes=4)
object_name = "{}_{}_{}".format(
node_name = "{}_{}_{}".format(
name, namespace, suffix)
# getting file path
@ -64,7 +64,9 @@ class LoadOcioLookNodes(load.LoaderPlugin):
json_f = self._load_json_data(filepath)
group_node = self._create_group_node(
object_name, filepath, json_f["data"])
filepath, json_f["data"])
# renaming group node
group_node["name"].setValue(node_name)
self._node_version_color(context["version"], group_node)
@ -76,17 +78,14 @@ class LoadOcioLookNodes(load.LoaderPlugin):
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__,
data={
"objectName": object_name,
}
loader=self.__class__.__name__
)
def _create_group_node(
self,
object_name,
filepath,
data
data,
group_node=None
):
"""Creates group node with all the nodes inside.
@ -94,9 +93,9 @@ class LoadOcioLookNodes(load.LoaderPlugin):
in between - in case those are needed.
Arguments:
object_name (str): name of the group node
filepath (str): path to json file
data (dict): data from json file
group_node (Optional[nuke.Node]): group node or None
Returns:
nuke.Node: group node with all the nodes inside
@ -117,7 +116,6 @@ class LoadOcioLookNodes(load.LoaderPlugin):
input_node = None
output_node = None
group_node = nuke.toNode(object_name)
if group_node:
# remove all nodes between Input and Output nodes
for node in group_node.nodes():
@ -130,7 +128,6 @@ class LoadOcioLookNodes(load.LoaderPlugin):
else:
group_node = nuke.createNode(
"Group",
"name {}_1".format(object_name),
inpanel=False
)
@ -227,16 +224,16 @@ class LoadOcioLookNodes(load.LoaderPlugin):
project_name = get_current_project_name()
version_doc = get_version_by_id(project_name, representation["parent"])
object_name = container['objectName']
group_node = container["node"]
filepath = get_representation_path(representation)
json_f = self._load_json_data(filepath)
group_node = self._create_group_node(
object_name,
filepath,
json_f["data"]
json_f["data"],
group_node
)
self._node_version_color(version_doc, group_node)

View file

@ -46,8 +46,6 @@ class LinkAsGroup(load.LoaderPlugin):
file = self.filepath_from_context(context).replace("\\", "/")
self.log.info("file: {}\n".format(file))
precomp_name = context["representation"]["context"]["subset"]
self.log.info("versionData: {}\n".format(context["version"]["data"]))
# add additional metadata from the version to imprint to Avalon knob
@ -62,7 +60,6 @@ class LinkAsGroup(load.LoaderPlugin):
}
for k in add_keys:
data_imprint.update({k: context["version"]['data'][k]})
data_imprint.update({"objectName": precomp_name})
# group context is set to precomp, so back up one level.
nuke.endGroup()
@ -118,7 +115,7 @@ class LinkAsGroup(load.LoaderPlugin):
inputs:
"""
node = nuke.toNode(container['objectName'])
node = container["node"]
root = get_representation_path(representation).replace("\\", "/")
@ -159,6 +156,6 @@ class LinkAsGroup(load.LoaderPlugin):
self.log.info("updated to version: {}".format(version_doc.get("name")))
def remove(self, container):
node = nuke.toNode(container['objectName'])
node = container["node"]
with viewer_update_and_undo_stop():
nuke.delete(node)

View file

@ -1,4 +1,4 @@
Updated as of 9 May 2022
Updated as of 26 May 2023
----------------------------
In this package, you will find a brief introduction to the Scripting API for DaVinci Resolve Studio. Apart from this README.txt file, this package contains folders containing the basic import
modules for scripting access (DaVinciResolve.py) and some representative examples.
@ -19,7 +19,7 @@ DaVinci Resolve scripting requires one of the following to be installed (for all
Lua 5.1
Python 2.7 64-bit
Python 3.6 64-bit
Python >= 3.6 64-bit
Using a script
@ -171,6 +171,10 @@ Project
GetRenderResolutions(format, codec) --> [{Resolution}] # Returns list of resolutions applicable for the given render format (string) and render codec (string). Returns full list of resolutions if no argument is provided. Each element in the list is a dictionary with 2 keys "Width" and "Height".
RefreshLUTList() --> Bool # Refreshes LUT List
GetUniqueId() --> string # Returns a unique ID for the project item
InsertAudioToCurrentTrackAtPlayhead(mediaPath, --> Bool # Inserts the media specified by mediaPath (string) with startOffsetInSamples (int) and durationInSamples (int) at the playhead on a selected track on the Fairlight page. Returns True if successful, otherwise False.
startOffsetInSamples, durationInSamples)
LoadBurnInPreset(presetName) --> Bool # Loads user defined data burn in preset for project when supplied presetName (string). Returns true if successful.
ExportCurrentFrameAsStill(filePath) --> Bool # Exports current frame as still to supplied filePath. filePath must end in valid export file format. Returns True if successful, False otherwise.
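A minimal sketch of driving the two new Project calls from a scripting session; it assumes DaVinci Resolve is running with its scripting module on the Python path, and the preset name and output path are placeholders.

```python
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()

if project.LoadBurnInPreset("shot_burnin"):              # hypothetical preset name
    print("burn-in preset loaded")
if project.ExportCurrentFrameAsStill("/tmp/frame_still.png"):
    print("current frame exported")
```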
MediaStorage
GetMountedVolumeList() --> [paths...] # Returns list of folder paths corresponding to mounted volumes displayed in Resolve's Media Storage.
@ -179,6 +183,7 @@ MediaStorage
RevealInStorage(path) --> Bool # Expands and displays given file/folder path in Resolve's Media Storage.
AddItemListToMediaPool(item1, item2, ...) --> [clips...] # Adds specified file/folder paths from Media Storage into current Media Pool folder. Input is one or more file/folder paths. Returns a list of the MediaPoolItems created.
AddItemListToMediaPool([items...]) --> [clips...] # Adds specified file/folder paths from Media Storage into current Media Pool folder. Input is an array of file/folder paths. Returns a list of the MediaPoolItems created.
AddItemListToMediaPool([{itemInfo}, ...]) --> [clips...] # Adds list of itemInfos specified as dict of "media", "startFrame" (int), "endFrame" (int) from Media Storage into current Media Pool folder. Returns a list of the MediaPoolItems created.
AddClipMattesToMediaPool(MediaPoolItem, [paths], stereoEye) --> Bool # Adds specified media files as mattes for the specified MediaPoolItem. StereoEye is an optional argument for specifying which eye to add the matte to for stereo clips ("left" or "right"). Returns True if successful.
AddTimelineMattesToMediaPool([paths]) --> [MediaPoolItems] # Adds specified media files as timeline mattes in current media pool folder. Returns a list of created MediaPoolItems.
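The new itemInfo form of `AddItemListToMediaPool` can be exercised like this; a sketch only, with the media path and frame range made up, and `resolve` obtained as in the example above.

```python
media_storage = resolve.GetMediaStorage()
# dict form added in this revision: "media", "startFrame" (int), "endFrame" (int)
clips = media_storage.AddItemListToMediaPool([
    {"media": "/footage/sh010_plate.mov", "startFrame": 0, "endFrame": 48},
])
```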
@ -189,20 +194,22 @@ MediaPool
CreateEmptyTimeline(name) --> Timeline # Adds new timeline with given name.
AppendToTimeline(clip1, clip2, ...) --> [TimelineItem] # Appends specified MediaPoolItem objects in the current timeline. Returns the list of appended timelineItems.
AppendToTimeline([clips]) --> [TimelineItem] # Appends specified MediaPoolItem objects in the current timeline. Returns the list of appended timelineItems.
AppendToTimeline([{clipInfo}, ...]) --> [TimelineItem] # Appends list of clipInfos specified as dict of "mediaPoolItem", "startFrame" (int), "endFrame" (int), (optional) "mediaType" (int; 1 - Video only, 2 - Audio only). Returns the list of appended timelineItems.
AppendToTimeline([{clipInfo}, ...]) --> [TimelineItem] # Appends list of clipInfos specified as dict of "mediaPoolItem", "startFrame" (int), "endFrame" (int), (optional) "mediaType" (int; 1 - Video only, 2 - Audio only), "trackIndex" (int) and "recordFrame" (int). Returns the list of appended timelineItems.
CreateTimelineFromClips(name, clip1, clip2,...) --> Timeline # Creates new timeline with specified name, and appends the specified MediaPoolItem objects.
CreateTimelineFromClips(name, [clips]) --> Timeline # Creates new timeline with specified name, and appends the specified MediaPoolItem objects.
CreateTimelineFromClips(name, [{clipInfo}]) --> Timeline # Creates new timeline with specified name, appending the list of clipInfos specified as a dict of "mediaPoolItem", "startFrame" (int), "endFrame" (int).
ImportTimelineFromFile(filePath, {importOptions}) --> Timeline # Creates timeline based on parameters within given file and optional importOptions dict, with support for the keys:
# "timelineName": string, specifies the name of the timeline to be created
# "importSourceClips": Bool, specifies whether source clips should be imported, True by default
CreateTimelineFromClips(name, [{clipInfo}]) --> Timeline # Creates new timeline with specified name, appending the list of clipInfos specified as a dict of "mediaPoolItem", "startFrame" (int), "endFrame" (int), "recordFrame" (int).
ImportTimelineFromFile(filePath, {importOptions}) --> Timeline # Creates timeline based on parameters within given file (AAF/EDL/XML/FCPXML/DRT/ADL) and optional importOptions dict, with support for the keys:
# "timelineName": string, specifies the name of the timeline to be created. Not valid for DRT import
# "importSourceClips": Bool, specifies whether source clips should be imported, True by default. Not valid for DRT import
# "sourceClipsPath": string, specifies a filesystem path to search for source clips if the media is inaccessible in their original path and if "importSourceClips" is True
# "sourceClipsFolders": List of Media Pool folder objects to search for source clips if the media is not present in current folder and if "importSourceClips" is False
# "sourceClipsFolders": List of Media Pool folder objects to search for source clips if the media is not present in current folder and if "importSourceClips" is False. Not valid for DRT import
# "interlaceProcessing": Bool, specifies whether to enable interlace processing on the imported timeline being created. valid only for AAF import
DeleteTimelines([timeline]) --> Bool # Deletes specified timelines in the media pool.
GetCurrentFolder() --> Folder # Returns currently selected Folder.
SetCurrentFolder(Folder) --> Bool # Sets current folder by given Folder.
DeleteClips([clips]) --> Bool # Deletes specified clips or timeline mattes in the media pool
ImportFolderFromFile(filePath, sourceClipsPath="") --> Bool # Returns true if import from given DRB filePath is successful, false otherwise
# sourceClipsPath is a string that specifies a filesystem path to search for source clips if the media is inaccessible in their original path, empty by default
DeleteFolders([subfolders]) --> Bool # Deletes specified subfolders in the media pool
MoveClips([clips], targetFolder) --> Bool # Moves specified clips to target folder.
MoveFolders([folders], targetFolder) --> Bool # Moves specified folders to target folder.
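A hedged sketch of the extended MediaPool calls documented above: the new `trackIndex`/`recordFrame` clipInfo keys and an `ImportTimelineFromFile` options dict. The EDL path, frame numbers and `clips[0]` are illustrative.

```python
media_pool = project.GetMediaPool()

# clipInfo now also accepts "trackIndex" (int) and "recordFrame" (int)
appended = media_pool.AppendToTimeline([{
    "mediaPoolItem": clips[0],
    "startFrame": 0,
    "endFrame": 48,
    "trackIndex": 1,
    "recordFrame": 86400,    # 1 hour at 24 fps
}])

# import options keys as documented above; placeholder file path
timeline = media_pool.ImportTimelineFromFile(
    "/edits/cut_v002.edl",
    {"timelineName": "cut_v002", "importSourceClips": True},
)
```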
@ -225,6 +232,7 @@ Folder
GetSubFolderList() --> [folders...] # Returns a list of subfolders in the folder.
GetIsFolderStale() --> bool # Returns true if folder is stale in collaboration mode, false otherwise
GetUniqueId() --> string # Returns a unique ID for the media pool folder
Export(filePath) --> bool # Returns true if export of DRB folder to filePath is successful, false otherwise
MediaPoolItem
GetName() --> string # Returns the clip name.
@ -257,6 +265,8 @@ MediaPoolItem
UnlinkProxyMedia() --> Bool # Unlinks any proxy media associated with clip.
ReplaceClip(filePath) --> Bool # Replaces the underlying asset and metadata of MediaPoolItem with the specified absolute clip path.
GetUniqueId() --> string # Returns a unique ID for the media pool item
TranscribeAudio() --> Bool # Transcribes audio of the MediaPoolItem. Returns True if successful; False otherwise
ClearTranscription() --> Bool # Clears audio transcription of the MediaPoolItem. Returns True if successful; False otherwise.
Timeline
GetName() --> string # Returns the timeline name.
@ -266,6 +276,23 @@ Timeline
SetStartTimecode(timecode) --> Bool # Set the start timecode of the timeline to the string 'timecode'. Returns true when the change is successful, false otherwise.
GetStartTimecode() --> string # Returns the start timecode for the timeline.
GetTrackCount(trackType) --> int # Returns the number of tracks for the given track type ("audio", "video" or "subtitle").
AddTrack(trackType, optionalSubTrackType) --> Bool # Adds track of trackType ("video", "subtitle", "audio"). Second argument optionalSubTrackType is required for "audio"
# optionalSubTrackType can be one of {"mono", "stereo", "5.1", "5.1film", "7.1", "7.1film", "adaptive1", ... , "adaptive24"}
DeleteTrack(trackType, trackIndex) --> Bool # Deletes track of trackType ("video", "subtitle", "audio") and given trackIndex. 1 <= trackIndex <= GetTrackCount(trackType).
SetTrackEnable(trackType, trackIndex, Bool) --> Bool # Enables/Disables track with given trackType and trackIndex
# trackType is one of {"audio", "video", "subtitle"}
# 1 <= trackIndex <= GetTrackCount(trackType).
GetIsTrackEnabled(trackType, trackIndex) --> Bool # Returns True if track with given trackType and trackIndex is enabled and False otherwise.
# trackType is one of {"audio", "video", "subtitle"}
# 1 <= trackIndex <= GetTrackCount(trackType).
SetTrackLock(trackType, trackIndex, Bool) --> Bool # Locks/Unlocks track with given trackType and trackIndex
# trackType is one of {"audio", "video", "subtitle"}
# 1 <= trackIndex <= GetTrackCount(trackType).
GetIsTrackLocked(trackType, trackIndex) --> Bool # Returns True if track with given trackType and trackIndex is locked and False otherwise.
# trackType is one of {"audio", "video", "subtitle"}
# 1 <= trackIndex <= GetTrackCount(trackType).
DeleteClips([timelineItems], Bool) --> Bool # Deletes specified TimelineItems from the timeline, performing ripple delete if the second argument is True. Second argument is optional (The default for this is False)
SetClipsLinked([timelineItems], Bool) --> Bool # Links or unlinks the specified TimelineItems depending on second argument.
GetItemListInTrack(trackType, index) --> [items...] # Returns a list of timeline items on that track (based on trackType and index). 1 <= index <= GetTrackCount(trackType).
AddMarker(frameId, color, name, note, duration, --> Bool # Creates a new marker at given frameId position and with given marker information. 'customData' is optional and helps to attach user specific data to the marker.
customData)
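The new track-level calls compose naturally; a small sketch, assuming the track indices are valid for the current timeline.

```python
timeline = project.GetCurrentTimeline()
timeline.AddTrack("audio", "stereo")          # audio requires a sub-track type
timeline.SetTrackEnable("video", 1, False)    # disable video track 1
timeline.SetTrackLock("video", 1, True)       # then lock it
assert timeline.GetIsTrackLocked("video", 1)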
@ -301,7 +328,7 @@ Timeline
# "sourceClipsFolders": string, list of Media Pool folder objects to search for source clips if the media is not present in current folder
Export(fileName, exportType, exportSubtype) --> Bool # Exports timeline to 'fileName' as per input exportType & exportSubtype format.
# Refer to section "Looking up timeline exports properties" for information on the parameters.
# Refer to section "Looking up timeline export properties" for information on the parameters.
GetSetting(settingName) --> string # Returns value of timeline setting (indicated by settingName : string). Check the section below for more information.
SetSetting(settingName, settingValue) --> Bool # Sets timeline setting (indicated by settingName : string) to the value (settingValue : string). Check the section below for more information.
InsertGeneratorIntoTimeline(generatorName) --> TimelineItem # Inserts a generator (indicated by generatorName : string) into the timeline.
@ -313,6 +340,8 @@ Timeline
GrabStill() --> galleryStill # Grabs still from the current video clip. Returns a GalleryStill object.
GrabAllStills(stillFrameSource) --> [galleryStill] # Grabs stills from all the clips of the timeline at 'stillFrameSource' (1 - First frame, 2 - Middle frame). Returns the list of GalleryStill objects.
GetUniqueId() --> string # Returns a unique ID for the timeline
CreateSubtitlesFromAudio() --> Bool # Creates subtitles from audio for the timeline. Returns True on success, False otherwise.
DetectSceneCuts() --> Bool # Detects and makes scene cuts along the timeline. Returns True if successful, False otherwise.
TimelineItem
GetName() --> string # Returns the item name.
@ -362,6 +391,7 @@ TimelineItem
GetStereoLeftFloatingWindowParams() --> {keyframes...} # For the LEFT eye -> returns a dict (offset -> dict) of keyframe offsets and respective floating window params. Value at particular offset includes the left, right, top and bottom floating window values.
GetStereoRightFloatingWindowParams() --> {keyframes...} # For the RIGHT eye -> returns a dict (offset -> dict) of keyframe offsets and respective floating window params. Value at particular offset includes the left, right, top and bottom floating window values.
GetNumNodes() --> int # Returns the number of nodes in the current graph for the timeline item
ApplyArriCdlLut() --> Bool # Applies ARRI CDL and LUT. Returns True if successful, False otherwise.
SetLUT(nodeIndex, lutPath) --> Bool # Sets LUT on the node mapping the node index provided, 1 <= nodeIndex <= total number of nodes.
# The lutPath can be an absolute path, or a relative path (based off custom LUT paths or the master LUT path).
# The operation is successful for valid lut paths that Resolve has already discovered (see Project.RefreshLUTList).
@ -376,8 +406,16 @@ TimelineItem
SelectTakeByIndex(idx) --> Bool # Selects a take by index, 1 <= idx <= number of takes.
FinalizeTake() --> Bool # Finalizes take selection.
CopyGrades([tgtTimelineItems]) --> Bool # Copies the current grade to all the items in tgtTimelineItems list. Returns True on success and False if any error occurred.
SetClipEnabled(Bool) --> Bool # Sets clip enabled based on argument.
GetClipEnabled() --> Bool # Gets clip enabled status.
UpdateSidecar() --> Bool # Updates sidecar file for BRAW clips or RMD file for R3D clips.
GetUniqueId() --> string # Returns a unique ID for the timeline item
LoadBurnInPreset(presetName) --> Bool # Loads user defined data burn in preset for clip when supplied presetName (string). Returns true if successful.
GetNodeLabel(nodeIndex) --> string # Returns the label of the node at nodeIndex.
CreateMagicMask(mode) --> Bool # Returns True if magic mask was created successfully, False otherwise. mode can be "F" (forward), "B" (backward), or "BI" (bidirectional)
RegenerateMagicMask() --> Bool # Returns True if magic mask was regenerated successfully, False otherwise.
Stabilize() --> Bool # Returns True if stabilization was successful, False otherwise
SmartReframe() --> Bool # Performs Smart Reframe. Returns True if successful, False otherwise.
Gallery
GetAlbumName(galleryStillAlbum) --> string # Returns the name of the GalleryStillAlbum object 'galleryStillAlbum'.
@ -422,9 +460,11 @@ Invoke "Project:SetSetting", "Timeline:SetSetting" or "MediaPoolItem:SetClipProp
ensure the success of the operation. You can troubleshoot the validity of keys and values by setting the desired result from the UI and checking property snapshots before and after the change.
The following Project properties have specifically enumerated values:
"superScale" - the property value is an enumerated integer between 0 and 3 with these meanings: 0=Auto, 1=no scaling, and 2, 3 and 4 represent the Super Scale multipliers 2x, 3x and 4x.
"superScale" - the property value is an enumerated integer between 0 and 4 with these meanings: 0=Auto, 1=no scaling, and 2, 3 and 4 represent the Super Scale multipliers 2x, 3x and 4x.
for super scale multiplier '2x Enhanced', exactly 4 arguments must be passed as outlined below. If less than 4 arguments are passed, it will default to 2x.
Affects:
• x = Project:GetSetting('superScale') and Project:SetSetting('superScale', x)
• for '2x Enhanced' --> Project:SetSetting('superScale', 2, sharpnessValue, noiseReductionValue), where sharpnessValue is a float in the range [0.0, 1.0] and noiseReductionValue is a float in the range [0.0, 1.0]
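A sketch of both forms described above; the sharpness and noise-reduction values are illustrative floats in [0.0, 1.0].

```python
# four-argument form selects '2x Enhanced'
project.SetSetting("superScale", 2, 0.5, 0.5)
# plain multipliers use the usual two-argument form
project.SetSetting("superScale", 3)
current = project.GetSetting("superScale")
```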
"timelineFrameRate" - the property value is one of the frame rates available to the user in project settings under "Timeline frame rate" option. Drop Frame can be configured for supported frame rates
by appending the frame rate with "DF", e.g. "29.97 DF" will enable drop frame and "29.97" will disable drop frame
@ -432,9 +472,11 @@ Affects:
• x = Project:GetSetting('timelineFrameRate') and Project:SetSetting('timelineFrameRate', x)
The following Clip properties have specifically enumerated values:
"superScale" - the property value is an enumerated integer between 1 and 3 with these meanings: 1=no scaling, and 2, 3 and 4 represent the Super Scale multipliers 2x, 3x and 4x.
"Super Scale" - the property value is an enumerated integer between 1 and 4 with these meanings: 1=no scaling, and 2, 3 and 4 represent the Super Scale multipliers 2x, 3x and 4x.
for super scale multiplier '2x Enhanced', exactly 4 arguments must be passed as outlined below. If less than 4 arguments are passed, it will default to 2x.
Affects:
• x = MediaPoolItem:GetClipProperty('Super Scale') and MediaPoolItem:SetClipProperty('Super Scale', x)
• for '2x Enhanced' --> MediaPoolItem:SetClipProperty('Super Scale', 2, sharpnessValue, noiseReductionValue), where sharpnessValue is a float in the range [0.0, 1.0] and noiseReductionValue is a float in the range [0.0, 1.0]
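And the equivalent on a clip, assuming `clips[0]` from the Media Storage example earlier.

```python
clip = clips[0]
clip.SetClipProperty("Super Scale", 2, 0.4, 0.6)  # '2x Enhanced' form
value = clip.GetClipProperty("Super Scale")
```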
Looking up Render Settings
@ -478,11 +520,6 @@ exportType can be one of the following constants:
- resolve.EXPORT_DRT
- resolve.EXPORT_EDL
- resolve.EXPORT_FCP_7_XML
- resolve.EXPORT_FCPXML_1_3
- resolve.EXPORT_FCPXML_1_4
- resolve.EXPORT_FCPXML_1_5
- resolve.EXPORT_FCPXML_1_6
- resolve.EXPORT_FCPXML_1_7
- resolve.EXPORT_FCPXML_1_8
- resolve.EXPORT_FCPXML_1_9
- resolve.EXPORT_FCPXML_1_10
@ -492,6 +529,8 @@ exportType can be one of the following constants:
- resolve.EXPORT_TEXT_TAB
- resolve.EXPORT_DOLBY_VISION_VER_2_9
- resolve.EXPORT_DOLBY_VISION_VER_4_0
- resolve.EXPORT_DOLBY_VISION_VER_5_1
- resolve.EXPORT_OTIO
exportSubtype can be one of the following enums:
- resolve.EXPORT_NONE
- resolve.EXPORT_AAF_NEW
@ -504,6 +543,16 @@ When exportType is resolve.EXPORT_AAF, valid exportSubtype values are resolve.EX
When exportType is resolve.EXPORT_EDL, valid exportSubtype values are resolve.EXPORT_CDL, resolve.EXPORT_SDL, resolve.EXPORT_MISSING_CLIPS and resolve.EXPORT_NONE.
Note: Replace 'resolve.' when using the constants above, if a different Resolve class instance name is used.
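For instance, a CDL-flavoured EDL export and the newly added OTIO export might look like this; the paths are placeholders.

```python
timeline.Export("/exports/cut_v002.edl", resolve.EXPORT_EDL, resolve.EXPORT_CDL)
timeline.Export("/exports/cut_v002.otio", resolve.EXPORT_OTIO, resolve.EXPORT_NONE)
```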
Unsupported exportType types
---------------------------------
Starting with DaVinci Resolve 18.1, the following export types are not supported:
- resolve.EXPORT_FCPXML_1_3
- resolve.EXPORT_FCPXML_1_4
- resolve.EXPORT_FCPXML_1_5
- resolve.EXPORT_FCPXML_1_6
- resolve.EXPORT_FCPXML_1_7
Looking up Timeline item properties
-----------------------------------
This section covers additional notes for the function "TimelineItem:SetProperty" and "TimelineItem:GetProperty". These functions are used to get and set properties mentioned.

View file

@ -6,7 +6,10 @@ import contextlib
from opentimelineio import opentime
from openpype.lib import Logger
from openpype.pipeline.editorial import is_overlapping_otio_ranges
from openpype.pipeline.editorial import (
is_overlapping_otio_ranges,
frames_to_timecode
)
from ..otio import davinci_export as otio_export
@ -246,18 +249,22 @@ def get_media_pool_item(filepath, root: object = None) -> object:
return None
def create_timeline_item(media_pool_item: object,
timeline: object = None,
source_start: int = None,
source_end: int = None) -> object:
def create_timeline_item(
media_pool_item: object,
timeline: object = None,
timeline_in: int = None,
source_start: int = None,
source_end: int = None,
) -> object:
"""
Add media pool item to current or defined timeline.
Args:
media_pool_item (resolve.MediaPoolItem): resolve's object
timeline (resolve.Timeline)[optional]: resolve's object
source_start (int)[optional]: media source input frame (sequence frame)
source_end (int)[optional]: media source output frame (sequence frame)
timeline (Optional[resolve.Timeline]): resolve's object
timeline_in (Optional[int]): timeline input frame (sequence frame)
source_start (Optional[int]): media source input frame (sequence frame)
source_end (Optional[int]): media source output frame (sequence frame)
Returns:
object: resolve.TimelineItem
@ -269,16 +276,29 @@ def create_timeline_item(media_pool_item: object,
clip_name = _clip_property("File Name")
timeline = timeline or get_current_timeline()
# timing variables
if all([timeline_in, source_start, source_end]):
fps = timeline.GetSetting("timelineFrameRate")
duration = source_end - source_start
timecode_in = frames_to_timecode(timeline_in, fps)
timecode_out = frames_to_timecode(timeline_in + duration, fps)
else:
timecode_in = None
timecode_out = None
# if timeline was used then switch it to current timeline
with maintain_current_timeline(timeline):
# Add input mediaPoolItem to clip data
clip_data = {"mediaPoolItem": media_pool_item}
clip_data = {
"mediaPoolItem": media_pool_item,
}
# add source time range if input was given
if source_start is not None:
clip_data.update({"startFrame": source_start})
if source_end is not None:
clip_data.update({"endFrame": source_end})
if source_start:
clip_data["startFrame"] = source_start
if source_end:
clip_data["endFrame"] = source_end
if timecode_in:
clip_data["recordFrame"] = timecode_in
# add to timeline
media_pool.AppendToTimeline([clip_data])
@ -286,10 +306,15 @@ def create_timeline_item(media_pool_item: object,
output_timeline_item = get_timeline_item(
media_pool_item, timeline)
assert output_timeline_item, AssertionError(
"Track Item with name `{}` doesn't exist on the timeline: `{}`".format(
clip_name, timeline.GetName()
))
assert output_timeline_item, AssertionError((
"Clip name '{}' was't created on the timeline: '{}' \n\n"
"Please check if correct track position is activated, \n"
"or if a clip is not already at the timeline in \n"
"position: '{}' out: '{}'. \n\n"
"Clip data: {}"
).format(
clip_name, timeline.GetName(), timecode_in, timecode_out, clip_data
))
return output_timeline_item
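A hypothetical call against the updated signature; the frame numbers are illustrative and `media_pool_item`/`timeline` are assumed to exist already.

```python
timeline_item = create_timeline_item(
    media_pool_item,
    timeline=timeline,
    timeline_in=86400,     # where the clip should land on the timeline
    source_start=1001,     # media source in (sequence frame)
    source_end=1049,       # media source out (sequence frame)
)
```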
@ -490,7 +515,7 @@ def imprint(timeline_item, data=None):
Arguments:
timeline_item (hiero.core.TrackItem): hiero track item object
data (dict): Any data which needst to be imprinted
data (dict): Any data which needs to be imprinted
Examples:
data = {

View file

@ -306,11 +306,18 @@ class ClipLoader:
self.active_project = lib.get_current_project()
# try to get value from options or evaluate key value for `handles`
self.with_handles = options.get("handles") or bool(
options.get("handles") is True)
self.with_handles = options.get("handles") is True
# try to get value from options or evaluate key value for `load_to`
self.new_timeline = options.get("newTimeline") or bool(
"New timeline" in options.get("load_to", ""))
self.new_timeline = (
options.get("newTimeline") or
options.get("load_to") == "New timeline"
)
# try to get value from options or evaluate key value for `load_how`
self.sequential_load = (
options.get("sequentially") or
options.get("load_how") == "Sequentially in order"
)
assert self._populate_data(), str(
"Cannot Load selected data, look into database "
@ -391,30 +398,70 @@ class ClipLoader:
# create project bin for the media to be imported into
self.active_bin = lib.create_bin(self.data["binPath"])
handle_start = self.data["versionData"].get("handleStart") or 0
handle_end = self.data["versionData"].get("handleEnd") or 0
# create mediaItem in active project bin
# create clip media
media_pool_item = lib.create_media_pool_item(
files,
self.active_bin
)
_clip_property = media_pool_item.GetClipProperty
# get handles
handle_start = self.data["versionData"].get("handleStart")
handle_end = self.data["versionData"].get("handleEnd")
if handle_start is None:
handle_start = int(self.data["assetData"]["handleStart"])
if handle_end is None:
handle_end = int(self.data["assetData"]["handleEnd"])
# check frame duration from versionData or assetData
frame_start = self.data["versionData"].get("frameStart")
if frame_start is None:
frame_start = self.data["assetData"]["frameStart"]
# check frame duration from versionData or assetData
frame_end = self.data["versionData"].get("frameEnd")
if frame_end is None:
frame_end = self.data["assetData"]["frameEnd"]
db_frame_duration = int(frame_end) - int(frame_start) + 1
# get timeline in
timeline_start = self.active_timeline.GetStartFrame()
if self.sequential_load:
# set timeline start frame
timeline_in = int(timeline_start)
else:
# set timeline start frame + original clip in frame
timeline_in = int(
timeline_start + self.data["assetData"]["clipIn"])
source_in = int(_clip_property("Start"))
source_out = int(_clip_property("End"))
source_duration = int(_clip_property("Frames"))
if _clip_property("Type") == "Video":
# check if source duration is shorter than db frame duration
source_with_handles = True
if source_duration < db_frame_duration:
source_with_handles = False
# only exclude handles if source has no handles or
# if user wants to load without handles
if (
not self.with_handles
or not source_with_handles
):
source_in += handle_start
source_out -= handle_end
# include handles
if self.with_handles:
source_in -= handle_start
source_out += handle_end
# make track item from source in bin as item
timeline_item = lib.create_timeline_item(
media_pool_item, self.active_timeline, source_in, source_out)
media_pool_item,
self.active_timeline,
timeline_in,
source_in,
source_out,
)
print("Loading clips: `{}`".format(self.data["clip_name"]))
return timeline_item
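A worked example of the handle trimming above, with made-up numbers: a source stored with 8-frame handles is loaded without them.

```python
# source clip spans 1001-1049 and contains its handles
source_in, source_out = 1001, 1049
handle_start = handle_end = 8
with_handles = False             # user asked for no handles

if not with_handles:
    source_in += handle_start    # -> 1009
    source_out -= handle_end     # -> 1041
```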
@ -455,7 +502,7 @@ class TimelineItemLoader(LoaderPlugin):
"""
options = [
qargparse.Toggle(
qargparse.Boolean(
"handles",
label="Include handles",
default=0,
@ -470,6 +517,16 @@ class TimelineItemLoader(LoaderPlugin):
],
default=0,
help="Where do you want clips to be loaded?"
),
qargparse.Choice(
"load_how",
label="How to load clips",
items=[
"Original timing",
"Sequentially in order"
],
default="Original timing",
help="Would you like to place it at original timing?"
)
]

View file

@ -21,6 +21,7 @@ from aiohttp_json_rpc.protocol import (
)
from aiohttp_json_rpc.exceptions import RpcError
from openpype import AYON_SERVER_ENABLED
from openpype.lib import emit_event
from openpype.hosts.tvpaint.tvpaint_plugin import get_plugin_files_path
@ -834,8 +835,12 @@ class BaseCommunicator:
class QtCommunicator(BaseCommunicator):
label = os.getenv("AVALON_LABEL")
if not label:
label = "AYON" if AYON_SERVER_ENABLED else "OpenPype"
title = "{} Tools".format(label)
menu_definitions = {
"title": "OpenPype Tools",
"title": title,
"menu_items": [
{
"callback": "workfiles_tool",

View file

@ -7,7 +7,7 @@ import requests
import pyblish.api
from openpype.client import get_project, get_asset_by_name
from openpype.client import get_asset_by_name
from openpype.host import HostBase, IWorkfileHost, ILoadHost, IPublishHost
from openpype.hosts.tvpaint import TVPAINT_ROOT_DIR
from openpype.settings import get_current_project_settings

View file

@ -69,7 +69,6 @@ class CollectWorkfileData(pyblish.api.ContextPlugin):
"asset_name": context.data["asset"],
"task_name": context.data["task"]
}
context.data["previous_context"] = current_context
self.log.debug("Current context is: {}".format(current_context))
# Collect context from workfile metadata

View file

@ -13,8 +13,10 @@ from openpype.client import get_asset_by_name, get_assets
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
register_inventory_action_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
deregister_inventory_action_path,
AYON_CONTAINER_ID,
legacy_io,
)
@ -28,6 +30,7 @@ import unreal # noqa
logger = logging.getLogger("openpype.hosts.unreal")
AYON_CONTAINERS = "AyonContainers"
AYON_ASSET_DIR = "/Game/Ayon/Assets"
CONTEXT_CONTAINER = "Ayon/context.json"
UNREAL_VERSION = semver.VersionInfo(
*os.getenv("AYON_UNREAL_VERSION").split(".")
@ -127,6 +130,7 @@ def install():
pyblish.api.register_plugin_path(str(PUBLISH_PATH))
register_loader_plugin_path(str(LOAD_PATH))
register_creator_plugin_path(str(CREATE_PATH))
register_inventory_action_path(str(INVENTORY_PATH))
_register_callbacks()
_register_events()
@ -136,6 +140,7 @@ def uninstall():
pyblish.api.deregister_plugin_path(str(PUBLISH_PATH))
deregister_loader_plugin_path(str(LOAD_PATH))
deregister_creator_plugin_path(str(CREATE_PATH))
deregister_inventory_action_path(str(INVENTORY_PATH))
def _register_callbacks():
@ -649,6 +654,141 @@ def generate_sequence(h, h_dir):
return sequence, (min_frame, max_frame)
def _get_comps_and_assets(
component_class, asset_class, old_assets, new_assets, selected
):
eas = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
components = []
if selected:
sel_actors = eas.get_selected_level_actors()
for actor in sel_actors:
comps = actor.get_components_by_class(component_class)
components.extend(comps)
else:
comps = eas.get_all_level_actors_components()
components = [
c for c in comps if isinstance(c, component_class)
]
# Get all the static meshes among the old assets in a dictionary with
# the name as key
selected_old_assets = {}
for a in old_assets:
asset = unreal.EditorAssetLibrary.load_asset(a)
if isinstance(asset, asset_class):
selected_old_assets[asset.get_name()] = asset
# Get all the static meshes among the new assets in a dictionary with
# the name as key
selected_new_assets = {}
for a in new_assets:
asset = unreal.EditorAssetLibrary.load_asset(a)
if isinstance(asset, asset_class):
selected_new_assets[asset.get_name()] = asset
return components, selected_old_assets, selected_new_assets
def replace_static_mesh_actors(old_assets, new_assets, selected):
smes = unreal.get_editor_subsystem(unreal.StaticMeshEditorSubsystem)
static_mesh_comps, old_meshes, new_meshes = _get_comps_and_assets(
unreal.StaticMeshComponent,
unreal.StaticMesh,
old_assets,
new_assets,
selected
)
for old_name, old_mesh in old_meshes.items():
new_mesh = new_meshes.get(old_name)
if not new_mesh:
continue
smes.replace_mesh_components_meshes(
static_mesh_comps, old_mesh, new_mesh)
def replace_skeletal_mesh_actors(old_assets, new_assets, selected):
skeletal_mesh_comps, old_meshes, new_meshes = _get_comps_and_assets(
unreal.SkeletalMeshComponent,
unreal.SkeletalMesh,
old_assets,
new_assets,
selected
)
for old_name, old_mesh in old_meshes.items():
new_mesh = new_meshes.get(old_name)
if not new_mesh:
continue
for comp in skeletal_mesh_comps:
if comp.get_skeletal_mesh_asset() == old_mesh:
comp.set_skeletal_mesh_asset(new_mesh)
def replace_geometry_cache_actors(old_assets, new_assets, selected):
geometry_cache_comps, old_caches, new_caches = _get_comps_and_assets(
unreal.GeometryCacheComponent,
unreal.GeometryCache,
old_assets,
new_assets,
selected
)
for old_name, old_mesh in old_caches.items():
new_mesh = new_caches.get(old_name)
if not new_mesh:
continue
for comp in geometry_cache_comps:
if comp.get_editor_property("geometry_cache") == old_mesh:
comp.set_geometry_cache(new_mesh)
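Hypothetical use of the replacement helpers above: list the contents of an old and a new version folder, then swap every same-named static mesh in the level. The asset paths are made up.

```python
old_content = unreal.EditorAssetLibrary.list_assets(
    "/Game/Ayon/Assets/hero/modelMain_v001",
    recursive=True, include_folder=False)
new_content = unreal.EditorAssetLibrary.list_assets(
    "/Game/Ayon/Assets/hero/modelMain_v002",
    recursive=True, include_folder=False)
replace_static_mesh_actors(old_content, new_content, selected=False)
```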
def delete_asset_if_unused(container, asset_content):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
references = set()
for asset_path in asset_content:
asset = ar.get_asset_by_object_path(asset_path)
refs = ar.get_referencers(
asset.package_name,
unreal.AssetRegistryDependencyOptions(
include_soft_package_references=False,
include_hard_package_references=True,
include_searchable_names=False,
include_soft_management_references=False,
include_hard_management_references=False
))
if not refs:
continue
references = references.union(set(refs))
# Filter out references that are in the Temp folder
cleaned_references = {
ref for ref in references if not str(ref).startswith("/Temp/")}
# Check which of the references are Levels
for ref in cleaned_references:
loaded_asset = unreal.EditorAssetLibrary.load_asset(ref)
if isinstance(loaded_asset, unreal.World):
# If there is at least one level, we can stop, we don't want to
# delete the container
return
unreal.log("Previous version unused, deleting...")
# No levels, delete the asset
unreal.EditorAssetLibrary.delete_directory(container["namespace"])
@contextmanager
def maintained_selection():
"""Stub to be either implemented or replaced.

View file

@ -0,0 +1,66 @@
import unreal
from openpype.hosts.unreal.api.tools_ui import qt_app_context
from openpype.hosts.unreal.api.pipeline import delete_asset_if_unused
from openpype.pipeline import InventoryAction
class DeleteUnusedAssets(InventoryAction):
"""Delete all the assets that are not used in any level.
"""
label = "Delete Unused Assets"
icon = "trash"
color = "red"
order = 1
dialog = None
def _delete_unused_assets(self, containers):
allowed_families = ["model", "rig"]
for container in containers:
container_dir = container.get("namespace")
if container.get("family") not in allowed_families:
unreal.log_warning(
f"Container {container_dir} is not supported.")
continue
asset_content = unreal.EditorAssetLibrary.list_assets(
container_dir, recursive=True, include_folder=False
)
delete_asset_if_unused(container, asset_content)
def _show_confirmation_dialog(self, containers):
from qtpy import QtCore
from openpype.widgets import popup
from openpype.style import load_stylesheet
dialog = popup.Popup()
dialog.setWindowFlags(
QtCore.Qt.Window
| QtCore.Qt.WindowStaysOnTopHint
)
dialog.setFocusPolicy(QtCore.Qt.StrongFocus)
dialog.setWindowTitle("Delete all unused assets")
dialog.setMessage(
"You are about to delete all the assets in the project that \n"
"are not used in any level. Are you sure you want to continue?"
)
dialog.setButtonText("Delete")
dialog.on_clicked.connect(
lambda: self._delete_unused_assets(containers)
)
dialog.show()
dialog.raise_()
dialog.activateWindow()
dialog.setStyleSheet(load_stylesheet())
self.dialog = dialog
def process(self, containers):
with qt_app_context():
self._show_confirmation_dialog(containers)

View file

@ -0,0 +1,84 @@
import unreal
from openpype.hosts.unreal.api.pipeline import (
ls,
replace_static_mesh_actors,
replace_skeletal_mesh_actors,
replace_geometry_cache_actors,
)
from openpype.pipeline import InventoryAction
def update_assets(containers, selected):
allowed_families = ["model", "rig"]
# Get all the containers in the Unreal Project
all_containers = ls()
for container in containers:
container_dir = container.get("namespace")
if container.get("family") not in allowed_families:
unreal.log_warning(
f"Container {container_dir} is not supported.")
continue
# Get all containers with same asset_name but different objectName.
# These are the containers that need to be updated in the level.
sa_containers = [
i
for i in all_containers
if (
i.get("asset_name") == container.get("asset_name") and
i.get("objectName") != container.get("objectName")
)
]
asset_content = unreal.EditorAssetLibrary.list_assets(
container_dir, recursive=True, include_folder=False
)
# Update all actors in level
for sa_cont in sa_containers:
sa_dir = sa_cont.get("namespace")
old_content = unreal.EditorAssetLibrary.list_assets(
sa_dir, recursive=True, include_folder=False
)
if container.get("family") == "rig":
replace_skeletal_mesh_actors(
old_content, asset_content, selected)
replace_static_mesh_actors(
old_content, asset_content, selected)
elif container.get("family") == "model":
if container.get("loader") == "PointCacheAlembicLoader":
replace_geometry_cache_actors(
old_content, asset_content, selected)
else:
replace_static_mesh_actors(
old_content, asset_content, selected)
unreal.EditorLevelLibrary.save_current_level()
class UpdateAllActors(InventoryAction):
"""Update all the Actors in the current level to the version of the asset
selected in the scene manager.
"""
label = "Replace all Actors in level to this version"
icon = "arrow-up"
def process(self, containers):
update_assets(containers, False)
class UpdateSelectedActors(InventoryAction):
"""Update only the selected Actors in the current level to the version
of the asset selected in the scene manager.
"""
label = "Replace selected Actors in level to this version"
icon = "arrow-up"
def process(self, containers):
update_assets(containers, True)

View file

@ -69,7 +69,7 @@ class AnimationAlembicLoader(plugin.Loader):
"""
# Create directory for asset and ayon container
root = "/Game/Ayon/Assets"
root = unreal_pipeline.AYON_ASSET_DIR
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:

View file

@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@ -21,8 +25,11 @@ class PointCacheAlembicLoader(plugin.Loader):
icon = "cube"
color = "orange"
root = AYON_ASSET_DIR
@staticmethod
def get_task(
self, filename, asset_dir, asset_name, replace,
filename, asset_dir, asset_name, replace,
frame_start=None, frame_end=None
):
task = unreal.AssetImportTask()
@ -38,8 +45,6 @@ class PointCacheAlembicLoader(plugin.Loader):
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
# set import options here
# Unreal 4.24 ignores the settings. It works with Unreal 4.26
options.set_editor_property(
'import_type', unreal.AlembicImportType.GEOMETRY_CACHE)
@ -64,13 +69,42 @@ class PointCacheAlembicLoader(plugin.Loader):
return task
def load(self, context, name, namespace, data):
"""Load and containerise representation into Content Browser.
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name,
frame_start, frame_end
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
This is a two-step process. First, import the file to a temporary path,
then call `containerise()` on it - this moves all content to a new
directory, creates an AssetContainer there and imprints it with
metadata. This marks the path as a container.
task = self.get_task(
filepath, asset_dir, asset_name, False, frame_start, frame_end)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation,
frame_start, frame_end
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"],
"frame_start": frame_start,
"frame_end": frame_end
}
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
Args:
context (dict): application context
@ -79,30 +113,28 @@ class PointCacheAlembicLoader(plugin.Loader):
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Those would be data to be imprinted. This is not used
now, data are imprinted by `containerise()`.
data (dict): Those would be data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
name_version = f"{name}_hero"
else:
asset_name = "{}".format(name)
name_version = f"{name}_v{version.get('name'):03d}"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
"{}/{}/{}".format(root, asset, name), suffix="")
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
unreal.EditorAssetLibrary.make_directory(asset_dir)
frame_start = context.get('asset').get('data').get('frameStart')
frame_end = context.get('asset').get('data').get('frameEnd')
@ -111,30 +143,16 @@ class PointCacheAlembicLoader(plugin.Loader):
if frame_start == frame_end:
frame_end += 1
path = self.filepath_from_context(context)
task = self.get_task(
path, asset_dir, asset_name, False, frame_start, frame_end)
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = self.filepath_from_context(context)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
self.import_and_containerize(
path, asset_dir, asset_name, container_name,
frame_start, frame_end)
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(
"{}/{}".format(asset_dir, container_name), data)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"], frame_start, frame_end)
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@ -146,27 +164,43 @@ class PointCacheAlembicLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
representation["context"]
context = representation.get("context", {})
task = self.get_task(source_path, destination_path, name, False)
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
unreal.log_warning(context)
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
if not context:
raise RuntimeError("No context found in representation")
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
frame_start = int(container.get("frame_start"))
frame_end = int(container.get("frame_end"))
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
self.import_and_containerize(
path, asset_dir, asset_name, container_name,
frame_start, frame_end)
self.imprint(
asset, asset_dir, container_name, asset_name, representation,
frame_start, frame_end)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:

View file

@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@ -20,10 +24,12 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
icon = "cube"
color = "orange"
def get_task(self, filename, asset_dir, asset_name, replace):
root = AYON_ASSET_DIR
@staticmethod
def get_task(filename, asset_dir, asset_name, replace, default_conversion):
task = unreal.AssetImportTask()
options = unreal.AbcImportSettings()
sm_settings = unreal.AbcStaticMeshSettings()
conversion_settings = unreal.AbcConversionSettings(
preset=unreal.AbcConversionPreset.CUSTOM,
flip_u=False, flip_v=False,
@ -37,72 +43,38 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
# set import options here
# Unreal 4.24 ignores the settings. It works with Unreal 4.26
options.set_editor_property(
'import_type', unreal.AlembicImportType.SKELETAL)
options.static_mesh_settings = sm_settings
options.conversion_settings = conversion_settings
if not default_conversion:
conversion_settings = unreal.AbcConversionSettings(
preset=unreal.AbcConversionPreset.CUSTOM,
flip_u=False, flip_v=False,
rotation=[0.0, 0.0, 0.0],
scale=[1.0, 1.0, 1.0])
options.conversion_settings = conversion_settings
task.options = options
return task
def load(self, context, name, namespace, data):
"""Load and containerise representation into Content Browser.
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name,
default_conversion=False
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
This is a two-step process. First, the Alembic file is imported to a
temporary path, then `containerise()` is called on it - this moves all
content to a new directory, creates an AssetContainer there and
imprints it with metadata, marking the path as a container.
task = self.get_task(
filepath, asset_dir, asset_name, False, default_conversion)
Args:
context (dict): application context
name (str): subset name
namespace (str): in Unreal this is basically path to container.
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Data to be imprinted. Not used here; data are
imprinted by `containerise()`.
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
Returns:
list(str): list of container content
"""
# Create directory for asset and ayon container
root = "/Game/Ayon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
name_version = f"{name}_hero"
else:
name_version = f"{name}_v{version.get('name'):03d}"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name_version}", suffix="")
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
unreal.EditorAssetLibrary.make_directory(asset_dir)
path = self.filepath_from_context(context)
task = self.get_task(path, asset_dir, asset_name, False)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
@ -111,12 +83,57 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"]
}
unreal_pipeline.imprint(
f"{asset_dir}/{container_name}", data)
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
Args:
context (dict): application context
name (str): subset name
namespace (str): in Unreal this is basically path to container.
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and ayon container
asset = context.get('asset').get('name')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
name_version = f"{name}_hero"
else:
name_version = f"{name}_v{version.get('name'):03d}"
default_conversion = False
if options.get("default_conversion"):
default_conversion = options.get("default_conversion")
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = self.filepath_from_context(context)
self.import_and_containerize(path, asset_dir, asset_name,
container_name, default_conversion)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"])
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@ -128,26 +145,36 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
context = representation.get("context", {})
task = self.get_task(source_path, destination_path, name, True)
if not context:
raise RuntimeError("No context found in representation")
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
self.import_and_containerize(path, asset_dir, asset_name,
container_name)
self.imprint(
asset, asset_dir, container_name, asset_name, representation)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:
View file
@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@ -20,14 +24,79 @@ class SkeletalMeshFBXLoader(plugin.Loader):
icon = "cube"
color = "orange"
root = AYON_ASSET_DIR
@staticmethod
def get_task(filename, asset_dir, asset_name, replace):
task = unreal.AssetImportTask()
options = unreal.FbxImportUI()
task.set_editor_property('filename', filename)
task.set_editor_property('destination_path', asset_dir)
task.set_editor_property('destination_name', asset_name)
task.set_editor_property('replace_existing', replace)
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
options.set_editor_property(
'automated_import_should_detect_type', False)
options.set_editor_property('import_as_skeletal', True)
options.set_editor_property('import_animations', False)
options.set_editor_property('import_mesh', True)
options.set_editor_property('import_materials', False)
options.set_editor_property('import_textures', False)
options.set_editor_property('skeleton', None)
options.set_editor_property('create_physics_asset', False)
options.set_editor_property(
'mesh_type_to_import',
unreal.FBXImportType.FBXIT_SKELETAL_MESH)
options.skeletal_mesh_import_data.set_editor_property(
'import_content_type',
unreal.FBXImportContentType.FBXICT_ALL)
options.skeletal_mesh_import_data.set_editor_property(
'normal_import_method',
unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS)
task.options = options
return task
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = self.get_task(
filepath, asset_dir, asset_name, False)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"]
}
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
This is a two-step process. First, the FBX is imported to a temporary
path, then `containerise()` is called on it - this moves all content
to a new directory, creates an AssetContainer there and imprints it
with metadata, marking the path as a container.
Args:
context (dict): application context
name (str): subset name
@ -35,23 +104,15 @@ class SkeletalMeshFBXLoader(plugin.Loader):
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
options (dict): Data to be imprinted. Not used here; data are
imprinted by `containerise()`.
data (dict): Data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
if options and options.get("asset_dir"):
root = options["asset_dir"]
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
@ -61,67 +122,20 @@ class SkeletalMeshFBXLoader(plugin.Loader):
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name_version}", suffix="")
f"{self.root}/{asset}/{name_version}", suffix=""
)
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = unreal.AssetImportTask()
path = self.filepath_from_context(context)
task.set_editor_property('filename', path)
task.set_editor_property('destination_path', asset_dir)
task.set_editor_property('destination_name', asset_name)
task.set_editor_property('replace_existing', False)
task.set_editor_property('automated', True)
task.set_editor_property('save', False)
# set import options here
options = unreal.FbxImportUI()
options.set_editor_property('import_as_skeletal', True)
options.set_editor_property('import_animations', False)
options.set_editor_property('import_mesh', True)
options.set_editor_property('import_materials', False)
options.set_editor_property('import_textures', False)
options.set_editor_property('skeleton', None)
options.set_editor_property('create_physics_asset', False)
self.import_and_containerize(
path, asset_dir, asset_name, container_name)
options.set_editor_property(
'mesh_type_to_import',
unreal.FBXImportType.FBXIT_SKELETAL_MESH)
options.skeletal_mesh_import_data.set_editor_property(
'import_content_type',
unreal.FBXImportContentType.FBXICT_ALL)
# set to import normals, otherwise Unreal will compute them
# and it will take a long time, depending on the size of the mesh
options.skeletal_mesh_import_data.set_editor_property(
'normal_import_method',
unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS)
task.options = options
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(
f"{asset_dir}/{container_name}", data)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"])
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@ -133,58 +147,36 @@ class SkeletalMeshFBXLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
context = representation.get("context", {})
task = unreal.AssetImportTask()
if not context:
raise RuntimeError("No context found in representation")
task.set_editor_property('filename', source_path)
task.set_editor_property('destination_path', destination_path)
task.set_editor_property('destination_name', name)
task.set_editor_property('replace_existing', True)
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
# set import options here
options = unreal.FbxImportUI()
options.set_editor_property('import_as_skeletal', True)
options.set_editor_property('import_animations', False)
options.set_editor_property('import_mesh', True)
options.set_editor_property('import_materials', True)
options.set_editor_property('import_textures', True)
options.set_editor_property('skeleton', None)
options.set_editor_property('create_physics_asset', False)
container_name += suffix
options.set_editor_property('mesh_type_to_import',
unreal.FBXImportType.FBXIT_SKELETAL_MESH)
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
options.skeletal_mesh_import_data.set_editor_property(
'import_content_type',
unreal.FBXImportContentType.FBXICT_ALL
)
# set to import normals, otherwise Unreal will compute them
# and it will take a long time, depending on the size of the mesh
options.skeletal_mesh_import_data.set_editor_property(
'normal_import_method',
unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS
)
self.import_and_containerize(
path, asset_dir, asset_name, container_name)
task.options = options
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
self.imprint(
asset, asset_dir, container_name, asset_name, representation)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:
View file
@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@ -20,6 +24,8 @@ class StaticMeshAlembicLoader(plugin.Loader):
icon = "cube"
color = "orange"
root = AYON_ASSET_DIR
@staticmethod
def get_task(filename, asset_dir, asset_name, replace, default_conversion):
task = unreal.AssetImportTask()
@ -53,14 +59,40 @@ class StaticMeshAlembicLoader(plugin.Loader):
return task
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name,
default_conversion=False
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = self.get_task(
filepath, asset_dir, asset_name, False, default_conversion)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"]
}
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
This is a two-step process. First, the Alembic file is imported to a
temporary path, then `containerise()` is called on it - this moves all
content to a new directory, creates an AssetContainer there and
imprints it with metadata, marking the path as a container.
Args:
context (dict): application context
name (str): subset name
@ -68,15 +100,12 @@ class StaticMeshAlembicLoader(plugin.Loader):
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
data (dict): Data to be imprinted. Not used here; data are
imprinted by `containerise()`.
data (dict): Data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
@ -93,39 +122,22 @@ class StaticMeshAlembicLoader(plugin.Loader):
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name_version}", suffix="")
f"{self.root}/{asset}/{name_version}", suffix="")
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
unreal.EditorAssetLibrary.make_directory(asset_dir)
path = self.filepath_from_context(context)
task = self.get_task(
path, asset_dir, asset_name, False, default_conversion)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
self.import_and_containerize(path, asset_dir, asset_name,
container_name, default_conversion)
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"])
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:
@ -134,27 +146,36 @@ class StaticMeshAlembicLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
context = representation.get("context", {})
task = self.get_task(source_path, destination_path, name, True, False)
if not context:
raise RuntimeError("No context found in representation")
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
self.import_and_containerize(path, asset_dir, asset_name,
container_name)
self.imprint(
asset, asset_dir, container_name, asset_name, representation)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:
View file
@ -7,7 +7,11 @@ from openpype.pipeline import (
AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
AYON_ASSET_DIR,
create_container,
imprint,
)
import unreal # noqa
@ -20,6 +24,8 @@ class StaticMeshFBXLoader(plugin.Loader):
icon = "cube"
color = "orange"
root = AYON_ASSET_DIR
@staticmethod
def get_task(filename, asset_dir, asset_name, replace):
task = unreal.AssetImportTask()
@ -46,14 +52,39 @@ class StaticMeshFBXLoader(plugin.Loader):
return task
def import_and_containerize(
self, filepath, asset_dir, asset_name, container_name
):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = self.get_task(
filepath, asset_dir, asset_name, False)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create Asset Container
create_container(container=container_name, path=asset_dir)
def imprint(
self, asset, asset_dir, container_name, asset_name, representation
):
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": representation["_id"],
"parent": representation["parent"],
"family": representation["context"]["family"]
}
imprint(f"{asset_dir}/{container_name}", data)
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
This is a two-step process. First, the FBX is imported to a temporary
path, then `containerise()` is called on it - this moves all content
to a new directory, creates an AssetContainer there and imprints it
with metadata, marking the path as a container.
Args:
context (dict): application context
name (str): subset name
@ -61,23 +92,15 @@ class StaticMeshFBXLoader(plugin.Loader):
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
options (dict): Data to be imprinted. Not used here; data are
imprinted by `containerise()`.
options (dict): Data to be imprinted.
Returns:
list(str): list of container content
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
if options and options.get("asset_dir"):
root = options["asset_dir"]
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
if not version.get("name") and version.get('type') == "hero_version":
@ -87,35 +110,20 @@ class StaticMeshFBXLoader(plugin.Loader):
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name_version}", suffix=""
f"{self.root}/{asset}/{name_version}", suffix=""
)
container_name += suffix
unreal.EditorAssetLibrary.make_directory(asset_dir)
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = self.filepath_from_context(context)
path = self.filepath_from_context(context)
task = self.get_task(path, asset_dir, asset_name, False)
self.import_and_containerize(
path, asset_dir, asset_name, container_name)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "ayon:container-2.0",
"id": AYON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)
self.imprint(
asset, asset_dir, container_name, asset_name,
context["representation"])
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@ -127,27 +135,36 @@ class StaticMeshFBXLoader(plugin.Loader):
return asset_content
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
destination_path = container["namespace"]
context = representation.get("context", {})
task = self.get_task(source_path, destination_path, name, True)
if not context:
raise RuntimeError("No context found in representation")
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
# Create directory for asset and Ayon container
asset = context.get('asset')
name = context.get('subset')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
version = context.get('version')
# Check if version is hero version and use different name
name_version = f"{name}_v{version:03d}" if version else f"{name}_hero"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{self.root}/{asset}/{name_version}", suffix="")
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
container_name += suffix
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
path = get_representation_path(representation)
self.import_and_containerize(
path, asset_dir, asset_name, container_name)
self.imprint(
asset, asset_dir, container_name, asset_name, representation)
asset_content = unreal.EditorAssetLibrary.list_assets(
destination_path, recursive=True, include_folder=True
asset_dir, recursive=True, include_folder=False
)
for a in asset_content:
View file
@ -41,7 +41,7 @@ class UAssetLoader(plugin.Loader):
"""
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
root = unreal_pipeline.AYON_ASSET_DIR
asset = context.get('asset').get('name')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
View file
@ -86,7 +86,7 @@ class YetiLoader(plugin.Loader):
raise RuntimeError("Groom plugin is not activated.")
# Create directory for asset and Ayon container
root = "/Game/Ayon/Assets"
root = unreal_pipeline.AYON_ASSET_DIR
asset = context.get('asset').get('name')
suffix = "_CON"
asset_name = f"{asset}_{name}" if asset else f"{name}"
View file
@ -3,6 +3,7 @@ import os
import re
import pyblish.api
from openpype.pipeline.publish import PublishValidationError
class ValidateSequenceFrames(pyblish.api.InstancePlugin):
@ -39,8 +40,22 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
collections, remainder = clique.assemble(
repr["files"], minimum_items=1, patterns=patterns)
assert not remainder, "Must not have remainder"
assert len(collections) == 1, "Must detect single collection"
if remainder:
raise PublishValidationError(
"Some files have been found outside a sequence. "
f"Invalid files: {remainder}")
if not collections:
raise PublishValidationError(
"We have been unable to find a sequence in the "
"files. Please ensure the files are named "
"appropriately. "
f"Files: {repr_files}")
if len(collections) > 1:
raise PublishValidationError(
"Multiple collections detected. There should be a single "
"collection per representation. "
f"Collections identified: {collections}")
collection = collections[0]
frames = list(collection.indexes)
@ -53,8 +68,12 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
data["clipOut"])
if current_range != required_range:
raise ValueError(f"Invalid frame range: {current_range} - "
f"expected: {required_range}")
raise PublishValidationError(
f"Invalid frame range: {current_range} - "
f"expected: {required_range}")
missing = collection.holes().indexes
assert not missing, "Missing frames: %s" % (missing,)
if missing:
raise PublishValidationError(
"Missing frames have been detected. "
f"Missing frames: {missing}")
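For context, a minimal sketch of the `clique` behaviour this validator relies on; the file names are made up:

```python
import clique

# A frame sequence with one frame missing.
files = ["render.1001.exr", "render.1002.exr", "render.1004.exr"]
collections, remainder = clique.assemble(files, minimum_items=1)

assert not remainder          # nothing fell outside a sequence
assert len(collections) == 1  # exactly one collection detected

collection = collections[0]
print(list(collection.indexes))          # [1001, 1002, 1004]
print(list(collection.holes().indexes))  # [1003] -> the plugin now
                                         # raises PublishValidationError
```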
View file
@ -237,8 +237,13 @@ class UISeparatorDef(UIDef):
class UILabelDef(UIDef):
type = "label"
def __init__(self, label):
super(UILabelDef, self).__init__(label=label)
def __init__(self, label, key=None):
super(UILabelDef, self).__init__(label=label, key=key)
def __eq__(self, other):
if not super(UILabelDef, self).__eq__(other):
return False
return self.label == other.label
# ---------------------------------------
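A small illustration of what the added `key` argument and `__eq__` enable; the labels and keys are hypothetical, and equality is assumed to also pass through the base definition comparison:

```python
from openpype.lib.attribute_definitions import UILabelDef

a = UILabelDef("Render settings", key="render_settings_label")
b = UILabelDef("Render settings", key="render_settings_label")
c = UILabelDef("Output paths", key="output_paths_label")

assert a == b  # same key and same label text
assert a != c  # label (and key) differ
```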
View file
@ -611,6 +611,12 @@ def get_openpype_username():
settings and last option is to use `getpass.getuser()` which returns
machine username.
"""
if AYON_SERVER_ENABLED:
import ayon_api
return ayon_api.get_user()["name"]
username = os.environ.get("OPENPYPE_USERNAME")
if not username:
local_settings = get_local_settings()
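The resolution order after this change: AYON server user, then the `OPENPYPE_USERNAME` environment variable, then local settings, then `getpass.getuser()`. A hypothetical check:

```python
from openpype.lib import get_openpype_username

# With AYON_SERVER_ENABLED this returns ayon_api.get_user()["name"];
# otherwise the OpenPype fallback chain applies.
print(get_openpype_username())
```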
View file
@ -66,6 +66,7 @@ IGNORED_FILENAMES_IN_AYON = {
"shotgrid",
"sync_server",
"slack",
"kitsu",
}
View file
@ -48,6 +48,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
use_gpu = False
env_allowed_keys = []
env_search_replace_values = {}
workfile_dependency = True
@classmethod
def get_attribute_defs(cls):
@ -83,6 +84,11 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
"suspend_publish",
default=False,
label="Suspend publish"
),
BoolDef(
"workfile_dependency",
default=True,
label="Workfile Dependency"
)
]
@ -313,6 +319,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
"AuxFiles": []
}
# Add workfile dependency.
workfile_dependency = instance.data["attributeValues"].get(
"workfile_dependency", self.workfile_dependency
)
if workfile_dependency:
payload["JobInfo"].update({"AssetDependency0": script_path})
# TODO: rewrite for baking with sequences
if baking_submission:
payload["JobInfo"].update({
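Deadline treats `AssetDependencyN` entries as files the job must wait for, so the render stays pending until the workfile is available on the farm. A sketch of the resulting payload fragment; the paths and names are hypothetical:

```python
payload = {"JobInfo": {"Plugin": "Nuke", "Name": "sh010_comp_v003"}}
script_path = "P:/projects/demo/sh010/work/comp_v003.nk"

# Mirrors the hunk above: the per-instance attribute overrides the
# plugin default before the dependency is added.
workfile_dependency = True
if workfile_dependency:
    payload["JobInfo"].update({"AssetDependency0": script_path})
```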
View file
@ -40,7 +40,7 @@ class LauncherAction(OpenPypeModule, ITrayAction):
actions_paths = self.manager.collect_plugin_paths()["actions"]
for path in actions_paths:
if path and os.path.exists(path):
register_launcher_action_path(actions_dir)
register_launcher_action_path(path)
paths_str = os.environ.get("AVALON_ACTIONS") or ""
if paths_str:
View file
@ -25,10 +25,7 @@ from openpype.tests.lib import is_in_tests
from .publish.lib import filter_pyblish_plugins
from .anatomy import Anatomy
from .template_data import (
get_template_data_with_names,
get_template_data
)
from .template_data import get_template_data_with_names
from .workfile import (
get_workfile_template_key,
get_custom_workfile_template_by_string_context,
@ -483,6 +480,27 @@ def get_template_data_from_session(session=None, system_settings=None):
)
def get_current_context_template_data(system_settings=None):
"""Prepare template data for current context.
Args:
system_settings (Optional[Dict[str, Any]]): Prepared system settings.
Returns:
Dict[str, Any]: Template data for current context.
"""
context = get_current_context()
project_name = context["project_name"]
asset_name = context["asset_name"]
task_name = context["task_name"]
host_name = get_current_host_name()
return get_template_data_with_names(
project_name, asset_name, task_name, host_name, system_settings
)
def get_workdir_from_session(session=None, template_key=None):
"""Template data for template fill from session keys.
@ -661,70 +679,3 @@ def get_process_id():
if _process_id is None:
_process_id = str(uuid.uuid4())
return _process_id
def get_current_context_template_data():
"""Template data for template fill from current context
Returns:
Dict[str, Any] of the following tokens and their values
Supported Tokens:
- Regular Tokens
- app
- user
- asset
- parent
- hierarchy
- folder[name]
- root[work, ...]
- studio[code, name]
- project[code, name]
- task[type, name, short]
- Context Specific Tokens
- assetData[frameStart]
- assetData[frameEnd]
- assetData[handleStart]
- assetData[handleEnd]
- assetData[frameStartHandle]
- assetData[frameEndHandle]
- assetData[resolutionHeight]
- assetData[resolutionWidth]
"""
# pre-prepare get_template_data args
current_context = get_current_context()
project_name = current_context["project_name"]
asset_name = current_context["asset_name"]
anatomy = Anatomy(project_name)
# prepare get_template_data args
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, asset_name)
task_name = current_context["task_name"]
host_name = get_current_host_name()
# get regular template data
template_data = get_template_data(
project_doc, asset_doc, task_name, host_name
)
template_data["root"] = anatomy.roots
# get context specific vars
asset_data = asset_doc["data"].copy()
# compute `frameStartHandle` and `frameEndHandle`
if "frameStart" in asset_data and "handleStart" in asset_data:
asset_data["frameStartHandle"] = \
asset_data["frameStart"] - asset_data["handleStart"]
if "frameEnd" in asset_data and "handleEnd" in asset_data:
asset_data["frameEndHandle"] = \
asset_data["frameEnd"] + asset_data["handleEnd"]
# add assetData
template_data["assetData"] = asset_data
return template_data
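A hypothetical use of the slimmed-down helper; the regular tokens documented in the removed variant (project, asset, task, app, user, ...) still apply, while the `assetData` extras do not:

```python
# Fill a name template from the current context.
data = get_current_context_template_data()
name = "{project[code]}_{asset}_{task[name]}".format(**data)
print(name)  # e.g. "demo_sh010_compositing"
```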
View file
@ -1472,8 +1472,15 @@ class PlaceholderLoadMixin(object):
context_filters=context_filters
))
def _before_placeholder_load(self, placeholder):
"""Can be overridden. It's called before placeholder representations
are loaded.
"""
pass
def _before_repre_load(self, placeholder, representation):
"""Can be overriden. Is called before representation is loaded."""
"""Can be overridden. It's called before representation is loaded."""
pass
@ -1506,7 +1513,7 @@ class PlaceholderLoadMixin(object):
return output
def populate_load_placeholder(self, placeholder, ignore_repre_ids=None):
"""Load placeholder is goind to load matching representations.
"""Load placeholder is going to load matching representations.
Note:
Ignore repre ids is to avoid loading the same representation again
@ -1528,7 +1535,7 @@ class PlaceholderLoadMixin(object):
# TODO check loader existence
loader_name = placeholder.data["loader"]
loader_args = placeholder.data["loader_args"]
loader_args = self.parse_loader_args(placeholder.data["loader_args"])
placeholder_representations = self._get_representations(placeholder)
@ -1550,6 +1557,11 @@ class PlaceholderLoadMixin(object):
self.project_name, filtered_representations
)
loaders_by_name = self.builder.get_loaders_by_name()
self._before_placeholder_load(
placeholder
)
failed = False
for repre_load_context in repre_load_contexts.values():
representation = repre_load_context["representation"]
repre_context = representation["context"]
@ -1562,24 +1574,24 @@ class PlaceholderLoadMixin(object):
repre_context["subset"],
repre_context["asset"],
loader_name,
loader_args
placeholder.data["loader_args"],
)
)
try:
container = load_with_repre_context(
loaders_by_name[loader_name],
repre_load_context,
options=self.parse_loader_args(loader_args)
options=loader_args
)
except Exception:
failed = True
self.load_failed(placeholder, representation)
failed = True
else:
failed = False
self.load_succeed(placeholder, container)
self.post_placeholder_process(placeholder, failed)
# Run post placeholder process after load of all representations
self.post_placeholder_process(placeholder, failed)
if failed:
self.log.debug(
@ -1599,10 +1611,7 @@ class PlaceholderLoadMixin(object):
placeholder.load_succeed(container)
def post_placeholder_process(self, placeholder, failed):
"""Cleanup placeholder after load of single representation.
Can be called multiple times during placeholder item populating and is
called even if loading failed.
"""Cleanup placeholder after load of its corresponding representations.
Args:
placeholder (PlaceholderItem): Item which was just used to load
@ -1801,10 +1810,7 @@ class PlaceholderCreateMixin(object):
placeholder.create_succeed(creator_instance)
def post_placeholder_process(self, placeholder, failed):
"""Cleanup placeholder after load of single representation.
Can be called multiple times during placeholder item populating and is
called even if loading failed.
"""Cleanup placeholder after load of its corresponding representations.
Args:
placeholder (PlaceholderItem): Item which was just used to load
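A sketch of a host-side plugin using the new hook; the class, import path and method bodies are assumptions for illustration:

```python
from openpype.pipeline.workfile.workfile_template_builder import (
    PlaceholderLoadMixin,
)


class MyLoadPlaceholderPlugin(PlaceholderLoadMixin):
    def _before_placeholder_load(self, placeholder):
        # Runs once per placeholder, before any of its representations
        # is loaded - e.g. to prepare a group node to parent into.
        self.log.debug("Populating placeholder %s", placeholder.data)

    def post_placeholder_process(self, placeholder, failed):
        # With this change the cleanup runs once, after all
        # representations were processed, even when a load failed.
        self.log.debug("Placeholder done, failed=%s", failed)
```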
View file
@ -1,4 +1,6 @@
import pyblish.api
from openpype import AYON_SERVER_ENABLED
from openpype.lib import get_openpype_username
@ -7,7 +9,11 @@ class CollectCurrentUserPype(pyblish.api.ContextPlugin):
# Order must be after default pyblish-base CollectCurrentUser
order = pyblish.api.CollectorOrder + 0.001
label = "Collect Pype User"
label = (
"Collect AYON User"
if AYON_SERVER_ENABLED
else "Collect OpenPype User"
)
def process(self, context):
user = get_openpype_username()
View file
@ -56,6 +56,17 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
data_object["stagingDir"] = anatomy.fill_root(staging_dir)
def _process_path(self, data, anatomy):
"""Process data of a single JSON publish metadata file.
Args:
data: The loaded metadata from the JSON file
anatomy: Anatomy for the current context
Returns:
bool: Whether any instance of this particular metadata file
has a persistent staging dir.
"""
# validate basic necessary data
data_err = "invalid json file - missing data"
required = ["asset", "user", "comment",
@ -89,6 +100,7 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
os.environ["FTRACK_SERVER"] = ftrack["FTRACK_SERVER"]
# now we can just add instances from json file and we are done
any_staging_dir_persistent = False
for instance_data in data.get("instances"):
self.log.debug(" - processing instance for {}".format(
@ -106,6 +118,9 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
staging_dir_persistent = instance.data.get(
"stagingDir_persistent", False
)
if staging_dir_persistent:
any_staging_dir_persistent = True
representations = []
for repre_data in instance_data.get("representations") or []:
self._fill_staging_dir(repre_data, anatomy)
@ -127,7 +142,7 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
self.log.debug(
f"Adding audio to instance: {instance.data['audio']}")
return staging_dir_persistent
return any_staging_dir_persistent
def process(self, context):
self._context = context
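The aggregation the fix introduces, reduced to a runnable sketch with stand-in instance data:

```python
# Before: the return value reflected only the last processed instance.
# After: the metadata file counts as persistent when any instance kept
# a persistent staging directory.
instances = [
    {"stagingDir_persistent": True},
    {"stagingDir_persistent": False},
]
any_staging_dir_persistent = any(
    inst.get("stagingDir_persistent", False) for inst in instances
)
print(any_staging_dir_persistent)  # True -> keep the metadata file
```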
View file
@ -639,6 +639,15 @@ def _convert_3dsmax_project_settings(ayon_settings, output):
for item in point_cloud_attribute
}
ayon_max["PointCloud"]["attribute"] = new_point_cloud_attribute
# --- Publish (START) ---
ayon_publish = ayon_max["publish"]
try:
attributes = json.loads(
ayon_publish["ValidateAttributes"]["attributes"]
)
except ValueError:
attributes = {}
ayon_publish["ValidateAttributes"]["attributes"] = attributes
ayon_publish = ayon_max["publish"]
if "ValidateLoadedPlugin" in ayon_publish:
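The conversion guards against malformed JSON in the raw setting; a runnable sketch with a hypothetical payload:

```python
import json

raw_value = '{"attrName": "value"}'  # value stored as a JSON string
try:
    attributes = json.loads(raw_value)
except ValueError:
    attributes = {}  # malformed input falls back to an empty mapping
```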
View file
@ -27,5 +27,12 @@
"farm_rendering"
]
}
},
"publish": {
"ValidateSaverResolution": {
"enabled": true,
"optional": true,
"active": true
}
}
}
View file
@ -25,6 +25,12 @@
},
"shelves": [],
"create": {
"CreateAlembicCamera": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateArnoldAss": {
"enabled": true,
"default_variants": [
@ -32,6 +38,66 @@
],
"ext": ".ass"
},
"CreateArnoldRop": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateCompositeSequence": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateHDA": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateKarmaROP": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateMantraROP": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreatePointCache": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateBGEO": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateRedshiftProxy": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateRedshiftROP": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateReview": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateStaticMesh": {
"enabled": true,
"default_variants": [
@ -45,31 +111,13 @@
"UCX"
]
},
"CreateAlembicCamera": {
"CreateUSD": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateCompositeSequence": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreatePointCache": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateRedshiftROP": {
"enabled": true,
"default_variants": [
"Main"
]
},
"CreateRemotePublish": {
"CreateUSDRender": {
"enabled": true,
"default_variants": [
"Main"
@ -81,35 +129,42 @@
"Main"
]
},
"CreateUSD": {
"enabled": false,
"default_variants": [
"Main"
]
},
"CreateUSDModel": {
"enabled": false,
"default_variants": [
"Main"
]
},
"USDCreateShadingWorkspace": {
"enabled": false,
"default_variants": [
"Main"
]
},
"CreateUSDRender": {
"enabled": false,
"CreateVrayROP": {
"enabled": true,
"default_variants": [
"Main"
]
}
},
"publish": {
"CollectRopFrameRange": {
"CollectAssetHandles": {
"use_asset_handles": true
},
"ValidateContainers": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateMeshIsStatic": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateReviewColorspace": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateSubsetName": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateUnrealStaticMeshName": {
"enabled": false,
"optional": true,
"active": true
},
"ValidateWorkfilePaths": {
"enabled": true,
"optional": true,
@ -121,31 +176,6 @@
"$HIP",
"$JOB"
]
},
"ValidateReviewColorspace": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateContainers": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateSubsetName": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateMeshIsStatic": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateUnrealStaticMeshName": {
"enabled": false,
"optional": true,
"active": true
}
}
}
View file
@ -37,6 +37,10 @@
"optional": true,
"active": true
},
"ValidateAttributes": {
"enabled": false,
"attributes": {}
},
"ValidateLoadedPlugin": {
"enabled": false,
"optional": true,
View file
@ -344,13 +344,30 @@
},
"environment": {}
},
"11-0": {
"use_python_2": true,
"executables": {
"windows": [
"C:\\Program Files\\Nuke11.0v4\\Nuke11.0.exe"
],
"darwin": [],
"linux": []
},
"arguments": {
"windows": [],
"darwin": [],
"linux": []
},
"environment": {}
},
"__dynamic_keys_labels__": {
"13-2": "13.2",
"13-0": "13.0",
"12-2": "12.2",
"12-0": "12.0",
"11-3": "11.3",
"11-2": "11.2"
"11-2": "11.2",
"11-0": "11.0"
}
}
},
View file
@ -84,6 +84,24 @@
]
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "publish",
"label": "Publish plugins",
"children": [
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"key": "ValidateSaverResolution",
"label": "Validate Saver Resolution"
}
]
}
]
}
]
}
View file
@ -4,6 +4,16 @@
"key": "create",
"label": "Creator plugins",
"children": [
{
"type": "schema_template",
"name": "template_create_plugin",
"template_data": [
{
"key": "CreateAlembicCamera",
"label": "Create Alembic Camera"
}
]
},
{
"type": "dict",
"collapsible": true,
@ -39,6 +49,52 @@
]
},
{
"type": "schema_template",
"name": "template_create_plugin",
"template_data": [
{
"key": "CreateArnoldRop",
"label": "Create Arnold ROP"
},
{
"key": "CreateCompositeSequence",
"label": "Create Composite (Image Sequence)"
},
{
"key": "CreateHDA",
"label": "Create Houdini Digital Asset"
},
{
"key": "CreateKarmaROP",
"label": "Create Karma ROP"
},
{
"key": "CreateMantraROP",
"label": "Create Mantra ROP"
},
{
"key": "CreatePointCache",
"label": "Create PointCache (Abc)"
},
{
"key": "CreateBGEO",
"label": "Create PointCache (Bgeo)"
},
{
"key": "CreateRedshiftProxy",
"label": "Create Redshift Proxy"
},
{
"key": "CreateRedshiftROP",
"label": "Create Redshift ROP"
},
{
"key": "CreateReview",
"label": "Create Review"
}
]
},
{
"type": "dict",
"collapsible": true,
@ -75,44 +131,20 @@
"name": "template_create_plugin",
"template_data": [
{
"key": "CreateAlembicCamera",
"label": "Create Alembic Camera"
"key": "CreateUSD",
"label": "Create USD (experimental)"
},
{
"key": "CreateCompositeSequence",
"label": "Create Composite (Image Sequence)"
},
{
"key": "CreatePointCache",
"label": "Create Point Cache"
},
{
"key": "CreateRedshiftROP",
"label": "Create Redshift ROP"
},
{
"key": "CreateRemotePublish",
"label": "Create Remote Publish"
"key": "CreateUSDRender",
"label": "Create USD render (experimental)"
},
{
"key": "CreateVDBCache",
"label": "Create VDB Cache"
},
{
"key": "CreateUSD",
"label": "Create USD"
},
{
"key": "CreateUSDModel",
"label": "Create USD Model"
},
{
"key": "USDCreateShadingWorkspace",
"label": "Create USD Shading Workspace"
},
{
"key": "CreateUSDRender",
"label": "Create USD Render"
"key": "CreateVrayROP",
"label": "Create VRay ROP"
}
]
}
View file
@ -11,8 +11,8 @@
{
"type": "dict",
"collapsible": true,
"key": "CollectRopFrameRange",
"label": "Collect Rop Frame Range",
"key": "CollectAssetHandles",
"label": "Collect Asset Handles",
"children": [
{
"type": "label",
@ -25,6 +25,36 @@
}
]
},
{
"type": "label",
"label": "Validators"
},
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"key": "ValidateContainers",
"label": "Validate Containers"
},
{
"key": "ValidateMeshIsStatic",
"label": "Validate Mesh is Static"
},
{
"key": "ValidateReviewColorspace",
"label": "Validate Review Colorspace"
},
{
"key": "ValidateSubsetName",
"label": "Validate Subset Name"
},
{
"key": "ValidateUnrealStaticMeshName",
"label": "Validate Unreal Static Mesh Name"
}
]
},
{
"type": "dict",
"collapsible": true,
@ -56,32 +86,6 @@
"object_type": "text"
}
]
},
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"key": "ValidateReviewColorspace",
"label": "Validate Review Colorspace"
},
{
"key": "ValidateContainers",
"label": "ValidateContainers"
},
{
"key": "ValidateSubsetName",
"label": "Validate Subset Name"
},
{
"key": "ValidateMeshIsStatic",
"label": "Validate Mesh is Static"
},
{
"key": "ValidateUnrealStaticMeshName",
"label": "Validate Unreal Static Mesh Name"
}
]
}
]
}
View file
@ -5,67 +5,100 @@
"is_group": true,
"use_label_wrap": true,
"object_type": {
"type": "dict",
"children": [
"type": "dict-conditional",
"enum_key": "options",
"enum_label": "Options",
"enum_children": [
{
"type": "text",
"key": "shelf_set_name",
"label": "Shelf Set Name"
},
{
"type": "path",
"key": "shelf_set_source_path",
"label": "Shelf Set Path (optional)",
"multipath": false,
"multiplatform": true
},
{
"type": "list",
"key": "shelf_definition",
"label": "Shelves",
"use_label_wrap": true,
"object_type": {
"type": "dict",
"children": [
{
"type": "text",
"key": "shelf_name",
"label": "Shelf Name"
},
{
"type": "list",
"key": "tools_list",
"label": "Tools",
"use_label_wrap": true,
"object_type": {
"type": "dict",
"children": [
{
"type": "text",
"key": "label",
"label": "Name"
},
{
"type": "path",
"key": "script",
"label": "Script"
},
{
"type": "path",
"key": "icon",
"label": "Icon"
},
{
"type": "text",
"key": "help",
"label": "Help"
}
]
"key": "add_shelf_file",
"label": "Add a .shelf file",
"children": [
{
"type": "dict",
"key": "add_shelf_file",
"label": "Add a .shelf file",
"children": [
{
"type": "path",
"key": "shelf_set_source_path",
"label": "Shelf Set Path",
"multipath": false,
"multiplatform": true
}
}
]
}
]
}
]
},
{
"key": "add_set_and_definitions",
"label": "Add Shelf Set Name and Shelves Definitions",
"children": [
{
"key": "add_set_and_definitions",
"label": "Add Shelf Set Name and Shelves Definitions",
"type": "dict",
"children": [
{
"type": "text",
"key": "shelf_set_name",
"label": "Shelf Set Name"
},
{
"type": "list",
"key": "shelf_definition",
"label": "Shelves Definitions",
"use_label_wrap": true,
"object_type": {
"type": "dict",
"children": [
{
"type": "text",
"key": "shelf_name",
"label": "Shelf Name"
},
{
"type": "list",
"key": "tools_list",
"label": "Tools",
"use_label_wrap": true,
"object_type": {
"type": "dict",
"children": [
{
"type": "label",
"label": "Name and Script Path are mandatory."
},
{
"type": "text",
"key": "label",
"label": "Name"
},
{
"type": "path",
"key": "script",
"label": "Script"
},
{
"type": "path",
"key": "icon",
"label": "Icon"
},
{
"type": "text",
"key": "help",
"label": "Help"
}
]
}
}
]
}
}
]
}
]
}
]
}
}
}
View file
@ -29,6 +29,25 @@
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "ValidateAttributes",
"label": "ValidateAttributes",
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "raw-json",
"key": "attributes",
"label": "Attributes"
}
]
},
{
"type": "dict",
"collapsible": true,
View file
@ -298,6 +298,7 @@ class NumberAttrWidget(_BaseAttrDefWidget):
input_widget.installEventFilter(self)
multisel_widget = ClickableLineEdit("< Multiselection >", self)
multisel_widget.setVisible(False)
input_widget.valueChanged.connect(self._on_value_change)
multisel_widget.clicked.connect(self._on_multi_click)
View file
@ -3,7 +3,7 @@ from qtpy import QtWidgets, QtCore
from openpype.tools.flickcharm import FlickCharm
from openpype.tools.utils import PlaceholderLineEdit, RefreshButton
from openpype.tools.ayon_utils.widgets import (
ProjectsModel,
ProjectsQtModel,
ProjectSortFilterProxy,
)
from openpype.tools.ayon_utils.models import PROJECTS_MODEL_SENDER
@ -95,7 +95,7 @@ class ProjectsWidget(QtWidgets.QWidget):
projects_view.setSelectionMode(QtWidgets.QListView.NoSelection)
flick = FlickCharm(parent=self)
flick.activateOn(projects_view)
projects_model = ProjectsModel(controller)
projects_model = ProjectsQtModel(controller)
projects_proxy_model = ProjectSortFilterProxy()
projects_proxy_model.setSourceModel(projects_model)
@ -133,9 +133,14 @@ class ProjectsWidget(QtWidgets.QWidget):
return self._projects_model.has_content()
def _on_view_clicked(self, index):
if index.isValid():
project_name = index.data(QtCore.Qt.DisplayRole)
self._controller.set_selected_project(project_name)
if not index.isValid():
return
model = index.model()
flags = model.flags(index)
if not flags & QtCore.Qt.ItemIsEnabled:
return
project_name = index.data(QtCore.Qt.DisplayRole)
self._controller.set_selected_project(project_name)
def _on_project_filter_change(self, text):
self._projects_proxy_model.setFilterFixedString(text)
Some files were not shown because too many files have changed in this diff.