Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 21:04:40 +01:00)

Commit 336517abc2: Merge branch 'develop' into bugfix/OP-4326_Houdini-switching-context-doesnt-update-variables

30 changed files with 965 additions and 153 deletions
4	.github/ISSUE_TEMPLATE/bug_report.yml (vendored)

@@ -35,6 +35,8 @@ body:
      label: Version
      description: What version are you running? Look to OpenPype Tray
      options:
        - 3.17.1
        - 3.17.1-nightly.3
        - 3.17.1-nightly.2
        - 3.17.1-nightly.1
        - 3.17.0

@@ -133,8 +135,6 @@ body:
        - 3.14.10-nightly.7
        - 3.14.10-nightly.6
        - 3.14.10-nightly.5
        - 3.14.10-nightly.4
        - 3.14.10-nightly.3
    validations:
      required: true
  - type: dropdown
264	CHANGELOG.md

@@ -1,6 +1,270 @@
# Changelog

## [3.17.1](https://github.com/ynput/OpenPype/tree/3.17.1)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.0...3.17.1)

### **🆕 New features**

<details>
<summary>Unreal: Yeti support <a href="https://github.com/ynput/OpenPype/pull/5643">#5643</a></summary>

Implemented Yeti support for Unreal.

___

</details>

<details>
<summary>Houdini: Add Static Mesh product-type (family) <a href="https://github.com/ynput/OpenPype/pull/5481">#5481</a></summary>

This PR adds support for publishing an Unreal Static Mesh in Houdini as FBX.

Quick recap:
- [x] Add UE Static Mesh Creator
- [x] Dynamic subset name like in Maya
- [x] Collect Static Mesh Type
- [x] Update collect output node
- [x] Validate FBX output node
- [x] Validate mesh is static
- [x] Validate Unreal Static Mesh Name
- [x] Validate Subset Name
- [x] FBX Extractor
- [x] FBX Loader
- [x] Update OP Settings
- [x] Update AYON Settings

___

</details>

<details>
<summary>Launcher tool: Refactor launcher tool (for AYON) <a href="https://github.com/ynput/OpenPype/pull/5612">#5612</a></summary>

Refactored the launcher tool into a new tool, separating backend and frontend logic. The refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype.

___

</details>
### **🚀 Enhancements**

<details>
<summary>Maya: Use custom staging dir function for Maya renders - OP-5265 <a href="https://github.com/ynput/OpenPype/pull/5186">#5186</a></summary>

Check for a custom staging dir when setting the renders output folder in Maya.

___

</details>

<details>
<summary>Colorspace: updating file path detection methods <a href="https://github.com/ynput/OpenPype/pull/5273">#5273</a></summary>

Support for OCIO v2 file rules has been integrated into the available color management API. A rough sketch of this kind of lookup follows this entry.

___

</details>
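As a rough illustration of what OCIO v2 file-rule based colorspace detection can look like, here is a minimal sketch. It is not the project's actual implementation; it assumes PyOpenColorIO 2.x with `Config.getColorSpaceFromFilepath` available in your build, and the config path is hypothetical.

```python
# Minimal sketch, not OpenPype's implementation. Assumes PyOpenColorIO 2.x;
# the config path below is hypothetical.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("/studio/ocio/config.ocio")


def colorspace_from_path(filepath):
    """Return the colorspace the config's file rules assign to a file path."""
    # OCIO v2 evaluates the config's file rules (including the default rule)
    # and returns the name of the matching colorspace.
    return config.getColorSpaceFromFilepath(filepath)


print(colorspace_from_path("renders/sh010_beauty.1001.exr"))
```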
<details>
<summary>Chore: add default isort config <a href="https://github.com/ynput/OpenPype/pull/5572">#5572</a></summary>

Add a default configuration for the isort tool.

___

</details>

<details>
<summary>Deadline: set PATH environment in deadline jobs by GlobalJobPreLoad <a href="https://github.com/ynput/OpenPype/pull/5622">#5622</a></summary>

This PR makes `GlobalJobPreLoad` set the `PATH` environment variable in Deadline jobs so that we don't have to give Deadline the full executable path to launch the DCC app. This should save us from adding logic to pass the Houdini patch version and from modifying the Houdini Deadline plugin, and it should work with other DCCs as well. A minimal sketch of the idea follows this entry.

___

</details>
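A minimal sketch of the mechanism described above, relying only on Deadline's `SetProcessEnvironmentVariable` plugin API; the function name and the DCC directory argument are hypothetical and this is not the actual `GlobalJobPreLoad` code.

```python
# Illustrative sketch only, not the actual GlobalJobPreLoad implementation.
# Assumes a Deadline plugin object exposing SetProcessEnvironmentVariable;
# prepend_dcc_dir_to_path and dcc_dir are hypothetical names.
import os


def prepend_dcc_dir_to_path(deadline_plugin, dcc_dir):
    """Prepend the DCC install directory to PATH for the rendering process."""
    current = os.environ.get("PATH", "")
    new_path = dcc_dir + os.pathsep + current if current else dcc_dir
    # The job process inherits this value, so it can start the DCC by its
    # bare executable name instead of a full path.
    deadline_plugin.SetProcessEnvironmentVariable("PATH", new_path)
```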
<details>
<summary>nuke: extract review data mov read node with expression <a href="https://github.com/ynput/OpenPype/pull/5635">#5635</a></summary>

Some productions might have default values set for Read nodes; those settings no longer collide.

___

</details>
### **🐛 Bug fixes**

<details>
<summary>Maya: Support new publisher for colorsets validation. <a href="https://github.com/ynput/OpenPype/pull/5630">#5630</a></summary>

Fix `validate_color_sets` for the new publisher. In current `develop` the repair option does not appear due to wrong error raising.

___

</details>

<details>
<summary>Houdini: Camera Loader fix mismatch for Maya cameras <a href="https://github.com/ynput/OpenPype/pull/5584">#5584</a></summary>

This PR adds:
- A workaround to match the Maya render mask in Houdini
- A `SetCameraResolution` inventory action
- Setting the camera resolution when loading or updating a camera

___

</details>

<details>
<summary>Nuke: fix set colorspace on writes <a href="https://github.com/ynput/OpenPype/pull/5634">#5634</a></summary>

Colorspace is now set correctly on any write node created from the publisher.

___

</details>

<details>
<summary>TVPaint: Fix review family extraction <a href="https://github.com/ynput/OpenPype/pull/5637">#5637</a></summary>

The extractor marks the representation of a review instance with the review tag.

___

</details>

<details>
<summary>AYON settings: Extract OIIO transcode settings <a href="https://github.com/ynput/OpenPype/pull/5639">#5639</a></summary>

Output definitions of Extract OIIO transcode now have a name to match OpenPype settings, and the settings are converted to a dictionary during settings conversion.

___

</details>

<details>
<summary>AYON: Fix task type short name conversion <a href="https://github.com/ynput/OpenPype/pull/5641">#5641</a></summary>

Convert the AYON task type short name for OpenPype correctly.

___

</details>

<details>
<summary>colorspace: missing `allowed_exts` fix <a href="https://github.com/ynput/OpenPype/pull/5646">#5646</a></summary>

The colorspace module no longer fails due to a missing `allowed_exts` attribute.

___

</details>

<details>
<summary>Photoshop: remove trailing underscore in subset name <a href="https://github.com/ynput/OpenPype/pull/5647">#5647</a></summary>

If the {layer} placeholder is at the end of the subset name template and is not used (for example in `auto_image`, where separating by layer doesn't make sense), a trailing '_' was kept. This updates the cleaning logic and extracts it, as the same situation can occur with a regular `image` instance. A sketch of this kind of cleanup follows this entry.

___

</details>
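A minimal sketch of the kind of cleanup described above; the template, keys and helper name are assumptions for illustration, not the plugin's actual code.

```python
# Illustrative sketch only; template, keys and function name are hypothetical.


def clean_subset_name(template, **data):
    """Format a subset name template and strip separators left by empty keys."""
    filled = {key: data.get(key, "") for key in ("family", "task", "layer")}
    name = template.format(**filled)
    # Collapse doubled separators produced by empty placeholders ...
    while "__" in name:
        name = name.replace("__", "_")
    # ... and drop leading/trailing separators, e.g. "image_main_" -> "image_main"
    return name.strip("_")


print(clean_subset_name("{family}_{task}_{layer}", family="image", task="main", layer=""))
```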
<details>
<summary>traypublisher: missing `assetEntity` in context data <a href="https://github.com/ynput/OpenPype/pull/5648">#5648</a></summary>

The issue with a missing `assetEntity` key in context data is no longer a problem.

___

</details>

<details>
<summary>AYON: Workfiles tool save button works <a href="https://github.com/ynput/OpenPype/pull/5653">#5653</a></summary>

Fix the Save As button in the workfiles tool. (It is a mystery why this stopped working.)

___

</details>

<details>
<summary>Max: bug fix delete items from container <a href="https://github.com/ynput/OpenPype/pull/5658">#5658</a></summary>

Fix the bug shown when clicking "Delete Items from Container", selecting nothing and pressing OK.

___

</details>
### **🔀 Refactored code**

<details>
<summary>Chore: Remove unused functions from Fusion integration <a href="https://github.com/ynput/OpenPype/pull/5617">#5617</a></summary>

Clean up unused code from the Fusion integration.

___

</details>

### **Merged pull requests**

<details>
<summary>Increase timeout for deadline test <a href="https://github.com/ynput/OpenPype/pull/5654">#5654</a></summary>

Deadline picks up jobs quite slowly, so bump up the delay.

___

</details>


## [3.17.0](https://github.com/ynput/OpenPype/tree/3.17.0)
@@ -65,12 +65,12 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData"
        on button_add pressed do
        (
            current_selection = selectByName title:"Select Objects to add to
            current_sel = selectByName title:"Select Objects to add to
            the Container" buttontext:"Add" filter:nodes_to_add
            if current_selection == undefined then return False
            if current_sel == undefined then return False
            temp_arr = #()
            i_node_arr = #()
            for c in current_selection do
            for c in current_sel do
            (
                handle_name = node_to_name c
                node_ref = NodeTransformMonitor node:c

@@ -89,15 +89,18 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData"
        on button_del pressed do
        (
            current_selection = selectByName title:"Select Objects to remove
            current_sel = selectByName title:"Select Objects to remove
            from the Container" buttontext:"Remove" filter: nodes_to_rmv
            if current_selection == undefined then return False
            if current_sel == undefined or current_sel.count == 0 then
            (
                return False
            )
            temp_arr = #()
            i_node_arr = #()
            new_i_node_arr = #()
            new_temp_arr = #()

            for c in current_selection do
            for c in current_sel do
            (
                node_ref = NodeTransformMonitor node:c as string
                handle_name = node_to_name c
@@ -2571,7 +2571,7 @@ def bake_to_world_space(nodes,
        new_name = "{0}_baked".format(short_name)
        new_node = cmds.duplicate(node,
                                  name=new_name,
                                  renameChildren=True)[0]
                                  renameChildren=True)[0]  # noqa

        # Connect all attributes on the node except for transform
        # attributes
32	openpype/hosts/maya/plugins/create/create_matchmove.py (new file)

@@ -0,0 +1,32 @@
from openpype.hosts.maya.api import (
    lib,
    plugin
)
from openpype.lib import BoolDef


class CreateMatchmove(plugin.MayaCreator):
    """Instance for more complex setup of cameras.

    Might contain multiple cameras, geometries etc.

    It is expected to be extracted into .abc or .ma
    """

    identifier = "io.openpype.creators.maya.matchmove"
    label = "Matchmove"
    family = "matchmove"
    icon = "video-camera"

    def get_instance_attr_defs(self):

        defs = lib.collect_animation_defs()

        defs.extend([
            BoolDef("bakeToWorldSpace",
                    label="Bake Cameras to World-Space",
                    tooltip="Bake Cameras to World-Space",
                    default=True),
        ])

        return defs
@@ -1,12 +1,6 @@
from maya import cmds, mel

from openpype.client import (
    get_asset_by_id,
    get_subset_by_id,
    get_version_by_id,
)
from openpype.pipeline import (
    get_current_project_name,
    load,
    get_representation_path,
)

@@ -18,7 +12,7 @@ class AudioLoader(load.LoaderPlugin):
    """Specific loader of audio."""

    families = ["audio"]
    label = "Import audio"
    label = "Load audio"
    representations = ["wav"]
    icon = "volume-up"
    color = "orange"

@@ -27,10 +21,10 @@ class AudioLoader(load.LoaderPlugin):
        start_frame = cmds.playbackOptions(query=True, min=True)
        sound_node = cmds.sound(
            file=context["representation"]["data"]["path"], offset=start_frame
            file=self.filepath_from_context(context), offset=start_frame
        )
        cmds.timeControl(
            mel.eval("$tmpVar=$gPlayBackSlider"),
            mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
            edit=True,
            sound=sound_node,
            displaySound=True

@@ -59,32 +53,50 @@ class AudioLoader(load.LoaderPlugin):
        assert audio_nodes is not None, "Audio node not found."
        audio_node = audio_nodes[0]

        current_sound = cmds.timeControl(
            mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
            query=True,
            sound=True
        )
        activate_sound = current_sound == audio_node

        path = get_representation_path(representation)
        cmds.setAttr("{}.filename".format(audio_node), path, type="string")

        cmds.sound(
            audio_node,
            edit=True,
            file=path
        )

        # The source start + end does not automatically update itself to the
        # length of the new audio file, even though Maya does do that when
        # creating a new audio node. So to update we compute it manually.
        # This would however override any source start and source end a user
        # might have done on the original audio node after load.
        audio_frame_count = cmds.getAttr("{}.frameCount".format(audio_node))
        audio_sample_rate = cmds.getAttr("{}.sampleRate".format(audio_node))
        duration_in_seconds = audio_frame_count / audio_sample_rate
        fps = mel.eval('currentTimeUnitToFPS()')  # workfile FPS
        source_start = 0
        source_end = (duration_in_seconds * fps)
        cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
        cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)

        if activate_sound:
            # maya by default deactivates it from timeline on file change
            cmds.timeControl(
                mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
                edit=True,
                sound=audio_node,
                displaySound=True
            )

        cmds.setAttr(
            container["objectName"] + ".representation",
            str(representation["_id"]),
            type="string"
        )

        # Set frame range.
        project_name = get_current_project_name()
        version = get_version_by_id(
            project_name, representation["parent"], fields=["parent"]
        )
        subset = get_subset_by_id(
            project_name, version["parent"], fields=["parent"]
        )
        asset = get_asset_by_id(
            project_name, subset["parent"], fields=["parent"]
        )

        source_start = 1 - asset["data"]["frameStart"]
        source_end = asset["data"]["frameEnd"]

        cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
        cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)

    def switch(self, container, representation):
        self.update(container, representation)
@@ -101,7 +101,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                "camerarig",
                "staticMesh",
                "skeletalMesh",
                "mvLook"]
                "mvLook",
                "matchmove"]

    representations = ["ma", "abc", "fbx", "mb"]
@@ -6,17 +6,21 @@ from openpype.pipeline import publish
from openpype.hosts.maya.api import lib


class ExtractCameraAlembic(publish.Extractor):
class ExtractCameraAlembic(publish.Extractor,
                           publish.OptionalPyblishPluginMixin):
    """Extract a Camera as Alembic.

    The cameras gets baked to world space by default. Only when the instance's
    The camera gets baked to world space by default. Only when the instance's
    `bakeToWorldSpace` is set to False it will include its full hierarchy.

    'camera' family expects only single camera, if multiple cameras are needed,
    'matchmove' is better choice.

    """

    label = "Camera (Alembic)"
    label = "Extract Camera (Alembic)"
    hosts = ["maya"]
    families = ["camera"]
    families = ["camera", "matchmove"]
    bake_attributes = []

    def process(self, instance):

@@ -35,10 +39,11 @@ class ExtractCameraAlembic(publish.Extractor):
        # validate required settings
        assert isinstance(step, float), "Step must be a float value"
        camera = cameras[0]

        # Define extract output file path
        dir_path = self.staging_dir(instance)
        if not os.path.exists(dir_path):
            os.makedirs(dir_path)
        filename = "{0}.abc".format(instance.name)
        path = os.path.join(dir_path, filename)

@@ -64,9 +69,10 @@ class ExtractCameraAlembic(publish.Extractor):
            # if baked, drop the camera hierarchy to maintain
            # clean output and backwards compatibility
            camera_root = cmds.listRelatives(
                camera, parent=True, fullPath=True)[0]
            job_str += ' -root {0}'.format(camera_root)
            camera_roots = cmds.listRelatives(
                cameras, parent=True, fullPath=True)
            for camera_root in camera_roots:
                job_str += ' -root {0}'.format(camera_root)

            for member in members:
                descendants = cmds.listRelatives(member,
@@ -2,11 +2,15 @@
"""Extract camera as Maya Scene."""
import os
import itertools
import contextlib

from maya import cmds

from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
from openpype.lib import (
    BoolDef
)


def massage_ma_file(path):

@@ -78,7 +82,8 @@ def unlock(plug):
        cmds.disconnectAttr(source, destination)


class ExtractCameraMayaScene(publish.Extractor):
class ExtractCameraMayaScene(publish.Extractor,
                             publish.OptionalPyblishPluginMixin):
    """Extract a Camera as Maya Scene.

    This will create a duplicate of the camera that will be baked *with*

@@ -88,17 +93,22 @@ class ExtractCameraMayaScene(publish.Extractor):
    The cameras gets baked to world space by default. Only when the instance's
    `bakeToWorldSpace` is set to False it will include its full hierarchy.

    'camera' family expects only single camera, if multiple cameras are needed,
    'matchmove' is better choice.

    Note:
        The extracted Maya ascii file gets "massaged" removing the uuid values
        so they are valid for older versions of Fusion (e.g. 6.4)

    """

    label = "Camera (Maya Scene)"
    label = "Extract Camera (Maya Scene)"
    hosts = ["maya"]
    families = ["camera"]
    families = ["camera", "matchmove"]
    scene_type = "ma"

    keep_image_planes = True

    def process(self, instance):
        """Plugin entry point."""
        # get settings

@@ -131,15 +141,15 @@ class ExtractCameraMayaScene(publish.Extractor):
                           "bake to world space is ignored...")

        # get cameras
        members = cmds.ls(instance.data['setMembers'], leaf=True, shapes=True,
                          long=True, dag=True)
        cameras = cmds.ls(members, leaf=True, shapes=True, long=True,
                          dag=True, type="camera")
        members = set(cmds.ls(instance.data['setMembers'], leaf=True,
                              shapes=True, long=True, dag=True))
        cameras = set(cmds.ls(members, leaf=True, shapes=True, long=True,
                              dag=True, type="camera"))

        # validate required settings
        assert isinstance(step, float), "Step must be a float value"
        camera = cameras[0]
        transform = cmds.listRelatives(camera, parent=True, fullPath=True)
        transforms = cmds.listRelatives(list(cameras),
                                        parent=True, fullPath=True)

        # Define extract output file path
        dir_path = self.staging_dir(instance)

@@ -151,23 +161,21 @@ class ExtractCameraMayaScene(publish.Extractor):
        with lib.evaluation("off"):
            with lib.suspended_refresh():
                if bake_to_worldspace:
                    self.log.debug(
                        "Performing camera bakes: {}".format(transform))
                    baked = lib.bake_to_world_space(
                        transform,
                        transforms,
                        frame_range=[start, end],
                        step=step
                    )
                    baked_camera_shapes = cmds.ls(baked,
                                                  type="camera",
                                                  dag=True,
                                                  shapes=True,
                                                  long=True)
                    baked_camera_shapes = set(cmds.ls(baked,
                                                      type="camera",
                                                      dag=True,
                                                      shapes=True,
                                                      long=True))

                    members = members + baked_camera_shapes
                    members.remove(camera)
                    members.update(baked_camera_shapes)
                    members.difference_update(cameras)
                else:
                    baked_camera_shapes = cmds.ls(cameras,
                    baked_camera_shapes = cmds.ls(list(cameras),
                                                  type="camera",
                                                  dag=True,
                                                  shapes=True,

@@ -186,19 +194,28 @@ class ExtractCameraMayaScene(publish.Extractor):
                        unlock(plug)
                        cmds.setAttr(plug, value)

                self.log.debug("Performing extraction..")
                cmds.select(cmds.ls(members, dag=True,
                                    shapes=True, long=True), noExpand=True)
                cmds.file(path,
                          force=True,
                          typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary",  # noqa: E501
                          exportSelected=True,
                          preserveReferences=False,
                          constructionHistory=False,
                          channels=True,  # allow animation
                          constraints=False,
                          shader=False,
                          expressions=False)
                attr_values = self.get_attr_values_from_data(
                    instance.data)
                keep_image_planes = attr_values.get("keep_image_planes")

                with transfer_image_planes(sorted(cameras),
                                           sorted(baked_camera_shapes),
                                           keep_image_planes):

                    self.log.info("Performing extraction..")
                    cmds.select(cmds.ls(list(members), dag=True,
                                        shapes=True, long=True),
                                noExpand=True)
                    cmds.file(path,
                              force=True,
                              typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary",  # noqa: E501
                              exportSelected=True,
                              preserveReferences=False,
                              constructionHistory=False,
                              channels=True,  # allow animation
                              constraints=False,
                              shader=False,
                              expressions=False)

                # Delete the baked hierarchy
                if bake_to_worldspace:

@@ -219,3 +236,62 @@ class ExtractCameraMayaScene(publish.Extractor):
        self.log.debug("Extracted instance '{0}' to: {1}".format(
            instance.name, path))

    @classmethod
    def get_attribute_defs(cls):
        defs = super(ExtractCameraMayaScene, cls).get_attribute_defs()

        defs.extend([
            BoolDef("keep_image_planes",
                    label="Keep Image Planes",
                    tooltip="Preserving connected image planes on camera",
                    default=cls.keep_image_planes),

        ])

        return defs


@contextlib.contextmanager
def transfer_image_planes(source_cameras, target_cameras,
                          keep_input_connections):
    """Reattaches image planes to baked or original cameras.

    Baked cameras are duplicates of original ones.
    This attaches it to duplicated camera properly and after
    export it reattaches it back to original to keep image plane in workfile.
    """
    originals = {}
    try:
        for source_camera, target_camera in zip(source_cameras,
                                                target_cameras):
            image_planes = cmds.listConnections(source_camera,
                                                type="imagePlane") or []

            # Split off the parent path they are attached to - we want
            # the image plane node name.
            # TODO: Does this still mean the image plane name is unique?
            image_planes = [x.split("->", 1)[1] for x in image_planes]

            if not image_planes:
                continue

            originals[source_camera] = []
            for image_plane in image_planes:
                if keep_input_connections:
                    if source_camera == target_camera:
                        continue
                    _attach_image_plane(target_camera, image_plane)
                else:  # explicitly detaching image planes
                    cmds.imagePlane(image_plane, edit=True, detach=True)
                originals[source_camera].append(image_plane)
        yield
    finally:
        for camera, image_planes in originals.items():
            for image_plane in image_planes:
                _attach_image_plane(camera, image_plane)


def _attach_image_plane(camera, image_plane):
    cmds.imagePlane(image_plane, edit=True, detach=True)
    cmds.imagePlane(image_plane, edit=True, camera=camera)
@@ -3423,3 +3423,55 @@ def create_viewer_profile_string(viewer, display=None, path_like=False):
    if path_like:
        return "{}/{}".format(display, viewer)
    return "{} ({})".format(viewer, display)


def get_head_filename_without_hashes(original_path, name):
    """Function to get the renamed head filename without frame hashes

    To avoid the system being confused on finding the filename with
    frame hashes if the head of the filename has the hashed symbol

    Examples:
        >>> get_head_filename_without_hashes("render.####.exr", "baking")
        render.baking.####.exr
        >>> get_head_filename_without_hashes("render.%04d.exr", "tag")
        render.tag.%04d.exr
        >>> get_head_filename_without_hashes("exr.####.exr", "foo")
        exr.foo.####.exr

    Args:
        original_path (str): the filename with frame hashes
        name (str): the name of the tags

    Returns:
        str: the renamed filename with the tag
    """
    filename = os.path.basename(original_path)

    def insert_name(matchobj):
        return "{}.{}".format(name, matchobj.group(0))

    return re.sub(r"(%\d*d)|#+", insert_name, filename)


def get_filenames_without_hash(filename, frame_start, frame_end):
    """Get filenames without frame hash
    i.e. "renderCompositingMain.baking.0001.exr"

    Args:
        filename (str): filename with frame hash
        frame_start (str): start of the frame
        frame_end (str): end of the frame

    Returns:
        list: filename per frame of the sequence
    """
    filenames = []
    for frame in range(int(frame_start), (int(frame_end) + 1)):
        if "#" in filename:
            # use regex to convert #### to {:0>4}
            def replace(match):
                return "{{:0>{}}}".format(len(match.group()))
            filename_without_hashes = re.sub("#+", replace, filename)
            new_filename = filename_without_hashes.format(frame)
            filenames.append(new_filename)
    return filenames
@@ -21,6 +21,9 @@ from openpype.pipeline import (
    CreatedInstance,
    get_current_task_name
)
from openpype.lib.transcoding import (
    VIDEO_EXTENSIONS
)
from .lib import (
    INSTANCE_DATA_KNOB,
    Knobby,

@@ -35,7 +38,9 @@ from .lib import (
    get_node_data,
    get_view_process_node,
    get_viewer_config_from_string,
    deprecated
    deprecated,
    get_head_filename_without_hashes,
    get_filenames_without_hash
)
from .pipeline import (
    list_instances,

@@ -634,6 +639,10 @@ class ExporterReview(object):
            "frameStart": self.first_frame,
            "frameEnd": self.last_frame,
        })
        if ".{}".format(self.ext) not in VIDEO_EXTENSIONS:
            filenames = get_filenames_without_hash(
                self.file, self.first_frame, self.last_frame)
            repre["files"] = filenames

        if self.multiple_presets:
            repre["outputName"] = self.name

@@ -808,6 +817,18 @@ class ExporterReviewMov(ExporterReview):
        self.log.info("File info was set...")

        self.file = self.fhead + self.name + ".{}".format(self.ext)
        if ".{}".format(self.ext) not in VIDEO_EXTENSIONS:
            # filename would be with frame hashes if
            # the file extension is not a video format
            filename = get_head_filename_without_hashes(
                self.path_in, self.name)
            self.file = filename
            # make sure the filename is in the
            # correct image output format
            if ".{}".format(self.ext) not in self.file:
                filename_no_ext, _ = os.path.splitext(filename)
                self.file = "{}.{}".format(filename_no_ext, self.ext)

        self.path = os.path.join(
            self.staging_dir, self.file).replace("\\", "/")

@@ -933,7 +954,6 @@ class ExporterReviewMov(ExporterReview):
        self.log.debug("Path: {}".format(self.path))
        write_node["file"].setValue(str(self.path))
        write_node["file_type"].setValue(str(self.ext))

        # Knobs `meta_codec` and `mov64_codec` are not available on centos.
        # TODO shouldn't this come from settings on outputs?
        try:
@@ -8,15 +8,16 @@ from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import maintained_selection


class ExtractReviewDataMov(publish.Extractor):
    """Extracts movie and thumbnail with baked in luts
class ExtractReviewIntermediates(publish.Extractor):
    """Extracting intermediate videos or sequences with
    thumbnail for transcoding.

    must be run after extract_render_local.py

    """

    order = pyblish.api.ExtractorOrder + 0.01
    label = "Extract Review Data Mov"
    label = "Extract Review Intermediates"

    families = ["review"]
    hosts = ["nuke"]

@@ -25,6 +26,22 @@ class ExtractReviewDataMov(publish.Extractor):
    viewer_lut_raw = None
    outputs = {}

    @classmethod
    def apply_settings(cls, project_settings):
        """Apply the settings from the deprecated
        ExtractReviewDataMov plugin for backwards compatibility
        """
        nuke_publish = project_settings["nuke"]["publish"]
        deprecated_setting = nuke_publish["ExtractReviewDataMov"]
        current_setting = nuke_publish["ExtractReviewIntermediates"]
        if deprecated_setting["enabled"]:
            # Use deprecated settings if they are still enabled
            cls.viewer_lut_raw = deprecated_setting["viewer_lut_raw"]
            cls.outputs = deprecated_setting["outputs"]
        elif current_setting["enabled"]:
            cls.viewer_lut_raw = current_setting["viewer_lut_raw"]
            cls.outputs = current_setting["outputs"]

    def process(self, instance):
        families = set(instance.data["families"])
@@ -1,28 +1,21 @@
import pyblish.api
from openpype.pipeline import OptionalPyblishPluginMixin


class CollectMissingFrameDataFromAssetEntity(
    pyblish.api.InstancePlugin,
    OptionalPyblishPluginMixin
):
    """Collect Missing Frame Range data From Asset Entity
class CollectFrameDataFromAssetEntity(pyblish.api.InstancePlugin):
    """Collect Frame Data From AssetEntity found in context

    Frame range data will only be collected if the keys
    are not yet collected for the instance.
    """

    order = pyblish.api.CollectorOrder + 0.491
    label = "Collect Missing Frame Data From Asset Entity"
    label = "Collect Missing Frame Data From Asset"
    families = ["plate", "pointcache",
                "vdbcache", "online",
                "render"]
    hosts = ["traypublisher"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        missing_keys = []
        for key in (
            "fps",
@@ -1,33 +1,47 @@
import pyblish.api
import clique

from openpype.pipeline import OptionalPyblishPluginMixin


class CollectSequenceFrameData(
    pyblish.api.InstancePlugin,
    OptionalPyblishPluginMixin
):
    """Collect Original Sequence Frame Data

class CollectSequenceFrameData(pyblish.api.InstancePlugin):
    """Collect Sequence Frame Data
    If the representation includes files with frame numbers,
    then set `frameStart` and `frameEnd` for the instance to the
    start and end frame respectively
    """

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Sequence Frame Data"
    order = pyblish.api.CollectorOrder + 0.4905
    label = "Collect Original Sequence Frame Data"
    families = ["plate", "pointcache",
                "vdbcache", "online",
                "render"]
    hosts = ["traypublisher"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        frame_data = self.get_frame_data_from_repre_sequence(instance)

        if not frame_data:
            # if no dict data skip collecting the frame range data
            return

        for key, value in frame_data.items():
            if key not in instance.data:
                instance.data[key] = value
                self.log.debug(f"Collected Frame range data '{key}':{value} ")
            instance.data[key] = value
            self.log.debug(f"Collected Frame range data '{key}':{value} ")


    def get_frame_data_from_repre_sequence(self, instance):
        repres = instance.data.get("representations")
        asset_data = instance.data["assetEntity"]["data"]

        if repres:
            first_repre = repres[0]
            if "ext" not in first_repre:

@@ -36,7 +50,7 @@ class CollectSequenceFrameData(pyblish.api.InstancePlugin):
            return

        files = first_repre["files"]
        collections, remainder = clique.assemble(files)
        collections, _ = clique.assemble(files)
        if not collections:
            # No sequences detected and we can't retrieve
            # frame range

@@ -52,5 +66,5 @@ class CollectSequenceFrameData(pyblish.api.InstancePlugin):
            "frameEnd": repres_frames[-1],
            "handleStart": 0,
            "handleEnd": 0,
            "fps": instance.context.data["assetEntity"]["data"]["fps"]
            "fps": asset_data["fps"]
        }
@@ -748,7 +748,19 @@ def _convert_nuke_project_settings(ayon_settings, output):
    )

    new_review_data_outputs = {}
    for item in ayon_publish["ExtractReviewDataMov"]["outputs"]:
    outputs_settings = None
    # Check deprecated ExtractReviewDataMov
    # settings for backwards compatibility
    deprecrated_review_settings = ayon_publish["ExtractReviewDataMov"]
    current_review_settings = (
        ayon_publish["ExtractReviewIntermediates"]
    )
    if deprecrated_review_settings["enabled"]:
        outputs_settings = deprecrated_review_settings["outputs"]
    elif current_review_settings["enabled"]:
        outputs_settings = current_review_settings["outputs"]

    for item in outputs_settings:
        item_filter = item["filter"]
        if "product_names" in item_filter:
            item_filter["subsets"] = item_filter.pop("product_names")

@@ -767,7 +779,11 @@ def _convert_nuke_project_settings(ayon_settings, output):
        name = item.pop("name")
        new_review_data_outputs[name] = item
    ayon_publish["ExtractReviewDataMov"]["outputs"] = new_review_data_outputs

    if deprecrated_review_settings["enabled"]:
        deprecrated_review_settings["outputs"] = new_review_data_outputs
    elif current_review_settings["enabled"]:
        current_review_settings["outputs"] = new_review_data_outputs

    collect_instance_data = ayon_publish["CollectInstanceData"]
    if "sync_workfile_version_on_product_types" in collect_instance_data:
@@ -1338,6 +1338,12 @@
        "active": true,
        "bake_attributes": []
    },
    "ExtractCameraMayaScene": {
        "enabled": true,
        "optional": true,
        "active": true,
        "keep_image_planes": false
    },
    "ExtractGLB": {
        "enabled": true,
        "active": true,
@@ -501,6 +501,60 @@
            }
        }
    },
    "ExtractReviewIntermediates": {
        "enabled": true,
        "viewer_lut_raw": false,
        "outputs": {
            "baking": {
                "filter": {
                    "task_types": [],
                    "families": [],
                    "subsets": []
                },
                "read_raw": false,
                "viewer_process_override": "",
                "bake_viewer_process": true,
                "bake_viewer_input_process": true,
                "reformat_nodes_config": {
                    "enabled": false,
                    "reposition_nodes": [
                        {
                            "node_class": "Reformat",
                            "knobs": [
                                {"type": "text", "name": "type", "value": "to format"},
                                {"type": "text", "name": "format", "value": "HD_1080"},
                                {"type": "text", "name": "filter", "value": "Lanczos6"},
                                {"type": "bool", "name": "black_outside", "value": true},
                                {"type": "bool", "name": "pbb", "value": false}
                            ]
                        }
                    ]
                },
                "extension": "mov",
                "add_custom_tags": []
            }
        }
    },
    "ExtractSlateFrame": {
        "viewer_lut_raw": false,
        "key_value_mapping": {
@@ -346,10 +346,10 @@
            }
        },
        "publish": {
            "CollectFrameDataFromAssetEntity": {
            "CollectSequenceFrameData": {
                "enabled": true,
                "optional": true,
                "active": true
                "active": false
            },
            "ValidateFrameRange": {
                "enabled": true,
@@ -350,8 +350,8 @@
    "name": "template_validate_plugin",
    "template_data": [
        {
            "key": "CollectFrameDataFromAssetEntity",
            "label": "Collect frame range from asset entity"
            "key": "CollectSequenceFrameData",
            "label": "Collect Original Sequence Frame Data"
        },
        {
            "key": "ValidateFrameRange",
@@ -978,6 +978,35 @@
            }
        ]
    },
    {
        "type": "dict",
        "collapsible": true,
        "key": "ExtractCameraMayaScene",
        "label": "Extract camera to Maya scene",
        "checkbox_key": "enabled",
        "children": [
            {"type": "boolean", "key": "enabled", "label": "Enabled"},
            {"type": "boolean", "key": "optional", "label": "Optional"},
            {"type": "boolean", "key": "active", "label": "Active"},
            {"type": "boolean", "key": "keep_image_planes", "label": "Export Image planes"}
        ]
    },
    {
        "type": "dict",
        "collapsible": true,
@@ -371,6 +371,151 @@

        ]
    },
    {
        "type": "label",
        "label": "^ Settings for <span style=\"color:#FF0000\";><b>ExtractReviewDataMov</b></span> are deprecated and will soon be removed. <br> Please use <b>ExtractReviewIntermediates</b> instead."
    },
    {
        "type": "dict",
        "collapsible": true,
        "checkbox_key": "enabled",
        "key": "ExtractReviewIntermediates",
        "label": "ExtractReviewIntermediates",
        "is_group": true,
        "children": [
            {"type": "boolean", "key": "enabled", "label": "Enabled"},
            {"type": "boolean", "key": "viewer_lut_raw", "label": "Viewer LUT raw"},
            {
                "key": "outputs",
                "label": "Output Definitions",
                "type": "dict-modifiable",
                "highlight_content": true,
                "object_type": {
                    "type": "dict",
                    "children": [
                        {
                            "type": "dict",
                            "collapsible": false,
                            "key": "filter",
                            "label": "Filtering",
                            "children": [
                                {"key": "task_types", "label": "Task types", "type": "task-types-enum"},
                                {"key": "families", "label": "Families", "type": "list", "object_type": "text"},
                                {"key": "subsets", "label": "Subsets", "type": "list", "object_type": "text"}
                            ]
                        },
                        {"type": "separator"},
                        {"type": "boolean", "key": "read_raw", "label": "Read colorspace RAW", "default": false},
                        {"type": "text", "key": "viewer_process_override", "label": "Viewer Process colorspace profile override"},
                        {"type": "boolean", "key": "bake_viewer_process", "label": "Bake Viewer Process"},
                        {"type": "boolean", "key": "bake_viewer_input_process", "label": "Bake Viewer Input Process (LUTs)"},
                        {"type": "separator"},
                        {
                            "key": "reformat_nodes_config",
                            "type": "dict",
                            "label": "Reformat Nodes",
                            "collapsible": true,
                            "checkbox_key": "enabled",
                            "children": [
                                {"type": "boolean", "key": "enabled", "label": "Enabled"},
                                {
                                    "type": "label",
                                    "label": "Reposition knobs supported only.<br/>You can add multiple reformat nodes <br/>and set their knobs. Order of reformat <br/>nodes is important. First reformat node <br/>will be applied first and last reformat <br/>node will be applied last."
                                },
                                {
                                    "key": "reposition_nodes",
                                    "type": "list",
                                    "label": "Reposition nodes",
                                    "object_type": {
                                        "type": "dict",
                                        "children": [
                                            {"key": "node_class", "label": "Node class", "type": "text"},
                                            {
                                                "type": "schema_template",
                                                "name": "template_nuke_knob_inputs",
                                                "template_data": [
                                                    {"label": "Node knobs", "key": "knobs"}
                                                ]
                                            }
                                        ]
                                    }
                                }
                            ]
                        },
                        {"type": "separator"},
                        {"type": "text", "key": "extension", "label": "Write node file type"},
                        {"key": "add_custom_tags", "label": "Add custom tags", "type": "list", "object_type": "text"}
                    ]
                }
            }
        ]
    },
    {
        "type": "dict",
        "collapsible": true,
@@ -48,7 +48,7 @@ def get_task_template_data(project_entity, task):
        return {}
    short_name = None
    task_type_name = task["taskType"]
    for task_type_info in project_entity["config"]["taskTypes"]:
    for task_type_info in project_entity["taskTypes"]:
        if task_type_info["name"] == task_type_name:
            short_name = task_type_info["shortName"]
            break
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.17.1-nightly.2"
__version__ = "3.17.1"
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
version = "3.17.0" # OpenPype
version = "3.17.1" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"
@@ -149,7 +149,7 @@ class ReformatNodesConfigModel(BaseSettingsModel):
    )


class BakingStreamModel(BaseSettingsModel):
class IntermediateOutputModel(BaseSettingsModel):
    name: str = Field(title="Output name")
    filter: BakingStreamFilterModel = Field(
        title="Filter", default_factory=BakingStreamFilterModel)

@@ -166,9 +166,21 @@ class BakingStreamModel(BaseSettingsModel):


class ExtractReviewDataMovModel(BaseSettingsModel):
    """[deprecated] use Extract Review Data Baking
    Streams instead.
    """
    enabled: bool = Field(title="Enabled")
    viewer_lut_raw: bool = Field(title="Viewer lut raw")
    outputs: list[BakingStreamModel] = Field(
    outputs: list[IntermediateOutputModel] = Field(
        default_factory=list,
        title="Baking streams"
    )


class ExtractReviewIntermediatesModel(BaseSettingsModel):
    enabled: bool = Field(title="Enabled")
    viewer_lut_raw: bool = Field(title="Viewer lut raw")
    outputs: list[IntermediateOutputModel] = Field(
        default_factory=list,
        title="Baking streams"
    )

@@ -270,6 +282,10 @@ class PublishPuginsModel(BaseSettingsModel):
        title="Extract Review Data Mov",
        default_factory=ExtractReviewDataMovModel
    )
    ExtractReviewIntermediates: ExtractReviewIntermediatesModel = Field(
        title="Extract Review Intermediates",
        default_factory=ExtractReviewIntermediatesModel
    )
    ExtractSlateFrame: ExtractSlateFrameModel = Field(
        title="Extract Slate Frame",
        default_factory=ExtractSlateFrameModel

@@ -465,6 +481,61 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = {
            }
        ]
    },
    "ExtractReviewIntermediates": {
        "enabled": True,
        "viewer_lut_raw": False,
        "outputs": [
            {
                "name": "baking",
                "filter": {
                    "task_types": [],
                    "product_types": [],
                    "product_names": []
                },
                "read_raw": False,
                "viewer_process_override": "",
                "bake_viewer_process": True,
                "bake_viewer_input_process": True,
                "reformat_nodes_config": {
                    "enabled": False,
                    "reposition_nodes": [
                        {
                            "node_class": "Reformat",
                            "knobs": [
                                {"type": "text", "name": "type", "text": "to format"},
                                {"type": "text", "name": "format", "text": "HD_1080"},
                                {"type": "text", "name": "filter", "text": "Lanczos6"},
                                {"type": "bool", "name": "black_outside", "boolean": True},
                                {"type": "bool", "name": "pbb", "boolean": False}
                            ]
                        }
                    ]
                },
                "extension": "mov",
                "add_custom_tags": []
            }
        ]
    },
    "ExtractSlateFrame": {
        "viewer_lut_raw": False,
        "key_value_mapping": {
@@ -1 +1 @@
__version__ = "0.1.2"
__version__ = "0.1.3"
@@ -32,7 +32,7 @@ class TestDeadlinePublishInMaya(MayaDeadlinePublishTestClass):
    # keep empty to locate latest installed variant or explicit
    APP_VARIANT = ""

    TIMEOUT = 120  # publish timeout
    TIMEOUT = 180  # publish timeout

    def test_db_asserts(self, dbcon, publish_finished):
        """Host and input data dependent expected results in DB."""
@@ -33,39 +33,41 @@ The Instances are categorized into ‘families’ based on what type of data the
Following family definitions and requirements are OpenPype defaults and what we consider good industry practice, but most of the requirements can be easily altered to suit the studio or project needs.
Here's a list of supported families:

(The previous version of this table was identical apart from the Camera and Matchmove comments; the updated table follows.)

| Family                   | Comment                                                                                  | Example Subsets           |
|--------------------------|------------------------------------------------------------------------------------------|---------------------------|
| [Model](#model)          | Cleaned geo without materials                                                            | main, proxy, broken       |
| [Look](#look)            | Package of shaders, assignments and textures                                             | main, wet, dirty          |
| [Rig](#rig)              | Characters or props with animation controls                                              | main, deform, sim         |
| [Assembly](#assembly)    | A complex model made from multiple other models.                                         | main, deform, sim         |
| [Layout](#layout)        | Simple representation of the environment                                                 | main,                     |
| [Setdress](#setdress)    | Environment containing only referenced assets                                            | main,                     |
| [Camera](#camera)        | May contain trackers or proxy geo, only single camera expected.                          | main, tracked, anim       |
| [Animation](#animation)  | Animation exported from a rig.                                                           | characterA, vehicleB      |
| [Cache](#cache)          | Arbitrary animated geometry or fx cache                                                  | rest, ROM, pose01         |
| MayaAscii                | Maya publishes that don't fit other categories                                           |                           |
| [Render](#render)        | Rendered frames from CG or Comp                                                          |                           |
| RenderSetup              | Scene render settings, AOVs and layers                                                   |                           |
| Plate                    | Ingested, transcode, conformed footage                                                   | raw, graded, imageplane   |
| Write                    | Nuke write nodes for rendering                                                           |                           |
| Image                    | Any non-plate image to be used by artists                                                | Reference, ConceptArt     |
| LayeredImage             | Software agnostic layered image with metadata                                            | Reference, ConceptArt     |
| Review                   | Reviewable video or image.                                                               |                           |
| Matchmove                | Matchmoved camera, potentially with geometry, allows multiple cameras even with planes.  | main                      |
| Workfile                 | Backup of the workfile with all its content                                              | uses the task name        |
| Nukenodes                | Any collection of nuke nodes                                                             | maskSetup, usefulBackdrop |
| Yeticache                | Cached out yeti fur setup                                                                |                           |
| YetiRig                  | Yeti groom ready to be applied to geometry cache                                         | main, destroyed           |
| VrayProxy                | Vray proxy geometry for rendering                                                        |                           |
| VrayScene                | Vray full scene export                                                                   |                           |
| ArnoldStandin            | All arnold .ass archives for rendering                                                   | main, wet, dirty          |
| LUT                      |                                                                                          |                           |
| Nukenodes                |                                                                                          |                           |
| Gizmo                    |                                                                                          |                           |
| Nukenodes                |                                                                                          |                           |
| Harmony.template         |                                                                                          |                           |
| Harmony.palette          |                                                                                          |                           |
@@ -161,7 +163,7 @@ Example Representations:
### Animation

Published result of an animation created with a rig. Animation can be extracted
as animation curves, cached out geometry or even fully animated rig with all the controllers.
as animation curves, cached out geometry or even fully animated rig with all the controllers.
Animation cache is usually defined by a rigger in the rig file of a character or
by FX TD in the effects rig, to ensure consistency of outputs.
@@ -189,7 +189,7 @@ A profile may generate multiple outputs from a single input. Each output must de
- Profile filtering defines which group of output definitions is used, but output definitions may require more specific filters on their own.
- They may filter by subset name (regex can be used) or by publish families. Publish families are more complex, as they are based on knowing the code base.
- Filtering by custom tags -> this is used for targeting output definitions from other extractors using settings (at this moment only the Nuke bake extractor can target using custom tags). A short sketch of reading this setting follows this list.
- Nuke extractor settings path: `project_settings/nuke/publish/ExtractReviewDataMov/outputs/baking/add_custom_tags`
- Nuke extractor settings path: `project_settings/nuke/publish/ExtractReviewIntermediates/outputs/baking/add_custom_tags`
- Filtering by input length. Input may be video, sequence or single image. It is possible that `.mp4` should be created only when the input is a video or sequence, and a review `.png` only when the input is a single frame. In some cases the output should be created regardless of whether the input is single frame or multi frame.
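As a small illustration of the settings path mentioned above, the snippet below reads the custom tags from a project settings dictionary. It is a sketch only; the `get_project_settings` call and project name are assumptions for illustration, not something this page documents.

```python
# Sketch only: read the documented settings path from project settings.
# get_project_settings and the project name are assumptions for illustration.
from openpype.settings import get_project_settings

settings = get_project_settings("my_project")
baking_output = (
    settings["nuke"]["publish"]["ExtractReviewIntermediates"]["outputs"]["baking"]
)
custom_tags = baking_output["add_custom_tags"]
print(custom_tags)  # empty by default; tags listed here can be matched by review output filters
```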
@@ -534,8 +534,7 @@ Plugin responsible for generating thumbnails with colorspace controlled by Nuke.
    }
```

### `ExtractReviewDataMov`

### `ExtractReviewIntermediates`
`viewer_lut_raw` **true** will publish the baked mov file without any colorspace conversion. It will be baked with the workfile workspace. This can happen in case the Viewer input process uses baked screen space luts.

#### baking with controlled colorspace