mirror of https://github.com/ynput/ayon-core.git — synced 2026-01-02 08:54:53 +01:00

Merge branch 'develop' into bugfix/resolve_fix_loader_slate_6124

commit 427ecdd62a
283 changed files with 6810 additions and 4877 deletions
34 .github/ISSUE_TEMPLATE/bug_report.yml (vendored)

@@ -35,6 +35,23 @@ body:
       label: Version
       description: What version are you running? Look to OpenPype Tray
       options:
+        - 3.18.8-nightly.1
+        - 3.18.7
+        - 3.18.7-nightly.5
+        - 3.18.7-nightly.4
+        - 3.18.7-nightly.3
+        - 3.18.7-nightly.2
+        - 3.18.7-nightly.1
+        - 3.18.6
+        - 3.18.6-nightly.2
+        - 3.18.6-nightly.1
+        - 3.18.5
+        - 3.18.5-nightly.3
+        - 3.18.5-nightly.2
+        - 3.18.5-nightly.1
+        - 3.18.4
+        - 3.18.4-nightly.1
+        - 3.18.3
         - 3.18.3-nightly.2
         - 3.18.3-nightly.1
         - 3.18.2

@@ -118,23 +135,6 @@ body:
         - 3.15.11-nightly.3
         - 3.15.11-nightly.2
         - 3.15.11-nightly.1
-        - 3.15.10
-        - 3.15.10-nightly.2
-        - 3.15.10-nightly.1
-        - 3.15.9
-        - 3.15.9-nightly.2
-        - 3.15.9-nightly.1
-        - 3.15.8
-        - 3.15.8-nightly.3
-        - 3.15.8-nightly.2
-        - 3.15.8-nightly.1
-        - 3.15.7
-        - 3.15.7-nightly.3
-        - 3.15.7-nightly.2
-        - 3.15.7-nightly.1
-        - 3.15.6
-        - 3.15.6-nightly.3
-        - 3.15.6-nightly.2
     validations:
       required: true
   - type: dropdown
1143 CHANGELOG.md

File diff suppressed because it is too large
CONTRIBUTING.md

@@ -1,53 +1,12 @@
-## How to contribute to Pype
+## How to contribute to OpenPype
 
-We are always happy for any contributions for OpenPype improvements. Before making a PR and starting to work on an issue, please read these simple guidelines.
+OpenPype has reached the end of its life and is now in a limited maintenance mode (read more at https://community.ynput.io/t/openpype-end-of-life-timeline/877). As such we're no longer accepting contributions unless they are also ported to AYON at the same time.
 
-#### **Did you find a bug?**
+## Getting my PR merged during this period
 
-1. Check the issues and our [bug triage](https://github.com/pypeclub/pype/projects/2) to make sure it wasn't reported already.
-2. Ask on our [discord](http://pype.community/chat). Often, what appears as a bug might be the intended behaviour for someone else.
-3. Create a new issue.
-4. Use the issue template for your PR please.
+- Each OpenPype PR MUST have a corresponding AYON PR in github. Without AYON compatibility, features will not be merged! Luckily most of the code is compatible, albeit sometimes in a different place after refactor. Porting from OpenPype to AYON should be really easy.
+- Please keep the corresponding OpenPype and AYON PR names the same so they can be easily identified.
 
+Inside each PR, put a link to the corresponding PR from the other product. OpenPype PRs should point to the AYON PR and vice versa.
 
-#### **Did you write a patch that fixes a bug?**
-
-- Open a new GitHub pull request with the patch.
-- Ensure the PR description clearly describes the problem and solution. Include the relevant issue number if applicable.
-
-#### **Do you intend to add a new feature or change an existing one?**
-
-- Open a new thread in the [github discussions](https://github.com/pypeclub/pype/discussions/new)
-- Do not open an issue until the suggestion is discussed. We will convert accepted suggestions into the backlog and point them to the relevant discussion thread to keep the context.
-- If you are already working on a new feature and you'd like it eventually merged to the main codebase, please consider making a DRAFT PR as soon as possible. This makes it a lot easier to give feedback, discuss the code and functionality, plus it prevents multiple people tackling the same problem independently.
-
-#### **Do you have questions about the source code?**
-
-Open a new question on [github discussions](https://github.com/pypeclub/pype/discussions/new)
-
-## Branching Strategy
-
-As we move to 3.x as the primary supported version of pype and only keep 2.15 on bugfixes and client sponsored feature requests, we need to be very careful with the merging strategy.
-
-We also use this opportunity to switch the branch naming. The 3.0 production branch will no longer be called MASTER, but will be renamed to MAIN. Develop will stay as it is.
-
-A few important notes about 2.x and 3.x development:
-
-- 3.x features are not backported to 2.x unless specifically requested
-- 3.x bugs and hotfixes can be ported to 2.x if they are relevant or severe
-- 2.x features and bugs MUST be ported to 3.x at the same time
-
-## Pull Requests
-
-- Each 2.x PR MUST have a corresponding 3.x PR in github. Without a 3.x PR, 2.x features will not be merged! Luckily most of the code is compatible, albeit sometimes in a different place after refactor. Porting from 2.x to 3.x should be really easy.
-- Please keep the corresponding 2 and 3 PR names the same so they can be easily identified from the PR list page.
-- Each 2.x PR should be labeled with the `2.x-dev` label.
-
-Inside each PR, put a link to the corresponding PR for the other version.
-
-Of course if you want to contribute, feel free to make a PR to only 2.x/develop or develop, based on what you are using. While reviewing the PRs, we might convert the code to a corresponding PR for the other release ourselves.
-
-We might also change the target of your PR to an intermediate branch, rather than `develop`, if we feel it requires some extra work on our end. That way, we preserve all your commits so you don't lose out on the contribution credits.
-
-If a PR is targeted at a 2.x release it must be labelled with the 2x-dev label in Github.
+AYON repository structure is a lot more granular compared to OpenPype. If you're unsure what repository your AYON equivalent PR should target, feel free to make an OpenPype PR first and ask.
@@ -9,8 +9,13 @@ OpenPype
 
 ## Important Notice!
 
-OpenPype as a standalone product has reached the end of its life and this repository is now used as the pipeline core code for [AYON](https://ynput.io/ayon/). You can read more details about the end of life process here https://community.ynput.io/t/openpype-end-of-life-timeline/877
+OpenPype as a standalone product has reached the end of its life and this repository is now being phased out in favour of [ayon-core](https://github.com/ynput/ayon-core). You can read more details about the end of life process here https://community.ynput.io/t/openpype-end-of-life-timeline/877
 
 As such, we no longer accept Pull Requests that are not ported to AYON at the same time!
 
+```
+Please refer to https://github.com/ynput/OpenPype/blob/develop/CONTRIBUTING.md for more information about the current PR process.
+```
+
 Introduction
 ------------
@@ -29,6 +29,7 @@ class RenderCreator(Creator):
 
     # Settings
     mark_for_review = True
+    force_setting_values = True
 
     def create(self, subset_name_from_ui, data, pre_create_data):
         stub = api.get_stub()  # only after After Effects is up

@@ -96,7 +97,9 @@ class RenderCreator(Creator):
         self._add_instance_to_context(new_instance)
 
         stub.rename_item(comp.id, subset_name)
-        set_settings(True, True, [comp.id], print_msg=False)
+
+        if self.force_setting_values:
+            set_settings(True, True, [comp.id], print_msg=False)
 
     def get_pre_create_attr_defs(self):
         output = [

@@ -173,6 +176,7 @@ class RenderCreator(Creator):
         )
 
         self.mark_for_review = plugin_settings["mark_for_review"]
+        self.force_setting_values = plugin_settings["force_setting_values"]
         self.default_variants = plugin_settings.get(
             "default_variants",
             plugin_settings.get("defaults") or []
@@ -127,8 +127,9 @@ def isolate_objects(window, objects):
 
     context = create_blender_context(selected=objects, window=window)
 
-    bpy.ops.view3d.view_axis(context, type="FRONT")
-    bpy.ops.view3d.localview(context)
+    with bpy.context.temp_override(**context):
+        bpy.ops.view3d.view_axis(type="FRONT")
+        bpy.ops.view3d.localview()
 
     deselect_all()

@@ -270,10 +271,12 @@ def _independent_window():
     """Create capture-window context."""
     context = create_blender_context()
     current_windows = set(bpy.context.window_manager.windows)
-    bpy.ops.wm.window_new(context)
-    window = list(set(bpy.context.window_manager.windows) - current_windows)[0]
-    context["window"] = window
-    try:
-        yield window
-    finally:
-        bpy.ops.wm.window_close(context)
+    with bpy.context.temp_override(**context):
+        bpy.ops.wm.window_new()
+        window = list(
+            set(bpy.context.window_manager.windows) - current_windows)[0]
+        context["window"] = window
+        try:
+            yield window
+        finally:
+            bpy.ops.wm.window_close()
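The two hunks above follow the Blender 3.2+ context-override pattern that recurs throughout this changeset: operators no longer accept a context dict as a positional argument, and overrides are applied with `bpy.context.temp_override()` instead. A minimal sketch of the new form (the `window` key is only an example override):

```python
import bpy

# Pre-3.2 form (since removed): bpy.ops.wm.window_new(override_dict)
# 3.2+ form: apply the override via a context manager instead.
override = {"window": bpy.context.window_manager.windows[0]}
with bpy.context.temp_override(**override):
    bpy.ops.wm.window_new()  # runs with the overridden window
```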
@@ -36,6 +36,12 @@ def prepare_scene_name(
     if namespace:
         name = f"{name}_{namespace}"
     name = f"{name}_{subset}"
+
+    # Blender name for a collection or object cannot be longer than 63
+    # characters. If the name is longer, it will raise an error.
+    if len(name) > 63:
+        raise ValueError(f"Scene name '{name}' would be too long.")
+
     return name

@@ -226,7 +232,7 @@ class BaseCreator(Creator):
 
         # Create asset group
         if AYON_SERVER_ENABLED:
-            asset_name = instance_data["folderPath"]
+            asset_name = instance_data["folderPath"].split("/")[-1]
         else:
             asset_name = instance_data["asset"]

@@ -305,12 +311,16 @@ class BaseCreator(Creator):
             )
             return
 
-        # Rename the instance node in the scene if subset or asset changed
+        # Rename the instance node in the scene if subset or asset changed.
+        # Do not rename the instance if the family is workfile, as the
+        # workfile instance is included in the AVALON_CONTAINER collection.
         if (
             "subset" in changes.changed_keys
             or asset_name_key in changes.changed_keys
-        ):
+        ) and created_instance.family != "workfile":
             asset_name = data[asset_name_key]
+            if AYON_SERVER_ENABLED:
+                asset_name = asset_name.split("/")[-1]
             name = prepare_scene_name(
                 asset=asset_name, subset=data["subset"]
             )
@@ -2,6 +2,7 @@ from pathlib import Path
 
 import bpy
 
+from openpype import AYON_SERVER_ENABLED
 from openpype.settings import get_project_settings
 from openpype.pipeline import get_current_project_name

@@ -47,6 +48,22 @@ def get_multilayer(settings):
                     ["multilayer_exr"])
 
 
+def get_renderer(settings):
+    """Get renderer from blender settings."""
+
+    return (settings["blender"]
+                    ["RenderSettings"]
+                    ["renderer"])
+
+
+def get_compositing(settings):
+    """Get compositing from blender settings."""
+
+    return (settings["blender"]
+                    ["RenderSettings"]
+                    ["compositing"])
+
+
 def get_render_product(output_path, name, aov_sep):
     """
     Generate the path to the render product. Blender interprets the `#`

@@ -91,66 +108,121 @@ def set_render_format(ext, multilayer):
         image_settings.file_format = "TIFF"
 
 
-def set_render_passes(settings):
-    aov_list = (settings["blender"]
-                        ["RenderSettings"]
-                        ["aov_list"])
-
-    custom_passes = (settings["blender"]
-                             ["RenderSettings"]
-                             ["custom_passes"])
+def set_render_passes(settings, renderer):
+    aov_list = set(settings["blender"]["RenderSettings"]["aov_list"])
+    custom_passes = settings["blender"]["RenderSettings"]["custom_passes"]
 
+    # Common passes for both renderers
     vl = bpy.context.view_layer
 
+    # Data Passes
     vl.use_pass_combined = "combined" in aov_list
     vl.use_pass_z = "z" in aov_list
     vl.use_pass_mist = "mist" in aov_list
     vl.use_pass_normal = "normal" in aov_list
 
+    # Light Passes
     vl.use_pass_diffuse_direct = "diffuse_light" in aov_list
     vl.use_pass_diffuse_color = "diffuse_color" in aov_list
     vl.use_pass_glossy_direct = "specular_light" in aov_list
     vl.use_pass_glossy_color = "specular_color" in aov_list
-    vl.eevee.use_pass_volume_direct = "volume_light" in aov_list
     vl.use_pass_emit = "emission" in aov_list
     vl.use_pass_environment = "environment" in aov_list
-    vl.use_pass_shadow = "shadow" in aov_list
     vl.use_pass_ambient_occlusion = "ao" in aov_list
 
-    cycles = vl.cycles
+    # Cryptomatte Passes
     vl.use_pass_cryptomatte_object = "cryptomatte_object" in aov_list
     vl.use_pass_cryptomatte_material = "cryptomatte_material" in aov_list
     vl.use_pass_cryptomatte_asset = "cryptomatte_asset" in aov_list
 
-    cycles.denoising_store_passes = "denoising" in aov_list
-    cycles.use_pass_volume_direct = "volume_direct" in aov_list
-    cycles.use_pass_volume_indirect = "volume_indirect" in aov_list
+    if renderer == "BLENDER_EEVEE":
+        # Eevee exclusive passes
+        eevee = vl.eevee
+
+        # Light Passes
+        vl.use_pass_shadow = "shadow" in aov_list
+        eevee.use_pass_volume_direct = "volume_light" in aov_list
+
+        # Effects Passes
+        eevee.use_pass_bloom = "bloom" in aov_list
+        eevee.use_pass_transparent = "transparent" in aov_list
+
+        # Cryptomatte Passes
+        vl.use_pass_cryptomatte_accurate = "cryptomatte_accurate" in aov_list
+    elif renderer == "CYCLES":
+        # Cycles exclusive passes
+        cycles = vl.cycles
+
+        # Data Passes
+        vl.use_pass_position = "position" in aov_list
+        vl.use_pass_vector = "vector" in aov_list
+        vl.use_pass_uv = "uv" in aov_list
+        cycles.denoising_store_passes = "denoising" in aov_list
+        vl.use_pass_object_index = "object_index" in aov_list
+        vl.use_pass_material_index = "material_index" in aov_list
+        cycles.pass_debug_sample_count = "sample_count" in aov_list
+
+        # Light Passes
+        vl.use_pass_diffuse_indirect = "diffuse_indirect" in aov_list
+        vl.use_pass_glossy_indirect = "specular_indirect" in aov_list
+        vl.use_pass_transmission_direct = "transmission_direct" in aov_list
+        vl.use_pass_transmission_indirect = "transmission_indirect" in aov_list
+        vl.use_pass_transmission_color = "transmission_color" in aov_list
+        cycles.use_pass_volume_direct = "volume_light" in aov_list
+        cycles.use_pass_volume_indirect = "volume_indirect" in aov_list
+        cycles.use_pass_shadow_catcher = "shadow" in aov_list
 
     aovs_names = [aov.name for aov in vl.aovs]
     for cp in custom_passes:
-        cp_name = cp[0]
+        cp_name = cp["attribute"] if AYON_SERVER_ENABLED else cp[0]
         if cp_name not in aovs_names:
            aov = vl.aovs.add()
            aov.name = cp_name
         else:
            aov = vl.aovs[cp_name]
-        aov.type = cp[1].get("type", "VALUE")
+        aov.type = (cp["value"]
+                    if AYON_SERVER_ENABLED else cp[1].get("type", "VALUE"))
 
-    return aov_list, custom_passes
+    return list(aov_list), custom_passes
 
 
-def set_node_tree(output_path, name, aov_sep, ext, multilayer):
+def _create_aov_slot(name, aov_sep, slots, rpass_name, multi_exr, output_path):
+    filename = f"{name}{aov_sep}{rpass_name}.####"
+    slot = slots.new(rpass_name if multi_exr else filename)
+    filepath = str(output_path / filename.lstrip("/"))
+
+    return slot, filepath
+
+
+def set_node_tree(
+    output_path, render_product, name, aov_sep, ext, multilayer, compositing
+):
     # Set the scene to use the compositor node tree to render
     bpy.context.scene.use_nodes = True
 
     tree = bpy.context.scene.node_tree
 
-    # Get the Render Layers node
-    rl_node = None
+    comp_layer_type = "CompositorNodeRLayers"
+    output_type = "CompositorNodeOutputFile"
+    compositor_type = "CompositorNodeComposite"
+
+    # Get the Render Layer, Composite and the previous output nodes
+    render_layer_node = None
+    composite_node = None
+    old_output_node = None
     for node in tree.nodes:
-        if node.bl_idname == "CompositorNodeRLayers":
-            rl_node = node
+        if node.bl_idname == comp_layer_type:
+            render_layer_node = node
+        elif node.bl_idname == compositor_type:
+            composite_node = node
+        elif node.bl_idname == output_type and "AYON" in node.name:
+            old_output_node = node
+        if render_layer_node and composite_node and old_output_node:
             break
 
     # If there's not a Render Layers node, we create it
-    if not rl_node:
-        rl_node = tree.nodes.new("CompositorNodeRLayers")
+    if not render_layer_node:
+        render_layer_node = tree.nodes.new(comp_layer_type)
 
     # Get the enabled output sockets, that are the active passes for the
     # render.

@@ -158,48 +230,81 @@ def set_node_tree(output_path, name, aov_sep, ext, multilayer):
     exclude_sockets = ["Image", "Alpha", "Noisy Image"]
     passes = [
         socket
-        for socket in rl_node.outputs
+        for socket in render_layer_node.outputs
         if socket.enabled and socket.name not in exclude_sockets
     ]
 
-    # Remove all output nodes
-    for node in tree.nodes:
-        if node.bl_idname == "CompositorNodeOutputFile":
-            tree.nodes.remove(node)
-
     # Create a new output node
-    output = tree.nodes.new("CompositorNodeOutputFile")
+    output = tree.nodes.new(output_type)
 
     image_settings = bpy.context.scene.render.image_settings
     output.format.file_format = image_settings.file_format
 
-    slots = None
-
-    # In case of a multilayer exr, we don't need to use the output node,
-    # because the blender render already outputs a multilayer exr.
-    if ext == "exr" and multilayer:
-        output.layer_slots.clear()
-        return []
+    multi_exr = ext == "exr" and multilayer
+    slots = output.layer_slots if multi_exr else output.file_slots
+    output.base_path = render_product if multi_exr else str(output_path)
 
-    output.file_slots.clear()
-    output.base_path = str(output_path)
+    slots.clear()
 
     aov_file_products = []
 
+    old_links = {
+        link.from_socket.name: link for link in tree.links
+        if link.to_node == old_output_node}
+
+    # Create a new socket for the beauty output
+    pass_name = "rgba" if multi_exr else "beauty"
+    slot, _ = _create_aov_slot(
+        name, aov_sep, slots, pass_name, multi_exr, output_path)
+    tree.links.new(render_layer_node.outputs["Image"], slot)
+
+    if compositing:
+        # Create a new socket for the composite output
+        pass_name = "composite"
+        comp_socket, filepath = _create_aov_slot(
+            name, aov_sep, slots, pass_name, multi_exr, output_path)
+        aov_file_products.append(("Composite", filepath))
+
     # For each active render pass, we add a new socket to the output node
     # and link it
-    for render_pass in passes:
-        filepath = f"{name}{aov_sep}{render_pass.name}.####"
+    for rpass in passes:
+        slot, filepath = _create_aov_slot(
+            name, aov_sep, slots, rpass.name, multi_exr, output_path)
+        aov_file_products.append((rpass.name, filepath))
 
-        output.file_slots.new(filepath)
+        # If the rpass was not connected with the old output node, we connect
+        # it with the new one.
+        if not old_links.get(rpass.name):
+            tree.links.new(rpass, slot)
 
-        filename = str(output_path / filepath.lstrip("/"))
+    for link in list(old_links.values()):
+        # Check if the socket is still available in the new output node.
+        socket = output.inputs.get(link.to_socket.name)
+        # If it is, we connect it with the new output node.
+        if socket:
+            tree.links.new(link.from_socket, socket)
+        # Then, we remove the old link.
+        tree.links.remove(link)
 
-        aov_file_products.append((render_pass.name, filename))
+    # If there's a composite node, we connect its input with the new output
+    if compositing and composite_node:
+        for link in tree.links:
+            if link.to_node == composite_node:
+                tree.links.new(link.from_socket, comp_socket)
+                break
 
-        node_input = output.inputs[-1]
+    if old_output_node:
+        output.location = old_output_node.location
+        tree.nodes.remove(old_output_node)
 
-        tree.links.new(render_pass, node_input)
+    output.name = "AYON File Output"
+    output.label = "AYON File Output"
 
-    return aov_file_products
+    return [] if multi_exr else aov_file_products
 
 
 def imprint_render_settings(node, data):

@@ -228,17 +333,23 @@ def prepare_rendering(asset_group):
     aov_sep = get_aov_separator(settings)
     ext = get_image_format(settings)
     multilayer = get_multilayer(settings)
+    renderer = get_renderer(settings)
+    compositing = get_compositing(settings)
 
     set_render_format(ext, multilayer)
-    aov_list, custom_passes = set_render_passes(settings)
+    bpy.context.scene.render.engine = renderer
+    aov_list, custom_passes = set_render_passes(settings, renderer)
 
     output_path = Path.joinpath(dirpath, render_folder, file_name)
 
     render_product = get_render_product(output_path, name, aov_sep)
     aov_file_product = set_node_tree(
-        output_path, name, aov_sep, ext, multilayer)
+        output_path, render_product, name, aov_sep,
+        ext, multilayer, compositing)
 
-    bpy.context.scene.render.filepath = render_product
+    # Clear the render filepath, so that the output is handled only by the
+    # output node in the compositor.
+    bpy.context.scene.render.filepath = ""
 
     render_settings = {
         "render_folder": render_folder,
@@ -1,8 +1,10 @@
 """Create render."""
 import bpy
 
+from openpype.lib import version_up
 from openpype.hosts.blender.api import plugin
 from openpype.hosts.blender.api.render_lib import prepare_rendering
+from openpype.hosts.blender.api.workio import save_file
 
 
 class CreateRenderlayer(plugin.BaseCreator):

@@ -37,6 +39,7 @@ class CreateRenderlayer(plugin.BaseCreator):
         # settings. Even the validator to check that the file is saved will
         # detect the file as saved, even if it isn't. The only solution for
         # now is to force the file to be saved.
-        bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath)
+        filepath = version_up(bpy.data.filepath)
+        save_file(filepath, copy=False)
 
         return collection
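The switch above from a plain save to `version_up` plus `save_file` means creating the render layer now bumps the workfile version instead of overwriting the current file. A rough sketch of the intended behaviour, using a hypothetical file name:

```python
from openpype.lib import version_up
from openpype.hosts.blender.api.workio import save_file

# Hypothetical path: version_up is expected to return the same path with
# the version token incremented, e.g. "..._v001.blend" -> "..._v002.blend".
filepath = version_up("/proj/work/sh010_lighting_v001.blend")
save_file(filepath, copy=False)  # save the scene under the bumped name
```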
@@ -61,5 +61,10 @@ class BlendAnimationLoader(plugin.AssetLoader):
 
         bpy.data.objects.remove(container)
 
-        library = bpy.data.libraries.get(bpy.path.basename(libpath))
+        filename = bpy.path.basename(libpath)
+        # Blender has a limit of 63 characters for any data name.
+        # If the filename is longer, it will be truncated.
+        if len(filename) > 63:
+            filename = filename[:63]
+        library = bpy.data.libraries.get(filename)
         bpy.data.libraries.remove(library)

@@ -67,7 +67,8 @@ class AudioLoader(plugin.AssetLoader):
         oc = bpy.context.copy()
         oc["area"] = window_manager.windows[-1].screen.areas[0]
 
-        bpy.ops.sequencer.sound_strip_add(oc, filepath=libpath, frame_start=1)
+        with bpy.context.temp_override(**oc):
+            bpy.ops.sequencer.sound_strip_add(filepath=libpath, frame_start=1)
 
         window_manager.windows[-1].screen.areas[0].type = old_type

@@ -156,17 +157,18 @@ class AudioLoader(plugin.AssetLoader):
         oc = bpy.context.copy()
         oc["area"] = window_manager.windows[-1].screen.areas[0]
 
-        # We deselect all sequencer strips, and then select the one we
-        # need to remove.
-        bpy.ops.sequencer.select_all(oc, action='DESELECT')
-        scene = bpy.context.scene
-        scene.sequence_editor.sequences_all[old_audio].select = True
+        with bpy.context.temp_override(**oc):
+            # We deselect all sequencer strips, and then select the one we
+            # need to remove.
+            bpy.ops.sequencer.select_all(action='DESELECT')
+            scene = bpy.context.scene
+            scene.sequence_editor.sequences_all[old_audio].select = True
 
-        bpy.ops.sequencer.delete(oc)
-        bpy.data.sounds.remove(bpy.data.sounds[old_audio])
+            bpy.ops.sequencer.delete()
+            bpy.data.sounds.remove(bpy.data.sounds[old_audio])
 
-        bpy.ops.sequencer.sound_strip_add(
-            oc, filepath=str(libpath), frame_start=1)
+            bpy.ops.sequencer.sound_strip_add(
+                filepath=str(libpath), frame_start=1)
 
         window_manager.windows[-1].screen.areas[0].type = old_type

@@ -205,12 +207,13 @@ class AudioLoader(plugin.AssetLoader):
         oc = bpy.context.copy()
         oc["area"] = window_manager.windows[-1].screen.areas[0]
 
-        # We deselect all sequencer strips, and then select the one we
-        # need to remove.
-        bpy.ops.sequencer.select_all(oc, action='DESELECT')
-        bpy.context.scene.sequence_editor.sequences_all[audio].select = True
-
-        bpy.ops.sequencer.delete(oc)
+        with bpy.context.temp_override(**oc):
+            # We deselect all sequencer strips, and then select the one we
+            # need to remove.
+            bpy.ops.sequencer.select_all(action='DESELECT')
+            scene = bpy.context.scene
+            scene.sequence_editor.sequences_all[audio].select = True
+            bpy.ops.sequencer.delete()
 
         window_manager.windows[-1].screen.areas[0].type = old_type

@@ -102,11 +102,15 @@ class BlendLoader(plugin.AssetLoader):
 
         # Link all the container children to the collection
         for obj in container.children_recursive:
-            print(obj)
             bpy.context.scene.collection.objects.link(obj)
 
         # Remove the library from the blend file
-        library = bpy.data.libraries.get(bpy.path.basename(libpath))
+        filepath = bpy.path.basename(libpath)
+        # Blender has a limit of 63 characters for any data name.
+        # If the filepath is longer, it will be truncated.
+        if len(filepath) > 63:
+            filepath = filepath[:63]
+        library = bpy.data.libraries.get(filepath)
         bpy.data.libraries.remove(library)
 
         return container, members

@@ -189,8 +193,20 @@ class BlendLoader(plugin.AssetLoader):
 
         transform = asset_group.matrix_basis.copy()
         old_data = dict(asset_group.get(AVALON_PROPERTY))
+        old_members = old_data.get("members", [])
         parent = asset_group.parent
 
+        actions = {}
+        objects_with_anim = [
+            obj for obj in asset_group.children_recursive
+            if obj.animation_data]
+        for obj in objects_with_anim:
+            # Check if the object has an action and, if so, add it to a dict
+            # so we can restore it later. Save and restore the action only
+            # if it wasn't originally loaded from the current asset.
+            if obj.animation_data.action not in old_members:
+                actions[obj.name] = obj.animation_data.action
+
         self.exec_remove(container)
 
         asset_group, members = self._process_data(libpath, group_name)

@@ -201,6 +217,13 @@ class BlendLoader(plugin.AssetLoader):
         asset_group.matrix_basis = transform
         asset_group.parent = parent
 
+        # Restore the actions
+        for obj in asset_group.children_recursive:
+            if obj.name in actions:
+                if not obj.animation_data:
+                    obj.animation_data_create()
+                obj.animation_data.action = actions[obj.name]
+
         # Restore the old data, but reset members, as they don't exist anymore
         # This avoids a crash, because the memory addresses of those members
         # are not valid anymore

@@ -60,7 +60,12 @@ class BlendSceneLoader(plugin.AssetLoader):
         bpy.context.scene.collection.children.link(container)
 
         # Remove the library from the blend file
-        library = bpy.data.libraries.get(bpy.path.basename(libpath))
+        filepath = bpy.path.basename(libpath)
+        # Blender has a limit of 63 characters for any data name.
+        # If the filepath is longer, it will be truncated.
+        if len(filepath) > 63:
+            filepath = filepath[:63]
+        library = bpy.data.libraries.get(filepath)
         bpy.data.libraries.remove(library)
 
         return container, members
@@ -55,13 +55,13 @@ class ExtractAnimationABC(
         context = plugin.create_blender_context(
             active=asset_group, selected=selected)
 
-        # We export the abc
-        bpy.ops.wm.alembic_export(
-            context,
-            filepath=filepath,
-            selected=True,
-            flatten=False
-        )
+        with bpy.context.temp_override(**context):
+            # We export the abc
+            bpy.ops.wm.alembic_export(
+                filepath=filepath,
+                selected=True,
+                flatten=False
+            )
 
         plugin.deselect_all()

@@ -50,19 +50,19 @@ class ExtractCamera(publish.Extractor, publish.OptionalPyblishPluginMixin):
         scale_length = bpy.context.scene.unit_settings.scale_length
         bpy.context.scene.unit_settings.scale_length = 0.01
 
-        # We export the fbx
-        bpy.ops.export_scene.fbx(
-            context,
-            filepath=filepath,
-            use_active_collection=False,
-            use_selection=True,
-            bake_anim_use_nla_strips=False,
-            bake_anim_use_all_actions=False,
-            add_leaf_bones=False,
-            armature_nodetype='ROOT',
-            object_types={'CAMERA'},
-            bake_anim_simplify_factor=0.0
-        )
+        with bpy.context.temp_override(**context):
+            # We export the fbx
+            bpy.ops.export_scene.fbx(
+                filepath=filepath,
+                use_active_collection=False,
+                use_selection=True,
+                bake_anim_use_nla_strips=False,
+                bake_anim_use_all_actions=False,
+                add_leaf_bones=False,
+                armature_nodetype='ROOT',
+                object_types={'CAMERA'},
+                bake_anim_simplify_factor=0.0
+            )
 
         bpy.context.scene.unit_settings.scale_length = scale_length

@@ -57,15 +57,15 @@ class ExtractFBX(publish.Extractor, publish.OptionalPyblishPluginMixin):
         scale_length = bpy.context.scene.unit_settings.scale_length
         bpy.context.scene.unit_settings.scale_length = 0.01
 
-        # We export the fbx
-        bpy.ops.export_scene.fbx(
-            context,
-            filepath=filepath,
-            use_active_collection=False,
-            use_selection=True,
-            mesh_smooth_type='FACE',
-            add_leaf_bones=False
-        )
+        with bpy.context.temp_override(**context):
+            # We export the fbx
+            bpy.ops.export_scene.fbx(
+                filepath=filepath,
+                use_active_collection=False,
+                use_selection=True,
+                mesh_smooth_type='FACE',
+                add_leaf_bones=False
+            )
 
         bpy.context.scene.unit_settings.scale_length = scale_length

@@ -153,17 +153,20 @@ class ExtractAnimationFBX(
 
         override = plugin.create_blender_context(
             active=root, selected=[root, armature])
-        bpy.ops.export_scene.fbx(
-            override,
-            filepath=filepath,
-            use_active_collection=False,
-            use_selection=True,
-            bake_anim_use_nla_strips=False,
-            bake_anim_use_all_actions=False,
-            add_leaf_bones=False,
-            armature_nodetype='ROOT',
-            object_types={'EMPTY', 'ARMATURE'}
-        )
+        with bpy.context.temp_override(**override):
+            # We export the fbx
+            bpy.ops.export_scene.fbx(
+                filepath=filepath,
+                use_active_collection=False,
+                use_selection=True,
+                bake_anim_use_nla_strips=False,
+                bake_anim_use_all_actions=False,
+                add_leaf_bones=False,
+                armature_nodetype='ROOT',
+                object_types={'EMPTY', 'ARMATURE'}
+            )
 
         armature.name = armature_name
         asset_group.name = asset_group_name
         root.select_set(True)

@@ -80,17 +80,18 @@ class ExtractLayout(publish.Extractor, publish.OptionalPyblishPluginMixin):
 
             override = plugin.create_blender_context(
                 active=asset, selected=[asset, obj])
-            bpy.ops.export_scene.fbx(
-                override,
-                filepath=filepath,
-                use_active_collection=False,
-                use_selection=True,
-                bake_anim_use_nla_strips=False,
-                bake_anim_use_all_actions=False,
-                add_leaf_bones=False,
-                armature_nodetype='ROOT',
-                object_types={'EMPTY', 'ARMATURE'}
-            )
+            with bpy.context.temp_override(**override):
+                # We export the fbx
+                bpy.ops.export_scene.fbx(
+                    filepath=filepath,
+                    use_active_collection=False,
+                    use_selection=True,
+                    bake_anim_use_nla_strips=False,
+                    bake_anim_use_all_actions=False,
+                    add_leaf_bones=False,
+                    armature_nodetype='ROOT',
+                    object_types={'EMPTY', 'ARMATURE'}
+                )
             obj.name = armature_name
             asset.name = asset_group_name
             asset.select_set(False)

@@ -28,15 +28,27 @@ class ValidateDeadlinePublish(pyblish.api.InstancePlugin,
     def process(self, instance):
         if not self.is_active(instance.data):
             return
 
+        tree = bpy.context.scene.node_tree
+        output_type = "CompositorNodeOutputFile"
+        output_node = None
+        # Get the output node that includes "AYON" in the name.
+        # There should be only one.
+        for node in tree.nodes:
+            if node.bl_idname == output_type and "AYON" in node.name:
+                output_node = node
+                break
+        if not output_node:
+            raise PublishValidationError(
+                "No output node found in the compositor tree."
+            )
         filepath = bpy.data.filepath
         file = os.path.basename(filepath)
         filename, ext = os.path.splitext(file)
-        if filename not in bpy.context.scene.render.filepath:
+        if filename not in output_node.base_path:
             raise PublishValidationError(
-                "Render output folder "
-                "doesn't match the blender scene name! "
-                "Use Repair action to "
-                "fix the folder file path."
+                "Render output folder doesn't match the blender scene name! "
+                "Use Repair action to fix the folder file path."
             )
 
     @classmethod
@@ -15,6 +15,7 @@ from openpype.hosts.fusion.api.lib import (
 )
 from openpype.pipeline import get_current_asset_name
 from openpype.resources import get_openpype_icon_filepath
+from openpype.tools.utils import get_qt_app
 
 from .pipeline import FusionEventHandler
 from .pulse import FusionPulse

@@ -174,7 +175,8 @@ class OpenPypeMenu(QtWidgets.QWidget):
 
 
 def launch_openpype_menu():
-    app = QtWidgets.QApplication(sys.argv)
+    app = get_qt_app()
 
     pype_menu = OpenPypeMenu()
221 openpype/hosts/fusion/api/plugin.py (new file)

@@ -0,0 +1,221 @@
from copy import deepcopy
import os

from openpype.hosts.fusion.api import (
    get_current_comp,
    comp_lock_and_undo_chunk,
)

from openpype.lib import (
    BoolDef,
    EnumDef,
)
from openpype.pipeline import (
    legacy_io,
    Creator,
    CreatedInstance
)


class GenericCreateSaver(Creator):
    default_variants = ["Main", "Mask"]
    description = "Fusion Saver to generate image sequence"
    icon = "fa5.eye"

    instance_attributes = [
        "reviewable"
    ]

    settings_category = "fusion"

    image_format = "exr"

    # TODO: This should be renamed together with Nuke so it is aligned
    temp_rendering_path_template = (
        "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}")

    def create(self, subset_name, instance_data, pre_create_data):
        self.pass_pre_attributes_to_instance(instance_data, pre_create_data)

        instance = CreatedInstance(
            family=self.family,
            subset_name=subset_name,
            data=instance_data,
            creator=self,
        )
        data = instance.data_to_store()
        comp = get_current_comp()
        with comp_lock_and_undo_chunk(comp):
            args = (-32768, -32768)  # Magical position numbers
            saver = comp.AddTool("Saver", *args)

            self._update_tool_with_data(saver, data=data)

            # Register the CreatedInstance
            self._imprint(saver, data)

            # Insert the transient data
            instance.transient_data["tool"] = saver

            self._add_instance_to_context(instance)

        return instance

    def collect_instances(self):
        comp = get_current_comp()
        tools = comp.GetToolList(False, "Saver").values()
        for tool in tools:
            data = self.get_managed_tool_data(tool)
            if not data:
                continue

            # Add instance
            created_instance = CreatedInstance.from_existing(data, self)

            # Collect transient data
            created_instance.transient_data["tool"] = tool

            self._add_instance_to_context(created_instance)

    def update_instances(self, update_list):
        for created_inst, _changes in update_list:
            new_data = created_inst.data_to_store()
            tool = created_inst.transient_data["tool"]
            self._update_tool_with_data(tool, new_data)
            self._imprint(tool, new_data)

    def remove_instances(self, instances):
        for instance in instances:
            # Remove the tool from the scene
            tool = instance.transient_data["tool"]
            if tool:
                tool.Delete()

            # Remove the collected CreatedInstance to remove from UI directly
            self._remove_instance_from_context(instance)

    def _imprint(self, tool, data):
        # Save all data as "openpype.{key}" = value entries

        # Instance id is the tool's name so we don't need to imprint as data
        data.pop("instance_id", None)

        active = data.pop("active", None)
        if active is not None:
            # Use active value to set the passthrough state
            tool.SetAttrs({"TOOLB_PassThrough": not active})

        for key, value in data.items():
            tool.SetData(f"openpype.{key}", value)

    def _update_tool_with_data(self, tool, data):
        """Update tool node name and output path based on subset data"""
        if "subset" not in data:
            return

        original_subset = tool.GetData("openpype.subset")
        original_format = tool.GetData(
            "openpype.creator_attributes.image_format"
        )

        subset = data["subset"]
        if (
            original_subset != subset
            or original_format != data["creator_attributes"]["image_format"]
        ):
            self._configure_saver_tool(data, tool, subset)

    def _configure_saver_tool(self, data, tool, subset):
        formatting_data = deepcopy(data)

        # get frame padding from anatomy templates
        frame_padding = self.project_anatomy.templates["frame_padding"]

        # get output format
        ext = data["creator_attributes"]["image_format"]

        # Subset change detected
        workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
        formatting_data.update({
            "workdir": workdir,
            "frame": "0" * frame_padding,
            "ext": ext,
            "product": {
                "name": formatting_data["subset"],
                "type": formatting_data["family"],
            },
        })

        # build file path to render
        filepath = self.temp_rendering_path_template.format(**formatting_data)

        comp = get_current_comp()
        tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath))

        # Rename tool
        if tool.Name != subset:
            print(f"Renaming {tool.Name} -> {subset}")
            tool.SetAttrs({"TOOLS_Name": subset})

    def get_managed_tool_data(self, tool):
        """Return data of the tool if it matches creator identifier"""
        data = tool.GetData("openpype")
        if not isinstance(data, dict):
            return

        required = {
            "id": "pyblish.avalon.instance",
            "creator_identifier": self.identifier,
        }
        for key, value in required.items():
            if key not in data or data[key] != value:
                return

        # Get active state from the actual tool state
        attrs = tool.GetAttrs()
        passthrough = attrs["TOOLB_PassThrough"]
        data["active"] = not passthrough

        # Override publisher's UUID generation because tool names are
        # already unique in Fusion in a comp
        data["instance_id"] = tool.Name

        return data

    def get_instance_attr_defs(self):
        """Settings for publish page"""
        return self.get_pre_create_attr_defs()

    def pass_pre_attributes_to_instance(self, instance_data, pre_create_data):
        creator_attrs = instance_data["creator_attributes"] = {}
        for pass_key in pre_create_data.keys():
            creator_attrs[pass_key] = pre_create_data[pass_key]

    def _get_render_target_enum(self):
        rendering_targets = {
            "local": "Local machine rendering",
            "frames": "Use existing frames",
        }
        if "farm_rendering" in self.instance_attributes:
            rendering_targets["farm"] = "Farm rendering"

        return EnumDef(
            "render_target", items=rendering_targets, label="Render target"
        )

    def _get_reviewable_bool(self):
        return BoolDef(
            "review",
            default=("reviewable" in self.instance_attributes),
            label="Review",
        )

    def _get_image_format_enum(self):
        image_format_options = ["exr", "tga", "tif", "png", "jpg"]
        return EnumDef(
            "image_format",
            items=image_format_options,
            default=self.image_format,
            label="Output Image Format",
        )
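The saver's output path comes from formatting `temp_rendering_path_template` with the data assembled in `_configure_saver_tool`. A rough sketch of the expected expansion, using hypothetical values for every field:

```python
# All values below are hypothetical, for illustration only.
template = "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}"
filepath = template.format(
    workdir="C:/projects/demo/work/sh010/comp",
    subset="renderMain",
    frame="0000",  # zero-padded to the anatomy's frame_padding
    ext="exr",
)
# -> ".../work/sh010/comp/renders/fusion/renderMain/renderMain.0000.exr"
```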
64 openpype/hosts/fusion/plugins/create/create_image_saver.py (new file)

@@ -0,0 +1,64 @@
from openpype.lib import NumberDef

from openpype.hosts.fusion.api.plugin import GenericCreateSaver
from openpype.hosts.fusion.api import get_current_comp


class CreateImageSaver(GenericCreateSaver):
    """Fusion Saver to generate a single image.

    Created to explicitly separate single frame ('image') and
    multi frame ('render') outputs.

    This might be a temporary creator until 'alias' functionality is
    implemented to limit creation of additional product types with similar,
    but not the same, workflows.
    """
    identifier = "io.openpype.creators.fusion.imagesaver"
    label = "Image (saver)"
    name = "image"
    family = "image"
    description = "Fusion Saver to generate image"

    default_frame = 0

    def get_detail_description(self):
        return """Fusion Saver to generate single image.

    This creator is expected for publishing of single frame `image` product
    type.

    Artist should provide frame number (integer) to specify which frame
    should be published. It must be inside of global timeline frame range.

    Supports local and deadline rendering.

    Supports selection from predefined set of output file extensions:
    - exr
    - tga
    - png
    - tif
    - jpg

    Created to explicitly separate single frame ('image') or
    multi frame ('render') outputs.
    """

    def get_pre_create_attr_defs(self):
        """Settings for create page"""
        attr_defs = [
            self._get_render_target_enum(),
            self._get_reviewable_bool(),
            self._get_frame_int(),
            self._get_image_format_enum(),
        ]
        return attr_defs

    def _get_frame_int(self):
        return NumberDef(
            "frame",
            default=self.default_frame,
            label="Frame",
            tooltip="Set frame to be rendered, must be inside of global "
                    "timeline range"
        )
@@ -1,187 +1,42 @@
-from copy import deepcopy
-import os
+from openpype.lib import EnumDef
 
-from openpype.hosts.fusion.api import (
-    get_current_comp,
-    comp_lock_and_undo_chunk,
-)
-
-from openpype.lib import (
-    BoolDef,
-    EnumDef,
-)
-from openpype.pipeline import (
-    legacy_io,
-    Creator as NewCreator,
-    CreatedInstance,
-    Anatomy,
-)
+from openpype.hosts.fusion.api.plugin import GenericCreateSaver
 
 
-class CreateSaver(NewCreator):
+class CreateSaver(GenericCreateSaver):
+    """Fusion Saver to generate image sequence of 'render' product type.
+
+    Original Saver creator targeted for 'render' product type. It keeps the
+    original, not too descriptive, name because of values in Settings.
+    """
     identifier = "io.openpype.creators.fusion.saver"
     label = "Render (saver)"
     name = "render"
     family = "render"
-    default_variants = ["Main", "Mask"]
-    description = "Fusion Saver to generate image sequence"
-    icon = "fa5.eye"
-
-    instance_attributes = ["reviewable"]
-    image_format = "exr"
+    default_frame_range_option = "asset_db"
 
-    # TODO: This should be renamed together with Nuke so it is aligned
-    temp_rendering_path_template = (
-        "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}"
-    )
+    def get_detail_description(self):
+        return """Fusion Saver to generate image sequence.
 
-    def create(self, subset_name, instance_data, pre_create_data):
-        self.pass_pre_attributes_to_instance(instance_data, pre_create_data)
+    This creator is expected for publishing of image sequences for 'render'
+    product type. (But can publish even single frame 'render'.)
 
-        instance_data.update(
-            {"id": "pyblish.avalon.instance", "subset": subset_name}
-        )
+    Select what should be the source of the render range:
+    - "Current asset context" - values set on the Asset in the DB (Ftrack)
+    - "From render in/out" - from the node itself
+    - "From composition timeline" - from the timeline
 
-        comp = get_current_comp()
-        with comp_lock_and_undo_chunk(comp):
-            args = (-32768, -32768)  # Magical position numbers
-            saver = comp.AddTool("Saver", *args)
+    Supports local and farm rendering.
 
-            self._update_tool_with_data(saver, data=instance_data)
-
-            # Register the CreatedInstance
-            instance = CreatedInstance(
-                family=self.family,
-                subset_name=subset_name,
-                data=instance_data,
-                creator=self,
-            )
-            data = instance.data_to_store()
-            self._imprint(saver, data)
-
-            # Insert the transient data
-            instance.transient_data["tool"] = saver
-
-            self._add_instance_to_context(instance)
-
-        return instance
-
-    def collect_instances(self):
-        comp = get_current_comp()
-        tools = comp.GetToolList(False, "Saver").values()
-        for tool in tools:
-            data = self.get_managed_tool_data(tool)
-            if not data:
-                continue
-
-            # Add instance
-            created_instance = CreatedInstance.from_existing(data, self)
-
-            # Collect transient data
-            created_instance.transient_data["tool"] = tool
-
-            self._add_instance_to_context(created_instance)
-
-    def update_instances(self, update_list):
-        for created_inst, _changes in update_list:
-            new_data = created_inst.data_to_store()
-            tool = created_inst.transient_data["tool"]
-            self._update_tool_with_data(tool, new_data)
-            self._imprint(tool, new_data)
-
-    def remove_instances(self, instances):
-        for instance in instances:
-            # Remove the tool from the scene
-
-            tool = instance.transient_data["tool"]
-            if tool:
-                tool.Delete()
-
-            # Remove the collected CreatedInstance to remove from UI directly
-            self._remove_instance_from_context(instance)
-
-    def _imprint(self, tool, data):
-        # Save all data as "openpype.{key}" = value entries
-
-        # Instance id is the tool's name so we don't need to imprint as data
-        data.pop("instance_id", None)
-
-        active = data.pop("active", None)
-        if active is not None:
-            # Use active value to set the passthrough state
-            tool.SetAttrs({"TOOLB_PassThrough": not active})
-
-        for key, value in data.items():
-            tool.SetData(f"openpype.{key}", value)
-
-    def _update_tool_with_data(self, tool, data):
-        """Update tool node name and output path based on subset data"""
-        if "subset" not in data:
-            return
-
-        original_subset = tool.GetData("openpype.subset")
-        original_format = tool.GetData(
-            "openpype.creator_attributes.image_format"
-        )
-
-        subset = data["subset"]
-        if (
-            original_subset != subset
-            or original_format != data["creator_attributes"]["image_format"]
-        ):
-            self._configure_saver_tool(data, tool, subset)
-
-    def _configure_saver_tool(self, data, tool, subset):
-        formatting_data = deepcopy(data)
-
-        # get frame padding from anatomy templates
-        anatomy = Anatomy()
-        frame_padding = anatomy.templates["frame_padding"]
-
-        # get output format
-        ext = data["creator_attributes"]["image_format"]
-
-        # Subset change detected
-        workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
-        formatting_data.update(
-            {"workdir": workdir, "frame": "0" * frame_padding, "ext": ext}
-        )
-
-        # build file path to render
-        filepath = self.temp_rendering_path_template.format(**formatting_data)
-
-        comp = get_current_comp()
-        tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath))
-
-        # Rename tool
-        if tool.Name != subset:
-            print(f"Renaming {tool.Name} -> {subset}")
-            tool.SetAttrs({"TOOLS_Name": subset})
-
-    def get_managed_tool_data(self, tool):
-        """Return data of the tool if it matches creator identifier"""
-        data = tool.GetData("openpype")
-        if not isinstance(data, dict):
-            return
-
-        required = {
-            "id": "pyblish.avalon.instance",
-            "creator_identifier": self.identifier,
-        }
-        for key, value in required.items():
-            if key not in data or data[key] != value:
-                return
-
-        # Get active state from the actual tool state
-        attrs = tool.GetAttrs()
-        passthrough = attrs["TOOLB_PassThrough"]
-        data["active"] = not passthrough
-
-        # Override publisher's UUID generation because tool names are
-        # already unique in Fusion in a comp
-        data["instance_id"] = tool.Name
-
-        return data
+    Supports selection from predefined set of output file extensions:
+    - exr
+    - tga
+    - png
+    - tif
+    - jpg
+    """
 
     def get_pre_create_attr_defs(self):
         """Settings for create page"""

@@ -193,29 +48,6 @@ class CreateSaver(NewCreator):
         ]
         return attr_defs
 
-    def get_instance_attr_defs(self):
-        """Settings for publish page"""
-        return self.get_pre_create_attr_defs()
-
-    def pass_pre_attributes_to_instance(self, instance_data, pre_create_data):
-        creator_attrs = instance_data["creator_attributes"] = {}
-        for pass_key in pre_create_data.keys():
-            creator_attrs[pass_key] = pre_create_data[pass_key]
-
-    # These functions below should be moved to another file
-    # so they can be used by other plugins. plugin.py ?
-    def _get_render_target_enum(self):
-        rendering_targets = {
-            "local": "Local machine rendering",
-            "frames": "Use existing frames",
-        }
-        if "farm_rendering" in self.instance_attributes:
-            rendering_targets["farm"] = "Farm rendering"
-
-        return EnumDef(
-            "render_target", items=rendering_targets, label="Render target"
-        )
-
     def _get_frame_range_enum(self):
         frame_range_options = {
             "asset_db": "Current asset context",

@@ -227,42 +59,5 @@ class CreateSaver(NewCreator):
             "frame_range_source",
             items=frame_range_options,
             label="Frame range source",
-        )
-
-    def _get_reviewable_bool(self):
-        return BoolDef(
-            "review",
-            default=("reviewable" in self.instance_attributes),
-            label="Review",
-        )
-
-    def _get_image_format_enum(self):
-        image_format_options = ["exr", "tga", "tif", "png", "jpg"]
-        return EnumDef(
-            "image_format",
-            items=image_format_options,
-            default=self.image_format,
-            label="Output Image Format",
-        )
-
-    def apply_settings(self, project_settings):
-        """Method called on initialization of plugin to apply settings."""
-
-        # plugin settings
-        plugin_settings = project_settings["fusion"]["create"][
-            self.__class__.__name__
-        ]
-
-        # individual attributes
-        self.instance_attributes = plugin_settings.get(
-            "instance_attributes", self.instance_attributes
-        )
-        self.default_variants = plugin_settings.get(
-            "default_variants", self.default_variants
-        )
-        self.temp_rendering_path_template = plugin_settings.get(
-            "temp_rendering_path_template", self.temp_rendering_path_template
-        )
-        self.image_format = plugin_settings.get(
-            "image_format", self.image_format
+            default=self.default_frame_range_option
         )
@ -95,7 +95,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
|
|||
label = "Collect Inputs"
|
||||
order = pyblish.api.CollectorOrder + 0.2
|
||||
hosts = ["fusion"]
|
||||
families = ["render"]
|
||||
families = ["render", "image"]
|
||||
|
||||
def process(self, instance):
|
||||
|
||||
|
|
|
|||
|
|
@ -57,6 +57,18 @@ class CollectInstanceData(pyblish.api.InstancePlugin):
|
|||
start_with_handle = comp_start
|
||||
end_with_handle = comp_end
|
||||
|
||||
frame = instance.data["creator_attributes"].get("frame")
|
||||
# explicitly publishing only single frame
|
||||
if frame is not None:
|
||||
frame = int(frame)
|
||||
|
||||
start = frame
|
||||
end = frame
|
||||
handle_start = 0
|
||||
handle_end = 0
|
||||
start_with_handle = frame
|
||||
end_with_handle = frame
|
||||
|
||||
# Include start and end render frame in label
|
||||
subset = instance.data["subset"]
|
||||
label = (
|
||||
|
|
|
|||
|
|
@ -50,7 +50,7 @@ class CollectFusionRender(
|
|||
continue
|
||||
|
||||
family = inst.data["family"]
|
||||
if family != "render":
|
||||
if family not in ["render", "image"]:
|
||||
continue
|
||||
|
||||
task_name = context.data["task"]
|
||||
|
|
@ -59,7 +59,7 @@ class CollectFusionRender(
|
|||
instance_families = inst.data.get("families", [])
|
||||
subset_name = inst.data["subset"]
|
||||
instance = FusionRenderInstance(
|
||||
family="render",
|
||||
family=family,
|
||||
tool=tool,
|
||||
workfileComp=comp,
|
||||
families=instance_families,
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@ class FusionSaveComp(pyblish.api.ContextPlugin):
|
|||
label = "Save current file"
|
||||
order = pyblish.api.ExtractorOrder - 0.49
|
||||
hosts = ["fusion"]
|
||||
families = ["render", "workfile"]
|
||||
families = ["render", "image", "workfile"]
|
||||
|
||||
def process(self, context):
|
||||
|
||||
|
|
|
|||
|
|
@@ -17,7 +17,7 @@ class ValidateBackgroundDepth(
    order = pyblish.api.ValidatorOrder
    label = "Validate Background Depth 32 bit"
    hosts = ["fusion"]
-    families = ["render"]
+    families = ["render", "image"]
    optional = True

    actions = [SelectInvalidAction, publish.RepairAction]
@@ -9,7 +9,7 @@ class ValidateFusionCompSaved(pyblish.api.ContextPlugin):

    order = pyblish.api.ValidatorOrder
    label = "Validate Comp Saved"
-    families = ["render"]
+    families = ["render", "image"]
    hosts = ["fusion"]

    def process(self, context):
@@ -15,7 +15,7 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):

    order = pyblish.api.ValidatorOrder
    label = "Validate Create Folder Checked"
-    families = ["render"]
+    families = ["render", "image"]
    hosts = ["fusion"]
    actions = [RepairAction, SelectInvalidAction]
@@ -17,7 +17,7 @@ class ValidateFilenameHasExtension(pyblish.api.InstancePlugin):

    order = pyblish.api.ValidatorOrder
    label = "Validate Filename Has Extension"
-    families = ["render"]
+    families = ["render", "image"]
    hosts = ["fusion"]
    actions = [SelectInvalidAction]
@@ -0,0 +1,27 @@
import pyblish.api

from openpype.pipeline import PublishValidationError


class ValidateImageFrame(pyblish.api.InstancePlugin):
    """Validates that `image` product type contains only single frame."""

    order = pyblish.api.ValidatorOrder
    label = "Validate Image Frame"
    families = ["image"]
    hosts = ["fusion"]

    def process(self, instance):
        render_start = instance.data["frameStartHandle"]
        render_end = instance.data["frameEndHandle"]
        too_many_frames = (isinstance(instance.data["expectedFiles"], list)
                           and len(instance.data["expectedFiles"]) > 1)

        if render_end - render_start > 0 or too_many_frames:
            desc = ("Trying to render multiple frames. 'image' product type "
                    "is meant for single frame. Please use 'render' creator.")
            raise PublishValidationError(
                title="Frame range outside of comp range",
                message=desc,
                description=desc
            )
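The single-frame rule above is easy to exercise outside Fusion. A minimal sketch, assuming pyblish-style instance data with the keys the collector fills in (`frameStartHandle`, `frameEndHandle`, `expectedFiles`):

    # Hypothetical instance data mirroring what the Fusion collector stores.
    instance_data = {
        "frameStartHandle": 1001,
        "frameEndHandle": 1001,
        "expectedFiles": ["render.1001.exr"],
    }

    def is_single_frame(data):
        # Same two checks as ValidateImageFrame: a zero-length frame range
        # and at most one expected output file.
        spans_one_frame = data["frameEndHandle"] - data["frameStartHandle"] == 0
        one_file = (not isinstance(data["expectedFiles"], list)
                    or len(data["expectedFiles"]) <= 1)
        return spans_one_frame and one_file

    assert is_single_frame(instance_data)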
@@ -7,8 +7,8 @@ class ValidateInstanceFrameRange(pyblish.api.InstancePlugin):
    """Validate instance frame range is within comp's global render range."""

    order = pyblish.api.ValidatorOrder
-    label = "Validate Filename Has Extension"
-    families = ["render"]
+    label = "Validate Frame Range"
+    families = ["render", "image"]
    hosts = ["fusion"]

    def process(self, instance):
@@ -13,7 +13,7 @@ class ValidateSaverHasInput(pyblish.api.InstancePlugin):

    order = pyblish.api.ValidatorOrder
    label = "Validate Saver Has Input"
-    families = ["render"]
+    families = ["render", "image"]
    hosts = ["fusion"]
    actions = [SelectInvalidAction]
@@ -9,7 +9,7 @@ class ValidateSaverPassthrough(pyblish.api.ContextPlugin):

    order = pyblish.api.ValidatorOrder
    label = "Validate Saver Passthrough"
-    families = ["render"]
+    families = ["render", "image"]
    hosts = ["fusion"]
    actions = [SelectInvalidAction]
@@ -8,55 +8,6 @@ from openpype.hosts.fusion.api.action import SelectInvalidAction
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk


-def get_tool_resolution(tool, frame):
-    """Return the 2D input resolution to a Fusion tool
-
-    If the current tool hasn't been rendered its input resolution
-    hasn't been saved. To combat this, add an expression in
-    the comments field to read the resolution
-
-    Args
-        tool (Fusion Tool): The tool to query input resolution
-        frame (int): The frame to query the resolution on.
-
-    Returns:
-        tuple: width, height as 2-tuple of integers
-
-    """
-    comp = tool.Composition
-
-    # False undo removes the undo-stack from the undo list
-    with comp_lock_and_undo_chunk(comp, "Read resolution", False):
-        # Save old comment
-        old_comment = ""
-        has_expression = False
-        if tool["Comments"][frame] != "":
-            if tool["Comments"].GetExpression() is not None:
-                has_expression = True
-                old_comment = tool["Comments"].GetExpression()
-                tool["Comments"].SetExpression(None)
-            else:
-                old_comment = tool["Comments"][frame]
-                tool["Comments"][frame] = ""
-
-        # Get input width
-        tool["Comments"].SetExpression("self.Input.OriginalWidth")
-        width = int(tool["Comments"][frame])
-
-        # Get input height
-        tool["Comments"].SetExpression("self.Input.OriginalHeight")
-        height = int(tool["Comments"][frame])
-
-        # Reset old comment
-        tool["Comments"].SetExpression(None)
-        if has_expression:
-            tool["Comments"].SetExpression(old_comment)
-        else:
-            tool["Comments"][frame] = old_comment
-
-        return width, height
-
-
class ValidateSaverResolution(
    pyblish.api.InstancePlugin, OptionalPyblishPluginMixin
):

@@ -64,7 +15,7 @@ class ValidateSaverResolution(

    order = pyblish.api.ValidatorOrder
    label = "Validate Asset Resolution"
-    families = ["render"]
+    families = ["render", "image"]
    hosts = ["fusion"]
    optional = True
    actions = [SelectInvalidAction]

@@ -87,19 +38,79 @@ class ValidateSaverResolution(

    @classmethod
    def get_invalid(cls, instance):
-        resolution = cls.get_resolution(instance)
-        saver = instance.data["tool"]
+        try:
+            resolution = cls.get_resolution(instance)
+        except PublishValidationError:
+            resolution = None
        expected_resolution = cls.get_expected_resolution(instance)
        if resolution != expected_resolution:
+            saver = instance.data["tool"]
            return [saver]

    @classmethod
    def get_resolution(cls, instance):
        saver = instance.data["tool"]
        first_frame = instance.data["frameStartHandle"]
-        return get_tool_resolution(saver, frame=first_frame)
+        return cls.get_tool_resolution(saver, frame=first_frame)

    @classmethod
    def get_expected_resolution(cls, instance):
        data = instance.data["assetEntity"]["data"]
        return data["resolutionWidth"], data["resolutionHeight"]

+    @classmethod
+    def get_tool_resolution(cls, tool, frame):
+        """Return the 2D input resolution to a Fusion tool
+
+        If the current tool hasn't been rendered its input resolution
+        hasn't been saved. To combat this, add an expression in
+        the comments field to read the resolution
+
+        Args
+            tool (Fusion Tool): The tool to query input resolution
+            frame (int): The frame to query the resolution on.
+
+        Returns:
+            tuple: width, height as 2-tuple of integers
+
+        """
+        comp = tool.Composition
+
+        # False undo removes the undo-stack from the undo list
+        with comp_lock_and_undo_chunk(comp, "Read resolution", False):
+            # Save old comment
+            old_comment = ""
+            has_expression = False
+
+            if tool["Comments"][frame] not in ["", None]:
+                if tool["Comments"].GetExpression() is not None:
+                    has_expression = True
+                    old_comment = tool["Comments"].GetExpression()
+                    tool["Comments"].SetExpression(None)
+                else:
+                    old_comment = tool["Comments"][frame]
+                    tool["Comments"][frame] = ""
+            # Get input width
+            tool["Comments"].SetExpression("self.Input.OriginalWidth")
+            if tool["Comments"][frame] is None:
+                raise PublishValidationError(
+                    "Cannot get resolution info for frame '{}'.\n\n "
+                    "Please check that saver has connected input.".format(
+                        frame
+                    )
+                )
+
+            width = int(tool["Comments"][frame])
+
+            # Get input height
+            tool["Comments"].SetExpression("self.Input.OriginalHeight")
+            height = int(tool["Comments"][frame])
+
+            # Reset old comment
+            tool["Comments"].SetExpression(None)
+            if has_expression:
+                tool["Comments"].SetExpression(old_comment)
+            else:
+                tool["Comments"][frame] = old_comment
+
+            return width, height
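The resolution probe above works by temporarily pointing the tool's Comments field at an expression and reading back the evaluated value. A condensed sketch of that save-and-restore pattern as a context manager, assuming a Fusion `tool` handle with the same `Comments` API used above (illustration only, not part of the plugin):

    import contextlib

    @contextlib.contextmanager
    def temporary_expression(tool, frame, expression):
        # Remember whatever is in Comments (plain value or expression)...
        old_expression = tool["Comments"].GetExpression()
        old_value = tool["Comments"][frame]
        tool["Comments"].SetExpression(expression)
        try:
            # ...expose the evaluated expression result to the caller...
            yield tool["Comments"][frame]
        finally:
            # ...and restore the original state afterwards.
            tool["Comments"].SetExpression(old_expression)
            if old_expression is None:
                tool["Comments"][frame] = old_value

Used as `with temporary_expression(saver, frame, "self.Input.OriginalWidth") as width: ...`, this keeps the restore logic in one place instead of repeating it per query.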
@@ -11,7 +11,7 @@ class ValidateUniqueSubsets(pyblish.api.ContextPlugin):

    order = pyblish.api.ValidatorOrder
    label = "Validate Unique Subsets"
-    families = ["render"]
+    families = ["render", "image"]
    hosts = ["fusion"]
    actions = [SelectInvalidAction]
@@ -9,6 +9,8 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
    label = "Collect Clip Effects Instances"
    families = ["clip"]

+    effect_categories = []
+
    def process(self, instance):
        family = "effect"
        effects = {}

@@ -70,29 +72,66 @@ class CollectClipEffects(pyblish.api.InstancePlugin):

        subset_split.insert(0, "effect")

-        name = "".join(subset_split)
+        # Need to convert to dict for AYON settings. This isinstance check can
+        # be removed in the future when OpenPype is no longer.
+        effect_categories = self.effect_categories
+        if isinstance(self.effect_categories, list):
+            effect_categories = {
+                x["name"]: x["effect_classes"] for x in self.effect_categories
+            }

-        # create new instance and inherit data
-        data = {}
-        for key, value in instance.data.items():
-            if "clipEffectItems" in key:
+        category_by_effect = {"": ""}
+        for key, values in effect_categories.items():
+            for cls in values:
+                category_by_effect[cls] = key
+
+        effects_categorized = {k: {} for k in effect_categories.keys()}
+        effects_categorized[""] = {}
+        for key, value in effects.items():
+            if key == "assignTo":
                continue
-            data[key] = value
-
-        # change names
-        data["subset"] = name
-        data["family"] = family
-        data["families"] = [family]
-        data["name"] = data["subset"] + "_" + data["asset"]
-        data["label"] = "{} - {}".format(
-            data['asset'], data["subset"]
-        )
-        data["effects"] = effects
+            # Some classes can have a number in them. Like Text2.
+            found_cls = ""
+            for cls in category_by_effect.keys():
+                if cls in value["class"]:
+                    found_cls = cls

-        # create new instance
-        _instance = instance.context.create_instance(**data)
-        self.log.info("Created instance `{}`".format(_instance))
-        self.log.debug("instance.data `{}`".format(_instance.data))
+            effects_categorized[category_by_effect[found_cls]][key] = value
+
+        categories = list(effects_categorized.keys())
+        for category in categories:
+            if not effects_categorized[category]:
+                effects_categorized.pop(category)
+                continue
+
+            effects_categorized[category]["assignTo"] = effects["assignTo"]
+
+        for category, effects in effects_categorized.items():
+            name = "".join(subset_split)
+            name += category.capitalize()
+
+            # create new instance and inherit data
+            data = {}
+            for key, value in instance.data.items():
+                if "clipEffectItems" in key:
+                    continue
+                data[key] = value
+
+            # change names
+            data["subset"] = name
+            data["family"] = family
+            data["families"] = [family]
+            data["name"] = data["subset"] + "_" + data["asset"]
+            data["label"] = "{} - {}".format(
+                data['asset'], data["subset"]
+            )
+            data["effects"] = effects
+
+            # create new instance
+            _instance = instance.context.create_instance(**data)
+            self.log.info("Created instance `{}`".format(_instance))
+            self.log.debug("instance.data `{}`".format(_instance.data))

    def test_overlap(self, effect_t_in, effect_t_out):
        covering_exp = bool(
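The category assignment above hinges on inverting the settings mapping. A small standalone sketch of that lookup, with a hypothetical settings dict (the class names are examples, not actual project settings):

    # Settings map category -> effect classes; assignment needs the reverse.
    effect_categories = {
        "color": ["Grade", "ColorCorrect"],
        "text": ["Text2"],
    }

    category_by_effect = {"": ""}
    for category, classes in effect_categories.items():
        for cls in classes:
            category_by_effect[cls] = category

    def find_category(effect_class):
        # Substring match, so numbered classes ("Text2") still resolve.
        found = ""
        for cls in category_by_effect:
            if cls and cls in effect_class:
                found = cls
        return category_by_effect[found]

    assert find_category("Text2") == "text"
    assert find_category("UnknownEffect") == ""  # default, uncategorized bucket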
@@ -16,8 +16,9 @@ class FbxLoader(load.LoaderPlugin):

    order = -10

-    families = ["staticMesh", "fbx"]
-    representations = ["fbx"]
+    families = ["*"]
+    representations = ["*"]
+    extensions = {"fbx"}

    def load(self, context, name=None, namespace=None, data=None):
@@ -67,7 +67,7 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
        beauty_product = self.get_render_product_name(default_prefix)
        render_products.append(beauty_product)
        files_by_aov = {
-            "RGB Color": self.generate_expected_files(instance,
+            "": self.generate_expected_files(instance,
                                                      beauty_product)}

        if instance.data.get("RenderElement", True):

@@ -75,7 +75,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
            if render_element:
                for aov, renderpass in render_element.items():
                    render_products.append(renderpass)
-                    files_by_aov[aov] = self.generate_expected_files(instance, renderpass)  # noqa
+                    files_by_aov[aov] = self.generate_expected_files(
+                        instance, renderpass)


        for product in render_products:
            self.log.debug("Found render product: %s" % product)
42 openpype/hosts/max/api/action.py (new file)

@@ -0,0 +1,42 @@
from pymxs import runtime as rt

import pyblish.api

from openpype.pipeline.publish import get_errored_instances_from_context


class SelectInvalidAction(pyblish.api.Action):
    """Select invalid objects in 3dsMax when a publish plug-in failed."""
    label = "Select Invalid"
    on = "failed"
    icon = "search"

    def process(self, context, plugin):
        errored_instances = get_errored_instances_from_context(context,
                                                               plugin=plugin)

        # Get the invalid nodes for the plug-ins
        self.log.info("Finding invalid nodes...")
        invalid = list()
        for instance in errored_instances:
            invalid_nodes = plugin.get_invalid(instance)
            if invalid_nodes:
                if isinstance(invalid_nodes, (list, tuple)):
                    invalid.extend(invalid_nodes)
                else:
                    self.log.warning(
                        "Failed plug-in doesn't have any selectable objects."
                    )

        if not invalid:
            self.log.info("No invalid nodes found.")
            return
        invalid_names = [obj.name for obj in invalid
                         if not isinstance(obj, tuple)]
        if not invalid_names:
            invalid_names = [obj.name for obj, _ in invalid]
            invalid = [obj for obj, _ in invalid]
        self.log.info(
            "Selecting invalid objects: %s", ", ".join(invalid_names)
        )

        rt.Select(invalid)
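Note that `plugin.get_invalid()` may hand back either plain node objects or `(node, reason)` pairs; the action flattens the pair form before selecting. A tiny sketch of that unpacking with stand-in objects (`FakeNode` is hypothetical, standing in for a pymxs node handle):

    class FakeNode:
        # Stand-in for a 3dsMax node handle; only `.name` is needed here.
        def __init__(self, name):
            self.name = name

    invalid = [(FakeNode("boxA"), "too many vertices"),
               (FakeNode("boxB"), "negative scale")]

    # Flatten (node, reason) pairs down to the nodes themselves.
    nodes = [item[0] if isinstance(item, tuple) else item for item in invalid]
    print(", ".join(node.name for node in nodes))  # boxA, boxB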
@@ -37,6 +37,95 @@ class RenderProducts(object):
            )
        }

+    def get_multiple_beauty(self, outputs, cameras):
+        beauty_output_frames = dict()
+        for output, camera in zip(outputs, cameras):
+            filename, ext = os.path.splitext(output)
+            filename = filename.replace(".", "")
+            ext = ext.replace(".", "")
+            start_frame = int(rt.rendStart)
+            end_frame = int(rt.rendEnd) + 1
+            new_beauty = self.get_expected_beauty(
+                filename, start_frame, end_frame, ext
+            )
+            beauty_output = ({
+                f"{camera}_beauty": new_beauty
+            })
+            beauty_output_frames.update(beauty_output)
+        return beauty_output_frames
+
+    def get_multiple_aovs(self, outputs, cameras):
+        renderer_class = get_current_renderer()
+        renderer = str(renderer_class).split(":")[0]
+        aovs_frames = {}
+        for output, camera in zip(outputs, cameras):
+            filename, ext = os.path.splitext(output)
+            filename = filename.replace(".", "")
+            ext = ext.replace(".", "")
+            start_frame = int(rt.rendStart)
+            end_frame = int(rt.rendEnd) + 1
+
+            if renderer in [
+                "ART_Renderer",
+                "V_Ray_6_Hotfix_3",
+                "V_Ray_GPU_6_Hotfix_3",
+                "Default_Scanline_Renderer",
+                "Quicksilver_Hardware_Renderer",
+            ]:
+                render_name = self.get_render_elements_name()
+                if render_name:
+                    for name in render_name:
+                        aovs_frames.update({
+                            f"{camera}_{name}": self.get_expected_aovs(
+                                filename, name, start_frame,
+                                end_frame, ext)
+                        })
+            elif renderer == "Redshift_Renderer":
+                render_name = self.get_render_elements_name()
+                if render_name:
+                    rs_aov_files = rt.Execute("renderers.current.separateAovFiles")  # noqa
+                    # this doesn't work, always returns False
+                    # rs_AovFiles = rt.RedShift_Renderer().separateAovFiles
+                    if ext == "exr" and not rs_aov_files:
+                        for name in render_name:
+                            if name == "RsCryptomatte":
+                                aovs_frames.update({
+                                    f"{camera}_{name}": self.get_expected_aovs(
+                                        filename, name, start_frame,
+                                        end_frame, ext)
+                                })
+                    else:
+                        for name in render_name:
+                            aovs_frames.update({
+                                f"{camera}_{name}": self.get_expected_aovs(
+                                    filename, name, start_frame,
+                                    end_frame, ext)
+                            })
+            elif renderer == "Arnold":
+                render_name = self.get_arnold_product_name()
+                if render_name:
+                    for name in render_name:
+                        aovs_frames.update({
+                            f"{camera}_{name}": self.get_expected_arnold_product(  # noqa
+                                filename, name, start_frame,
+                                end_frame, ext)
+                        })
+            elif renderer in [
+                "V_Ray_6_Hotfix_3",
+                "V_Ray_GPU_6_Hotfix_3"
+            ]:
+                if ext != "exr":
+                    render_name = self.get_render_elements_name()
+                    if render_name:
+                        for name in render_name:
+                            aovs_frames.update({
+                                f"{camera}_{name}": self.get_expected_aovs(
+                                    filename, name, start_frame,
+                                    end_frame, ext)
+                            })
+
+        return aovs_frames
+
    def get_aovs(self, container):
        render_dir = os.path.dirname(rt.rendOutputFilename)

@@ -63,7 +152,7 @@ class RenderProducts(object):
            if render_name:
                for name in render_name:
                    render_dict.update({
-                        name: self.get_expected_render_elements(
+                        name: self.get_expected_aovs(
                            output_file, name, start_frame,
                            end_frame, img_fmt)
                    })

@@ -77,14 +166,14 @@ class RenderProducts(object):
                    for name in render_name:
                        if name == "RsCryptomatte":
                            render_dict.update({
-                                name: self.get_expected_render_elements(
+                                name: self.get_expected_aovs(
                                    output_file, name, start_frame,
                                    end_frame, img_fmt)
                            })
                else:
                    for name in render_name:
                        render_dict.update({
-                            name: self.get_expected_render_elements(
+                            name: self.get_expected_aovs(
                                output_file, name, start_frame,
                                end_frame, img_fmt)
                        })

@@ -95,7 +184,8 @@ class RenderProducts(object):
                for name in render_name:
                    render_dict.update({
                        name: self.get_expected_arnold_product(
-                            output_file, name, start_frame, end_frame, img_fmt)
+                            output_file, name, start_frame,
+                            end_frame, img_fmt)
                    })
            elif renderer in [
                "V_Ray_6_Hotfix_3",

@@ -106,7 +196,7 @@ class RenderProducts(object):
            if render_name:
                for name in render_name:
                    render_dict.update({
-                        name: self.get_expected_render_elements(
+                        name: self.get_expected_aovs(
                            output_file, name, start_frame,
                            end_frame, img_fmt)  # noqa
                    })

@@ -169,8 +259,8 @@ class RenderProducts(object):

        return render_name

-    def get_expected_render_elements(self, folder, name,
-                                     start_frame, end_frame, fmt):
+    def get_expected_aovs(self, folder, name,
+                          start_frame, end_frame, fmt):
        """Get all the expected render element output files. """
        render_elements = []
        for f in range(start_frame, end_frame):
@@ -74,13 +74,13 @@ class RenderSettings(object):
        output = os.path.join(output_dir, container)
        try:
            aov_separator = self._aov_chars[(
-                self._project_settings["maya"]
+                self._project_settings["max"]
                ["RenderSettings"]
                ["aov_separator"]
            )]
        except KeyError:
            aov_separator = "."
-        output_filename = "{0}..{1}".format(output, img_fmt)
+        output_filename = f"{output}..{img_fmt}"
        output_filename = output_filename.replace("{aov_separator}",
                                                  aov_separator)
        rt.rendOutputFilename = output_filename

@@ -146,13 +146,13 @@ class RenderSettings(object):
        for i in range(render_elem_num):
            renderlayer_name = render_elem.GetRenderElement(i)
            target, renderpass = str(renderlayer_name).split(":")
-            aov_name = "{0}_{1}..{2}".format(dir, renderpass, ext)
+            aov_name = f"{dir}_{renderpass}..{ext}"
            render_elem.SetRenderElementFileName(i, aov_name)

    def get_render_output(self, container, output_dir):
        output = os.path.join(output_dir, container)
        img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
-        output_filename = "{0}..{1}".format(output, img_fmt)
+        output_filename = f"{output}..{img_fmt}"
        return output_filename

    def get_render_element(self):

@@ -167,3 +167,61 @@ class RenderSettings(object):
            orig_render_elem.append(render_element)

        return orig_render_elem
+
+    def get_batch_render_elements(self, container,
+                                  output_dir, camera):
+        render_element_list = list()
+        output = os.path.join(output_dir, container)
+        render_elem = rt.maxOps.GetCurRenderElementMgr()
+        render_elem_num = render_elem.NumRenderElements()
+        if render_elem_num < 0:
+            return
+        img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
+
+        for i in range(render_elem_num):
+            renderlayer_name = render_elem.GetRenderElement(i)
+            target, renderpass = str(renderlayer_name).split(":")
+            aov_name = f"{output}_{camera}_{renderpass}..{img_fmt}"
+            render_element_list.append(aov_name)
+        return render_element_list
+
+    def get_batch_render_output(self, camera):
+        target_layer_no = rt.batchRenderMgr.FindView(camera)
+        target_layer = rt.batchRenderMgr.GetView(target_layer_no)
+        return target_layer.outputFilename
+
+    def batch_render_elements(self, camera):
+        target_layer_no = rt.batchRenderMgr.FindView(camera)
+        target_layer = rt.batchRenderMgr.GetView(target_layer_no)
+        outputfilename = target_layer.outputFilename
+        directory = os.path.dirname(outputfilename)
+        render_elem = rt.maxOps.GetCurRenderElementMgr()
+        render_elem_num = render_elem.NumRenderElements()
+        if render_elem_num < 0:
+            return
+        ext = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
+
+        for i in range(render_elem_num):
+            renderlayer_name = render_elem.GetRenderElement(i)
+            target, renderpass = str(renderlayer_name).split(":")
+            aov_name = f"{directory}_{camera}_{renderpass}..{ext}"
+            render_elem.SetRenderElementFileName(i, aov_name)
+
+    def batch_render_layer(self, container,
+                           output_dir, cameras):
+        outputs = list()
+        output = os.path.join(output_dir, container)
+        img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
+        for cam in cameras:
+            camera = rt.getNodeByName(cam)
+            layer_no = rt.batchRenderMgr.FindView(cam)
+            renderlayer = None
+            if layer_no == 0:
+                renderlayer = rt.batchRenderMgr.CreateView(camera)
+            else:
+                renderlayer = rt.batchRenderMgr.GetView(layer_no)
+            # use camera name as renderlayer name
+            renderlayer.name = cam
+            renderlayer.outputFilename = f"{output}_{cam}..{img_fmt}"
+            outputs.append(renderlayer.outputFilename)
+        return outputs
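All of these output paths share the `name..ext` shape. The double dot appears deliberate: 3dsMax inserts the frame number into the filename at render time, so the extra dot presumably leaves a clean separator (e.g. `name.0001.ext`). A sketch of the naming, using the same inputs the methods above use:

    import os

    def batch_output_name(output_dir, container, camera, img_fmt):
        # Mirrors the f-string used by batch_render_layer(); the ".." gap
        # is where 3dsMax writes the frame number.
        output = os.path.join(output_dir, container)
        return f"{output}_{camera}..{img_fmt}"

    print(batch_output_name("/renders", "renderMain", "camA", "exr"))
    # /renders/renderMain_camA..exr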
@@ -59,10 +59,11 @@ class MaxHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):

        rt.callbacks.addScript(rt.Name('filePostOpen'),
                               lib.check_colorspace)
+        rt.callbacks.addScript(rt.Name('postWorkspaceChange'),
+                               self._deferred_menu_creation)

-    def has_unsaved_changes(self):
-        # TODO: how to get it from 3dsmax?
-        return True
+    def workfile_has_unsaved_changes(self):
+        return rt.getSaveRequired()

    def get_workfile_extensions(self):
        return [".max"]
@@ -2,6 +2,7 @@
"""Creator plugin for creating camera."""
+import os
from openpype.hosts.max.api import plugin
from openpype.lib import BoolDef
from openpype.hosts.max.api.lib_rendersettings import RenderSettings


@@ -17,15 +18,33 @@ class CreateRender(plugin.MaxCreator):
        file = rt.maxFileName
        filename, _ = os.path.splitext(file)
        instance_data["AssetName"] = filename
        instance_data["multiCamera"] = pre_create_data.get("multi_cam")
        num_of_renderlayer = rt.batchRenderMgr.numViews
        if num_of_renderlayer > 0:
            rt.batchRenderMgr.DeleteView(num_of_renderlayer)

        instance = super(CreateRender, self).create(
            subset_name,
            instance_data,
            pre_create_data)

        container_name = instance.data.get("instance_node")
        sel_obj = self.selected_nodes
        if sel_obj:
            # set viewport camera for rendering(mandatory for deadline)
            RenderSettings(self.project_settings).set_render_camera(sel_obj)
        # set output paths for rendering(mandatory for deadline)
        RenderSettings().render_output(container_name)
        # TODO: create multiple camera options
        if self.selected_nodes:
            selected_nodes_name = []
            for sel in self.selected_nodes:
                name = sel.name
                selected_nodes_name.append(name)
            RenderSettings().batch_render_layer(
                container_name, filename,
                selected_nodes_name)

    def get_pre_create_attr_defs(self):
        attrs = super(CreateRender, self).get_pre_create_attr_defs()
        return attrs + [
            BoolDef("multi_cam",
                    label="Multiple Cameras Submission",
                    default=False),
        ]
123 openpype/hosts/max/plugins/create/create_workfile.py (new file)

@@ -0,0 +1,123 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating workfiles."""
from openpype import AYON_SERVER_ENABLED
from openpype.pipeline import CreatedInstance, AutoCreator
from openpype.client import get_asset_by_name, get_asset_name_identifier
from openpype.hosts.max.api import plugin
from openpype.hosts.max.api.lib import read, imprint
from pymxs import runtime as rt


class CreateWorkfile(plugin.MaxCreatorBase, AutoCreator):
    """Workfile auto-creator."""
    identifier = "io.openpype.creators.max.workfile"
    label = "Workfile"
    family = "workfile"
    icon = "fa5.file"

    default_variant = "Main"

    def create(self):
        variant = self.default_variant
        current_instance = next(
            (
                instance for instance in self.create_context.instances
                if instance.creator_identifier == self.identifier
            ), None)
        project_name = self.project_name
        asset_name = self.create_context.get_current_asset_name()
        task_name = self.create_context.get_current_task_name()
        host_name = self.create_context.host_name

        if current_instance is None:
            current_instance_asset = None
        elif AYON_SERVER_ENABLED:
            current_instance_asset = current_instance["folderPath"]
        else:
            current_instance_asset = current_instance["asset"]

        if current_instance is None:
            asset_doc = get_asset_by_name(project_name, asset_name)
            subset_name = self.get_subset_name(
                variant, task_name, asset_doc, project_name, host_name
            )
            data = {
                "task": task_name,
                "variant": variant
            }
            if AYON_SERVER_ENABLED:
                data["folderPath"] = asset_name
            else:
                data["asset"] = asset_name

            data.update(
                self.get_dynamic_data(
                    variant, task_name, asset_doc,
                    project_name, host_name, current_instance)
            )
            self.log.info("Auto-creating workfile instance...")
            instance_node = self.create_node(subset_name)
            data["instance_node"] = instance_node.name
            current_instance = CreatedInstance(
                self.family, subset_name, data, self
            )
            self._add_instance_to_context(current_instance)
            imprint(instance_node.name, current_instance.data)
        elif (
            current_instance_asset != asset_name
            or current_instance["task"] != task_name
        ):
            # Update instance context if is not the same
            asset_doc = get_asset_by_name(project_name, asset_name)
            subset_name = self.get_subset_name(
                variant, task_name, asset_doc, project_name, host_name
            )
            asset_name = get_asset_name_identifier(asset_doc)

            if AYON_SERVER_ENABLED:
                current_instance["folderPath"] = asset_name
            else:
                current_instance["asset"] = asset_name
            current_instance["task"] = task_name
            current_instance["subset"] = subset_name

    def collect_instances(self):
        self.cache_subsets(self.collection_shared_data)
        for instance in self.collection_shared_data["max_cached_subsets"].get(self.identifier, []):  # noqa
            if not rt.getNodeByName(instance):
                continue
            created_instance = CreatedInstance.from_existing(
                read(rt.GetNodeByName(instance)), self
            )
            self._add_instance_to_context(created_instance)

    def update_instances(self, update_list):
        for created_inst, _ in update_list:
            instance_node = created_inst.get("instance_node")
            imprint(
                instance_node,
                created_inst.data_to_store()
            )

    def remove_instances(self, instances):
        """Remove specified instance from the scene.

        This is only removing `id` parameter so instance is no longer
        instance, because it might contain valuable data for artist.

        """
        for instance in instances:
            instance_node = rt.GetNodeByName(
                instance.data.get("instance_node"))
            if instance_node:
                rt.Delete(instance_node)

            self._remove_instance_from_context(instance)

    def create_node(self, subset_name):
        if rt.getNodeByName(subset_name):
            node = rt.getNodeByName(subset_name)
            return node
        node = rt.Container(name=subset_name)
        node.isHidden = True
        return node
@@ -12,7 +12,10 @@ class CollectMembers(pyblish.api.InstancePlugin):
    hosts = ['max']

    def process(self, instance):
+        if instance.data["family"] == "workfile":
+            self.log.debug("Skipping Actions for workfile family.")
+            self.log.debug("{}".format(instance.data["subset"]))
+            return
        if instance.data.get("instance_node"):
            container = rt.GetNodeByName(instance.data["instance_node"])
            instance.data["members"] = [
@@ -4,8 +4,10 @@ import os
import pyblish.api

from pymxs import runtime as rt
+from openpype.pipeline.publish import KnownPublishError
from openpype.hosts.max.api import colorspace
from openpype.hosts.max.api.lib import get_max_version, get_current_renderer
+from openpype.hosts.max.api.lib_rendersettings import RenderSettings
from openpype.hosts.max.api.lib_renderproducts import RenderProducts


@@ -23,7 +25,6 @@ class CollectRender(pyblish.api.InstancePlugin):
        file = rt.maxFileName
        current_file = os.path.join(folder, file)
        filepath = current_file.replace("\\", "/")

        context.data['currentFile'] = current_file

        files_by_aov = RenderProducts().get_beauty(instance.name)

@@ -39,6 +40,28 @@ class CollectRender(pyblish.api.InstancePlugin):

        instance.data["cameras"] = [camera.name] if camera else None  # noqa

+        if instance.data.get("multiCamera"):
+            cameras = instance.data.get("members")
+            if not cameras:
+                raise KnownPublishError("There should be at least"
+                                        " one renderable camera in container")
+            sel_cam = [
+                c.name for c in cameras
+                if rt.classOf(c) in rt.Camera.classes]
+            container_name = instance.data.get("instance_node")
+            render_dir = os.path.dirname(rt.rendOutputFilename)
+            outputs = RenderSettings().batch_render_layer(
+                container_name, render_dir, sel_cam
+            )
+
+            instance.data["cameras"] = sel_cam
+
+            files_by_aov = RenderProducts().get_multiple_beauty(
+                outputs, sel_cam)
+            aovs = RenderProducts().get_multiple_aovs(
+                outputs, sel_cam)
+            files_by_aov.update(aovs)
+
        if "expectedFiles" not in instance.data:
            instance.data["expectedFiles"] = list()
            instance.data["files"] = list()
@@ -6,15 +6,16 @@ import pyblish.api
from pymxs import runtime as rt


-class CollectWorkfile(pyblish.api.ContextPlugin):
+class CollectWorkfile(pyblish.api.InstancePlugin):
    """Inject the current working file into context"""

    order = pyblish.api.CollectorOrder - 0.01
    label = "Collect 3dsmax Workfile"
    hosts = ['max']

-    def process(self, context):
+    def process(self, instance):
        """Inject the current working file."""
+        context = instance.context
        folder = rt.maxFilePath
        file = rt.maxFileName
        if not folder or not file:

@@ -23,15 +24,12 @@ class CollectWorkfile(pyblish.api.ContextPlugin):

        context.data['currentFile'] = current_file

-        filename, ext = os.path.splitext(file)
-
-        task = context.data["task"]
+        ext = os.path.splitext(file)[-1].lstrip(".")

        data = {}

        # create instance
-        instance = context.create_instance(name=filename)
-        subset = 'workfile' + task.capitalize()
+        subset = instance.data["subset"]

        data.update({
            "subset": subset,

@@ -55,7 +53,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
        }]

        instance.data.update(data)

-        self.log.info('Collected data: {}'.format(data))
+        self.log.info('Collected instance: {}'.format(file))
        self.log.info('Scene path: {}'.format(current_file))
        self.log.info('staging Dir: {}'.format(folder))
@@ -1,11 +1,9 @@
import pyblish.api
-import os
+from openpype.pipeline import registered_host


class SaveCurrentScene(pyblish.api.ContextPlugin):
-    """Save current scene
-
-    """
+    """Save current scene"""

    label = "Save current file"
    order = pyblish.api.ExtractorOrder - 0.49

@@ -13,9 +11,13 @@ class SaveCurrentScene(pyblish.api.ContextPlugin):
    families = ["maxrender", "workfile"]

    def process(self, context):
-        from pymxs import runtime as rt
-        folder = rt.maxFilePath
-        file = rt.maxFileName
-        current = os.path.join(folder, file)
-        assert context.data["currentFile"] == current
-        rt.saveMaxFile(current)
+        host = registered_host()
+        current_file = host.get_current_workfile()
+
+        assert context.data["currentFile"] == current_file
+
+        if host.workfile_has_unsaved_changes():
+            self.log.info(f"Saving current file: {current_file}")
+            host.save_workfile(current_file)
+        else:
+            self.log.debug("No unsaved changes, skipping file save..")
105 openpype/hosts/max/plugins/publish/save_scenes_for_cameras.py (new file)

@@ -0,0 +1,105 @@
import pyblish.api
import os
import sys
import tempfile

from pymxs import runtime as rt
from openpype.lib import run_subprocess
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
from openpype.hosts.max.api.lib_renderproducts import RenderProducts


class SaveScenesForCamera(pyblish.api.InstancePlugin):
    """Save scene files for multiple cameras without
    editing the original scene before deadline submission

    """

    label = "Save Scene files for cameras"
    order = pyblish.api.ExtractorOrder - 0.48
    hosts = ["max"]
    families = ["maxrender"]

    def process(self, instance):
        if not instance.data.get("multiCamera"):
            self.log.debug(
                "Multi Camera disabled. "
                "Skipping to save scene files for cameras")
            return
        current_folder = rt.maxFilePath
        current_filename = rt.maxFileName
        current_filepath = os.path.join(current_folder, current_filename)
        camera_scene_files = []
        scripts = []
        filename, ext = os.path.splitext(current_filename)
        fmt = RenderProducts().image_format()
        cameras = instance.data.get("cameras")
        if not cameras:
            return
        new_folder = f"{current_folder}_{filename}"
        os.makedirs(new_folder, exist_ok=True)
        for camera in cameras:
            new_output = RenderSettings().get_batch_render_output(camera)  # noqa
            new_output = new_output.replace("\\", "/")
            new_filename = f"{filename}_{camera}{ext}"
            new_filepath = os.path.join(new_folder, new_filename)
            new_filepath = new_filepath.replace("\\", "/")
            camera_scene_files.append(new_filepath)
            RenderSettings().batch_render_elements(camera)
            rt.rendOutputFilename = new_output
            rt.saveMaxFile(current_filepath)
            script = ("""
from pymxs import runtime as rt
import os
filename = "{filename}"
new_filepath = "{new_filepath}"
new_output = "{new_output}"
camera = "{camera}"
rt.rendOutputFilename = new_output
directory = os.path.dirname(rt.rendOutputFilename)
directory = os.path.join(directory, filename)
render_elem = rt.maxOps.GetCurRenderElementMgr()
render_elem_num = render_elem.NumRenderElements()
if render_elem_num > 0:
    ext = "{ext}"
    for i in range(render_elem_num):
        renderlayer_name = render_elem.GetRenderElement(i)
        target, renderpass = str(renderlayer_name).split(":")
        aov_name = f"{{directory}}_{camera}_{{renderpass}}..{ext}"
        render_elem.SetRenderElementFileName(i, aov_name)
rt.saveMaxFile(new_filepath)
""").format(filename=instance.name,
            new_filepath=new_filepath,
            new_output=new_output,
            camera=camera,
            ext=fmt)
            scripts.append(script)

        maxbatch_exe = os.path.join(
            os.path.dirname(sys.executable), "3dsmaxbatch")
        maxbatch_exe = maxbatch_exe.replace("\\", "/")
        if sys.platform == "win32":
            maxbatch_exe += ".exe"
        maxbatch_exe = os.path.normpath(maxbatch_exe)
        with tempfile.TemporaryDirectory() as tmp_dir_name:
            tmp_script_path = os.path.join(
                tmp_dir_name, "extract_scene_files.py")
            self.log.info("Using script file: {}".format(tmp_script_path))

            with open(tmp_script_path, "wt") as tmp:
                for script in scripts:
                    tmp.write(script + "\n")

            try:
                current_filepath = current_filepath.replace("\\", "/")
                tmp_script_path = tmp_script_path.replace("\\", "/")
                run_subprocess([maxbatch_exe, tmp_script_path,
                                "-sceneFile", current_filepath])
            except RuntimeError:
                self.log.debug("Checking whether the scene files exist")

        for camera_scene in camera_scene_files:
            if not os.path.exists(camera_scene):
                self.log.error("Camera scene file does not exist yet!")
                raise RuntimeError("MaxBatch.exe doesn't run as expected")
            self.log.debug(f"Found Camera scene:{camera_scene}")
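The embedded maxbatch script above mixes two layers of formatting: `str.format()` fills `{camera}` and `{ext}` immediately, while doubled braces such as `{{directory}}` survive as single braces for the f-string that runs later inside 3dsmaxbatch. The escaping itself is plain Python:

    # Doubled braces escape str.format(); single braces are substituted now.
    template = 'aov_name = f"{{directory}}_{camera}_{{renderpass}}..{ext}"'
    print(template.format(camera="camA", ext="exr"))
    # aov_name = f"{directory}_camA_{renderpass}..exr"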
@@ -0,0 +1,88 @@
import pyblish.api
from pymxs import runtime as rt

from openpype.pipeline.publish import (
    RepairAction,
    OptionalPyblishPluginMixin,
    PublishValidationError
)
from openpype.hosts.max.api.action import SelectInvalidAction


class ValidateCameraAttributes(OptionalPyblishPluginMixin,
                               pyblish.api.InstancePlugin):
    """Validates Camera has no invalid attribute properties
    or values. (For 3dsMax Cameras only)

    """

    order = pyblish.api.ValidatorOrder
    families = ['camera']
    hosts = ['max']
    label = 'Validate Camera Attributes'
    actions = [SelectInvalidAction, RepairAction]
    optional = True

    DEFAULTS = ["fov", "nearrange", "farrange",
                "nearclip", "farclip"]
    CAM_TYPE = ["Freecamera", "Targetcamera",
                "Physical"]

    @classmethod
    def get_invalid(cls, instance):
        invalid = []
        if rt.units.DisplayType != rt.Name("Generic"):
            cls.log.warning(
                "Generic Type is not used as a scene unit.\n\n"
                "Make sure you tweak the settings with your own values\n\n"
                "before validation.")
        cameras = instance.data["members"]
        project_settings = instance.context.data["project_settings"].get("max")
        cam_attr_settings = (
            project_settings["publish"]["ValidateCameraAttributes"]
        )
        for camera in cameras:
            if str(rt.ClassOf(camera)) not in cls.CAM_TYPE:
                cls.log.debug(
                    "Skipping camera created from external plugin..")
                continue
            for attr in cls.DEFAULTS:
                default_value = cam_attr_settings.get(attr)
                if default_value == float(0):
                    cls.log.debug(
                        f"the value of {attr} in setting set to"
                        " zero. Skipping the check.")
                    continue
                if round(rt.getProperty(camera, attr), 1) != default_value:
                    cls.log.error(
                        f"Invalid attribute value for {camera.name}:{attr} "
                        f"(should be: {default_value})")
                    invalid.append(camera)

        return invalid

    def process(self, instance):
        if not self.is_active(instance.data):
            self.log.debug("Skipping Validate Camera Attributes.")
            return
        invalid = self.get_invalid(instance)

        if invalid:
            raise PublishValidationError(
                "Invalid camera attributes found. See log.")

    @classmethod
    def repair(cls, instance):
        invalid_cameras = cls.get_invalid(instance)
        project_settings = instance.context.data["project_settings"].get("max")
        cam_attr_settings = (
            project_settings["publish"]["ValidateCameraAttributes"]
        )
        for camera in invalid_cameras:
            for attr in cls.DEFAULTS:
                expected_value = cam_attr_settings.get(attr)
                if expected_value == float(0):
                    cls.log.debug(
                        f"the value of {attr} in setting set to zero.")
                    continue
                rt.setProperty(camera, attr, expected_value)
@@ -7,6 +7,9 @@

    local pythonpath = systemTools.getEnvVariable "MAX_PYTHONPATH"
    systemTools.setEnvVariable "PYTHONPATH" pythonpath

+   /*opens the create menu on startup to ensure users are presented with a useful default view.*/
+   max create mode
+
    python.ExecuteFile startup
)
)
@@ -2778,9 +2778,37 @@ def bake_to_world_space(nodes,
        list: The newly created and baked node names.

    """
+    @contextlib.contextmanager
+    def _unlock_attr(attr):
+        """Unlock attribute during context if it is locked"""
+        if not cmds.getAttr(attr, lock=True):
+            # If not locked, do nothing
+            yield
+            return
+        try:
+            cmds.setAttr(attr, lock=False)
+            yield
+        finally:
+            cmds.setAttr(attr, lock=True)
+
    def _get_attrs(node):
-        """Workaround for buggy shape attribute listing with listAttr"""
+        """Workaround for buggy shape attribute listing with listAttr
+
+        This will only return keyable settable attributes that have an
+        incoming connections (those that have a reason to be baked).
+
+        Technically this *may* fail to return attributes driven by complex
+        expressions for which maya makes no connections, e.g. doing actual
+        `setAttr` calls in expressions.
+
+        Arguments:
+            node (str): The node to list attributes for.
+
+        Returns:
+            list: Keyable attributes with incoming connections.
+                The attribute may be locked.
+
+        """
        attrs = cmds.listAttr(node,
                              write=True,
                              scalar=True,

@@ -2805,14 +2833,14 @@ def bake_to_world_space(nodes,

        return valid_attrs

-    transform_attrs = set(["t", "r", "s",
-                           "tx", "ty", "tz",
-                           "rx", "ry", "rz",
-                           "sx", "sy", "sz"])
+    transform_attrs = {"t", "r", "s",
+                       "tx", "ty", "tz",
+                       "rx", "ry", "rz",
+                       "sx", "sy", "sz"}

    world_space_nodes = []
-    with delete_after() as delete_bin:
-
+    with ExitStack() as stack:
+        delete_bin = stack.enter_context(delete_after())
        # Create the duplicate nodes that are in world-space connected to
        # the originals
        for node in nodes:

@@ -2824,23 +2852,26 @@ def bake_to_world_space(nodes,
                name=new_name,
                renameChildren=True)[0]  # noqa

-            # Connect all attributes on the node except for transform
-            # attributes
-            attrs = _get_attrs(node)
-            attrs = set(attrs) - transform_attrs if attrs else []
+            # Parent new node to world
+            if cmds.listRelatives(new_node, parent=True):
+                new_node = cmds.parent(new_node, world=True)[0]

+            # Temporarily unlock and passthrough connect all attributes
+            # so we can bake them over time
+            # Skip transform attributes because we will constrain them later
+            attrs = set(_get_attrs(node)) - transform_attrs
            for attr in attrs:
-                orig_node_attr = '{0}.{1}'.format(node, attr)
-                new_node_attr = '{0}.{1}'.format(new_node, attr)
-
-                # unlock to avoid connection errors
-                cmds.setAttr(new_node_attr, lock=False)
+                orig_node_attr = "{}.{}".format(node, attr)
+                new_node_attr = "{}.{}".format(new_node, attr)
+
+                # unlock during context to avoid connection errors
+                stack.enter_context(_unlock_attr(new_node_attr))
                cmds.connectAttr(orig_node_attr,
                                 new_node_attr,
                                 force=True)

-            # If shapes are also baked then connect those keyable attributes
+            # If shapes are also baked then also temporarily unlock and
+            # passthrough connect all shape attributes for baking
            if shape:
                children_shapes = cmds.listRelatives(new_node,
                                                     children=True,

@@ -2855,25 +2886,19 @@ def bake_to_world_space(nodes,
                                            children_shapes):
                    attrs = _get_attrs(orig_shape)
                    for attr in attrs:
-                        orig_node_attr = '{0}.{1}'.format(orig_shape, attr)
-                        new_node_attr = '{0}.{1}'.format(new_shape, attr)
-
-                        # unlock to avoid connection errors
-                        cmds.setAttr(new_node_attr, lock=False)
+                        orig_node_attr = "{}.{}".format(orig_shape, attr)
+                        new_node_attr = "{}.{}".format(new_shape, attr)
+
+                        # unlock during context to avoid connection errors
+                        stack.enter_context(_unlock_attr(new_node_attr))
                        cmds.connectAttr(orig_node_attr,
                                         new_node_attr,
                                         force=True)

-            # Parent to world
-            if cmds.listRelatives(new_node, parent=True):
-                new_node = cmds.parent(new_node, world=True)[0]
-
-            # Unlock transform attributes so constraint can be created
+            # Constraint transforms
            for attr in transform_attrs:
-                cmds.setAttr('{0}.{1}'.format(new_node, attr), lock=False)
-
-            # Constraints
+                transform_attr = "{}.{}".format(new_node, attr)
+                stack.enter_context(_unlock_attr(transform_attr))
            delete_bin.extend(cmds.parentConstraint(node, new_node, mo=False))
            delete_bin.extend(cmds.scaleConstraint(node, new_node, mo=False))
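The refactor above swaps ad-hoc `setAttr(..., lock=False)` calls for unlock context managers registered on a single `ExitStack`, so every lock is restored even if baking raises midway. A self-contained sketch of that pattern with a dummy lock table instead of Maya:

    import contextlib

    locks = {"node.tx": True, "node.ry": False}

    @contextlib.contextmanager
    def unlocked(attr):
        # Clear the lock only if set, and always restore it on exit.
        if not locks[attr]:
            yield
            return
        locks[attr] = False
        try:
            yield
        finally:
            locks[attr] = True

    with contextlib.ExitStack() as stack:
        for attr in locks:
            stack.enter_context(unlocked(attr))
        assert not any(locks.values())  # everything unlocked inside the stack
    assert locks["node.tx"]  # lock restored once the stack unwinds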
@@ -3117,119 +3142,6 @@ def fix_incompatible_containers():
                 "ReferenceLoader", type="string")


def _null(*args):
    pass


class shelf():
    '''A simple class to build shelves in maya. Since the build method is empty,
    it should be extended by the derived class to build the necessary shelf
    elements. By default it creates an empty shelf called "customShelf".'''

    ###########################################################################
    '''This is an example shelf.'''
    # class customShelf(_shelf):
    #     def build(self):
    #         self.addButon(label="button1")
    #         self.addButon("button2")
    #         self.addButon("popup")
    #         p = cmds.popupMenu(b=1)
    #         self.addMenuItem(p, "popupMenuItem1")
    #         self.addMenuItem(p, "popupMenuItem2")
    #         sub = self.addSubMenu(p, "subMenuLevel1")
    #         self.addMenuItem(sub, "subMenuLevel1Item1")
    #         sub2 = self.addSubMenu(sub, "subMenuLevel2")
    #         self.addMenuItem(sub2, "subMenuLevel2Item1")
    #         self.addMenuItem(sub2, "subMenuLevel2Item2")
    #         self.addMenuItem(sub, "subMenuLevel1Item2")
    #         self.addMenuItem(p, "popupMenuItem3")
    #         self.addButon("button3")
    # customShelf()
    ###########################################################################

    def __init__(self, name="customShelf", iconPath="", preset={}):
        self.name = name

        self.iconPath = iconPath

        self.labelBackground = (0, 0, 0, 0)
        self.labelColour = (.9, .9, .9)

        self.preset = preset

        self._cleanOldShelf()
        cmds.setParent(self.name)
        self.build()

    def build(self):
        '''This method should be overwritten in derived classes to actually
        build the shelf elements. Otherwise, nothing is added to the shelf.'''
        for item in self.preset['items']:
            if not item.get('command'):
                item['command'] = self._null
            if item['type'] == 'button':
                self.addButon(item['name'],
                              command=item['command'],
                              icon=item['icon'])
            if item['type'] == 'menuItem':
                self.addMenuItem(item['parent'],
                                 item['name'],
                                 command=item['command'],
                                 icon=item['icon'])
            if item['type'] == 'subMenu':
                self.addMenuItem(item['parent'],
                                 item['name'],
                                 command=item['command'],
                                 icon=item['icon'])

    def addButon(self, label, icon="commandButton.png",
                 command=_null, doubleCommand=_null):
        '''
        Adds a shelf button with the specified label, command,
        double click command and image.
        '''
        cmds.setParent(self.name)
        if icon:
            icon = os.path.join(self.iconPath, icon)
            print(icon)
        cmds.shelfButton(width=37, height=37, image=icon, label=label,
                         command=command, dcc=doubleCommand,
                         imageOverlayLabel=label, olb=self.labelBackground,
                         olc=self.labelColour)

    def addMenuItem(self, parent, label, command=_null, icon=""):
        '''
        Adds a shelf button with the specified label, command,
        double click command and image.
        '''
        if icon:
            icon = os.path.join(self.iconPath, icon)
            print(icon)
        return cmds.menuItem(p=parent, label=label, c=command, i="")

    def addSubMenu(self, parent, label, icon=None):
        '''
        Adds a sub menu item with the specified label and icon to
        the specified parent popup menu.
        '''
        if icon:
            icon = os.path.join(self.iconPath, icon)
            print(icon)
        return cmds.menuItem(p=parent, label=label, i=icon, subMenu=1)

    def _cleanOldShelf(self):
        '''
        Checks if the shelf exists and empties it if it does
        or creates it if it does not.
        '''
        if cmds.shelfLayout(self.name, ex=1):
            if cmds.shelfLayout(self.name, q=1, ca=1):
                for each in cmds.shelfLayout(self.name, q=1, ca=1):
                    cmds.deleteUI(each)
        else:
            cmds.shelfLayout(self.name, p="ShelfLayout")


def update_content_on_context_change():
    """
    This will update scene content to match new asset on context change
@@ -265,13 +265,16 @@ def transfer_image_planes(source_cameras, target_cameras,
    try:
        for source_camera, target_camera in zip(source_cameras,
                                                target_cameras):
-            image_planes = cmds.listConnections(source_camera,
+            image_plane_plug = "{}.imagePlane".format(source_camera)
+            image_planes = cmds.listConnections(image_plane_plug,
                                                source=True,
                                                destination=False,
                                                type="imagePlane") or []

            # Split of the parent path they are attached - we want
-            # the image plane node name.
-            image_planes = [x.split("->", 1)[1] for x in image_planes]
+            # the image plane node name if attached to a camera.
+            # TODO: Does this still mean the image plane name is unique?
+            image_planes = [x.split("->", 1)[-1] for x in image_planes]

            if not image_planes:
                continue

@@ -282,7 +285,7 @@ def transfer_image_planes(source_cameras, target_cameras,
                if source_camera == target_camera:
                    continue
                _attach_image_plane(target_camera, image_plane)
-            else:  # explicitly dettaching image planes
+            else:  # explicitly detach image planes
                cmds.imagePlane(image_plane, edit=True, detach=True)
            originals[source_camera].append(image_plane)
        yield
@@ -1,77 +0,0 @@
from collections import defaultdict

import pyblish.api

import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
    PublishValidationError, ValidatePipelineOrder)


class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
    """Validate the relational nodes of the look data to ensure every node is
    unique.

    This ensures the all member ids are unique. Every node id must be from
    a single node in the scene.

    That means there's only ever one of a specific node inside the look to be
    published. For example if you'd have a loaded 3x the same tree and by
    accident you're trying to publish them all together in a single look that
    would be invalid, because they are the same tree. It should be included
    inside the look instance only once.

    """

    order = ValidatePipelineOrder
    label = 'Look members unique'
    hosts = ['maya']
    families = ['look']

    actions = [openpype.hosts.maya.api.action.SelectInvalidAction,
               openpype.hosts.maya.api.action.GenerateUUIDsOnInvalidAction]

    def process(self, instance):
        """Process all meshes"""

        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                ("Members found without non-unique IDs: "
                 "{0}").format(invalid))

    @staticmethod
    def get_invalid(instance):
        """
        Check all the relationship members of the objectSets

        Example of the lookData relationships:
            {"uuid": 59b2bb27bda2cb2776206dd8:79ab0a63ffdf,
             "members": [{"uuid": 59b2bb27bda2cb2776206dd8:1b158cc7496e,
                          "name": |model_GRP|body_GES|body_GESShape}
                         ...,
                         ...]}

        Args:
            instance:

        Returns:

        """

        # Get all members from the sets
        id_nodes = defaultdict(set)
        relationships = instance.data["lookData"]["relationships"]

        for relationship in relationships.values():
            for member in relationship['members']:
                node_id = member["uuid"]
                node = member["name"]
                id_nodes[node_id].add(node)

        # Check if any id has more than 1 node
        invalid = []
        for nodes in id_nodes.values():
            if len(nodes) > 1:
                invalid.extend(nodes)

        return invalid
@ -7,6 +7,7 @@ from openpype.hosts.maya.api import lib
|
|||
from openpype.pipeline.publish import (
|
||||
RepairAction,
|
||||
ValidateContentsOrder,
|
||||
PublishValidationError
|
||||
)
|
||||
|
||||
|
||||
|
|
@ -38,7 +39,8 @@ class ValidateRigJointsHidden(pyblish.api.InstancePlugin):
|
|||
invalid = self.get_invalid(instance)
|
||||
|
||||
if invalid:
|
||||
raise ValueError("Visible joints found: {0}".format(invalid))
|
||||
raise PublishValidationError(
|
||||
"Visible joints found: {0}".format(invalid))
|
||||
|
||||
@classmethod
|
||||
def repair(cls, instance):
|
||||
|
|
|
|||
|
@@ -44,4 +44,8 @@ class ValidateSceneSetWorkspace(pyblish.api.ContextPlugin):
 
         if not is_subdir(scene_name, root_dir):
             raise PublishValidationError(
-                "Maya workspace is not set correctly.")
+                "Maya workspace is not set correctly.\n\n"
+                f"Current workfile `{scene_name}` is not inside the "
+                f"current Maya project root directory `{root_dir}`.\n\n"
+                "Please use Workfile app to re-save."
+            )
@@ -46,24 +46,5 @@ if bool(int(os.environ.get(key, "0"))):
         lowestPriority=True
     )
 
-    # Build a shelf.
-    shelf_preset = settings['maya'].get('project_shelf')
-    if shelf_preset:
-        icon_path = os.path.join(
-            os.environ['OPENPYPE_PROJECT_SCRIPTS'],
-            project_name,
-            "icons")
-        icon_path = os.path.abspath(icon_path)
-
-        for i in shelf_preset['imports']:
-            import_string = "from {} import {}".format(project_name, i)
-            print(import_string)
-            exec(import_string)
-
-        cmds.evalDeferred(
-            "mlib.shelf(name=shelf_preset['name'], iconPath=icon_path,"
-            " preset=shelf_preset)"
-        )
-
-
 print("Finished OpenPype usersetup.")
@@ -43,7 +43,8 @@ from .lib import (
     get_node_data,
     set_node_data,
     update_node_data,
-    create_write_node
+    create_write_node,
+    link_knobs
 )
 from .utils import (
     colorspace_exists_on_node,

@@ -95,6 +96,7 @@ __all__ = (
     "set_node_data",
     "update_node_data",
     "create_write_node",
+    "link_knobs",
 
     "colorspace_exists_on_node",
     "get_colorspace_list",
@@ -3499,3 +3499,27 @@ def create_camera_node_by_version():
         return nuke.createNode("Camera4")
     else:
         return nuke.createNode("Camera2")
+
+
+def link_knobs(knobs, node, group_node):
+    """Link knobs from inside `group_node`"""
+
+    missing_knobs = []
+    for knob in knobs:
+        if knob in group_node.knobs():
+            continue
+
+        if knob not in node.knobs().keys():
+            missing_knobs.append(knob)
+
+        link = nuke.Link_Knob("")
+        link.makeLink(node.name(), knob)
+        link.setName(knob)
+        link.setFlag(0x1000)
+        group_node.addKnob(link)
+
+    if missing_knobs:
+        raise ValueError(
+            "Write node exposed knobs missing:\n\n{}\n\nPlease review"
+            " project settings.".format("\n".join(missing_knobs))
+        )
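The new `link_knobs` helper promotes knobs from a node inside a group onto the group node itself. A minimal usage sketch, assuming a group named `WriteGroup1` that contains a `Write1` node (both names hypothetical):

```python
import nuke

group_node = nuke.toNode("WriteGroup1")  # hypothetical group node
with group_node:
    write_node = nuke.toNode("Write1")   # hypothetical Write node inside it

# Expose two Write knobs on the group; raises ValueError when a requested
# knob does not exist on the inner Write node.
link_knobs(["file_type", "channels"], write_node, group_node)
```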
@@ -44,7 +44,8 @@ from .lib import (
     get_view_process_node,
     get_viewer_config_from_string,
     deprecated,
-    get_filenames_without_hash
+    get_filenames_without_hash,
+    link_knobs
 )
 from .pipeline import (
     list_instances,

@@ -1344,3 +1345,13 @@ def _remove_old_knobs(node):
             node.removeKnob(knob)
         except ValueError:
             pass
+
+
+def exposed_write_knobs(settings, plugin_name, instance_node):
+    exposed_knobs = settings["nuke"]["create"][plugin_name].get(
+        "exposed_knobs", []
+    )
+    if exposed_knobs:
+        instance_node.addKnob(nuke.Text_Knob('', 'Write Knobs'))
+    write_node = nuke.allNodes(group=instance_node, filter="Write")[0]
+    link_knobs(exposed_knobs, write_node, instance_node)
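`exposed_write_knobs` reads the knob list straight from project settings. A sketch of the settings shape it expects, with illustrative knob names:

```python
project_settings = {
    "nuke": {
        "create": {
            "CreateWriteRender": {
                # knob names here are illustrative
                "exposed_knobs": ["file_type", "channels"]
            }
        }
    }
}
# 'instance_node' is the write group node created by the creator plugin
exposed_write_knobs(project_settings, "CreateWriteRender", instance_node)
```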
@@ -12,6 +12,7 @@ from openpype.lib import (
     EnumDef
 )
 from openpype.hosts.nuke import api as napi
+from openpype.hosts.nuke.api.plugin import exposed_write_knobs
 
 
 class CreateWriteImage(napi.NukeWriteCreator):

@@ -132,6 +133,10 @@ class CreateWriteImage(napi.NukeWriteCreator):
                 instance.data_to_store()
             )
 
+            exposed_write_knobs(
+                self.project_settings, self.__class__.__name__, instance_node
+            )
+
             return instance
 
         except Exception as er:

@@ -9,6 +9,7 @@ from openpype.lib import (
     BoolDef
 )
 from openpype.hosts.nuke import api as napi
+from openpype.hosts.nuke.api.plugin import exposed_write_knobs
 
 
 class CreateWritePrerender(napi.NukeWriteCreator):

@@ -119,6 +120,10 @@ class CreateWritePrerender(napi.NukeWriteCreator):
                 instance.data_to_store()
             )
 
+            exposed_write_knobs(
+                self.project_settings, self.__class__.__name__, instance_node
+            )
+
             return instance
 
         except Exception as er:

@@ -9,6 +9,7 @@ from openpype.lib import (
     BoolDef
 )
 from openpype.hosts.nuke import api as napi
+from openpype.hosts.nuke.api.plugin import exposed_write_knobs
 
 
 class CreateWriteRender(napi.NukeWriteCreator):

@@ -113,6 +114,10 @@ class CreateWriteRender(napi.NukeWriteCreator):
                 instance.data_to_store()
             )
 
+            exposed_write_knobs(
+                self.project_settings, self.__class__.__name__, instance_node
+            )
+
             return instance
 
         except Exception as er:
@@ -112,8 +112,6 @@ class AlembicCameraLoader(load.LoaderPlugin):
         project_name = get_current_project_name()
         version_doc = get_version_by_id(project_name, representation["parent"])
 
-        object_name = container["node"]
-
         # get main variables
         version_data = version_doc.get("data", {})
         vname = version_doc.get("name", None)

@@ -139,7 +137,7 @@ class AlembicCameraLoader(load.LoaderPlugin):
         file = get_representation_path(representation).replace("\\", "/")
 
         with maintained_selection():
-            camera_node = nuke.toNode(object_name)
+            camera_node = container["node"]
            camera_node['selected'].setValue(True)
 
             # collect input output dependencies

@@ -154,9 +152,10 @@ class AlembicCameraLoader(load.LoaderPlugin):
             xpos = camera_node.xpos()
             ypos = camera_node.ypos()
             nuke.nodeCopy("%clipboard%")
+            camera_name = camera_node.name()
             nuke.delete(camera_node)
             nuke.nodePaste("%clipboard%")
-            camera_node = nuke.toNode(object_name)
+            camera_node = nuke.toNode(camera_name)
             camera_node.setXYpos(xpos, ypos)
 
             # link to original input nodes
@@ -0,0 +1,77 @@
+import pyblish.api
+
+from openpype.pipeline.publish import get_errored_instances_from_context
+from openpype.hosts.nuke.api.lib import link_knobs
+from openpype.pipeline.publish import (
+    OptionalPyblishPluginMixin,
+    PublishValidationError
+)
+
+
+class RepairExposedKnobs(pyblish.api.Action):
+    label = "Repair"
+    on = "failed"
+    icon = "wrench"
+
+    def process(self, context, plugin):
+        instances = get_errored_instances_from_context(context)
+
+        for instance in instances:
+            child_nodes = (
+                instance.data.get("transientData", {}).get("childNodes")
+                or instance
+            )
+
+            write_group_node = instance.data["transientData"]["node"]
+            # get write node from inside of group
+            write_node = None
+            for x in child_nodes:
+                if x.Class() == "Write":
+                    write_node = x
+
+            plugin_name = plugin.families_mapping[instance.data["family"]]
+            nuke_settings = instance.context.data["project_settings"]["nuke"]
+            create_settings = nuke_settings["create"][plugin_name]
+            exposed_knobs = create_settings["exposed_knobs"]
+            link_knobs(exposed_knobs, write_node, write_group_node)
+
+
+class ValidateExposedKnobs(
+    OptionalPyblishPluginMixin,
+    pyblish.api.InstancePlugin
+):
+    """ Validate write node exposed knobs.
+
+    Compare exposed linked knobs to settings.
+    """
+
+    order = pyblish.api.ValidatorOrder
+    optional = True
+    families = ["render", "prerender", "image"]
+    label = "Validate Exposed Knobs"
+    actions = [RepairExposedKnobs]
+    hosts = ["nuke"]
+    families_mapping = {
+        "render": "CreateWriteRender",
+        "prerender": "CreateWritePrerender",
+        "image": "CreateWriteImage"
+    }
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
+        plugin = self.families_mapping[instance.data["family"]]
+        group_node = instance.data["transientData"]["node"]
+        nuke_settings = instance.context.data["project_settings"]["nuke"]
+        create_settings = nuke_settings["create"][plugin]
+        exposed_knobs = create_settings.get("exposed_knobs", [])
+        unexposed_knobs = []
+        for knob in exposed_knobs:
+            if knob not in group_node.knobs():
+                unexposed_knobs.append(knob)
+
+        if unexposed_knobs:
+            raise PublishValidationError(
+                "Missing exposed knobs: {}".format(unexposed_knobs)
+            )
@@ -10,7 +10,7 @@ from openpype.hosts.nuke.api.lib import (
 
 from openpype.pipeline.publish import (
     PublishXmlValidationError,
-    OptionalPyblishPluginMixin,
+    OptionalPyblishPluginMixin
 )
 
 

@@ -129,7 +129,7 @@ class ValidateNukeWriteNode(
                 and key != "file"
                 and key != "tile_color"
             ):
-                check.append([key, node_value, write_node[key].value()])
+                check.append([key, fixed_values, write_node[key].value()])
 
         if check:
             self._make_error(check)

@@ -137,7 +137,7 @@ class ValidateNukeWriteNode(
     def _make_error(self, check):
         # sourcery skip: merge-assign-and-aug-assign, move-assign-in-block
         dbg_msg = "Write node's knobs values are not correct!\n"
-        msg_add = "Knob '{0}' > Correct: `{1}` > Wrong: `{2}`"
+        msg_add = "Knob '{0}' > Expected: `{1}` > Current: `{2}`"
 
         details = [
             msg_add.format(item[0], item[1], item[2])
@@ -3,12 +3,11 @@ import sys
 import contextlib
 import traceback
 
-from qtpy import QtWidgets
-
 from openpype.lib import env_value_to_bool, Logger
 from openpype.modules import ModulesManager
 from openpype.pipeline import install_host
 from openpype.tools.utils import host_tools
+from openpype.tools.utils import get_openpype_qt_app
 from openpype.tests.lib import is_in_tests
 
 from .launch_logic import ProcessLauncher, stub

@@ -30,7 +29,7 @@ def main(*subprocess_args):
 
     # coloring in StdOutBroker
    os.environ["OPENPYPE_LOG_NO_COLORS"] = "False"
-    app = QtWidgets.QApplication([])
+    app = get_openpype_qt_app()
     app.setQuitOnLastWindowClosed(False)
 
     launcher = ProcessLauncher(subprocess_args)
@@ -717,6 +717,11 @@ def swap_clips(from_clip, to_clip, to_in_frame, to_out_frame):
         bool: True if successfully replaced
 
     """
+    # copy ACES input transform from timeline clip to new media item
+    mediapool_item_from_timeline = from_clip.GetMediaPoolItem()
+    _idt = mediapool_item_from_timeline.GetClipProperty('IDT')
+    to_clip.SetClipProperty('IDT', _idt)
+
     _clip_prop = to_clip.GetClipProperty
     to_clip_name = _clip_prop("File Name")
     # add clip item as take to timeline
@@ -481,14 +481,16 @@ class ClipLoader:
         )
         _clip_property = media_pool_item.GetClipProperty
 
-        source_in = int(_clip_property("Start"))
-        source_out = int(_clip_property("End"))
+        # Read trimming from timeline item
+        timeline_item_in = timeline_item.GetLeftOffset()
+        timeline_item_len = timeline_item.GetDuration()
+        timeline_item_out = timeline_item_in + timeline_item_len
 
         lib.swap_clips(
             timeline_item,
             media_pool_item,
-            source_in,
-            source_out
+            timeline_item_in,
+            timeline_item_out
         )
 
         print("Loading clips: `{}`".format(self.data["clip_name"]))
@@ -1,10 +1,13 @@
 import os
 
-import click
-
 from openpype.lib import get_openpype_execute_args
 from openpype.lib.execute import run_detached_process
-from openpype.modules import OpenPypeModule, ITrayAction, IHostAddon
+from openpype.modules import (
+    click_wrap,
+    OpenPypeModule,
+    ITrayAction,
+    IHostAddon,
+)
 
 STANDALONEPUBLISH_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

@@ -37,10 +40,10 @@ class StandAlonePublishAddon(OpenPypeModule, ITrayAction, IHostAddon):
         run_detached_process(args)
 
     def cli(self, click_group):
-        click_group.add_command(cli_main)
+        click_group.add_command(cli_main.to_click_obj())
 
 
-@click.group(
+@click_wrap.group(
     StandAlonePublishAddon.name,
     help="StandalonePublisher related commands.")
 def cli_main():
@@ -80,6 +80,7 @@ class ValidateOutputMaps(pyblish.api.InstancePlugin):
                 self.log.warning(f"Disabling texture instance: "
                                  f"{image_instance}")
                 image_instance.data["active"] = False
+                image_instance.data["publish"] = False
                 image_instance.data["integrate"] = False
                 representation.setdefault("tags", []).append("delete")
                 continue
@@ -1,10 +1,13 @@
 import os
 
-import click
-
 from openpype.lib import get_openpype_execute_args
 from openpype.lib.execute import run_detached_process
-from openpype.modules import OpenPypeModule, ITrayAction, IHostAddon
+from openpype.modules import (
+    click_wrap,
+    OpenPypeModule,
+    ITrayAction,
+    IHostAddon,
+)
 
 TRAYPUBLISH_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

@@ -38,10 +41,12 @@ class TrayPublishAddon(OpenPypeModule, IHostAddon, ITrayAction):
         run_detached_process(args)
 
     def cli(self, click_group):
-        click_group.add_command(cli_main)
+        click_group.add_command(cli_main.to_click_obj())
 
 
-@click.group(TrayPublishAddon.name, help="TrayPublisher related commands.")
+@click_wrap.group(
+    TrayPublishAddon.name,
+    help="TrayPublisher related commands.")
 def cli_main():
     pass
@@ -32,6 +32,7 @@ SHARED_DATA_KEY = "openpype.traypublisher.instances"
 
 class HiddenTrayPublishCreator(HiddenCreator):
     host_name = "traypublisher"
+    settings_category = "traypublisher"
 
     def collect_instances(self):
         instances_by_identifier = cache_and_get_instances(

@@ -68,6 +69,8 @@ class HiddenTrayPublishCreator(HiddenCreator):
 class TrayPublishCreator(Creator):
     create_allow_context_change = True
     host_name = "traypublisher"
+    settings_category = "traypublisher"
 
     def collect_instances(self):
         instances_by_identifier = cache_and_get_instances(
@@ -221,9 +223,16 @@ class SettingsCreator(TrayPublishCreator):
         ):
             filtered_instance_data.append(instance)
 
-        asset_names = {
-            instance["asset"]
-            for instance in filtered_instance_data}
+        if AYON_SERVER_ENABLED:
+            asset_names = {
+                instance["folderPath"]
+                for instance in filtered_instance_data
+            }
+        else:
+            asset_names = {
+                instance["asset"]
+                for instance in filtered_instance_data
+            }
         subset_names = {
             instance["subset"]
             for instance in filtered_instance_data}

@@ -231,7 +240,10 @@ class SettingsCreator(TrayPublishCreator):
             asset_names, subset_names
         )
         for instance in filtered_instance_data:
-            asset_name = instance["asset"]
+            if AYON_SERVER_ENABLED:
+                asset_name = instance["folderPath"]
+            else:
+                asset_name = instance["asset"]
             subset_name = instance["subset"]
             version = subset_docs_by_asset_id[asset_name][subset_name]
             instance["creator_attributes"]["version_to_use"] = version
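Both branches above differ only in which key carries the asset identifier. The same switch can be read as this compact equivalent, shown only to make the pattern explicit (not how the PR writes it):

```python
key = "folderPath" if AYON_SERVER_ENABLED else "asset"
asset_names = {instance[key] for instance in filtered_instance_data}
```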
@@ -381,15 +381,19 @@ or updating already created. Publishing will create OTIO file.
         """
         self.asset_name_check = []
 
-        tracks = otio_timeline.each_child(
-            descended_from_type=otio.schema.Track
-        )
+        tracks = [
+            track for track in otio_timeline.each_child(
+                descended_from_type=otio.schema.Track)
+            if track.kind == "Video"
+        ]
 
-        # media data for audio sream and reference solving
+        # media data for audio stream and reference solving
         media_data = self._get_media_source_metadata(media_path)
 
         for track in tracks:
             # set track name
             track.name = f"{sequence_file_name} - {otio_timeline.name}"
 
             try:
                 track_start_frame = (
                     abs(track.source_range.start_time.value)

@@ -398,19 +402,19 @@ or updating already created. Publishing will create OTIO file.
             except AttributeError:
                 track_start_frame = 0
 
-            for clip in track.each_child():
-                if not self._validate_clip_for_processing(clip):
+            for otio_clip in track.each_child():
+                if not self._validate_clip_for_processing(otio_clip):
                     continue
 
                 # get available frames info to clip data
-                self._create_otio_reference(clip, media_path, media_data)
+                self._create_otio_reference(otio_clip, media_path, media_data)
 
                 # convert timeline range to source range
-                self._restore_otio_source_range(clip)
+                self._restore_otio_source_range(otio_clip)
 
                 base_instance_data = self._get_base_instance_data(
-                    clip,
+                    otio_clip,
                     instance_data,
                     track_start_frame
                 )

@@ -429,7 +433,7 @@ or updating already created. Publishing will create OTIO file.
                     continue
 
                 instance = self._make_subset_instance(
-                    clip,
+                    otio_clip,
                     _fpreset,
                     deepcopy(base_instance_data),
                     parenting_data
@@ -79,6 +79,7 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
             clip for clip in otio_timeline.each_child(
                 descended_from_type=otio.schema.Clip)
             if clip.name == otio_clip.name
+            if clip.parent().kind == "Video"
         ]
 
         otio_clip = clips.pop()
@@ -216,6 +216,11 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
             instance.data["thumbnailSource"] = first_filepath
 
         review_representation["tags"].append("review")
+
+        # Adding "review" to representation name since it can clash with main
+        # representation if they share the same extension.
+        review_representation["outputName"] = "review"
+
         self.log.debug("Representation {} was marked for review. {}".format(
             review_representation["name"], review_path
         ))
@@ -1 +1 @@
-Subproject commit 63266607ceb972a61484f046634ddfc9eb0b5757
+Subproject commit a4755d2869694fcf58c98119298cde8d204e2ce4
@@ -64,7 +64,7 @@ class CollectRenderInstances(pyblish.api.InstancePlugin):
 
         new_data = new_instance.data
 
-        new_data["asset"] = seq_name
+        new_data["asset"] = f"/{s.get('output')}"
         new_data["setMembers"] = seq_name
         new_data["family"] = "render"
         new_data["families"] = ["render", "review"]
@@ -1,8 +1,6 @@
 import os
 
-import click
-
-from openpype.modules import OpenPypeModule, IHostAddon
+from openpype.modules import click_wrap, OpenPypeModule, IHostAddon
 
 WEBPUBLISHER_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

@@ -38,10 +36,10 @@ class WebpublisherAddon(OpenPypeModule, IHostAddon):
         )
 
     def cli(self, click_group):
-        click_group.add_command(cli_main)
+        click_group.add_command(cli_main.to_click_obj())
 
 
-@click.group(
+@click_wrap.group(
     WebpublisherAddon.name,
     help="Webpublisher related commands.")
 def cli_main():

@@ -49,10 +47,10 @@ def cli_main():
 
 
 @cli_main.command()
-@click.argument("path")
-@click.option("-u", "--user", help="User email address")
-@click.option("-p", "--project", help="Project")
-@click.option("-t", "--targets", help="Targets", default=None,
+@click_wrap.argument("path")
+@click_wrap.option("-u", "--user", help="User email address")
+@click_wrap.option("-p", "--project", help="Project")
+@click_wrap.option("-t", "--targets", help="Targets", default=None,
               multiple=True)
 def publish(project, path, user=None, targets=None):
     """Start publishing (Inner command).

@@ -67,11 +65,11 @@ def publish(project, path, user=None, targets=None):
 
 
 @cli_main.command()
-@click.argument("path")
-@click.option("-p", "--project", help="Project")
-@click.option("-h", "--host", help="Host")
-@click.option("-u", "--user", help="User email address")
-@click.option("-t", "--targets", help="Targets", default=None,
+@click_wrap.argument("path")
+@click_wrap.option("-p", "--project", help="Project")
+@click_wrap.option("-h", "--host", help="Host")
+@click_wrap.option("-u", "--user", help="User email address")
+@click_wrap.option("-t", "--targets", help="Targets", default=None,
              multiple=True)
 def publishfromapp(project, path, host, user=None, targets=None):
     """Start publishing through application (Inner command).

@@ -86,10 +84,10 @@ def publishfromapp(project, path, host, user=None, targets=None):
 
 
 @cli_main.command()
-@click.option("-e", "--executable", help="Executable")
-@click.option("-u", "--upload_dir", help="Upload dir")
-@click.option("-h", "--host", help="Host", default=None)
-@click.option("-p", "--port", help="Port", default=None)
+@click_wrap.option("-e", "--executable", help="Executable")
+@click_wrap.option("-u", "--upload_dir", help="Upload dir")
+@click_wrap.option("-h", "--host", help="Host", default=None)
+@click_wrap.option("-p", "--port", help="Port", default=None)
 def webserver(executable, upload_dir, host=None, port=None):
     """Start service for communication with Webpublish Front end.
@@ -6,13 +6,13 @@ def requests_post(*args, **kwargs):
     """Wrap request post method.
 
     Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
-    variable is found. This is useful when Deadline or Muster server are
-    running with self-signed certificates and their certificate is not
+    variable is found. This is useful when Deadline server is
+    running with self-signed certificates and its certificate is not
     added to trusted certificates on client machines.
 
     Warning:
         Disabling SSL certificate validation is defeating one line
-        of defense SSL is providing and it is not recommended.
+        of defense SSL is providing, and it is not recommended.
 
     """
     if "verify" not in kwargs:

@@ -24,13 +24,13 @@ def requests_get(*args, **kwargs):
     """Wrap request get method.
 
     Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
-    variable is found. This is useful when Deadline or Muster server are
-    running with self-signed certificates and their certificate is not
+    variable is found. This is useful when Deadline server is
+    running with self-signed certificates and its certificate is not
     added to trusted certificates on client machines.
 
     Warning:
         Disabling SSL certificate validation is defeating one line
-        of defense SSL is providing and it is not recommended.
+        of defense SSL is providing, and it is not recommended.
 
     """
     if "verify" not in kwargs:
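The docstrings describe behavior that hinges on the `verify` kwarg passed through to `requests`. A minimal sketch of that pattern, not the module's exact code:

```python
import os

import requests


def requests_get(*args, **kwargs):
    # Verify certificates by default; opt out only when the
    # DONT_VERIFY_SSL environment variable is set to a truthy value.
    if "verify" not in kwargs:
        kwargs["verify"] = not os.environ.get("DONT_VERIFY_SSL", False)
    return requests.get(*args, **kwargs)
```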
@@ -44,17 +44,17 @@ XML_CHAR_REF_REGEX_HEX = re.compile(r"&#x?[0-9a-fA-F]+;")
 ARRAY_TYPE_REGEX = re.compile(r"^(int|float|string)\[\d+\]$")
 
 IMAGE_EXTENSIONS = {
-    ".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
-    ".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
-    ".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
-    ".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
-    ".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
-    ".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
-    ".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
-    ".pictor", ".png", ".psd", ".psb", ".psp", ".qtvr", ".ras",
-    ".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
-    ".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
-    ".xpm", ".xwd"
+    ".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave",
+    ".cal", ".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr",
+    ".fits", ".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc",
+    ".icer", ".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
+    ".jng", ".jpeg", ".jpeg-ls", ".jpeg-hdr", ".2000", ".jpg",
+    ".kra", ".logluv", ".mng", ".miff", ".nrrd", ".ora",
+    ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
+    ".pictor", ".png", ".psd", ".psb", ".psp", ".qtvr",
+    ".ras", ".rgbe", ".sgi", ".tga",
+    ".tif", ".tiff", ".tiff/ep", ".tiff/it", ".ufo", ".ufp",
+    ".wbmp", ".webp", ".xr", ".xt", ".xbm", ".xcf", ".xpm", ".xwd"
 }
 
 VIDEO_EXTENSIONS = {
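The set exists for simple extension lookups; the cleanup above also adds `.tif` and removes duplicated entries. A quick illustrative membership check:

```python
import os


def is_image_file(path):
    # Case-insensitive lookup against the IMAGE_EXTENSIONS set above.
    return os.path.splitext(path)[1].lower() in IMAGE_EXTENSIONS


print(is_image_file("plate.0001.EXR"))  # True
print(is_image_file("edit.mov"))        # False
```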
@@ -110,8 +110,9 @@ def get_oiio_info_for_input(filepath, logger=None, subimages=False):
         if line == "</ImageSpec>":
             subimages_lines.append(lines)
             lines = []
+            xml_started = False
 
-    if not xml_started:
+    if not subimages_lines:
         raise ValueError(
             "Failed to read input file \"{}\".\nOutput:\n{}".format(
                 filepath, output

@@ -1226,12 +1227,8 @@ def get_rescaled_command_arguments(
     target_par = target_par or 1.0
     input_par = 1.0
 
-    # ffmpeg command
-    input_file_metadata = get_ffprobe_data(input_path, logger=log)
-    stream = input_file_metadata["streams"][0]
-    input_width = int(stream["width"])
-    input_height = int(stream["height"])
-    stream_input_par = stream.get("sample_aspect_ratio")
+    input_height, input_width, stream_input_par = _get_image_dimensions(
+        application, input_path, log)
     if stream_input_par:
         input_par = (
             float(stream_input_par.split(":")[0])

@@ -1344,6 +1341,48 @@ def get_rescaled_command_arguments(
     return command_args
 
 
+def _get_image_dimensions(application, input_path, log):
+    """Uses 'ffprobe' first and then 'oiiotool' if available to get dim.
+
+    Args:
+        application (str): "oiiotool"|"ffmpeg"
+        input_path (str): path to image file
+        log (Optional[logging.Logger]): Logger used for logging.
+    Returns:
+        (tuple) (int, int, dict) - (height, width, sample_aspect_ratio)
+    Raises:
+        RuntimeError if image dimensions couldn't be parsed out.
+    """
+    # ffmpeg command
+    input_file_metadata = get_ffprobe_data(input_path, logger=log)
+    input_width = input_height = 0
+    stream = next(
+        (
+            s for s in input_file_metadata["streams"]
+            if s.get("codec_type") == "video"
+        ),
+        {}
+    )
+    if stream:
+        input_width = int(stream["width"])
+        input_height = int(stream["height"])
+
+    # fallback for weird files with width=0, height=0
+    if (input_width == 0 or input_height == 0) and application == "oiiotool":
+        # Load info about file from oiio tool
+        input_info = get_oiio_info_for_input(input_path, logger=log)
+        if input_info:
+            input_width = int(input_info["width"])
+            input_height = int(input_info["height"])
+
+    if input_width == 0 or input_height == 0:
+        raise RuntimeError("Couldn't read {} either "
+                           "with ffprobe or oiiotool".format(input_path))
+
+    stream_input_par = stream.get("sample_aspect_ratio")
+    return input_height, input_width, stream_input_par
+
+
 def convert_color_values(application, color_value):
     """Get color mapping for ffmpeg and oiiotool.
     Args:
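For clarity, a hypothetical call to the new helper, including how the returned `sample_aspect_ratio` string (e.g. `"16:11"`) is turned into a float the same way the caller above does it:

```python
height, width, stream_par = _get_image_dimensions(
    "ffmpeg", "/path/to/plate.0001.exr", log=None  # hypothetical input path
)
pixel_aspect = 1.0
if stream_par:
    numerator, denominator = stream_par.split(":")
    pixel_aspect = float(numerator) / float(denominator)
```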
@@ -1,4 +1,5 @@
 # -*- coding: utf-8 -*-
+from . import click_wrap
 from .interfaces import (
     ILaunchHookPaths,
     IPluginPaths,

@@ -28,6 +29,8 @@ from .base import (
 
 
 __all__ = (
+    "click_wrap",
+
     "ILaunchHookPaths",
     "IPluginPaths",
     "ITrayModule",
@@ -542,7 +542,8 @@ def _load_modules():
     module_dirs.insert(0, current_dir)
 
     addons_dir = os.path.join(os.path.dirname(current_dir), "addons")
-    module_dirs.append(addons_dir)
+    if os.path.exists(addons_dir):
+        module_dirs.append(addons_dir)
 
     ignored_host_names = set(IGNORED_HOSTS_IN_AYON)
     ignored_current_dir_filenames = set(IGNORED_DEFAULT_FILENAMES)

@@ -1332,7 +1333,6 @@ class TrayModulesManager(ModulesManager):
         "user",
         "ftrack",
         "kitsu",
-        "muster",
         "launcher_tool",
         "avalon",
         "clockify",
365
openpype/modules/click_wrap.py
Normal file
@@ -0,0 +1,365 @@
+"""Simplified wrapper for 'click' python module.
+
+Module 'click' is used as main cli handler in AYON/OpenPype. Addons can
+register their own subcommands with options. This wrapper allows defining
+commands and options as with 'click', but without any dependency.
+
+Why not use 'click' directly? Version of 'click' used in AYON/OpenPype
+is not compatible with 'click' version used in some DCCs (e.g. Houdini 20+).
+And updating 'click' would break other DCCs.
+
+How to use it? If you already have cli commands defined in an addon, just
+replace 'click' with 'click_wrap', then modify your addon's 'cli' method to
+convert the 'click_wrap' object to a 'click' object.
+
+Before
+```python
+import click
+from openpype.modules import OpenPypeModule
+
+
+class ExampleAddon(OpenPypeModule):
+    name = "example"
+
+    def cli(self, click_group):
+        click_group.add_command(cli_main)
+
+
+@click.group(ExampleAddon.name, help="Example addon")
+def cli_main():
+    pass
+
+
+@cli_main.command(help="Example command")
+@click.option("--arg1", help="Example argument 1", default="default1")
+@click.option("--arg2", help="Example argument 2", is_flag=True)
+def mycommand(arg1, arg2):
+    print(arg1, arg2)
+```
+
+Now
+```
+from openpype import click_wrap
+from openpype.modules import OpenPypeModule
+
+
+class ExampleAddon(OpenPypeModule):
+    name = "example"
+
+    def cli(self, click_group):
+        click_group.add_command(cli_main.to_click_obj())
+
+
+@click_wrap.group(ExampleAddon.name, help="Example addon")
+def cli_main():
+    pass
+
+
+@cli_main.command(help="Example command")
+@click_wrap.option("--arg1", help="Example argument 1", default="default1")
+@click_wrap.option("--arg2", help="Example argument 2", is_flag=True)
+def mycommand(arg1, arg2):
+    print(arg1, arg2)
+```
+
+
+Added small enhancements:
+- most of the methods can be used as chained calls
+- functions/methods 'command' and 'group' can be used in a way that
+    first argument is callback function and the rest are arguments
+    for click
+
+Example:
+```python
+from openpype import click_wrap
+from openpype.modules import OpenPypeModule
+
+
+class ExampleAddon(OpenPypeModule):
+    name = "example"
+
+    def cli(self, click_group):
+        # Define main command (name 'example')
+        main = click_wrap.group(
+            self._cli_main, name=self.name, help="Example addon"
+        )
+        # Add subcommand (name 'mycommand')
+        (
+            main.command(
+                self._cli_command, name="mycommand", help="Example command"
+            )
+            .option(
+                "--arg1", help="Example argument 1", default="default1"
+            )
+            .option(
+                "--arg2", help="Example argument 2", is_flag=True,
+            )
+        )
+        # Convert main command to click object and add it to parent group
+        click_group.add_command(main.to_click_obj())
+
+    def _cli_main(self):
+        pass
+
+    def _cli_command(self, arg1, arg2):
+        print(arg1, arg2)
+```
+
+```shell
+openpype_console addon example mycommand --arg1 value1 --arg2
+```
+"""
+
+import collections
+
+FUNC_ATTR_NAME = "__ayon_cli_options__"
+
+
+class Command(object):
+    def __init__(self, func, *args, **kwargs):
+        # Command function
+        self._func = func
+        # Command definition arguments
+        self._args = args
+        # Command definition kwargs
+        self._kwargs = kwargs
+        # Both 'options' and 'arguments' are stored in the same variable
+        # - keeps order of options and arguments
+        self._options = getattr(func, FUNC_ATTR_NAME, [])
+
+    def to_click_obj(self):
+        """Converts this object to click object.
+
+        Returns:
+            click.Command: Click command object.
+        """
+        return convert_to_click(self)
+
+    # --- Methods for 'convert_to_click' function ---
+    def get_args(self):
+        """
+        Returns:
+            tuple: Command definition arguments.
+        """
+        return self._args
+
+    def get_kwargs(self):
+        """
+        Returns:
+            dict[str, Any]: Command definition kwargs.
+        """
+        return self._kwargs
+
+    def get_func(self):
+        """
+        Returns:
+            Function: Function to invoke on command trigger.
+        """
+        return self._func
+
+    def iter_options(self):
+        """
+        Yields:
+            tuple[str, tuple, dict]: Option type name with args and kwargs.
+        """
+        for item in self._options:
+            yield item
+    # -----------------------------------------------
+
+    def add_option(self, *args, **kwargs):
+        return self.add_option_by_type("option", *args, **kwargs)
+
+    def add_argument(self, *args, **kwargs):
+        return self.add_option_by_type("argument", *args, **kwargs)
+
+    option = add_option
+    argument = add_argument
+
+    def add_option_by_type(self, option_name, *args, **kwargs):
+        self._options.append((option_name, args, kwargs))
+        return self
+
+
+class Group(Command):
+    def __init__(self, func, *args, **kwargs):
+        super(Group, self).__init__(func, *args, **kwargs)
+        # Store sub-groups and sub-commands in the same variable
+        self._commands = []
+
+    # --- Methods for 'convert_to_click' function ---
+    def iter_commands(self):
+        for command in self._commands:
+            yield command
+    # -----------------------------------------------
+
+    def add_command(self, command):
+        """Add prepared command object as child.
+
+        Args:
+            command (Command): Prepared command object.
+        """
+        if command not in self._commands:
+            self._commands.append(command)
+
+    def add_group(self, group):
+        """Add prepared group object as child.
+
+        Args:
+            group (Group): Prepared group object.
+        """
+        if group not in self._commands:
+            self._commands.append(group)
+
+    def command(self, *args, **kwargs):
+        """Add child command.
+
+        Returns:
+            Union[Command, Function]: New command object, or wrapper function.
+        """
+        return self._add_new(Command, *args, **kwargs)
+
+    def group(self, *args, **kwargs):
+        """Add child group.
+
+        Returns:
+            Union[Group, Function]: New group object, or wrapper function.
+        """
+        return self._add_new(Group, *args, **kwargs)
+
+    def _add_new(self, target_cls, *args, **kwargs):
+        func = None
+        if args and callable(args[0]):
+            args = list(args)
+            func = args.pop(0)
+            args = tuple(args)
+
+        def decorator(_func):
+            out = target_cls(_func, *args, **kwargs)
+            self._commands.append(out)
+            return out
+
+        if func is not None:
+            return decorator(func)
+        return decorator
+
+
+def convert_to_click(obj_to_convert):
+    """Convert wrapped object to click object.
+
+    Args:
+        obj_to_convert (Command): Object to convert to click object.
+
+    Returns:
+        click.Command: Click command object.
+    """
+    import click
+
+    commands_queue = collections.deque()
+    commands_queue.append((obj_to_convert, None))
+    top_obj = None
+    while commands_queue:
+        item = commands_queue.popleft()
+        command_obj, parent_obj = item
+        if not isinstance(command_obj, Command):
+            raise TypeError(
+                "Invalid type '{}' expected 'Command'".format(
+                    type(command_obj)
+                )
+            )
+
+        if isinstance(command_obj, Group):
+            click_obj = (
+                click.group(
+                    *command_obj.get_args(),
+                    **command_obj.get_kwargs()
+                )(command_obj.get_func())
+            )
+
+        else:
+            click_obj = (
+                click.command(
+                    *command_obj.get_args(),
+                    **command_obj.get_kwargs()
+                )(command_obj.get_func())
+            )
+
+        for item in command_obj.iter_options():
+            option_name, args, kwargs = item
+            if option_name == "option":
+                click.option(*args, **kwargs)(click_obj)
+            elif option_name == "argument":
+                click.argument(*args, **kwargs)(click_obj)
+            else:
+                raise ValueError(
+                    "Invalid option name '{}'".format(option_name)
+                )
+
+        if top_obj is None:
+            top_obj = click_obj
+
+        if parent_obj is not None:
+            parent_obj.add_command(click_obj)
+
+        if isinstance(command_obj, Group):
+            for command in command_obj.iter_commands():
+                commands_queue.append((command, click_obj))
+
+    return top_obj
+
+
+def group(*args, **kwargs):
+    func = None
+    if args and callable(args[0]):
+        args = list(args)
+        func = args.pop(0)
+        args = tuple(args)
+
+    def decorator(_func):
+        return Group(_func, *args, **kwargs)
+
+    if func is not None:
+        return decorator(func)
+    return decorator
+
+
+def command(*args, **kwargs):
+    func = None
+    if args and callable(args[0]):
+        args = list(args)
+        func = args.pop(0)
+        args = tuple(args)
+
+    def decorator(_func):
+        return Command(_func, *args, **kwargs)
+
+    if func is not None:
+        return decorator(func)
+    return decorator
+
+
+def argument(*args, **kwargs):
+    def decorator(func):
+        return _add_option_to_func(
+            func, "argument", *args, **kwargs
+        )
+    return decorator
+
+
+def option(*args, **kwargs):
+    def decorator(func):
+        return _add_option_to_func(
+            func, "option", *args, **kwargs
+        )
+    return decorator
+
+
+def _add_option_to_func(func, option_name, *args, **kwargs):
+    if isinstance(func, Command):
+        func.add_option_by_type(option_name, *args, **kwargs)
+        return func
+
+    if not hasattr(func, FUNC_ATTR_NAME):
+        setattr(func, FUNC_ATTR_NAME, [])
+    cli_options = getattr(func, FUNC_ATTR_NAME)
+    cli_options.append((option_name, args, kwargs))
+    return func
@@ -34,8 +34,8 @@ def requests_post(*args, **kwargs):
     """Wrap request post method.
 
     Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
-    variable is found. This is useful when Deadline or Muster server are
-    running with self-signed certificates and their certificate is not
+    variable is found. This is useful when Deadline server is
+    running with self-signed certificates and its certificate is not
     added to trusted certificates on client machines.
 
     Warning:

@@ -55,8 +55,8 @@ def requests_get(*args, **kwargs):
     """Wrap request get method.
 
     Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
-    variable is found. This is useful when Deadline or Muster server are
-    running with self-signed certificates and their certificate is not
+    variable is found. This is useful when Deadline server is
+    running with self-signed certificates and its certificate is not
     added to trusted certificates on client machines.
 
     Warning:
@@ -82,6 +82,7 @@ class AfterEffectsSubmitDeadline(
             "FTRACK_API_KEY",
             "FTRACK_API_USER",
             "FTRACK_SERVER",
+            "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",

@@ -104,6 +104,7 @@ class BlenderSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
             "FTRACK_API_USER",
             "FTRACK_SERVER",
             "OPENPYPE_SG_USER",
+            "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",

@@ -223,6 +223,7 @@ class FusionSubmitDeadline(
             "FTRACK_API_KEY",
             "FTRACK_API_USER",
             "FTRACK_SERVER",
+            "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",

@@ -275,6 +275,7 @@ class HarmonySubmitDeadline(
             "FTRACK_API_KEY",
             "FTRACK_API_USER",
             "FTRACK_SERVER",
+            "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",

@@ -110,6 +110,7 @@ class HoudiniCacheSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline
             "FTRACK_API_USER",
             "FTRACK_SERVER",
             "OPENPYPE_SG_USER",
+            "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",

@@ -205,6 +205,7 @@ class HoudiniSubmitDeadline(
             "FTRACK_API_USER",
             "FTRACK_SERVER",
             "OPENPYPE_SG_USER",
+            "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",
@@ -15,6 +15,7 @@ from openpype.pipeline import (
 from openpype.pipeline.publish.lib import (
     replace_with_published_scene_path
 )
+from openpype.pipeline.publish import KnownPublishError
 from openpype_modules.deadline import abstract_submit_deadline
 from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
 from openpype.lib import is_running_from_build

@@ -54,7 +55,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
                                     cls.priority)
         cls.chuck_size = settings.get("chunk_size", cls.chunk_size)
         cls.group = settings.get("group", cls.group)
 
+    # TODO: multiple camera instance, separate job infos
     def get_job_info(self):
         job_info = DeadlineJobInfo(Plugin="3dsmax")
 

@@ -71,7 +72,6 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
 
         src_filepath = context.data["currentFile"]
         src_filename = os.path.basename(src_filepath)
 
         job_info.Name = "%s - %s" % (src_filename, instance.name)
         job_info.BatchName = src_filename
         job_info.Plugin = instance.data["plugin"]

@@ -103,6 +103,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
             "FTRACK_API_USER",
             "FTRACK_SERVER",
             "OPENPYPE_SG_USER",
+            "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",

@@ -134,11 +135,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
 
         # Add list of expected files to job
         # ---------------------------------
-        exp = instance.data.get("expectedFiles")
-
-        for filepath in self._iter_expected_files(exp):
-            job_info.OutputDirectory += os.path.dirname(filepath)
-            job_info.OutputFilename += os.path.basename(filepath)
+        if not instance.data.get("multiCamera"):
+            exp = instance.data.get("expectedFiles")
+            for filepath in self._iter_expected_files(exp):
+                job_info.OutputDirectory += os.path.dirname(filepath)
+                job_info.OutputFilename += os.path.basename(filepath)
 
         return job_info
 

@@ -163,11 +164,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
     def process_submission(self):
 
         instance = self._instance
-        filepath = self.scene_path
+        filepath = instance.context.data["currentFile"]
 
         files = instance.data["expectedFiles"]
         if not files:
-            raise RuntimeError("No Render Elements found!")
+            raise KnownPublishError("No Render Elements found!")
         first_file = next(self._iter_expected_files(files))
         output_dir = os.path.dirname(first_file)
         instance.data["outputDir"] = output_dir

@@ -181,9 +182,17 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
 
         self.log.debug("Submitting 3dsMax render..")
         project_settings = instance.context.data["project_settings"]
-        payload = self._use_published_name(payload_data, project_settings)
-        job_info, plugin_info = payload
-        self.submit(self.assemble_payload(job_info, plugin_info))
+        if instance.data.get("multiCamera"):
+            self.log.debug("Submitting jobs for multiple cameras..")
+            payload = self._use_published_name_for_multiples(
+                payload_data, project_settings)
+            job_infos, plugin_infos = payload
+            for job_info, plugin_info in zip(job_infos, plugin_infos):
+                self.submit(self.assemble_payload(job_info, plugin_info))
+        else:
+            payload = self._use_published_name(payload_data, project_settings)
+            job_info, plugin_info = payload
+            self.submit(self.assemble_payload(job_info, plugin_info))
 
     def _use_published_name(self, data, project_settings):
         # Not all hosts can import these modules.

@@ -206,7 +215,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
 
         files = instance.data.get("expectedFiles")
         if not files:
-            raise RuntimeError("No render elements found")
+            raise KnownPublishError("No render elements found")
         first_file = next(self._iter_expected_files(files))
         old_output_dir = os.path.dirname(first_file)
         output_beauty = RenderSettings().get_render_output(instance.name,

@@ -218,6 +227,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
         plugin_data["RenderOutput"] = beauty_name
         # as 3dsmax has version with different languages
         plugin_data["Language"] = "ENU"
 
         renderer_class = get_current_renderer()
 
         renderer = str(renderer_class).split(":")[0]

@@ -249,6 +259,125 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
 
         return job_info, plugin_info
 
+    def get_job_info_through_camera(self, camera):
+        """Get the job parameters for deadline submission when
+        multi-camera is enabled.
+
+        Args:
+            camera (str): name of the camera the job is submitted for.
+        """
+        instance = self._instance
+        context = instance.context
+        job_info = copy.deepcopy(self.job_info)
+        exp = instance.data.get("expectedFiles")
+
+        src_filepath = context.data["currentFile"]
+        src_filename = os.path.basename(src_filepath)
+        job_info.Name = "%s - %s - %s" % (
+            src_filename, instance.name, camera)
+        # set the output filepath with the relative camera
+        for filepath in self._iter_expected_files(exp):
+            if camera not in filepath:
+                continue
+            job_info.OutputDirectory += os.path.dirname(filepath)
+            job_info.OutputFilename += os.path.basename(filepath)
+
+        return job_info
+
+    def get_plugin_info_through_camera(self, camera):
+        """Get the plugin parameters for deadline submission when
+        multi-camera is enabled.
+
+        Args:
+            camera (str): name of the camera the job is submitted for.
+        """
+        from openpype.hosts.max.api.lib import get_current_renderer
+        from openpype.hosts.max.api.lib_rendersettings import RenderSettings
+
+        instance = self._instance
+        # set the target camera
+        plugin_info = copy.deepcopy(self.plugin_info)
+
+        plugin_data = {}
+        # set the output filepath with the relative camera
+        if instance.data.get("multiCamera"):
+            scene_filepath = instance.context.data["currentFile"]
+            scene_filename = os.path.basename(scene_filepath)
+            scene_directory = os.path.dirname(scene_filepath)
+            current_filename, ext = os.path.splitext(scene_filename)
+            camera_scene_name = f"{current_filename}_{camera}{ext}"
+            camera_scene_filepath = os.path.join(
+                scene_directory, f"_{current_filename}", camera_scene_name)
+            plugin_data["SceneFile"] = camera_scene_filepath
+
+        files = instance.data.get("expectedFiles")
+        if not files:
+            raise KnownPublishError("No render elements found")
+        first_file = next(self._iter_expected_files(files))
+        old_output_dir = os.path.dirname(first_file)
+        rgb_output = RenderSettings().get_batch_render_output(camera)    # noqa
+        rgb_bname = os.path.basename(rgb_output)
+        dir = os.path.dirname(first_file)
+        beauty_name = f"{dir}/{rgb_bname}"
+        beauty_name = beauty_name.replace("\\", "/")
+        plugin_info["RenderOutput"] = beauty_name
+        renderer_class = get_current_renderer()
+
+        renderer = str(renderer_class).split(":")[0]
+        if renderer in [
+            "ART_Renderer",
+            "Redshift_Renderer",
+            "V_Ray_6_Hotfix_3",
+            "V_Ray_GPU_6_Hotfix_3",
+            "Default_Scanline_Renderer",
+            "Quicksilver_Hardware_Renderer",
+        ]:
+            render_elem_list = RenderSettings().get_batch_render_elements(
+                instance.name, old_output_dir, camera
+            )
+            for i, element in enumerate(render_elem_list):
+                if camera in element:
+                    elem_bname = os.path.basename(element)
+                    new_elem = f"{dir}/{elem_bname}"
+                    new_elem = new_elem.replace("/", "\\")
+                    plugin_info["RenderElementOutputFilename%d" % i] = new_elem    # noqa
+
+        if camera:
+            # set the default camera and target camera
+            # (weird parameters from max)
+            plugin_data["Camera"] = camera
+            plugin_data["Camera1"] = camera
+            plugin_data["Camera0"] = None
+
+        plugin_info.update(plugin_data)
+        return plugin_info
+
+    def _use_published_name_for_multiples(self, data, project_settings):
+        """Process the parameters submission for deadline when
+        user enables multi-cameras option.
+
+        Returns:
+            tuple[list, list]: a list of job infos and a list of
+                plugin infos, one pair per camera.
+        """
+        from openpype.hosts.max.api.lib import get_multipass_setting
+
+        job_info_list = []
+        plugin_info_list = []
+        instance = self._instance
+        cameras = instance.data.get("cameras", [])
+        plugin_data = {}
+        multipass = get_multipass_setting(project_settings)
+        if multipass:
+            plugin_data["DisableMultipass"] = 0
+        else:
+            plugin_data["DisableMultipass"] = 1
+        for cam in cameras:
+            job_info = self.get_job_info_through_camera(cam)
+            plugin_info = self.get_plugin_info_through_camera(cam)
+            plugin_info.update(plugin_data)
+            job_info_list.append(job_info)
+            plugin_info_list.append(plugin_info)
+
+        return job_info_list, plugin_info_list
+
     def from_published_scene(self, replace_in_path=True):
         instance = self._instance
         if instance.data["renderer"] == "Redshift_Renderer":
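The multi-camera branch above is driven entirely by two pieces of instance data. Illustratively (values hypothetical), a collector would have to set something like this for the per-camera submission loop to run:

```python
instance.data["multiCamera"] = True
instance.data["cameras"] = ["renderCam_A", "renderCam_B"]  # hypothetical names
```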
@@ -201,6 +201,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
             "FTRACK_API_USER",
             "FTRACK_SERVER",
             "OPENPYPE_SG_USER",
+            "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",

@@ -108,6 +108,7 @@ class MayaSubmitRemotePublishDeadline(
             if key in os.environ}, **legacy_io.Session)
 
         # TODO replace legacy_io with context.data
+        environment["AVALON_DB"] = os.environ.get("AVALON_DB")
         environment["AVALON_PROJECT"] = project_name
         environment["AVALON_ASSET"] = instance.context.data["asset"]
         environment["AVALON_TASK"] = instance.context.data["task"]
|
@ -47,6 +47,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
|
|||
env_allowed_keys = []
|
||||
env_search_replace_values = {}
|
||||
workfile_dependency = True
|
||||
use_published_workfile = True
|
||||
|
||||
@classmethod
|
||||
def get_attribute_defs(cls):
|
||||
|
|
@ -85,8 +86,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
|
|||
),
|
||||
BoolDef(
|
||||
"workfile_dependency",
|
||||
default=True,
|
||||
default=cls.workfile_dependency,
|
||||
label="Workfile Dependency"
|
||||
),
|
||||
BoolDef(
|
||||
"use_published_workfile",
|
||||
default=cls.use_published_workfile,
|
||||
label="Use Published Workfile"
|
||||
)
|
||||
]
|
||||
|
||||
|
|
@ -125,20 +131,11 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
|
|||
render_path = instance.data['path']
|
||||
script_path = context.data["currentFile"]
|
||||
|
||||
for item_ in context:
|
||||
if "workfile" in item_.data["family"]:
|
||||
template_data = item_.data.get("anatomyData")
|
||||
rep = item_.data.get("representations")[0].get("name")
|
||||
template_data["representation"] = rep
|
||||
template_data["ext"] = rep
|
||||
template_data["comment"] = None
|
||||
anatomy_filled = context.data["anatomy"].format(template_data)
|
||||
template_filled = anatomy_filled["publish"]["path"]
|
||||
script_path = os.path.normpath(template_filled)
|
||||
|
||||
self.log.info(
|
||||
"Using published scene for render {}".format(script_path)
|
||||
)
|
||||
use_published_workfile = instance.data["attributeValues"].get(
|
||||
"use_published_workfile", self.use_published_workfile
|
||||
)
|
||||
if use_published_workfile:
|
||||
script_path = self._get_published_workfile_path(context)
|
||||
|
||||
# only add main rendering job if target is not frames_farm
|
||||
r_job_response_json = None
|
||||
|
|
@@ -197,6 +194,44 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
             families.insert(0, "prerender")
         instance.data["families"] = families

+    def _get_published_workfile_path(self, context):
+        """This method is temporary while the class is not inherited from
+        AbstractSubmitDeadline"""
+        for instance in context:
+            if (
+                instance.data["family"] != "workfile"
+                # Disabled instances won't be integrated
+                or instance.data("publish") is False
+            ):
+                continue
+            template_data = instance.data["anatomyData"]
+            # Expect workfile instance has only one representation
+            representation = instance.data["representations"][0]
+            # Get workfile extension
+            repre_file = representation["files"]
+            self.log.info(repre_file)
+            ext = os.path.splitext(repre_file)[1].lstrip(".")
+
+            # Fill template data
+            template_data["representation"] = representation["name"]
+            template_data["ext"] = ext
+            template_data["comment"] = None
+
+            anatomy = context.data["anatomy"]
+            # WARNING Hardcoded template name 'publish' > may not be used
+            template_obj = anatomy.templates_obj["publish"]["path"]
+
+            template_filled = template_obj.format(template_data)
+            script_path = os.path.normpath(template_filled)
+            self.log.info(
+                "Using published scene for render {}".format(
+                    script_path
+                )
+            )
+            return script_path
+
+        return None
+
     def payload_submit(
         self,
         instance,
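The new `_get_published_workfile_path` resolves the script path through the project anatomy's 'publish' path template instead of scanning representations by hand. Roughly, the template fill works like this (the format string below is only a stand-in for `anatomy.templates_obj["publish"]["path"]`; all values are illustrative):

    import os

    # Stand-in for the anatomy 'publish' path template object.
    publish_path_template = (
        "{root}/{project}/{asset}/publish/{subset}/"
        "v{version:0>3}/{subset}_v{version:0>3}.{ext}"
    )

    template_data = {
        "root": "/projects", "project": "demo", "asset": "sh010",
        "subset": "workfileCompositing", "version": 12, "ext": "nk",
    }

    script_path = os.path.normpath(
        publish_path_template.format(**template_data)
    )
    print(script_path)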
@@ -341,6 +376,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
         keys = [
             "PYTHONPATH",
             "PATH",
             "AVALON_DB",
             "AVALON_PROJECT",
             "AVALON_ASSET",
             "AVALON_TASK",
@@ -99,10 +99,6 @@ class ProcessSubmittedCacheJobOnFarm(pyblish.api.InstancePlugin,
     def _submit_deadline_post_job(self, instance, job):
         """Submit publish job to Deadline.

-        Deadline specific code separated from :meth:`process` for sake of
-        more universal code. Muster post job is sent directly by Muster
-        submitter, so this type of code isn't necessary for it.
-
         Returns:
             (str): deadline_publish_job_id
         """
@@ -135,6 +131,7 @@ class ProcessSubmittedCacheJobOnFarm(pyblish.api.InstancePlugin,
             create_metadata_path(instance, anatomy)

         environment = {
             "AVALON_DB": os.environ["AVALON_DB"],
             "AVALON_PROJECT": instance.context.data["projectName"],
             "AVALON_ASSET": instance.context.data["asset"],
             "AVALON_TASK": instance.context.data["task"],
@@ -59,21 +59,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
                                 publish.ColormanagedPyblishPluginMixin):
     """Process Job submitted on farm.

-    These jobs are dependent on a deadline or muster job
+    These jobs are dependent on a deadline job
     submission prior to this plug-in.

-    - In case of Deadline, it creates dependent job on farm publishing
-      rendered image sequence.
-
-    - In case of Muster, there is no need for such thing as dependent job,
-      post action will be executed and rendered sequence will be published.
+    It creates dependent job on farm publishing rendered image sequence.

     Options in instance.data:
         - deadlineSubmissionJob (dict, Required): The returned .json
           data from the job submission to deadline.

-        - musterSubmissionJob (dict, Required): same as deadline.
-
         - outputDir (str, Required): The output directory where the metadata
           file should be generated. It's assumed that this will also be
           final folder containing the output files.
@@ -161,10 +155,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
     def _submit_deadline_post_job(self, instance, job, instances):
         """Submit publish job to Deadline.

-        Deadline specific code separated from :meth:`process` for sake of
-        more universal code. Muster post job is sent directly by Muster
-        submitter, so this type of code isn't necessary for it.
-
         Returns:
             (str): deadline_publish_job_id
         """
@@ -197,6 +187,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             create_metadata_path(instance, anatomy)

         environment = {
             "AVALON_DB": os.environ["AVALON_DB"],
             "AVALON_PROJECT": instance.context.data["projectName"],
             "AVALON_ASSET": instance.context.data["asset"],
             "AVALON_TASK": instance.context.data["task"],
@@ -331,7 +322,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,

         return deadline_publish_job_id

-
     def process(self, instance):
         # type: (pyblish.api.Instance) -> None
         """Process plugin.
@@ -348,151 +338,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
             self.log.debug("Skipping local instance.")
             return

-        data = instance.data.copy()
-        context = instance.context
-        self.context = context
-        self.anatomy = instance.context.data["anatomy"]
-
-        asset = data.get("asset") or context.data["asset"]
-        subset = data.get("subset")
-
-        start = instance.data.get("frameStart")
-        if start is None:
-            start = context.data["frameStart"]
-
-        end = instance.data.get("frameEnd")
-        if end is None:
-            end = context.data["frameEnd"]
-
-        handle_start = instance.data.get("handleStart")
-        if handle_start is None:
-            handle_start = context.data["handleStart"]
-
-        handle_end = instance.data.get("handleEnd")
-        if handle_end is None:
-            handle_end = context.data["handleEnd"]
-
-        fps = instance.data.get("fps")
-        if fps is None:
-            fps = context.data["fps"]
-
-        if data.get("extendFrames", False):
-            start, end = self._extend_frames(
-                asset,
-                subset,
-                start,
-                end,
-                data["overrideExistingFrame"])
-
-        try:
-            source = data["source"]
-        except KeyError:
-            source = context.data["currentFile"]
-
-        success, rootless_path = (
-            self.anatomy.find_root_template_from_path(source)
-        )
-        if success:
-            source = rootless_path
-
-        else:
-            # `rootless_path` is not set to `source` if none of roots match
-            self.log.warning((
-                "Could not find root path for remapping \"{}\"."
-                " This may cause issues."
-            ).format(source))
-
-        family = "render"
-        if ("prerender" in instance.data["families"] or
-                "prerender.farm" in instance.data["families"]):
-            family = "prerender"
-        families = [family]
-
-        # pass review to families if marked as review
-        do_not_add_review = False
-        if data.get("review"):
-            families.append("review")
-        elif data.get("review") is False:
-            self.log.debug("Instance has review explicitly disabled.")
-            do_not_add_review = True
-
-        instance_skeleton_data = {
-            "family": family,
-            "subset": subset,
-            "families": families,
-            "asset": asset,
-            "frameStart": start,
-            "frameEnd": end,
-            "handleStart": handle_start,
-            "handleEnd": handle_end,
-            "frameStartHandle": start - handle_start,
-            "frameEndHandle": end + handle_end,
-            "comment": instance.data["comment"],
-            "fps": fps,
-            "source": source,
-            "extendFrames": data.get("extendFrames"),
-            "overrideExistingFrame": data.get("overrideExistingFrame"),
-            "pixelAspect": data.get("pixelAspect", 1),
-            "resolutionWidth": data.get("resolutionWidth", 1920),
-            "resolutionHeight": data.get("resolutionHeight", 1080),
-            "multipartExr": data.get("multipartExr", False),
-            "jobBatchName": data.get("jobBatchName", ""),
-            "useSequenceForReview": data.get("useSequenceForReview", True),
-            # map inputVersions `ObjectId` -> `str` so json supports it
-            "inputVersions": list(map(str, data.get("inputVersions", []))),
-            "colorspace": instance.data.get("colorspace"),
-            "stagingDir_persistent": instance.data.get(
-                "stagingDir_persistent", False
-            )
-        }
-
-        # skip locking version if we are creating v01
-        instance_version = instance.data.get("version")  # take this if exists
-        if instance_version != 1:
-            instance_skeleton_data["version"] = instance_version
-
-        # transfer specific families from original instance to new render
-        for item in self.families_transfer:
-            if item in instance.data.get("families", []):
-                instance_skeleton_data["families"] += [item]
-
-        # transfer specific properties from original instance based on
-        # mapping dictionary `instance_transfer`
-        for key, values in self.instance_transfer.items():
-            if key in instance.data.get("families", []):
-                for v in values:
-                    instance_skeleton_data[v] = instance.data.get(v)
-
-        # look into instance data if representations are not having any
-        # which are having tag `publish_on_farm` and include them
-        for repre in instance.data.get("representations", []):
-            staging_dir = repre.get("stagingDir")
-            if staging_dir:
-                success, rootless_staging_dir = (
-                    self.anatomy.find_root_template_from_path(
-                        staging_dir
-                    )
-                )
-                if success:
-                    repre["stagingDir"] = rootless_staging_dir
-                else:
-                    self.log.warning((
-                        "Could not find root path for remapping \"{}\"."
-                        " This may cause issues on farm."
-                    ).format(staging_dir))
-                    repre["stagingDir"] = staging_dir
-
-            if "publish_on_farm" in repre.get("tags"):
-                # create representations attribute of not there
-                if "representations" not in instance_skeleton_data.keys():
-                    instance_skeleton_data["representations"] = []
-
-                instance_skeleton_data["representations"].append(repre)
-
-        instances = None
-        assert data.get("expectedFiles"), ("Submission from old Pype version"
-                                           " - missing expectedFiles")
+
+        anatomy = instance.context.data["anatomy"]
+
+        instance_skeleton_data = create_skeleton_instance(
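The commit collapses the inline skeleton building above into `create_skeleton_instance`. One detail worth keeping in mind from the removed code is the rootless-path remap: absolute paths are rewritten against named anatomy roots so farm workers with different mount points can still resolve them. A simplified sketch of the idea (the real logic lives in `Anatomy.find_root_template_from_path`; this stand-in only shows the shape):

    def find_root_template_from_path(path, roots):
        # Replace a known root prefix with its {root[name]} token;
        # return (False, path) untouched when no root matches.
        for name, root in roots.items():
            if path.startswith(root):
                return True, "{{root[{}]}}{}".format(name, path[len(root):])
        return False, path

    roots = {"work": "/mnt/projects"}
    print(find_root_template_from_path("/mnt/projects/demo/sh010.nk", roots))
    # -> (True, '{root[work]}/demo/sh010.nk')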
@@ -586,9 +431,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,

         render_job = instance.data.pop("deadlineSubmissionJob", None)
         if not render_job and instance.data.get("tileRendering") is False:
-            raise AssertionError(("Cannot continue without valid Deadline "
-                                  "or Muster submission."))
-
+            raise AssertionError(("Cannot continue without valid "
+                                  "Deadline submission."))
         if not render_job:
             import getpass
@@ -8,9 +8,9 @@ in global space here until are required or used.
 """

 import os
-import click

 from openpype.modules import (
+    click_wrap,
     JsonFilesSettingsDef,
     OpenPypeAddOn,
     ModulesManager,
@@ -115,10 +115,12 @@ class ExampleAddon(OpenPypeAddOn, IPluginPaths, ITrayAction):
         }

     def cli(self, click_group):
-        click_group.add_command(cli_main)
+        click_group.add_command(cli_main.to_click_obj())


-@click.group(ExampleAddon.name, help="Example addon dynamic cli commands.")
+@click_wrap.group(
+    ExampleAddon.name,
+    help="Example addon dynamic cli commands.")
 def cli_main():
     pass
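The example addon drops its direct `click` import in favour of OpenPype's `click_wrap` shim; commands are declared on the wrapper and only converted to real click objects at registration time via `to_click_obj()`. The pattern, as used in this diff (the group name is illustrative):

    from openpype.modules import click_wrap

    @click_wrap.group(
        "example",
        help="Example addon dynamic cli commands.")
    def cli_main():
        pass

    # Inside the addon's cli() hook the wrapper is converted for click:
    #     click_group.add_command(cli_main.to_click_obj())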
Some files were not shown because too many files have changed in this diff.