mirror of https://github.com/ynput/ayon-core.git
synced 2026-01-01 16:34:53 +01:00

Merge branch 'develop' into feature/maya-build-from-template

commit a221a5fc14

45 changed files with 1435 additions and 543 deletions
CHANGELOG.md (64 changed lines)

@@ -1,8 +1,40 @@
# Changelog

## [3.12.3-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.2...HEAD)

**🆕 New features**

- Traypublisher: simple editorial publishing [\#3492](https://github.com/pypeclub/OpenPype/pull/3492)

**🚀 Enhancements**

- Kitsu: Shot&Sequence name with prefix over appends [\#3593](https://github.com/pypeclub/OpenPype/pull/3593)
- Photoshop: implemented {layer} placeholder in subset template [\#3591](https://github.com/pypeclub/OpenPype/pull/3591)
- General: New Integrator small fixes [\#3583](https://github.com/pypeclub/OpenPype/pull/3583)

**🐛 Bug fixes**

- TrayPublisher: Fix wrong conflict merge [\#3600](https://github.com/pypeclub/OpenPype/pull/3600)
- Bugfix: Add OCIO as submodule to prepare for handling `maketx` color space conversion. [\#3590](https://github.com/pypeclub/OpenPype/pull/3590)
- Editorial publishing workflow improvements [\#3580](https://github.com/pypeclub/OpenPype/pull/3580)
- Nuke: render family integration consistency [\#3576](https://github.com/pypeclub/OpenPype/pull/3576)
- Ftrack: Handle missing published path in integrator [\#3570](https://github.com/pypeclub/OpenPype/pull/3570)

**🔀 Refactored code**

- General: Use query functions in general code [\#3596](https://github.com/pypeclub/OpenPype/pull/3596)
- General: Separate extraction of template data into more functions [\#3574](https://github.com/pypeclub/OpenPype/pull/3574)
- General: Lib cleanup [\#3571](https://github.com/pypeclub/OpenPype/pull/3571)

**Merged pull requests:**

- Enable write color sets on animation publish automatically [\#3582](https://github.com/pypeclub/OpenPype/pull/3582)

## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.1...3.12.2)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2)

### 📖 Documentation

@@ -28,6 +60,7 @@

- NewPublisher: Python 2 compatible html escape [\#3559](https://github.com/pypeclub/OpenPype/pull/3559)
- Remove invalid submodules from `/vendor` [\#3557](https://github.com/pypeclub/OpenPype/pull/3557)
- General: Remove hosts filter on integrator plugins [\#3556](https://github.com/pypeclub/OpenPype/pull/3556)
- Nuke: publish existing frames with slate with correct range [\#3555](https://github.com/pypeclub/OpenPype/pull/3555)
- Settings: Clean default values of environments [\#3550](https://github.com/pypeclub/OpenPype/pull/3550)
- Module interfaces: Fix import error [\#3547](https://github.com/pypeclub/OpenPype/pull/3547)
- Workfiles tool: Show of tool and it's flags [\#3539](https://github.com/pypeclub/OpenPype/pull/3539)

@@ -38,7 +71,6 @@

- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
- NewPublisher: Publish attributes are properly collected [\#3510](https://github.com/pypeclub/OpenPype/pull/3510)
- TrayPublisher: Make sure host name is filled [\#3504](https://github.com/pypeclub/OpenPype/pull/3504)
- NewPublisher: Groups work and enum multivalue [\#3501](https://github.com/pypeclub/OpenPype/pull/3501)

@@ -51,7 +83,6 @@

- General: Move load related functions into pipeline [\#3527](https://github.com/pypeclub/OpenPype/pull/3527)
- General: Get current context document functions [\#3522](https://github.com/pypeclub/OpenPype/pull/3522)
- Kitsu: Use query function from client [\#3496](https://github.com/pypeclub/OpenPype/pull/3496)
- Deadline: Use query functions [\#3466](https://github.com/pypeclub/OpenPype/pull/3466)

**Merged pull requests:**

@@ -61,10 +92,6 @@

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)

### 📖 Documentation

- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)

**🚀 Enhancements**

- TrayPublisher: Added more options for grouping of instances [\#3494](https://github.com/pypeclub/OpenPype/pull/3494)

@@ -72,8 +99,6 @@

- NewPublisher: Added ability to use label of instance [\#3484](https://github.com/pypeclub/OpenPype/pull/3484)
- General: Creator Plugins have access to project [\#3476](https://github.com/pypeclub/OpenPype/pull/3476)
- General: Better arguments order in creator init [\#3475](https://github.com/pypeclub/OpenPype/pull/3475)
- Ftrack: Trigger custom ftrack events on project creation and preparation [\#3465](https://github.com/pypeclub/OpenPype/pull/3465)
- Windows installer: Clean old files and add version subfolder [\#3445](https://github.com/pypeclub/OpenPype/pull/3445)

**🐛 Bug fixes**

@@ -84,27 +109,6 @@

- New Publisher: Disabled context change allows creation [\#3478](https://github.com/pypeclub/OpenPype/pull/3478)
- General: thumbnail extractor fix [\#3474](https://github.com/pypeclub/OpenPype/pull/3474)
- Kitsu: bugfix with sync-service and publish plugins [\#3473](https://github.com/pypeclub/OpenPype/pull/3473)
- Flame: solved problem with multi-selected loading [\#3470](https://github.com/pypeclub/OpenPype/pull/3470)
- General: Fix query function in update logic [\#3468](https://github.com/pypeclub/OpenPype/pull/3468)
- Resolve: removed few bugs [\#3464](https://github.com/pypeclub/OpenPype/pull/3464)
- General: Delete old versions is safer when ftrack is disabled [\#3462](https://github.com/pypeclub/OpenPype/pull/3462)
- Nuke: fixing metadata slate TC difference [\#3455](https://github.com/pypeclub/OpenPype/pull/3455)
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)

**🔀 Refactored code**

- Maya: Merge animation + pointcache extractor logic [\#3461](https://github.com/pypeclub/OpenPype/pull/3461)
- Maya: Re-use `maintained\_time` from lib [\#3460](https://github.com/pypeclub/OpenPype/pull/3460)
- General: Use query functions in global plugins [\#3459](https://github.com/pypeclub/OpenPype/pull/3459)
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use query functions in openpype lib functions [\#3454](https://github.com/pypeclub/OpenPype/pull/3454)
- General: Use query functions in load utils [\#3446](https://github.com/pypeclub/OpenPype/pull/3446)
- General: Move publish plugin and publish render abstractions [\#3442](https://github.com/pypeclub/OpenPype/pull/3442)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)

## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)

@@ -1,3 +1,4 @@
+from openpype.client import get_project, get_asset_by_name
 from openpype.lib import (
     PreLaunchHook,
     EnvironmentPrepData,

@@ -69,7 +70,7 @@ class GlobalHostDataHook(PreLaunchHook):
         self.data["dbcon"] = dbcon

         # Project document
-        project_doc = dbcon.find_one({"type": "project"})
+        project_doc = get_project(project_name)
         self.data["project_doc"] = project_doc

         asset_name = self.data.get("asset_name")

@@ -79,8 +80,5 @@ class GlobalHostDataHook(PreLaunchHook):
             )
             return

-        asset_doc = dbcon.find_one({
-            "type": "asset",
-            "name": asset_name
-        })
+        asset_doc = get_asset_by_name(project_name, asset_name)
         self.data["asset_doc"] = asset_doc

@@ -102,7 +102,6 @@ class CollectAERender(publish.AbstractCollectRender):
             attachTo=False,
             setMembers='',
             publish=True,
-            renderer='aerender',
             name=subset_name,
             resolutionWidth=render_q.width,
             resolutionHeight=render_q.height,

@@ -113,7 +112,6 @@ class CollectAERender(publish.AbstractCollectRender):
             frameStart=frame_start,
             frameEnd=frame_end,
             frameStep=1,
-            toBeRenderedOn='deadline',
             fps=fps,
             app_version=app_version,
             publish_attributes=inst.data.get("publish_attributes", {}),

@@ -138,6 +136,9 @@ class CollectAERender(publish.AbstractCollectRender):
             fam = "render.farm"
             if fam not in instance.families:
                 instance.families.append(fam)
+            instance.toBeRenderedOn = "deadline"
+            instance.renderer = "aerender"
+            instance.farm = True  # to skip integrate

             instances.append(instance)
             instances_to_remove.append(inst)

@@ -220,12 +220,9 @@ class LaunchQtApp(bpy.types.Operator):
         self._app.store_window(self.bl_idname, window)
         self._window = window

-        if not isinstance(
-            self._window,
-            (QtWidgets.QMainWindow, QtWidgets.QDialog, ModuleType)
-        ):
+        if not isinstance(self._window, (QtWidgets.QWidget, ModuleType)):
             raise AttributeError(
-                "`window` should be a `QDialog or module`. Got: {}".format(
+                "`window` should be a `QWidget or module`. Got: {}".format(
                     str(type(window))
                 )
             )

@@ -249,9 +246,9 @@ class LaunchQtApp(bpy.types.Operator):
         self._window.setWindowFlags(on_top_flags)
         self._window.show()

-        if on_top_flags != origin_flags:
-            self._window.setWindowFlags(origin_flags)
-            self._window.show()
+        # if on_top_flags != origin_flags:
+        #     self._window.setWindowFlags(origin_flags)
+        #     self._window.show()

         return {'FINISHED'}

@@ -3,9 +3,7 @@ import re
 import sys
 import logging

 # Pipeline imports
 from openpype.client import (
     get_project,
     get_asset_by_name,
     get_versions,
 )

@@ -21,9 +19,6 @@ from openpype.lib.avalon_context import get_workdir_from_session

 log = logging.getLogger("Update Slap Comp")

-self = sys.modules[__name__]
-self._project = None


 def _format_version_folder(folder):
     """Format a version folder based on the filepath

@@ -212,9 +207,6 @@ def switch(asset_name, filepath=None, new=True):
     asset = get_asset_by_name(project_name, asset_name)
     assert asset, "Could not find '%s' in the database" % asset_name

-    # Get current project
-    self._project = get_project(project_name)
-
     # Go to comp
     if not filepath:
         current_comp = api.get_current_comp()

@@ -82,6 +82,14 @@ IMAGE_PREFIXES = {

 RENDERMAN_IMAGE_DIR = "maya/<scene>/<layer>"


+def has_tokens(string, tokens):
+    """Return whether any of tokens is in input string (case-insensitive)"""
+    pattern = "({})".format("|".join(re.escape(token) for token in tokens))
+    match = re.search(pattern, string, re.IGNORECASE)
+    return bool(match)
+
+
 @attr.s
 class LayerMetadata(object):
     """Data class for Render Layer metadata."""
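The new `has_tokens` helper above is self-contained enough to try outside Maya; this small check (with a made-up file prefix) demonstrates the case-insensitive token matching:

```python
import re

def has_tokens(string, tokens):
    """Return whether any of tokens is in input string (case-insensitive)."""
    pattern = "({})".format("|".join(re.escape(token) for token in tokens))
    match = re.search(pattern, string, re.IGNORECASE)
    return bool(match)

prefix = "maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>"
assert has_tokens(prefix, ["<renderlayer>", "<layer>"])  # matches <RenderLayer>
assert not has_tokens(prefix, ["<aov>"])  # <RenderPass> is a different token
```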
@@ -99,6 +107,12 @@ class LayerMetadata(object):
     # Render Products
     products = attr.ib(init=False, default=attr.Factory(list))

+    # The AOV separator token. Note that not all renderers define an explicit
+    # render separator but allow to put the AOV/RenderPass token anywhere in
+    # the file path prefix. For those renderers we'll fall back to whatever
+    # is between the last occurrences of <RenderLayer> and <RenderPass> tokens.
+    aov_separator = attr.ib(default="_")


 @attr.s
 class RenderProduct(object):

@@ -183,7 +197,6 @@ class ARenderProducts:
         self.layer = layer
         self.render_instance = render_instance
         self.multipart = False
-        self.aov_separator = render_instance.data.get("aovSeparator", "_")

         # Initialize
         self.layer_data = self._get_layer_data()
@@ -319,6 +332,31 @@ class ARenderProducts:
             # defaultRenderLayer renders as masterLayer
             layer_name = "masterLayer"

+        # AOV separator - default behavior extracts the part between
+        # last occurrences of <RenderLayer> and <RenderPass>
+        # todo: This code also triggers for V-Ray which overrides it explicitly
+        #       so this code will invalidly debug log it couldn't extract the
+        #       aov separator even though it does set it in RenderProductsVray
+        layer_tokens = ["<renderlayer>", "<layer>"]
+        aov_tokens = ["<aov>", "<renderpass>"]
+
+        def match_last(tokens, text):
+            """Regex match the last occurrence from a list of tokens."""
+            pattern = "(?:.*)({})".format("|".join(tokens))
+            return re.search(pattern, text, re.IGNORECASE)
+
+        layer_match = match_last(layer_tokens, file_prefix)
+        aov_match = match_last(aov_tokens, file_prefix)
+        kwargs = {}
+        if layer_match and aov_match:
+            matches = sorted((layer_match, aov_match),
+                             key=lambda match: match.end(1))
+            separator = file_prefix[matches[0].end(1):matches[1].start(1)]
+            kwargs["aov_separator"] = separator
+        else:
+            log.debug("Couldn't extract aov separator from "
+                      "file prefix: {}".format(file_prefix))
+
         # todo: Support Custom Frames sequences 0,5-10,100-120
         # Deadline allows submitting renders with a custom frame list
         # to support those cases we might want to allow 'custom frames'
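The fallback extraction above can be replayed on a representative prefix. The greedy `(?:.*)` forces the capture group onto the last occurrence of a token, so the separator is whatever text sits between the final layer token and the final AOV token (the prefix here is illustrative):

```python
import re

def match_last(tokens, text):
    """Regex match the last occurrence from a list of tokens."""
    pattern = "(?:.*)({})".format("|".join(tokens))
    return re.search(pattern, text, re.IGNORECASE)

layer_tokens = ["<renderlayer>", "<layer>"]
aov_tokens = ["<aov>", "<renderpass>"]
file_prefix = "maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>"

layer_match = match_last(layer_tokens, file_prefix)
aov_match = match_last(aov_tokens, file_prefix)
matches = sorted((layer_match, aov_match), key=lambda match: match.end(1))
separator = file_prefix[matches[0].end(1):matches[1].start(1)]
print(separator)  # "_"
```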
@@ -335,7 +373,8 @@ class ARenderProducts:
             layerName=layer_name,
             renderer=self.renderer,
             defaultExt=self._get_attr("defaultRenderGlobals.imfPluginKey"),
-            filePrefix=file_prefix
+            filePrefix=file_prefix,
+            **kwargs
         )

     def _generate_file_sequence(

@@ -680,9 +719,17 @@ class RenderProductsVray(ARenderProducts):

         """
         prefix = super(RenderProductsVray, self).get_renderer_prefix()
-        prefix = "{}{}<aov>".format(prefix, self.aov_separator)
+        aov_separator = self._get_aov_separator()
+        prefix = "{}{}<aov>".format(prefix, aov_separator)
         return prefix

+    def _get_aov_separator(self):
+        # type: () -> str
+        """Return the V-Ray AOV/Render Elements separator"""
+        return self._get_attr(
+            "vraySettings.fileNameRenderElementSeparator"
+        )
+
     def _get_layer_data(self):
         # type: () -> LayerMetadata
         """Override to get vray specific extension."""

@@ -694,6 +741,8 @@ class RenderProductsVray(ARenderProducts):
         layer_data.defaultExt = default_ext
         layer_data.padding = self._get_attr("vraySettings.fileNamePadding")

+        layer_data.aov_separator = self._get_aov_separator()
+
         return layer_data

     def get_render_products(self):

openpype/hosts/maya/api/lib_rendersettings.py (new file, 242 lines)

@@ -0,0 +1,242 @@
# -*- coding: utf-8 -*-
"""Class for handling Render Settings."""
from maya import cmds  # noqa
import maya.mel as mel
import six
import sys

from openpype.api import (
    get_project_settings,
    get_current_project_settings
)

from openpype.pipeline import legacy_io
from openpype.pipeline import CreatorError
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.hosts.maya.api.commands import reset_frame_range


class RenderSettings(object):

    _image_prefix_nodes = {
        'vray': 'vraySettings.fileNamePrefix',
        'arnold': 'defaultRenderGlobals.imageFilePrefix',
        'renderman': 'defaultRenderGlobals.imageFilePrefix',
        'redshift': 'defaultRenderGlobals.imageFilePrefix'
    }

    _image_prefixes = {
        'vray': get_current_project_settings()["maya"]["RenderSettings"]["vray_renderer"]["image_prefix"],  # noqa
        'arnold': get_current_project_settings()["maya"]["RenderSettings"]["arnold_renderer"]["image_prefix"],  # noqa
        'renderman': 'maya/<Scene>/<layer>/<layer>{aov_separator}<aov>',
        'redshift': get_current_project_settings()["maya"]["RenderSettings"]["redshift_renderer"]["image_prefix"]  # noqa
    }

    _aov_chars = {
        "dot": ".",
        "dash": "-",
        "underscore": "_"
    }

    @classmethod
    def get_image_prefix_attr(cls, renderer):
        return cls._image_prefix_nodes[renderer]

    def __init__(self, project_settings=None):
        self._project_settings = project_settings
        if not self._project_settings:
            self._project_settings = get_project_settings(
                legacy_io.Session["AVALON_PROJECT"]
            )
    def set_default_renderer_settings(self, renderer=None):
        """Set basic settings based on renderer."""
        if not renderer:
            renderer = cmds.getAttr(
                'defaultRenderGlobals.currentRenderer').lower()

        asset_doc = get_current_project_asset()
        # project_settings/maya/create/CreateRender/aov_separator
        try:
            aov_separator = self._aov_chars[(
                self._project_settings["maya"]
                ["create"]
                ["CreateRender"]
                ["aov_separator"]
            )]
        except KeyError:
            aov_separator = "_"
        reset_frame = self._project_settings["maya"]["RenderSettings"]["reset_current_frame"]  # noqa

        if reset_frame:
            start_frame = cmds.getAttr("defaultRenderGlobals.startFrame")
            cmds.currentTime(start_frame, edit=True)

        if renderer in self._image_prefix_nodes:
            prefix = self._image_prefixes[renderer]
            prefix = prefix.replace("{aov_separator}", aov_separator)
            cmds.setAttr(self._image_prefix_nodes[renderer],
                         prefix, type="string")  # noqa
        else:
            print("{0} isn't a supported renderer to autoset settings.".format(renderer))  # noqa

        # TODO: handle not having res values in the doc
        width = asset_doc["data"].get("resolutionWidth")
        height = asset_doc["data"].get("resolutionHeight")

        if renderer == "arnold":
            # set renderer settings for Arnold from project settings
            self._set_arnold_settings(width, height)

        if renderer == "vray":
            self._set_vray_settings(aov_separator, width, height)

        if renderer == "redshift":
            self._set_redshift_settings(width, height)
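To make the flow of `set_default_renderer_settings` concrete: the settings value is a word (`dot`, `dash`, `underscore`) mapped through `_aov_chars` to a character, which is then substituted into the `{aov_separator}` placeholder of the image prefix. A minimal sketch outside Maya (the template mirrors the Renderman default above; the `resolve_prefix` helper is illustrative, not part of the class):

```python
_aov_chars = {
    "dot": ".",
    "dash": "-",
    "underscore": "_"
}

def resolve_prefix(template, separator_name):
    # Unknown names fall back to underscore, like the KeyError handler above.
    aov_separator = _aov_chars.get(separator_name, "_")
    return template.replace("{aov_separator}", aov_separator)

template = "maya/<Scene>/<layer>/<layer>{aov_separator}<aov>"
print(resolve_prefix(template, "dot"))    # maya/<Scene>/<layer>/<layer>.<aov>
print(resolve_prefix(template, "bogus"))  # maya/<Scene>/<layer>/<layer>_<aov>
```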
    def _set_arnold_settings(self, width, height):
        """Sets settings for Arnold."""
        from mtoa.core import createOptions  # noqa
        from mtoa.aovs import AOVInterface  # noqa
        createOptions()
        arnold_render_presets = self._project_settings["maya"]["RenderSettings"]["arnold_renderer"]  # noqa
        # Force resetting settings and AOV list to avoid having to deal with
        # AOV checking logic, for now. This is a workaround because the
        # standard function to revert render settings does not reset the AOVs
        # list in MtoA.
        # Fetch current AOVs in case there are any.
        current_aovs = AOVInterface().getAOVs()
        # Remove fetched AOVs
        AOVInterface().removeAOVs(current_aovs)
        mel.eval("unifiedRenderGlobalsRevertToDefault")
        img_ext = arnold_render_presets["image_format"]
        img_prefix = arnold_render_presets["image_prefix"]
        aovs = arnold_render_presets["aov_list"]
        img_tiled = arnold_render_presets["tiled"]
        multi_exr = arnold_render_presets["multilayer_exr"]
        additional_options = arnold_render_presets["additional_options"]
        for aov in aovs:
            AOVInterface('defaultArnoldRenderOptions').addAOV(aov)

        cmds.setAttr("defaultResolution.width", width)
        cmds.setAttr("defaultResolution.height", height)

        self._set_global_output_settings()

        cmds.setAttr(
            "defaultRenderGlobals.imageFilePrefix", img_prefix, type="string")

        cmds.setAttr(
            "defaultArnoldDriver.ai_translator", img_ext, type="string")

        cmds.setAttr(
            "defaultArnoldDriver.exrTiled", img_tiled)

        cmds.setAttr(
            "defaultArnoldDriver.mergeAOVs", multi_exr)
        # Passes additional options in from the schema as a list
        # but converts it to a dictionary because ftrack doesn't
        # allow fullstops in custom attributes. Then checks for
        # type of MtoA attribute passed to adjust the `setAttr`
        # command accordingly.
        self._additional_attribs_setter(additional_options)
        for item in additional_options:
            attribute, value = item
            if (cmds.getAttr(str(attribute), type=True)) == "long":
                cmds.setAttr(str(attribute), int(value))
            elif (cmds.getAttr(str(attribute), type=True)) == "bool":
                cmds.setAttr(str(attribute), int(value), type="Boolean")  # noqa
            elif (cmds.getAttr(str(attribute), type=True)) == "string":
                cmds.setAttr(str(attribute), str(value), type="string")  # noqa
        reset_frame_range()
    def _set_redshift_settings(self, width, height):
        """Sets settings for Redshift."""
        redshift_render_presets = (
            self._project_settings
            ["maya"]
            ["RenderSettings"]
            ["redshift_renderer"]
        )
        additional_options = redshift_render_presets["additional_options"]
        ext = redshift_render_presets["image_format"]
        img_exts = ["iff", "exr", "tif", "png", "tga", "jpg"]
        img_ext = img_exts.index(ext)

        self._set_global_output_settings()
        cmds.setAttr("redshiftOptions.imageFormat", img_ext)
        cmds.setAttr("defaultResolution.width", width)
        cmds.setAttr("defaultResolution.height", height)
        self._additional_attribs_setter(additional_options)
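Redshift's `imageFormat` attribute is set from an integer here, so the extension string coming from the settings is converted to its position in the supported-extension list. The mapping in isolation:

```python
img_exts = ["iff", "exr", "tif", "png", "tga", "jpg"]

ext = "exr"  # value as it would come from project settings
img_ext = img_exts.index(ext)
print(img_ext)  # 1

# Note: an extension outside this list would raise ValueError here;
# the method above does not guard against that.
```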
    def _set_vray_settings(self, aov_separator, width, height):
        # type: (str, int, int) -> None
        """Sets important settings for Vray."""
        settings = cmds.ls(type="VRaySettingsNode")
        node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
        vray_render_presets = (
            self._project_settings
            ["maya"]
            ["RenderSettings"]
            ["vray_renderer"]
        )
        # Set AOV separator.
        # First we need to explicitly set the UI items in Render Settings
        # because that is also what V-Ray updates to when that Render Settings
        # UI did initialize before and refreshes again.
        MENU = "vrayRenderElementSeparator"
        if cmds.optionMenuGrp(MENU, query=True, exists=True):
            items = cmds.optionMenuGrp(MENU, query=True, ill=True)
            separators = [cmds.menuItem(i, query=True, label=True) for i in items]  # noqa: E501
            try:
                sep_idx = separators.index(aov_separator)
            except ValueError:
                six.reraise(
                    CreatorError,
                    CreatorError(
                        "AOV character {} not in {}".format(
                            aov_separator, separators)),
                    sys.exc_info()[2])

            cmds.optionMenuGrp(MENU, edit=True, select=sep_idx + 1)

        # Set the render element attribute as string. This is also what V-Ray
        # sets whenever the `vrayRenderElementSeparator` menu items switch.
        cmds.setAttr(
            "{}.fileNameRenderElementSeparator".format(node),
            aov_separator,
            type="string"
        )

        # Set render file format to exr
        cmds.setAttr("{}.imageFormatStr".format(node), "exr", type="string")

        # animType
        cmds.setAttr("{}.animType".format(node), 1)

        # resolution
        cmds.setAttr("{}.width".format(node), width)
        cmds.setAttr("{}.height".format(node), height)

        additional_options = vray_render_presets["additional_options"]

        self._additional_attribs_setter(additional_options)
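The `six.reraise` call above turns the `ValueError` from `list.index` into a `CreatorError` while keeping the original traceback; on Python 3 it is equivalent to raising the new exception `with_traceback(tb)`. A stdlib-only sketch with a stand-in exception class:

```python
import sys

class CreatorError(Exception):
    """Stand-in for openpype.pipeline.CreatorError."""

separators = [".", "-", "_"]
aov_separator = "#"  # deliberately not among the menu items

try:
    try:
        sep_idx = separators.index(aov_separator)
    except ValueError:
        tb = sys.exc_info()[2]
        # what six.reraise(CreatorError, CreatorError(...), tb) does on Python 3
        raise CreatorError(
            "AOV character {} not in {}".format(aov_separator, separators)
        ).with_traceback(tb)
except CreatorError as exc:
    message = str(exc)

print(message)  # AOV character # not in ['.', '-', '_']
```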
    @staticmethod
    def _set_global_output_settings():
        # enable animation
        cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
        cmds.setAttr("defaultRenderGlobals.animation", 1)
        cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
        cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)

    def _additional_attribs_setter(self, additional_attribs):
        print(additional_attribs)
        for item in additional_attribs:
            attribute, value = item
            if (cmds.getAttr(str(attribute), type=True)) == "long":
                cmds.setAttr(str(attribute), int(value))
            elif (cmds.getAttr(str(attribute), type=True)) == "bool":
                cmds.setAttr(str(attribute), int(value))  # noqa
            elif (cmds.getAttr(str(attribute), type=True)) == "string":
                cmds.setAttr(str(attribute), str(value), type="string")  # noqa
@@ -15,8 +15,7 @@ from openpype.pipeline.workfile.build_template import (
     update_workfile_template
 )
 from openpype.tools.utils import host_tools
-from openpype.hosts.maya.api import lib
-
+from openpype.hosts.maya.api import lib, lib_rendersettings
 from .lib import get_main_window, IS_HEADLESS
 from .commands import reset_frame_range
 from .lib_template_builder import create_placeholder, update_placeholder

@@ -51,6 +50,7 @@ def install():
         parent="MayaWindow"
     )

+    renderer = cmds.getAttr('defaultRenderGlobals.currentRenderer').lower()
     # Create context menu
     context_label = "{}, {}".format(
         legacy_io.Session["AVALON_ASSET"],

@@ -105,6 +105,13 @@ def install():

         cmds.menuItem(divider=True)

+        cmds.menuItem(
+            "Set Render Settings",
+            command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings()  # noqa
+        )
+
+        cmds.menuItem(divider=True)
+
         cmds.menuItem(
             "Work Files...",
             command=lambda *args: host_tools.show_workfiles(
@@ -11,6 +11,7 @@ class CreateAnimation(plugin.Creator):
     label = "Animation"
     family = "animation"
     icon = "male"
+    write_color_sets = False

     def __init__(self, *args, **kwargs):
         super(CreateAnimation, self).__init__(*args, **kwargs)

@@ -22,7 +23,7 @@ class CreateAnimation(plugin.Creator):
             self.data[key] = value

         # Write vertex colors with the geometry.
-        self.data["writeColorSets"] = False
+        self.data["writeColorSets"] = self.write_color_sets
         self.data["writeFaceSets"] = False

         # Include only renderable visible shapes.
@@ -11,6 +11,7 @@ class CreatePointCache(plugin.Creator):
     label = "Point Cache"
     family = "pointcache"
     icon = "gears"
+    write_color_sets = False

     def __init__(self, *args, **kwargs):
         super(CreatePointCache, self).__init__(*args, **kwargs)

@@ -18,7 +19,8 @@ class CreatePointCache(plugin.Creator):
         # Add animation data
         self.data.update(lib.collect_animation_data())

-        self.data["writeColorSets"] = False  # Vertex colors with the geometry.
+        # Vertex colors with the geometry.
+        self.data["writeColorSets"] = self.write_color_sets
         self.data["writeFaceSets"] = False  # Vertex colors with the geometry.
         self.data["renderableOnly"] = False  # Only renderable visible shapes
         self.data["visibleOnly"] = False  # only nodes that are visible
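Both creators replace a hard-coded `False` with a `write_color_sets` class attribute that is read at instantiation time, presumably so the default can be flipped per studio (OpenPype can override plugin class attributes from settings) or per subclass. The mechanics in isolation (class names here are illustrative):

```python
class BaseCreator(object):
    # class-level default, mirroring write_color_sets = False above
    write_color_sets = False

    def __init__(self):
        self.data = {}
        # the instance picks up whatever the class (or a subclass) defines
        self.data["writeColorSets"] = self.write_color_sets

class ColorSetCreator(BaseCreator):
    # e.g. a subclass or settings override flipping the default
    write_color_sets = True

assert BaseCreator().data["writeColorSets"] is False
assert ColorSetCreator().data["writeColorSets"] is True
```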
@@ -1,15 +1,21 @@
 # -*- coding: utf-8 -*-
 """Create ``Render`` instance in Maya."""
-import os
 import json
+import os
+
+import appdirs
+import requests

 from maya import cmds
-import maya.app.renderSetup.model.renderSetup as renderSetup
+from maya.app.renderSetup.model import renderSetup

 from openpype.api import (
     get_system_settings,
     get_project_settings,
 )
+from openpype.hosts.maya.api import (
+    lib,
+    lib_rendersettings,
+    plugin
+)
 from openpype.lib import requests_get

@@ -17,6 +23,7 @@ from openpype.api import (
     get_system_settings,
     get_project_settings)
 from openpype.modules import ModulesManager
-from openpype.pipeline import legacy_io
+from openpype.pipeline import (
+    CreatorError,
+    legacy_io,
@@ -69,35 +76,6 @@ class CreateRender(plugin.Creator):
     _user = None
     _password = None

-    # renderSetup instance
-    _rs = None
-
-    _image_prefix_nodes = {
-        'mentalray': 'defaultRenderGlobals.imageFilePrefix',
-        'vray': 'vraySettings.fileNamePrefix',
-        'arnold': 'defaultRenderGlobals.imageFilePrefix',
-        'renderman': 'rmanGlobals.imageFileFormat',
-        'redshift': 'defaultRenderGlobals.imageFilePrefix',
-        'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
-    }
-
-    _image_prefixes = {
-        'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa
-        'vray': 'maya/<scene>/<Layer>/<Layer>',
-        'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa
-        # this needs `imageOutputDir`
-        # (<ws>/renders/maya/<scene>) set separately
-        'renderman': '<layer>_<aov>.<f4>.<ext>',
-        'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',  # noqa
-        'mayahardware2': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',  # noqa
-    }
-
-    _aov_chars = {
-        "dot": ".",
-        "dash": "-",
-        "underscore": "_"
-    }
-
     _project_settings = None

     def __init__(self, *args, **kwargs):
@@ -109,18 +87,8 @@ class CreateRender(plugin.Creator):
             return
         self._project_settings = get_project_settings(
             legacy_io.Session["AVALON_PROJECT"])

-        # project_settings/maya/create/CreateRender/aov_separator
-        try:
-            self.aov_separator = self._aov_chars[(
-                self._project_settings["maya"]
-                ["create"]
-                ["CreateRender"]
-                ["aov_separator"]
-            )]
-        except KeyError:
-            self.aov_separator = "_"
-
+        if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]:  # noqa
+            lib_rendersettings.RenderSettings().set_default_renderer_settings()
         manager = ModulesManager()
         self.deadline_module = manager.modules_by_name["deadline"]
         try:
@@ -177,13 +145,13 @@ class CreateRender(plugin.Creator):
         ])

         cmds.setAttr("{}.machineList".format(self.instance), lock=True)
-        self._rs = renderSetup.instance()
-        layers = self._rs.getRenderLayers()
+        rs = renderSetup.instance()
+        layers = rs.getRenderLayers()
         if use_selection:
-            print(">>> processing existing layers")
+            self.log.info("Processing existing layers")
             sets = []
             for layer in layers:
-                print(" - creating set for {}:{}".format(
+                self.log.info(" - creating set for {}:{}".format(
                     namespace, layer.name()))
                 render_set = cmds.sets(
                     n="{}:{}".format(namespace, layer.name()))

@@ -193,17 +161,10 @@ class CreateRender(plugin.Creator):
         # if no render layers are present, create default one with
         # asterisk selector
         if not layers:
-            render_layer = self._rs.createRenderLayer('Main')
+            render_layer = rs.createRenderLayer('Main')
             collection = render_layer.createCollection("defaultCollection")
             collection.getSelector().setPattern('*')

-        renderer = cmds.getAttr(
-            'defaultRenderGlobals.currentRenderer').lower()
-        # handle various renderman names
-        if renderer.startswith('renderman'):
-            renderer = 'renderman'
-
-        self._set_default_renderer_settings(renderer)
         return self.instance

     def _deadline_webservice_changed(self):
@ -237,7 +198,7 @@ class CreateRender(plugin.Creator):
|
|||
|
||||
def _create_render_settings(self):
|
||||
"""Create instance settings."""
|
||||
# get pools
|
||||
# get pools (slave machines of the render farm)
|
||||
pool_names = []
|
||||
default_priority = 50
|
||||
|
||||
|
|
@ -281,7 +242,8 @@ class CreateRender(plugin.Creator):
|
|||
# if 'default' server is not between selected,
|
||||
# use first one for initial list of pools.
|
||||
deadline_url = next(iter(self.deadline_servers.values()))
|
||||
|
||||
# Uses function to get pool machines from the assigned deadline
|
||||
# url in settings
|
||||
pool_names = self.deadline_module.get_deadline_pools(deadline_url,
|
||||
self.log)
|
||||
maya_submit_dl = self._project_settings.get(
|
||||
|
|
@ -400,102 +362,36 @@ class CreateRender(plugin.Creator):
|
|||
self.log.error("Cannot show login form to Muster")
|
||||
raise Exception("Cannot show login form to Muster")
|
||||
|
||||
def _set_default_renderer_settings(self, renderer):
|
||||
"""Set basic settings based on renderer.
|
||||
def _requests_post(self, *args, **kwargs):
|
||||
"""Wrap request post method.
|
||||
|
||||
Args:
|
||||
renderer (str): Renderer name.
|
||||
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
|
||||
variable is found. This is useful when Deadline or Muster server are
|
||||
running with self-signed certificates and their certificate is not
|
||||
added to trusted certificates on client machines.
|
||||
|
||||
Warning:
|
||||
Disabling SSL certificate validation is defeating one line
|
||||
of defense SSL is providing and it is not recommended.
|
||||
|
||||
"""
|
||||
prefix = self._image_prefixes[renderer]
|
||||
prefix = prefix.replace("{aov_separator}", self.aov_separator)
|
||||
cmds.setAttr(self._image_prefix_nodes[renderer],
|
||||
prefix,
|
||||
type="string")
|
||||
if "verify" not in kwargs:
|
||||
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
|
||||
return requests.post(*args, **kwargs)
|
||||
|
||||
asset = get_current_project_asset()
|
||||
def _requests_get(self, *args, **kwargs):
|
||||
"""Wrap request get method.
|
||||
|
||||
if renderer == "arnold":
|
||||
# set format to exr
|
||||
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
|
||||
variable is found. This is useful when Deadline or Muster server are
|
||||
running with self-signed certificates and their certificate is not
|
||||
added to trusted certificates on client machines.
|
||||
|
||||
cmds.setAttr(
|
||||
"defaultArnoldDriver.ai_translator", "exr", type="string")
|
||||
self._set_global_output_settings()
|
||||
# resolution
|
||||
cmds.setAttr(
|
||||
"defaultResolution.width",
|
||||
asset["data"].get("resolutionWidth"))
|
||||
cmds.setAttr(
|
||||
"defaultResolution.height",
|
||||
asset["data"].get("resolutionHeight"))
|
||||
Warning:
|
||||
Disabling SSL certificate validation is defeating one line
|
||||
of defense SSL is providing and it is not recommended.
|
||||
|
||||
if renderer == "vray":
|
||||
self._set_vray_settings(asset)
|
||||
if renderer == "redshift":
|
||||
cmds.setAttr("redshiftOptions.imageFormat", 1)
|
||||
|
||||
# resolution
|
||||
cmds.setAttr(
|
||||
"defaultResolution.width",
|
||||
asset["data"].get("resolutionWidth"))
|
||||
cmds.setAttr(
|
||||
"defaultResolution.height",
|
||||
asset["data"].get("resolutionHeight"))
|
||||
|
||||
self._set_global_output_settings()
|
||||
|
||||
if renderer == "renderman":
|
||||
cmds.setAttr("rmanGlobals.imageOutputDir",
|
||||
"maya/<scene>/<layer>", type="string")
|
||||
|
||||
def _set_vray_settings(self, asset):
|
||||
# type: (dict) -> None
|
||||
"""Sets important settings for Vray."""
|
||||
settings = cmds.ls(type="VRaySettingsNode")
|
||||
node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
|
||||
|
||||
# set separator
|
||||
# set it in vray menu
|
||||
if cmds.optionMenuGrp("vrayRenderElementSeparator", exists=True,
|
||||
q=True):
|
||||
items = cmds.optionMenuGrp(
|
||||
"vrayRenderElementSeparator", ill=True, query=True)
|
||||
|
||||
separators = [cmds.menuItem(i, label=True, query=True) for i in items] # noqa: E501
|
||||
try:
|
||||
sep_idx = separators.index(self.aov_separator)
|
||||
except ValueError:
|
||||
raise CreatorError(
|
||||
"AOV character {} not in {}".format(
|
||||
self.aov_separator, separators))
|
||||
|
||||
cmds.optionMenuGrp(
|
||||
"vrayRenderElementSeparator", sl=sep_idx + 1, edit=True)
|
||||
cmds.setAttr(
|
||||
"{}.fileNameRenderElementSeparator".format(node),
|
||||
self.aov_separator,
|
||||
type="string"
|
||||
)
|
||||
# set format to exr
|
||||
cmds.setAttr(
|
||||
"{}.imageFormatStr".format(node), "exr", type="string")
|
||||
|
||||
# animType
|
||||
cmds.setAttr(
|
||||
"{}.animType".format(node), 1)
|
||||
|
||||
# resolution
|
||||
cmds.setAttr(
|
||||
"{}.width".format(node),
|
||||
asset["data"].get("resolutionWidth"))
|
||||
cmds.setAttr(
|
||||
"{}.height".format(node),
|
||||
asset["data"].get("resolutionHeight"))
|
||||
|
||||
@staticmethod
|
||||
def _set_global_output_settings():
|
||||
# enable animation
|
||||
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
|
||||
cmds.setAttr("defaultRenderGlobals.animation", 1)
|
||||
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
|
||||
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
|
||||
"""
|
||||
if "verify" not in kwargs:
|
||||
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
|
||||
return requests.get(*args, **kwargs)
|
||||
|
|
|
|||
|
|
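The `_requests_post`/`_requests_get` wrappers above decide whether `requests` should verify TLS certificates based on an environment variable. The docstring's intent (verification stays on unless the opt-out variable is set) can be sketched in isolation like this; the helper name is hypothetical and the exact truthiness handling in the diff may differ:

```python
import os


def should_verify_ssl(env_var="OPENPYPE_DONT_VERIFY_SSL"):
    # Verification stays enabled unless the opt-out variable is set
    # to a non-empty value (hedged reading of the wrapper docstring).
    return not os.environ.get(env_var)
```

Passing the result as the `verify` keyword to `requests.post`/`requests.get` reproduces the wrappers' behavior without touching call sites that set `verify` explicitly.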
@ -551,7 +551,9 @@ class CollectLook(pyblish.api.InstancePlugin):
                if cmds.getAttr(attribute, type=True) == "message":
                    continue
                node_attributes[attr] = cmds.getAttr(attribute)

+            # Only include if there are any properties we care about
+            if not node_attributes:
+                continue
            attributes.append({"name": node,
                               "uuid": lib.get_id(node),
                               "attributes": node_attributes})

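The guard added to `CollectLook` above skips message-type attributes and now drops nodes that end up with no stored values. A Maya-free sketch of that filtering; the `attr_values` shape is hypothetical (in the plugin the types and values come from `cmds.getAttr`):

```python
def collect_node_attributes(node, attr_values):
    # attr_values: {attribute_name: (attribute_type, value)}
    # Message-type attributes carry no storable value, so skip them.
    node_attributes = {
        attr: value
        for attr, (attr_type, value) in attr_values.items()
        if attr_type != "message"
    }
    # Only include if there are any properties we care about
    if not node_attributes:
        return None
    return {"name": node, "attributes": node_attributes}
```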
@ -72,7 +72,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
    def process(self, context):
        """Entry point to collector."""
        render_instance = None
-        deadline_url = None

        for instance in context:
            if "rendering" in instance.data["families"]:

@ -96,23 +95,12 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
        asset = legacy_io.Session["AVALON_ASSET"]
        workspace = context.data["workspaceDir"]

-        deadline_settings = (
-            context.data
-            ["system_settings"]
-            ["modules"]
-            ["deadline"]
-        )
-
-        if deadline_settings["enabled"]:
-            deadline_url = render_instance.data.get("deadlineUrl")
-        self._rs = renderSetup.instance()
-        current_layer = self._rs.getVisibleRenderLayer()
+        # Retrieve render setup layers
+        rs = renderSetup.instance()
        maya_render_layers = {
-            layer.name(): layer for layer in self._rs.getRenderLayers()
+            layer.name(): layer for layer in rs.getRenderLayers()
        }

-        self.maya_layers = maya_render_layers

        for layer in collected_render_layers:
            try:
                if layer.startswith("LAYER_"):

@ -147,49 +135,34 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                self.log.warning(msg)
                continue

-            # test if there are sets (subsets) to attach render to
+            # detect if there are sets (subsets) to attach render to
            sets = cmds.sets(layer, query=True) or []
            attach_to = []
-            if sets:
-                for s in sets:
-                    if "family" not in cmds.listAttr(s):
-                        continue
+            for s in sets:
+                if not cmds.attributeQuery("family", node=s, exists=True):
+                    continue

-                    attach_to.append(
-                        {
-                            "version": None,  # we need integrator for that
-                            "subset": s,
-                            "family": cmds.getAttr("{}.family".format(s)),
-                        }
-                    )
-                    self.log.info(" -> attach render to: {}".format(s))
+                attach_to.append(
+                    {
+                        "version": None,  # we need integrator for that
+                        "subset": s,
+                        "family": cmds.getAttr("{}.family".format(s)),
+                    }
+                )
+                self.log.info(" -> attach render to: {}".format(s))

            layer_name = "rs_{}".format(expected_layer_name)

            # collect all frames we are expecting to be rendered
-            renderer = cmds.getAttr(
-                "defaultRenderGlobals.currentRenderer"
-            ).lower()
+            renderer = self.get_render_attribute("currentRenderer",
+                                                 layer=layer_name)
            # handle various renderman names
            if renderer.startswith("renderman"):
                renderer = "renderman"

-            try:
-                aov_separator = self._aov_chars[(
-                    context.data["project_settings"]
-                    ["create"]
-                    ["CreateRender"]
-                    ["aov_separator"]
-                )]
-            except KeyError:
-                aov_separator = "_"
-
-            render_instance.data["aovSeparator"] = aov_separator
-
            # return all expected files for all cameras and aovs in given
            # frame range
-            layer_render_products = get_layer_render_products(
-                layer_name, render_instance)
+            layer_render_products = get_layer_render_products(layer_name)
            render_products = layer_render_products.layer_data.products
            assert render_products, "no render products generated"
            exp_files = []

@ -269,8 +242,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                frame_start_handle = frame_start_render
                frame_end_handle = frame_end_render

-            full_exp_files.append(aov_dict)
-
            # find common path to store metadata
            # so if image prefix is branching to many directories
            # metadata file will be located in top-most common

@ -299,16 +270,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
            self.log.info("collecting layer: {}".format(layer_name))
            # Get layer specific settings, might be overrides

-            try:
-                aov_separator = self._aov_chars[(
-                    context.data["project_settings"]
-                    ["create"]
-                    ["CreateRender"]
-                    ["aov_separator"]
-                )]
-            except KeyError:
-                aov_separator = "_"
-
            data = {
                "subset": expected_layer_name,
                "attachTo": attach_to,

@ -360,8 +321,12 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                "aovSeparator": aov_separator
            }

-            if deadline_url:
-                data["deadlineUrl"] = deadline_url
+            # Collect Deadline url if Deadline module is enabled
+            deadline_settings = (
+                context.data["system_settings"]["modules"]["deadline"]
+            )
+            if deadline_settings["enabled"]:
+                data["deadlineUrl"] = render_instance.data.get("deadlineUrl")

            if self.sync_workfile_version:
                data["version"] = context.data["version"]

@ -370,19 +335,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
            if instance.data['family'] == "workfile":
                instance.data["version"] = context.data["version"]

-        # Apply each user defined attribute as data
-        for attr in cmds.listAttr(layer, userDefined=True) or list():
-            try:
-                value = cmds.getAttr("{}.{}".format(layer, attr))
-            except Exception:
-                # Some attributes cannot be read directly,
-                # such as mesh and color attributes. These
-                # are considered non-essential to this
-                # particular publishing pipeline.
-                value = None
-
-            data[attr] = value
-
        # handle standalone renderers
        if render_instance.data.get("vrayScene") is True:
            data["families"].append("vrayscene_render")

@ -490,10 +442,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):

        return pool_a, pool_b

-    def _get_overrides(self, layer):
-        rset = self.maya_layers[layer].renderSettingsCollectionInstance()
-        return rset.getOverrides()
-
    @staticmethod
    def get_render_attribute(attr, layer):
        """Get attribute from render options.

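Both the creator and the collector map the `aov_separator` project setting through the `_aov_chars` table, falling back to an underscore when the setting is missing. The lookup, isolated (the settings path follows the creator variant shown above):

```python
_aov_chars = {
    "dot": ".",
    "dash": "-",
    "underscore": "_",
}


def get_aov_separator(project_settings):
    # A KeyError from any missing level of the settings tree falls back
    # to the underscore default, exactly like the try/except in the diff.
    try:
        return _aov_chars[
            project_settings["maya"]["create"]["CreateRender"]["aov_separator"]
        ]
    except KeyError:
        return "_"
```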
@ -78,14 +78,13 @@ class ValidateLookContents(pyblish.api.InstancePlugin):

        # Check if attributes are on a node with an ID, crucial for rebuild!
        for attr_changes in lookdata["attributes"]:
-            if not attr_changes["uuid"]:
+            if not attr_changes["uuid"] and not attr_changes["attributes"]:
                cls.log.error("Node '%s' has no cbId, please set the "
                              "attributes to its children if it has any"
                              % attr_changes["name"])
                invalid.add(instance.name)

        return list(invalid)

    @classmethod
    def validate_looks(cls, instance):

@ -1,20 +1,11 @@
import re

import pyblish.api
-import openpype.api
-import openpype.hosts.maya.api.action

from maya import cmds

-
-ImagePrefixes = {
-    'mentalray': 'defaultRenderGlobals.imageFilePrefix',
-    'vray': 'vraySettings.fileNamePrefix',
-    'arnold': 'defaultRenderGlobals.imageFilePrefix',
-    'renderman': 'defaultRenderGlobals.imageFilePrefix',
-    'redshift': 'defaultRenderGlobals.imageFilePrefix',
-    'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
-}
+import openpype.api
+import openpype.hosts.maya.api.action
+from openpype.hosts.maya.api.render_settings import RenderSettings


class ValidateRenderSingleCamera(pyblish.api.InstancePlugin):

@ -47,7 +38,11 @@ class ValidateRenderSingleCamera(pyblish.api.InstancePlugin):
        # handle various renderman names
        if renderer.startswith('renderman'):
            renderer = 'renderman'
-        file_prefix = cmds.getAttr(ImagePrefixes[renderer])
+
+        file_prefix = cmds.getAttr(
+            RenderSettings.get_image_prefix_attr(renderer)
+        )

        if len(cameras) > 1:
            if re.search(cls.R_CAMERA_TOKEN, file_prefix):

@ -1,3 +1,5 @@
+import re
+
from openpype.hosts.photoshop import api
from openpype.lib import BoolDef
from openpype.pipeline import (

@ -5,6 +7,8 @@ from openpype.pipeline import (
    CreatedInstance,
    legacy_io
)
+from openpype.lib import prepare_template_data
+from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS


class ImageCreator(Creator):

@ -38,17 +42,24 @@ class ImageCreator(Creator):
        top_level_selected_items = stub.get_selected_layers()
        if pre_create_data.get("use_selection"):
            only_single_item_selected = len(top_level_selected_items) == 1
-            for selected_item in top_level_selected_items:
-                if (
-                        only_single_item_selected or
-                        pre_create_data.get("create_multiple")):
+            if (
+                    only_single_item_selected or
+                    pre_create_data.get("create_multiple")):
+                for selected_item in top_level_selected_items:
                    if selected_item.group:
                        groups_to_create.append(selected_item)
                    else:
                        top_layers_to_wrap.append(selected_item)
-                else:
-                    group = stub.group_selected_layers(subset_name_from_ui)
-                    groups_to_create.append(group)
+            else:
+                group = stub.group_selected_layers(subset_name_from_ui)
+                groups_to_create.append(group)
        else:
            stub.select_layers(stub.get_layers())
-            group = stub.group_selected_layers(subset_name_from_ui)
+            try:
+                group = stub.group_selected_layers(subset_name_from_ui)
+            except:
+                raise ValueError("Cannot group locked Bakcground layer!")
            groups_to_create.append(group)

        if not groups_to_create and not top_layers_to_wrap:
            group = stub.create_group(subset_name_from_ui)

@ -60,6 +71,7 @@ class ImageCreator(Creator):
            group = stub.group_selected_layers(layer.name)
            groups_to_create.append(group)

+        layer_name = ''
        creating_multiple_groups = len(groups_to_create) > 1
        for group in groups_to_create:
            subset_name = subset_name_from_ui  # reset to name from creator UI

@ -67,8 +79,16 @@ class ImageCreator(Creator):
            created_group_name = self._clean_highlights(stub, group.name)

            if creating_multiple_groups:
-                # concatenate with layer name to differentiate subsets
-                subset_name += group.name.title().replace(" ", "")
+                layer_name = re.sub(
+                    "[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
+                    "",
+                    group.name
+                )
+            if "{layer}" not in subset_name.lower():
+                subset_name += "{Layer}"
+
+            layer_fill = prepare_template_data({"layer": layer_name})
+            subset_name = subset_name.format(**layer_fill)

            if group.long_name:
                for directory in group.long_name[::-1]:

@ -143,3 +163,6 @@ class ImageCreator(Creator):
    def _clean_highlights(self, stub, item):
        return item.replace(stub.PUBLISH_ICON, '').replace(stub.LOADED_ICON,
                                                           '')
+
+    @classmethod
+    def get_dynamic_data(cls, *args, **kwargs):
+        return {"layer": "{layer}"}

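The `{layer}` placeholder handling above relies on `openpype.lib.prepare_template_data`, whose body is outside this diff. A minimal stand-in that exposes the case variants the code depends on (the helper name and exact variant rules here are assumptions):

```python
def prepare_layer_fill(layer_name):
    # Minimal stand-in for openpype.lib.prepare_template_data:
    # expose lowercase, capitalized and uppercase variants of the key
    # so "{layer}", "{Layer}" and "{LAYER}" placeholders all resolve.
    return {
        "layer": layer_name,
        "Layer": layer_name[:1].upper() + layer_name[1:],
        "LAYER": layer_name.upper(),
    }


subset_name = "imageMain"
if "{layer}" not in subset_name.lower():
    subset_name += "{Layer}"
subset_name = subset_name.format(**prepare_layer_fill("fgChar"))
```

With the hypothetical layer name `fgChar`, the subset template `imageMain{Layer}` fills to `imageMainFgChar`, which is how multiple groups end up with distinct subset names.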
@ -1,7 +1,12 @@
+import re
+
from Qt import QtWidgets
from openpype.pipeline import create
from openpype.hosts.photoshop import api as photoshop

+from openpype.lib import prepare_template_data
+from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
+

class CreateImage(create.LegacyCreator):
    """Image folder for publish."""

@ -75,6 +80,7 @@ class CreateImage(create.LegacyCreator):
            groups.append(group)

        creator_subset_name = self.data["subset"]
+        layer_name = ''
        for group in groups:
            long_names = []
            group.name = group.name.replace(stub.PUBLISH_ICON, ''). \

@ -82,7 +88,16 @@ class CreateImage(create.LegacyCreator):

            subset_name = creator_subset_name
            if len(groups) > 1:
-                subset_name += group.name.title().replace(" ", "")
+                layer_name = re.sub(
+                    "[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
+                    "",
+                    group.name
+                )
+            if "{layer}" not in subset_name.lower():
+                subset_name += "{Layer}"
+
+            layer_fill = prepare_template_data({"layer": layer_name})
+            subset_name = subset_name.format(**layer_fill)

            if group.long_name:
                for directory in group.long_name[::-1]:

@ -98,3 +113,7 @@ class CreateImage(create.LegacyCreator):
        # reusing existing group, need to rename afterwards
        if not create_group:
            stub.rename_layer(group.id, stub.PUBLISH_ICON + group.name)
+
+    @classmethod
+    def get_dynamic_data(cls, *args, **kwargs):
+        return {"layer": "{layer}"}

@ -4,6 +4,7 @@ import pyblish.api
import openpype.api
from openpype.pipeline import PublishXmlValidationError
from openpype.hosts.photoshop import api as photoshop
+from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS


class ValidateNamingRepair(pyblish.api.Action):

@ -50,6 +51,13 @@ class ValidateNamingRepair(pyblish.api.Action):
            subset_name = re.sub(invalid_chars, replace_char,
                                 instance.data["subset"])

+            # format from Tool Creator
+            subset_name = re.sub(
+                "[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
+                "",
+                subset_name
+            )
+
            layer_meta["subset"] = subset_name
            stub.imprint(instance_id, layer_meta)

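The repair action now strips every character that is not in `SUBSET_NAME_ALLOWED_SYMBOLS`. With an assumed character class (the real constant is imported from `openpype.pipeline.create` and may differ), the substitution behaves like this:

```python
import re

# Assumed value for illustration only; the actual constant lives in
# openpype.pipeline.create.SUBSET_NAME_ALLOWED_SYMBOLS.
SUBSET_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_."


def clean_subset_name(name):
    # Drop every run of characters outside the allowed set, exactly as
    # the re.sub call in the repair action does.
    return re.sub("[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS), "", name)
```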
@ -1,6 +1,6 @@
import os
import json
-from openpype.pipeline import legacy_io
+from openpype.client import get_asset_by_name


class HostContext:

@ -17,10 +17,10 @@ class HostContext:
        if not asset_name:
            return project_name

-        asset_doc = legacy_io.find_one(
-            {"type": "asset", "name": asset_name},
-            {"data.parents": 1}
+        asset_doc = get_asset_by_name(
+            project_name, asset_name, fields=["data.parents"]
        )

        parents = asset_doc.get("data", {}).get("parents") or []

        hierarchy = [project_name]

@ -1,10 +1,11 @@
from openpype.lib import NumberDef
-from openpype.hosts.testhost.api import pipeline
+from openpype.client import get_asset_by_name
from openpype.pipeline import (
    legacy_io,
    AutoCreator,
    CreatedInstance,
)
+from openpype.hosts.testhost.api import pipeline


class MyAutoCreator(AutoCreator):

@ -44,10 +45,7 @@ class MyAutoCreator(AutoCreator):
        host_name = legacy_io.Session["AVALON_APP"]

        if existing_instance is None:
-            asset_doc = legacy_io.find_one({
-                "type": "asset",
-                "name": asset_name
-            })
+            asset_doc = get_asset_by_name(project_name, asset_name)
            subset_name = self.get_subset_name(
                variant, task_name, asset_doc, project_name, host_name
            )

@ -69,10 +67,7 @@ class MyAutoCreator(AutoCreator):
            existing_instance["asset"] != asset_name
            or existing_instance["task"] != task_name
        ):
-            asset_doc = legacy_io.find_one({
-                "type": "asset",
-                "name": asset_name
-            })
+            asset_doc = get_asset_by_name(project_name, asset_name)
            subset_name = self.get_subset_name(
                variant, task_name, asset_doc, project_name, host_name
            )

@ -92,6 +92,21 @@ class TrayPublishCreator(Creator):
        for instance in instances:
            self._remove_instance_from_context(instance)

+    def _store_new_instance(self, new_instance):
+        """Tray publisher specific method to store instance.
+
+        Instance is stored into "workfile" of traypublisher and also add it
+        to CreateContext.
+
+        Args:
+            new_instance (CreatedInstance): Instance that should be stored.
+        """
+
+        # Host implementation of storing metadata about instance
+        HostContext.add_instance(new_instance.data_to_store())
+        # Add instance to current context
+        self._add_instance_to_context(new_instance)
+

class SettingsCreator(TrayPublishCreator):
    create_allow_context_change = True

@ -29,8 +29,6 @@ from openpype.lib import (
    UILabelDef
)

-from openpype.hosts.traypublisher.api.pipeline import HostContext
-

CLIP_ATTR_DEFS = [
    EnumDef(

@ -75,18 +73,13 @@ class EditorialClipInstanceCreatorBase(HiddenTrayPublishCreator):
        self.log.info(f"instance_data: {instance_data}")
        subset_name = instance_data["subset"]

-        return self._create_instance(subset_name, instance_data)
-
-    def _create_instance(self, subset_name, data):
-
        # Create new instance
-        new_instance = CreatedInstance(self.family, subset_name, data, self)
+        new_instance = CreatedInstance(
+            self.family, subset_name, instance_data, self
+        )
        self.log.info(f"instance_data: {pformat(new_instance.data)}")

-        # Host implementation of storing metadata about instance
-        HostContext.add_instance(new_instance.data_to_store())
-        # Add instance to current context
-        self._add_instance_to_context(new_instance)
+        self._store_new_instance(new_instance)

        return new_instance

@ -299,8 +292,10 @@ or updating already created. Publishing will create OTIO file.
            "editorialSourcePath": media_path,
            "otioTimeline": otio.adapters.write_to_string(otio_timeline)
        })

-        self._create_instance(self.family, subset_name, data)
+        new_instance = CreatedInstance(
+            self.family, subset_name, data, self
+        )
+        self._store_new_instance(new_instance)

    def _create_otio_timeline(self, sequence_path, fps):
        """Creating otio timeline from sequence path

@ -820,23 +815,6 @@ or updating already created. Publishing will create OTIO file.
            "Please check names in the input sequence files."
        )

-    def _create_instance(self, family, subset_name, instance_data):
-        """ CreatedInstance object creator
-
-        Args:
-            family (str): family name
-            subset_name (str): subset name
-            instance_data (dict): instance data
-        """
-        # Create new instance
-        new_instance = CreatedInstance(
-            family, subset_name, instance_data, self
-        )
-        # Host implementation of storing metadata about instance
-        HostContext.add_instance(new_instance.data_to_store())
-        # Add instance to current context
-        self._add_instance_to_context(new_instance)
-
    def get_pre_create_attr_defs(self):
        """ Creating pre-create attributes at creator plugin.

@ -115,6 +115,7 @@ from .transcoding import (
    get_ffmpeg_codec_args,
    get_ffmpeg_format_args,
    convert_ffprobe_fps_value,
+    convert_ffprobe_fps_to_float,
)
from .avalon_context import (
    CURRENT_DOC_SCHEMAS,

@ -287,6 +288,7 @@ __all__ = [
    "get_ffmpeg_codec_args",
    "get_ffmpeg_format_args",
    "convert_ffprobe_fps_value",
+    "convert_ffprobe_fps_to_float",

    "CURRENT_DOC_SCHEMAS",
    "PROJECT_NAME_ALLOWED_SYMBOLS",

@ -938,3 +938,40 @@ def convert_ffprobe_fps_value(str_value):
        fps = int(fps)

    return str(fps)
+
+
+def convert_ffprobe_fps_to_float(value):
+    """Convert string value of frame rate to float.
+
+    Copy of 'convert_ffprobe_fps_value' which raises exceptions on invalid
+    value, does not convert value to string and does not return "Unknown"
+    string.
+
+    Args:
+        value (str): Value to be converted.
+
+    Returns:
+        Float: Converted frame rate in float. If divisor in value is '0' then
+            '0.0' is returned.
+
+    Raises:
+        ValueError: Passed value is invalid for conversion.
+    """
+
+    if not value:
+        raise ValueError("Got empty value.")
+
+    items = value.split("/")
+    if len(items) == 1:
+        return float(items[0])
+
+    if len(items) > 2:
+        raise ValueError((
+            "FPS expression contains multiple dividers \"{}\"."
+        ).format(value))
+
+    dividend = float(items.pop(0))
+    divisor = float(items.pop(0))
+    if divisor == 0.0:
+        return 0.0
+    return dividend / divisor

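The new `convert_ffprobe_fps_to_float` is self-contained and easy to exercise; reproduced here from the hunk above with typical ffprobe frame-rate expressions:

```python
def convert_ffprobe_fps_to_float(value):
    # Reproduced from the transcoding hunk above.
    if not value:
        raise ValueError("Got empty value.")
    items = value.split("/")
    if len(items) == 1:
        return float(items[0])
    if len(items) > 2:
        raise ValueError(
            "FPS expression contains multiple dividers \"{}\".".format(value))
    dividend = float(items.pop(0))
    divisor = float(items.pop(0))
    if divisor == 0.0:
        return 0.0
    return dividend / divisor


convert_ffprobe_fps_to_float("30000/1001")  # NTSC rational, roughly 29.97
convert_ffprobe_fps_to_float("25")          # plain integer string
convert_ffprobe_fps_to_float("24/0")        # zero divisor maps to 0.0
```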
@ -87,6 +87,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):

        asset_versions_data_by_id = {}
        used_asset_versions = []

        # Iterate over components and publish
        for data in component_list:
            self.log.debug("data: {}".format(data))

@ -116,9 +117,6 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
                asset_version_status_ids_by_name
            )

-            # Component
-            self.create_component(session, asset_version_entity, data)
-
            # Store asset version and components items that were
            version_id = asset_version_entity["id"]
            if version_id not in asset_versions_data_by_id:

@ -135,6 +133,8 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
            if asset_version_entity not in used_asset_versions:
                used_asset_versions.append(asset_version_entity)

+        self._create_components(session, asset_versions_data_by_id)
+
        instance.data["ftrackIntegratedAssetVersionsData"] = (
            asset_versions_data_by_id
        )

@ -623,3 +623,40 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        session.rollback()
        session._configure_locations()
        six.reraise(tp, value, tb)
+
+    def _create_components(self, session, asset_versions_data_by_id):
+        for item in asset_versions_data_by_id.values():
+            asset_version_entity = item["asset_version"]
+            component_items = item["component_items"]
+
+            component_entities = session.query(
+                (
+                    "select id, name from Component where version_id is \"{}\""
+                ).format(asset_version_entity["id"])
+            ).all()
+
+            existing_component_names = {
+                component["name"]
+                for component in component_entities
+            }
+
+            contain_review = "ftrackreview-mp4" in existing_component_names
+            thumbnail_component_item = None
+            for component_item in component_items:
+                component_data = component_item.get("component_data") or {}
+                component_name = component_data.get("name")
+                if component_name == "ftrackreview-mp4":
+                    contain_review = True
+                elif component_name == "ftrackreview-image":
+                    thumbnail_component_item = component_item
+
+            if contain_review and thumbnail_component_item:
+                thumbnail_component_item["component_data"]["name"] = (
+                    "thumbnail"
+                )
+
+            # Component
+            for component_item in component_items:
+                self.create_component(
+                    session, asset_version_entity, component_item
+                )

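The new `_create_components` demotes the review image to a plain thumbnail whenever a review video exists for the asset version, either on the server already or among the items about to be created. That decision, extracted into a ftrack-free sketch:

```python
def resolve_thumbnail_component(component_items, existing_component_names=()):
    # A "ftrackreview-image" only stays reviewable when no
    # "ftrackreview-mp4" is present for the asset version.
    contain_review = "ftrackreview-mp4" in existing_component_names
    thumbnail_component_item = None
    for component_item in component_items:
        component_data = component_item.get("component_data") or {}
        component_name = component_data.get("name")
        if component_name == "ftrackreview-mp4":
            contain_review = True
        elif component_name == "ftrackreview-image":
            thumbnail_component_item = component_item

    if contain_review and thumbnail_component_item:
        thumbnail_component_item["component_data"]["name"] = "thumbnail"
    return component_items
```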
@ -3,7 +3,10 @@ import json
import copy
import pyblish.api

-from openpype.lib import get_ffprobe_streams
+from openpype.lib.transcoding import (
+    get_ffprobe_streams,
+    convert_ffprobe_fps_to_float,
+)
from openpype.lib.profiles_filtering import filter_profiles

@ -79,11 +82,6 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
            ).format(family))
            return

-        # Prepare FPS
-        instance_fps = instance.data.get("fps")
-        if instance_fps is None:
-            instance_fps = instance.context.data["fps"]
-
        status_name = self._get_asset_version_status_name(instance)

        # Base of component item data

@ -168,10 +166,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
            # Add item to component list
            component_list.append(thumbnail_item)

-        if (
-            not review_representations
-            and first_thumbnail_component is not None
-        ):
+        if first_thumbnail_component is not None:
            width = first_thumbnail_component_repre.get("width")
            height = first_thumbnail_component_repre.get("height")
            if not width or not height:

@ -253,20 +248,9 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
                first_thumbnail_component[
                    "asset_data"]["name"] = extended_asset_name

-            frame_start = repre.get("frameStartFtrack")
-            frame_end = repre.get("frameEndFtrack")
-            if frame_start is None or frame_end is None:
-                frame_start = instance.data["frameStart"]
-                frame_end = instance.data["frameEnd"]
-
-            # Frame end of uploaded video file should be duration in frames
-            # - frame start is always 0
-            # - frame end is duration in frames
-            duration = frame_end - frame_start + 1
-
-            fps = repre.get("fps")
-            if fps is None:
-                fps = instance_fps
+            component_meta = self._prepare_component_metadata(
+                instance, repre, repre_path, True
+            )

            # Change location
            review_item["component_path"] = repre_path

@ -275,11 +259,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
                # Default component name is "main".
                "name": "ftrackreview-mp4",
                "metadata": {
-                    "ftr_meta": json.dumps({
-                        "frameIn": 0,
-                        "frameOut": int(duration),
-                        "frameRate": float(fps)
-                    })
+                    "ftr_meta": json.dumps(component_meta)
                }
            }

@ -322,6 +302,13 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
            component_data = copy_src_item["component_data"]
            component_name = component_data["name"]
            component_data["name"] = component_name + "_src"
+            component_meta = self._prepare_component_metadata(
+                instance, repre, copy_src_item["component_path"], False
+            )
+            if component_meta:
+                component_data["metadata"] = {
+                    "ftr_meta": json.dumps(component_meta)
+                }
            component_list.append(copy_src_item)

        # Add others representations as component

@ -339,9 +326,17 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
            ):
                other_item["asset_data"]["name"] = extended_asset_name

-            other_item["component_data"] = {
+            component_meta = self._prepare_component_metadata(
+                instance, repre, published_path, False
+            )
+            component_data = {
                "name": repre["name"]
            }
+            if component_meta:
+                component_data["metadata"] = {
+                    "ftr_meta": json.dumps(component_meta)
+                }
+            other_item["component_data"] = component_data
|
||||
other_item["component_location_name"] = unmanaged_location_name
|
||||
other_item["component_path"] = published_path
|
||||
component_list.append(other_item)
|
||||
|
|
@ -424,3 +419,107 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
|
|||
return None
|
||||
|
||||
return matching_profile["status"] or None
|
||||
|
||||
def _prepare_component_metadata(
|
||||
self, instance, repre, component_path, is_review
|
||||
):
|
||||
extension = os.path.splitext(component_path)[-1]
|
||||
streams = []
|
||||
try:
|
||||
streams = get_ffprobe_streams(component_path)
|
||||
except Exception:
|
||||
self.log.debug((
|
||||
"Failed to retrieve information about intput {}"
|
||||
).format(component_path))
|
||||
|
||||
# Find video streams
|
||||
video_streams = [
|
||||
stream
|
||||
for stream in streams
|
||||
if stream["codec_type"] == "video"
|
||||
]
|
||||
# Skip if there are not video streams
|
||||
# - exr is special case which can have issues with reading through
|
||||
# ffmpegh but we want to set fps for it
|
||||
if not video_streams and extension not in [".exr"]:
|
||||
return {}
|
||||
|
||||
stream_width = None
|
||||
stream_height = None
|
||||
stream_fps = None
|
||||
frame_out = None
|
||||
for video_stream in video_streams:
|
||||
tmp_width = video_stream.get("width")
|
||||
tmp_height = video_stream.get("height")
|
||||
if tmp_width and tmp_height:
|
||||
stream_width = tmp_width
|
||||
stream_height = tmp_height
|
||||
|
||||
input_framerate = video_stream.get("r_frame_rate")
|
||||
duration = video_stream.get("duration")
|
||||
if input_framerate is None or duration is None:
|
||||
continue
|
||||
try:
|
||||
stream_fps = convert_ffprobe_fps_to_float(
|
||||
input_framerate
|
||||
)
|
||||
except ValueError:
|
||||
self.log.warning((
|
||||
"Could not convert ffprobe fps to float \"{}\""
|
||||
).format(input_framerate))
|
||||
continue
|
||||
|
||||
stream_width = tmp_width
|
||||
stream_height = tmp_height
|
||||
|
||||
self.log.debug("FPS from stream is {} and duration is {}".format(
|
||||
input_framerate, duration
|
||||
))
|
||||
frame_out = float(duration) * stream_fps
|
||||
break
|
||||
|
||||
# Prepare FPS
|
||||
instance_fps = instance.data.get("fps")
|
||||
if instance_fps is None:
|
||||
instance_fps = instance.context.data["fps"]
|
||||
|
||||
if not is_review:
|
||||
output = {}
|
||||
fps = stream_fps or instance_fps
|
||||
if fps:
|
||||
output["frameRate"] = fps
|
||||
|
||||
if stream_width and stream_height:
|
||||
output["width"] = int(stream_width)
|
||||
output["height"] = int(stream_height)
|
||||
return output
|
||||
|
||||
frame_start = repre.get("frameStartFtrack")
|
||||
frame_end = repre.get("frameEndFtrack")
|
||||
if frame_start is None or frame_end is None:
|
||||
frame_start = instance.data["frameStart"]
|
||||
frame_end = instance.data["frameEnd"]
|
||||
|
||||
fps = None
|
||||
repre_fps = repre.get("fps")
|
||||
if repre_fps is not None:
|
||||
repre_fps = float(repre_fps)
|
||||
|
||||
fps = stream_fps or repre_fps or instance_fps
|
||||
|
||||
# Frame end of uploaded video file should be duration in frames
|
||||
# - frame start is always 0
|
||||
# - frame end is duration in frames
|
||||
if not frame_out:
|
||||
frame_out = frame_end - frame_start + 1
|
||||
|
||||
# Ftrack documentation says that it is required to have
|
||||
# 'width' and 'height' in review component. But with those values
|
||||
# review video does not play.
|
||||
component_meta = {
|
||||
"frameIn": 0,
|
||||
"frameOut": frame_out,
|
||||
"frameRate": float(fps)
|
||||
}
|
||||
|
||||
return component_meta
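The new `_prepare_component_metadata` method hinges on converting ffprobe's ratio-style `r_frame_rate` (e.g. `"24000/1001"`) to a float and packing it into the `ftr_meta` JSON. A minimal standalone sketch of that step, assuming the behavior of `convert_ffprobe_fps_to_float`; function names here are illustrative stand-ins, not the actual OpenPype helpers:

```python
import json


def ffprobe_fps_to_float(value):
    # ffprobe reports frame rate as a ratio string, e.g. "24000/1001",
    # or sometimes as a plain number like "25".
    parts = value.split("/")
    if len(parts) == 1:
        return float(parts[0])
    if len(parts) == 2:
        return float(parts[0]) / float(parts[1])
    raise ValueError("Invalid frame rate value: {}".format(value))


def build_review_meta(duration_seconds, fps):
    # Review components start at frame 0 and end at duration in frames.
    return {
        "frameIn": 0,
        "frameOut": duration_seconds * fps,
        "frameRate": float(fps),
    }


fps = ffprobe_fps_to_float("24000/1001")  # NTSC-style fractional rate
meta = build_review_meta(duration_seconds=2.0, fps=fps)
ftr_meta = json.dumps(meta)
```

The ratio form avoids floating-point drift in ffprobe's output; converting it once and reusing the float everywhere keeps `frameOut` and `frameRate` consistent.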
@@ -39,10 +39,12 @@ class CollectKitsuEntities(pyblish.api.ContextPlugin):
            kitsu_entity = gazu.asset.get_asset(zou_asset_data["id"])

        if not kitsu_entity:
            raise AssertionError(f"{entity_type} not found in kitsu!")
            raise AssertionError("{} not found in kitsu!".format(entity_type))

        context.data["kitsu_entity"] = kitsu_entity
        self.log.debug(f"Collect kitsu {entity_type}: {kitsu_entity}")
        self.log.debug(
            "Collect kitsu {}: {}".format(entity_type, kitsu_entity)
        )

        if zou_task_data:
            kitsu_task = gazu.task.get_task(zou_task_data["id"])
@@ -276,7 +276,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
        project_doc = create_project(project_name, project_name, dbcon=dbcon)

    # Project data and tasks
    project_data = project["data"] or {}
    project_data = project_doc["data"] or {}

    # Build project code and update Kitsu
    project_code = project.get("code")

@@ -305,6 +305,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
        "config.tasks": {
            t["name"]: {"short_name": t.get("short_name", t["name"])}
            for t in gazu.task.all_task_types_for_project(project)
            or gazu.task.all_task_types()
        },
        "data": project_data,
    }
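The `config.tasks` comprehension above falls back to the studio-wide task types when the project defines none. A small sketch of that fallback with hypothetical data (the function name and sample records are illustrative, not the gazu API):

```python
def build_task_config(project_task_types, default_task_types):
    # Fall back to the studio-wide task types when the project defines
    # none, mirroring `all_task_types_for_project(...) or all_task_types()`.
    task_types = project_task_types or default_task_types
    return {
        t["name"]: {"short_name": t.get("short_name", t["name"])}
        for t in task_types
    }


# Hypothetical studio-wide defaults; one entry lacks a short name, so its
# full name is reused as the short name.
defaults = [
    {"name": "Animation", "short_name": "anim"},
    {"name": "Lighting"},
]
config = build_task_config([], defaults)
```

Using `or` here treats an empty project list as "not configured", which is exactly the behavior the diff relies on.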
@@ -6,6 +6,7 @@ import inspect
from uuid import uuid4
from contextlib import contextmanager

from openpype.client import get_assets
from openpype.host import INewPublisher
from openpype.pipeline import legacy_io
from openpype.pipeline.mongodb import (

@@ -1082,15 +1083,10 @@ class CreateContext:
            for asset_name in task_names_by_asset_name.keys()
            if asset_name is not None
        ]
        asset_docs = list(self.dbcon.find(
            {
                "type": "asset",
                "name": {"$in": asset_names}
            },
            {
                "name": True,
                "data.tasks": True
            }
        asset_docs = list(get_assets(
            self.project_name,
            asset_names=asset_names,
            fields=["name", "data.tasks"]
        ))

        task_names_by_asset_name = {}
@@ -63,6 +63,8 @@ class RenderInstance(object):

    family = attr.ib(default="renderlayer")
    families = attr.ib(default=["renderlayer"])  # list of families
    # True if should be rendered on farm, eg not integrate
    farm = attr.ib(default=False)

    # format settings
    multipartExr = attr.ib(default=False)  # flag for multipart exrs
@@ -2,6 +2,7 @@ import os
import copy
import logging

from openpype.client import get_project
from . import legacy_io
from .plugin_discover import (
    discover,

@@ -85,13 +86,8 @@ class TemplateResolver(ThumbnailResolver):
            self.log.debug("Thumbnail entity does not have set template")
            return

        project = self.dbcon.find_one(
            {"type": "project"},
            {
                "name": True,
                "data.code": True
            }
        )
        project_name = self.dbcon.active_project()
        project = get_project(project_name, fields=["name", "data.code"])

        template_data = copy.deepcopy(
            thumbnail_entity["data"].get("template_data") or {}
@@ -360,6 +360,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
                os.unlink(f)

            new_repre.update({
                "fps": temp_data["fps"],
                "name": "{}_{}".format(output_name, output_ext),
                "outputName": output_name,
                "outputDef": output_def,

24  openpype/plugins/publish/help/validate_containers.xml  Normal file

@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Not up-to-date assets</title>
        <description>
## Obsolete containers found

Scene contains one or more obsolete loaded containers, eg. items loaded into the scene by a Loader.

### How to repair?

Use 'Scene Inventory' and update all highlighted old containers to latest OR
refresh Publish and switch the 'Validate Containers' toggle on the 'Options' tab.

WARNING: Skipping this validator will result in publishing (and probably rendering) an old version of the loaded assets.
        </description>
        <detail>
### __Detailed Info__ (optional)

This validator protects you from rendering obsolete content: someone modified a referenced asset in this scene, and by skipping this you would ignore changes to that asset.
        </detail>
    </error>
</root>
@@ -23,41 +23,6 @@ from openpype.pipeline.publish import KnownPublishError
log = logging.getLogger(__name__)


def assemble(files):
    """Convenience `clique.assemble` wrapper for files of a single collection.

    Unlike `clique.assemble` this wrapper does not allow more than a single
    Collection nor any remainder files. An error is raised unless exactly
    one collection is assembled.

    Returns:
        clique.Collection: A single sequence Collection

    Raises:
        ValueError: Error is raised when files do not result in a single
            collected Collection.

    """
    # todo: move this to lib?
    # Get the sequence as a collection. The files must be of a single
    # sequence and have no remainder outside of the collections.
    patterns = [clique.PATTERNS["frames"]]
    collections, remainder = clique.assemble(files,
                                             minimum_items=1,
                                             patterns=patterns)
    if not collections:
        raise ValueError("No collections found in files: "
                         "{}".format(files))
    if remainder:
        raise ValueError("Files found not detected as part"
                         " of a sequence: {}".format(remainder))
    if len(collections) > 1:
        raise ValueError("Files in sequence are not part of a"
                         " single sequence collection: "
                         "{}".format(collections))
    return collections[0]


def get_instance_families(instance):
    """Get all families of the instance"""
    # todo: move this to lib?

@@ -576,8 +541,18 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        if any(os.path.isabs(fname) for fname in files):
            raise KnownPublishError("Given file names contain full paths")

        src_collection = assemble(files)
        src_collections, remainders = clique.assemble(files)
        if len(files) < 2 or len(src_collections) != 1 or remainders:
            raise KnownPublishError((
                "Files of representation do not contain a proper"
                " file sequence.\nCollected collections: {}"
                "\nCollected remainders: {}"
            ).format(
                ", ".join([str(col) for col in src_collections]),
                ", ".join([str(rem) for rem in remainders])
            ))

        src_collection = src_collections[0]
        destination_indexes = list(src_collection.indexes)
        # Use last frame for minimum padding
        # - that should cover both 'udim' and 'frame' minimum padding

@@ -609,18 +584,27 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        # a Frame or UDIM tile set for the template data. We use the first
        # index of the destination for that because that could've shifted
        # from the source indexes, etc.
        first_index_padded = get_frame_padded(frame=destination_indexes[0],
                                              padding=destination_padding)
        if is_udim:
            # UDIM representations handle ranges in a different manner
            template_data["udim"] = first_index_padded
        else:
            template_data["frame"] = first_index_padded
        first_index_padded = get_frame_padded(
            frame=destination_indexes[0],
            padding=destination_padding
        )

        # Construct destination collection from template
        anatomy_filled = anatomy.format(template_data)
        template_filled = anatomy_filled[template_name]["path"]
        repre_context = template_filled.used_values
        repre_context = None
        dst_filepaths = []
        for index in destination_indexes:
            if is_udim:
                template_data["udim"] = index
            else:
                template_data["frame"] = index
            anatomy_filled = anatomy.format(template_data)
            template_filled = anatomy_filled[template_name]["path"]
            dst_filepaths.append(template_filled)
            if repre_context is None:
                self.log.debug(
                    "Template filled: {}".format(str(template_filled))
                )
                repre_context = template_filled.used_values

        # Make sure context contains frame
        # NOTE: Frame would not be available only if template does not

@@ -628,12 +612,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        if not is_udim:
            repre_context["frame"] = first_index_padded

        self.log.debug("Template filled: {}".format(str(template_filled)))
        dst_collection = assemble([os.path.normpath(template_filled)])

        # Update the destination indexes and padding
        dst_collection.indexes.clear()
        dst_collection.indexes.update(set(destination_indexes))
        dst_collection = clique.assemble(dst_filepaths)[0][0]
        dst_collection.padding = destination_padding
        if len(src_collection.indexes) != len(dst_collection.indexes):
            raise KnownPublishError((
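The integrator loop above formats one destination path per index with a zero-padded frame number. A rough standalone sketch of that padding-and-fill step, with a plain format string standing in for the anatomy template result (the helper names mirror the diff but are illustrative):

```python
def get_frame_padded(frame, padding):
    # Zero-pad a frame number to a fixed width, an illustrative stand-in
    # for the integrator's helper of the same name.
    return "{:0{padding}d}".format(frame, padding=padding)


def fill_sequence(template, indexes, padding):
    # Format one destination path per index, like the integrator's loop
    # over `destination_indexes`; `template` is a hypothetical path
    # template in place of the real anatomy-filled result.
    return [
        template.format(frame=get_frame_padded(index, padding))
        for index in indexes
    ]


# Padding is derived from the last (widest) frame, so 1000 -> 4 digits.
paths = fill_sequence("render/shot010.{frame}.exr", [998, 999, 1000], padding=4)
```

Deriving the padding from the last frame, as the diff's comment notes, covers both `frame` and `udim` numbering in one rule.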
@@ -1,5 +1,9 @@
import pyblish.api
from openpype.pipeline.load import any_outdated_containers
from openpype.pipeline import (
    PublishXmlValidationError,
    OptionalPyblishPluginMixin
)


class ShowInventory(pyblish.api.Action):

@@ -14,7 +18,9 @@ class ShowInventory(pyblish.api.Action):
        host_tools.show_scene_inventory()


class ValidateContainers(pyblish.api.ContextPlugin):
class ValidateContainers(OptionalPyblishPluginMixin,
                         pyblish.api.ContextPlugin):

    """Containers must be updated to the latest version on publish."""

    label = "Validate Containers"

@@ -24,5 +30,9 @@ class ValidateContainers(pyblish.api.ContextPlugin):
    actions = [ShowInventory]

    def process(self, context):
        if not self.is_active(context.data):
            return

        if any_outdated_containers():
            raise ValueError("There are outdated containers in the scene.")
            msg = "There are outdated containers in the scene."
            raise PublishXmlValidationError(self, msg)
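The validator above only needs a boolean answer from `any_outdated_containers()`. A simplified sketch of what such a check amounts to, using made-up container records rather than the real scene inventory (all names and data here are hypothetical):

```python
def any_outdated(containers, latest_versions):
    # A container is outdated when a newer version of its subset exists.
    # This is a simplified stand-in for `any_outdated_containers()`, which
    # queries the database for the real latest versions.
    return any(
        container["version"] < latest_versions.get(
            container["subset"], container["version"]
        )
        for container in containers
    )


# Hypothetical loaded containers and the latest known versions per subset.
containers = [
    {"subset": "modelMain", "version": 2},
    {"subset": "lookMain", "version": 5},
]
latest = {"modelMain": 3, "lookMain": 5}
outdated = any_outdated(containers, latest)
```

Because `any()` short-circuits, the scan stops at the first stale container, which is all the validator needs before raising.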
@@ -3,6 +3,8 @@ import re
import sys
import logging

from openpype.client import get_asset_by_name, get_versions

# Pipeline imports
from openpype.hosts.fusion import api
import openpype.hosts.fusion.api.lib as fusion_lib

@@ -19,9 +21,6 @@ from openpype.lib.avalon_context import get_workdir_from_session

log = logging.getLogger("Update Slap Comp")

self = sys.modules[__name__]
self._project = None


def _format_version_folder(folder):
    """Format a version folder based on the filepath

@@ -131,8 +130,8 @@ def update_frame_range(comp, representations):
    """

    version_ids = [r["parent"] for r in representations]
    versions = legacy_io.find({"type": "version", "_id": {"$in": version_ids}})
    versions = list(versions)
    project_name = legacy_io.active_project()
    versions = list(get_versions(project_name, version_ids=version_ids))

    start = min(v["data"]["frameStart"] for v in versions)
    end = max(v["data"]["frameEnd"] for v in versions)

@@ -162,15 +161,10 @@ def switch(asset_name, filepath=None, new=True):

    # Assert asset name exists
    # It is better to do this here than to wait till switch_shot does it
    asset = legacy_io.find_one({"type": "asset", "name": asset_name})
    project_name = legacy_io.active_project()
    asset = get_asset_by_name(project_name, asset_name)
    assert asset, "Could not find '%s' in the database" % asset_name

    # Get current project
    self._project = legacy_io.find_one({
        "type": "project",
        "name": legacy_io.Session["AVALON_PROJECT"]
    })

    # Go to comp
    if not filepath:
        current_comp = api.get_current_comp()
@@ -31,6 +31,37 @@
            }
        ]
    },
    "RenderSettings": {
        "apply_render_settings": true,
        "default_render_image_folder": "",
        "aov_separator": "underscore",
        "reset_current_frame": false,
        "arnold_renderer": {
            "image_prefix": "maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>",
            "image_format": "exr",
            "multilayer_exr": true,
            "tiled": true,
            "aov_list": [],
            "additional_options": {}
        },
        "vray_renderer": {
            "image_prefix": "maya/<scene>/<Layer>/<Layer>",
            "engine": "1",
            "image_format": "png",
            "aov_list": [],
            "additional_options": {}
        },
        "redshift_renderer": {
            "image_prefix": "maya/<Scene>/<RenderLayer>/<RenderLayer>",
            "primary_gi_engine": "0",
            "secondary_gi_engine": "0",
            "image_format": "iff",
            "multilayer_exr": true,
            "force_combine": true,
            "aov_list": [],
            "additional_options": {}
        }
    },
    "create": {
        "CreateLook": {
            "enabled": true,

@@ -43,9 +74,7 @@
            "enabled": true,
            "defaults": [
                "Main"
            ],
            "aov_separator": "underscore",
            "default_render_image_folder": "renders"
            ]
        },
        "CreateUnrealStaticMesh": {
            "enabled": true,

@@ -90,9 +119,11 @@
        },
        "CreateAnimation": {
            "enabled": true,
            "write_color_sets": false,
            "defaults": [
                "Main"
            ]

        },
        "CreateAss": {
            "enabled": true,

@@ -134,6 +165,7 @@
        },
        "CreatePointCache": {
            "enabled": true,
            "write_color_sets": false,
            "defaults": [
                "Main"
            ]

@@ -954,4 +986,4 @@
        "ValidateNoAnimation": false
    }
}
}
}
@@ -57,6 +57,10 @@
        {
            "type": "schema",
            "name": "schema_scriptsmenu"
        },
        {
            "type": "schema",
            "name": "schema_maya_render_settings"
        },
        {
            "type": "schema",
            "name": "schema_maya_create"
@@ -29,42 +29,9 @@
                }
            ]
        },
        {
            "type": "dict",
            "collapsible": true,
            "key": "CreateRender",
            "label": "Create Render",
            "checkbox_key": "enabled",
            "children": [
                {
                    "type": "boolean",
                    "key": "enabled",
                    "label": "Enabled"
                },
                {
                    "type": "list",
                    "key": "defaults",
                    "label": "Default Subsets",
                    "object_type": "text"
                },
                {
                    "key": "aov_separator",
                    "label": "AOV Separator character",
                    "type": "enum",
                    "multiselection": false,
                    "default": "underscore",
                    "enum_items": [
                        {"dash": "- (dash)"},
                        {"underscore": "_ (underscore)"},
                        {"dot": ". (dot)"}
                    ]
                },
                {
                    "type": "text",
                    "key": "default_render_image_folder",
                    "label": "Default render image folder"
                }
            ]
        {
            "type": "schema",
            "name": "schema_maya_create_render"
        },
        {
            "type": "dict",

@@ -143,6 +110,57 @@
                }
            ]
        },
        {
            "type": "dict",
            "collapsible": true,
            "key": "CreateAnimation",
            "label": "Create Animation",
            "checkbox_key": "enabled",
            "children": [
                {
                    "type": "boolean",
                    "key": "enabled",
                    "label": "Enabled"
                },
                {
                    "type": "boolean",
                    "key": "write_color_sets",
                    "label": "Write Color Sets"
                },
                {
                    "type": "list",
                    "key": "defaults",
                    "label": "Default Subsets",
                    "object_type": "text"
                }
            ]
        },
        {
            "type": "dict",
            "collapsible": true,
            "key": "CreatePointCache",
            "label": "Create Point Cache",
            "checkbox_key": "enabled",
            "children": [
                {
                    "type": "boolean",
                    "key": "enabled",
                    "label": "Enabled"
                },
                {
                    "type": "boolean",
                    "key": "write_color_sets",
                    "label": "Write Color Sets"
                },
                {
                    "type": "list",
                    "key": "defaults",
                    "label": "Default Subsets",
                    "object_type": "text"
                }
            ]
        },

        {
            "type": "schema_template",
            "name": "template_create_plugin",

@@ -159,10 +177,6 @@
                "key": "CreateMultiverseUsdOver",
                "label": "Create Multiverse USD Override"
            },
            {
                "key": "CreateAnimation",
                "label": "Create Animation"
            },
            {
                "key": "CreateAss",
                "label": "Create Ass"

@@ -187,10 +201,6 @@
                "key": "CreateModel",
                "label": "Create Model"
            },
            {
                "key": "CreatePointCache",
                "label": "Create Cache"
            },
            {
                "key": "CreateRenderSetup",
                "label": "Create Render Setup"
@@ -0,0 +1,20 @@
{
    "type": "dict",
    "collapsible": true,
    "key": "CreateRender",
    "label": "Create Render",
    "checkbox_key": "enabled",
    "children": [
        {
            "type": "boolean",
            "key": "enabled",
            "label": "Enabled"
        },
        {
            "type": "list",
            "key": "defaults",
            "label": "Default Subsets",
            "object_type": "text"
        }
    ]
}

@@ -0,0 +1,418 @@
{
    "type": "dict",
    "collapsible": true,
    "key": "RenderSettings",
    "label": "Render Settings",
    "children": [
        {
            "type": "boolean",
            "key": "apply_render_settings",
            "label": "Apply Render Settings on creation"
        },
        {
            "type": "text",
            "key": "default_render_image_folder",
            "label": "Default render image folder"
        },
        {
            "key": "aov_separator",
            "label": "AOV Separator character",
            "type": "enum",
            "multiselection": false,
            "default": "underscore",
            "enum_items": [
                {"dash": "- (dash)"},
                {"underscore": "_ (underscore)"},
                {"dot": ". (dot)"}
            ]
        },
        {
            "key": "reset_current_frame",
            "label": "Reset Current Frame",
            "type": "boolean"
        },
        {
            "type": "dict",
            "collapsible": true,
            "key": "arnold_renderer",
            "label": "Arnold Renderer",
            "is_group": true,
            "children": [
                {
                    "key": "image_prefix",
                    "label": "Image prefix template",
                    "type": "text"
                },
                {
                    "key": "image_format",
                    "label": "Output Image Format",
                    "type": "enum",
                    "multiselection": false,
                    "defaults": "exr",
                    "enum_items": [
                        {"jpeg": "jpeg"},
                        {"png": "png"},
                        {"deepexr": "deep exr"},
                        {"tif": "tif"},
                        {"exr": "exr"},
                        {"maya": "maya"},
                        {"mtoa_shaders": "mtoa_shaders"}
                    ]
                },
                {
                    "key": "multilayer_exr",
                    "label": "Multilayer (exr)",
                    "type": "boolean"
                },
                {
                    "key": "tiled",
                    "label": "Tiled (tif, exr)",
                    "type": "boolean"
                },
                {
                    "key": "aov_list",
                    "label": "AOVs to create",
                    "type": "enum",
                    "multiselection": true,
                    "defaults": "empty",
                    "enum_items": [
                        {"empty": "< empty >"},
                        {"ID": "ID"},
                        {"N": "N"},
                        {"P": "P"},
                        {"Pref": "Pref"},
                        {"RGBA": "RGBA"},
                        {"Z": "Z"},
                        {"albedo": "albedo"},
                        {"background": "background"},
                        {"coat": "coat"},
                        {"coat_albedo": "coat_albedo"},
                        {"coat_direct": "coat_direct"},
                        {"coat_indirect": "coat_indirect"},
                        {"cputime": "cputime"},
                        {"crypto_asset": "crypto_asset"},
                        {"crypto_material": "cypto_material"},
                        {"crypto_object": "crypto_object"},
                        {"diffuse": "diffuse"},
                        {"diffuse_albedo": "diffuse_albedo"},
                        {"diffuse_direct": "diffuse_direct"},
                        {"diffuse_indirect": "diffuse_indirect"},
                        {"direct": "direct"},
                        {"emission": "emission"},
                        {"highlight": "highlight"},
                        {"indirect": "indirect"},
                        {"motionvector": "motionvector"},
                        {"opacity": "opacity"},
                        {"raycount": "raycount"},
                        {"rim_light": "rim_light"},
                        {"shadow": "shadow"},
                        {"shadow_diff": "shadow_diff"},
                        {"shadow_mask": "shadow_mask"},
                        {"shadow_matte": "shadow_matte"},
                        {"sheen": "sheen"},
                        {"sheen_albedo": "sheen_albedo"},
                        {"sheen_direct": "sheen_direct"},
                        {"sheen_indirect": "sheen_indirect"},
                        {"specular": "specular"},
                        {"specular_albedo": "specular_albedo"},
                        {"specular_direct": "specular_direct"},
                        {"specular_indirect": "specular_indirect"},
                        {"sss": "sss"},
                        {"sss_albedo": "sss_albedo"},
                        {"sss_direct": "sss_direct"},
                        {"sss_indirect": "sss_indirect"},
                        {"transmission": "transmission"},
                        {"transmission_albedo": "transmission_albedo"},
                        {"transmission_direct": "transmission_direct"},
                        {"transmission_indirect": "transmission_indirect"},
                        {"volume": "volume"},
                        {"volume_Z": "volume_Z"},
                        {"volume_albedo": "volume_albedo"},
                        {"volume_direct": "volume_direct"},
                        {"volume_indirect": "volume_indirect"},
                        {"volume_opacity": "volume_opacity"}
                    ]
                },
                {
                    "type": "label",
                    "label": "Add additional options - put attribute and value, like <code>AASamples</code>"
                },
                {
                    "type": "dict-modifiable",
                    "store_as_list": true,
                    "key": "additional_options",
                    "label": "Additional Renderer Options",
                    "use_label_wrap": true,
                    "object_type": {
                        "type": "text"
                    }
                }
            ]
        },
        {
            "type": "dict",
            "collapsible": true,
            "key": "vray_renderer",
            "label": "V-Ray Renderer",
            "is_group": true,
            "children": [
                {
                    "key": "image_prefix",
                    "label": "Image prefix template",
                    "type": "text"
                },
                {
                    "key": "engine",
                    "label": "Production Engine",
                    "type": "enum",
                    "multiselection": false,
                    "defaults": "1",
                    "enum_items": [
                        {"1": "V-Ray"},
                        {"2": "V-Ray GPU"}
                    ]
                },
                {
                    "key": "image_format",
                    "label": "Output Image Format",
                    "type": "enum",
                    "multiselection": false,
                    "defaults": "exr",
                    "enum_items": [
                        {"png": "png"},
                        {"jpg": "jpg"},
                        {"vrimg": "vrimg"},
                        {"hdr": "hdr"},
                        {"exr": "exr"},
                        {"exr (multichannel)": "exr (multichannel)"},
                        {"exr (deep)": "exr (deep)"},
                        {"tga": "tga"},
                        {"bmp": "bmp"},
                        {"sgi": "sgi"}
                    ]
                },
                {
                    "key": "aov_list",
                    "label": "AOVs to create",
                    "type": "enum",
                    "multiselection": true,
                    "defaults": "empty",
                    "enum_items": [
                        {"empty": "< empty >"},
                        {"atmosphereChannel": "atmosphere"},
                        {"backgroundChannel": "background"},
                        {"bumpNormalsChannel": "bumpnormals"},
                        {"causticsChannel": "caustics"},
                        {"coatFilterChannel": "coat_filter"},
                        {"coatGlossinessChannel": "coatGloss"},
                        {"coatReflectionChannel": "coat_reflection"},
                        {"vrayCoatChannel": "coat_specular"},
                        {"CoverageChannel": "coverage"},
                        {"cryptomatteChannel": "cryptomatte"},
                        {"customColor": "custom_color"},
                        {"drBucketChannel": "DR"},
                        {"denoiserChannel": "denoiser"},
                        {"diffuseChannel": "diffuse"},
                        {"ExtraTexElement": "extraTex"},
                        {"giChannel": "GI"},
                        {"LightMixElement": "None"},
                        {"lightingChannel": "lighting"},
                        {"LightingAnalysisChannel": "LightingAnalysis"},
                        {"materialIDChannel": "materialID"},
                        {"MaterialSelectElement": "materialSelect"},
                        {"matteShadowChannel": "matteShadow"},
                        {"MultiMatteElement": "multimatte"},
                        {"multimatteIDChannel": "multimatteID"},
                        {"normalsChannel": "normals"},
                        {"nodeIDChannel": "objectId"},
                        {"objectSelectChannel": "objectSelect"},
                        {"rawCoatFilterChannel": "raw_coat_filter"},
                        {"rawCoatReflectionChannel": "raw_coat_reflection"},
                        {"rawDiffuseFilterChannel": "rawDiffuseFilter"},
                        {"rawGiChannel": "rawGI"},
                        {"rawLightChannel": "rawLight"},
                        {"rawReflectionChannel": "rawReflection"},
                        {"rawReflectionFilterChannel": "rawReflectionFilter"},
                        {"rawRefractionChannel": "rawRefraction"},
                        {"rawRefractionFilterChannel": "rawRefractionFilter"},
                        {"rawShadowChannel": "rawShadow"},
                        {"rawSheenFilterChannel": "raw_sheen_filter"},
                        {"rawSheenReflectionChannel": "raw_sheen_reflection"},
                        {"rawTotalLightChannel": "rawTotalLight"},
                        {"reflectIORChannel": "reflIOR"},
                        {"reflectChannel": "reflect"},
                        {"reflectionFilterChannel": "reflectionFilter"},
                        {"reflectGlossinessChannel": "reflGloss"},
                        {"refractChannel": "refract"},
                        {"refractionFilterChannel": "refractionFilter"},
                        {"refractGlossinessChannel": "refrGloss"},
                        {"renderIDChannel": "renderId"},
                        {"FastSSS2Channel": "SSS"},
                        {"sampleRateChannel": "sampleRate"},
                        {"samplerInfo": "samplerInfo"},
                        {"selfIllumChannel": "selfIllum"},
                        {"shadowChannel": "shadow"},
                        {"sheenFilterChannel": "sheen_filter"},
                        {"sheenGlossinessChannel": "sheenGloss"},
                        {"sheenReflectionChannel": "sheen_reflection"},
                        {"vraySheenChannel": "sheen_specular"},
                        {"specularChannel": "specular"},
                        {"Toon": "Toon"},
                        {"toonLightingChannel": "toonLighting"},
                        {"toonSpecularChannel": "toonSpecular"},
                        {"totalLightChannel": "totalLight"},
                        {"unclampedColorChannel": "unclampedColor"},
                        {"VRScansPaintMaskChannel": "VRScansPaintMask"},
                        {"VRScansZoneMaskChannel": "VRScansZoneMask"},
                        {"velocityChannel": "velocity"},
                        {"zdepthChannel": "zDepth"},
                        {"LightSelectElement": "lightselect"}
                    ]
                },
                {
                    "type": "label",
                    "label": "Add additional options - put attribute and value, like <code>aaFilterSize</code>"
                },
                {
                    "type": "dict-modifiable",
                    "store_as_list": true,
                    "key": "additional_options",
                    "label": "Additional Renderer Options",
                    "use_label_wrap": true,
                    "object_type": {
                        "type": "text"
                    }
                }
            ]
        },
        {
            "type": "dict",
            "collapsible": true,
            "key": "redshift_renderer",
            "label": "Redshift Renderer",
            "is_group": true,
            "children": [
                {
                    "key": "image_prefix",
                    "label": "Image prefix template",
                    "type": "text"
                },
                {
                    "key": "primary_gi_engine",
                    "label": "Primary GI Engine",
                    "type": "enum",
                    "multiselection": false,
                    "defaults": "0",
                    "enum_items": [
                        {"0": "None"},
                        {"1": "Photon Map"},
                        {"2": "Irradiance Cache"},
                        {"3": "Brute Force"}
                    ]
                },
                {
                    "key": "secondary_gi_engine",
                    "label": "Secondary GI Engine",
                    "type": "enum",
                    "multiselection": false,
                    "defaults": "0",
                    "enum_items": [
                        {"0": "None"},
                        {"1": "Photon Map"},
                        {"2": "Irradiance Cache"},
                        {"3": "Brute Force"}
                    ]
                },
                {
                    "key": "image_format",
                    "label": "Output Image Format",
                    "type": "enum",
                    "multiselection": false,
                    "defaults": "exr",
                    "enum_items": [
                        {"iff": "Maya IFF"},
                        {"exr": "OpenEXR"},
                        {"tif": "TIFF"},
                        {"png": "PNG"},
                        {"tga": "Targa"},
                        {"jpg": "JPEG"}
|
||||
]
|
||||
},
|
||||
{
|
||||
"key": "multilayer_exr",
|
||||
"label": "Multilayer (exr)",
|
||||
"type": "boolean"
|
||||
},
|
||||
{
|
||||
"key": "force_combine",
|
||||
"label": "Force combine beauty and AOVs",
|
||||
"type": "boolean"
|
||||
},
|
||||
{
|
||||
"key": "aov_list",
|
||||
"label": "AOVs to create",
|
||||
"type": "enum",
|
||||
"multiselection": true,
|
||||
"defaults": "empty",
|
||||
"enum_items": [
|
||||
{"empty": "< none >"},
|
||||
{"AO": "Ambient Occlusion"},
|
||||
{"Background": "Background"},
|
||||
{"Beauty": "Beauty"},
|
||||
{"BumpNormals": "Bump Normals"},
|
||||
{"Caustics": "Caustics"},
|
||||
{"CausticsRaw": "Caustics Raw"},
|
||||
{"Cryptomatte": "Cryptomatte"},
|
||||
{"Custom": "Custom"},
|
||||
{"Z": "Depth"},
|
||||
{"DiffuseFilter": "Diffuse Filter"},
|
||||
{"DiffuseLighting": "Diffuse Lighting"},
|
||||
{"DiffuseLightingRaw": "Diffuse Lighting Raw"},
|
||||
{"Emission": "Emission"},
|
||||
{"GI": "Global Illumination"},
|
||||
{"GIRaw": "Global Illumination Raw"},
|
||||
{"Matte": "Matte"},
|
||||
{"MotionVectors": "Ambient Occlusion"},
|
||||
{"N": "Normals"},
|
||||
{"ID": "ObjectID"},
|
||||
{"ObjectBumpNormal": "Object-Space Bump Normals"},
|
||||
{"ObjectPosition": "Object-Space Positions"},
|
||||
{"PuzzleMatte": "Puzzle Matte"},
|
||||
{"Reflections": "Reflections"},
|
||||
{"ReflectionsFilter": "Reflections Filter"},
|
||||
{"ReflectionsRaw": "Reflections Raw"},
|
||||
{"Refractions": "Refractions"},
|
||||
{"RefractionsFilter": "Refractions Filter"},
|
||||
{"RefractionsRaw": "Refractions Filter"},
|
||||
{"Shadows": "Shadows"},
|
||||
{"SpecularLighting": "Specular Lighting"},
|
||||
{"SSS": "Sub Surface Scatter"},
|
||||
{"SSSRaw": "Sub Surface Scatter Raw"},
|
||||
{"TotalDiffuseLightingRaw": "Total Diffuse Lighting Raw"},
|
||||
{"TotalTransLightingRaw": "Total Translucency Filter"},
|
||||
{"TransTint": "Translucency Filter"},
|
||||
{"TransGIRaw": "Translucency Lighting Raw"},
|
||||
{"VolumeFogEmission": "Volume Fog Emission"},
|
||||
{"VolumeFogTint": "Volume Fog Tint"},
|
||||
{"VolumeLighting": "Volume Lighting"},
|
||||
{"P": "World Position"}
|
||||
]
|
||||
},
|
||||
{
|
||||
"type": "label",
|
||||
"label": "Add additional options - put attribute and value, like <code>reflectionMaxTraceDepth</code>"
|
||||
},
|
||||
{
|
||||
"type": "dict-modifiable",
|
||||
"store_as_list": true,
|
||||
"key": "additional_options",
|
||||
"label": "Additional Renderer Options",
|
||||
"use_label_wrap": true,
|
||||
"object_type": {
|
||||
"type": "text"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
|
@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
 """Package declaring Pype version."""
-__version__ = "3.12.2"
+__version__ = "3.12.3-nightly.2"
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "OpenPype"
-version = "3.12.2" # OpenPype
+version = "3.12.3-nightly.2" # OpenPype
 description = "Open VFX and Animation pipeline with support."
 authors = ["OpenPype Team <info@openpype.io>"]
 license = "MIT License"
@@ -0,0 +1,64 @@
import logging

from tests.lib.assert_classes import DBAssert
from tests.integration.hosts.aftereffects.lib import AfterEffectsTestClass

log = logging.getLogger("test_publish_in_aftereffects")


class TestPublishInAfterEffects(AfterEffectsTestClass):
    """Basic test case for publishing in AfterEffects

    Should publish 5 frames
    """
    PERSIST = True

    TEST_FILES = [
        ("12aSDRjthn4X3yw83gz_0FZJcRRiVDEYT",
         "test_aftereffects_publish_multiframe.zip",
         "")
    ]

    APP = "aftereffects"
    APP_VARIANT = ""

    APP_NAME = "{}/{}".format(APP, APP_VARIANT)

    TIMEOUT = 120  # publish timeout

    def test_db_asserts(self, dbcon, publish_finished):
        """Host and input data dependent expected results in DB."""
        print("test_db_asserts")
        failures = []

        failures.append(DBAssert.count_of_types(dbcon, "version", 2))

        failures.append(
            DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1}))

        failures.append(
            DBAssert.count_of_types(dbcon, "subset", 1,
                                    name="imageMainBackgroundcopy"))

        failures.append(
            DBAssert.count_of_types(dbcon, "subset", 1,
                                    name="workfileTest_task"))

        failures.append(
            DBAssert.count_of_types(dbcon, "subset", 1,
                                    name="reviewTesttask"))

        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 4))

        additional_args = {"context.subset": "renderTestTaskDefault",
                           "context.ext": "png"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
                                    additional_args=additional_args))

        assert not any(failures)


if __name__ == "__main__":
    test_case = TestPublishInAfterEffects()
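
The test above collects each `DBAssert` result into a `failures` list and asserts only once at the end, so a single run reports every mismatched count instead of stopping at the first. A minimal self-contained sketch of that pattern — the `count_mismatch` helper and the sample documents are hypothetical stand-ins, not the repository's `DBAssert` API:

```python
def count_mismatch(docs, expected_count, doc_type, **filters):
    """Return None when the count matches, else a failure message.

    Hypothetical stand-in for DBAssert.count_of_types."""
    found = [
        d for d in docs
        if d.get("type") == doc_type
        and all(d.get(k) == v for k, v in filters.items())
    ]
    if len(found) == expected_count:
        return None
    return "expected {} '{}' docs, found {}".format(
        expected_count, doc_type, len(found))


# Sample documents standing in for published DB content.
docs = [
    {"type": "version", "name": 1},
    {"type": "version", "name": 1},
    {"type": "subset", "name": "imageMainBackgroundcopy"},
]

# Collect all results first, assert once at the end.
failures = [
    count_mismatch(docs, 2, "version"),
    count_mismatch(docs, 1, "subset", name="imageMainBackgroundcopy"),
    count_mismatch(docs, 0, "representation"),
]
failures = [f for f in failures if f is not None]
assert not failures, "\n".join(failures)
```

The payoff is in debugging: when several counts drift at once, the joined message lists all of them in one test run.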
@@ -314,30 +314,22 @@ class PublishTest(ModuleUnitTest):

         Compares only presence, not size nor content!
         """
-        published_dir_base = download_test_data
-        published_dir = os.path.join(output_folder_url,
-                                     self.PROJECT,
-                                     self.ASSET,
-                                     self.TASK,
-                                     "**")
-        expected_dir_base = os.path.join(published_dir_base,
+        published_dir_base = output_folder_url
+        expected_dir_base = os.path.join(download_test_data,
                                          "expected")
-        expected_dir = os.path.join(expected_dir_base,
-                                    self.PROJECT,
-                                    self.ASSET,
-                                    self.TASK,
-                                    "**")
-        print("Comparing published:'{}' : expected:'{}'".format(published_dir,
-                                                                expected_dir))
-        published = set(f.replace(published_dir_base, '') for f in
-                        glob.glob(published_dir, recursive=True) if
-                        f != published_dir_base and os.path.exists(f))
-        expected = set(f.replace(expected_dir_base, '') for f in
-                       glob.glob(expected_dir, recursive=True) if
-                       f != expected_dir_base and os.path.exists(f))
-
-        not_matched = expected.difference(published)
-        assert not not_matched, "Missing {} files".format(not_matched)
+        print("Comparing published:'{}' : expected:'{}'".format(
+            published_dir_base, expected_dir_base))
+        published = set(f.replace(published_dir_base, '') for f in
+                        glob.glob(published_dir_base + "\\**", recursive=True)
+                        if f != published_dir_base and os.path.exists(f))
+        expected = set(f.replace(expected_dir_base, '') for f in
+                       glob.glob(expected_dir_base + "\\**", recursive=True)
+                       if f != expected_dir_base and os.path.exists(f))
+
+        not_matched = expected.symmetric_difference(published)
+        assert not not_matched, "Missing {} files".format(
+            "\n".join(sorted(not_matched)))


 class HostFixtures(PublishTest):
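
The rewritten comparison can be exercised in isolation. The sketch below mirrors the new logic — glob the published base directory recursively and take the symmetric difference of relative paths, which flags files missing on either side (the older `difference()` only caught files missing from the published set). It uses `os.path.join` instead of the hard-coded `"\\**"` separator so it also runs outside Windows; `compare_trees` is an illustrative helper, not the repository's API:

```python
import glob
import os
import tempfile


def compare_trees(published_base, expected_base):
    """Return relative paths present in only one of the two trees.

    Compares presence only, not file size or content."""
    def relative_set(base):
        pattern = os.path.join(base, "**")
        return set(
            p.replace(base, "") for p in glob.glob(pattern, recursive=True)
            if p != base and os.path.exists(p)
        )

    # symmetric_difference reports mismatches in both directions.
    return relative_set(expected_base).symmetric_difference(
        relative_set(published_base))


# Quick demonstration with two temporary trees that differ by one file.
published = tempfile.mkdtemp()
expected = tempfile.mkdtemp()
for base in (published, expected):
    open(os.path.join(base, "render.exr"), "w").close()
open(os.path.join(expected, "review.mp4"), "w").close()

mismatch = compare_trees(published, expected)
assert mismatch, "trees unexpectedly identical"
```

Here `mismatch` contains only the relative path of `review.mp4`, the file present in the expected tree but absent from the published one.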