Petr Kalis 2021-08-19 18:22:49 +02:00
commit 34fef4e001
128 changed files with 5410 additions and 1824 deletions


@ -1,11 +1,33 @@
# Changelog
## [3.3.0-nightly.7](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.3.1-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.2.0...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.3.0...HEAD)
**🐛 Bug fixes**
- TVPaint: Fixed rendered frame indexes [\#1946](https://github.com/pypeclub/OpenPype/pull/1946)
- Maya: Menu actions fix [\#1945](https://github.com/pypeclub/OpenPype/pull/1945)
- standalone: editorial shared object problem [\#1941](https://github.com/pypeclub/OpenPype/pull/1941)
- Bugfix nuke deadline app name [\#1928](https://github.com/pypeclub/OpenPype/pull/1928)
## [3.3.0](https://github.com/pypeclub/OpenPype/tree/3.3.0) (2021-08-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.3.0-nightly.11...3.3.0)
**🚀 Enhancements**
- Python console interpreter [\#1940](https://github.com/pypeclub/OpenPype/pull/1940)
- Global: Updated logos and Default settings [\#1927](https://github.com/pypeclub/OpenPype/pull/1927)
- Check for missing ✨ Python when using `pyenv` [\#1925](https://github.com/pypeclub/OpenPype/pull/1925)
- Maya: Scene patching 🩹on submission to Deadline [\#1923](https://github.com/pypeclub/OpenPype/pull/1923)
- Settings: Default values for enum [\#1920](https://github.com/pypeclub/OpenPype/pull/1920)
- Settings UI: Modifiable dict view enhance [\#1919](https://github.com/pypeclub/OpenPype/pull/1919)
- submodules: avalon-core update [\#1911](https://github.com/pypeclub/OpenPype/pull/1911)
- Feature AE local render [\#1901](https://github.com/pypeclub/OpenPype/pull/1901)
- Ftrack: Where I run action enhancement [\#1900](https://github.com/pypeclub/OpenPype/pull/1900)
- Ftrack: Private project server actions [\#1899](https://github.com/pypeclub/OpenPype/pull/1899)
- Support nested studio plugins paths. [\#1898](https://github.com/pypeclub/OpenPype/pull/1898)
- Settings: global validators with options [\#1892](https://github.com/pypeclub/OpenPype/pull/1892)
- Settings: Conditional dict enum positioning [\#1891](https://github.com/pypeclub/OpenPype/pull/1891)
- Expose stop timer through rest api. [\#1886](https://github.com/pypeclub/OpenPype/pull/1886)
@ -15,41 +37,39 @@
- Filter hosts in settings host-enum [\#1868](https://github.com/pypeclub/OpenPype/pull/1868)
- Local actions with process identifier [\#1867](https://github.com/pypeclub/OpenPype/pull/1867)
- Workfile tool start at host launch support [\#1865](https://github.com/pypeclub/OpenPype/pull/1865)
- Anatomy schema validation [\#1864](https://github.com/pypeclub/OpenPype/pull/1864)
- Ftrack prepare project structure [\#1861](https://github.com/pypeclub/OpenPype/pull/1861)
- Independent general environments [\#1853](https://github.com/pypeclub/OpenPype/pull/1853)
- TVPaint Start Frame [\#1844](https://github.com/pypeclub/OpenPype/pull/1844)
- Ftrack push attributes action adds traceback to job [\#1843](https://github.com/pypeclub/OpenPype/pull/1843)
- Prepare project action enhance [\#1838](https://github.com/pypeclub/OpenPype/pull/1838)
- Standalone Publish of textures family [\#1834](https://github.com/pypeclub/OpenPype/pull/1834)
- nuke: settings create missing default subsets [\#1829](https://github.com/pypeclub/OpenPype/pull/1829)
- Update poetry lock [\#1823](https://github.com/pypeclub/OpenPype/pull/1823)
- Settings: settings for plugins [\#1819](https://github.com/pypeclub/OpenPype/pull/1819)
- Maya: Deadline custom settings [\#1797](https://github.com/pypeclub/OpenPype/pull/1797)
- Maya: Shader name validation [\#1762](https://github.com/pypeclub/OpenPype/pull/1762)
- Maya: support for configurable `dirmap` 🗺️ [\#1859](https://github.com/pypeclub/OpenPype/pull/1859)
- Settings list can use template or schema as object type [\#1815](https://github.com/pypeclub/OpenPype/pull/1815)
**🐛 Bug fixes**
- Fix - ftrack family was added incorrectly in some cases [\#1935](https://github.com/pypeclub/OpenPype/pull/1935)
- Fix - Deadline publish on Linux started Tray instead of headless publishing [\#1930](https://github.com/pypeclub/OpenPype/pull/1930)
- Maya: Validate Model Name - repair accident deletion in settings defaults [\#1929](https://github.com/pypeclub/OpenPype/pull/1929)
- Nuke: submit to farm failed due `ftrack` family remove [\#1926](https://github.com/pypeclub/OpenPype/pull/1926)
- Fix - validate takes repre\["files"\] as list all the time [\#1922](https://github.com/pypeclub/OpenPype/pull/1922)
- standalone: validator asset parents [\#1917](https://github.com/pypeclub/OpenPype/pull/1917)
- Nuke: update video file crassing [\#1916](https://github.com/pypeclub/OpenPype/pull/1916)
- Fix - texture validators for workfiles triggers only for textures workfiles [\#1914](https://github.com/pypeclub/OpenPype/pull/1914)
- Fix - validators for textures workfiles trigger only for textures workfiles [\#1913](https://github.com/pypeclub/OpenPype/pull/1913)
- Settings UI: List order works as expected [\#1906](https://github.com/pypeclub/OpenPype/pull/1906)
- Hiero: loaded clip was not set colorspace from version data [\#1904](https://github.com/pypeclub/OpenPype/pull/1904)
- Pyblish UI: Fix collecting stage processing [\#1903](https://github.com/pypeclub/OpenPype/pull/1903)
- Burnins: Use input's bitrate in h624 [\#1902](https://github.com/pypeclub/OpenPype/pull/1902)
- Bug: fixed python detection [\#1893](https://github.com/pypeclub/OpenPype/pull/1893)
- global: integrate name missing default template [\#1890](https://github.com/pypeclub/OpenPype/pull/1890)
- publisher: editorial plugins fixes [\#1889](https://github.com/pypeclub/OpenPype/pull/1889)
- Normalize path returned from Workfiles. [\#1880](https://github.com/pypeclub/OpenPype/pull/1880)
- Workfiles tool event arguments fix [\#1862](https://github.com/pypeclub/OpenPype/pull/1862)
- imageio: fix grouping [\#1856](https://github.com/pypeclub/OpenPype/pull/1856)
- publisher: missing version in subset prop [\#1849](https://github.com/pypeclub/OpenPype/pull/1849)
- Ftrack type error fix in sync to avalon event handler [\#1845](https://github.com/pypeclub/OpenPype/pull/1845)
- Nuke: updating effects subset fail [\#1841](https://github.com/pypeclub/OpenPype/pull/1841)
- nuke: write render node skipped with crop [\#1836](https://github.com/pypeclub/OpenPype/pull/1836)
- Project folder structure overrides [\#1813](https://github.com/pypeclub/OpenPype/pull/1813)
- Maya: fix yeti settings path in extractor [\#1809](https://github.com/pypeclub/OpenPype/pull/1809)
- Failsafe for cross project containers. [\#1806](https://github.com/pypeclub/OpenPype/pull/1806)
- Houdini colector formatting keys fix [\#1802](https://github.com/pypeclub/OpenPype/pull/1802)
- Maya: don't add reference members as connections to the container set 📦 [\#1855](https://github.com/pypeclub/OpenPype/pull/1855)
- Settings error dialog on show [\#1798](https://github.com/pypeclub/OpenPype/pull/1798)
**Merged pull requests:**
- Fix - make AE workfile publish to Ftrack configurable [\#1937](https://github.com/pypeclub/OpenPype/pull/1937)
- Settings UI: Breadcrumbs in settings [\#1932](https://github.com/pypeclub/OpenPype/pull/1932)
- Add support for multiple Deadline ☠️➖ servers [\#1905](https://github.com/pypeclub/OpenPype/pull/1905)
- Maya: add support for `RedshiftNormalMap` node, fix `tx` linear space 🚀 [\#1863](https://github.com/pypeclub/OpenPype/pull/1863)
- Add support for pyenv-win on windows [\#1822](https://github.com/pypeclub/OpenPype/pull/1822)
- PS, AE - send actual context when another webserver is running [\#1811](https://github.com/pypeclub/OpenPype/pull/1811)
- Maya: expected files -\> render products ⚙️ overhaul [\#1812](https://github.com/pypeclub/OpenPype/pull/1812)
## [3.2.0](https://github.com/pypeclub/OpenPype/tree/3.2.0) (2021-07-13)
@ -60,51 +80,20 @@
- Nuke: ftrack family plugin settings preset [\#1805](https://github.com/pypeclub/OpenPype/pull/1805)
- Standalone publisher last project [\#1799](https://github.com/pypeclub/OpenPype/pull/1799)
- Ftrack Multiple notes as server action [\#1795](https://github.com/pypeclub/OpenPype/pull/1795)
- Settings conditional dict [\#1777](https://github.com/pypeclub/OpenPype/pull/1777)
- Settings application use python 2 only where needed [\#1776](https://github.com/pypeclub/OpenPype/pull/1776)
- Settings UI copy/paste [\#1769](https://github.com/pypeclub/OpenPype/pull/1769)
- Workfile tool widths [\#1766](https://github.com/pypeclub/OpenPype/pull/1766)
- Push hierarchical attributes care about task parent changes [\#1763](https://github.com/pypeclub/OpenPype/pull/1763)
- Application executables with environment variables [\#1757](https://github.com/pypeclub/OpenPype/pull/1757)
- Deadline: Nuke submission additional attributes [\#1756](https://github.com/pypeclub/OpenPype/pull/1756)
- Settings schema without prefill [\#1753](https://github.com/pypeclub/OpenPype/pull/1753)
- Settings Hosts enum [\#1739](https://github.com/pypeclub/OpenPype/pull/1739)
**🐛 Bug fixes**
- nuke: fixing wrong name of family folder when `used existing frames` [\#1803](https://github.com/pypeclub/OpenPype/pull/1803)
- Collect ftrack family bugs [\#1801](https://github.com/pypeclub/OpenPype/pull/1801)
- Invitee email can be None which break the Ftrack commit. [\#1788](https://github.com/pypeclub/OpenPype/pull/1788)
- Fix: staging and `--use-version` option [\#1786](https://github.com/pypeclub/OpenPype/pull/1786)
- FFprobe streams order [\#1775](https://github.com/pypeclub/OpenPype/pull/1775)
- Fix - single file files are str only, cast it to list to count properly [\#1772](https://github.com/pypeclub/OpenPype/pull/1772)
- Environments in app executable for MacOS [\#1768](https://github.com/pypeclub/OpenPype/pull/1768)
- Project specific environments [\#1767](https://github.com/pypeclub/OpenPype/pull/1767)
- Settings UI with refresh button [\#1764](https://github.com/pypeclub/OpenPype/pull/1764)
- Standalone publisher thumbnail extractor fix [\#1761](https://github.com/pypeclub/OpenPype/pull/1761)
- Anatomy others templates don't cause crash [\#1758](https://github.com/pypeclub/OpenPype/pull/1758)
- Backend acre module commit update [\#1745](https://github.com/pypeclub/OpenPype/pull/1745)
- hiero: precollect instances failing when audio selected [\#1743](https://github.com/pypeclub/OpenPype/pull/1743)
- Hiero: creator instance error [\#1742](https://github.com/pypeclub/OpenPype/pull/1742)
- Nuke: fixing render creator for no selection format failing [\#1741](https://github.com/pypeclub/OpenPype/pull/1741)
- StandalonePublisher: failing collector for editorial [\#1738](https://github.com/pypeclub/OpenPype/pull/1738)
**Merged pull requests:**
- Build: don't add Poetry to `PATH` [\#1808](https://github.com/pypeclub/OpenPype/pull/1808)
- Bump prismjs from 1.23.0 to 1.24.0 in /website [\#1773](https://github.com/pypeclub/OpenPype/pull/1773)
- Bc/fix/docs [\#1771](https://github.com/pypeclub/OpenPype/pull/1771)
- TVPaint ftrack family [\#1755](https://github.com/pypeclub/OpenPype/pull/1755)
## [2.18.4](https://github.com/pypeclub/OpenPype/tree/2.18.4) (2021-06-24)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/2.18.3...2.18.4)
**Merged pull requests:**
- celaction fixes [\#1754](https://github.com/pypeclub/OpenPype/pull/1754)
- celaciton: audio subset changed data structure [\#1750](https://github.com/pypeclub/OpenPype/pull/1750)
## [2.18.3](https://github.com/pypeclub/OpenPype/tree/2.18.3) (2021-06-23)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.2.0-nightly.2...2.18.3)


@ -29,7 +29,7 @@ The main things you will need to run and build OpenPype are:
- PowerShell 5.0+ (Windows)
- Bash (Linux)
- [**Python 3.7.8**](#python) or higher
- [**MongoDB**](#database)
- [**MongoDB**](#database) (needed only for local development)
It can be built and run on all common platforms. We develop and test on the following:
@ -126,6 +126,16 @@ pyenv local 3.7.9
### Linux
#### Docker
The easiest way to build OpenPype on Linux is to use [Docker](https://www.docker.com/). Just run:
```sh
sudo ./tools/docker_build.sh
```
If all goes well, you'll find the built OpenPype in the `./build/` folder.
#### Manual build
You will need [Python 3.7](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads). You'll also need [curl](https://curl.se) on systems that don't have it preinstalled.
To build Python-related components, you need the Python header files installed (`python3-dev` on Ubuntu, for example).
@ -133,7 +143,6 @@ To build Python related stuff, you need Python header files installed (`python3-
You'll also need other tools to build some OpenPype dependencies, such as [CMake](https://cmake.org/). Python 3 should be part of all modern distributions; you can use your package manager to install **git** and **cmake**.
<details>
<summary>Details for Ubuntu</summary>
Install git, cmake and curl
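On Ubuntu, installing these prerequisites typically looks like this (package names assumed from the standard Ubuntu repositories):

```sh
# install the build prerequisites mentioned above
sudo apt-get update
sudo apt-get install -y git cmake curl python3-dev build-essential
```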


@ -0,0 +1,17 @@
from openpype.hosts.aftereffects.plugins.create import create_render
import logging
log = logging.getLogger(__name__)
class CreateLocalRender(create_render.CreateRender):
""" Creator to render locally.
Created only after default render on farm. So family 'render.local' is
used for backward compatibility.
"""
name = "renderDefault"
label = "Render Locally"
family = "renderLocal"


@ -1,10 +1,14 @@
from openpype.lib import abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
import pyblish.api
import attr
import os
import re
import attr
import tempfile
from avalon import aftereffects
import pyblish.api
from openpype.settings import get_project_settings
from openpype.lib import abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
@attr.s
@ -13,6 +17,8 @@ class AERenderInstance(RenderInstance):
comp_name = attr.ib(default=None)
comp_id = attr.ib(default=None)
fps = attr.ib(default=None)
projectEntity = attr.ib(default=None)
stagingDir = attr.ib(default=None)
class CollectAERender(abstract_collect_render.AbstractCollectRender):
@ -21,6 +27,11 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
label = "Collect After Effects Render Layers"
hosts = ["aftereffects"]
# internal
family_remapping = {
"render": ("render.farm", "farm"), # (family, label)
"renderLocal": ("render", "local")
}
padding_width = 6
rendered_extension = 'png'
@ -62,14 +73,16 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
fps = work_area_info.frameRate
# TODO add resolution when supported by extension
if inst["family"] == "render" and inst["active"]:
if inst["family"] in self.family_remapping.keys() \
and inst["active"]:
remapped_family = self.family_remapping[inst["family"]]
instance = AERenderInstance(
family="render.farm", # other way integrate would catch it
families=["render.farm"],
family=remapped_family[0],
families=[remapped_family[0]],
version=version,
time="",
source=current_file,
label="{} - farm".format(inst["subset"]),
label="{} - {}".format(inst["subset"], remapped_family[1]),
subset=inst["subset"],
asset=context.data["assetEntity"]["name"],
attachTo=False,
@ -105,6 +118,30 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
instance.outputDir = self._get_output_dir(instance)
settings = get_project_settings(os.getenv("AVALON_PROJECT"))
reviewable_subset_filter = \
(settings["deadline"]
["publish"]
["ProcessSubmittedJobOnFarm"]
["aov_filter"])
if inst["family"] == "renderLocal":
# for local renders
instance.anatomyData["version"] = instance.version
instance.anatomyData["subset"] = instance.subset
instance.stagingDir = tempfile.mkdtemp()
instance.projectEntity = project_entity
if self.hosts[0] in reviewable_subset_filter.keys():
for aov_pattern in \
reviewable_subset_filter[self.hosts[0]]:
if re.match(aov_pattern, instance.subset):
instance.families.append("review")
instance.review = True
break
self.log.info("New instance:: {}".format(instance))
instances.append(instance)
return instances
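The family remapping introduced in this collector can be sketched as plain Python; the instance dict below is a minimal hypothetical stand-in for the Pyblish instance data:

```python
# Sketch of the family remapping logic in CollectAERender (values from the diff)
family_remapping = {
    "render": ("render.farm", "farm"),       # (new family, label suffix)
    "renderLocal": ("render", "local"),
}


def remap_instance(inst):
    """Return (family, label) for an active render instance, else None."""
    if inst["family"] not in family_remapping or not inst["active"]:
        return None
    family, suffix = family_remapping[inst["family"]]
    return family, "{} - {}".format(inst["subset"], suffix)


print(remap_instance({"family": "renderLocal", "active": True,
                      "subset": "renderMain"}))  # ('render', 'renderMain - local')
```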


@ -47,7 +47,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
"subset": subset,
"label": scene_file,
"family": family,
"families": [family, "ftrack"],
"families": [family],
"representations": list()
})


@ -0,0 +1,82 @@
import os
import six
import sys
import openpype.api
from avalon import aftereffects
class ExtractLocalRender(openpype.api.Extractor):
"""Render RenderQueue locally."""
order = openpype.api.Extractor.order - 0.47
label = "Extract Local Render"
hosts = ["aftereffects"]
families = ["render"]
def process(self, instance):
stub = aftereffects.stub()
staging_dir = instance.data["stagingDir"]
self.log.info("staging_dir::{}".format(staging_dir))
stub.render(staging_dir)
# pull file name from Render Queue Output module
render_q = stub.get_render_info()
if not render_q:
raise ValueError("No file extension set in Render Queue")
_, ext = os.path.splitext(os.path.basename(render_q.file_name))
ext = ext[1:]
first_file_path = None
files = []
self.log.info("files::{}".format(os.listdir(staging_dir)))
for file_name in os.listdir(staging_dir):
files.append(file_name)
if first_file_path is None:
first_file_path = os.path.join(staging_dir,
file_name)
resulting_files = files
if len(files) == 1:
resulting_files = files[0]
repre_data = {
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"name": ext,
"ext": ext,
"files": resulting_files,
"stagingDir": staging_dir
}
if instance.data["review"]:
repre_data["tags"] = ["review"]
instance.data["representations"] = [repre_data]
ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
# Generate thumbnail.
thumbnail_path = os.path.join(staging_dir,
"thumbnail.jpg")
args = [
ffmpeg_path, "-y",
"-i", first_file_path,
"-vf", "scale=300:-1",
"-vframes", "1",
thumbnail_path
]
self.log.debug("Thumbnail args:: {}".format(args))
try:
output = openpype.lib.run_subprocess(args)
except TypeError:
self.log.warning("Error in creating thumbnail")
six.reraise(*sys.exc_info())
instance.data["representations"].append({
"name": "thumbnail",
"ext": "jpg",
"files": os.path.basename(thumbnail_path),
"stagingDir": staging_dir,
"tags": ["thumbnail"]
})
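The extractor stores a lone rendered file as a plain string and multiple files as a list. A minimal standalone sketch of that rule, using a hypothetical staging directory (and sorting for determinism, which the plugin itself does not do):

```python
import os
import tempfile


def build_files_value(staging_dir):
    """Return representation 'files': a str for one file, a list otherwise."""
    files = sorted(os.listdir(staging_dir))
    if len(files) == 1:
        return files[0]
    return files


# usage with a hypothetical staging directory
staging = tempfile.mkdtemp()
for name in ("comp.0001.png", "comp.0002.png"):
    open(os.path.join(staging, name), "w").close()

print(build_files_value(staging))  # ['comp.0001.png', 'comp.0002.png']
```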


@ -53,7 +53,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Scene Settings"
families = ["render.farm"]
families = ["render.farm", "render"]
hosts = ["aftereffects"]
optional = True


@ -54,6 +54,10 @@ class LoadClip(phiero.SequenceLoader):
object_name = self.clip_name_template.format(
**context["representation"]["context"])
# set colorspace
if colorspace:
track_item.source().setSourceMediaColourTransform(colorspace)
# add additional metadata from the version to imprint Avalon knob
add_keys = [
"frameStart", "frameEnd", "source", "author",
@ -109,9 +113,14 @@ class LoadClip(phiero.SequenceLoader):
colorspace = version_data.get("colorspace", None)
object_name = "{}_{}".format(name, namespace)
file = api.get_representation_path(representation).replace("\\", "/")
clip = track_item.source()
# reconnect media to new path
track_item.source().reconnectMedia(file)
clip.reconnectMedia(file)
# set colorspace
if colorspace:
clip.setSourceMediaColourTransform(colorspace)
# add additional metadata from the version to imprint Avalon knob
add_keys = [
@ -160,6 +169,7 @@ class LoadClip(phiero.SequenceLoader):
@classmethod
def set_item_color(cls, track_item, version):
clip = track_item.source()
# define version name
version_name = version.get("name", None)
# get all versions in list
@ -172,6 +182,6 @@ class LoadClip(phiero.SequenceLoader):
# set clip colour
if version_name == max_version:
track_item.source().binItem().setColor(cls.clip_color_last)
clip.binItem().setColor(cls.clip_color_last)
else:
track_item.source().binItem().setColor(cls.clip_color)
clip.binItem().setColor(cls.clip_color)


@ -120,6 +120,13 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
# create instance
instance = context.create_instance(**data)
# add colorspace data
instance.data.update({
"versionData": {
"colorspace": track_item.sourceMediaColourTransform(),
}
})
# create shot instance for shot attributes create/update
self.create_shot_instance(context, **data)
@ -133,13 +140,6 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
# create audio subset instance
self.create_audio_instance(context, **data)
# add colorspace data
instance.data.update({
"versionData": {
"colorspace": track_item.sourceMediaColourTransform(),
}
})
# add audioReview attribute to plate instance data
# if reviewTrack is on
if tag_data.get("reviewTrack") is not None:


@ -26,6 +26,12 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
def install():
from openpype.settings import get_project_settings
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
process_dirmap(project_settings)
pyblish.register_plugin_path(PUBLISH_PATH)
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
@ -53,6 +59,40 @@ def install():
avalon.data["familiesStateToggled"] = ["imagesequence"]
def process_dirmap(project_settings):
# type: (dict) -> None
"""Go through all paths in Settings and set them using `dirmap`.
Args:
project_settings (dict): Settings for current project.
"""
if not project_settings["maya"].get("maya-dirmap"):
return
mapping = project_settings["maya"]["maya-dirmap"]["paths"] or {}
mapping_enabled = project_settings["maya"]["maya-dirmap"]["enabled"]
if not mapping or not mapping_enabled:
return
if mapping.get("source-path") and mapping_enabled is True:
log.info("Processing directory mapping ...")
cmds.dirmap(en=True)
for k, sp in enumerate(mapping["source-path"]):
try:
print("{} -> {}".format(sp, mapping["destination-path"][k]))
cmds.dirmap(m=(sp, mapping["destination-path"][k]))
cmds.dirmap(m=(mapping["destination-path"][k], sp))
except IndexError:
# missing corresponding destination path
log.error(("invalid dirmap mapping, missing corresponding"
" destination directory."))
break
except RuntimeError:
log.error("invalid path {} -> {}, mapping not registered".format(
sp, mapping["destination-path"][k]
))
continue
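The source/destination pairing in `process_dirmap` can be exercised outside Maya; the settings fragment below is a hypothetical example of the `maya-dirmap` structure it expects:

```python
# Hypothetical settings fragment mirroring the "maya-dirmap" structure
project_settings = {
    "maya": {
        "maya-dirmap": {
            "enabled": True,
            "paths": {
                "source-path": ["P:/projects", "//server/share"],
                "destination-path": ["/mnt/projects"],  # second destination missing
            },
        }
    }
}


def build_dirmap_pairs(project_settings):
    """Pair source/destination paths the way process_dirmap iterates them."""
    dirmap = project_settings["maya"].get("maya-dirmap") or {}
    mapping = dirmap.get("paths") or {}
    if not dirmap.get("enabled") or not mapping.get("source-path"):
        return []
    pairs = []
    for k, sp in enumerate(mapping["source-path"]):
        try:
            pairs.append((sp, mapping["destination-path"][k]))
        except IndexError:
            # missing corresponding destination path -- stop, like the plugin
            break
    return pairs


print(build_dirmap_pairs(project_settings))  # [('P:/projects', '/mnt/projects')]
```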
def uninstall():
pyblish.deregister_plugin_path(PUBLISH_PATH)
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)


@ -1,945 +0,0 @@
# -*- coding: utf-8 -*-
"""Module handling expected render output from Maya.
This module is used in :mod:`collect_render` and :mod:`collect_vray_scene`.
Note:
To implement a new renderer, create a new class inheriting from
:class:`AExpectedFiles` and add it to :func:`ExpectedFiles.get()`.
Attributes:
R_SINGLE_FRAME (:class:`re.Pattern`): Find single frame number.
R_FRAME_RANGE (:class:`re.Pattern`): Find frame range.
R_FRAME_NUMBER (:class:`re.Pattern`): Find frame number in string.
R_LAYER_TOKEN (:class:`re.Pattern`): Find layer token in image prefixes.
R_AOV_TOKEN (:class:`re.Pattern`): Find AOV token in image prefixes.
R_SUBSTITUTE_AOV_TOKEN (:class:`re.Pattern`): Find and substitute AOV token
in image prefixes.
R_REMOVE_AOV_TOKEN (:class:`re.Pattern`): Find and remove AOV token in
image prefixes.
R_CLEAN_FRAME_TOKEN (:class:`re.Pattern`): Find and remove unfilled
Renderman frame token in image prefix.
R_CLEAN_EXT_TOKEN (:class:`re.Pattern`): Find and remove unfilled Renderman
extension token in image prefix.
R_SUBSTITUTE_LAYER_TOKEN (:class:`re.Pattern`): Find and substitute render
layer token in image prefixes.
R_SUBSTITUTE_SCENE_TOKEN (:class:`re.Pattern`): Find and substitute scene
token in image prefixes.
R_SUBSTITUTE_CAMERA_TOKEN (:class:`re.Pattern`): Find and substitute camera
token in image prefixes.
RENDERER_NAMES (dict): Renderer names mapping between reported name and
*human readable* name.
IMAGE_PREFIXES (dict): Mapping between renderers and their respective
image prefix attribute names.
Todo:
Determine `multipart` from render instance.
"""
import types
import re
import os
from abc import ABCMeta, abstractmethod
import six
import attr
import openpype.hosts.maya.api.lib as lib
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
R_SINGLE_FRAME = re.compile(r"^(-?)\d+$")
R_FRAME_RANGE = re.compile(r"^(?P<sf>(-?)\d+)-(?P<ef>(-?)\d+)$")
R_FRAME_NUMBER = re.compile(r".+\.(?P<frame>[0-9]+)\..+")
R_LAYER_TOKEN = re.compile(
r".*((?:%l)|(?:<layer>)|(?:<renderlayer>)).*", re.IGNORECASE
)
R_AOV_TOKEN = re.compile(r".*%a.*|.*<aov>.*|.*<renderpass>.*", re.IGNORECASE)
R_SUBSTITUTE_AOV_TOKEN = re.compile(r"%a|<aov>|<renderpass>", re.IGNORECASE)
R_REMOVE_AOV_TOKEN = re.compile(
r"_%a|\.%a|_<aov>|\.<aov>|_<renderpass>|\.<renderpass>", re.IGNORECASE)
# to remove unused renderman tokens
R_CLEAN_FRAME_TOKEN = re.compile(r"\.?<f\d>\.?", re.IGNORECASE)
R_CLEAN_EXT_TOKEN = re.compile(r"\.?<ext>\.?", re.IGNORECASE)
R_SUBSTITUTE_LAYER_TOKEN = re.compile(
r"%l|<layer>|<renderlayer>", re.IGNORECASE
)
R_SUBSTITUTE_CAMERA_TOKEN = re.compile(r"%c|<camera>", re.IGNORECASE)
R_SUBSTITUTE_SCENE_TOKEN = re.compile(r"%s|<scene>", re.IGNORECASE)
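These substitution patterns can be exercised on their own; the prefix template and token values below are hypothetical, showing how a Maya image-prefix template is resolved to a concrete path:

```python
import re

# Patterns reproduced from the module above
R_SUBSTITUTE_LAYER_TOKEN = re.compile(r"%l|<layer>|<renderlayer>", re.IGNORECASE)
R_SUBSTITUTE_CAMERA_TOKEN = re.compile(r"%c|<camera>", re.IGNORECASE)
R_SUBSTITUTE_SCENE_TOKEN = re.compile(r"%s|<scene>", re.IGNORECASE)

prefix = "<Scene>/<RenderLayer>/<RenderLayer>_<Camera>"
for regex, value in (
    (R_SUBSTITUTE_SCENE_TOKEN, "shot010_v001"),
    (R_SUBSTITUTE_LAYER_TOKEN, "beauty"),
    (R_SUBSTITUTE_CAMERA_TOKEN, "renderCam"),
):
    prefix = regex.sub(value, prefix)

print(prefix)  # shot010_v001/beauty/beauty_renderCam
```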
RENDERER_NAMES = {
"mentalray": "MentalRay",
"vray": "V-Ray",
"arnold": "Arnold",
"renderman": "Renderman",
"redshift": "Redshift",
}
# not sure about the renderman image prefix
IMAGE_PREFIXES = {
"mentalray": "defaultRenderGlobals.imageFilePrefix",
"vray": "vraySettings.fileNamePrefix",
"arnold": "defaultRenderGlobals.imageFilePrefix",
"renderman": "rmanGlobals.imageFileFormat",
"redshift": "defaultRenderGlobals.imageFilePrefix",
}
@attr.s
class LayerMetadata(object):
"""Data class for Render Layer metadata."""
frameStart = attr.ib()
frameEnd = attr.ib()
cameras = attr.ib()
sceneName = attr.ib()
layerName = attr.ib()
renderer = attr.ib()
defaultExt = attr.ib()
filePrefix = attr.ib()
enabledAOVs = attr.ib()
frameStep = attr.ib(default=1)
padding = attr.ib(default=4)
class ExpectedFiles:
"""Class grouping functionality for all supported renderers.
Attributes:
multipart (bool): Flag if multipart exrs are used.
"""
multipart = False
def __init__(self, render_instance):
"""Constructor."""
self._render_instance = render_instance
def get(self, renderer, layer):
"""Get expected files for given renderer and render layer.
Args:
renderer (str): Name of renderer
layer (str): Name of render layer
Returns:
dict: Expected rendered files by AOV
Raises:
:exc:`UnsupportedRendererException`: If the requested renderer
is not supported. It needs to be implemented by extending
:class:`AExpectedFiles` and added to this method's ``if``
statement.
"""
renderSetup.instance().switchToLayerUsingLegacyName(layer)
if renderer.lower() == "arnold":
return self._get_files(ExpectedFilesArnold(layer,
self._render_instance))
if renderer.lower() == "vray":
return self._get_files(ExpectedFilesVray(
layer, self._render_instance))
if renderer.lower() == "redshift":
return self._get_files(ExpectedFilesRedshift(
layer, self._render_instance))
if renderer.lower() == "mentalray":
return self._get_files(ExpectedFilesMentalray(
layer, self._render_instance))
if renderer.lower() == "renderman":
return self._get_files(ExpectedFilesRenderman(
layer, self._render_instance))
raise UnsupportedRendererException(
"unsupported {}".format(renderer)
)
def _get_files(self, renderer):
# type: (AExpectedFiles) -> list
files = renderer.get_files()
self.multipart = renderer.multipart
return files
@six.add_metaclass(ABCMeta)
class AExpectedFiles:
"""Abstract class with common code for all renderers.
Attributes:
renderer (str): name of renderer.
layer (str): name of render layer.
multipart (bool): flag for multipart exrs.
"""
renderer = None
layer = None
multipart = False
def __init__(self, layer, render_instance):
"""Constructor."""
self.layer = layer
self.render_instance = render_instance
@abstractmethod
def get_aovs(self):
"""To be implemented by renderer class."""
@staticmethod
def sanitize_camera_name(camera):
"""Sanitize camera name.
Remove Maya illegal characters from camera name.
Args:
camera (str): Maya camera name.
Returns:
(str): sanitized camera name
Example:
>>> AExpectedFiles.sanitize_camera_name('test:camera_01')
test_camera_01
"""
return re.sub('[^0-9a-zA-Z_]+', '_', camera)
def get_renderer_prefix(self):
"""Return prefix for specific renderer.
This is for most renderers the same and can be overridden if needed.
Returns:
str: String with image prefix containing tokens
Raises:
:exc:`UnsupportedRendererException`: If we requested image
prefix for renderer we know nothing about.
See :data:`IMAGE_PREFIXES` for mapping of renderers and
image prefixes.
"""
try:
file_prefix = cmds.getAttr(IMAGE_PREFIXES[self.renderer])
except KeyError:
raise UnsupportedRendererException(
"Unsupported renderer {}".format(self.renderer)
)
return file_prefix
def _get_layer_data(self):
# type: () -> LayerMetadata
# ______________________________________________
# ____________________/ ____________________________________________/
# 1 - get scene name /__________________/
# ____________________/
_, scene_basename = os.path.split(cmds.file(q=True, loc=True))
scene_name, _ = os.path.splitext(scene_basename)
file_prefix = self.get_renderer_prefix()
if not file_prefix:
raise RuntimeError("Image prefix not set")
layer_name = self.layer
if self.layer.startswith("rs_"):
layer_name = self.layer[3:]
return LayerMetadata(
frameStart=int(self.get_render_attribute("startFrame")),
frameEnd=int(self.get_render_attribute("endFrame")),
frameStep=int(self.get_render_attribute("byFrameStep")),
padding=int(self.get_render_attribute("extensionPadding")),
# if we have <camera> token in prefix path we'll expect output for
# every renderable camera in layer.
cameras=self.get_renderable_cameras(),
sceneName=scene_name,
layerName=layer_name,
renderer=self.renderer,
defaultExt=cmds.getAttr("defaultRenderGlobals.imfPluginKey"),
filePrefix=file_prefix,
enabledAOVs=self.get_aovs()
)
def _generate_single_file_sequence(
self, layer_data, force_aov_name=None):
# type: (LayerMetadata, str) -> list
expected_files = []
for cam in layer_data.cameras:
file_prefix = layer_data.filePrefix
mappings = (
(R_SUBSTITUTE_SCENE_TOKEN, layer_data.sceneName),
(R_SUBSTITUTE_LAYER_TOKEN, layer_data.layerName),
(R_SUBSTITUTE_CAMERA_TOKEN, self.sanitize_camera_name(cam)),
# this is required to remove unfilled aov token, for example
# in Redshift
(R_REMOVE_AOV_TOKEN, "") if not force_aov_name \
else (R_SUBSTITUTE_AOV_TOKEN, force_aov_name),
(R_CLEAN_FRAME_TOKEN, ""),
(R_CLEAN_EXT_TOKEN, ""),
)
for regex, value in mappings:
file_prefix = re.sub(regex, value, file_prefix)
for frame in range(
int(layer_data.frameStart),
int(layer_data.frameEnd) + 1,
int(layer_data.frameStep),
):
expected_files.append(
"{}.{}.{}".format(
file_prefix,
str(frame).rjust(layer_data.padding, "0"),
layer_data.defaultExt,
)
)
return expected_files
def _generate_aov_file_sequences(self, layer_data):
# type: (LayerMetadata) -> list
expected_files = []
aov_file_list = {}
for aov in layer_data.enabledAOVs:
for cam in layer_data.cameras:
file_prefix = layer_data.filePrefix
mappings = (
(R_SUBSTITUTE_SCENE_TOKEN, layer_data.sceneName),
(R_SUBSTITUTE_LAYER_TOKEN, layer_data.layerName),
(R_SUBSTITUTE_CAMERA_TOKEN,
self.sanitize_camera_name(cam)),
(R_SUBSTITUTE_AOV_TOKEN, aov[0]),
(R_CLEAN_FRAME_TOKEN, ""),
(R_CLEAN_EXT_TOKEN, ""),
)
for regex, value in mappings:
file_prefix = re.sub(regex, value, file_prefix)
aov_files = []
for frame in range(
int(layer_data.frameStart),
int(layer_data.frameEnd) + 1,
int(layer_data.frameStep),
):
aov_files.append(
"{}.{}.{}".format(
file_prefix,
str(frame).rjust(layer_data.padding, "0"),
aov[1],
)
)
# if we have more than one renderable camera, append
# camera name to AOV to allow per camera AOVs.
aov_name = aov[0]
if len(layer_data.cameras) > 1:
aov_name = "{}_{}".format(aov[0],
self.sanitize_camera_name(cam))
aov_file_list[aov_name] = aov_files
file_prefix = layer_data.filePrefix
expected_files.append(aov_file_list)
return expected_files
def get_files(self):
"""Return list of expected files.
It will translate render token strings ('<RenderPass>', etc.) to
their values. This task is tricky as every renderer deals with this
differently. It depends on the `get_aovs()` abstract method
implemented by every supported renderer.
"""
layer_data = self._get_layer_data()
if layer_data.enabledAOVs:
return self._generate_aov_file_sequences(layer_data)
return self._generate_single_file_sequence(layer_data)
def get_renderable_cameras(self):
# type: () -> list
"""Get all renderable cameras.
Returns:
list: list of renderable cameras.
"""
cam_parents = [
cmds.listRelatives(x, ap=True)[-1] for x in cmds.ls(cameras=True)
]
return [
cam
for cam in cam_parents
if self.maya_is_true(cmds.getAttr("{}.renderable".format(cam)))
]
@staticmethod
def maya_is_true(attr_val):
"""Whether a Maya attr evaluates to True.
When querying an attribute value from an ambiguous object, the
Maya API will return a list of values, which must be handled
to evaluate correctly.
Args:
attr_val (mixed): Maya attribute to be evaluated as bool.
Returns:
bool: Maya attribute cast to Python's boolean value.
"""
# use `bool`/`list` directly for Python 3 compatibility
# (`types.BooleanType` and `types.ListType` are Python 2 only)
if isinstance(attr_val, bool):
return attr_val
if isinstance(attr_val, (list, tuple, types.GeneratorType)):
return any(attr_val)
return bool(attr_val)
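A Maya-free restatement of the coercion rules (assuming the same semantics as `maya_is_true`) can be exercised directly, without a Maya session:

```python
import types


def is_true(attr_val):
    """Coerce a Maya-style attribute query result to bool.

    Ambiguous queries return a list of values; any truthy entry
    counts as True. Scalars fall through to plain bool().
    """
    if isinstance(attr_val, bool):
        return attr_val
    if isinstance(attr_val, (list, tuple, types.GeneratorType)):
        return any(attr_val)
    return bool(attr_val)
```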
@staticmethod
def get_layer_overrides(attribute):
"""Get overrides for attribute on current render layer.
Args:
attribute (str): Maya attribute name.
Returns:
Value of attribute override.
"""
connections = cmds.listConnections(attribute, plugs=True)
if connections:
for connection in connections:
if connection:
# node_name = connection.split(".")[0]
attr_name = "%s.value" % ".".join(
connection.split(".")[:-1]
)
yield cmds.getAttr(attr_name)
def get_render_attribute(self, attribute):
"""Get attribute from render options.
Args:
attribute (str): name of attribute to be looked up.
Returns:
Attribute value
"""
return lib.get_attr_in_layer(
"defaultRenderGlobals.{}".format(attribute), layer=self.layer
)
class ExpectedFilesArnold(AExpectedFiles):
"""Expected files for Arnold renderer.
Attributes:
aiDriverExtension (dict): Arnold AOV driver extension mapping.
Is there a better way?
renderer (str): name of renderer.
"""
aiDriverExtension = {
"jpeg": "jpg",
"exr": "exr",
"deepexr": "exr",
"png": "png",
"tiff": "tif",
"mtoa_shaders": "ass", # TODO: research what those last two should be
"maya": "",
}
def __init__(self, layer, render_instance):
"""Constructor."""
super(ExpectedFilesArnold, self).__init__(layer, render_instance)
self.renderer = "arnold"
def get_aovs(self):
"""Get all AOVs.
See Also:
:func:`AExpectedFiles.get_aovs()`
Raises:
:class:`AOVError`: If AOV cannot be determined.
"""
enabled_aovs = []
try:
if not (
cmds.getAttr("defaultArnoldRenderOptions.aovMode")
and not cmds.getAttr("defaultArnoldDriver.mergeAOVs") # noqa: W503, E501
):
# AOVs are merged into a multi-channel file
self.multipart = True
return enabled_aovs
except ValueError:
# this occurs when the Render Settings window has not been opened
# yet. In that case no Arnold options exist, so the query for AOVs
# would fail. We terminate here as no AOVs are specified then.
# This state will most probably fail later on some validator
# anyway.
return enabled_aovs
# AOVs are set to be rendered separately. We should expect
# <RenderPass> token in path.
# handle aovs from references
use_ref_aovs = self.render_instance.data.get(
"useReferencedAovs", False) or False
ai_aovs = cmds.ls(type="aiAOV")
if not use_ref_aovs:
ref_aovs = cmds.ls(type="aiAOV", referencedNodes=True)
ai_aovs = list(set(ai_aovs) - set(ref_aovs))
for aov in ai_aovs:
enabled = self.maya_is_true(cmds.getAttr("{}.enabled".format(aov)))
ai_driver = cmds.listConnections("{}.outputs".format(aov))[0]
ai_translator = cmds.getAttr("{}.aiTranslator".format(ai_driver))
try:
aov_ext = self.aiDriverExtension[ai_translator]
except KeyError:
msg = (
"Unrecognized arnold " "driver format for AOV - {}"
).format(cmds.getAttr("{}.name".format(aov)))
raise AOVError(msg)
for override in self.get_layer_overrides(
"{}.enabled".format(aov)
):
enabled = self.maya_is_true(override)
if enabled:
# If aov RGBA is selected, Arnold will translate it to `beauty`
aov_name = cmds.getAttr("%s.name" % aov)
if aov_name == "RGBA":
aov_name = "beauty"
enabled_aovs.append((aov_name, aov_ext))
# Append 'beauty' as this is Arnold's
# default. If <RenderPass> token is specified and no AOVs are
# defined, this will be used.
enabled_aovs.append(
(u"beauty", cmds.getAttr("defaultRenderGlobals.imfPluginKey"))
)
return enabled_aovs
class ExpectedFilesVray(AExpectedFiles):
"""Expected files for V-Ray renderer."""
def __init__(self, layer, render_instance):
"""Constructor."""
super(ExpectedFilesVray, self).__init__(layer, render_instance)
self.renderer = "vray"
def get_renderer_prefix(self):
"""Get image prefix for V-Ray.
This overrides :func:`AExpectedFiles.get_renderer_prefix()` as
we must add `<aov>` token manually.
See also:
:func:`AExpectedFiles.get_renderer_prefix()`
"""
prefix = super(ExpectedFilesVray, self).get_renderer_prefix()
prefix = "{}_<aov>".format(prefix)
return prefix
def _get_layer_data(self):
# type: () -> LayerMetadata
"""Override to get vray specific extension."""
layer_data = super(ExpectedFilesVray, self)._get_layer_data()
default_ext = cmds.getAttr("vraySettings.imageFormatStr")
if default_ext in ["exr (multichannel)", "exr (deep)"]:
default_ext = "exr"
layer_data.defaultExt = default_ext
layer_data.padding = cmds.getAttr("vraySettings.fileNamePadding")
return layer_data
def get_files(self):
"""Get expected files.
This overrides :func:`AExpectedFiles.get_files()` as we
need to add one sequence for plain beauty if AOVs are enabled,
because V-Ray outputs beauty without 'beauty' in the filename.
"""
expected_files = super(ExpectedFilesVray, self).get_files()
layer_data = self._get_layer_data()
# remove 'beauty' from filenames as vray doesn't output it
update = {}
if layer_data.enabledAOVs:
for aov, seqs in expected_files[0].items():
if aov.startswith("beauty"):
new_list = []
for seq in seqs:
new_list.append(seq.replace("_beauty", ""))
update[aov] = new_list
expected_files[0].update(update)
return expected_files
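The beauty renaming step above can be isolated as a small helper; this is a sketch of the behavior, not the production API:

```python
def strip_beauty_token(expected_aovs):
    """Remove the '_beauty' token that V-Ray omits from its beauty
    output, leaving all other AOV sequences untouched."""
    result = {}
    for aov, seqs in expected_aovs.items():
        if aov.startswith("beauty"):
            result[aov] = [seq.replace("_beauty", "") for seq in seqs]
        else:
            result[aov] = list(seqs)
    return result
```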
def get_aovs(self):
"""Get all AOVs.
See Also:
:func:`AExpectedFiles.get_aovs()`
"""
enabled_aovs = []
try:
# really? do we set it in vray just by selecting multichannel exr?
if (
cmds.getAttr("vraySettings.imageFormatStr")
== "exr (multichannel)" # noqa: W503
):
# AOVs are merged into a multi-channel file
self.multipart = True
return enabled_aovs
except ValueError:
# this occurs when the Render Settings window has not been opened
# yet. In that case no V-Ray options exist, so the query for AOVs
# would fail. We terminate here as no AOVs are specified then.
# This state will most probably fail later on some validator
# anyway.
return enabled_aovs
default_ext = cmds.getAttr("vraySettings.imageFormatStr")
if default_ext in ["exr (multichannel)", "exr (deep)"]:
default_ext = "exr"
# add beauty as default
enabled_aovs.append(
(u"beauty", default_ext)
)
# handle aovs from references
use_ref_aovs = self.render_instance.data.get(
"useReferencedAovs", False) or False
# this will have list of all aovs no matter if they are coming from
# reference or not.
vr_aovs = cmds.ls(
type=["VRayRenderElement", "VRayRenderElementSet"]) or []
if not use_ref_aovs:
ref_aovs = cmds.ls(
type=["VRayRenderElement", "VRayRenderElementSet"],
referencedNodes=True) or []
# get difference
vr_aovs = list(set(vr_aovs) - set(ref_aovs))
for aov in vr_aovs:
enabled = self.maya_is_true(cmds.getAttr("{}.enabled".format(aov)))
for override in self.get_layer_overrides(
"{}.enabled".format(aov)
):
enabled = self.maya_is_true(override)
if enabled:
enabled_aovs.append(
(self._get_vray_aov_name(aov), default_ext))
return enabled_aovs
@staticmethod
def _get_vray_aov_name(node):
"""Get AOVs name from Vray.
Args:
node (str): aov node name.
Returns:
str: aov name.
"""
vray_name = None
vray_explicit_name = None
vray_file_name = None
for node_attr in cmds.listAttr(node):
if node_attr.startswith("vray_filename"):
vray_file_name = cmds.getAttr("{}.{}".format(node, node_attr))
elif node_attr.startswith("vray_name"):
vray_name = cmds.getAttr("{}.{}".format(node, node_attr))
elif node_attr.startswith("vray_explicit_name"):
vray_explicit_name = cmds.getAttr(
"{}.{}".format(node, node_attr))
if vray_file_name is not None and vray_file_name != "":
final_name = vray_file_name
elif vray_explicit_name is not None and vray_explicit_name != "":
final_name = vray_explicit_name
elif vray_name is not None and vray_name != "":
final_name = vray_name
else:
continue
# special case for Material Select elements - these are named
# based on the material they are connected to.
if "vray_mtl_mtlselect" in cmds.listAttr(node):
connections = cmds.listConnections(
"{}.vray_mtl_mtlselect".format(node))
if connections:
final_name += '_{}'.format(str(connections[0]))
return final_name
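The precedence applied above (file name override, then explicit name, then element name) can be sketched as a pure function; the parameter names mirror the V-Ray node attributes but the helper itself is hypothetical:

```python
def resolve_aov_name(vray_name=None, vray_explicit_name=None,
                     vray_file_name=None):
    """Return the effective AOV name following V-Ray's precedence:
    vray_filename > vray_explicit_name > vray_name."""
    for candidate in (vray_file_name, vray_explicit_name, vray_name):
        if candidate:  # skips both None and empty strings
            return candidate
    return None
```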
class ExpectedFilesRedshift(AExpectedFiles):
"""Expected files for Redshift renderer.
Attributes:
unmerged_aovs (list): Names of AOVs that are not merged into the
resulting exr and must be listed separately in expectedFiles.
"""
unmerged_aovs = ["Cryptomatte"]
def __init__(self, layer, render_instance):
"""Construtor."""
super(ExpectedFilesRedshift, self).__init__(layer, render_instance)
self.renderer = "redshift"
def get_renderer_prefix(self):
"""Get image prefix for Redshift.
This overrides :func:`AExpectedFiles.get_renderer_prefix()` as
we must add `<aov>` token manually.
See also:
:func:`AExpectedFiles.get_renderer_prefix()`
"""
prefix = super(ExpectedFilesRedshift, self).get_renderer_prefix()
prefix = "{}.<aov>".format(prefix)
return prefix
def get_files(self):
"""Get expected files.
This overrides :func:`AExpectedFiles.get_files()` as we
need to handle Redshift's beauty naming when AOVs are enabled:
a manually added Beauty AOV is rendered as 'Beauty_other' while
the standard beauty output carries the 'Beauty' token.
"""
expected_files = super(ExpectedFilesRedshift, self).get_files()
layer_data = self._get_layer_data()
# Redshift doesn't merge Cryptomatte AOV to final exr. We need to check
# for such condition and add it to list of expected files.
for aov in layer_data.enabledAOVs:
if aov[0].lower() == "cryptomatte":
aov_name = aov[0]
expected_files.append(
{aov_name: self._generate_single_file_sequence(layer_data)}
)
if layer_data.get("enabledAOVs"):
# because if Beauty is added manually, it will be rendered as
# 'Beauty_other' in file name and "standard" beauty will have
# 'Beauty' in its name. When disabled, standard output will be
# without `Beauty`.
if expected_files[0].get(u"Beauty"):
expected_files[0][u"Beauty_other"] = expected_files[0].pop(
u"Beauty")
new_list = [
seq.replace(".Beauty", ".Beauty_other")
for seq in expected_files[0][u"Beauty_other"]
]
expected_files[0][u"Beauty_other"] = new_list
expected_files[0][u"Beauty"] = self._generate_single_file_sequence( # noqa: E501
layer_data, force_aov_name="Beauty"
)
else:
expected_files[0][u"Beauty"] = self._generate_single_file_sequence( # noqa: E501
layer_data
)
return expected_files
def get_aovs(self):
"""Get all AOVs.
See Also:
:func:`AExpectedFiles.get_aovs()`
"""
enabled_aovs = []
try:
if self.maya_is_true(
cmds.getAttr("redshiftOptions.exrForceMultilayer")
):
# AOVs are merged into a multi-channel file
self.multipart = True
return enabled_aovs
except ValueError:
# this occurs when the Render Settings window has not been opened
# yet. In that case no Redshift options exist, so the query for
# AOVs would fail. We terminate here as no AOVs are specified then.
# This state will most probably fail later on some validator
# anyway.
return enabled_aovs
default_ext = cmds.getAttr(
"redshiftOptions.imageFormat", asString=True)
rs_aovs = cmds.ls(type="RedshiftAOV", referencedNodes=False)
for aov in rs_aovs:
enabled = self.maya_is_true(cmds.getAttr("{}.enabled".format(aov)))
for override in self.get_layer_overrides(
"{}.enabled".format(aov)
):
enabled = self.maya_is_true(override)
if enabled:
# If AOVs are merged into multipart exr, append AOV only if it
# is in the list of AOVs that renderer cannot (or will not)
# merge into final exr.
if self.maya_is_true(
cmds.getAttr("redshiftOptions.exrForceMultilayer")
):
if cmds.getAttr("%s.name" % aov) in self.unmerged_aovs:
enabled_aovs.append(
(cmds.getAttr("%s.name" % aov), default_ext)
)
else:
enabled_aovs.append(
(cmds.getAttr("%s.name" % aov), default_ext)
)
if self.maya_is_true(
cmds.getAttr("redshiftOptions.exrForceMultilayer")
):
# AOVs are merged into a multi-channel file
self.multipart = True
return enabled_aovs
class ExpectedFilesRenderman(AExpectedFiles):
"""Expected files for Renderman renderer.
Warning:
This is very rudimentary and needs more love and testing.
"""
def __init__(self, layer, render_instance):
"""Constructor."""
super(ExpectedFilesRenderman, self).__init__(layer, render_instance)
self.renderer = "renderman"
def get_aovs(self):
"""Get all AOVs.
See Also:
:func:`AExpectedFiles.get_aovs()`
"""
enabled_aovs = []
default_ext = "exr"
displays = cmds.listConnections("rmanGlobals.displays") or []
for aov in displays:
aov_name = str(aov)
if aov_name == "rmanDefaultDisplay":
aov_name = "beauty"
enabled = self.maya_is_true(cmds.getAttr("{}.enable".format(aov)))
for override in self.get_layer_overrides(
"{}.enable".format(aov)
):
enabled = self.maya_is_true(override)
if enabled:
enabled_aovs.append((aov_name, default_ext))
return enabled_aovs
def get_files(self):
"""Get expected files.
This overrides :func:`AExpectedFiles.get_files()` because
Renderman prepends an output directory to the image paths.
This path would normally be translated from
`rmanGlobals.imageOutputDir`; we skip that and hardcode the
prepended path we expect. There is no place for the user to
change this setting anyway and it is enforced by the render
settings validator.
"""
layer_data = self._get_layer_data()
new_aovs = {}
expected_files = super(ExpectedFilesRenderman, self).get_files()
# we always get beauty
for aov, files in expected_files[0].items():
new_files = []
for file in files:
new_file = "{}/{}/{}".format(
layer_data["sceneName"], layer_data["layerName"], file
)
new_files.append(new_file)
new_aovs[aov] = new_files
return [new_aovs]
class ExpectedFilesMentalray(AExpectedFiles):
"""Skeleton unimplemented class for Mentalray renderer."""
def __init__(self, layer, render_instance):
"""Constructor.
Raises:
:exc:`UnimplementedRendererException`: as it is not implemented.
"""
super(ExpectedFilesMentalray, self).__init__(layer, render_instance)
raise UnimplementedRendererException("Mentalray not implemented")
def get_aovs(self):
"""Get all AOVs.
See Also:
:func:`AExpectedFiles.get_aovs()`
"""
return []
class AOVError(Exception):
"""Custom exception for determining AOVs."""
class UnsupportedRendererException(Exception):
"""Custom exception.
Raised when requesting data from unsupported renderer.
"""
class UnimplementedRendererException(Exception):
"""Custom exception.
Raised when requesting data from renderer that is not implemented yet.
"""


@@ -2252,10 +2252,8 @@ def get_attr_in_layer(attr, layer):
try:
if cmds.mayaHasRenderSetup():
log.debug("lib.get_attr_in_layer is not "
"optimized for render setup")
with renderlayer(layer):
return cmds.getAttr(attr)
from . import lib_rendersetup
return lib_rendersetup.get_attr_in_layer(attr, layer)
except AttributeError:
pass

File diff suppressed because it is too large


@@ -0,0 +1,343 @@
# -*- coding: utf-8 -*-
"""Library for handling Render Setup in Maya."""
from maya import cmds
import maya.api.OpenMaya as om
import logging
import maya.app.renderSetup.model.utils as utils
from maya.app.renderSetup.model import (
renderSetup
)
from maya.app.renderSetup.model.override import (
AbsOverride,
RelOverride,
UniqueOverride
)
ExactMatch = 0
ParentMatch = 1
ChildMatch = 2
DefaultRenderLayer = "defaultRenderLayer"
log = logging.getLogger(__name__)
def get_rendersetup_layer(layer):
"""Return render setup layer name.
This also converts names from legacy renderLayer node name to render setup
name.
Note: `defaultRenderLayer` is not a renderSetupLayer node, but it is
nevertheless a valid layer name for Render Setup - so we return it as is.
Example:
>>> for legacy_layer in cmds.ls(type="renderLayer"):
>>> layer = get_rendersetup_layer(legacy_layer)
Returns:
str or None: Returns renderSetupLayer node name if `layer` is a valid
layer name in legacy renderlayers or render setup layers.
Returns None if the layer can't be found or Render Setup is
currently disabled.
"""
if layer == DefaultRenderLayer:
# defaultRenderLayer doesn't have a `renderSetupLayer`
return layer
if not cmds.mayaHasRenderSetup():
return None
if not cmds.objExists(layer):
return None
if cmds.nodeType(layer) == "renderSetupLayer":
return layer
# By default Render Setup renames the legacy renderlayer
# to `rs_<layername>` but lets not rely on that as the
# layer node can be renamed manually
connections = cmds.listConnections(layer + ".message",
type="renderSetupLayer",
exactType=True,
source=False,
destination=True,
plugs=True) or []
return next((conn.split(".", 1)[0] for conn in connections
if conn.endswith(".legacyRenderLayer")), None)
def get_attr_in_layer(node_attr, layer):
"""Return attribute value in Render Setup layer.
This will only work for attributes which can be
retrieved with `maya.cmds.getAttr` and for which
Relative and Absolute overrides are applicable.
Examples:
>>> get_attr_in_layer("defaultResolution.width", layer="layer1")
>>> get_attr_in_layer("defaultRenderGlobals.startFrame", layer="layer")
>>> get_attr_in_layer("transform.translate", layer="layer3")
Args:
node_attr (str): attribute name as 'node.attribute'
layer (str): layer name
Returns:
object: attribute value in layer
"""
# Delay pymel import to here because it's slow to load
import pymel.core as pm
def _layer_needs_update(layer):
"""Return whether layer needs updating."""
# Use `getattr` as e.g. DefaultRenderLayer does not have the attribute
return getattr(layer, "needsMembershipUpdate", False) or \
getattr(layer, "needsApplyUpdate", False)
def get_default_layer_value(node_attr_):
"""Return attribute value in defaultRenderLayer"""
inputs = cmds.listConnections(node_attr_,
source=True,
destination=False,
# We want to skip conversion nodes since
# an override to `endFrame` could have
# a `unitToTimeConversion` node
# in-between
skipConversionNodes=True,
type="applyOverride") or []
if inputs:
_override = inputs[0]
history_overrides = cmds.ls(cmds.listHistory(_override,
pruneDagObjects=True),
type="applyOverride")
node = history_overrides[-1] if history_overrides else _override
node_attr_ = node + ".original"
return pm.getAttr(node_attr_, asString=True)
layer = get_rendersetup_layer(layer)
rs = renderSetup.instance()
current_layer = rs.getVisibleRenderLayer()
if current_layer.name() == layer:
# Ensure layer is up-to-date
if _layer_needs_update(current_layer):
try:
rs.switchToLayer(current_layer)
except RuntimeError:
# Some cases can cause errors on switching
# the first time with Render Setup layers
# e.g. different overrides to compounds
# and its children plugs. So we just force
# it another time. If it then still fails
# we will let it error out.
rs.switchToLayer(current_layer)
return pm.getAttr(node_attr, asString=True)
overrides = get_attr_overrides(node_attr, layer)
default_layer_value = get_default_layer_value(node_attr)
if not overrides:
return default_layer_value
value = default_layer_value
for match, layer_override, index in overrides:
if isinstance(layer_override, AbsOverride):
# Absolute override: keep the override's value separate so the
# original value is still available for the child-plug case
override_value = pm.getAttr(layer_override.name() + ".attrValue")
if match == ExactMatch:
value = override_value
if match == ParentMatch:
value = override_value[index]
if match == ChildMatch:
value[index] = override_value
elif isinstance(layer_override, RelOverride):
# Relative override
# Value = Original * Multiply + Offset
multiply = pm.getAttr(layer_override.name() + ".multiply")
offset = pm.getAttr(layer_override.name() + ".offset")
if match == ExactMatch:
value = value * multiply + offset
if match == ParentMatch:
value = value * multiply[index] + offset[index]
if match == ChildMatch:
value[index] = value[index] * multiply + offset
else:
raise TypeError("Unsupported override: %s" % layer_override)
return value
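The folding of overrides over the default-layer value follows plain replacement for absolute overrides and `value = original * multiply + offset` for relative ones. A minimal sketch of that arithmetic (scalar case only, with made-up override tuples standing in for Render Setup nodes):

```python
def fold_overrides(original, overrides):
    """Apply overrides in order of strength. Each override is either
    ("abs", value), replacing the value outright, or
    ("rel", multiply, offset), computing value * multiply + offset."""
    value = original
    for override in overrides:
        if override[0] == "abs":
            value = override[1]
        elif override[0] == "rel":
            value = value * override[1] + override[2]
        else:
            raise TypeError("Unsupported override: %s" % (override,))
    return value
```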
def get_attr_overrides(node_attr, layer,
skip_disabled=True,
skip_local_render=True,
stop_at_absolute_override=True):
"""Return all Overrides applicable to the attribute.
Overrides are returned as a 3-tuple:
(Match, Override, Index)
Match:
This is any of ExactMatch, ParentMatch, ChildMatch
and defines whether the override is exactly on the
plug, on the parent or on a child plug.
Override:
This is the RenderSetup Override instance.
Index:
This is the Plug index under the parent or for
the child that matches. The ExactMatch index will
always be None. For ParentMatch the index is which
index the plug is under the parent plug. For ChildMatch
the index is which child index matches the plug.
Args:
node_attr (str): attribute name as 'node.attribute'
layer (str): layer name
skip_disabled (bool): exclude disabled overrides
skip_local_render (bool): exclude overrides marked
as local render.
stop_at_absolute_override: exclude overrides prior
to the last absolute override as they have
no influence on the resulting value.
Returns:
list: Ordered Overrides in order of strength
"""
def get_mplug_children(plug):
"""Return children MPlugs of compound MPlug"""
children = []
if plug.isCompound:
for i in range(plug.numChildren()):
children.append(plug.child(i))
return children
def get_mplug_names(mplug):
"""Return long and short name of MPlug"""
long_name = mplug.partialName(useLongNames=True)
short_name = mplug.partialName(useLongNames=False)
return {long_name, short_name}
def iter_override_targets(_override):
try:
for target in _override._targets():
yield target
except AssertionError:
# Workaround: There is a bug where the private `_targets()` method
# fails on some attribute plugs. For example overrides
# to the defaultRenderGlobals.endFrame
# (Tested in Maya 2020.2)
log.debug("Workaround for %s" % _override)
from maya.app.renderSetup.common.utils import findPlug
attr = _override.attributeName()
if isinstance(_override, UniqueOverride):
node = _override.targetNodeName()
yield findPlug(node, attr)
else:
nodes = _override.parent().selector().nodes()
for node in nodes:
if cmds.attributeQuery(attr, node=node, exists=True):
yield findPlug(node, attr)
# Get the MPlug for the node.attr
sel = om.MSelectionList()
sel.add(node_attr)
plug = sel.getPlug(0)
layer = get_rendersetup_layer(layer)
if layer == DefaultRenderLayer:
# DefaultRenderLayer will never have overrides
# since it's the default layer
return []
rs_layer = renderSetup.instance().getRenderLayer(layer)
if rs_layer is None:
# Renderlayer does not exist
return []
# Get any parent or children plugs as we also
# want to include them in the attribute match
# for overrides
parent = plug.parent() if plug.isChild else None
parent_index = None
if parent:
parent_index = get_mplug_children(parent).index(plug)
children = get_mplug_children(plug)
# Create lookup for the attribute by both long
# and short names
attr_names = get_mplug_names(plug)
for child in children:
attr_names.update(get_mplug_names(child))
if parent:
attr_names.update(get_mplug_names(parent))
# Get all overrides of the layer
# And find those that are relevant to the attribute
plug_overrides = []
# Iterate over the overrides in reverse so we get the last
# overrides first and can "break" whenever an absolute
# override is reached
layer_overrides = list(utils.getOverridesRecursive(rs_layer))
for layer_override in reversed(layer_overrides):
if skip_disabled and not layer_override.isEnabled():
# Ignore disabled overrides
continue
if skip_local_render and layer_override.isLocalRender():
continue
# The targets list can be very large so we'll do
# a quick filter by attribute name to detect whether
# it matches the attribute name, or its parent or child
if layer_override.attributeName() not in attr_names:
continue
override_match = None
for override_plug in iter_override_targets(layer_override):
override_match = None
if plug == override_plug:
override_match = (ExactMatch, layer_override, None)
elif parent and override_plug == parent:
override_match = (ParentMatch, layer_override, parent_index)
elif children and override_plug in children:
child_index = children.index(override_plug)
override_match = (ChildMatch, layer_override, child_index)
if override_match:
plug_overrides.append(override_match)
break
if (
override_match and
stop_at_absolute_override and
isinstance(layer_override, AbsOverride) and
# When the override is only on a child plug then it doesn't
# override the entire value, so we do not stop at this override
not override_match[0] == ChildMatch
):
# If the override is an absolute override, then break out
# of the parent loop; we don't need to look any further as
# this is the absolute override
break
return list(reversed(plug_overrides))


@@ -16,12 +16,9 @@ log = logging.getLogger(__name__)
def _get_menu(menu_name=None):
"""Return the menu instance if it currently exists in Maya"""
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
_menu = project_settings["maya"]["scriptsmenu"]["name"]
if menu_name is None:
menu_name = _menu
menu_name = pipeline._menu
widgets = dict((
w.objectName(), w) for w in QtWidgets.QApplication.allWidgets())
menu = widgets.get(menu_name)
@@ -58,11 +55,64 @@ def deferred():
parent=pipeline._parent
)
# Find the pipeline menu
top_menu = _get_menu()
# Try to find workfile tool action in the menu
workfile_action = None
for action in top_menu.actions():
if action.text() == "Work Files":
workfile_action = action
break
# Add at the top of menu if "Work Files" action was not found
after_action = ""
if workfile_action:
# Use action's object name for `insertAfter` argument
after_action = workfile_action.objectName()
# Insert action to menu
cmds.menuItem(
"Work Files",
parent=pipeline._menu,
command=launch_workfiles_app,
insertAfter=after_action
)
# Remove replaced action
if workfile_action:
top_menu.removeAction(workfile_action)
def remove_project_manager():
top_menu = _get_menu()
# Try to find "System" menu action in the menu
system_menu = None
for action in top_menu.actions():
if action.text() == "System":
system_menu = action
break
if system_menu is None:
return
# Try to find "Project manager" action in "System" menu
project_manager_action = None
for action in system_menu.menu().children():
if hasattr(action, "text") and action.text() == "Project Manager":
project_manager_action = action
break
# Remove "Project manager" action if was found
if project_manager_action is not None:
system_menu.menu().removeAction(project_manager_action)
log.info("Attempting to install scripts menu ...")
add_build_workfiles_item()
add_look_assigner_item()
modify_workfiles()
remove_project_manager()
try:
import scriptsmenu.launchformaya as launchformaya
@@ -110,7 +160,6 @@ def install():
log.info("Skipping openpype.menu initialization in batch mode..")
return
uninstall()
# Allow time for uninstallation to finish.
cmds.evalDeferred(deferred)


@@ -99,14 +99,24 @@ class ReferenceLoader(api.Loader):
nodes = self[:]
if not nodes:
return
loaded_containers.append(containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__
))
# FIXME: there is probably better way to do this for looks.
if "look" in self.families:
loaded_containers.append(containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__
))
else:
ref_node = self._get_reference_node(nodes)
loaded_containers.append(containerise(
name=name,
namespace=namespace,
nodes=[ref_node],
context=context,
loader=self.__class__.__name__
))
c += 1
namespace = None
@@ -235,9 +245,6 @@ class ReferenceLoader(api.Loader):
self.log.info("Setting %s.verticesOnlySet to False", node)
cmds.setAttr("{}.verticesOnlySet".format(node), False)
# Add new nodes of the reference to the container
cmds.sets(content, forceElement=node)
# Remove any placeHolderList attribute entries from the set that
# are remaining from nodes being removed from the referenced file.
members = cmds.sets(node, query=True)


@@ -4,6 +4,8 @@ import os
import json
import appdirs
import requests
import six
import sys
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
@@ -12,7 +14,13 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.api import (get_system_settings, get_asset)
from openpype.api import (
get_system_settings,
get_project_settings,
get_asset)
from openpype.modules import ModulesManager
from avalon.api import Session
class CreateRender(plugin.Creator):
@@ -83,6 +91,32 @@ class CreateRender(plugin.Creator):
def __init__(self, *args, **kwargs):
"""Constructor."""
super(CreateRender, self).__init__(*args, **kwargs)
deadline_settings = get_system_settings()["modules"]["deadline"]
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
project_settings = get_project_settings(Session["AVALON_PROJECT"])
try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
project_settings["deadline"]
["deadline_servers"]
)
self.deadline_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}
if not self.deadline_servers:
self.deadline_servers = default_servers
except AttributeError:
# Handle the situation where we had only one URL for Deadline.
manager = ModulesManager()
deadline_module = manager.modules_by_name["deadline"]
# get default deadline webservice url from deadline module
self.deadline_servers = deadline_module.deadline_urls
def process(self):
"""Entry point."""
@@ -94,10 +128,10 @@ class CreateRender(plugin.Creator):
use_selection = self.options.get("useSelection")
with lib.undo_chunk():
self._create_render_settings()
instance = super(CreateRender, self).process()
self.instance = super(CreateRender, self).process()
# create namespace with instance
index = 1
namespace_name = "_{}".format(str(instance))
namespace_name = "_{}".format(str(self.instance))
try:
cmds.namespace(rm=namespace_name)
except RuntimeError:
@@ -105,12 +139,20 @@ class CreateRender(plugin.Creator):
pass
while cmds.namespace(exists=namespace_name):
namespace_name = "_{}{}".format(str(instance), index)
namespace_name = "_{}{}".format(str(self.instance), index)
index += 1
namespace = cmds.namespace(add=namespace_name)
cmds.setAttr("{}.machineList".format(instance), lock=True)
# add Deadline server selection list
if self.deadline_servers:
cmds.scriptJob(
attributeChange=[
"{}.deadlineServers".format(self.instance),
self._deadline_webservice_changed
])
cmds.setAttr("{}.machineList".format(self.instance), lock=True)
self._rs = renderSetup.instance()
layers = self._rs.getRenderLayers()
if use_selection:
@@ -122,7 +164,7 @@ class CreateRender(plugin.Creator):
render_set = cmds.sets(
n="{}:{}".format(namespace, layer.name()))
sets.append(render_set)
cmds.sets(sets, forceElement=instance)
cmds.sets(sets, forceElement=self.instance)
# if no render layers are present, create default one with
# asterisk selector
@@ -138,62 +180,61 @@ class CreateRender(plugin.Creator):
renderer = 'renderman'
self._set_default_renderer_settings(renderer)
return self.instance
def _deadline_webservice_changed(self):
"""Refresh Deadline server dependent options."""
# get selected server
from maya import cmds
webservice = self.deadline_servers[
self.server_aliases[
cmds.getAttr("{}.deadlineServers".format(self.instance))
]
]
pools = self._get_deadline_pools(webservice)
cmds.deleteAttr("{}.primaryPool".format(self.instance))
cmds.deleteAttr("{}.secondaryPool".format(self.instance))
cmds.addAttr(self.instance, longName="primaryPool",
attributeType="enum",
enumName=":".join(pools))
cmds.addAttr(self.instance, longName="secondaryPool",
attributeType="enum",
enumName=":".join(["-"] + pools))
def _get_deadline_pools(self, webservice):
# type: (str) -> list
"""Get pools from Deadline.
Args:
webservice (str): Server url.
Returns:
list: Pools.
Throws:
RuntimeError: If deadline webservice is unreachable.
"""
argument = "{}/api/pools?NamesOnly=true".format(webservice)
try:
response = self._requests_get(argument)
except requests.exceptions.ConnectionError as exc:
msg = 'Cannot connect to deadline web service'
self.log.error(msg)
six.reraise(
RuntimeError,
RuntimeError('{} - {}'.format(msg, exc)),
sys.exc_info()[2])
if not response.ok:
self.log.warning("No pools retrieved")
return []
return response.json()
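The `_get_deadline_pools` method above builds a `NamesOnly` query against the Deadline Web Service and treats the JSON body as the list of pool names. A minimal standalone sketch of the URL construction it relies on (the helper name and the trailing-slash guard are illustrative additions, not part of the plugin):

```python
def build_pools_url(webservice):
    # Deadline Web Service endpoint listing pool names only, as queried
    # by the plugin above. rstrip() tolerates a trailing slash in settings.
    return "{}/api/pools?NamesOnly=true".format(webservice.rstrip("/"))
```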
def _create_render_settings(self):
"""Create instance settings."""
# get pools
pools = []
system_settings = get_system_settings()["modules"]
deadline_enabled = system_settings["deadline"]["enabled"]
muster_enabled = system_settings["muster"]["enabled"]
deadline_url = system_settings["deadline"]["DEADLINE_REST_URL"]
muster_url = system_settings["muster"]["MUSTER_REST_URL"]
if deadline_enabled and muster_enabled:
self.log.error(
"Both Deadline and Muster are enabled. " "Cannot support both."
)
raise RuntimeError("Both Deadline and Muster are enabled")
if deadline_enabled:
argument = "{}/api/pools?NamesOnly=true".format(deadline_url)
try:
response = self._requests_get(argument)
except requests.exceptions.ConnectionError as e:
msg = 'Cannot connect to deadline web service'
self.log.error(msg)
raise RuntimeError('{} - {}'.format(msg, e))
if not response.ok:
self.log.warning("No pools retrieved")
else:
pools = response.json()
self.data["primaryPool"] = pools
# We add a string "-" to allow the user to not
# set any secondary pools
self.data["secondaryPool"] = ["-"] + pools
if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")
self._load_credentials()
self.log.info(">>> Getting pools ...")
try:
pools = self._get_muster_pools()
except requests.exceptions.HTTPError as e:
if e.startswith("401"):
self.log.warning("access token expired")
self._show_login()
raise RuntimeError("Access token expired")
except requests.exceptions.ConnectionError:
self.log.error("Cannot connect to Muster API endpoint.")
raise RuntimeError("Cannot connect to {}".format(muster_url))
pool_names = []
for pool in pools:
self.log.info(" - pool: {}".format(pool["name"]))
pool_names.append(pool["name"])
self.data["primaryPool"] = pool_names
pool_names = []
self.server_aliases = self.deadline_servers.keys()
self.data["deadlineServers"] = self.server_aliases
self.data["suspendPublishJob"] = False
self.data["review"] = True
self.data["extendFrames"] = False
@@ -212,6 +253,54 @@ class CreateRender(plugin.Creator):
# Disable for now as this feature is not working yet
# self.data["assScene"] = False
system_settings = get_system_settings()["modules"]
deadline_enabled = system_settings["deadline"]["enabled"]
muster_enabled = system_settings["muster"]["enabled"]
muster_url = system_settings["muster"]["MUSTER_REST_URL"]
if deadline_enabled and muster_enabled:
self.log.error(
"Both Deadline and Muster are enabled. " "Cannot support both."
)
raise RuntimeError("Both Deadline and Muster are enabled")
if deadline_enabled:
# if default server is not between selected, use first one for
# initial list of pools.
try:
deadline_url = self.deadline_servers["default"]
except KeyError:
deadline_url = [
self.deadline_servers[k]
for k in self.deadline_servers.keys()
][0]
pool_names = self._get_deadline_pools(deadline_url)
if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")
self._load_credentials()
self.log.info(">>> Getting pools ...")
pools = []
try:
pools = self._get_muster_pools()
except requests.exceptions.HTTPError as e:
if e.startswith("401"):
self.log.warning("access token expired")
self._show_login()
raise RuntimeError("Access token expired")
except requests.exceptions.ConnectionError:
self.log.error("Cannot connect to Muster API endpoint.")
raise RuntimeError("Cannot connect to {}".format(muster_url))
for pool in pools:
self.log.info(" - pool: {}".format(pool["name"]))
pool_names.append(pool["name"])
self.data["primaryPool"] = pool_names
# We add a string "-" to allow the user to not
# set any secondary pools
self.data["secondaryPool"] = ["-"] + pool_names
self.options = {"useSelection": False} # Force no content
def _load_credentials(self):
@@ -293,9 +382,7 @@ class CreateRender(plugin.Creator):
"""
if "verify" not in kwargs:
kwargs["verify"] = (
False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True
) # noqa
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.post(*args, **kwargs)
def _requests_get(self, *args, **kwargs):
@@ -312,9 +399,7 @@ class CreateRender(plugin.Creator):
"""
if "verify" not in kwargs:
kwargs["verify"] = (
False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True
) # noqa
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.get(*args, **kwargs)
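Both the old and the new form of this `verify` expression share a subtlety: `os.getenv` returns a *string* when the variable is set, so any non-empty value (even `"0"` or `"false"`) disables verification, and the `True` default means verification stays off unless the variable is set to an empty string. A sketch that makes the truth table explicit (the helper name is illustrative):

```python
import os

def should_verify_ssl(env=None):
    # Mirrors `not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)`:
    # unset -> default True -> verify False; any non-empty string -> False;
    # empty string -> verify True.
    env = os.environ if env is None else env
    return not env.get("OPENPYPE_DONT_VERIFY_SSL", True)
```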
def _set_default_renderer_settings(self, renderer):
@@ -332,14 +417,10 @@ class CreateRender(plugin.Creator):
if renderer == "arnold":
# set format to exr
cmds.setAttr(
"defaultArnoldDriver.ai_translator", "exr", type="string")
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
self._set_global_output_settings()
# resolution
cmds.setAttr(
"defaultResolution.width",
@@ -349,43 +430,12 @@ class CreateRender(plugin.Creator):
asset["data"].get("resolutionHeight"))
if renderer == "vray":
vray_settings = cmds.ls(type="VRaySettingsNode")
if not vray_settings:
node = cmds.createNode("VRaySettingsNode")
else:
node = vray_settings[0]
# set underscore as element separator instead of default `.`
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(
node),
"_"
)
# set format to exr
cmds.setAttr(
"{}.imageFormatStr".format(node), 5)
# animType
cmds.setAttr(
"{}.animType".format(node), 1)
# resolution
cmds.setAttr(
"{}.width".format(node),
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"{}.height".format(node),
asset["data"].get("resolutionHeight"))
self._set_vray_settings(asset)
if renderer == "redshift":
redshift_settings = cmds.ls(type="RedshiftOptions")
if not redshift_settings:
node = cmds.createNode("RedshiftOptions")
else:
node = redshift_settings[0]
_ = self._set_renderer_option(
"RedshiftOptions", "{}.imageFormat", 1
)
# set exr
cmds.setAttr("{}.imageFormat".format(node), 1)
# resolution
cmds.setAttr(
"defaultResolution.width",
@@ -394,8 +444,56 @@ class CreateRender(plugin.Creator):
"defaultResolution.height",
asset["data"].get("resolutionHeight"))
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
self._set_global_output_settings()
@staticmethod
def _set_renderer_option(renderer_node, arg=None, value=None):
# type: (str, str, str) -> str
"""Set option on renderer node.
If the renderer settings node doesn't exist, it is created first.
Args:
renderer_node (str): Renderer name.
arg (str, optional): Argument name.
value (str, optional): Argument value.
Returns:
str: Renderer settings node.
"""
settings = cmds.ls(type=renderer_node)
result = settings[0] if settings else cmds.createNode(renderer_node)
cmds.setAttr(arg.format(result), value)
return result
def _set_vray_settings(self, asset):
# type: (dict) -> None
"""Sets important settings for Vray."""
node = self._set_renderer_option(
"VRaySettingsNode", "{}.fileNameRenderElementSeparator", "_"
)
# set format to exr
cmds.setAttr(
"{}.imageFormatStr".format(node), 5)
# animType
cmds.setAttr(
"{}.animType".format(node), 1)
# resolution
cmds.setAttr(
"{}.width".format(node),
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"{}.height".format(node),
asset["data"].get("resolutionHeight"))
@staticmethod
def _set_global_output_settings():
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
View file
@@ -0,0 +1,11 @@
from openpype.hosts.maya.api import plugin
class CreateXgen(plugin.Creator):
"""Xgen interactive export"""
name = "xgen"
label = "Xgen Interactive"
family = "xgen"
icon = "pagelines"
defaults = ['Main']
View file
@@ -17,7 +17,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
"layout",
"camera",
"rig",
"camerarig"]
"camerarig",
"xgen"]
representations = ["ma", "abc", "fbx", "mb"]
label = "Reference"
View file
@@ -49,7 +49,7 @@ import maya.app.renderSetup.model.renderSetup as renderSetup
import pyblish.api
from avalon import maya, api
from openpype.hosts.maya.api.expected_files import ExpectedFiles
from openpype.hosts.maya.api.lib_renderproducts import get as get_layer_render_products # noqa: E501
from openpype.hosts.maya.api import lib
@@ -64,6 +64,8 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
def process(self, context):
"""Entry point to collector."""
render_instance = None
deadline_url = None
for instance in context:
if "rendering" in instance.data["families"]:
render_instance = instance
@@ -86,6 +88,15 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
asset = api.Session["AVALON_ASSET"]
workspace = context.data["workspaceDir"]
deadline_settings = (
context.data
["system_settings"]
["modules"]
["deadline"]
)
if deadline_settings["enabled"]:
deadline_url = render_instance.data.get("deadlineUrl")
self._rs = renderSetup.instance()
current_layer = self._rs.getVisibleRenderLayer()
maya_render_layers = {
@@ -157,10 +168,21 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
# return all expected files for all cameras and aovs in given
# frame range
ef = ExpectedFiles(render_instance)
exp_files = ef.get(renderer, layer_name)
self.log.info("multipart: {}".format(ef.multipart))
layer_render_products = get_layer_render_products(
layer_name, render_instance)
render_products = layer_render_products.layer_data.products
assert render_products, "no render products generated"
exp_files = []
for product in render_products:
for camera in layer_render_products.layer_data.cameras:
exp_files.append(
{product.productName: layer_render_products.get_files(
product, camera)})
self.log.info("multipart: {}".format(
layer_render_products.multipart))
assert exp_files, "no file names were generated, this is bug"
self.log.info(exp_files)
# if we want to attach render to subset, check if we have AOV's
# in expectedFiles. If so, raise error as we cannot attach AOV
@@ -175,24 +197,15 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
full_exp_files = []
aov_dict = {}
# we either get AOVs or just list of files. List of files can
# mean two things - there are no AOVs enabled or multipass EXR
# is produced. In either case we treat those as `beauty`.
if isinstance(exp_files[0], dict):
for aov, files in exp_files[0].items():
full_paths = []
for e in files:
full_path = os.path.join(workspace, "renders", e)
full_path = full_path.replace("\\", "/")
full_paths.append(full_path)
aov_dict[aov] = full_paths
else:
# replace relative paths with absolute. Render products are
# returned as list of dictionaries.
for aov in exp_files:
full_paths = []
for e in exp_files:
full_path = os.path.join(workspace, "renders", e)
for file in aov[list(aov.keys())[0]]:
full_path = os.path.join(workspace, "renders", file)
full_path = full_path.replace("\\", "/")
full_paths.append(full_path)
aov_dict["beauty"] = full_paths
aov_dict[list(aov.keys())[0]] = full_paths
frame_start_render = int(self.get_render_attribute(
"startFrame", layer=layer_name))
@@ -224,7 +237,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"subset": expected_layer_name,
"attachTo": attach_to,
"setMembers": layer_name,
"multipartExr": ef.multipart,
"multipartExr": layer_render_products.multipart,
"review": render_instance.data.get("review") or False,
"publish": True,
@@ -263,6 +276,9 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"vrayUseReferencedAovs") or False
}
if deadline_url:
data["deadlineUrl"] = deadline_url
if self.sync_workfile_version:
data["version"] = context.data["version"]
@@ -306,10 +322,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
instance.data.update(data)
self.log.debug("data: {}".format(json.dumps(data, indent=4)))
# Restore current layer.
self.log.info("Restoring to {}".format(current_layer.name()))
self._rs.switchToLayer(current_layer)
def parse_options(self, render_globals):
"""Get all overrides with a value, skip those without.
@@ -392,11 +404,13 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
rset = self.maya_layers[layer].renderSettingsCollectionInstance()
return rset.getOverrides()
def get_render_attribute(self, attr, layer):
@staticmethod
def get_render_attribute(attr, layer):
"""Get attribute from render options.
Args:
attr (str): name of attribute to be looked up.
attr (str): name of attribute to be looked up
layer (str): name of render layer
Returns:
Attribute value
View file
@@ -19,7 +19,8 @@ class ExtractMayaSceneRaw(openpype.api.Extractor):
families = ["mayaAscii",
"setdress",
"layout",
"camerarig"]
"camerarig",
"xgen"]
scene_type = "ma"
def process(self, instance):
View file
@@ -0,0 +1,61 @@
import os
from maya import cmds
import avalon.maya
import openpype.api
class ExtractXgenCache(openpype.api.Extractor):
"""Produce an alembic of just xgen interactive groom
"""
label = "Extract Xgen ABC Cache"
hosts = ["maya"]
families = ["xgen"]
optional = True
def process(self, instance):
# Collect the out set nodes
out_descriptions = [node for node in instance
if cmds.nodeType(node) == "xgmSplineDescription"]
start = 1
end = 1
self.log.info("Extracting Xgen Cache..")
dirname = self.staging_dir(instance)
parent_dir = self.staging_dir(instance)
filename = "{name}.abc".format(**instance.data)
path = os.path.join(parent_dir, filename)
with avalon.maya.suspended_refresh():
with avalon.maya.maintained_selection():
command = (
'-file '
+ path
+ ' -df "ogawa" -fr '
+ str(start)
+ ' '
+ str(end)
+ ' -step 1 -mxf -wfw'
)
for desc in out_descriptions:
command += (" -obj " + desc)
cmds.xgmSplineCache(export=True, j=command)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'abc',
'ext': 'abc',
'files': filename,
"stagingDir": dirname,
}
instance.data["representations"].append(representation)
self.log.info("Extracted {} to {}".format(instance, dirname))
View file
@@ -259,7 +259,8 @@ class LoadMov(api.Loader):
read_node["last"].setValue(last)
read_node['frame_mode'].setValue("start at")
if int(self.first_frame) == int(read_node['frame'].value()):
if int(float(self.first_frame)) == int(
float(read_node['frame'].value())):
# start at workfile start
read_node['frame'].setValue(str(self.first_frame))
else:
View file
@@ -51,7 +51,6 @@ class ExtractReviewDataLut(openpype.api.Extractor):
if "render.farm" in families:
instance.data["families"].remove("review")
instance.data["families"].remove("ftrack")
self.log.debug(
"_ lutPath: {}".format(instance.data["lutPath"]))
View file
@@ -45,7 +45,6 @@ class ExtractReviewDataMov(openpype.api.Extractor):
if "render.farm" in families:
instance.data["families"].remove("review")
instance.data["families"].remove("ftrack")
data = exporter.generate_mov(farm=True)
self.log.debug(
View file
@@ -2,7 +2,7 @@ import os
import opentimelineio as otio
import pyblish.api
from openpype import lib as plib
from copy import deepcopy
class CollectInstances(pyblish.api.InstancePlugin):
"""Collect instances from editorial's OTIO sequence"""
@@ -186,8 +186,8 @@ class CollectInstances(pyblish.api.InstancePlugin):
properities.pop("version")
# adding Review-able instance
subset_instance_data = instance_data.copy()
subset_instance_data.update(properities)
subset_instance_data = deepcopy(instance_data)
subset_instance_data.update(deepcopy(properities))
subset_instance_data.update({
# unique attributes
"name": f"{name}_{subset}",
View file
@@ -270,6 +270,7 @@ class CollectTextures(pyblish.api.ContextPlugin):
# store origin
if family == 'workfile':
families = self.workfile_families
families.append("texture_batch_workfile")
new_instance.data["source"] = "standalone publisher"
else:
View file
@@ -8,7 +8,7 @@ class ValidateTextureBatch(pyblish.api.InstancePlugin):
label = "Validate Texture Presence"
hosts = ["standalonepublisher"]
order = openpype.api.ValidateContentsOrder
families = ["workfile"]
families = ["texture_batch_workfile"]
optional = False
def process(self, instance):
View file
@@ -8,7 +8,7 @@ class ValidateTextureBatchNaming(pyblish.api.InstancePlugin):
label = "Validate Texture Batch Naming"
hosts = ["standalonepublisher"]
order = openpype.api.ValidateContentsOrder
families = ["workfile", "textures"]
families = ["texture_batch_workfile", "textures"]
optional = False
def process(self, instance):
View file
@@ -11,7 +11,7 @@ class ValidateTextureBatchWorkfiles(pyblish.api.InstancePlugin):
label = "Validate Texture Workfile Has Resources"
hosts = ["standalonepublisher"]
order = openpype.api.ValidateContentsOrder
families = ["workfile"]
families = ["texture_batch_workfile"]
optional = True
# from presets
View file
@@ -50,12 +50,12 @@ class ExtractSequence(pyblish.api.Extractor):
mark_in = instance.context.data["sceneMarkIn"]
mark_out = instance.context.data["sceneMarkOut"]
# Scene start frame offsets the output files, so we need to offset the
# marks.
# Change scene Start Frame to 0 to prevent frame index issues
# - issue is that TVPaint versions deal with frame indexes in a
# different way when Start Frame is not `0`
# NOTE It will be set back after rendering
scene_start_frame = instance.context.data["sceneStartFrame"]
difference = scene_start_frame - mark_in
mark_in += difference
mark_out += difference
lib.execute_george("tv_startframe 0")
# Frame start/end may be stored as float
frame_start = int(instance.data["frameStart"])
@@ -145,6 +145,9 @@ class ExtractSequence(pyblish.api.Extractor):
filtered_layers
)
# Change scene frame Start back to previous value
lib.execute_george("tv_startframe {}".format(scene_start_frame))
# Sequence of one frame
if not output_filenames:
self.log.warning("Extractor did not create any output.")
View file
@@ -415,13 +415,11 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
"""Plugin entry point."""
self._instance = instance
context = instance.context
self._deadline_url = (
context.data["system_settings"]
["modules"]
["deadline"]
["DEADLINE_REST_URL"]
)
assert self._deadline_url, "Requires DEADLINE_REST_URL"
self._deadline_url = context.data.get("defaultDeadline")
self._deadline_url = instance.data.get(
"deadlineUrl", self._deadline_url)
assert self._deadline_url, "Requires Deadline Webservice URL"
file_path = None
if self.use_published:
View file
@@ -1140,7 +1140,8 @@ def prepare_host_environments(data, implementation_envs=True):
# Merge dictionaries
env_values = _merge_env(tool_env, env_values)
loaded_env = _merge_env(acre.compute(env_values), data["env"])
merged_env = _merge_env(env_values, data["env"])
loaded_env = acre.compute(merged_env, cleanup=False)
final_env = None
# Add host specific environments
@@ -1191,7 +1192,10 @@ def apply_project_environments_value(project_name, env, project_settings=None):
env_value = project_settings["global"]["project_environments"]
if env_value:
env.update(_merge_env(acre.parse(env_value), env))
env.update(acre.compute(
_merge_env(acre.parse(env_value), env),
cleanup=False
))
return env
View file
@@ -72,6 +72,8 @@ class PypeStreamHandler(logging.StreamHandler):
msg = self.format(record)
msg = Terminal.log(msg)
stream = self.stream
if stream is None:
return
fs = "%s\n"
# if no unicode support...
if not USE_UNICODE:
View file
@@ -38,6 +38,7 @@ from .muster import MusterModule
from .deadline import DeadlineModule
from .project_manager_action import ProjectManagerAction
from .standalonepublish_action import StandAlonePublishAction
from .python_console_interpreter import PythonInterpreterAction
from .sync_server import SyncServerModule
from .slack import SlackIntegrationModule
@@ -77,6 +78,7 @@ __all__ = (
"DeadlineModule",
"ProjectManagerAction",
"StandAlonePublishAction",
"PythonInterpreterAction",
"SyncServerModule",
View file
@@ -6,17 +6,25 @@ from openpype.modules import (
class DeadlineModule(PypeModule, IPluginPaths):
name = "deadline"
def __init__(self, manager, settings):
self.deadline_urls = {}
super(DeadlineModule, self).__init__(manager, settings)
def initialize(self, modules_settings):
# This module is always enabled
deadline_settings = modules_settings[self.name]
self.enabled = deadline_settings["enabled"]
self.deadline_url = deadline_settings["DEADLINE_REST_URL"]
deadline_url = deadline_settings.get("DEADLINE_REST_URL")
if deadline_url:
self.deadline_urls = {"default": deadline_url}
else:
self.deadline_urls = deadline_settings.get("deadline_urls") # noqa: E501
def get_global_environments(self):
"""Deadline global environments for OpenPype implementation."""
return {
"DEADLINE_REST_URL": self.deadline_url
}
if not self.deadline_urls:
self.enabled = False
self.log.warning(("default Deadline Webservice URL "
"not specified. Disabling module."))
return
def connect_with_modules(self, *_a, **_kw):
return
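The `initialize` hunk above keeps backward compatibility: a single legacy `DEADLINE_REST_URL` becomes the `default` entry of the newer multi-server mapping. The fallback logic can be sketched as (the helper name is an assumption for illustration):

```python
def resolve_deadline_urls(deadline_settings):
    # Prefer the legacy single-URL setting as the "default" server,
    # otherwise fall back to the multi-server "deadline_urls" mapping.
    url = deadline_settings.get("DEADLINE_REST_URL")
    if url:
        return {"default": url}
    return deadline_settings.get("deadline_urls") or {}
```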
View file
@@ -0,0 +1,71 @@
# -*- coding: utf-8 -*-
"""Collect Deadline servers from instance.
This resolves the index of the server list stored in the `deadlineServers`
instance attribute, or uses the default server if that attribute doesn't exist.
"""
import pyblish.api
class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
"""Collect Deadline Webservice URL from instance."""
order = pyblish.api.CollectorOrder
label = "Deadline Webservice from the Instance"
families = ["rendering"]
def process(self, instance):
instance.data["deadlineUrl"] = self._collect_deadline_url(instance)
self.log.info(
"Using {} for submission.".format(instance.data["deadlineUrl"]))
@staticmethod
def _collect_deadline_url(render_instance):
# type: (pyblish.api.Instance) -> str
"""Get Deadline Webservice URL from render instance.
This will get all configured Deadline Webservice URLs and create
subset of them based upon project configuration. It will then take
`deadlineServers` from the render instance, which is now basically an `int`
index into that list.
Args:
render_instance (pyblish.api.Instance): Render instance created
by Creator in Maya.
Returns:
str: Selected Deadline Webservice URL.
"""
deadline_settings = (
render_instance.context.data
["system_settings"]
["modules"]
["deadline"]
)
try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
render_instance.context.data
["project_settings"]
["deadline"]
["deadline_servers"]
)
deadline_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}
except (AttributeError, KeyError):
# Handle the situation where we had only one URL for Deadline.
return render_instance.context.data["defaultDeadline"]
return deadline_servers[
list(deadline_servers.keys())[
int(render_instance.data.get("deadlineServers"))
]
]
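The collector above subsets the system-wide server list by the project configuration, then resolves the integer enum index stored on the instance. The selection logic, reduced to plain data (the helper name and flat inputs are illustrative):

```python
def pick_deadline_url(default_servers, project_servers, index):
    # Keep only project-enabled servers (preserving project order),
    # then resolve the enum index the Creator stored on the instance.
    deadline_servers = {
        name: default_servers[name]
        for name in project_servers
        if name in default_servers
    }
    return deadline_servers[list(deadline_servers.keys())[int(index)]]
```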
View file
@@ -0,0 +1,21 @@
# -*- coding: utf-8 -*-
"""Collect default Deadline server."""
import pyblish.api
class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin):
"""Collect default Deadline Webservice URL."""
order = pyblish.api.CollectorOrder + 0.01
label = "Default Deadline Webservice"
def process(self, context):
try:
deadline_module = context.data.get("openPypeModules")["deadline"]
except (AttributeError, KeyError, TypeError):
self.log.error("Cannot get OpenPype Deadline module.")
raise AssertionError("OpenPype Deadline module not found.")
# get default deadline webservice url from deadline module
self.log.debug(deadline_module.deadline_urls)
context.data["defaultDeadline"] = deadline_module.deadline_urls["default"] # noqa: E501
View file
@@ -199,7 +199,7 @@ def get_renderer_variables(renderlayer, root):
if extension is None:
extension = "png"
if extension == "exr (multichannel)" or extension == "exr (deep)":
if extension in ["exr (multichannel)", "exr (deep)"]:
extension = "exr"
prefix_attr = "vraySettings.fileNamePrefix"
@@ -264,12 +264,13 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
self._instance = instance
self.payload_skeleton = copy.deepcopy(payload_skeleton_template)
self._deadline_url = (
context.data["system_settings"]
["modules"]
["deadline"]
["DEADLINE_REST_URL"]
)
# get default deadline webservice url from deadline module
self.deadline_url = instance.context.data.get("defaultDeadline")
# if custom one is set in instance, use that
if instance.data.get("deadlineUrl"):
self.deadline_url = instance.data.get("deadlineUrl")
assert self.deadline_url, "Requires Deadline Webservice URL"
self._job_info = (
context.data["project_settings"].get(
@@ -287,65 +288,76 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
"pluginInfo", {})
)
assert self._deadline_url, "Requires DEADLINE_REST_URL"
context = instance.context
workspace = context.data["workspaceDir"]
anatomy = context.data['anatomy']
instance.data["toBeRenderedOn"] = "deadline"
filepath = None
patches = (
context.data["project_settings"].get(
"deadline", {}).get(
"publish", {}).get(
"MayaSubmitDeadline", {}).get(
"scene_patches", {})
)
# Handle render/export from published scene or not ------------------
if self.use_published:
patched_files = []
for i in context:
if "workfile" in i.data["families"]:
assert i.data["publish"] is True, (
"Workfile (scene) must be published along")
template_data = i.data.get("anatomyData")
rep = i.data.get("representations")[0].get("name")
template_data["representation"] = rep
template_data["ext"] = rep
template_data["comment"] = None
anatomy_filled = anatomy.format(template_data)
template_filled = anatomy_filled["publish"]["path"]
filepath = os.path.normpath(template_filled)
self.log.info("Using published scene for render {}".format(
filepath))
if "workfile" not in i.data["families"]:
continue
assert i.data["publish"] is True, (
"Workfile (scene) must be published along")
template_data = i.data.get("anatomyData")
rep = i.data.get("representations")[0].get("name")
template_data["representation"] = rep
template_data["ext"] = rep
template_data["comment"] = None
anatomy_filled = anatomy.format(template_data)
template_filled = anatomy_filled["publish"]["path"]
filepath = os.path.normpath(template_filled)
self.log.info("Using published scene for render {}".format(
filepath))
if not os.path.exists(filepath):
self.log.error("published scene does not exist!")
raise
# now we need to switch scene in expected files
# because <scene> token will now point to published
# scene file and that might differ from current one
new_scene = os.path.splitext(
os.path.basename(filepath))[0]
orig_scene = os.path.splitext(
os.path.basename(context.data["currentFile"]))[0]
exp = instance.data.get("expectedFiles")
if not os.path.exists(filepath):
self.log.error("published scene does not exist!")
raise
# now we need to switch scene in expected files
# because <scene> token will now point to published
# scene file and that might differ from current one
new_scene = os.path.splitext(
os.path.basename(filepath))[0]
orig_scene = os.path.splitext(
os.path.basename(context.data["currentFile"]))[0]
exp = instance.data.get("expectedFiles")
if isinstance(exp[0], dict):
# we have aovs and we need to iterate over them
new_exp = {}
for aov, files in exp[0].items():
replaced_files = []
for f in files:
replaced_files.append(
f.replace(orig_scene, new_scene)
)
new_exp[aov] = replaced_files
instance.data["expectedFiles"] = [new_exp]
else:
new_exp = []
for f in exp:
new_exp.append(
if isinstance(exp[0], dict):
# we have aovs and we need to iterate over them
new_exp = {}
for aov, files in exp[0].items():
replaced_files = []
for f in files:
replaced_files.append(
f.replace(orig_scene, new_scene)
)
instance.data["expectedFiles"] = [new_exp]
self.log.info("Scene name was switched {} -> {}".format(
orig_scene, new_scene
))
new_exp[aov] = replaced_files
instance.data["expectedFiles"] = [new_exp]
else:
new_exp = []
for f in exp:
new_exp.append(
f.replace(orig_scene, new_scene)
)
instance.data["expectedFiles"] = [new_exp]
self.log.info("Scene name was switched {} -> {}".format(
orig_scene, new_scene
))
# patch workfile if needed
if filepath not in patched_files:
patched_file = self._patch_workfile(filepath, patches)
patched_files.append(patched_file)
all_instances = []
for result in context.data["results"]:
@@ -670,7 +682,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
self.log.info(
"Submitting tile job(s) [{}] ...".format(len(frame_payloads)))
url = "{}/api/jobs".format(self._deadline_url)
url = "{}/api/jobs".format(self.deadline_url)
tiles_count = instance.data.get("tilesX") * instance.data.get("tilesY") # noqa: E501
for tile_job in frame_payloads:
@@ -754,7 +766,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
# E.g. http://192.168.0.1:8082/api/jobs
url = "{}/api/jobs".format(self._deadline_url)
url = "{}/api/jobs".format(self.deadline_url)
response = self._requests_post(url, json=payload)
if not response.ok:
raise Exception(response.text)
@@ -868,10 +880,11 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
payload["JobInfo"].update(job_info_ext)
payload["PluginInfo"].update(plugin_info_ext)
envs = []
for k, v in payload["JobInfo"].items():
if k.startswith("EnvironmentKeyValue"):
envs.append(v)
envs = [
v
for k, v in payload["JobInfo"].items()
if k.startswith("EnvironmentKeyValue")
]
# add app name to environment
envs.append(
@@ -892,11 +905,8 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
envs.append(
"OPENPYPE_ASS_EXPORT_STEP={}".format(1))
i = 0
for e in envs:
for i, e in enumerate(envs):
payload["JobInfo"]["EnvironmentKeyValue{}".format(i)] = e
i += 1
return payload
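The `enumerate` rewrite above replaces a manual counter when numbering Deadline's `EnvironmentKeyValueN` JobInfo fields. The convention it produces can be sketched as (the helper name is illustrative):

```python
def job_info_env_entries(envs):
    # Deadline expects environment entries as numbered JobInfo keys:
    # EnvironmentKeyValue0, EnvironmentKeyValue1, ...
    return {
        "EnvironmentKeyValue{}".format(i): e
        for i, e in enumerate(envs)
    }
```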
def _get_vray_render_payload(self, data):
@@ -964,7 +974,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
payload = self._get_arnold_export_payload(data)
self.log.info("Submitting ass export job.")
url = "{}/api/jobs".format(self._deadline_url)
url = "{}/api/jobs".format(self.deadline_url)
response = self._requests_post(url, json=payload)
if not response.ok:
self.log.error("Submition failed!")
@@ -1003,7 +1013,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
"""
if 'verify' not in kwargs:
kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True # noqa
kwargs['verify'] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
# add 10sec timeout before bailing out
kwargs['timeout'] = 10
return requests.post(*args, **kwargs)
@@ -1022,7 +1032,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
"""
if 'verify' not in kwargs:
kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True # noqa
kwargs['verify'] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
# add 10sec timeout before bailing out
kwargs['timeout'] = 10
return requests.get(*args, **kwargs)
@@ -1069,3 +1079,43 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
result = filename_zero.replace("\\", "/")
return result
def _patch_workfile(self, file, patches):
# type: (str, list) -> [str, None]
"""Patch Maya scene.
This will take a list of patches (lines to add) and apply them to the
*published* Maya scene file (that is used later for rendering).
Each patch is a dict with the following structure::
{
"name": "Name of patch",
"regex": "regex of line before patch",
"line": "line to insert"
}
Args:
file (str): File to patch.
patches (list): List of patch dicts.
Returns:
str: Patched file path or None
"""
if os.path.splitext(file)[1].lower() != ".ma" or not patches:
return None
compiled_regex = [re.compile(p["regex"]) for p in patches]
with open(file, "r+") as pf:
scene_data = pf.readlines()
for ln, line in enumerate(scene_data):
for i, r in enumerate(compiled_regex):
if re.match(r, line):
scene_data.insert(ln + 1, patches[i]["line"])
pf.seek(0)
pf.writelines(scene_data)
pf.truncate()
self.log.info(
"Applied {} patch to scene.".format(
patches[i]["name"]))
return file
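The insert-after-match logic above can be sketched independently of Maya scene files; the function name and the sample lines here are illustrative only, not part of the plugin:

```python
import re

def apply_patches(lines, patches):
    """Insert each patch's "line" after the first line matching its
    "regex", mirroring the patch structure documented above."""
    result = list(lines)
    for patch in patches:
        regex = re.compile(patch["regex"])
        for idx, line in enumerate(result):
            if regex.match(line):
                result.insert(idx + 1, patch["line"])
                break
    return result
```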


@ -42,13 +42,12 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
node = instance[0]
context = instance.context
deadline_url = (
context.data["system_settings"]
["modules"]
["deadline"]
["DEADLINE_REST_URL"]
)
assert deadline_url, "Requires DEADLINE_REST_URL"
# get default deadline webservice url from deadline module
deadline_url = instance.context.data["defaultDeadline"]
# if custom one is set in instance, use that
if instance.data.get("deadlineUrl"):
deadline_url = instance.data.get("deadlineUrl")
assert deadline_url, "Requires Deadline Webservice URL"
self.deadline_url = "{}/api/jobs".format(deadline_url)
self._comment = context.data.get("comment", "")
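The default-then-override URL resolution used here is repeated across several of these Deadline plugins; it reduces to a small helper. This is a sketch only, with an illustrative function name and plain dicts standing in for pyblish instance/context data:

```python
def resolve_deadline_url(instance_data, context_data):
    """Prefer a per-instance "deadlineUrl" override, falling back to
    the module-wide default stored as "defaultDeadline" in context data."""
    url = instance_data.get("deadlineUrl") or context_data.get("defaultDeadline")
    assert url, "Requires Deadline Webservice URL"
    return url
```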
@ -252,39 +251,11 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
environment = dict({key: os.environ[key] for key in keys
if key in os.environ}, **api.Session)
# self.log.debug("enviro: {}".format(pprint(environment)))
for _path in os.environ:
if _path.lower().startswith('openpype_'):
environment[_path] = os.environ[_path]
clean_environment = {}
for key, value in environment.items():
clean_path = ""
self.log.debug("key: {}".format(key))
if "://" in value:
clean_path = value
else:
valid_paths = []
for path in value.split(os.pathsep):
if not path:
continue
try:
path.decode('UTF-8', 'strict')
valid_paths.append(os.path.normpath(path))
except UnicodeDecodeError:
print('path contains non UTF characters')
if valid_paths:
clean_path = os.pathsep.join(valid_paths)
if key == "PYTHONPATH":
clean_path = clean_path.replace('python2', 'python3')
self.log.debug("clean path: {}".format(clean_path))
clean_environment[key] = clean_path
environment = clean_environment
# to recognize job from PYPE for turning Event On/Off
environment["OPENPYPE_RENDER_JOB"] = "1"


@ -5,7 +5,6 @@ import os
import json
import re
from copy import copy, deepcopy
import sys
import openpype.api
from avalon import api, io
@ -615,14 +614,16 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
instance["families"] = families
def process(self, instance):
# type: (pyblish.api.Instance) -> None
"""Process plugin.
Detect the type of render farm submission and, in the case of
Deadline, create and post a dependent job. It creates a JSON file
with metadata needed for publishing in the render directory.
:param instance: Instance data
:type instance: dict
Args:
instance (pyblish.api.Instance): Instance data.
"""
data = instance.data.copy()
context = instance.context
@ -908,13 +909,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
}
if submission_type == "deadline":
self.deadline_url = (
context.data["system_settings"]
["modules"]
["deadline"]
["DEADLINE_REST_URL"]
)
assert self.deadline_url, "Requires DEADLINE_REST_URL"
# get default deadline webservice url from deadline module
self.deadline_url = instance.context.data["defaultDeadline"]
# if custom one is set in instance, use that
if instance.data.get("deadlineUrl"):
self.deadline_url = instance.data.get("deadlineUrl")
assert self.deadline_url, "Requires Deadline Webservice URL"
self._submit_deadline_post_job(instance, render_job, instances)


@ -1,11 +1,10 @@
import pyblish.api
from avalon.vendor import requests
from openpype.plugin import contextplugin_should_run
import os
class ValidateDeadlineConnection(pyblish.api.ContextPlugin):
class ValidateDeadlineConnection(pyblish.api.InstancePlugin):
"""Validate Deadline Web Service is running"""
label = "Validate Deadline Web Service"
@ -13,18 +12,16 @@ class ValidateDeadlineConnection(pyblish.api.ContextPlugin):
hosts = ["maya", "nuke"]
families = ["renderlayer"]
def process(self, context):
# Workaround bug pyblish-base#250
if not contextplugin_should_run(self, context):
return
deadline_url = (
context.data["system_settings"]
["modules"]
["deadline"]
["DEADLINE_REST_URL"]
)
def process(self, instance):
# get default deadline webservice url from deadline module
deadline_url = instance.context.data["defaultDeadline"]
# if custom one is set in instance, use that
if instance.data.get("deadlineUrl"):
deadline_url = instance.data.get("deadlineUrl")
self.log.info(
"We have deadline URL on instance {}".format(
deadline_url))
assert deadline_url, "Requires Deadline Webservice URL"
# Check response
response = self._requests_get(deadline_url)


@ -4,7 +4,6 @@ import pyblish.api
from avalon.vendor import requests
from openpype.api import get_system_settings
from openpype.lib.abstract_submit_deadline import requests_get
from openpype.lib.delivery import collect_frames
@ -22,6 +21,7 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin):
allow_user_override = True
def process(self, instance):
self.instance = instance
frame_list = self._get_frame_list(instance.data["render_job_id"])
for repre in instance.data["representations"]:
@ -129,13 +129,12 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin):
Might differ from the job info saved in metadata.json if the user
manually changes the job before/during rendering.
"""
deadline_url = (
get_system_settings()
["modules"]
["deadline"]
["DEADLINE_REST_URL"]
)
assert deadline_url, "Requires DEADLINE_REST_URL"
# get default deadline webservice url from deadline module
deadline_url = self.instance.context.data["defaultDeadline"]
# if custom one is set in instance, use that
if self.instance.data.get("deadlineUrl"):
deadline_url = self.instance.data.get("deadlineUrl")
assert deadline_url, "Requires Deadline Webservice URL"
url = "{}/api/jobs?JobID={}".format(deadline_url, job_id)
try:
@ -181,6 +180,10 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin):
"""Returns set of file names from metadata.json"""
expected_files = set()
for file_name in repre["files"]:
files = repre["files"]
if not isinstance(files, list):
files = [files]
for file_name in files:
expected_files.add(file_name)
return expected_files
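The normalization fixed above (a single file arrives as a plain string, a sequence as a list) can be shown standalone; the helper name is illustrative:

```python
def expected_file_names(files):
    """Return a set of file names from a representation's "files"
    value, which may be a single string or a list of strings."""
    if not isinstance(files, list):
        files = [files]
    return set(files)
```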


@ -0,0 +1,61 @@
from openpype.modules.ftrack.lib import ServerAction
class PrivateProjectDetectionAction(ServerAction):
"""Action helps to identify that the server does not have access to a project."""
identifier = "server.missing.perm.private.project"
label = "Missing permissions"
description = (
"Main ftrack event server does not have access to this project."
)
def _discover(self, event):
"""Show action only if entities are not accessible but a selection exists in event data."""
entities = self._translate_event(event)
if entities:
return None
selection = event["data"].get("selection")
if not selection:
return None
return {
"items": [{
"label": self.label,
"variant": self.variant,
"description": self.description,
"actionIdentifier": self.discover_identifier,
"icon": self.icon,
}]
}
def _launch(self, event):
# Ignore if there are values in event data
# - somebody clicked on submit button
values = event["data"].get("values")
if values:
return None
title = "# Private project (missing permissions) #"
msg = (
"User ({}) or API Key used on Ftrack event server"
" does not have permissions to access this private project."
).format(self.session.api_user)
return {
"type": "form",
"title": "Missing permissions",
"items": [
{"type": "label", "value": title},
{"type": "label", "value": msg},
# Add hidden item to be able to detect if submit was clicked
{"type": "hidden", "value": "1", "name": "hidden"}
],
"submit_button_label": "Got it"
}
def register(session):
'''Register plugin. Called when used as a plugin.'''
PrivateProjectDetectionAction(session).register()


@ -1,33 +1,98 @@
import platform
import socket
import getpass
from openpype.modules.ftrack.lib import BaseAction, statics_icon
class ActionAskWhereIRun(BaseAction):
""" Sometimes users forget where the pipeline with their credentials is running.
- this action triggers `ActionShowWhereIRun`
"""
ignore_me = True
identifier = 'ask.where.i.run'
label = 'Ask where I run'
description = 'Triggers showing PC info where the user has OpenPype running'
icon = statics_icon("ftrack", "action_icons", "ActionAskWhereIRun.svg")
class ActionWhereIRun(BaseAction):
"""Show where same user has running OpenPype instances."""
def discover(self, session, entities, event):
""" Hide by default - Should be enabled only if you want to run.
- best practice is to create another action that triggers this one
"""
identifier = "ask.where.i.run"
show_identifier = "show.where.i.run"
label = "OpenPype Admin"
variant = "- Where I run"
description = "Show PC info where the user has OpenPype running"
return True
def _discover(self, _event):
return {
"items": [{
"label": self.label,
"variant": self.variant,
"description": self.description,
"actionIdentifier": self.discover_identifier,
"icon": self.icon,
}]
}
def launch(self, session, entities, event):
more_data = {"event_hub_id": session.event_hub.id}
self.trigger_action(
"show.where.i.run", event, additional_event_data=more_data
def _launch(self, event):
self.trigger_action(self.show_identifier, event)
def register(self):
# Register default action callbacks
super(ActionWhereIRun, self).register()
# Add show identifier
show_subscription = (
"topic=ftrack.action.launch"
" and data.actionIdentifier={}"
" and source.user.username={}"
).format(
self.show_identifier,
self.session.api_user
)
self.session.event_hub.subscribe(
show_subscription,
self._show_info
)
return True
def _show_info(self, event):
title = "Where Do I Run?"
msgs = {}
all_keys = ["Hostname", "IP", "Username", "System name", "PC name"]
try:
host_name = socket.gethostname()
msgs["Hostname"] = host_name
host_ip = socket.gethostbyname(host_name)
msgs["IP"] = host_ip
except Exception:
pass
try:
system_name, pc_name, *_ = platform.uname()
msgs["System name"] = system_name
msgs["PC name"] = pc_name
except Exception:
pass
try:
msgs["Username"] = getpass.getuser()
except Exception:
pass
for key in all_keys:
if not msgs.get(key):
msgs[key] = "-Undefined-"
items = []
first = True
separator = {"type": "label", "value": "---"}
for key, value in msgs.items():
if first:
first = False
else:
items.append(separator)
self.log.debug("{}: {}".format(key, value))
subtitle = {"type": "label", "value": "<h3>{}</h3>".format(key)}
items.append(subtitle)
message = {"type": "label", "value": "<p>{}</p>".format(value)}
items.append(message)
self.show_interface(items, title, event=event)
def register(session):
'''Register plugin. Called when used as a plugin.'''
ActionAskWhereIRun(session).register()
ActionWhereIRun(session).register()


@ -1,86 +0,0 @@
import platform
import socket
import getpass
from openpype.modules.ftrack.lib import BaseAction
class ActionShowWhereIRun(BaseAction):
""" Sometimes users forget where the pipeline with their credentials is running.
- this action shows on which PC, under which username and IP it is running
- requirement: the action MUST be registered where we want to locate the PC:
- - can't be used retrospectively...
"""
#: Action identifier.
identifier = 'show.where.i.run'
#: Action label.
label = 'Show where I run'
#: Action description.
description = 'Shows PC info where the user has OpenPype running'
def discover(self, session, entities, event):
""" Hide by default - Should be enabled only if you want to run.
- best practice is to create another action that triggers this one
"""
return False
@property
def launch_identifier(self):
return self.identifier
def launch(self, session, entities, event):
# Don't show info when was launch from this session
if session.event_hub.id == event.get("data", {}).get("event_hub_id"):
return True
title = "Where Do I Run?"
msgs = {}
all_keys = ["Hostname", "IP", "Username", "System name", "PC name"]
try:
host_name = socket.gethostname()
msgs["Hostname"] = host_name
host_ip = socket.gethostbyname(host_name)
msgs["IP"] = host_ip
except Exception:
pass
try:
system_name, pc_name, *_ = platform.uname()
msgs["System name"] = system_name
msgs["PC name"] = pc_name
except Exception:
pass
try:
msgs["Username"] = getpass.getuser()
except Exception:
pass
for key in all_keys:
if not msgs.get(key):
msgs[key] = "-Undefined-"
items = []
first = True
splitter = {'type': 'label', 'value': '---'}
for key, value in msgs.items():
if first:
first = False
else:
items.append(splitter)
self.log.debug("{}: {}".format(key, value))
subtitle = {'type': 'label', 'value': '<h3>{}</h3>'.format(key)}
items.append(subtitle)
message = {'type': 'label', 'value': '<p>{}</p>'.format(value)}
items.append(message)
self.show_interface(items, title, event=event)
return True
def register(session):
'''Register plugin. Called when used as a plugin.'''
ActionShowWhereIRun(session).register()


@ -191,15 +191,22 @@ class BaseHandler(object):
if session is None:
session = self.session
_entities = event['data'].get('entities_object', None)
_entities = event["data"].get("entities_object", None)
if _entities is not None and not _entities:
return _entities
if (
_entities is None or
_entities[0].get(
'link', None
_entities is None
or _entities[0].get(
"link", None
) == ftrack_api.symbol.NOT_SET
):
_entities = self._get_entities(event)
event['data']['entities_object'] = _entities
_entities = [
item
for item in self._get_entities(event)
if item is not None
]
event["data"]["entities_object"] = _entities
return _entities


@ -63,8 +63,9 @@ class CollectFtrackFamily(pyblish.api.InstancePlugin):
self.log.debug("Adding ftrack family for '{}'".
format(instance.data.get("family")))
if families and "ftrack" not in families:
instance.data["families"].append("ftrack")
if families:
if "ftrack" not in families:
instance.data["families"].append("ftrack")
else:
instance.data["families"] = ["ftrack"]
else:
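The branching above (append the family when the list exists, create the list when it does not) can be captured as a small helper; the function name is illustrative and plain dicts stand in for instance data:

```python
def ensure_ftrack_family(instance_data):
    """Add the "ftrack" family, creating the "families" list when the
    instance has none (the case the fix above handles)."""
    families = instance_data.get("families")
    if families:
        if "ftrack" not in families:
            families.append("ftrack")
    else:
        instance_data["families"] = ["ftrack"]
    return instance_data
```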


@ -0,0 +1,8 @@
from .module import (
PythonInterpreterAction
)
__all__ = (
"PythonInterpreterAction",
)


@ -0,0 +1,45 @@
from .. import PypeModule, ITrayAction
class PythonInterpreterAction(PypeModule, ITrayAction):
label = "Console"
name = "python_interpreter"
admin_action = True
def initialize(self, modules_settings):
self.enabled = True
self._interpreter_window = None
def tray_init(self):
self.create_interpreter_window()
def tray_exit(self):
if self._interpreter_window is not None:
self._interpreter_window.save_registry()
def connect_with_modules(self, *args, **kwargs):
pass
def create_interpreter_window(self):
"""Initialize the interpreter Qt window."""
if self._interpreter_window:
return
from openpype.modules.python_console_interpreter.window import (
PythonInterpreterWidget
)
self._interpreter_window = PythonInterpreterWidget()
def on_action_trigger(self):
self.show_interpreter_window()
def show_interpreter_window(self):
self.create_interpreter_window()
if self._interpreter_window.isVisible():
self._interpreter_window.activateWindow()
self._interpreter_window.raise_()
return
self._interpreter_window.show()


@ -0,0 +1,8 @@
from .widgets import (
PythonInterpreterWidget
)
__all__ = (
"PythonInterpreterWidget",
)


@ -0,0 +1,583 @@
import os
import re
import sys
import collections
from code import InteractiveInterpreter
import appdirs
from Qt import QtCore, QtWidgets, QtGui
from openpype import resources
from openpype.style import load_stylesheet
from openpype.lib import JSONSettingRegistry
openpype_art = """
. . .. . ..
_oOOP3OPP3Op_. .
.PPpo~. .. ~2p. .. .... . .
.Ppo . .pPO3Op.. . O:. . . .
.3Pp . oP3'. 'P33. . 4 .. . . . .. . . .
.~OP 3PO. .Op3 : . .. _____ _____ _____
.P3O . oP3oP3O3P' . . . . / /./ /./ /
O3:. O3p~ . .:. . ./____/./____/ /____/
'P . 3p3. oP3~. ..P:. . . .. . . .. . . .
. ': . Po' .Opo'. .3O. . o[ by Pype Club ]]]==- - - . .
. '_ .. . . _OP3.. . .https://openpype.io.. .
~P3.OPPPO3OP~ . .. .
. ' '. . .. . . . .. .
"""
class PythonInterpreterRegistry(JSONSettingRegistry):
"""Class handling the Python interpreter tool settings registry.
Attributes:
vendor (str): Name used for path construction.
product (str): Additional name used for path construction.
"""
def __init__(self):
self.vendor = "pypeclub"
self.product = "openpype"
name = "python_interpreter_tool"
path = appdirs.user_data_dir(self.product, self.vendor)
super(PythonInterpreterRegistry, self).__init__(name, path)
class StdOEWrap:
def __init__(self):
self._origin_stdout_write = None
self._origin_stderr_write = None
self._listening = False
self.lines = collections.deque()
if not sys.stdout:
sys.stdout = open(os.devnull, "w")
if not sys.stderr:
sys.stderr = open(os.devnull, "w")
if self._origin_stdout_write is None:
self._origin_stdout_write = sys.stdout.write
if self._origin_stderr_write is None:
self._origin_stderr_write = sys.stderr.write
self._listening = True
sys.stdout.write = self._stdout_listener
sys.stderr.write = self._stderr_listener
def stop_listen(self):
self._listening = False
def _stdout_listener(self, text):
if self._listening:
self.lines.append(text)
if self._origin_stdout_write is not None:
self._origin_stdout_write(text)
def _stderr_listener(self, text):
if self._listening:
self.lines.append(text)
if self._origin_stderr_write is not None:
self._origin_stderr_write(text)
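StdOEWrap above patches the ``write`` methods of the current streams in place; a self-contained variant of the same capture idea swaps the stream object instead. The class name and the printed text are illustrative only:

```python
import collections
import sys

class StdoutCapture:
    """Interpose on sys.stdout so every written chunk is recorded in a
    deque while still reaching the original stream."""

    def __init__(self):
        self.lines = collections.deque()
        self._origin = sys.stdout
        sys.stdout = self

    def write(self, text):
        # Record the chunk, then forward it to the real stdout.
        self.lines.append(text)
        return self._origin.write(text)

    def flush(self):
        self._origin.flush()

    def stop(self):
        # Restore the original stream.
        sys.stdout = self._origin

capture = StdoutCapture()
print("hello from capture")
capture.stop()
captured = "".join(capture.lines)
```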
class PythonCodeEditor(QtWidgets.QPlainTextEdit):
execute_requested = QtCore.Signal()
def __init__(self, parent):
super(PythonCodeEditor, self).__init__(parent)
self.setObjectName("PythonCodeEditor")
self._indent = 4
def _tab_shift_right(self):
cursor = self.textCursor()
selected_text = cursor.selectedText()
if not selected_text:
cursor.insertText(" " * self._indent)
return
sel_start = cursor.selectionStart()
sel_end = cursor.selectionEnd()
cursor.setPosition(sel_end)
end_line = cursor.blockNumber()
cursor.setPosition(sel_start)
while True:
cursor.movePosition(QtGui.QTextCursor.StartOfLine)
text = cursor.block().text()
spaces = len(text) - len(text.lstrip(" "))
new_spaces = spaces % self._indent
if not new_spaces:
new_spaces = self._indent
cursor.insertText(" " * new_spaces)
if cursor.blockNumber() == end_line:
break
cursor.movePosition(QtGui.QTextCursor.NextBlock)
def _tab_shift_left(self):
tmp_cursor = self.textCursor()
sel_start = tmp_cursor.selectionStart()
sel_end = tmp_cursor.selectionEnd()
cursor = QtGui.QTextCursor(self.document())
cursor.setPosition(sel_end)
end_line = cursor.blockNumber()
cursor.setPosition(sel_start)
while True:
cursor.movePosition(QtGui.QTextCursor.StartOfLine)
text = cursor.block().text()
spaces = len(text) - len(text.lstrip(" "))
if spaces:
spaces_to_remove = (spaces % self._indent) or self._indent
if spaces_to_remove > spaces:
spaces_to_remove = spaces
cursor.setPosition(
cursor.position() + spaces_to_remove,
QtGui.QTextCursor.KeepAnchor
)
cursor.removeSelectedText()
if cursor.blockNumber() == end_line:
break
cursor.movePosition(QtGui.QTextCursor.NextBlock)
def keyPressEvent(self, event):
if event.key() == QtCore.Qt.Key_Backtab:
self._tab_shift_left()
event.accept()
return
if event.key() == QtCore.Qt.Key_Tab:
if event.modifiers() == QtCore.Qt.NoModifier:
self._tab_shift_right()
event.accept()
return
if (
event.key() == QtCore.Qt.Key_Return
and event.modifiers() == QtCore.Qt.ControlModifier
):
self.execute_requested.emit()
event.accept()
return
super(PythonCodeEditor, self).keyPressEvent(event)
class PythonTabWidget(QtWidgets.QWidget):
before_execute = QtCore.Signal(str)
def __init__(self, parent):
super(PythonTabWidget, self).__init__(parent)
code_input = PythonCodeEditor(self)
self.setFocusProxy(code_input)
execute_btn = QtWidgets.QPushButton("Execute", self)
execute_btn.setToolTip("Execute command (Ctrl + Enter)")
btns_layout = QtWidgets.QHBoxLayout()
btns_layout.setContentsMargins(0, 0, 0, 0)
btns_layout.addStretch(1)
btns_layout.addWidget(execute_btn)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.addWidget(code_input, 1)
layout.addLayout(btns_layout, 0)
execute_btn.clicked.connect(self._on_execute_clicked)
code_input.execute_requested.connect(self.execute)
self._code_input = code_input
self._interpreter = InteractiveInterpreter()
def _on_execute_clicked(self):
self.execute()
def get_code(self):
return self._code_input.toPlainText()
def set_code(self, code_text):
self._code_input.setPlainText(code_text)
def execute(self):
code_text = self._code_input.toPlainText()
self.before_execute.emit(code_text)
self._interpreter.runcode(code_text)
class TabNameDialog(QtWidgets.QDialog):
default_width = 330
default_height = 85
def __init__(self, parent):
super(TabNameDialog, self).__init__(parent)
self.setWindowTitle("Enter tab name")
name_label = QtWidgets.QLabel("Tab name:", self)
name_input = QtWidgets.QLineEdit(self)
inputs_layout = QtWidgets.QHBoxLayout()
inputs_layout.addWidget(name_label)
inputs_layout.addWidget(name_input)
ok_btn = QtWidgets.QPushButton("Ok", self)
cancel_btn = QtWidgets.QPushButton("Cancel", self)
btns_layout = QtWidgets.QHBoxLayout()
btns_layout.addStretch(1)
btns_layout.addWidget(ok_btn)
btns_layout.addWidget(cancel_btn)
layout = QtWidgets.QVBoxLayout(self)
layout.addLayout(inputs_layout)
layout.addStretch(1)
layout.addLayout(btns_layout)
ok_btn.clicked.connect(self._on_ok_clicked)
cancel_btn.clicked.connect(self._on_cancel_clicked)
self._name_input = name_input
self._ok_btn = ok_btn
self._cancel_btn = cancel_btn
self._result = None
self.resize(self.default_width, self.default_height)
def set_tab_name(self, name):
self._name_input.setText(name)
def result(self):
return self._result
def showEvent(self, event):
super(TabNameDialog, self).showEvent(event)
btns_width = max(
self._ok_btn.width(),
self._cancel_btn.width()
)
self._ok_btn.setMinimumWidth(btns_width)
self._cancel_btn.setMinimumWidth(btns_width)
def _on_ok_clicked(self):
self._result = self._name_input.text()
self.accept()
def _on_cancel_clicked(self):
self._result = None
self.reject()
class OutputTextWidget(QtWidgets.QTextEdit):
v_max_offset = 4
def vertical_scroll_at_max(self):
v_scroll = self.verticalScrollBar()
return v_scroll.value() > v_scroll.maximum() - self.v_max_offset
def scroll_to_bottom(self):
v_scroll = self.verticalScrollBar()
return v_scroll.setValue(v_scroll.maximum())
class EnhancedTabBar(QtWidgets.QTabBar):
double_clicked = QtCore.Signal(QtCore.QPoint)
right_clicked = QtCore.Signal(QtCore.QPoint)
mid_clicked = QtCore.Signal(QtCore.QPoint)
def __init__(self, parent):
super(EnhancedTabBar, self).__init__(parent)
self.setDrawBase(False)
def mouseDoubleClickEvent(self, event):
self.double_clicked.emit(event.globalPos())
event.accept()
def mouseReleaseEvent(self, event):
if event.button() == QtCore.Qt.RightButton:
self.right_clicked.emit(event.globalPos())
event.accept()
return
elif event.button() == QtCore.Qt.MidButton:
self.mid_clicked.emit(event.globalPos())
event.accept()
else:
super(EnhancedTabBar, self).mouseReleaseEvent(event)
class PythonInterpreterWidget(QtWidgets.QWidget):
default_width = 1000
default_height = 600
def __init__(self, parent=None):
super(PythonInterpreterWidget, self).__init__(parent)
self.setWindowTitle("OpenPype Console")
self.setWindowIcon(QtGui.QIcon(resources.pype_icon_filepath()))
self.ansi_escape = re.compile(
r"(?:\x1B[@-_]|[\x80-\x9F])[0-?]*[ -/]*[@-~]"
)
self._tabs = []
self._stdout_err_wrapper = StdOEWrap()
output_widget = OutputTextWidget(self)
output_widget.setObjectName("PythonInterpreterOutput")
output_widget.setLineWrapMode(QtWidgets.QTextEdit.NoWrap)
output_widget.setTextInteractionFlags(QtCore.Qt.TextBrowserInteraction)
tab_widget = QtWidgets.QTabWidget(self)
tab_bar = EnhancedTabBar(tab_widget)
tab_widget.setTabBar(tab_bar)
tab_widget.setTabsClosable(False)
tab_widget.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
add_tab_btn = QtWidgets.QPushButton("+", tab_widget)
tab_widget.setCornerWidget(add_tab_btn, QtCore.Qt.TopLeftCorner)
widgets_splitter = QtWidgets.QSplitter(self)
widgets_splitter.setOrientation(QtCore.Qt.Vertical)
widgets_splitter.addWidget(output_widget)
widgets_splitter.addWidget(tab_widget)
widgets_splitter.setStretchFactor(0, 1)
widgets_splitter.setStretchFactor(1, 1)
height = int(self.default_height / 2)
widgets_splitter.setSizes([height, self.default_height - height])
layout = QtWidgets.QVBoxLayout(self)
layout.addWidget(widgets_splitter)
line_check_timer = QtCore.QTimer()
line_check_timer.setInterval(200)
line_check_timer.timeout.connect(self._on_timer_timeout)
add_tab_btn.clicked.connect(self._on_add_clicked)
tab_bar.right_clicked.connect(self._on_tab_right_click)
tab_bar.double_clicked.connect(self._on_tab_double_click)
tab_bar.mid_clicked.connect(self._on_tab_mid_click)
tab_widget.tabCloseRequested.connect(self._on_tab_close_req)
self._widgets_splitter = widgets_splitter
self._add_tab_btn = add_tab_btn
self._output_widget = output_widget
self._tab_widget = tab_widget
self._line_check_timer = line_check_timer
self._append_lines([openpype_art])
self.setStyleSheet(load_stylesheet())
self.resize(self.default_width, self.default_height)
self._init_from_registry()
if self._tab_widget.count() < 1:
self.add_tab("Python")
def _init_from_registry(self):
setting_registry = PythonInterpreterRegistry()
try:
width = setting_registry.get_item("width")
height = setting_registry.get_item("height")
if width is not None and height is not None:
self.resize(width, height)
except ValueError:
pass
try:
sizes = setting_registry.get_item("splitter_sizes")
if len(sizes) == len(self._widgets_splitter.sizes()):
self._widgets_splitter.setSizes(sizes)
except ValueError:
pass
try:
tab_defs = setting_registry.get_item("tabs") or []
for tab_def in tab_defs:
widget = self.add_tab(tab_def["name"])
widget.set_code(tab_def["code"])
except ValueError:
pass
def save_registry(self):
setting_registry = PythonInterpreterRegistry()
setting_registry.set_item("width", self.width())
setting_registry.set_item("height", self.height())
setting_registry.set_item(
"splitter_sizes", self._widgets_splitter.sizes()
)
tabs = []
for tab_idx in range(self._tab_widget.count()):
widget = self._tab_widget.widget(tab_idx)
tab_code = widget.get_code()
tab_name = self._tab_widget.tabText(tab_idx)
tabs.append({
"name": tab_name,
"code": tab_code
})
setting_registry.set_item("tabs", tabs)
def _on_tab_right_click(self, global_point):
point = self._tab_widget.mapFromGlobal(global_point)
tab_bar = self._tab_widget.tabBar()
tab_idx = tab_bar.tabAt(point)
last_index = tab_bar.count() - 1
if tab_idx < 0 or tab_idx > last_index:
return
menu = QtWidgets.QMenu(self._tab_widget)
menu.addAction("Rename")
result = menu.exec_(global_point)
if result is None:
return
if result.text() == "Rename":
self._rename_tab_req(tab_idx)
def _rename_tab_req(self, tab_idx):
dialog = TabNameDialog(self)
dialog.set_tab_name(self._tab_widget.tabText(tab_idx))
dialog.exec_()
tab_name = dialog.result()
if tab_name:
self._tab_widget.setTabText(tab_idx, tab_name)
def _on_tab_mid_click(self, global_point):
point = self._tab_widget.mapFromGlobal(global_point)
tab_bar = self._tab_widget.tabBar()
tab_idx = tab_bar.tabAt(point)
last_index = tab_bar.count() - 1
if tab_idx < 0 or tab_idx > last_index:
return
self._on_tab_close_req(tab_idx)
def _on_tab_double_click(self, global_point):
point = self._tab_widget.mapFromGlobal(global_point)
tab_bar = self._tab_widget.tabBar()
tab_idx = tab_bar.tabAt(point)
last_index = tab_bar.count() - 1
if tab_idx < 0 or tab_idx > last_index:
return
self._rename_tab_req(tab_idx)
def _on_tab_close_req(self, tab_index):
if self._tab_widget.count() == 1:
return
widget = self._tab_widget.widget(tab_index)
if widget in self._tabs:
self._tabs.remove(widget)
self._tab_widget.removeTab(tab_index)
if self._tab_widget.count() == 1:
self._tab_widget.setTabsClosable(False)
def _append_lines(self, lines):
at_max = self._output_widget.vertical_scroll_at_max()
tmp_cursor = QtGui.QTextCursor(self._output_widget.document())
tmp_cursor.movePosition(QtGui.QTextCursor.End)
for line in lines:
tmp_cursor.insertText(line)
if at_max:
self._output_widget.scroll_to_bottom()
def _on_timer_timeout(self):
if self._stdout_err_wrapper.lines:
lines = []
while self._stdout_err_wrapper.lines:
line = self._stdout_err_wrapper.lines.popleft()
lines.append(self.ansi_escape.sub("", line))
self._append_lines(lines)
def _on_add_clicked(self):
dialog = TabNameDialog(self)
dialog.exec_()
tab_name = dialog.result()
if tab_name:
self.add_tab(tab_name)
def _on_before_execute(self, code_text):
at_max = self._output_widget.vertical_scroll_at_max()
document = self._output_widget.document()
tmp_cursor = QtGui.QTextCursor(document)
tmp_cursor.movePosition(QtGui.QTextCursor.End)
tmp_cursor.insertText("{}\nExecuting command:\n".format(20 * "-"))
code_block_format = QtGui.QTextFrameFormat()
code_block_format.setBackground(QtGui.QColor(27, 27, 27))
code_block_format.setPadding(4)
tmp_cursor.insertFrame(code_block_format)
char_format = tmp_cursor.charFormat()
char_format.setForeground(
QtGui.QBrush(QtGui.QColor(114, 224, 198))
)
tmp_cursor.setCharFormat(char_format)
tmp_cursor.insertText(code_text)
# Create new cursor
tmp_cursor = QtGui.QTextCursor(document)
tmp_cursor.movePosition(QtGui.QTextCursor.End)
tmp_cursor.insertText("{}\n".format(20 * "-"))
if at_max:
self._output_widget.scroll_to_bottom()
def add_tab(self, tab_name, index=None):
widget = PythonTabWidget(self)
widget.before_execute.connect(self._on_before_execute)
if index is None:
if self._tab_widget.count() > 0:
index = self._tab_widget.currentIndex() + 1
else:
index = 0
self._tabs.append(widget)
self._tab_widget.insertTab(index, widget, tab_name)
self._tab_widget.setCurrentIndex(index)
if self._tab_widget.count() > 1:
self._tab_widget.setTabsClosable(True)
widget.setFocus()
return widget
def showEvent(self, event):
self._line_check_timer.start()
super(PythonInterpreterWidget, self).showEvent(event)
self._output_widget.scroll_to_bottom()
def closeEvent(self, event):
self.save_registry()
super(PythonInterpreterWidget, self).closeEvent(event)
self._line_check_timer.stop()


@ -0,0 +1,15 @@
# -*- coding: utf-8 -*-
"""Collect OpenPype modules."""
from openpype.modules import ModulesManager
import pyblish.api
class CollectModules(pyblish.api.ContextPlugin):
"""Collect OpenPype modules."""
order = pyblish.api.CollectorOrder
label = "OpenPype Modules"
def process(self, context):
manager = ModulesManager()
context.data["openPypeModules"] = manager.modules_by_name


@ -45,7 +45,8 @@ class ExtractBurnin(openpype.api.Extractor):
"fusion",
"aftereffects",
"tvpaint",
"webpublisher"
"webpublisher",
"aftereffects"
# "resolve"
]
optional = True


@ -45,7 +45,8 @@ class ExtractReview(pyblish.api.InstancePlugin):
"fusion",
"tvpaint",
"resolve",
"webpublisher"
"webpublisher",
"aftereffects"
]
# Supported extensions


@ -97,7 +97,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
"background",
"camerarig",
"redshiftproxy",
"effect"
"effect",
"xgen"
]
exclude_families = ["clip"]
db_representation_context_keys = [


@ -16,6 +16,7 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):
def process(self, context):
asset_and_parents = self.get_parents(context)
self.log.debug("__ asset_and_parents: {}".format(asset_and_parents))
if not io.Session:
io.install()
@ -25,7 +26,8 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):
self.log.debug("__ db_assets: {}".format(db_assets))
asset_db_docs = {
str(e["name"]): e["data"]["parents"] for e in db_assets}
str(e["name"]): e["data"]["parents"]
for e in db_assets}
self.log.debug("__ project_entities: {}".format(
pformat(asset_db_docs)))
@ -107,6 +109,7 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):
parents = instance.data["parents"]
return_dict.update({
asset: [p["entity_name"] for p in parents]
asset: [p["entity_name"] for p in parents
if p["entity_type"].lower() != "project"]
})
return return_dict


@ -1,4 +1,5 @@
{
"deadline_servers": [],
"publish": {
"ValidateExpectedFiles": {
"enabled": true,
@ -11,6 +12,30 @@
"deadline"
]
},
"ProcessSubmittedJobOnFarm": {
"enabled": true,
"deadline_department": "",
"deadline_pool": "",
"deadline_group": "",
"deadline_chunk_size": 1,
"deadline_priority": 50,
"publishing_script": "",
"skip_integration_repre_list": [],
"aov_filter": {
"maya": [
".+(?:\\.|_)([Bb]eauty)(?:\\.|_).*"
],
"nuke": [
".*"
],
"aftereffects": [
".*"
],
"celaction": [
".*"
]
}
},
"MayaSubmitDeadline": {
"enabled": true,
"optional": false,
@ -21,7 +46,8 @@
"group": "none",
"limit": [],
"jobInfo": {},
"pluginInfo": {}
"pluginInfo": {},
"scene_patches": []
},
"NukeSubmitDeadline": {
"enabled": true,

View file

@ -298,6 +298,18 @@
"add_ftrack_family": true
}
]
},
{
"hosts": [
"aftereffects"
],
"families": [
"render",
"workfile"
],
"tasks": [],
"add_ftrack_family": true,
"advanced_filtering": []
}
]
},

View file

@ -173,28 +173,6 @@
}
]
},
"ProcessSubmittedJobOnFarm": {
"enabled": true,
"deadline_department": "",
"deadline_pool": "",
"deadline_group": "",
"deadline_chunk_size": 1,
"deadline_priority": 50,
"publishing_script": "",
"skip_integration_repre_list": [],
"aov_filter": {
"maya": [
".+(?:\\.|_)([Bb]eauty)(?:\\.|_).*"
],
"nuke": [],
"aftereffects": [
".*"
],
"celaction": [
".*"
]
}
},
"CleanUp": {
"paterns": [],
"remove_temp_renders": false
@ -257,6 +235,16 @@
],
"tasks": [],
"template": "{family}{Task}"
},
{
"families": [
"renderLocal"
],
"hosts": [
"aftereffects"
],
"tasks": [],
"template": "render{Task}{Variant}"
}
]
},

View file

@ -7,6 +7,19 @@
"workfile": "ma",
"yetiRig": "ma"
},
"maya-dirmap": {
"enabled": true,
"paths": {
"source-path": [
"foo1",
"foo2"
],
"destination-path": [
"bar1",
"bar2"
]
}
},
"scriptsmenu": {
"name": "OpenPype Tools",
"definition": [
@ -31,6 +44,12 @@
"Main"
]
},
"CreateRender": {
"enabled": true,
"defaults": [
"Main"
]
},
"CreateAnimation": {
"enabled": true,
"defaults": [
@ -81,12 +100,6 @@
"Main"
]
},
"CreateRender": {
"enabled": true,
"defaults": [
"Main"
]
},
"CreateRenderSetup": {
"enabled": true,
"defaults": [

View file

@ -1,6 +1,5 @@
{
"project_setup": {
"dev_mode": true,
"install_unreal_python_engine": false
"dev_mode": true
}
}

View file

@ -140,7 +140,9 @@
},
"deadline": {
"enabled": true,
"DEADLINE_REST_URL": "http://localhost:8082"
"deadline_urls": {
"default": "http://127.0.0.1:8082"
}
},
"muster": {
"enabled": false,

View file

@ -105,7 +105,8 @@ from .enum_entity import (
AppsEnumEntity,
ToolsEnumEntity,
TaskTypeEnumEntity,
ProvidersEnum
ProvidersEnum,
DeadlineUrlEnumEntity
)
from .list_entity import ListEntity
@ -160,6 +161,7 @@ __all__ = (
"ToolsEnumEntity",
"TaskTypeEnumEntity",
"ProvidersEnum",
"DeadlineUrlEnumEntity",
"ListEntity",

View file

@ -174,6 +174,14 @@ class BaseItemEntity(BaseEntity):
roles = [roles]
self.roles = roles
@abstractmethod
def collect_static_entities_by_path(self):
"""Collect all paths of all static path entities.
A static path entity is an entity which is not dynamic nor under a dynamic entity.
"""
pass
@property
def require_restart_on_change(self):
return self._require_restart_on_change

View file

@ -141,6 +141,7 @@ class DictConditionalEntity(ItemEntity):
self.enum_key = self.schema_data.get("enum_key")
self.enum_label = self.schema_data.get("enum_label")
self.enum_children = self.schema_data.get("enum_children")
self.enum_default = self.schema_data.get("enum_default")
self.enum_entity = None
@ -277,15 +278,22 @@ class DictConditionalEntity(ItemEntity):
if isinstance(item, dict) and "key" in item:
valid_enum_items.append(item)
enum_keys = []
enum_items = []
for item in valid_enum_items:
item_key = item["key"]
enum_keys.append(item_key)
item_label = item.get("label") or item_key
enum_items.append({item_key: item_label})
if not enum_items:
return
if self.enum_default in enum_keys:
default_key = self.enum_default
else:
default_key = enum_keys[0]
# Create Enum child first
enum_key = self.enum_key or "invalid"
enum_schema = {
@ -293,7 +301,8 @@ class DictConditionalEntity(ItemEntity):
"multiselection": False,
"enum_items": enum_items,
"key": enum_key,
"label": self.enum_label
"label": self.enum_label,
"default": default_key
}
enum_entity = self.create_schema_object(enum_schema, self)
@ -318,6 +327,11 @@ class DictConditionalEntity(ItemEntity):
self.non_gui_children[item_key][child_obj.key] = child_obj
def collect_static_entities_by_path(self):
if self.is_dynamic_item or self.is_in_dynamic_item:
return {}
return {self.path: self}
def get_child_path(self, child_obj):
"""Get hierarchical path of child entity.

View file

@ -203,6 +203,18 @@ class DictImmutableKeysEntity(ItemEntity):
)
self.show_borders = self.schema_data.get("show_borders", True)
def collect_static_entities_by_path(self):
output = {}
if self.is_dynamic_item or self.is_in_dynamic_item:
return output
output[self.path] = self
for children in self.non_gui_children.values():
result = children.collect_static_entities_by_path()
if result:
output.update(result)
return output
def get_child_path(self, child_obj):
"""Get hierarchical path of child entity.

View file

@ -73,21 +73,41 @@ class EnumEntity(BaseEnumEntity):
def _item_initalization(self):
self.multiselection = self.schema_data.get("multiselection", False)
self.enum_items = self.schema_data.get("enum_items")
# Default is an optional and non-breaking attribute
enum_default = self.schema_data.get("default")
valid_keys = set()
all_keys = []
for item in self.enum_items or []:
valid_keys.add(tuple(item.keys())[0])
key = tuple(item.keys())[0]
all_keys.append(key)
self.valid_keys = valid_keys
self.valid_keys = set(all_keys)
if self.multiselection:
self.valid_value_types = (list, )
self.value_on_not_set = []
value_on_not_set = []
if enum_default:
if not isinstance(enum_default, list):
enum_default = [enum_default]
for item in enum_default:
if item in all_keys:
value_on_not_set.append(item)
self.value_on_not_set = value_on_not_set
else:
for key in valid_keys:
if self.value_on_not_set is NOT_SET:
self.value_on_not_set = key
break
if isinstance(enum_default, list) and enum_default:
enum_default = enum_default[0]
if enum_default in self.valid_keys:
self.value_on_not_set = enum_default
else:
for key in all_keys:
if self.value_on_not_set is NOT_SET:
self.value_on_not_set = key
break
self.valid_value_types = (STRING_TYPE, )
@ -423,3 +443,54 @@ class ProvidersEnum(BaseEnumEntity):
self._current_value = value_on_not_set
self.value_on_not_set = value_on_not_set
class DeadlineUrlEnumEntity(BaseEnumEntity):
schema_types = ["deadline_url-enum"]
def _item_initalization(self):
self.multiselection = self.schema_data.get("multiselection", True)
self.enum_items = []
self.valid_keys = set()
if self.multiselection:
self.valid_value_types = (list,)
self.value_on_not_set = []
else:
for key in self.valid_keys:
if self.value_on_not_set is NOT_SET:
self.value_on_not_set = key
break
self.valid_value_types = (STRING_TYPE,)
# GUI attribute
self.placeholder = self.schema_data.get("placeholder")
def _get_enum_values(self):
system_settings_entity = self.get_entity_from_path("system_settings")
valid_keys = set()
enum_items_list = []
deadline_urls_entity = (
system_settings_entity
["modules"]
["deadline"]
["deadline_urls"]
)
for server_name, url_entity in deadline_urls_entity.items():
enum_items_list.append(
{server_name: "{}: {}".format(server_name, url_entity.value)})
valid_keys.add(server_name)
return enum_items_list, valid_keys
def set_override_state(self, *args, **kwargs):
super(DeadlineUrlEnumEntity, self).set_override_state(*args, **kwargs)
self.enum_items, self.valid_keys = self._get_enum_values()
new_value = []
for key in self._current_value:
if key in self.valid_keys:
new_value.append(key)
self._current_value = new_value

View file

@ -53,6 +53,11 @@ class EndpointEntity(ItemEntity):
def _settings_value(self):
pass
def collect_static_entities_by_path(self):
if self.is_dynamic_item or self.is_in_dynamic_item:
return {}
return {self.path: self}
def settings_value(self):
if self._override_state is OverrideState.NOT_DEFINED:
return NOT_SET

View file

@ -106,6 +106,9 @@ class PathEntity(ItemEntity):
self.valid_value_types = valid_value_types
self.child_obj = self.create_schema_object(item_schema, self)
def collect_static_entities_by_path(self):
return self.child_obj.collect_static_entities_by_path()
def get_child_path(self, _child_obj):
return self.path
@ -192,6 +195,24 @@ class PathEntity(ItemEntity):
class ListStrictEntity(ItemEntity):
schema_types = ["list-strict"]
def __getitem__(self, idx):
if not isinstance(idx, int):
idx = int(idx)
return self.children[idx]
def __setitem__(self, idx, value):
if not isinstance(idx, int):
idx = int(idx)
self.children[idx].set(value)
def get(self, idx, default=None):
if not isinstance(idx, int):
idx = int(idx)
if idx < len(self.children):
return self.children[idx]
return default
def _item_initalization(self):
self.valid_value_types = (list, )
self.require_key = True
@ -222,6 +243,18 @@ class ListStrictEntity(ItemEntity):
super(ListStrictEntity, self).schema_validations()
def collect_static_entities_by_path(self):
output = {}
if self.is_dynamic_item or self.is_in_dynamic_item:
return output
output[self.path] = self
for child_obj in self.children:
result = child_obj.collect_static_entities_by_path()
if result:
output.update(result)
return output
def get_child_path(self, child_obj):
result_idx = None
for idx, _child_obj in enumerate(self.children):

View file

@ -3,6 +3,7 @@ import re
import json
import copy
import inspect
import contextlib
from .exceptions import (
SchemaTemplateMissingKeys,
@ -111,6 +112,10 @@ class SchemasHub:
self._loaded_templates = {}
self._loaded_schemas = {}
# Store validating and validated dynamic templates or schemas
self._validating_dynamic = set()
self._validated_dynamic = set()
# It doesn't make sense to reload types on each reset as they can't be
# changed
self._load_types()
@ -126,6 +131,60 @@ class SchemasHub:
def gui_types(self):
return self._gui_types
def get_template_name(self, item_def, default=None):
"""Get template name from passed item definition.
Args:
item_def(dict): Definition of item with "type".
default(object): Default return value.
"""
output = default
if not item_def or not isinstance(item_def, dict):
return output
item_type = item_def.get("type")
if item_type in ("template", "schema_template"):
output = item_def["name"]
return output
def is_dynamic_template_validating(self, template_name):
"""Is template validating using different entity.
Returns:
bool: Is template validating.
"""
if template_name in self._validating_dynamic:
return True
return False
def is_dynamic_template_validated(self, template_name):
"""Is template already validated.
Returns:
bool: Is template validated.
"""
if template_name in self._validated_dynamic:
return True
return False
@contextlib.contextmanager
def validating_dynamic(self, template_name):
"""Template name is validating and validated.
Context manager that cares about storing template name validations of
template.
This is to avoid infinite loop of dynamic children validation.
"""
self._validating_dynamic.add(template_name)
try:
yield
self._validated_dynamic.add(template_name)
finally:
self._validating_dynamic.remove(template_name)
def get_schema(self, schema_name):
"""Get schema definition data by it's name.

View file

@ -45,6 +45,24 @@ class ListEntity(EndpointEntity):
return True
return False
def __getitem__(self, idx):
if not isinstance(idx, int):
idx = int(idx)
return self.children[idx]
def __setitem__(self, idx, value):
if not isinstance(idx, int):
idx = int(idx)
self.children[idx].set(value)
def get(self, idx, default=None):
if not isinstance(idx, int):
idx = int(idx)
if idx < len(self.children):
return self.children[idx]
return default
def index(self, item):
if isinstance(item, BaseEntity):
for idx, child_entity in enumerate(self.children):
@ -141,7 +159,20 @@ class ListEntity(EndpointEntity):
item_schema = self.schema_data["object_type"]
if not isinstance(item_schema, dict):
item_schema = {"type": item_schema}
self.item_schema = item_schema
obj_template_name = self.schema_hub.get_template_name(item_schema)
_item_schemas = self.schema_hub.resolve_schema_data(item_schema)
if len(_item_schemas) == 1:
self.item_schema = _item_schemas[0]
if self.item_schema != item_schema:
if "label" in self.item_schema:
self.item_schema.pop("label")
self.item_schema["use_label_wrap"] = False
else:
self.item_schema = _item_schemas
# Store if was used template or schema
self._obj_template_name = obj_template_name
if self.group_item is None:
self.is_group = True
@ -150,6 +181,12 @@ class ListEntity(EndpointEntity):
self.initial_value = []
def schema_validations(self):
if isinstance(self.item_schema, list):
reason = (
"`ListWidget` has multiple items as object type."
)
raise EntitySchemaError(self, reason)
super(ListEntity, self).schema_validations()
if self.is_dynamic_item and self.use_label_wrap:
@ -167,18 +204,36 @@ class ListEntity(EndpointEntity):
raise EntitySchemaError(self, reason)
# Validate object type schema
child_validated = False
validate_children = True
for child_entity in self.children:
child_entity.schema_validations()
child_validated = True
validate_children = False
break
if not child_validated:
if validate_children and self._obj_template_name:
_validated = self.schema_hub.is_dynamic_template_validated(
self._obj_template_name
)
_validating = self.schema_hub.is_dynamic_template_validating(
self._obj_template_name
)
validate_children = not _validated and not _validating
if not validate_children:
return
def _validate():
idx = 0
tmp_child = self._add_new_item(idx)
tmp_child.schema_validations()
self.children.pop(idx)
if self._obj_template_name:
with self.schema_hub.validating_dynamic(self._obj_template_name):
_validate()
else:
_validate()
def get_child_path(self, child_obj):
result_idx = None
for idx, _child_obj in enumerate(self.children):

View file

@ -242,6 +242,14 @@ class RootEntity(BaseItemEntity):
"""Whan any children has changed."""
self.on_change()
def collect_static_entities_by_path(self):
output = {}
for child_obj in self.non_gui_children.values():
result = child_obj.collect_static_entities_by_path()
if result:
output.update(result)
return output
def get_child_path(self, child_entity):
"""Return path of children entity"""
for key, _child_entity in self.non_gui_children.items():

View file

@ -195,6 +195,7 @@
- all items in `enum_children` must have at least `key` key which represents value stored under `enum_key`
- items can define `label` for UI purposes
- most important part is that an item can define a `children` key with definitions of its children (the `children` value works the same way as in `dict`)
- to set the default value for `enum_key`, use `enum_default`
- the entity must have `"label"` defined if it is not used as a widget
- is set as group if any parent is not group
- if `"label"` is entetered there which will be shown in GUI
@ -359,6 +360,9 @@ How output of the schema could look like on save:
- values are defined as a list under the `"enum_items"` key
- each item in the list is a simple dictionary where the value is the label and the key is the value that will be stored
- it should also be possible to enter a single dictionary if the order of items doesn't matter
- it is possible to set the default selected value(s) with the `default` attribute
- it is recommended to use this option only in single-selection mode
- in the end this option is only used when defining default settings values or in dynamic items
```
{
@ -371,7 +375,7 @@ How output of the schema could look like on save:
{"ftrackreview": "Add to Ftrack"},
{"delete": "Delete output"},
{"slate-frame": "Add slate frame"},
{"no-hnadles": "Skip handle frames"}
{"no-handles": "Skip handle frames"}
]
}
```
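For example, a default selection could be added to the schema above with the `default` attribute (shown here on the same illustrative items):

```
{
    "key": "tags",
    "label": "Tags",
    "type": "enum",
    "multiselection": true,
    "default": ["burnin"],
    "enum_items": [
        {"burnin": "Add burnins"},
        {"ftrackreview": "Add to Ftrack"}
    ]
}
```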
@ -417,6 +421,8 @@ How output of the schema could look like on save:
- there are 2 possible ways to set the type:
1.) dictionary with item modifiers (`number` input has `minimum`, `maximum` and `decimals`); in that case the item type must be set as the value of `"type"` (example below)
2.) item type name as a string without modifiers (e.g. `text`)
3.) an enhancement of 1.): the `template` type is also supported, but be careful about endless loops of templates
- the goal of using `template` is to easily change the same item definition in multiple lists
1.) with item modifiers
```
@ -442,6 +448,65 @@ How output of the schema could look like on save:
}
```
3.) with template definition
```
# Schema of list item where template is used
{
"type": "list",
"key": "menu_items",
"label": "Menu Items",
"object_type": {
"type": "template",
"name": "template_object_example"
}
}
# WARNING:
# In this example the template uses itself, which will work in `list`
# but may cause an issue in other entity types (e.g. `dict`).
'template_object_example.json' :
[
{
"type": "dict-conditional",
"use_label_wrap": true,
"collapsible": true,
"key": "menu_items",
"label": "Menu items",
"enum_key": "type",
"enum_label": "Type",
"enum_children": [
{
"key": "action",
"label": "Action",
"children": [
{
"type": "text",
"key": "key",
"label": "Key"
}
]
},
{
"key": "menu",
"label": "Menu",
"children": [
{
"key": "children",
"label": "Children",
"type": "list",
"object_type": {
"type": "template",
"name": "template_object_example"
}
}
]
}
]
}
]
```
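The endless-loop warning above is handled in `SchemasHub` by tracking which template names are currently validating. A minimal standalone sketch of that guard pattern (simplified for illustration; `CycleGuard` and `validate` are made-up names, not the actual OpenPype implementation):

```python
import contextlib


class CycleGuard:
    """Tracks which names are validating/validated to break recursion."""

    def __init__(self):
        self._validating = set()
        self._validated = set()

    def should_validate(self, name):
        # Skip names that are already validated or currently validating.
        return name not in self._validated and name not in self._validating

    @contextlib.contextmanager
    def validating(self, name):
        # Mark as "validating" for the duration; mark "validated" on success.
        self._validating.add(name)
        try:
            yield
            self._validated.add(name)
        finally:
            self._validating.remove(name)


def validate(guard, name, children_by_name):
    """Recursively validate a template; returns how many were visited."""
    if not guard.should_validate(name):
        return 0
    count = 1
    with guard.validating(name):
        for child in children_by_name.get(name, []):
            count += validate(guard, child, children_by_name)
    return count


# A template that references itself terminates after a single pass:
guard = CycleGuard()
visited = validate(guard, "tpl", {"tpl": ["tpl"]})
# visited == 1: the nested reference to "tpl" was skipped by the guard
```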
### dict-modifiable
- one of the dictionary inputs; it is only used as a value input
- items in this input can be removed and added the same way as in the `list` input

View file

@ -5,6 +5,12 @@
"collapsible": true,
"is_file": true,
"children": [
{
"type": "deadline_url-enum",
"key": "deadline_servers",
"label": "Deadline Webservice URLs",
"multiselect": true
},
{
"type": "dict",
"collapsible": true,
@ -52,11 +58,106 @@
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "ProcessSubmittedJobOnFarm",
"label": "ProcessSubmittedJobOnFarm",
"checkbox_key": "enabled",
"is_group": true,
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "text",
"key": "deadline_department",
"label": "Deadline department"
},
{
"type": "text",
"key": "deadline_pool",
"label": "Deadline Pool"
},
{
"type": "text",
"key": "deadline_group",
"label": "Deadline Group"
},
{
"type": "number",
"key": "deadline_chunk_size",
"label": "Deadline Chunk Size"
},
{
"type": "number",
"key": "deadline_priority",
"label": "Deadline Priotity"
},
{
"type": "splitter"
},
{
"type": "text",
"key": "publishing_script",
"label": "Publishing script path"
},
{
"type": "list",
"key": "skip_integration_repre_list",
"label": "Skip integration of representation with ext",
"object_type": {
"type": "text"
}
},
{
"type": "dict",
"key": "aov_filter",
"label": "Reviewable subsets filter",
"children": [
{
"type": "list",
"key": "maya",
"label": "Maya",
"object_type": {
"type": "text"
}
},
{
"type": "list",
"key": "nuke",
"label": "Nuke",
"object_type": {
"type": "text"
}
},
{
"type": "list",
"key": "aftereffects",
"label": "After Effects",
"object_type": {
"type": "text"
}
},
{
"type": "list",
"key": "celaction",
"label": "Celaction",
"object_type": {
"type": "text"
}
}
]
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "MayaSubmitDeadline",
"label": "Submit maya job to deadline",
"label": "Submit Maya job to Deadline",
"checkbox_key": "enabled",
"children": [
{
@ -118,6 +219,31 @@
"type": "raw-json",
"key": "pluginInfo",
"label": "Additional PluginInfo data"
},
{
"type": "list",
"key": "scene_patches",
"label": "Scene patches",
"required_keys": ["name", "regex", "line"],
"object_type": {
"type": "dict",
"children": [
{
"key": "name",
"label": "Patch name",
"type": "text"
}, {
"key": "regex",
"label": "Patch regex",
"type": "text"
}, {
"key": "line",
"label": "Patch line",
"type": "text"
}
]
}
}
]
},

View file

@ -14,6 +14,39 @@
"type": "text"
}
},
{
"type": "dict",
"collapsible": true,
"checkbox_key": "enabled",
"key": "maya-dirmap",
"label": "Maya Directory Mapping",
"is_group": true,
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "dict",
"key": "paths",
"children": [
{
"type": "list",
"object_type": "text",
"key": "source-path",
"label": "Source Path"
},
{
"type": "list",
"object_type": "text",
"key": "destination-path",
"label": "Destination Path"
}
]
}
]
},
{
"type": "schema",
"name": "schema_maya_scriptsmenu"

View file

@ -556,101 +556,6 @@
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "ProcessSubmittedJobOnFarm",
"label": "ProcessSubmittedJobOnFarm",
"checkbox_key": "enabled",
"is_group": true,
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "text",
"key": "deadline_department",
"label": "Deadline department"
},
{
"type": "text",
"key": "deadline_pool",
"label": "Deadline Pool"
},
{
"type": "text",
"key": "deadline_group",
"label": "Deadline Group"
},
{
"type": "number",
"key": "deadline_chunk_size",
"label": "Deadline Chunk Size"
},
{
"type": "number",
"key": "deadline_priority",
"label": "Deadline Priotity"
},
{
"type": "splitter"
},
{
"type": "text",
"key": "publishing_script",
"label": "Publishing script path"
},
{
"type": "list",
"key": "skip_integration_repre_list",
"label": "Skip integration of representation with ext",
"object_type": {
"type": "text"
}
},
{
"type": "dict",
"key": "aov_filter",
"label": "Reviewable subsets filter",
"children": [
{
"type": "list",
"key": "maya",
"label": "Maya",
"object_type": {
"type": "text"
}
},
{
"type": "list",
"key": "nuke",
"label": "Nuke",
"object_type": {
"type": "text"
}
},
{
"type": "list",
"key": "aftereffects",
"label": "After Effects",
"object_type": {
"type": "text"
}
},
{
"type": "list",
"key": "celaction",
"label": "Celaction",
"object_type": {
"type": "text"
}
}
]
}
]
},
{
"type": "dict",
"collapsible": true,

View file

@ -29,6 +29,26 @@
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "CreateRender",
"label": "Create Render",
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "list",
"key": "defaults",
"label": "Default Subsets",
"object_type": "text"
}
]
},
{
"type": "schema_template",
"name": "template_create_plugin",
@ -65,10 +85,6 @@
"key": "CreatePointCache",
"label": "Create Cache"
},
{
"key": "CreateRender",
"label": "Create Render"
},
{
"key": "CreateRenderSetup",
"label": "Create Render Setup"

View file

@ -0,0 +1,58 @@
[
{
"type": "dict-conditional",
"use_label_wrap": true,
"collapsible": true,
"key": "menu_items",
"label": "Menu items",
"enum_key": "type",
"enum_label": "Type",
"enum_children": [
{
"key": "action",
"label": "Action",
"children": [
{
"type": "text",
"key": "key",
"label": "Key"
},
{
"type": "text",
"key": "label",
"label": "Label"
},
{
"type": "text",
"key": "command",
"label": "Comand"
}
]
},
{
"key": "menu",
"label": "Menu",
"children": [
{
"type": "text",
"key": "label",
"label": "Label"
},
{
"key": "children",
"label": "Children",
"type": "list",
"object_type": {
"type": "template",
"name": "example_infinite_hierarchy"
}
}
]
},
{
"key": "separator",
"label": "Separator"
}
]
}
]

View file

@ -82,6 +82,17 @@
}
]
},
{
"type": "list",
"use_label_wrap": true,
"collapsible": true,
"key": "infinite_hierarchy",
"label": "Infinite list template hierarchy",
"object_type": {
"type": "template",
"name": "example_infinite_hierarchy"
}
},
{
"type": "dict",
"key": "schema_template_exaples",

View file

@ -130,9 +130,11 @@
"label": "Enabled"
},
{
"type": "text",
"key": "DEADLINE_REST_URL",
"label": "Deadline Resl URL"
"type": "dict-modifiable",
"object_type": "text",
"key": "deadline_urls",
"required_keys": ["default"],
"label": "Deadline Webservice URLs"
}
]
},

View file

@ -65,6 +65,7 @@ def _load_font():
font_dirs = []
font_dirs.append(os.path.join(fonts_dirpath, "Montserrat"))
font_dirs.append(os.path.join(fonts_dirpath, "Spartan"))
font_dirs.append(os.path.join(fonts_dirpath, "RobotoMono", "static"))
loaded_fonts = []
for font_dir in font_dirs:

View file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@@ -0,0 +1,77 @@
Roboto Mono Variable Font
=========================
This download contains Roboto Mono as both variable fonts and static fonts.
Roboto Mono is a variable font with this axis:
wght
This means all the styles are contained in these files:
RobotoMono-VariableFont_wght.ttf
RobotoMono-Italic-VariableFont_wght.ttf
If your app fully supports variable fonts, you can now pick intermediate styles
that aren't available as static fonts. Not all apps support variable fonts, and
in those cases you can use the static font files for Roboto Mono:
static/RobotoMono-Thin.ttf
static/RobotoMono-ExtraLight.ttf
static/RobotoMono-Light.ttf
static/RobotoMono-Regular.ttf
static/RobotoMono-Medium.ttf
static/RobotoMono-SemiBold.ttf
static/RobotoMono-Bold.ttf
static/RobotoMono-ThinItalic.ttf
static/RobotoMono-ExtraLightItalic.ttf
static/RobotoMono-LightItalic.ttf
static/RobotoMono-Italic.ttf
static/RobotoMono-MediumItalic.ttf
static/RobotoMono-SemiBoldItalic.ttf
static/RobotoMono-BoldItalic.ttf
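The practical difference between the two delivery forms can be sketched in a few lines. This is an illustrative sketch only (it is not part of the font package, and the weight names/values are taken from the static file list above): with static files an app must snap a requested weight to the nearest named style, while the variable font can serve any value on the wght axis.

```python
# Illustrative sketch: resolving a requested font weight against the static
# Roboto Mono styles versus the variable font's continuous wght axis.

STATIC_WEIGHTS = {  # named static styles and their conventional weight values
    "Thin": 100, "ExtraLight": 200, "Light": 300, "Regular": 400,
    "Medium": 500, "SemiBold": 600, "Bold": 700,
}
WGHT_MIN, WGHT_MAX = 100, 700  # the wght axis range covered by the static set


def resolve_static(requested: int) -> str:
    """With static fonts only, snap to the closest available named weight."""
    return min(STATIC_WEIGHTS, key=lambda name: abs(STATIC_WEIGHTS[name] - requested))


def resolve_variable(requested: int) -> int:
    """With the variable font, any value on the axis is usable; just clamp."""
    return max(WGHT_MIN, min(WGHT_MAX, requested))


print(resolve_static(450))    # snaps to one of the seven named styles
print(resolve_variable(450))  # keeps the intermediate weight the statics lack
```

The point of the sketch: a request for weight 450 has no static counterpart and must round to a neighbor, whereas the variable font renders 450 directly.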
Get started
-----------
1. Install the font files you want to use
2. Use your app's font picker to view the font family and all the
available styles
Learn more about variable fonts
-------------------------------
https://developers.google.com/web/fundamentals/design-and-ux/typography/variable-fonts
https://variablefonts.typenetwork.com
https://medium.com/variable-fonts
In desktop apps
https://theblog.adobe.com/can-variable-fonts-illustrator-cc
https://helpx.adobe.com/nz/photoshop/using/fonts.html#variable_fonts
Online
https://developers.google.com/fonts/docs/getting_started
https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Fonts/Variable_Fonts_Guide
https://developer.microsoft.com/en-us/microsoft-edge/testdrive/demos/variable-fonts
Installing fonts
MacOS: https://support.apple.com/en-us/HT201749
Linux: https://www.google.com/search?q=how+to+install+a+font+on+gnu%2Blinux
Windows: https://support.microsoft.com/en-us/help/314960/how-to-install-or-remove-a-font-in-windows
Android Apps
https://developers.google.com/fonts/docs/android
https://developer.android.com/guide/topics/ui/look-and-feel/downloadable-fonts
License
-------
Please read the full license text (LICENSE.txt) to understand the permissions,
restrictions and requirements for usage, redistribution, and modification.
You can use these fonts freely in your products and projects, print or digital,
commercial or otherwise.
This isn't legal advice, please consider consulting a lawyer and see the full
license for all details.

Some files were not shown because too many files have changed in this diff.